...
[12:20:46 CST(-0600)] <dmccallum54> (don't worry about the jira tickets themselves right now, they need to be re-worked to reflect what we're actually doing, and that doesn't need to block any current work)
[12:21:36 CST(-0600)] <js70> yep. will do
[12:22:29 CST(-0600)] <dmccallum54> but my two thoughts were this: 1) we don't necessarily need all the bloody, blow-by-blow liquibase from SSP proper if that's hard to reuse for any reason, and 2) we need to make sure we don't accidentally cause conflicts with anything that's going on in SSP's DATABASECHANGELOG* tables
[12:24:07 CST(-0600)] <dmccallum54> so what i was actually thinking was… for our staging tables we really just need to know what version of SSP we're talking about and the DDL that defines staging tables for that version… and we could get that fairly easily by using liquibase to generate changesets given an existing install of whatever version of SSP we're talking about
[12:24:24 CST(-0600)] <dmccallum54> we don't really need "migrations" for these staging tables b/c their contents are totally transient
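
(To make the approach above concrete, here is a rough sketch of the kind of Liquibase invocation being described, pointed at an existing "known-good" SSP install to snapshot its schema into a changelog. The driver, JDBC URL, credentials, and output filename are placeholder assumptions, not values from this conversation, and the flag style is the older pre-4.x Liquibase CLI:)

    liquibase --driver=org.postgresql.Driver \
        --url=jdbc:postgresql://localhost:5432/ssp \
        --username=sspadmin --password=changeit \
        --changeLogFile=ssp-staging-tables.xml \
        generateChangeLog
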
[12:25:39 CST(-0600)] <dmccallum54> for the framework tables, i think what they provide are DDL files and we just need to decide how to execute those
[12:26:04 CST(-0600)] <dmccallum54> in the sample app i think they just use the jdbc namespace to execute those DDL files
[12:27:01 CST(-0600)] <dmccallum54> so in my mind, at this point, the question is whether liquibase really buys us anything, or if it's simpler to just go straight DDL for both framework and staging tables
[12:28:15 CST(-0600)] <dmccallum54> because we don't really have a need for "migrations" in any of our custom tables, and SpringBatch doesn't use liquibase for migrations of its framework tables, I'm actually leaning toward just maintaining possibly db-specific ddl files in our source, using db-specific tools to generate those files from existing, "known-good" SSP installs
[12:30:13 CST(-0600)] <TonyUnicon> ok
[12:31:19 CST(-0600)] <TonyUnicon> so I'll start with that
[12:31:51 CST(-0600)] <dmccallum54> https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
[12:32:16 CST(-0600)] <dmccallum54> here's what i was referring to with the cryptic comment about the "jdbc namespace"
[12:32:35 CST(-0600)] <dmccallum54> see the "jdbc:initialize-database" element at the top?
[12:34:37 CST(-0600)] <TonyUnicon> yeah
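
(For reference, the "jdbc:initialize-database" element being discussed just runs a list of SQL scripts against a DataSource at context startup. A minimal Java-config sketch of the same idea, assuming Spring's DataSourceInitializer and ResourceDatabasePopulator: the Spring Batch schema script path is the one the framework actually ships for Postgres, while the staging-tables script name is a hypothetical placeholder for DDL maintained in our own source:)

    import javax.sql.DataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.io.ClassPathResource;
    import org.springframework.jdbc.datasource.init.DataSourceInitializer;
    import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

    @Configuration
    public class SchemaConfig {

        // Java-config analogue of <jdbc:initialize-database> from the
        // sample's data-source-context.xml: run DDL when the context starts.
        @Bean
        public DataSourceInitializer schemaInitializer(DataSource dataSource) {
            ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
            // SpringBatch ships db-specific DDL for its framework tables
            populator.addScript(new ClassPathResource(
                    "org/springframework/batch/core/schema-postgresql.sql"));
            // Hypothetical staging-table DDL kept in our own source tree
            populator.addScript(new ClassPathResource(
                    "ddl/staging-tables-postgresql.sql"));
            populator.setContinueOnError(false);

            DataSourceInitializer initializer = new DataSourceInitializer();
            initializer.setDataSource(dataSource);
            initializer.setDatabasePopulator(populator);
            return initializer;
        }
    }
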
[12:46:34 CST(-0600)] <dmccallum54> TonyUnicon it also occurred to me that if we take this staging table approach, it might make sense to split the "writeFilteredItems" step into 2 steps… one for writing to the stage tables and one for the upserts into the "live" tables
[12:47:18 CST(-0600)] <dmccallum54> the write to the stage tables seems like it's right in SpringBatch's wheelhouse
[12:47:51 CST(-0600)] <TonyUnicon> right
[12:48:00 CST(-0600)] <dmccallum54> the upserts would be more of a custom Tasklet that just executes a couple of db statements
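
(A rough sketch of what such a Tasklet might look like when wired in as the second of the two steps: it simply executes a couple of statements moving rows from a staging table into the live table. The table and column names here are made-up illustrations, and the SQL would need db-specific variants, e.g. MERGE on SQL Server vs. the UPDATE/INSERT pair shown here for Postgres:)

    import org.springframework.batch.core.StepContribution;
    import org.springframework.batch.core.scope.context.ChunkContext;
    import org.springframework.batch.core.step.tasklet.Tasklet;
    import org.springframework.batch.repeat.RepeatStatus;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class StagingUpsertTasklet implements Tasklet {

        private final JdbcTemplate jdbcTemplate;

        public StagingUpsertTasklet(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        @Override
        public RepeatStatus execute(StepContribution contribution,
                                    ChunkContext chunkContext) {
            // Update live rows that already exist (hypothetical names)
            jdbcTemplate.update(
                "UPDATE person p SET name = s.name, modified_date = s.modified_date "
              + "FROM stg_person s WHERE p.school_id = s.school_id");
            // Insert rows present only in the staging table
            jdbcTemplate.update(
                "INSERT INTO person (school_id, name, modified_date) "
              + "SELECT s.school_id, s.name, s.modified_date FROM stg_person s "
              + "WHERE NOT EXISTS "
              + "(SELECT 1 FROM person p WHERE p.school_id = s.school_id)");
            return RepeatStatus.FINISHED;
        }
    }
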