DTL to Record-mapped Batch output creation
In terms of general throughput and long-term supportability, I'm trying to decide on the best approach for creating multiple batch files, in a few different layouts, from the same data sets.
I need to iterate over a few very large tables to generate consistent batch outputs for different partner applications, all of which must work from the same "version" of the source data. The initial run will cover roughly 30-50 million records; after that first run, every follow-up batch will be a "delta" containing only what changed (I'm not yet sure what percentage of the total data set will change from one batch-build run to the next).
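To illustrate the kind of delta detection I have in mind (this is only a sketch; the record shape, the per-record hash, and the idea of keeping a hash snapshot from the previous run are all my assumptions, not an existing implementation):

```python
# Hypothetical sketch: detect a delta between two batch-build runs by
# comparing a per-record hash from the previous snapshot with the current data.
import hashlib

def record_hash(record: dict) -> str:
    """Stable hash of a record's field values (field order fixed by sorting)."""
    payload = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def compute_delta(previous: dict, current: dict) -> dict:
    """previous/current map primary key -> record dict.
    Returns the records that are new or changed since the last run."""
    prev_hashes = {pk: record_hash(rec) for pk, rec in previous.items()}
    return {pk: rec for pk, rec in current.items()
            if prev_hashes.get(pk) != record_hash(rec)}

prev = {1: {"name": "Ann", "city": "Oslo"}, 2: {"name": "Bo", "city": "Rome"}}
curr = {1: {"name": "Ann", "city": "Oslo"}, 2: {"name": "Bo", "city": "Lyon"},
        3: {"name": "Cy", "city": "Kiev"}}
print(sorted(compute_delta(prev, curr)))  # → [2, 3] (changed 2, new 3)
```

Whether hashing the whole table each run is viable at 30-50 million records, versus tracking a last-modified timestamp in the source tables, is part of what I'm asking about.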
Each batch file does NOT have to be "the entire data set"; a collection of smaller batch files can collectively represent the full output, so I should be able to run multiple processes in parallel.
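The chunked, parallel output I'm picturing would look roughly like this (a sketch only: the key-range fetch, CSV layout, and file naming are placeholders, and the real thing would query the source tables rather than generate rows):

```python
# Hypothetical sketch: split the key space into fixed-size chunks and have a
# pool of workers each write one smaller batch file; together the files
# cover the whole data set.
import csv
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(start: int, end: int) -> list[dict]:
    # Placeholder for a keyed range query against the source table,
    # e.g. "... WHERE id >= start AND id < end".
    return [{"id": i, "value": f"row-{i}"} for i in range(start, end)]

def write_batch(out_dir: str, chunk_no: int, start: int, end: int) -> str:
    path = os.path.join(out_dir, f"batch_{chunk_no:04d}.csv")
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["id", "value"])
        writer.writeheader()
        writer.writerows(fetch_chunk(start, end))
    return path

def build_batches(out_dir: str, total: int, chunk_size: int) -> list[str]:
    ranges = [(n, lo, min(lo + chunk_size, total))
              for n, lo in enumerate(range(0, total, chunk_size))]
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(write_batch, out_dir, n, lo, hi)
                   for n, lo, hi in ranges]
        return [f.result() for f in futures]

out_dir = tempfile.mkdtemp()
paths = build_batches(out_dir, total=10, chunk_size=4)
print([os.path.basename(p) for p in paths])
# → ['batch_0000.csv', 'batch_0001.csv', 'batch_0002.csv']
```

A thread pool is used here because the work is I/O-bound; separate processes (or separate machines) partitioned by the same key ranges would be the scaled-up equivalent.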
Since there will potentially be multiple output formats (each partner application may have its own layout and its own subset of the fields), what do people think would be the most performant and supportable design for this sort of problem?
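To make the multi-format part concrete, this is the sort of declarative per-partner layout I'm imagining, so one pass over the records could feed every partner's file (the partner names, fields, and separators are made up for illustration):

```python
# Hypothetical sketch: describe each partner's layout as a field subset plus
# a separator, and map records to output rows from a single data pass.
PARTNER_FORMATS = {
    "partner_a": {"fields": ["id", "name"], "sep": ","},
    "partner_b": {"fields": ["id", "city", "name"], "sep": "|"},
}

def format_record(record: dict, spec: dict) -> str:
    """Render one record as a row in a given partner's layout."""
    return spec["sep"].join(str(record[f]) for f in spec["fields"])

rec = {"id": 7, "name": "Ann", "city": "Oslo"}
print(format_record(rec, PARTNER_FORMATS["partner_a"]))  # → 7,Ann
print(format_record(rec, PARTNER_FORMATS["partner_b"]))  # → 7|Oslo|Ann
```

The open question for me is whether to render all partner formats in one pass like this, or run a separate extraction per partner.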