For simple headers (and footers), you can use the 'batch class' feature of the record map file batch service (EnsLib.RecordMap.Service.BatchFileService) and operation (EnsLib.RecordMap.Operation.BatchFileOperation), and a class such as EnsLib.RecordMap.SimpleBatch to specify a header string.

The record map file operation appends records to the output file. The default value of the 'Filename' setting is '%Q' (a timestamp), so you get one file per timestamp.

If you set 'Filename' to '%f', the output file name will be the same as the input file name, and records from one input file will be appended to an output file with the same name.

For a simple message transformation flow example, I would go for a record map:

That way you can focus on the DTL, and the whole flow can be done from the Management Portal. Look ma, no code ;-)

You can call the Transform(source,target) class method implemented by classes that extend Ens.DataTransform:

  • directly in ObjectScript code
  • through code generated for a BPL or router process (the 'send' action optionally takes a list of transformation classes that are applied in sequence to the message before it is sent to the target(s))
  • through code generated by a DTL ('subtransform')
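For the first case, a direct call could look like the sketch below (MyPkg.MyTransform is a hypothetical class extending Ens.DataTransform, e.g. a DTL class):

```objectscript
 // Hypothetical: MyPkg.MyTransform extends Ens.DataTransform (e.g. a DTL class).
 // Transform() takes the source object and returns the target by reference.
 Set tSC = ##class(MyPkg.MyTransform).Transform(source, .target)
 If $$$ISERR(tSC) {
     Do $system.Status.DisplayError(tSC)
 }
```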

If you need to process the entire file with no line filtering, I would use the pass-through file service (EnsLib.File.PassthroughService) to send an instance of the stream container class (Ens.StreamContainer) to either a message router (EnsLib.MsgRouter.RoutingEngine) or a custom process (BPL or code). Then use a transform (a class extending Ens.DataTransform) to transform the source stream container into a target stream container, and send it to the file pass-through operation (EnsLib.File.PassthroughOperation) for output.

I would use a custom process over a message router if the transform needs data source(s) (e.g. a response from another process or operation) other than the input file. The transform can pick a suitable target stream class (extending %Stream.Object) to hold in the Ens.StreamContainer, depending on where you want to store the data (database vs. file system, …)
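As a minimal sketch of such a transform (the class name and the logic are hypothetical; it simply uppercases each line of the source stream into a new database-backed stream):

```objectscript
/// Hypothetical transform: uppercases each line of the source stream.
Class MyPkg.StreamTransform Extends Ens.DataTransform
{

ClassMethod Transform(source As Ens.StreamContainer, ByRef target As Ens.StreamContainer) As %Status
{
    // Pick a database-backed target stream; a %Stream.FileCharacter
    // could be used instead to store on the file system.
    Set tStream = ##class(%Stream.GlobalCharacter).%New()
    Do source.Stream.Rewind()
    While 'source.Stream.AtEnd {
        Set tLine = source.Stream.ReadLine()
        Do tStream.WriteLine($ZCONVERT(tLine, "U"))
    }
    // Wrap the new stream in a container for the pass-through operation
    Set target = ##class(Ens.StreamContainer).%New(tStream)
    Quit $$$OK
}

}
```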



Assuming your question is about HL7 (or EDI) message serialization.

The DTL is meant for parsing and transforming the message into another one.
Serialization occurs when you output the message using the corresponding instance methods.

For an instance of the EnsLib.HL7.Message class, methods that output the message, such as OutputToFile(), use instance properties to determine which separators to use: Separators and SegmentTerminator.

Also, business operations (extending EnsLib.HL7.Operation.Standard) expose a setting (Separators) that lets you configure which separators to use.
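For example, to override the separators on a message instance before writing it out (the file path is just an illustration):

```objectscript
 // msg is an instance of EnsLib.HL7.Message.
 // Separators, in order: field, component, repetition, escape, subcomponent.
 Set msg.Separators = "|^~\&"
 // Segment terminator defaults to a carriage return
 Set msg.SegmentTerminator = $Char(13)
 Set tSC = msg.OutputToFile("/tmp/example.hl7")
```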

To get an IRIS session integrated in the VS Code terminal, you can add a terminal profile to settings.json:

  "": {
        "IRIS Terminal": {
            "path": [],
            "args": ["<instance name>"],
            "icon": "terminal-cmd"
        }
    }
However, this terminal window will lack the niceties of the ISC Terminal application, such as command history.

Hi Ben,

You can use the EnsLib.HL7.Schema class's ResolveSchemaTypeToDocType() class method to resolve DocType dynamically.

For example, if message is an OREF to an instance of EnsLib.HL7.Message:

Set message.DocType = ##class(EnsLib.HL7.Schema).ResolveSchemaTypeToDocType("2.5", message.Name)

Thank you Dmitry. Unfortunately, I am also quite busy and have no spare time to invest in this nice-to-have at the moment.
It is certainly possible to use the TextMate grammar as a basis for writing a lexer class for Rouge.
I'll let you know if I get started on this!