We've got the same issue, but with an incoming HL7 feed with embedded, encoded characters - it would be nice to be able to detect what's coming in, but I take it from this discussion that's not (reliably) possible. We don't really want to scan the whole text of every incoming message to heuristically look for possible encodings. Upstream say/think they are sending UTF-8, but for the characters we've seen in our (limited) testing we seem to be getting Windows-1252. Who knows what will come through the feed once it goes live!
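
For what it's worth, here is a rough sketch of the kind of heuristic check we're trying to avoid - it assumes the message has been read as raw bytes (a RAW/binary translate table) and simply tests whether the high-bit characters form plausible UTF-8 multibyte sequences. The class and method names are illustrative only:

Class Demo.Util.EncodingCheck Extends %RegisteredObject
{

/// Heuristic only: returns 1 if the raw byte string could be valid UTF-8,
/// 0 if it contains high-bit bytes that don't form UTF-8 sequences (and is
/// therefore more likely Windows-1252 or similar).
ClassMethod LooksLikeUTF8(bytes As %String) As %Boolean
{
    set i = 1, len = $length(bytes)
    while (i <= len) {
        set c = $ascii(bytes, i)
        if (c < 128) {
            // plain ASCII byte - fine in either encoding
            set i = i + 1
            continue
        }
        // how many continuation bytes should follow this lead byte?
        if (c >= 192) && (c <= 223) { set extra = 1 }
        elseif (c >= 224) && (c <= 239) { set extra = 2 }
        elseif (c >= 240) && (c <= 244) { set extra = 3 }
        else { return 0 }   // not a valid UTF-8 lead byte
        // every continuation byte must be 10xxxxxx, i.e. 128..191
        for j = 1:1:extra {
            set cc = $ascii(bytes, i + j)   // returns -1 past end of string
            if (cc < 128) || (cc > 191) { return 0 }
        }
        set i = i + extra + 1
    }
    return 1   // nothing ruled UTF-8 out (could still be plain ASCII)
}

}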

Auto-adjust / design question 4: we'd find this useful, especially if it handles bulk renames - a bunch of classes implementing a data type, all being moved in one go from one place in the class hierarchy to another and all being consistently renamed.

So files X/Y/A.cls, X/Y/B.cls and X/Y/C.cls, containing classes X.Y.A, X.Y.B and X.Y.C, being moved to Q/P/A.cls with class Q.P.A, etc. Especially if a property defined in X.Y.A as "Property pp As X.Y.B" becomes "Property pp As Q.P.B" when renamed.
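
To make the rename concrete, a before/after sketch (the %Persistent superclass and property name are just illustrative). Before the move:

Class X.Y.A Extends %Persistent
{
Property pp As X.Y.B;
}

and after the bulk move/rename we'd want:

Class Q.P.A Extends %Persistent
{
Property pp As Q.P.B;
}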

Closing off an old question for completeness, we never did get Zen working. In the end we used Apache FOP directly:

  • HL7 -> XML as described in the original question
  • call Apache FOP using $ZF(-100, "", $$$fopbat, "-xml", XMLfilename, "-xsl", StyleSheetFilename, "-pdf", PDFfilename) (sketched more fully below)

This puts the output PDF onto the filesystem from where, in our solution, it is later picked up for onward transmission to a downstream system.
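
For anyone doing the same, here's a hedged sketch of how the call can be wrapped; the path to fop.bat (ours actually comes from a $$$fopbat macro) and the handling of the return value are assumptions from our own setup rather than anything definitive:

Class Demo.Util.FOP Extends %RegisteredObject
{

/// Render a PDF from an XML file plus an XSL-FO stylesheet by shelling out
/// to Apache FOP.
ClassMethod RenderPDF(xmlFile As %String, xslFile As %String, pdfFile As %String) As %Status
{
    // path to the FOP launcher - ours comes from our own $$$fopbat macro
    set fopBat = "C:\Ensemble\fop\fop.bat"
    // empty flags string = run synchronously and wait for FOP to finish;
    // this has been fine for us so far with a single process
    set rc = $ZF(-100, "", fopBat, "-xml", xmlFile, "-xsl", xslFile, "-pdf", pdfFile)
    // $ZF(-100) returns the exit status of the command; 0 means success
    if (rc '= 0) {
        return $$$ERROR($$$GeneralError, "FOP exited with status "_rc)
    }
    return $$$OK
}

}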

For the sake of closing off this old question, and to answer my own question, in light of more experience and some testing...

  • side effects of the transformation could, in theory, change the behaviour - but only if the transformation actually has side effects (e.g. it keeps some kind of state across executions, whether in globals, on the filesystem, or in some other way); see the contrived example after this list
  • performance could be affected, since transformation is called twice rather than once, but in most cases the difference is likely to be negligible.
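
As a contrived illustration of the first point (nothing from our real production - the global name and the use of MSH-10 are made up), a transform like this behaves differently when it is invoked twice per message, because it keeps state in a global:

Class Demo.SideEffectTransform Extends Ens.DataTransform
{

ClassMethod Transform(source As EnsLib.HL7.Message, ByRef target As EnsLib.HL7.Message) As %Status
{
    // the side effect: a counter kept in a global, bumped on every execution
    set seq = $increment(^Demo.TransformCount)

    // clone the source and stamp the sequence number into MSH-10
    set target = source.%ConstructClone(1)
    do target.SetValueAt(seq, "MSH:10")

    // if the engine calls this transform twice for the same message, the
    // counter is bumped twice and the stamped values stop being consecutive
    return $$$OK
}

}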

The Business Operation is one generated by the SOAP Wizard. It is fed by a custom Business Process that runs in response to a scheduled task - the BP queries a database table and extracts a set of documents to send. At certain points in the day we want to query the table like this:

      SELECT * from TABLE

while at other points in the day we want to query the table like this:

      SELECT TOP NN * from TABLE

Then the documents selected by the query are sent, in turn, to the Business Operation for onward transmission.
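
A hedged sketch of how we switch between the two forms (the table name and the method are illustrative - the real BP also checks the resultset status):

/// maxRows = 0 means "take everything"; anything else caps the result with TOP
ClassMethod BuildQuery(maxRows As %Integer = 0) As %String
{
    if (maxRows > 0) {
        return "SELECT TOP "_maxRows_" * FROM MySchema.Documents"
    }
    return "SELECT * FROM MySchema.Documents"
}

The BP prepares whichever string comes back with %SQL.Statement, walks the result set, and sends one request message per row to the Business Operation.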

We have a situation that looks suspiciously similar:

  • a job that runs an external program via $ZF(-100,...) from a business process to perform a task works perfectly when Pool Size = 1
  • not all of the external tasks complete successfully when Pool Size > 1

More detail:

  • the Production takes an incoming stream of HL7 ORU_R01 messages and produces a PDF for each one
  • this is done by converting each HL7 to an XML representation, then calling Apache FOP (the one pre-installed in Ensemble) with a stylesheet and the XML to build the PDF. A business process takes care of this step.
  • with Pool Size = 1 everything runs correctly
  • with Pool Size = 2 all the XML files are generated (via a call to a class method) but only a small subset of the PDF files are generated - maybe 4 out of 20?
  • no error messages that we've been able to find yet

Here's an illustrative screenshot - yellow is the first HL7->XML->PDF run, green is the second. Yellow produces a PDF, green doesn't. As far as we can tell the FOP commands should be independent (no shared files - unless a stylesheet can't be opened by multiple processes simultaneously?)

The only thing we've seen in the documentation that gives us pause is this line: "On a Windows system you should never omit both the /ASYNC and /STDIN flags." (from $ZF(-100) | InterSystems IRIS Data Platform 2024.2) - yet when only one copy is running it appears to be fine with "" as the flags argument.
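
In case it's relevant, the shape of the call with those flags supplied would presumably be something like the following - the flag names are from the $ZF(-100) documentation, but the redirect targets are made up:

// redirect stdin/stdout/stderr so the child process isn't sharing the
// parent's console handles, per the documentation's advice for Windows
set flags = "/STDIN=""NUL"" /STDOUT=""C:\temp\fop-out.log"" /STDERR=""C:\temp\fop-err.log"""
set rc = $ZF(-100, flags, $$$fopbat, "-xml", XMLfilename, "-xsl", StyleSheetFilename, "-pdf", PDFfilename)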

Is this a $ZF/Ensemble issue, or is it something about FOP specifically?

Say you have the same code (Production) running on different servers - for example, a local instance on a developer's own machine, a test server used for system testing, and a production server.

Your code accesses an external web-service. The actual web-service will be different for each system - maybe a mock service for the developer, a test version of the web-service for the test system and a production version for your production server. Then the URL for accessing the web-service would be different for each one.

In your code you have a setting on the business operation in your production that connects to the web-service. The value of this setting can be set from the System Default Settings page, and will hold a different value on each server.

This lets you separate settings that are the same across all servers from settings that differ between them: the former can be set on the services/processes/operations themselves, while the latter are set via System Default Settings.
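
A minimal sketch of what that looks like on the operation (class, property and category names are illustrative):

Class MySite.WebServiceOperation Extends Ens.BusinessOperation
{

/// URL of the external web-service. Left unset in the production definition
/// and supplied per-server via the System Default Settings page.
Property ServiceURL As %String(MAXLEN = 500);

/// expose the property as a configurable setting in the portal
Parameter SETTINGS = "ServiceURL:Basic";

}

On each server the System Default Settings page then holds an entry for that production/item/setting combination with the environment-specific URL.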

Not sure it counts as an answer, but what we did to work around this issue was to move the bulk of the functionality - where the error handling was required - into a new business process, leaving only the most basic "pass the trigger message along" behaviour in the business service. That adds an extra component to the production, but we can now see errors in the log when they occur, and they are passed appropriately to Ens.Alert.

Never mind, I'm an idiot. One of my colleagues found the issue - I thought I had, but I hadn't managed to add both:

  • Property ReplyCodeActions As %String(MAXLEN = 1000);
  • Parameter SETTINGS = "ReplyCodeActions:Additional,...."

I think I'd added one of them to TNHS.SOAPclassExtra, found it didn't work, tried the other, but somehow never checked with both in place together... 🙄 Working now.
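
For anyone hitting the same thing, both pieces have to end up in the same class, roughly like this (the superclass name is a stand-in for whatever the SOAP Wizard generated, and the rest of our real SETTINGS string is omitted):

Class TNHS.SOAPclassExtra Extends MySite.GeneratedSOAPOperation
{

/// the property that holds the reply-code actions...
Property ReplyCodeActions As %String(MAXLEN = 1000);

/// ...and the SETTINGS entry that exposes it in the portal
Parameter SETTINGS = "ReplyCodeActions:Additional";

}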

Follow-up: as a quick and dirty check, I replaced the call to GetValueAt above with:

set step1 = $PIECE(message.RawContent, "OBR|")
set step2 = $PIECE(step1, "ORC|", *)
set ReportId = $PIECE(step2, "|", 3, 3)

This works when run before the XML generation code, whereas GetValueAt doesn't - so GetValueAt is definitely doing something to the contents of the HL7 message....

I'm interested in finding out more about the GitLab CI/CD pipeline options that might be available outside of the Cloud offering.

We are currently on Ensemble 2018.1, though hopefully moving to IRIS soon. Our development workflow is:

  • local development, VS Code + ObjectScript extension + management portal
  • source control using git from VS Code to an on-prem GitLab instance
  • testing and deployment onto a shared dev server, a shared test server and ultimately a production server - but code is deployed from the local dev environment/GitLab to the dev/test/prod servers by exporting classes, not via anything integrated with GitLab

So we'd be really interested in the CI/CD options mentioned in the GitLab instance offered as part of the IRIS/Health Connect Cloud - the dev, test and production deployments. Is what's offered in the Cloud available on-prem? Is there more information available somewhere about the CI/CD options in GitLab and integrating with IRIS?

Thanks - helpful to explore another potential option. If I'm reading https://docs.intersystems.com/iris20233/csp/docbook/DocBook.UI.Page.cls?... correctly, then we'd need to use the OnProductionStart method of the business process to run when the job is scheduled - and I'm not sure that's readily accessible in a BPL business process. There are otherwise no incoming messages to trigger action.