Say you have the same code (a Production) running on different servers - for example, a local instance on a developer's own machine, a test server used for system testing, and a production server.

Your code accesses an external web-service. The actual web-service will be different for each system - maybe a mock service for the developer, a test version of the web-service for the test system, and the production version for your production server. So the URL for accessing the web-service will be different for each one.

In your code you have a setting on the business operation in your production that connects to the web-service. The value of this setting can be set from the System Default Settings page, and will hold a different value on each server.

This lets you separate the settings that are the same across all servers from those that differ between servers: settings that are identical everywhere can be set on the services/processes/operations themselves, while settings that differ are set via System Default Settings.
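As a rough sketch of that pattern - the class and setting names here are made up for illustration, not taken from any real production - the operation might expose the URL like this, leaving the actual value to be filled in per server on the System Default Settings page:

Class Demo.ExternalWS.Operation Extends Ens.BusinessOperation
{

/// URL of the external web-service. Deliberately given no default here;
/// each server supplies its own value via System Default Settings.
Property WebServiceURL As %String(MAXLEN = 500);

/// Listing the property in SETTINGS is what makes it appear on the
/// configuration page (and therefore overridable by system defaults).
Parameter SETTINGS = "WebServiceURL:Basic";

}

In the production itself you leave the setting unset on the item, so each server picks up the value defined on its own System Default Settings page.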

Not sure it counts as an answer, but what we did to step round this issue was to move the bulk of the functionality - where the error handling was required - into a new business process, leaving only the most basic "pass the trigger message along" functionality in the business service. This added an extra component to the production, but we can now see errors in the log when they occur, and they are passed appropriately to Ens.Alert.

Never mind, I'm an idiot. One of my colleagues found the issue - I thought I had, but I hadn't managed to add both:

  • Property ReplyCodeActions As %String(MAXLEN = 1000);
  • Parameter SETTINGS = "ReplyCodeActions:Additional,...."

I think I'd added one to TNHS.SOAPclassExtra, it hadn't worked, so I tried the other, but somehow never checked with both in place together... 🙄 Working now.
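For anyone else tripping over the same thing, the point is that both declarations have to end up in the same class before the setting appears - roughly like this (the superclass is just a placeholder; use whatever your class actually extends):

Class TNHS.SOAPclassExtra Extends Ens.BusinessOperation
{

/// The property holds the value...
Property ReplyCodeActions As %String(MAXLEN = 1000);

/// ...and this SETTINGS entry is what exposes it, in the "Additional" group,
/// on the configuration page. Without both, nothing shows up.
Parameter SETTINGS = "ReplyCodeActions:Additional";

}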

Follow-up, as a quick and dirty check, I replaced the call to GetValueAt above with:

set step1 = $PIECE(message.RawContent, "OBR|")   ; everything before the first OBR segment
set step2 = $PIECE(step1, "ORC|", *)             ; everything after the last ORC| within that chunk
set ReportId = $PIECE(step2, "|", 3, 3)          ; third "|"-delimited field of what remains (ORC-3)

This works when run before calling the XML generation code, where GetValueAt doesn't - so GetValueAt is definitely doing something to the contents of the HL7 message....
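(For context, the GetValueAt call being replaced would typically look something like the line below - the exact virtual-property path is my assumption, not the original code:)

set ReportId = message.GetValueAt("ORC:3")   ; virtual-property path on the EnsLib.HL7.Message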

I'm interested in finding out more about the GitLab CI/CD pipeline options that might be available outside of the Cloud offering.

We are currently on Ensemble 2018.1, though hopefully moving to IRIS soon. Our development workflow is:

  • local development, VS Code + ObjectScript extension + management portal
  • source control using git from VS Code to an on-prem GitLab instance
  • testing and deployment onto a shared dev server, a shared test server and ultimately a production server - but code is deployed from the local dev server/GitLab to the dev/test/prod servers by exporting classes, not through any GitLab integration.

So we'd be really interested in the CI/CD options mentioned in the GitLab instance offered as part of the IRIS/Health Connect Cloud - the dev, test and production deployments. Is what's offered on the Cloud available on prem? Is there more information available somewhere about the CI/CD options in GitLab and integrating with IRIS?

Thanks, it's helpful to explore another potential option. If I'm reading https://docs.intersystems.com/iris20233/csp/docbook/DocBook.UI.Page.cls?... correctly, then we'd need to use the OnProductionStart method of the business process to run when the job is scheduled - and I'm not sure that's readily accessible in a BPL business process. There are otherwise no incoming messages to trigger action.

Can I clarify: "Run at a scheduled time" could mean either:

  1. Run during a designated period of time, thus making the service / business process available to be triggered at any time during that period but not outside it, or
  2. Run once at a specific time, and never otherwise be triggered.

We are interested in the second of these - we want a batch job to run at 0800, at 1300, and at 1600 each day. When it runs it will scoop all the current lab results out of the database table where they have been accumulating, and send them on to the downstream system.

Thanks @Ashok Kumar, with a bit of tweaking that got us to where we needed to go. For clarity, for anyone coming along and reading this later, the steps are:

  1. Create the BPL Business Process you want to call - at least have something in place you can call
  2. Create the HL7Task.Test class in ObjectScript from the example above, renamed as appropriate. The argument to CreateBusinessService is the name in the Production of the service you will be calling (created below). You also need to set pInput to a class suitable for use as a request, such as Ens.Request or (as we used for an example) Ens.StringContainer
  3. Create the HL7.Feed.TriggerService class in ObjectScript from the example above, renamed as appropriate. 
  4. In the Production, create a new service whose name is the one used in step 2 and whose class is the one created in step 3. Set the TargetConfigName on the service to the name of the BPL. Set the Pool Size to zero.
  5. Create a new scheduled task - run it in the appropriate namespace, and pick the class name from step 2 in the Task Type dropdown - and schedule as appropriate.

Once all this is in place, you can call a BPL Business Process from a scheduled task.
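For reference, here is a rough sketch of what the two ObjectScript classes might look like - this is an approximation of the pattern, not the exact code from Ashok's example, and the class/config names simply follow the steps above:

Class HL7Task.Test Extends %SYS.Task.Definition
{

/// Name shown in the Task Type dropdown of the Task Manager (step 5)
Parameter TaskName = "HL7 Feed Trigger";

Method OnTask() As %Status
{
    // "HL7.Feed.TriggerService" here is the configuration name of the
    // service in the Production (step 4), not the class name.
    Set tSC = ##class(Ens.Director).CreateBusinessService("HL7.Feed.TriggerService", .tService)
    If $$$ISERR(tSC) Quit tSC

    // pInput must be a class usable as a request, e.g. Ens.StringContainer
    Set pInput = ##class(Ens.StringContainer).%New()
    Set pInput.StringValue = "run"

    Quit tService.ProcessInput(pInput, .pOutput)
}

}

Class HL7.Feed.TriggerService Extends Ens.BusinessService
{

/// Configuration name of the BPL business process to call (step 4)
Property TargetConfigName As Ens.DataType.ConfigName;

Parameter SETTINGS = "TargetConfigName";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Simply pass the trigger message on to the BPL
    Quit ..SendRequestAsync(..TargetConfigName, pInput)
}

}

With the Pool Size on the service set to zero, the service never jobs off on its own; it only runs when the scheduled task instantiates it via CreateBusinessService and calls ProcessInput.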

Thanks Alexander. Terminal Task Manager:

  • can delete other tasks, but
  • trying to delete the "corrupted" one gets: "ERROR #5803: Failed to acquire exclusive lock on instance of '%SYS.Task'" (the same error as when using the Management Portal)

Re figuring out where system tasks are stored via journalling: I understand the principle of what you are saying, but we reckon the effort involved would be at least as great as scrubbing and reinstalling - we'd lose some config (we've got it documented, but the developer who set it up originally has left), but no important running code.