I figured it out. The code was fine, but production wasn't starting because of a licence not applying. It turns out copying the key to the iris build folder wasn't enough - once I placed a copy in /usr/irissys/mgr/, the production ran. That probably also explains the issue I was having with ZPM, because the key wasn't applied until a later stage from the build location. I need to review the way we handle the licences, but this will get me running for the moment.
ARG IMAGE=containers.intersystems.com/intersystems/irishealth:2022.1.0.209.0
FROM $IMAGE
WORKDIR /home/irisowner/irisbuild
# Copy the licence key into the instance manager directory so it applies at startup
COPY iris.key /usr/irissys/mgr/iris.key
# Keep a copy in the build folder as well, for tooling that expects it there
COPY iris.key iris.key
Hi Rich. Sorry for the delay. Thanks for your input. In answer to some of your questions:
Re: Use of messages/tracing.
The interface would just produce too much data. For instance, if I had to call an API every night, retrieve a full data set, and paginate through it, then every HTTP response would have to be passed back to the process or another operation, get attached to the session, and become part of a massive Visual Trace that takes an age to render and causes the messages in the data namespace to expand too much.
What happens to the data if the process aborts in the middle?
As all our adapters have AlertGroups, this will just use Ens.Alert: an email will be generated for someone to look at, so we have some visibility.
Would missing any of the data being pulled have a negative effect on the business?
In this instance, no, because we are making a copy of some reporting data and the business can just check the source if needed. As mentioned, at a minimum we catch any exceptions and alert a technical contact. Obviously, with an API pull into SQL, it's possible to get a non-2XX status code that ruins a whole run. We would probably use the HTTP adapter settings to manage that (see the sketch below), but unfortunately, because we are not getting a delta, only the latest run is ever important.
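For illustration, a minimal sketch of what I mean, assuming a business operation built on EnsLib.HTTP.OutboundAdapter (the method and message class names are hypothetical). Returning an error %Status for a non-2XX response lets the adapter's RetryInterval/FailureTimeout settings and the AlertGroups take over:

Method PullPage(pRequest As MyApp.Msg.PageRequest, Output pResponse As MyApp.Msg.PageResponse) As %Status
{
    // Assumes the target URL is configured on the adapter in the production
    Set tSC = ..Adapter.Get(.tHttpResponse, "page", pRequest.Page)
    If $$$ISERR(tSC) Quit tSC
    // Treat any non-2XX as a failure so the retry/alerting settings apply
    If $Extract(tHttpResponse.StatusCode, 1) '= "2" {
        Quit $$$ERROR($$$GeneralError, "HTTP "_tHttpResponse.StatusCode_" from source API")
    }
    Set pResponse = ##class(MyApp.Msg.PageResponse).%New()
    Set pResponse.Payload = tHttpResponse.Data.Read()
    Quit $$$OK
}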
Is there any concern over data lineage for security or external auditing?
Generally, no, because data from the HTTP response is just passed into a SQL table. The actual transformation of the data happens after it reaches its destination, via a reporting engine external to IRIS.
Re: Temporary holding table.
I previously did this to produce 5GB of JSON for a MongoDB parser. It works well, but you either have to compact the database after killing the globals or just live with reserving an amount of space for subsequent runs in the data namespace.
Yes, a global mapping to IRISTEMP could work to get around the journalling. My only issue is that it looks like I'd have to add this mapping manually to my namespace in every environment before I promote my code. Is there a way of doing this via a method so I can script it (something like the sketch below, perhaps)?
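For anyone following along, this looks like it should work - a minimal sketch using the Config.MapGlobals class from the %SYS namespace, assuming a hypothetical staging global ^MyStaging and a target namespace MYAPP:

// Must run in %SYS (e.g. from installer or startup code)
New $Namespace
Set $Namespace = "%SYS"
// Map ^MyStaging in MYAPP to IRISTEMP so writes are not journalled
Set tProps("Database") = "IRISTEMP"
If '##class(Config.MapGlobals).Exists("MYAPP", "MyStaging") {
    Set tSC = ##class(Config.MapGlobals).Create("MYAPP", "MyStaging", .tProps)
}

I believe a %Installer manifest can declare the same thing via its GlobalMapping tag, if the manifest route suits the promotion pipeline better.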
Re: Batching. This would only run once a day; however, I'm generating a run identifier, so it should be unique enough.
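Something like this would do - a sketch, assuming a GUID is acceptable as the run identifier:

// One identifier per nightly run; stamp each staged row with it
Set tRunId = $System.Util.CreateGUID()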
Using the %CSP.REST class to get the value will work. It's a bit odd that %REST.Impl does not have this out of the box, though. My guess is there's an expectation of having an IAM in front of this (e.g. Kong or MS Azure).
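For completeness, a minimal sketch of the %CSP.REST route, assuming the value in question is a custom HTTP header such as an API key (the class, route, and header names are all hypothetical):

Class MyApp.REST Extends %CSP.REST
{

XData UrlMap
{
<Routes>
<Route Url="/data" Method="GET" Call="GetData"/>
</Routes>
}

ClassMethod GetData() As %Status
{
    // %request is the %CSP.Request for the current call;
    // custom headers surface as CGI variables prefixed with HTTP_
    Set tApiKey = %request.GetCgiEnv("HTTP_X_API_KEY")
    If tApiKey = "" {
        Quit ..ReportHttpStatusCode(..#HTTP401UNAUTHORIZED)
    }
    // ... handle the request ...
    Quit $$$OK
}

}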