<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
assign the returned object to the context, and an error occurs when the context is
saved (because the returned object has, say, missing properties), IRIS will only see the error
too late, and the <scope> we defined will not be able to help us handle the fault.
So, to avoid this problem, we call the transformation from a <code> activity instead,
and try to save the returned object before assigning it to the context and returning. If we can't
save it, we assign the error to the status variable and leave the context alone. This way
the <scope> will capture the problem and take us to the <catchall> activity:
// Call the transformation dynamically instead of using a <transform> activity
Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
If $$$ISERR(status) Quit

// Try to save the transformed object BEFORE touching the context;
// on failure, status carries the error and the <scope> handles it
Set status = normalizedRequest.%Save()
If $$$ISERR(status) Quit

// Only now is it safe to assign to the context (property name is illustrative)
Set context.NormalizedRequest = normalizedRequest
</code>
<assign name="Done" property="context.Action" value='"Done"' action="set" xpos='200' ypos='550' />
<faulthandlers>
<catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
<trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
</catchall>
</faulthandlers>
</scope>
"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."
Your configuration needs to be code. Right now, the best approach to have your namespaces/databases created and configured, along with CSP applications, security, etc., is to use %Installer manifests during the build process of the Dockerfile. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that are on GitHub along with the source code.
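A minimal %Installer manifest sketch, assuming a hypothetical class name, namespace, and paths (adjust all of these to your project):

```objectscript
Class MyApp.Installer [ Abstract ]
{

/// Manifest that creates the namespace/database and imports the source code
XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP" Data="MYAPP">
    <Configuration>
      <Database Name="MYAPP" Dir="/opt/app/db" Create="yes"/>
    </Configuration>
    <!-- Import and compile the application sources -->
    <Import File="/opt/app/src" Flags="ck" Recurse="true"/>
  </Namespace>
</Manifest>
}

/// Entry point, typically invoked during the Dockerfile build:
///   Do ##class(MyApp.Installer).setup()
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}
```

The data-loading step (reading the CSV files into tables) would then be one more call made from the same build script.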
That allows you to source control your Code Table contents as well. Test records can be inserted as well with the same procedure.
For an example of this, look at this demo (based on IRIS):
Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:
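The build-time data load can be sketched roughly like this (the base image name, script name, and paths here are hypothetical, not the demo's actual ones):

```dockerfile
# Hypothetical base image carrying the reusable loading utilities
FROM intersystems/irisdemo-base:latest

# Application sources and the CSV files to pre-load
COPY src  /opt/app/src
COPY data /opt/app/data

# Start IRIS, run the installer/loader script, then stop IRIS,
# so the loaded tables are baked into the resulting image
RUN iris start IRIS \
 && iris session IRIS < /opt/app/build.script \
 && iris stop IRIS quietly
```

Because the load happens at build time, every developer pulling the image gets the same pre-populated tables.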
I was using Atelier when I built this image. But the principle is the same. I am now using VSCode doing the same.
This is another demo based on IRIS for Health:
Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files were generated by Synthea, an open-source tool for generating synthetic patient data.
If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables), and make the changes you want while looking at that exact version, with the exact data it needs to work.
When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.
Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!
But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.
But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.
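For those environments, Durable %SYS boils down to pointing ISC_DATA_DIRECTORY at a mounted volume so the instance data survives container rebuilds and upgrades. A minimal sketch (host path and image tag are illustrative):

```shell
# Host directory /data/irisdurable persists across container replacements
docker run -d --name iris \
  -p 52773:52773 \
  -v /data/irisdurable:/durable \
  -e ISC_DATA_DIRECTORY=/durable/config \
  intersystems/iris:latest
```

On first start, IRIS copies its system data into the durable directory; subsequent containers pointed at the same volume reuse it.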
Thank you! That helps a lot. The problem I was having is that I was implementing XXXGetInfo() but not XXXGetODBCInfo().