Question
· Nov 3, 2023

What's the best practice for upgrading containerized application code (while using the Durable %SYS feature)?

We are using IRIS for Health 2023.1 to build an application that runs on a Kubernetes cluster as container images. In the container image, we have created our own production "APP", with its routines database and globals database located at:

/usr/irissys/mgr/APPCODE/IRIS.DAT

/usr/irissys/mgr/APPDATA/IRIS.DAT

To persist the data, we use the "Durable %SYS for Persistent Instance Data" feature and set ISC_DATA_DIRECTORY to an external durable folder (/durable/iconfig).
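For reference, a quick way to confirm that the instance is really running from the durable directory (a generic check from an IRIS terminal, nothing application-specific):

 Write $System.Util.GetEnviron("ISC_DATA_DIRECTORY"),!
 // the manager directory should resolve under /durable/iconfig when Durable %SYS is active
 Write ##class(%File).ManagerDirectory(),!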

However, we encounter a problem when we try to upgrade the application:

  1. We deploy the initial version of the application image at the customer site, and the application databases (APPCODE and APPDATA) are persisted in /durable/iconfig/mgr. After deployment, the production settings are modified to meet the customer site's environment needs (e.g. DICOM service settings).
  2. We create a new version of the application image with some bug fixes and changes to the code and the production settings, and we update the APPCODE database in the image. We want to replace the old image with the new one at the customer site and keep the persisted data.
  3. We use the existing durable directory to upgrade the application image. After startup, we find that the new code is not visible in the Management Portal, and the production settings updated in the new version also did not take effect.

Our question is: how can we upgrade the application routines database without losing what has been persisted from the previous version? Is there any recommended way to do this?

We also notice that IRIS provides a feature for "Upgrading InterSystems IRIS Containers". Does this feature only work for IRIS code, or can it also handle our application routines database?

Product version: IRIS 2023.1
Discussion (9)

"we find that the new codes are not visible in the management portal"

Are you sure that your code comes from the APPCODE database?

Regarding the production settings, replacing the production class alone is not sufficient; you need to compile the production class in the final environment. When a production is compiled, some data is saved in the globals database (APPDATA in your case).
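For example, recompiling the production class in the deployed instance and then refreshing the running production could be as simple as this (the class name APP.Production is only an illustration):

 Set tSC = $System.OBJ.Compile("APP.Production","ck")
 Write $System.Status.GetErrorText(tSC)
 // apply the recompiled definition to the running production
 Set tSC = ##class(Ens.Director).UpdateProduction()
 Write $System.Status.GetErrorText(tSC)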

Enrico

Hi Enrico,

Thank you for the reply! 

We have changed both the class code (.cls files) and the production settings (the class extending Ens.Production).

During the container image build process (via the Dockerfile), all the code files (.cls) are compiled and built into the APPCODE database inside the container image. Do you mean we have to compile the changed code in the final deployment environment, at the customer site? That would not be easy for service engineers to do and manage at each customer site (in the case of a major version upgrade, a lot of code files could change).

Is there any other capability provided by IRIS for such a scenario?

What does "RO" stand for? Yes, in fact we tried another approach where we leave APPCODE inside the container image and APPDATA in the durable directory, like this (example of a runtime merge configuration):

[Actions]
CreateResource:Name=%DB_APP,PublicPermission=0
CreateDatabase:Name=APPDATA,Directory=/durable/iconfig/APPDATA,MountAtStartup=1,Resource=%DB_APP
CreateDatabase:Name=APPCODE,Directory=/home/irisowner/mgr/APPCODE,MountAtStartup=1
CreateNamespace:Name=APP,Globals=APPDATA,Routines=APPCODE,Interop=1
CreateMapGlobal:Name=Ens.Config.*,Namespace=APP,Database=APPCODE
CreateMapGlobal:Name=Ens.AutoStart,Namespace=APP,Database=APPCODE

By doing this, upgrading APPCODE becomes very easy when upgrading container images, but there is a new problem: after the customer deployment, if any change is made to the production settings (e.g. changing an AE Title on the DICOM service), those settings are stored in APPCODE, and they will be lost after a pod restart (Kubernetes recreating the pod).

What does "RO" stand for?

ReadOnly.

After the customer deployment, if any change is made to the production settings (e.g. changing an AE Title on the DICOM service), those settings are stored in APPCODE, and they will be lost after a pod restart (Kubernetes recreating the pod).

Two ways to avoid that:

Agree. Use the default settings.

They can be inserted with SQL statements.

Also, if you want to add Configuration Items to the Production post-deployment without having to recompile or change the production manually, use a session script, or create a classmethod or routine that can import a CSV containing the relevant information.

E.g.

&sql( INSERT INTO Ens_Config.DefaultSettings (Deployable, Description, HostClassName, ItemName, ProductionName, SettingName, SettingValue) 
VALUES (1, 'My setting description', 'My.Operation', 'My Nifty Operation', 'My.Production', 'TheSetting', 'SettingValue'))

and for the Production:

Set tProduction = ##class(Ens.Config.Production).%OpenId("My.Production")
Set tObj = ##class(Ens.Config.Item).%New()
Set tObj.Name = "My Nifty Operation"
Set tObj.ClassName = "My.Operation"
Set tObj.PoolSize = 1
Set tObj.Enabled = 1
Set tObj.LogTraceEvents = 0
Set tSC = tProduction.Items.Insert(tObj)
Set tSC = tProduction.%Save()
w $System.Status.GetErrorText(tSC)
Set tSC = ##class(Ens.Director).UpdateProduction()
w $System.Status.GetErrorText(tSC)

You can also open existing config items in order to update the pool size or any of those other settings that, for some reason, are not configurable in the "System Default Settings".
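And for the CSV idea above, a minimal sketch of such a classmethod might look like this (the class name, the CSV layout, and the absence of error/duplicate handling are all assumptions, not a built-in API):

Class APP.Util.ProductionItemLoader
{

/// Add configuration items to a production from a CSV with the columns
/// ProductionName,ItemName,ClassName,PoolSize,Enabled
ClassMethod ImportCSV(pFile As %String) As %Status
{
    Set tSC = $$$OK
    Set tStream = ##class(%Stream.FileCharacter).%New()
    Set tSC = tStream.LinkToFile(pFile)
    Quit:$$$ISERR(tSC) tSC
    While 'tStream.AtEnd {
        Set tLine = $ZStrip(tStream.ReadLine(),"<>WC")
        Continue:tLine=""
        Set tProduction = ##class(Ens.Config.Production).%OpenId($Piece(tLine,",",1))
        Continue:'$IsObject(tProduction)
        // build one Ens.Config.Item per CSV line
        Set tObj = ##class(Ens.Config.Item).%New()
        Set tObj.Name = $Piece(tLine,",",2)
        Set tObj.ClassName = $Piece(tLine,",",3)
        Set tObj.PoolSize = $Piece(tLine,",",4)
        Set tObj.Enabled = +$Piece(tLine,",",5)
        Do tProduction.Items.Insert(tObj)
        Set tSC = tProduction.%Save()
        Quit:$$$ISERR(tSC)
    }
    Quit:$$$ISERR(tSC) tSC
    // pick up the changes in the running production
    Quit ##class(Ens.Director).UpdateProduction()
}

}

It could then be called from a session script or a post-deployment hook, e.g. Do ##class(APP.Util.ProductionItemLoader).ImportCSV("/durable/items.csv").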

Thank you all for the help. 

Just want to share with everyone about our progress:

As the first stage, we successfully mapped the production settings to a third read-write database (APPCONFIG).

Example Dockerfile to build the image containing the CODE database and initial CONFIG (we removed a lot of details, just leaving the key statements):

ARG BASE_IMAGE=intersystems/irishealth:2023.1.0.229.0
FROM ${BASE_IMAGE} as staging

### Launch the IRIS instance and execute the staging process using staging.script
### This deploys the production and imports and compiles all source code
### This process creates APPDATA/APPCODE/APPCONFIG
RUN /usr/irissys/bin/iris start IRIS && \
    /usr/irissys/bin/iris session IRIS < staging.script && \
    /usr/irissys/bin/iris stop IRIS quietly

FROM ${BASE_IMAGE} as final
USER irisowner
### Run install.cpf and initialize databases
### In install.cpf, we only create APPDATA and APPCONFIG
RUN iris start IRIS \
 && iris merge IRIS /tmp/install.cpf \
 && iris stop IRIS quietly
RUN rm /tmp/install.cpf

### Copy in the CODE & CONFIG databases, as they were initialized during the staging process
COPY --from=staging --chown=51773:51773 /output/APPCODE.DAT /home/irisowner/mgr/appcode/IRIS.DAT
COPY --from=staging --chown=51773:51773 /output/APPCONFIG.DAT /usr/irissys/mgr/appconfig/IRIS.DAT
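For reference, the kind of work staging.script asks the instance to do can be summarized by a classmethod like this (the class context, source path and production name are placeholders, not our real ones):

/// Hypothetical build helper invoked from staging.script
ClassMethod Stage() As %Status
{
    // switch to the application namespace created earlier in the staging stage
    New $Namespace
    Set $Namespace = "APP"
    // load and compile all application sources copied into the build context
    Set tSC = $System.OBJ.LoadDir("/opt/app/src","ck",.tErrors,1)
    Quit:$$$ISERR(tSC) tSC
    // flag the production to auto-start (this is what the Ens.AutoStart global mapping is for)
    Do ##class(Ens.Director).SetAutoStart("APP.Production")
    Quit tSC
}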

Example install.cpf:

[Startup]
[Actions]
CreateResource:Name=%DB_APP,PublicPermission=0
# We cannot create CODE databases during the Docker build, otherwise they would be automatically moved to /durable
CreateDatabase:Name=APPDATA,Directory=/usr/irissys/mgr/appdata,MountAtStartup=1,Resource=%DB_APP
CreateDatabase:Name=APPCONFIG,Directory=/usr/irissys/mgr/appconfig,MountAtStartup=1,Resource=%DB_APP

# Since we're not creating the CODE database here, we create the namespace via the runtime merge.cpf

In the runtime container, we use merge.cpf together with the Durable %SYS feature to create the namespace and map everything:

[Startup]
WebServer=1
[Actions]
CreateDatabase:Name=APPCODE,Directory=/home/irisowner/mgr/appcode,MountAtStartup=1
CreateNamespace:Name=APP,Globals=APPDATA,Routines=APPCODE,Interop=1
CreateMapGlobal:Name=Ens.Config.*,Namespace=APP,Database=APPCONFIG
CreateMapGlobal:Name=Ens.AutoStart,Namespace=APP,Database=APPCODE
CreateMapGlobal:Name=EnsLib.DICO*,Namespace=APP,Database=APPCONFIG
CreateMapGlobal:Name=Ens.LookupTable,Namespace=APP,Database=APPCONFIG
CreateMapGlobal:Name=Ens.Util.LookupTableS,Namespace=APP,Database=APPCONFIG

# ...other stuff for APP
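After startup, a quick way to double-check that the mappings took effect is, for example (GetGlobalDest returns "system^directory"):

 // Ens.LookupTable should resolve to the APPCONFIG directory on the durable volume
 Write ##class(%SYS.Namespace).GetGlobalDest("APP","Ens.LookupTable"),!
 // while Ens.AutoStart should resolve to the APPCODE directory inside the image
 Write ##class(%SYS.Namespace).GetGlobalDest("APP","Ens.AutoStart"),!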

With this setup, the CODE is inside the container image itself (with an initial CONFIG), and the DATA and CONFIG are persisted on the customer's durable volume. Next time, we'll be able to deploy a new image with new CODE while keeping the DATA and CONFIG from the durable volume.

As the next step, we'll try to implement System Default Settings.