Article
Mikhail Khomenko · Aug 16, 2017

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare

Hello! This article continues the article "Making Prometheus Monitoring for InterSystems Caché". We will take a look at one way of visualizing the results of the work of the ^mgstat tool. This tool provides the statistics of Caché performance, specifically the number of calls for globals and routines (local and over ECP), the length of the write daemon's queue, the number of blocks saved to the disk and read from it, the amount of ECP traffic and more. ^mgstat can be launched separately (interactively or by a job), and in parallel with another performance measurement tool, ^pButtons.

I'd like to break the narrative into two parts: the first part will graphically demonstrate the statistics collected by ^mgstat, the second one will concentrate on how exactly these stats are collected. In short, we are using $zu-functions. However, there is an object interface for the majority of the collected parameters, accessible via the classes of the SYS.Stats package. Just a fraction of the parameters that you can collect are shown in ^mgstat. Later on, we will try to show all of them on Grafana dashboards. This time, we will only work with the ones provided by ^mgstat. Apart from this, we'll take a first look at Docker containers.

Installing Docker

The first article showed how to install Prometheus and Grafana from tarballs. We will now show how to launch a monitoring server using Docker's capabilities. This is the demo host machine:

```
# uname -r
4.8.16-200.fc24.x86_64
# cat /etc/fedora-release
Fedora release 24 (Twenty Four)
```

Two more virtual machines will be used (192.168.42.131 and 192.168.42.132) in the VMWare Workstation Pro 12.0 environment, both with Caché on board. These are the machines we will be monitoring. Versions:

```
# uname -r
3.10.0-327.el7.x86_64
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
…
USER>write $zversion
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.2 (Build 721U) Wed Aug 17 2016 20:19:48 EDT
```

Let's install Docker on the host machine and launch it:

```
# dnf install -y docker
# systemctl start docker
# systemctl status docker
● docker.service — Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2017-06-21 15:08:28 EEST; 3 days ago
...
```

Launching Prometheus in a Docker container

Let's pull the latest Prometheus image:

```
# docker pull docker.io/prom/prometheus
```

If we look at the Dockerfile, we will see that the image reads the configuration from its /etc/prometheus/prometheus.yml file, and collected statistics are saved to the /prometheus folder:

```
…
CMD [ "-config.file=/etc/prometheus/prometheus.yml", \
      "-storage.local.path=/prometheus", \
...
```

When starting Prometheus in a Docker container, let's make it load the configuration file and the metrics database from the host machine. This will help us "survive" a restart of the container. Let's now create folders for Prometheus on the host machine:

```
# mkdir -p /opt/prometheus/data /opt/prometheus/etc
```

Let's create a Prometheus configuration file:

```
# cat /opt/prometheus/etc/prometheus.yml
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/mgstat/5' # The trailing 5 (sec) is the averaging interval for ^mgstat. It should be less than the scrape interval.
    static_configs:
      - targets: ['192.168.42.131:57772','192.168.42.132:57772']
    basic_auth:
      username: 'PromUser'
      password: 'Secret'
```

We can now launch a container with Prometheus:

```
# docker run -d --name prometheus \
--hostname prometheus -p 9090:9090 \
-v /opt/prometheus/etc/prometheus.yml:/etc/prometheus/prometheus.yml \
-v /opt/prometheus/data/:/prometheus \
docker.io/prom/prometheus
```

Check if it launched fine:

```
# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"
d3a1db5dec1a: "/bin/prometheus -con" Up 5 minutes prometheus
```

Launching Grafana in a Docker container

First, let's download the most recent image:

```
# docker pull docker.io/grafana/grafana
```

We'll then launch it, specifying that the Grafana database (SQLite by default) will be stored on the host machine. We'll also link the Grafana container to the Prometheus one, so that Grafana can reach Prometheus by name:

```
# mkdir -p /opt/grafana/db
# docker run -d --name grafana \
--hostname grafana -p 3000:3000 \
--link prometheus \
-v /opt/grafana/db:/var/lib/grafana \
docker.io/grafana/grafana
# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"
fe6941ce3d15: "/run.sh" Up 3 seconds grafana
d3a1db5dec1a: "/bin/prometheus -con" Up 14 minutes prometheus
```

Using Docker-compose

Both our containers are launched separately. A more convenient method of launching several containers at once is Docker-compose. Let's install it, stop and remove the current two containers, then reconfigure them for Docker-compose and launch them again.

The same in CLI terms:

```
# dnf install -y docker-compose
# docker stop $(docker ps -a -q)
# docker rm $(docker ps -a -q)
# mkdir /opt/docker-compose
# cat /opt/docker-compose/docker-compose.yml
version: '2'
services:
  prometheus:
    image: docker.io/prom/prometheus
    container_name: prometheus
    hostname: prometheus
    ports:
      - 9090:9090
    volumes:
      - /opt/prometheus/etc/prometheus.yml:/etc/prometheus/prometheus.yml
      - /opt/prometheus/data/:/prometheus
  grafana:
    image: docker.io/grafana/grafana
    container_name: grafana
    hostname: grafana
    ports:
      - 3000:3000
    volumes:
      - /opt/grafana/db:/var/lib/grafana
# docker-compose -f /opt/docker-compose/docker-compose.yml up -d
#
# # Both containers can be stopped and removed with the following command:
# # docker-compose -f /opt/docker-compose/docker-compose.yml down
#
# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"
620e3cb4a5c3: "/run.sh" Up 11 seconds grafana
e63416e6c247: "/bin/prometheus -con" Up 12 seconds prometheus
```

Post-installation procedures

After starting Grafana for the first time, you need to do two more things: change the admin password for the web interface (by default, the login/password combination is admin/admin) and add Prometheus as a data source. This can be done either from the web interface, by directly editing the Grafana SQLite database (located by default at /opt/grafana/db/grafana.db), or using REST requests.

Let me show the third option:

```
# curl -XPUT "admin:admin@localhost:3000/api/user/password" \
-H "Content-Type:application/json" \
-d '{"oldPassword":"admin","newPassword":"TopSecret","confirmNew":"TopSecret"}'
```

If the password has been changed successfully, you will get the following response:

```
{"message":"User password changed"}
```

A response of the following type:

```
curl: (56) Recv failure: Connection reset by peer
```

means that the Grafana server hasn't started yet and we just need to wait a little longer before running the previous command again.
You can wait like this, for example:

```
# until curl -sf admin:admin@localhost:3000 > /dev/null; do sleep 1; echo "Grafana is not started yet"; done; echo "Grafana is started"
```

Once you've successfully changed the password, add a Prometheus data source:

```
# curl -XPOST "admin:TopSecret@localhost:3000/api/datasources" \
-H "Content-Type:application/json" \
-d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```

If the data source has been added successfully, you will get a response:

```
{"id":1,"message":"Datasource added","name":"Prometheus"}
```

Creating an equivalent of ^mgstat

^mgstat saves its output to a file and, in interactive mode, to the terminal. We don't care about output to a file. This is why we are going to use Studio to create and compile a class called my.Metrics containing an object-oriented implementation of ^mgstat in the USER namespace.

```
/// This class is an object-oriented implementation of ^mgstat routine.
/// Unlike the latter, the Caché version check is skipped.
/// If you want to monitor seizes you should set parameter ISSEIZEGATHERED = 1.
/// Unlike ^mgstat routine, Seizes metrics are shown as diffs (not as percentages).
/// Some of the $zutil functions are unknown to me, but they are used in ^mgstat so they're left here.
Class my.Metrics Extends %RegisteredObject
{

/// Metrics prefix
Parameter PREFIX = "isc_cache_mgstat_";

/// Metrics for Prometheus must be divided by 'new line'
Parameter NL As COSEXPRESSION = "$c(10)";

/// Unknown parameter -). Used as in ^mgstat.int
Parameter MAXVALUE = 1000000000;

/// 2**64 - 10. Why minus 10? It's a question -) Used as in ^mgstat.int
Parameter MAXVALGLO = 18446744073709551610;

/// Resources that we are interested to monitor. You can change this list
Parameter SEIZENAMES = "Global,ObjClass,Per-BDB";

/// Default value - $zutil(69,74). You can start gathering seize statistics by setting "1"
Parameter ISSEIZEGATHERED = 0;

Parameter MAXECPCONN As COSEXPRESSION = "$system.ECP.MaxClientConnections()";

/// Number of global buffer types (8K, 16K etc.)
Parameter NUMBUFF As COSEXPRESSION = "$zutil(190, 2)";

/// Memory offset (for what? -))
Parameter WDWCHECK As COSEXPRESSION = "$zutil(40, 2, 146)";

/// Memory offset for write daemon phase
Parameter WDPHASEOFFSET As COSEXPRESSION = "$zutil(40, 2, 145)";

/// Memory offset for journals
Parameter JOURNALBASE As COSEXPRESSION = "$zutil(40, 2, 94)";

ClassMethod getSamples(delay As %Integer = 2) As %Status
{
    set sc = $$$OK
    try {
        set sc = ..gather(.oldValues)
        hang delay
        set sc = ..gather(.newValues)
        set sc = ..diff(delay, .oldValues, .newValues, .displayValues)
        set sc = ..output(.displayValues)
    } catch e {
        write "Error: "_e.Name_"_"_e.Location, ..#NL
    }
    quit sc
}

ClassMethod gather(Output values) As %Status
{
    set sc = $$$OK
    // Get statistics for globals
    set sc = ..getGlobalStat(.values)
    // Get write daemon statistics
    set sc = ..getWDStat(.values)
    // Get journal writes
    set values("journal_writes") = ..getJournalWrites()
    // Get seizes statistics
    set sc = ..getSeizeStat(.values)
    // Get ECP statistics
    set sc = ..getECPStat(.values)
    quit sc
}

ClassMethod diff(delay As %Integer = 2, ByRef oldValues, ByRef newValues, Output displayValues) As %Status
{
    set sc = $$$OK
    // Process metrics for globals
    set sc = ..loopGlobal("global", .oldValues, .newValues, delay, 1, .displayValues)
    set displayValues("read_ratio") = $select(displayValues("physical_reads") = 0: 0, 1: $number(displayValues("logical_block_requests") / displayValues("physical_reads"),2))
    set displayValues("global_remote_ratio") = $select(displayValues("remote_global_refs") = 0: 0, 1: $number(displayValues("global_refs") / displayValues("remote_global_refs"),2))
    // Process metrics for write daemon (not per second)
    set sc = ..loopGlobal("wd", .oldValues, .newValues, delay, 0, .displayValues)
    // Process journal writes
    set displayValues("journal_writes") = ..getDiff(oldValues("journal_writes"), newValues("journal_writes"), delay)
    // Process seize metrics
    set sc = ..loopGlobal("seize", .oldValues, .newValues, delay, 1, .displayValues)
    // Process ECP client metrics
    set sc = ..loopGlobal("ecp", .oldValues, .newValues, delay, 1, .displayValues)
    set displayValues("act_ecp") = newValues("act_ecp")
    quit sc
}

ClassMethod getDiff(oldValue As %Integer, newValue As %Integer, delay As %Integer = 2) As %Integer
{
    if (newValue < oldValue) {
        set diff = (..#MAXVALGLO - oldValue + newValue) \ delay
        if (diff > ..#MAXVALUE) set diff = newValue \ delay
    } else {
        set diff = (newValue - oldValue) \ delay
    }
    quit diff
}

ClassMethod loopGlobal(subscript As %String, ByRef oldValues, ByRef newValues, delay As %Integer = 2, perSecond As %Boolean = 1, Output displayValues) As %Status
{
    set sc = $$$OK
    set i = ""
    for {
        set i = $order(newValues(subscript, i))
        quit:(i = "")
        if (perSecond = 1) {
            set displayValues(i) = ..getDiff(oldValues(subscript, i), newValues(subscript, i), delay)
        } else {
            set displayValues(i) = newValues(subscript, i)
        }
    }
    quit sc
}

ClassMethod output(ByRef displayValues) As %Status
{
    set sc = $$$OK
    set i = ""
    for {
        set i = $order(displayValues(i))
        quit:(i = "")
        write ..#PREFIX_i," ", displayValues(i),..#NL
    }
    write ..#NL
    quit sc
}

ClassMethod getGlobalStat(ByRef values) As %Status
{
    set sc = $$$OK
    set gloStatDesc = "routine_refs,remote_routine_refs,routine_loads_and_saves,remote_routine_loads_and_saves,global_refs,remote_global_refs,logical_block_requests,physical_reads,physical_writes,global_updates,remote_global_updates,routine_commands,wij_writes,routine_cache_misses,object_cache_hit,object_cache_miss,object_cache_load,object_references_newed,object_references_del,process_private_global_refs,process_private_global_updates"
    set gloStat = $zutil(190, 6, 1)
    for i = 1:1:$length(gloStat, ",") {
        set values("global", $piece(gloStatDesc, ",", i)) = $piece(gloStat, ",", i)
    }
    quit sc
}

ClassMethod getWDStat(ByRef values) As %Status
{
    set sc = $$$OK
    set tempWdQueue = 0
    for b = 1:1:..#NUMBUFF {
        set tempWdQueue = tempWdQueue + $piece($zutil(190, 2, b), ",", 10)
    }
    set wdInfo = $zutil(190, 13)
    set wdPass = $piece(wdInfo, ",")
    set wdQueueSize = $piece(wdInfo, ",", 2)
    set tempWdQueue = tempWdQueue - wdQueueSize
    if (tempWdQueue < 0) set tempWdQueue = 0
    set misc = $zutil(190, 4)
    set ijuLock = $piece(misc, ",", 4)
    set ijuCount = $piece(misc, ",", 5)
    set wdPhase = 0
    if (($view(..#WDWCHECK, -2, 4)) && (..#WDPHASEOFFSET)) {
        set wdPhase = $view(..#WDPHASEOFFSET, -2, 4)
    }
    set wdStatDesc = "write_daemon_queue_size,write_daemon_temp_queue,write_daemon_pass,write_daemon_phase,iju_lock,iju_count"
    set wdStat = wdQueueSize_","_tempWdQueue_","_wdPass_","_wdPhase_","_ijuLock_","_ijuCount
    for i = 1:1:$length(wdStat, ",") {
        set values("wd", $piece(wdStatDesc, ",", i)) = $piece(wdStat, ",", i)
    }
    quit sc
}

ClassMethod getJournalWrites() As %String
{
    quit $view(..#JOURNALBASE, -2, 4)
}

ClassMethod getSeizeStat(ByRef values) As %Status
{
    set sc = $$$OK
    set seizeStat = "", seizeStatDescList = ""
    set selectedNames = ..#SEIZENAMES
    set seizeNumbers = ..getSeizeNumbers(selectedNames)
    // seize statistics
    set isSeizeGatherEnabled = ..#ISSEIZEGATHERED
    if (seizeNumbers = "") {
        set SeizeCount = 0
    } else {
        set SeizeCount = isSeizeGatherEnabled * $length(seizeNumbers, ",")
    }
    for i = 1:1:SeizeCount {
        set resource = $piece(seizeNumbers, ",", i)
        set resourceName = ..getSeizeLowerCaseName($piece(selectedNames, ",", i))
        set resourceStat = $zutil(162, 3, resource)
        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ","))
        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ",", 2))
        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ",", 3))
        set seizeStatDescList = seizeStatDescList_$listbuild(resourceName_"_seizes", resourceName_"_n_seizes", resourceName_"_a_seizes")
    }
    set seizeStatDesc = $listtostring(seizeStatDescList, ",")
    set seizeStat = $listtostring(seizeStat, ",")
    if (seizeStat '= "") {
        for k = 1:1:$length(seizeStat, ",") {
            set values("seize", $piece(seizeStatDesc, ",", k)) = $piece(seizeStat, ",", k)
        }
    }
    quit sc
}

ClassMethod getSeizeNumbers(selectedNames As %String) As %String
{
    /// USER>write $zu(162,0)
    // Pid,Routine,Lock,Global,Dirset,SatMap,Journal,Stat,GfileTab,Misc,LockDev,ObjClass...
    set allSeizeNames = $zutil(162,0)_"," //Returns all resources names
    set seizeNumbers = ""
    for i = 1:1:$length(selectedNames, ",") {
        set resourceName = $piece(selectedNames,",",i)
        continue:(resourceName = "")||(resourceName = "Unused")
        set resourceNumber = $length($extract(allSeizeNames, 1, $find(allSeizeNames, resourceName)), ",") - 1
        continue:(resourceNumber = 0)
        if (seizeNumbers = "") {
            set seizeNumbers = resourceNumber
        } else {
            set seizeNumbers = seizeNumbers_","_resourceNumber
        }
    }
    quit seizeNumbers
}

ClassMethod getSeizeLowerCaseName(seizeName As %String) As %String
{
    quit $tr($zcvt(seizeName, "l"), "-", "_")
}

ClassMethod getECPStat(ByRef values) As %Status
{
    set sc = $$$OK
    set ecpStat = ""
    if (..#MAXECPCONN '= 0) {
        set fullECPStat = $piece($system.ECP.GetProperty("ClientStats"), ",", 1, 21)
        set activeEcpConn = $system.ECP.NumClientConnections()
        set addBlocks = $piece(fullECPStat, ",", 2)
        set purgeBuffersByLocal = $piece(fullECPStat, ",", 6)
        set purgeBuffersByRemote = $piece(fullECPStat, ",", 7)
        set bytesSent = $piece(fullECPStat, ",", 19)
        set bytesReceived = $piece(fullECPStat, ",", 20)
    }
    set ecpStatDesc = "add_blocks,purge_buffers_local,purge_server_remote,bytes_sent,bytes_received"
    set ecpStat = addBlocks_","_purgeBuffersByLocal_","_purgeBuffersByRemote_","_bytesSent_","_bytesReceived
    if (ecpStat '= "") {
        for l = 1:1:$length(ecpStat, ",") {
            set values("ecp", $piece(ecpStatDesc, ",", l)) = $piece(ecpStat, ",", l)
        }
        set values("act_ecp") = activeEcpConn
    }
    quit sc
}

}
```

In order to call my.Metrics via REST, let's create a wrapper class for it in the USER namespace.

```
Class my.Mgstat Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/:delay" Method="GET" Call="getMgstat"/>
</Routes>
}

ClassMethod getMgstat(delay As %Integer = 2) As %Status
{
    // By default, we use a 2 second interval for averaging
    quit ##class(my.Metrics).getSamples(delay)
}

}
```

Creating a resource, a user and a web application

Now that we have a class feeding us metrics, we can create a RESTful web application. Like in the first article, we'll allocate a resource to this web application and create a user who will use it and on whose behalf Prometheus will be collecting metrics. Once done, let's grant the user rights to particular databases. In comparison with the first article, we have added a permission to write to the CACHESYS database (to avoid the <UNDEFINED>loop+1^mymgstat *gmethod" error) and the possibility to use the %Admin_Manage resource (to avoid the <PROTECT>gather+10^mymgstat *GetProperty,%SYSTEM.ECP" error). Let's repeat these steps on both virtual servers, 192.168.42.131 and 192.168.42.132.
Before doing that, we'll upload our code, the my.Metrics class and the my.Mgstat class, to the USER namespace on both servers (the code is available on github). That is, we perform the following steps on each virtual server:

```
# cd /tmp
# wget https://raw.githubusercontent.com/myardyas/prometheus/master/mgstat/cos/udl/Metrics.cls
# wget https://raw.githubusercontent.com/myardyas/prometheus/master/mgstat/cos/udl/Mgstat.cls
#
# # If servers are not connected to the Internet, copy the program and the class locally, then use scp.
#
# csession <instance_name> -U user

USER>do $system.OBJ.Load("/tmp/Metrics.cls*/tmp/Mgstat.cls","ck")

USER>zn "%sys"

%SYS>write ##class(Security.Resources).Create("PromResource","Resource for Metrics web page","")
1
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%Admin_Manage:U,%DB_USER:RW,%DB_CACHESYS:RW")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
%SYS>set properties("NameSpace") = "USER"
%SYS>set properties("Description") = "RESTfull web-interface for mgstat"
%SYS>set properties("AutheEnabled") = 32 ; See description
%SYS>set properties("Resource") = "PromResource"
%SYS>set properties("DispatchClass") = "my.Mgstat"
%SYS>write ##class(Security.Applications).Create("/mgstat",.properties)
1
```

Check the availability of metrics using curl (don't forget to open port 57772 in the firewall):

```
# curl --user PromUser:Secret -XGET http://192.168.42.131:57772/mgstat/5
isc_cache_mgstat_global_refs 347
isc_cache_mgstat_remote_global_refs 0
isc_cache_mgstat_global_remote_ratio 0
…
# curl --user PromUser:Secret -XGET http://192.168.42.132:57772/mgstat/5
isc_cache_mgstat_global_refs 130
isc_cache_mgstat_remote_global_refs 0
isc_cache_mgstat_global_remote_ratio 0
...
```

Check the availability of metrics from Prometheus

Prometheus listens on port 9090. Let's check the status of Targets first. Then look at a random metric.

Showing one metric

We'll now show just one metric – for example, isc_cache_mgstat_global_refs – as a graph. We'll need to update the dashboard and insert the graph there. To do this, we go to Grafana (http://localhost:3000, login/pass — admin/TopSecret) and add a new dashboard. Add a graph. Edit it by clicking on "Panel title", then "Edit". Set Prometheus as the data source and pick our metric, isc_cache_mgstat_global_refs. Set the resolution to 1/1. Let's give our graph a name and add a legend. Click the "Save" button at the top of the window and type the dashboard's name. We end up having something like this:

Showing all metrics

Let's add the rest of the metrics the same way. There will be two text metrics – Singlestat. As the result, we'll get the following dashboard (upper and lower parts shown). Two things obviously don't seem right:

— Scrollbars in the legend (as the number of servers goes up, you'll have to do a lot of scrolling);
— Absence of data in Singlestat panels (which, of course, imply a single value). We have two servers and two values, accordingly.

Adding the use of a template

Let's try fixing these issues by introducing a template for instances. To do that, we'll need to create a variable storing the value of the instance, and slightly edit the requests to Prometheus, according to the rules. That is, instead of the "isc_cache_mgstat_global_refs" request, we should use "isc_cache_mgstat_global_refs{instance="[[instance]]"}" after creating an "instance" variable.

Creating a variable: in our request to Prometheus, let's select the values of instance labels from each metric.
In the lower part, we can see that the values of our two instances have been identified. Click the "Add" button, and a variable with possible values is added to the upper part of the dashboard. Let us now add this variable to the requests for each panel on the dashboard; that is, turn a request like "isc_cache_mgstat_global_refs" into "isc_cache_mgstat_global_refs{instance="[[instance]]"}". The resulting dashboard will look like this (instance names have been left next to the legend on purpose). Singlestat panels are now working as well.

The template of this dashboard can be downloaded from github. The process of importing it to Grafana was described in part 1 of the article.

Finally, let's make server 192.168.42.132 the ECP client for 192.168.42.131 and create globals to generate ECP traffic. We can see that ECP client monitoring is working.

Conclusion

We can replace the display of ^mgstat results in Excel with a dashboard full of nice-looking graphs that are available online. The downside is the need to use an alternative version of ^mgstat. In general, the code of the source tool can change, which wasn't taken into account. However, we get a really convenient method of monitoring Caché's performance.

Thank you for your attention!

To be continued...

P.S. The demo (for one instance) is available here, no login/password required.

Nice! Thanks for this! Here's what I'm currently using to deal with pButtons/mgstat output: https://community.intersystems.com/post/visualizing-data-jungle-part-iv-running-yape-docker-image Cheers, Fab

Fabian, thanks for the link! I'll try it.

Thank you for sharing. Good job.

Hi Mikhail, you've done a really nice job! I'm just curious, why "We don't care about output to a file"? Wasn't it easier to parse mgstat's output file?

Hi, Alexey! Thanks! Regarding your question: I think in that case we would have to 1) run mgstat continuously and 2) parse the file. Although both of these steps are not difficult, a REST interface enables us to merge them into one step: we run the class at the moment we want. Besides, we can always extend our metrics set. For example, it's worth adding monitoring of database sizes as well as Mirroring, Shadowing etc.

Mikhail, haven't you considered keeping the time series in Caché and using Grafana directly? @David Loveluck seems to have got something working in this direction: https://community.intersystems.com/post/using-grafana-directly-iris. Caché/IRIS is a powerful database, so another database in the middle feels like Boeing using parts from Airbus.

@Arto.Alatalo At that time I used Prometheus as a central monitoring system for hosts, services and Caché as well. Also, the SimpleJSON plugin would need to be improved to provide functionality similar to Prometheus/Grafana, at least as of the last time I looked at it.

Does anyone still have the JSON of the dashboard?

Hi Mikhail, where did you get the information for the meaning of the returned values from $system.ECP.GetProperty("ClientStats")? Nice job, anyway ;-)

Hi David, thanks -) Regarding the meaning - it's taken from the mgstat source code (%SYS, routine mgstat.int). The starting point was line 159 in my local Caché 2017.1:

```
i maxeccon s estats=$p($system.ECP.GetProperty("ClientStats"),",",1,21),array($i(i))=+$system.ECP.NumClientConnections(),array($i(i))=$p(estats,",",2),array($i(i))=$p(estats,",",6),array($i(i))=$p(estats,",",7),array($i(i))=$p(estats,",",19),array($i(i))=$p(estats,",",20)
```

Then I guessed the meaning from a subroutine "heading" (line 289). But the best option for you, I think, is to ask the WRC. Support is very good.
Article
Eduard Lebedyuk · Aug 14, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part IX: Container architecture

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD
- Why containers?
- Containers infrastructure
- CD using containers
- CD using ICM
- Container architecture

In this article, we will talk about building your own container and deploying it.

Durable %SYS

Since containers are rather ephemeral, they should not store any application data. The Durable %SYS feature enables us to do just that - store settings, configuration, %SYS data and so on on a host volume, namely:

- The iris.cpf file.
- The /csp directory, containing the web gateway configuration and log files.
- The file /httpd/httpd.conf, the configuration file for the instance's private web server.
- The /mgr directory, containing the following:
  - The IRISSYS system database, comprising the IRIS.DAT and iris.lck files and the stream directory, and the iristemp, irisaudit, iris and user directories containing the IRISTEMP, IRISAUDIT, IRIS and USER system databases.
  - The write image journaling file, IRIS.WIJ.
  - The /journal directory containing journal files.
  - The /temp directory for temporary files.
  - Log files including messages.log, journal.log, and SystemMonitor.log.

Container architecture

On the other hand, we need to store application code inside our container so that we can upgrade it when required. All that brings us to this architecture:

To achieve this, during build time we need, at the minimum, to create one additional database (to store application code) and map it into our application namespace. In my example, I will use the USER namespace to hold application data, as it already exists and is durable.

Installer

Based on the above, our installer needs to:

- Create the APP namespace/database
- Load code into the APP namespace
- Map our application classes to the USER namespace
- Do all other installation steps (in this case I created a CSP web app and a REST app)

```
Class MyApp.Hooks.Local
{

Parameter Namespace = "APP";

/// See generated code in zsetup+1^MyApp.Hooks.Local.1
XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
<Log Text="Creating namespace ${Namespace}" Level="0"/>
<Namespace Name="${Namespace}" Create="yes" Code="${Namespace}" Ensemble="" Data="IRISTEMP">
<Configuration>
<Database Name="${Namespace}" Dir="/usr/irissys/mgr/${Namespace}" Create="yes" MountRequired="true" Resource="%DB_${Namespace}" PublicPermissions="RW" MountAtStartup="true"/>
</Configuration>
<Import File="${Dir}Form" Recurse="1" Flags="cdk" IgnoreErrors="1" />
</Namespace>
<Log Text="End Creating namespace ${Namespace}" Level="0"/>
<Log Text="Mapping to USER" Level="0"/>
<Namespace Name="USER" Create="no" Code="USER" Data="USER" Ensemble="0">
<Configuration>
<Log Text="Mapping Form package to USER namespace" Level="0"/>
<ClassMapping From="${Namespace}" Package="Form"/>
<RoutineMapping From="${Namespace}" Routines="Form" />
</Configuration>
<CSPApplication Url="/" Directory="${Dir}client" AuthenticationMethods="64" IsNamespaceDefault="false" Grant="%ALL" Recurse="1" />
</Namespace>
</Manifest>
}

/// This is a method generator whose code is generated by XGL.
/// Main setup method
/// set vars("Namespace")="TEMP3"
/// do ##class(MyApp.Hooks.Global).setup(.vars)
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 0, pInstaller As %Installer.Installer) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

/// Entry point
ClassMethod onAfter() As %Status
{
    try {
        write "START INSTALLER",!
        set vars("Namespace") = ..#Namespace
        set vars("Dir") = ..getDir()
        set sc = ..setup(.vars)
        write !,$System.Status.GetErrorText(sc),!
        set sc = ..createWebApp()
    } catch ex {
        set sc = ex.AsStatus()
        write !,$System.Status.GetErrorText(sc),!
    }
    quit sc
}

/// Modify web app REST
ClassMethod createWebApp(appName As %String = "/forms") As %Status
{
    set:$e(appName)'="/" appName = "/" _ appName
    #dim sc As %Status = $$$OK
    new $namespace
    set $namespace = "%SYS"
    if '##class(Security.Applications).Exists(appName) {
        set props("AutheEnabled") = $$$AutheUnauthenticated
        set props("NameSpace") = "USER"
        set props("IsNameSpaceDefault") = $$$NO
        set props("DispatchClass") = "Form.REST.Main"
        set props("MatchRoles")=":" _ $$$AllRoleName
        set sc = ##class(Security.Applications).Create(appName, .props)
    }
    quit sc
}

ClassMethod getDir() [ CodeMode = expression ]
{
##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
}

}
```

To create the non-durable database, I use a subdirectory of /usr/irissys/mgr, which is not persistent. Note that the call to ##class(%File).ManagerDirectory() returns the path to the durable directory and not the path to the internal container directory.

Continuous delivery configuration

Check part VII for complete info, but all we need to do is add two things to our existing configuration: the --volume mount and the ISC_DATA_DIRECTORY environment variable.

```
run image:
  stage: run
  environment:
    name: $CI_COMMIT_REF_NAME
    url: http://$CI_COMMIT_REF_SLUG.docker.eduard.win/index.html
  tags:
    - test
  script:
    - docker run -d --expose 52773 --volume /InterSystems/durable/$CI_COMMIT_REF_SLUG:/data --env ISC_DATA_DIRECTORY=/data/sys --env VIRTUAL_HOST=$CI_COMMIT_REF_SLUG.docker.eduard.win --name iris-$CI_COMMIT_REF_NAME docker.eduard.win/test/docker:$CI_COMMIT_REF_NAME --log $ISC_PACKAGE_INSTALLDIR/mgr/messages.log
```

The volume argument mounts a host directory into the container, and the ISC_DATA_DIRECTORY variable shows InterSystems IRIS what directory to use. To quote the documentation:

> When you run an InterSystems IRIS container using these options, the following occurs:
>
> - The specified external volume is mounted.
> - If the durable %SYS directory specified by the ISC_DATA_DIRECTORY environment variable, iconfig/ in the preceding example, already exists and contains durable %SYS data, all of the instance's internal pointers are reset to that directory and the instance uses the data it contains.
> - If the durable %SYS directory specified by the ISC_DATA_DIRECTORY environment variable already exists but does not contain durable %SYS data, no data is copied and the instance runs using the data in the installation tree inside the container, which means that the instance-specific data is not persistent. For this reason, you may want to include in scripts a check for this condition prior to running the container.
> - If the durable %SYS directory specified by ISC_DATA_DIRECTORY does not exist:
>   - The specified durable %SYS directory is created.
>   - The directories and files listed in contents of the Durable %SYS Directory are copied from their installed locations to the durable %SYS directory (the originals remain in place).
>   - All of the instance's internal pointers are reset to the durable %SYS directory and the instance uses the data it contains.
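As a side note, here is a minimal, hypothetical sketch of how you might verify from inside a running instance that Durable %SYS is actually in effect; both calls appear elsewhere in this article, and the snippet is only an illustration:

```objectscript
// Sketch: compare the requested durable directory with the active manager directory
set durable = $system.Util.GetEnviron("ISC_DATA_DIRECTORY")
if durable '= "" {
    write "Durable %SYS requested: ", durable, !
    write "Active manager directory: ", ##class(%File).ManagerDirectory(), !
} else {
    write "ISC_DATA_DIRECTORY is not set - instance data stays inside the container", !
}
```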
Updates

When the application evolves and a new version (container) gets released, sometimes you may need to run some code. It could be pre/post compile hooks, schema migrations, unit tests, but all it boils down to is that you need to run arbitrary code. That's why you need a framework that manages your application. In previous articles, I outlined a basic structure of this framework, but it, of course, can be considerably extended to fit specific application requirements.

Conclusion

Creating a containerized application takes some thought, but InterSystems IRIS offers several features to make this process easier.

Links

- Index
- Code for the article
- Test project
- Complete CD configuration
- Complete CD configuration with durable %SYS

Great article, Eduard, thanks! It's not clear though what I need to have on a production host initially. In my understanding, installed Docker and a container with IRIS are enough. In that case, how do Durable %SYS and application data (USER namespace) appear on the production host for the first time?

> what do I need to have on a production host initially?

First of all you need to have:

- FQDN
- GitLab server
- Docker Registry
- InterSystems IRIS container inside your Docker Registry

They could be anywhere as long as they are accessible from the production host. After that, on a production host (and on every separate host you want to use), you need to have:

- Docker
- GitLab Runner
- Nginx reverse proxy container

After all these conditions are met, you can create a Continuous Delivery configuration in GitLab and it would build and deploy your container to the production host.

> In that case how Durable %SYS and Application data (USER NAMESPACE) appear on the production host for the first time?

When an InterSystems IRIS container is started in Durable %SYS mode, it checks the directory for durable data; if it does not exist, InterSystems IRIS creates it and copies the data from inside the container. If the directory already exists and contains databases/config/etc, then it's used. By default InterSystems IRIS has all configs inside.

Thanks, Eduard! But why GitLab? Can I manage all the Continuous Delivery, say, with GitHub?

GitHub does not offer CI functionality but can be integrated with a CI engine.

Another question: can I deploy a container manually on a production host with a reasonable number of steps, or is CI the only option?

> can I deploy a container manually

Sure, to deploy a container manually it's enough to execute this command:

```
docker run -d --expose 52773 --volume /InterSystems/durable/master:/data --env ISC_DATA_DIRECTORY=/data/sys --name iris-master docker.eduard.win/test/docker:master --log $ISC_PACKAGE_INSTALLDIR/mgr/messages.log
```

Alternatively, you can use GUI container management tools to configure a container you want to deploy. For example, here's the Portainer web interface: you can define volumes, variables, etc. there. It also allows browsing the registry and inspecting your running containers, among other things.

Wow. Do you have any experience with Portainer + IRIS containers? I think it deserves a new article.

I ran InterSystems IRIS containers via Rancher and Portainer; it's all the same stuff: a GUI over docker run.

Excellent work! Way to go!

Thanks for the sharing, Eduard.
That is a good demonstration of the use of -volume in the docker run command. In the situation of bringing up multiple containers of the same InterSystems IRIS image (such as starting up a number of sharding nodes, etc.), the system administrator may consider organising the host so that some files/folders are shared among instances as read-only (such as the license key, etc.), and some are isolated yet temporary. Maybe we can go further and enhance the docker run command in this direction, so that the same command with a small change (such as the port number and the container name) is enough for starting up a cluster of instances quickly.

I think you'll need a full-fledged orchestrator for that. docker run always starts one container, but several volumes can be mounted, some of them RW, some RO. I'm not sure about mounting the same folder into several containers (again, orchestration). You can also use the volumes_from argument to mount a directory from one container into another.

Thank you, Luca! That's right. Maybe we can consider running docker-compose as an option for such an orchestration.
Announcement
Fabiano Sanches · Jul 12, 2023

InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2023.1.1 are available

The extended maintenance releases of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2023.1.1 are now available. This release provides bug fixes for the previous 2023.1.0 releases. You can find the detailed change lists / upgrade checklists on these pages:

- InterSystems IRIS
- InterSystems IRIS for Health
- HealthShare Health Connect

How to get the software

The software is available as both classic installation packages and container images. For the complete list of available installers and container images, please refer to the Supported Platforms webpage.

Full installation packages for both InterSystems IRIS and InterSystems IRIS for Health are available from the WRC's InterSystems IRIS Data Platform Full Kits page. HealthShare Health Connect kits are available from the WRC's HealthShare Full Kits page. Container images are available from the InterSystems Container Registry.

There are no Community Edition kits or containers available for this release. The build number of all kits & containers in this release is 2023.1.1.380.0.
Announcement
Sam Schafer · Jun 21, 2023

Managing InterSystems Servers: In-Person July 10-14, 2023 - Registration space available

Course: Managing InterSystems Servers

When: July 10-14, 2023, 9am to 5pm US Eastern Time (ET)

Where: In-person only at InterSystems Headquarters in Cambridge, MA.

This five-day course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.

Self-register on our registration site

Email education@intersystems.com with questions
Announcement
Celeste Canzano · Mar 24

Reminder: Beta Testers Needed for Our Upcoming InterSystems IRIS Developer Professional Certification Exam

Hello again IRIS community, We have officially released our InterSystems IRIS Developer Professional certification exam for beta testing. The beta test will be available until April 20, 2025. As a beta tester, you have the chance to earn the certification for free! Interested in beta testing? See the InterSystems IRIS Developer Professional Beta Test Developer Community post for exam details, recommended preparation, and instructions on how to schedule and take the beta exam. Thank you!
Announcement
Larry Finlayson · Mar 26

Managing InterSystems Servers – In-Person April 14-18, 2025 / Registration space available

Managing InterSystems Servers – In-Person April 14-18, 2025 Configure, manage, plan, and monitor system operations of InterSystems Data Platform technology This five-day course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu. Self-Register Here
Announcement
Anastasia Dyubaylo · Mar 25

[Video] Building and Deploying React Frontend for the InterSystems FHIR server in 20 minutes with Lovable AI

Hi Community,

Check out this new video that shows how to build the UI for the InterSystems FHIR server from scratch using AI, with no previous frontend experience:

📺 Building and Deploying React Frontend for the InterSystems FHIR server in 20 minutes with Lovable AI

🗣 Presenter: @Evgeny Shvarov, Senior Manager of Developer and Startup Programs, InterSystems

The tools and products used:

- InterSystems IRIS for Health (FHIR Server)
- Lovable.dev (Frontend)
- VSCode (dev env)
- Github (code repository)
- ngrok (domain forwarding)

Share your thoughts or questions in the comments to this post. Enjoy!

Here is the related demo project on OEX.
Announcement
Daniel Palevski · Apr 2

Alert: InterSystems IRIS 2024.3 – AIX JSON Parsing Issue & IntegratedML Incompatibilities

Summary of Alerts

| Alert ID | Product & Versions Affected | Explicit Requirements |
| --- | --- | --- |
| DP-439207 | InterSystems IRIS® data platform 2024.3 (AIX) | AIX installations using JSON processing and Unicode non-Latin-1 character sets |
| DP-439280 | InterSystems IRIS 2024.3 (containers with IntegratedML) | IntegratedML containers using TensorFlow |

Detail of Alerts

DP-439207 - AIX JSON Unicode Parsing Issue

A bug has been identified in InterSystems IRIS 2024.3.0 on AIX instances that affects the parsing of JSON Unicode strings. The issue arises when either the %FromJSON() or %FromJSONFile() method parses strings that contain characters with values less than $CHAR(256) followed by Unicode characters exceeding $CHAR(255). The conversion process incorrectly transforms the earlier characters into $CHAR(0), leading to silent data corruption.

This issue only affects AIX version 2024.3 of the following products:

- InterSystems IRIS
- InterSystems IRIS for Health
- HealthShare® Health Connect

Impact Assessment

When this occurs, incorrect characters may be stored in the database or passed to interfaces without triggering errors. The defect was introduced in IRIS 2024.3.0 and has been resolved with DP-439207.

Affected Workflows: This issue only occurs on Unicode installations running AIX, affecting applications processing data that contain a mix of ASCII and Unicode characters.

Resolution

If you are using InterSystems IRIS 2024.3.0 on AIX instances, then you should upgrade to InterSystems IRIS 2025.1.0 as soon as possible.

Customer Actions Required

- Identify Impacted Systems: Determine if you are running InterSystems IRIS 2024.3.0 on an AIX instance with Unicode databases and have a mix of Unicode and non-Unicode characters.
- Upgrade Path: Upgrade to InterSystems IRIS 2025.1.0 as soon as possible.

DP-439280 - IntegratedML Container TensorFlow Issues

Customers using the following containerized version of IRIS 2024.3 may encounter training errors when leveraging IntegratedML:

- containers.intersystems.com/intersystems/iris-ml:2024.3

Impact Assessment

Customers leveraging IntegratedML on the InterSystems-provided IRIS 2024.3 containers will experience model training failures due to compatibility issues with TensorFlow and related dependencies.

Resolution

Customers wishing to use IntegratedML with IRIS or IRIS for Health in containers are encouraged to create their own containers by following the advice published on the Developer Community.

Customer Actions Required

To continue using IntegratedML with AutoML, customers must manually manage dependencies using the pip package manager as described above. This ensures compatibility and proper functionality of AutoML components like scikit-learn within your IntegratedML Python environment.

For More Information

If you have questions or need assistance, please contact the InterSystems Worldwide Response Center (WRC).
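As an illustration of the DP-439207 pattern, here is a minimal, hypothetical ObjectScript sketch of the kind of call the alert describes - a JSON string value that mixes characters below $CHAR(256) with characters above $CHAR(255). The data is invented and the snippet is for illustration only:

```objectscript
// Hypothetical illustration: ASCII characters followed by characters above $CHAR(255)
set json = "{""name"":""abc"_$char(956,946,969)_"""}"
set obj = {}.%FromJSON(json)
// On an affected AIX 2024.3 instance, the leading "abc" could be silently corrupted to $CHAR(0)
write obj.name, !
```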
Article
Evgeny Shvarov · May 9

Building the UI by Prompting vs InterSystems IRIS Backend: Lovable, Spec First and REST API

Hi developers!

Observing the avalanche of AI-driven and vibe-coding developer tools that have been appearing lately almost every month with more and more exciting dev features, I was puzzled whether it is possible to leverage them with InterSystems IRIS - at least to build a frontend. And the answer is yes! At least with the approach I followed.

Here is my recipe to prompt a UI vs an InterSystems IRIS backend:

1. Have a REST API on the IRIS side which reflects some Open API (swagger) spec.
2. Generate the UI with any vibe-coding tool (e.g., Lovable) and point the UI to the REST API endpoint.
3. Profit!

Here is the result of my own exercise - a 100% prompted UI vs an IRIS REST API that allows you to list, create, update and delete entries of a persistent class (Open Exchange, frontend source, video).

What is the recipe in detail?

How to get the Open API (Swagger) spec vs the IRIS backend? I took the template with the persistent class dc.Person, which contains a few simple fields: Name, Surname, Company, Age, etc. I thought that ChatGPT could generate the swagger spec, but it would be shy to do it vs ObjectScript (perhaps it can now), so I generated DDL vs the class in the IRIS terminal:

```
Do $SYSTEM.SQL.Schema.ExportDDL("dc_Sample","*","/home/irisowner/dev/data/ddl.sql")
```

and fed ChatGPT with the DDL plus the following prompt:

"Please create an Open API spec in JSON version 2.0 vs the following DDL which will allow to get all the entries and individual ones, create, update, delete entries. Also, add _spec endpoint with OperationId GetSpec. Please provide meaningful operation id's for all the endpoints. The DDL:"

```
CREATE TABLE dc_Sample.Person(
    %PUBLICROWID,
    Company VARCHAR(50),
    DOB DATE,
    Name VARCHAR(-1),
    Phone VARCHAR(-1),
    Title VARCHAR(50)
)
GO

CREATE INDEX DOBIndex ON dc_Sample.Person(DOB)
GO
```

And it worked quite well - here is the result. Then I used the %REST command-line tool to generate the REST API backend classes. After that, I implemented the IRIS REST backend logic in ObjectScript for the GET, PUT, POST, and DELETE calls. Mostly manually ;) with some help from Copilot in VSCode. I tested the REST API manually with swagger-ui, and after that everything was ready to build the UI.

The UI was prompted with the Lovable.dev tool, which I fed with the following prompt:

"Please create a modern, convenient UI vs. the following open API spec, which will allow list, create, update, and delete persons { "swagger":"2.0", "info":{ "title":"Person API", "version":"1.0.0" }, .... the whole swagger spec."

After it was built and tested (manually), I asked Lovable to direct it vs the REST API endpoint in IRIS - first locally in Docker, and after testing and fixing a few bugs (via prompts), the result was deployed.

A few caveats and lessons learned:

- REST API security on the IRIS side is not always clear from scratch (mostly because of CORS-related topics), e.g. I needed to add a special cors.cls and modify the swagger spec manually to make CORS work.
- Swagger documentation does not work automatically for docker in IRIS, but it is fixable with a special _spec endpoint and a few ObjectScript code lines.
- The swagger spec for IRIS should be 2.0, not the latest 3.1.

Other than that, this approach is quite an effective way for a backend IRIS developer to build comprehensive full-stack application prototypes in a very short time with zero knowledge of front-end development.

Share what you think. And what is your "vibe" coding development experience with IRIS?

Here is the video walkthrough:
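P.S. Regarding the CORS caveat above: the cors.cls from the repository is not reproduced here, but as a hypothetical minimal starting point, a %CSP.REST dispatch class can switch on its built-in CORS handling via the HandleCorsRequest parameter:

```objectscript
/// Hypothetical sketch only - allow the dispatch class to answer CORS requests for all routes
Class dc.Sample.RESTWithCors Extends %CSP.REST
{

Parameter HandleCorsRequest = 1;

}
```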
Announcement
Larry Finlayson · May 15

Developing with InterSystems Objects and SQL – In Person June 9-13, 2025 / Registration space available

Developing with InterSystems Objects and SQL – In Person June 9-13, 2025 This 5-day course teaches programmers how to use the tools and techniques within the InterSystems® development environment. Students develop a database application using object-oriented design, building different types of IRIS classes. They learn how to store and retrieve data using Objects or SQL, and decide which approach is best for different use cases. They write code using ObjectScript, Python, and SQL, with most exercises offering the choice between ObjectScript and Python, and some exercises requiring a specific language. This course is applicable for users of InterSystems IRIS® data platform and InterSystems Caché® Self-Register Here
Announcement
Larry Finlayson · Jun 23

Using InterSystems Embedded Analytics – Virtual July 14-18, 2025 - Registration space available

Using InterSystems Embedded Analytics – Virtual July 14-18, 2025 Embed analytics capabilities in applications and create the supporting business intelligence cubes. This 5-day course teaches developers and business intelligence users how to embed real-time analytics capabilities in their applications using InterSystems IRIS® Business Intelligence. This course presents the basics of building data models from transactional data using the InterSystems IRIS BI Architect, exploring those models and building pivot tables and charts using the InterSystems IRIS BI Analyzer, as well as creating dashboards for presenting pivot tables, meters, and other interactive widgets. The course also covers securing models, tools and elements (dashboards, pivot tables, etc.) using the InterSystems® security infrastructure. Additionally, the course presents topics such as customizing the User Portal, troubleshooting and deployment. This course is applicable for users of InterSystems IRIS® data platform, InterSystems IRIS or Health™, and InterSystems DeepSee. SELF REGISTER HERE
Announcement
Celeste Canzano · Jul 16

Reminder: Beta Testers Needed for Our Upcoming InterSystems IRIS SQL Professional Certification Exam

Hello Again, InterSystems Certification is still looking for people to beta test the InterSystems IRIS SQL Professional Certification exam. This is a great way to earn the certification for free! We have extended the deadline of the beta test to August 31, 2025. Please note, only candidates with the pre-existing InterSystems IRIS SQL Specialist certification are eligible to take the beta. For details, see the original announcement. Thank you!
Announcement
Brad Nissenbaum · Jul 14

[Demo Video] Care Compass – InterSystems IRIS powered RAG AI assistant for Care Managers

#InterSystems Demo Games entry ⏯️ Care Compass – InterSystems IRIS powered RAG AI assistant for Care Managers Care Compass is a prototype AI assistant that helps caseworkers prioritize clients by analyzing clinical and social data. Using Retrieval Augmented Generation (RAG) and large language models, it generates narrative risk summaries, calculates dynamic risk scores, and recommends next steps. The goal is to reduce preventable ER visits and support early, informed interventions. Presenters:🗣 @Brad.Nissenbaum, Sales Engineer, InterSystems🗣 @Andrew.Wardly, Sales Engineer, InterSystems🗣 @Fan.Ji, Solution Developer, InterSystems🗣 @Lynn.Wu, Sales Engineer, InterSystems 🔗 Related resources: Care Compass application Article 👉 Like this demo? Support the team by voting for it in the Demo Games!
Announcement
Stephan Mohr · Jul 17

[Demo Video] The Ultimate 3D Industrial Simulation powered by a Game Engine with InterSystems IRIS

#InterSystems Demo Games entry ⏯️ The Ultimate 3D Industrial Simulation powered by a Game Engine with InterSystems IRIS In this demo, InterSystems IRIS Interoperability comes alive in an amazing, game-like user experience based on our Ultimate Control Tower demo. We visualize machines in a virtual 3D factory building, interacting with InterSystems IRIS in real time—displaying current statuses and sensor data, simulating machine outages and predictive maintenance scenarios, and triggering workflow tasks and actions in InterSystems IRIS. By using a mobile app on a tablet—and even a VR headset for a fully immersive experience—we unleash the power of InterSystems IRIS. Presenters:🗣 @Stephan.Mohr, Sales Engineer, InterSystems🗣 @Jannis.Stegmann, Sales Engineer, InterSystems🗣 @Benjamin.Kiwitz, Intern, InterSystems🗣 @Tuba.Incedag, Intern, InterSystems 👉 Like this demo? Support the team by voting for it in the Demo Games!
Article
Eduard Lebedyuk · Jul 13, 2022

Continuous Delivery of your InterSystems solution using GitLab - Part X: Beyond the code

After almost four years on hiatus, [my CI/CD series](https://community.intersystems.com/post/continuous-delivery-your-intersystems-solution-using-gitlab-index) is back! Over the years, I have worked with several InterSystems clients, developing CI/CD pipelines for different use cases. I hope the information presented in this article will be helpful to someone.

This [series of articles](https://community.intersystems.com/post/continuous-delivery-your-intersystems-solution-using-gitlab-index) discusses several possible approaches toward software development with InterSystems technologies and GitLab. We have an exciting range of topics to cover: today, let's talk about things beyond the code - namely configurations and data.

# Issue

Previously we discussed code promotions, and that was, in a way, stateless - we always go from a (presumably) empty instance to a complete codebase. But sometimes, we need to provide data or state. There are different data types:

- Configuration: users, web apps, LUTs, custom schemas, tasks, business partners, and many more
- Settings: environment-specific key-value pairs
- Data: reference tables and such often must be provided for your app to work

Let's discuss all these data types and how they can be first committed into source control and later deployed.

# Configuration

System configuration is spread across many different classes, but InterSystems IRIS can export most of them into XMLs. First of all, there is a [Security package](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&PACKAGE=Security) containing information about:

- Web Applications
- DocDBs
- Domains
- Audit Events
- KMIPServers
- LDAP Configs
- Resources
- Roles
- SQL Privileges
- SSL Configs
- Services
- Users

All these classes provide Exists, Export, and Import methods, allowing you to move them between environments. A few caveats:

- Users and SSL Configurations might contain sensitive information, such as passwords. It is generally NOT recommended to store them in source control for security reasons. Use Export/Import methods to facilitate one-off transfers.
- By default, Export/Import methods output everything in one file, which might not be source control friendly.

Here's a [utility class](https://gist.github.com/eduard93/3a9abdb2eb150a456191bf387c1fc0c3) that can export and import Lookup Tables, Custom Schemas, Business Partners, Tasks, Credentials, and SSL Configuration. It exports one item per file, so you get a directory with LUT, another directory with Custom Schemas, and so on. For SSL Configurations, it also exports files: certificates and keys.

Also worth noting that instead of export/import, you can use [%Installer](https://community.intersystems.com/post/deploying-applications-intersystems-cache-installer) or [Merge CPF](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ACMF) to create most of these. Both tools also support the creation of namespaces and databases. Merge CPF can adjust system settings, such as global buffer size.

## Tasks

The [%SYS.Task](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SYS.Task) class stores tasks and provides `ExportTasks` and `ImportTasks` methods. You can also check the utility class above to import and export tasks one by one.

Note that when you import tasks, you can get import errors (`ERROR #7432: Start Date and Time must be after the current date and time`) if `StartDate` or other schedule-related properties are in the past. As a solution, set `LastSchedule` to `0`, and InterSystems IRIS would reschedule a newly imported task to run in the nearest future.
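A rough sketch of that workflow is below. `ExportTasks` and `ImportTasks` are the methods named above; their exact signatures and the `LastSchedule` reset are assumptions for illustration, so adjust to your environment:

```objectscript
// Run in the %SYS namespace. Export tasks on the source instance...
set sc = ##class(%SYS.Task).ExportTasks("/data/tasks.xml")

// ...and import them on the target instance
set sc = ##class(%SYS.Task).ImportTasks("/data/tasks.xml")

// If an imported task fails with ERROR #7432, clearing LastSchedule lets IRIS
// reschedule it to the nearest future run (task id 1234 is a placeholder)
set task = ##class(%SYS.Task).%OpenId(1234)
if $isobject(task) {
    set task.LastSchedule = 0
    do task.%Save()
}
```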
## Interoperability

Interoperability productions contain:

- Business Partners
- System Default Settings
- Credentials
- Lookup Tables

The first two are available in the [Ens.Config](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&PACKAGE=Ens.Config) package with `%Export` and `%Import` methods. Export Credentials and Lookup Tables using the [utility class](https://gist.github.com/eduard93/3a9abdb2eb150a456191bf387c1fc0c3) above. In recent versions, Lookup Tables can be exported/imported via the [$system.OBJ](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SYSTEM.OBJ) class.

# Settings

[System Default Settings](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other_default_settings#ECONFIG_other_default_settings_purpose) is the default interoperability mechanism for environment-specific settings:

> The purpose of system default settings is to simplify the process of copying a production definition from one environment to another. In any production, the values of some settings are determined as part of the production design; these settings should usually be the same in all environments. Other settings, however, must be adjusted to the environment; these settings include file paths, port numbers, and so on.
>
> System default settings should specify only the values that are specific to the environment where InterSystems IRIS is installed. In contrast, the production definition should specify the values for settings that should be the same in all environments.

I highly recommend making use of them in production environments. Use [%Export](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Config.DefaultSettings#%25Export) and [%Import](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Config.DefaultSettings#%25Import) to transfer system default settings.

## Application Settings

Your application probably also uses settings. In that case, I recommend using System Default Settings. While it's an interoperability mechanism, settings can be accessed via: `%GetSetting(pProductionName, pItemName, pHostClassName, pTargetType, pSettingName, Output pValue)` ([docs](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Config.DefaultSettings#%25GetSetting)). You can write a wrapper which would set the defaults you don't care about, for example:

```objectscript
ClassMethod GetSetting(name, Output value) As %Boolean [Codemode=expression]
{
##class(Ens.Config.DefaultSettings).%GetSetting("myAppName", "default", "default", , name, .value)
}
```

If you want more categories, you can also expose `pItemName` and/or `pHostClassName` arguments. Settings can be initially set by importing, using the System Management Portal, creating objects of the `Ens.Config.DefaultSettings` class, or setting the `^Ens.Config.DefaultSettingsD` global.

My main advice here would be to keep settings in one place (it can be either System Default Settings or a custom solution), and the application must get the settings using only a provided API. This way the application itself does not know about the environment, and what's left is supplying centralized setting storage with environment-specific values.

To do that, create a settings folder in your repository containing settings files, with file names the same as the environment branch names. Then during the CI/CD phase, use the `$CI_COMMIT_BRANCH` [environment variable](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html) to load the correct file.

```
DEV.xml
TEST.xml
PROD.xml
```

If you have several settings files per environment, use folders named after environment branches. To get an environment variable value from inside InterSystems IRIS, [use](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SYSTEM.Util#GetEnviron) `$System.Util.GetEnviron("name")`.
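Putting those pieces together, here is a minimal sketch of the loading step. It assumes the per-branch files were produced by `Ens.Config.DefaultSettings:%Export`, that `CI_COMMIT_BRANCH` is passed through to the instance, and that the path and the method name `LoadEnvironmentSettings` are purely illustrative:

```objectscript
/// Hypothetical deployment hook: import the settings file matching the current branch/environment
ClassMethod LoadEnvironmentSettings() As %Status
{
    set branch = $System.Util.GetEnviron("CI_COMMIT_BRANCH")
    quit:branch="" $$$ERROR($$$GeneralError, "CI_COMMIT_BRANCH is not set")
    // e.g. /settings/DEV.xml, /settings/TEST.xml, /settings/PROD.xml
    set file = "/settings/" _ branch _ ".xml"
    quit ##class(Ens.Config.DefaultSettings).%Import(file)
}
```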
# Data

If you want to make some data (reference tables, catalogs, etc.) available, you have several ways of doing it:

- Global export. Use either a binary [GOF export](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GGBL_managing#GGBL_managing_export) or a new XML export. With GOF export, remember that locales on source and target systems must match (or at least global collation must be available on the target system). [XML export](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SYSTEM.OBJ) takes more space. You can improve it by exporting the global into an `xml.gz` file; `$system.OBJ` methods automatically (un)archive `xml.gz` files as required. The main disadvantage of this approach is that the data is not human-readable, even in XML - most of it is base64 encoded.
- CSV. [Export CSV](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SQL.StatementResult#%25DisplayFormatted) and import it with [LOAD DATA](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_loaddata). I prefer CSV as it's the most storage-efficient human-readable format, which anything can import.
- JSON. Make the class [JSON Enabled](https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_adaptor).
- XML. Make the class [XML Enabled](https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?KEY=GXMLPROJ_intro) to project objects into XML. Use it if your data has a complex structure.

Which format to choose depends on your use case. Here I listed the formats in the order of storage efficiency, but that's not a concern if you don't have a lot of data.
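For the global-export option, a minimal sketch (the global name and paths are invented for illustration; `$system.OBJ` handles the `xml.gz` compression mentioned above):

```objectscript
// Sketch: export a reference global to compressed XML on the source system...
do $system.OBJ.Export("MyRefTable.GBL", "/data/MyRefTable.xml.gz")

// ...and load it on the target system
do $system.OBJ.Load("/data/MyRefTable.xml.gz")
```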
# Conclusions

State adds additional complexity to your CI/CD deployment pipelines, but InterSystems IRIS provides a vast array of tools to manage it.

# Links

- [Utility Class](https://gist.github.com/eduard93/3a9abdb2eb150a456191bf387c1fc0c3)
- [%Installer](https://community.intersystems.com/post/deploying-applications-intersystems-cache-installer)
- [Merge CPF](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ACMF)
- [$System.OBJ](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25SYSTEM.OBJ)
- [System Default Settings](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Config.DefaultSettings#%25GetSetting)

This is a great resource, nice work and a top chapter in this series for sure. There seems to be different ways to approach declared IRIS state by codifying things: you can codify the exported objects and import them, or, like you mentioned, use the installer method that builds things as code, which I have had pretty good success with in the past, like *Tasks* below.

```
1 pVars,pLogLevel,tInstaller %Status
```

Excellent article! Thank you for taking the time to write this up :) A couple of comments:

1) I really like the idea of using Default Settings for application specific configuration ... that ties it in with existing import/export APIs and keeps things stored together nicely ... well done :)

2) The challenging thing with respect to reference / code tables is that directly exporting those will also export the local RowIDs for that data element, which can vary from environment to environment. InterSystems IRIS provides a new way to handle this using the XML Exchange functionality built into the product. Basically, when a persistent class extends %XML.Exchange.Adaptor it will ensure that GUIDs automatically get assigned to each data element, and that referenced objects are referenced in the exported XML by GUID rather than ID, which means that at import time it can ensure referential integrity by looking for the intended GUIDs in the imported object relationships. TrakCare uses this to expose its 1000+ code tables for source control and versioning and we use it in AppServices as well. Check it out: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25XML.Exchange.Adaptor

Thanks again for this very comprehensive article about an important part of environment management :)

Thanks, Ben! %XML.Exchange.Adaptor sounds great. And you're right, I'm mainly talking about the easiest scenario where there is no id/data drift. There are certainly trickier situations, where unique identifiers are required.