Article
Robbie Luman · Feb 28, 2020

Using a Static C Library with the C Callout Gateway in InterSystems IRIS

Our company is in the process of converting our software for use in InterSystems IRIS, and one of the major sections of the software makes use of a custom statically-linked C library through the $ZF("function-name") functionality. During this work, I found out that the process for setting up the C library for use within the database platform has changed significantly between Caché and IRIS.

If you have used or still use the C Callout Gateway with Caché, you will know that in order for Caché to be made aware of any functions provided in a custom C library used with $ZF("function-name"), the library must be linked directly into the Caché kernel binary (see the Caché 2012.1 documentation for this process: https://cedocs.intersystems.com/ens20121/csp/docbook/DocBook.UI.Page.cls?KEY=GCIO_callout#GCIO_callout_extapps). If any changes had to be made to the custom C library, the changes had to be re-linked into the Caché binary and the Caché instance bounced, which meant downtime for our applications.

Since this material was removed from the documentation as of Caché 2012.2, it does not appear in the IRIS documentation at the moment (though I have been told that the new process may eventually be added back into the IRIS documentation). So here I will walk through how to implement a custom C function using $ZF("function-name"). This is a separate process from using $ZF(-3,"function-name"), which uses dynamically-linked libraries, but as it turns out, the $ZF("function-name") variety in IRIS now works much more like it than it did in Caché. The example below uses a custom library called "simplecallout" to demonstrate how this process is carried out within IRIS. This example was performed in a Linux environment.

Note: Thank you to InterSystems Support for the assistance in identifying the correct procedure to enable this functionality in IRIS.

1. Create and save the source program "simplecallout.c". Example source looks like this:

    //Include the IRIS callout header file
    #define ZF_DLL
    #include "iris-cdzf.h"

    //Public function to add two integers
    int AddTwoIntegers(int a, int b, int *outsum) {
        *outsum = a+b;   /* set value to be returned by the $ZF function call */
        return 0;        /* set the exit status code */
    }

    //IRIS directives to map the AddTwoIntegers function into a $ZF function
    ZFBEGIN
    ZFENTRY("AddInt","iiP",AddTwoIntegers)
    ZFEND

2. Compile the source to create the object (.o) file, including the include/ directory from the instance install directory (where the iris-cdzf.h file is located):

    # gcc -c -fPIC simplecallout.c -I /intersystems/IRIS/dev/iris-callin/include/ -o simplecallout.o

3. Generate a shared object (.so) file from the object file:

    # gcc simplecallout.o -shared -o simplecallout.so

4. Copy and rename the shared object file into the <instance_install_dir>/bin directory as "iriszf.so":

    # cp simplecallout.so /intersystems/IRIS/bin/iriszf.so

5. From the IRIS programmer prompt, the function can now be used:

    USER>write $zf("AddInt",1,4)
    5

If changes need to be made to the library, a new version of the shared object file is built and put in place of the existing iriszf.so file in the bin/ directory (a small automation sketch follows below). These changes are immediately available within IRIS without the need to restart the IRIS instance. This new process also means that the linking step no longer has to be repeated each time a new instance is installed; the shared object can simply be copied into the instance installation without any further action.
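To make the update cycle repeatable, the compile-and-copy steps above can be wrapped in a small script. This is only a minimal sketch of the commands already shown in this walkthrough; the /intersystems/IRIS install path is an assumption, so point IRIS_HOME at your actual instance directory.

    #!/bin/sh
    # Rebuild the callout library and drop it into the IRIS bin directory.
    # Assumption: IRIS is installed in /intersystems/IRIS (adjust for your instance).
    set -e
    IRIS_HOME=/intersystems/IRIS

    # Compile the source against the callout headers shipped with the instance
    gcc -c -fPIC simplecallout.c -I "$IRIS_HOME/dev/iris-callin/include/" -o simplecallout.o

    # Link the object file into a shared object
    gcc simplecallout.o -shared -o simplecallout.so

    # IRIS picks the library up from <install-dir>/bin/iriszf.so; no restart is needed
    cp simplecallout.so "$IRIS_HOME/bin/iriszf.so"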
As this is my first post on the Developer Community, I hope that folks out there find this as useful and informative as I did in the discovery process of this new procedure.
Question
Ben Spead · Dec 6, 2022

How does InterSystems IRIS determine its hostname for $System / $System.INetInfo.LocalHostName()?

We have a UNIX VM with an InterSystems IRIS instance which we cloned for testing purposes, and we have found that $System (which is used for self-identification in email notifications) is still showing the hostname of the original VM, rather than the hostname of the cloned VM. This is coming from $System.INetInfo.LocalHostName(). Does anyone know what you need to change on a UNIX clone in order for it to display the appropriate new host name in $System?

As we looked into things further on our side, it appears as though the value comes from the UNIX 'hostname'. We were able to resolve the issue in UNIX by doing the following to change the hostname to 'mynewhost':

    log in as root
    hostname mynewhost
    echo mynewhost > /etc/hostname

The 2nd line changes the hostname for current processes, and the 3rd makes sure the change persists across a reboot. Hope this is helpful to someone else.
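On newer systemd-based Linux distributions the same change can be made in one step with hostnamectl instead of the two commands above. This is only a sketch of an alternative, assuming a systemd system and the example hostname 'mynewhost'; it was not part of the original resolution.

    # As root on a systemd-based distribution (assumption): set the hostname persistently
    hostnamectl set-hostname mynewhost

    # Confirm the OS-level value that $System / $System.INetInfo.LocalHostName() reflects
    hostname
    cat /etc/hostname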
Announcement
Anastasia Dyubaylo · Mar 11, 2020

New Coding Talk: How to Create and Submit an Application for InterSystems IRIS Online Programming Contest 2020

Hey Developers, New "Coding Talk" video was specially recorded by @Evgeny.Shvarov for the IRIS Programming Contest: ⏯ How to Create and Submit an Application for InterSystems IRIS Online Programming Contest 2020 In this video, you will know how to create your InterSystems IRIS ObjectScript application which runs with Docker. You will see how to submit it to Open Exchange and then how to participate with the application in InterSystems IRIS Online Programming Contest. Check out the related InterSystems ObjectScript GitHub template project. Enjoy watching the video! 🔥 Join the Online Programming Contest, show your best ObjectScript skills on InterSystems IRIS and earn some $ and glory! 🔥 Stay tuned for updates in this post. Hello, For the contest we use Iris Community Edition. Does it support Ensemble? I got compile error because Ens.BusinessOperation base class did not exist. Thanks, Oliver Any IRIS edition includes formerly known as Ensemble feature, which is now is Interoperability (and one of the letters in IRIS). You may get this error if you work in the namespace where Ensemble was activated. By default for USER was not activated. When you create a new namespace Ensemble checkbox checked by default and, but you can uncheck it. to enable namespace manually, do this command do ##class(%EnsembleMgr).EnableNamespace("YOUR-NAMESPACE") Hi Oliver! In addition what @Dmitry.Maslennikov said you can alter this in %Installer.cls. Just turn "no" -to "1" in this line <Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="1"> Evgeny, it still does not work. I had tried to use "yes" instead of "no" for "Ensemble" flag. Now I also tried "1". It still gives me error that superclasses do not exist. Can you please look at my terminal output? 
I show Installer.cls at the bottom below: [node1] (local) root@192.168.0.23 ~$ git clone https://github.com/oliverwilms/iriscontest2020Cloning into 'iriscontest2020'...remote: Enumerating objects: 64, done.remote: Counting objects: 100% (64/64), done.remote: Compressing objects: 100% (56/56), done.remote: Total 64 (delta 21), reused 19 (delta 4), pack-reused 0Unpacking objects: 100% (64/64), done.[node1] (local) root@192.168.0.23 ~$ cd iriscontest2020/[node1] (local) root@192.168.0.23 ~/iriscontest2020$ docker-compose buildBuilding irisStep 1/16 : ARG IMAGE=store/intersystems/irishealth:2019.3.0.308.0-communityStep 2/16 : ARG IMAGE=store/intersystems/iris-community:2019.3.0.309.0Step 3/16 : ARG IMAGE=store/intersystems/iris-community:2019.4.0.379.0Step 4/16 : ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0Step 5/16 : ARG IMAGE=intersystemsdc/iris-community:2019.4.0.383.0-zpmStep 6/16 : FROM $IMAGE2019.4.0.383.0-zpm: Pulling from intersystemsdc/iris-community898c46f3b1a1: Pull complete63366dfa0a50: Pull complete041d4cd74a92: Pull complete6e1bee0f8701: Pull complete973e47831f38: Pull completeb0c3b996c3e3: Pull completeb48eef952cda: Pull complete8254746f78e2: Pull completeec1f0f74baf0: Pull completefdc6015ec77d: Pull completeb72c9a7f8270: Pull completec108d032e6d0: Pull completecaf30f8515db: Pull complete02b9549ccbc9: Pull completeDigest: sha256:fc52a2359da312a5c39010a24d15e84a09ed6f3f2828e54021ebed54a3cb4c1aStatus: Downloaded newer image for intersystemsdc/iris-community:2019.4.0.383.0-zpm ---> 2f9eb08f28e3Step 7/16 : USER root ---> Running in 78efd07fbcb0Removing intermediate container 78efd07fbcb0 ---> c8f8dddbd96eStep 8/16 : WORKDIR /opt/irisapp ---> Running in a436b17bdd9cRemoving intermediate container a436b17bdd9c ---> da8cdcae02a1Step 9/16 : RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp ---> Running in 1698d44c50ebRemoving intermediate container 1698d44c50eb ---> c9c75c4960cdStep 10/16 : USER irisowner ---> Running in dd49a263a773Removing intermediate container dd49a263a773 ---> abef13ffc4deStep 11/16 : COPY Installer.cls . 
---> b01d3b536e89Step 12/16 : COPY src src ---> 6ed1aa33393eStep 13/16 : COPY irissession.sh / ---> 2b8436f61475Step 14/16 : SHELL ["/irissession.sh"] ---> Running in 9eca533f3ef5Removing intermediate container 9eca533f3ef5 ---> 63ee158e5412Step 15/16 : RUN do $SYSTEM.OBJ.Load("Installer.cls", "ck") set sc = ##class(App.Installer).setup() ---> Running in afd2046fb280This copy of InterSystems IRIS has been licensed for use exclusively by:InterSystems IRIS CommunityCopyright (c) 1986-2019 by InterSystems CorporationAny other use is a violation of your license agreementStarting IRIS Node: afd2046fb280, Instance: IRIS %SYS> %SYS> Load started on 03/24/2020 02:21:26Loading file Installer.cls as udlCompiling class App.InstallerCompiling routine App.Installer.1Load finished successfully.2020-03-24 02:21:26 0 App.Installer: Installation starting at 2020-03-24 02:21:26, LogLevel=32020-03-24 02:21:26 3 Evaluate: #{$system.Process.CurrentDirectory()}src -> /opt/irisapp/src2020-03-24 02:21:26 3 SetVariable: SourceDir=/opt/irisapp/src2020-03-24 02:21:26 3 SetVariable: Namespace=IRISAPP2020-03-24 02:21:26 3 SetVariable: app=irisapp2020-03-24 02:21:26 3 Evaluate: ${Namespace} -> IRISAPP2020-03-24 02:21:26 3 Evaluate: ${Namespace} -> IRISAPP2020-03-24 02:21:26 3 Evaluate: ${Namespace} -> IRISAPP2020-03-24 02:21:26 3 Evaluate: ${Namespace} -> IRISAPP2020-03-24 02:21:26 3 Evaluate: /opt/${app}/data -> /opt/irisapp/data2020-03-24 02:21:26 3 Evaluate: %DB_${Namespace} -> %DB_IRISAPP2020-03-24 02:21:26 1 CreateDatabase: Creating database IRISAPP in /opt/irisapp/data/ with resource %DB_IRISAPP2020-03-24 02:21:26 2 CreateDatabase: Overwriting /opt/irisapp/data/IRIS.DAT2020-03-24 02:21:26 2 CreateDatabase: Adding database IRISAPP2020-03-24 02:21:26 2 CreateDatabase: Creating and assigning resource '%DB_IRISAPP' to IRISAPP2020-03-24 02:21:26 1 CreateNamespace: Creating namespace IRISAPP using IRISAPP/IRISAPP2020-03-24 02:21:26 2 CreateNamespace: Adding namespace IRISAPP2020-03-24 02:21:26 1 ActivateConfiguration: Activating Configuration2020-03-24 02:21:26 3 Evaluate: ${SourceDir} -> /opt/irisapp/src2020-03-24 02:21:26 1 Import: Loading /opt/irisapp/src (isdir=1) into IRISAPP, recurse=1 Load of directory started on 03/24/2020 02:21:26 Loading file /opt/irisapp/src/DMLSS/FilePassthroughService.cls as udlLoading file /opt/irisapp/src/DMLSS/EmailPassthroughOperation.cls as udlLoading file /opt/irisapp/src/DMLSS/Production.cls as udlLoading file /opt/irisapp/src/PackageSample/ObjectScript.cls as udlLoading file /opt/irisapp/src/DMLSS/Util.cls as udlLoading file /opt/irisapp/src/PackageSample/PersistentClass.cls as udl Compilation started on 03/24/2020 02:21:26 with qualifiers 'ck'ERROR #5373: Class 'Ens.BusinessOperation', used by 'DMLSS.EmailPassthroughOperation:superclass', does not existSkip class DMLSS.EmailPassthroughOperationERROR #5373: Class 'Ens.BusinessService', used by 'DMLSS.FilePassthroughService:superclass', does not existSkip class DMLSS.FilePassthroughServiceERROR #5373: Class 'Ens.Production', used by 'DMLSS.Production:superclass', does not existSkip class DMLSS.ProductionCompiling 3 classes, using 3 worker jobsCompiling class DMLSS.UtilCompiling class PackageSample.ObjectScriptCompiling class PackageSample.PersistentClassCompiling table PackageSample.PersistentClassCompiling routine PackageSample.ObjectScript.1Compiling routine DMLSS.Util.1Compiling routine PackageSample.PersistentClass.1Detected 3 errors during compilation in 0.112s. 
ERROR #5373: Class 'Ens.BusinessOperation', used by 'DMLSS.EmailPassthroughOperation:superclass', does not exist
Detected 3 errors during load.
2020-03-24 02:21:26 0 App.Installer: ERROR #5373: Class 'Ens.BusinessOperation', used by 'DMLSS.EmailPassthroughOperation:superclass', does not exist
2020-03-24 02:21:26 0 App.Installer: ERROR #ConfigFailed: Unknown status code: <Ins>ConfigFailed ) > ERROR #5373: Class 'Ens.BusinessOperation', used by 'DMLSS.EmailPassthroughOperation:superclass', does not exist
2020-03-24 02:21:26 0 App.Installer: Installation failed at 2020-03-24 02:21:26
2020-03-24 02:21:26 0 %Installer: Elapsed time .423668s
%SYS>
ERROR: Service 'iris' failed to build: The command '/irissession.sh do $SYSTEM.OBJ.Load("Installer.cls", "ck") set sc = ##class(App.Installer).setup()' returned a non-zero code: 1

[node1] (local) root@192.168.0.23 ~/iriscontest2020$ cat Installer.cls

Class App.Installer
{

XData setup
{
<Manifest>
  <Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
  <Default Name="Namespace" Value="IRISAPP"/>
  <Default Name="app" Value="irisapp" />
  <Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="1">
    <Configuration>
      <Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
      <Import File="${SourceDir}" Flags="ck" Recurse="1"/>
    </Configuration>
    <CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32" />
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  #; Let XGL document generate code for this method.
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}

Hi Oliver! It was an issue in %Installer - the Import statement <Import File="${SourceDir}" Flags="ck" Recurse="1"/> was inside the Configuration tag, so the import started before Interoperability (Ensemble) was enabled in the namespace. I moved it below the Configuration tag and it works. Thanks @Dmitry.Maslennikov for your help. I sent you a pull request with the fix. And thanks for this! I've updated the templates (Docker template, Contest template) accordingly. Thank you!
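If you run into the same ERROR #5373 in a namespace that already exists (outside of %Installer), Interoperability can also be enabled afterwards from the host shell. A minimal sketch, assuming a running container named iris-ce and a target namespace IRISAPP; both names are illustrative.

    # Assumption: container "iris-ce" is running and the target namespace is IRISAPP
    docker exec -i iris-ce iris session IRIS -U %SYS \
      '##class(%EnsembleMgr).EnableNamespace("IRISAPP")'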
Announcement
Anastasia Dyubaylo · Apr 21, 2020

New Coding Talk: How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS

Hi Developers, Please welcome another "Coding Talk" video specially recorded for the second IRIS Programming Contest: ⏯ How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS. In this video, @Evgeny Shvarov shows how to build a package with ObjectScript classes and a REST API application from scratch, how to test it, and how to publish it on a test registry and in the InterSystems Open Exchange registry. ➡️ Download ZPM from Open Exchange And... You're very welcome to join the second IRIS Programming Contest! Show your best coding skills on InterSystems IRIS with REST API! Stay tuned! 👍🏼
Article
Sylvain Guilbaud · Apr 19, 2022

How to manage an InterSystems API Manager (IAM, aka Kong Gateway) configuration in a CI/CD pipeline?

Kong provides an open source configuration management tool (written in Go) called decK, which stands for declarative Kong.

Check that decK recognizes your Kong Gateway installation via deck ping:

    deck ping
    Successfully connected to Kong!
    Kong version: 2.3.3.2-enterprise-edition

Export the Kong Gateway configuration to a file named "kong.yaml" via deck dump:

    deck dump

After modifying kong.yaml, show the differences via deck diff:

    deck diff
    updating service alerts  {
       "connect_timeout": 60000,
    -  "host": "172.24.156.176",
    +  "host": "192.10.10.18",
       "id": "3bdd7db4-0b75-4148-93b3-2ff11e961f64",
       "name": "alerts",
       "path": "/alerts",
       "port": 50200,
       "protocol": "http",
       "read_timeout": 60000,
       "retries": 5,
       "write_timeout": 60000
     }
    Summary:
      Created: 0
      Updated: 1
      Deleted: 0

Apply the changes via deck sync:

    deck sync
    updating service alerts  {
       "connect_timeout": 60000,
    -  "host": "172.24.156.176",
    +  "host": "192.10.10.18",
       "id": "3bdd7db4-0b75-4148-93b3-2ff11e961f64",
       "name": "alerts",
       "path": "/alerts",
       "port": 50200,
       "protocol": "http",
       "read_timeout": 60000,
       "retries": 5,
       "write_timeout": 60000
     }
    Summary:
      Created: 0
      Updated: 1
      Deleted: 0

decK can also target individual workspaces:

    deck sync -s workspace1.yaml --workspace workspace1
    deck sync -s workspace2.yaml --workspace workspace2

For more information:
https://docs.konghq.com/deck/1.11.x/guides/getting-started/
https://docs.konghq.com/deck/1.11.x/guides/best-practices/
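Since the goal is to run this from a CI/CD pipeline, here is a minimal sketch of how the decK commands above could be chained in a pipeline job's shell step. The kong.yaml file name comes from this article; the KONG_ADDR variable and the default admin port are assumptions to adapt to your pipeline and your Kong Gateway instance.

    #!/bin/sh
    # Hedged sketch of a CI job step: validate and apply the declarative Kong config.
    # Assumptions: kong.yaml is versioned in the repository and KONG_ADDR points
    # to the IAM/Kong Admin API reachable from the CI runner.
    set -e

    KONG_ADDR=${KONG_ADDR:-http://localhost:8001}

    # Fail fast if the gateway is unreachable
    deck ping --kong-addr "$KONG_ADDR"

    # Show what would change (useful as a review step on merge requests)
    deck diff -s kong.yaml --kong-addr "$KONG_ADDR"

    # Apply the declarative configuration (typically restricted to the main branch)
    deck sync -s kong.yaml --kong-addr "$KONG_ADDR"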
Announcement
Andreas Dieckow · Dec 21, 2022

InterSystems IRIS kits: Discontinue Apache web server installations (aka Private Web Server (PWS))

If you would like to try the new installation process for the NoPWS project, you can get access to the Early Access Program (EAP) here. (https://evaluation.intersystems.com/Eval/) Once you have registered, please send the email address you used to register for the EAP to InterSystems at nopws@intersystems.com. Look here for background: Original Post @Andreas.Dieckow - thank you for the clarification!
Announcement
Anastasia Dyubaylo · Oct 3, 2019

[October 15, 2019] Upcoming Webinar in Spanish: Developing and managing APIs with InterSystems IRIS Data Platform

Hi Community! We are pleased to invite you to the upcoming webinar in Spanish "Desarrollar y gestionar APIs con InterSystems IRIS Data Platform" / "Developing and managing APIs with InterSystems IRIS Data Platform" on October 15 at 16:00 CET! Are you a backend developer? Or a systems integration specialist? If so… this webinar is for you! What you will learn: We'll develop a REST API from OpenAPI (Swagger) specifications, and we'll also see how to expose the objects stored in our database in JSON format. We'll show how to manage our API, covering token management for API consumers, activity monitoring, how to establish controls to restrict API use by traffic, and how to release new versions. We'll publish the examples on GitHub, and we'll use Docker images and Visual Studio Code. We are waiting for you! REGISTER FOR FREE HERE!
Announcement
Anastasia Dyubaylo · Sep 11, 2019

New Coding Talk: GitHub Repository Template To Develop and Debug ObjectScript in InterSystems IRIS

Hi Everyone, A new Coding Talk, recorded by @Evgeny.Shvarov, is already on InterSystems Developers YouTube: GitHub Repository Template To Develop and Debug ObjectScript in InterSystems IRIS. In this video you will learn how to start developing ObjectScript in VSCode with InterSystems IRIS in a Docker container, using the minimum required set of files in a GitHub repository. Please check the additional links: GitHub Template, ObjectScript Docker Template App on Open Exchange. Feel free to ask your questions in the comments to this post. Enjoy watching the video!

Thanks, Anastasia! The video is 20 min long, but it describes how to start coding, compiling and debugging ObjectScript on InterSystems IRIS in less than a minute! All you need is Git, VSCode and Docker installed.

Excellent! Can we use VSCode with InterSystems Caché? Meaning classes, CSPs and Mac routines?

Hi Bernard! Pinging @Dmitry.Maslennikov ​​​​and @John.Murray for the answer because both develop VSCode plugins for ObjectScript.

My VSCode-ObjectScript extension supports the same versions as Atelier, so Caché/Ensemble versions from 2016.1. You can not only edit mac routines and classes, but also compile and use many other features, and it also supports IntelliSense. CSP support is very basic - only syntax highlighting as HTML; since CSP content is already stored as files, you can just use the same folder in Caché and in VSCode, and everything should be OK. You can find details about the latest features here, and use the arrow buttons to navigate to the previous release notes. We are now also offering enterprise support, so you will get issues resolved faster and may get some priority for new features.

Hello Evgeny, Slightly off-topic, but it is interesting that you are using VSCode. I have asked at various times, including at the last symposium, what the plans are for development tools given that both Studio and Atelier have essentially been end of life for quite a while now, well over a year, but the response is always the same: there is a plan but we cannot announce it yet. Is VSCode the way InterSystems is going? Does this mean we are now reliant on 3rd party development tools? Are there plans to create some way to replicate functionality such as the SOAP wizard in VSCode? I was hoping there might be some details out of the summit but I haven't seen anything yet. Regards, David

Hi David! InterSystems doesn't develop VSCode plugins. In this video, I'm enjoying the VSCode ObjectScript plugin by @Dmitry.Maslennikov from CaretDev as a Developers Community effort. My job is to support all the best efforts of the InterSystems Developers Community, and I think this plugin is suitable for some ways of developing and debugging InterSystems solutions with ObjectScript. There is also another pretty mature VSCode plugin, Serenji, introduced by @John.Murray from George James Software, which has a lot of advanced features to develop and debug InterSystems solutions with ObjectScript and REST API development. And AFAIK there is also yet another VSCode plugin coming from @Rubens.Silva9155. As for Atelier and Studio, there was an announcement by @Andreas.Dieckow a year ago on the terms of InterSystems support for Atelier and Studio.

The Serenji extension from George James Software works all the way back to Caché 2008.1 (that's not a typo).

Hello Evgeny, Thanks for the response; of course I am aware of the various plugins, hence my question about relying on 3rd party tools.
That old post does not really answer the question of what InterSystems' future plans for development tools are; it just confirms effective end of life for the current ones. Also, only fixing critical issues means reported problems will generally not be fixed anymore, as the tools are pretty stable - something I have already experienced. I don't have a particular problem with moving to VSCode supported by 3rd parties, but some confirmation of this from InterSystems, so developers can plan to move in that direction, would be appreciated; no one seems to want to commit to any answers. It also raises the question of how InterSystems will provide the ability for these 3rd party tools to replicate and extend the functionality already available - again, there are no answers. Regards, David

Good questions, David. So I'm tagging @Jeff.Fried and @Raj.Singh to help with the answers.

If you feel comfortable moving to 3rd party tools, you can use my VSCode-ObjectScript extension; by now you already get more than Studio offers. With my company CaretDev, we also offer commercial support, so you can be sure that you will get help if you face any trouble.

Hello Dmitry, I have used the extension and am very impressed, but obviously any change in toolset has to be agreed across the business and will require changes to the development and version control process, hence a reluctance until we know what InterSystems have planned. Apologies if I have missed a feature post, but does the extension replicate the Studio Add-Ins such as the SOAP Wizard, as these are very useful to us. Regards, David

Studio's Add-Ins have not been added yet. Please file an issue so I will know that this feature is useful. You can also file issues for anything you find and anything you would like to see in the extension.

I know Jeff and Raj are busy people but there never was a response. InterSystems simply needs to commit to an answer; at this point the actual answer isn't really important anymore, just getting an answer so commitments to any change can be made.
Article
Evgeny Shvarov · Apr 12, 2023

Launching InterSystems IRIS Community Edition Docker Image with User, Password and Namespace via Environment Variables

Hi Developers! A recent update has arrived for the Developer Community images of InterSystems IRIS and IRIS For Health. This release comes with environment variable support. Currently 3 variables are supported:

    IRIS_USERNAME=user to create
    IRIS_PASSWORD=with password
    IRIS_NAMESPACE=create namespace if it doesn't exist

Here is what you can do - see below.

Start IRIS with your username and password created:

    docker run --rm --name iris-sql -d -p 9091:1972 -p 9092:52773 -e IRIS_PASSWORD=demo -e IRIS_USERNAME=demo intersystemsdc/iris-community

Launch an SQL terminal, irissqlcli or DBeaver:

    $ irissqlcli iris://demo:demo@localhost:9091/USER
    Server: InterSystems IRIS Version 2022.3.0.606 xDBC Protocol Version 65
    Version: 0.5.1
    [SQL]demo@localhost:USER> select $zversion
    +---------------------------------------------------------------------------------------------------------+
    | Expression_1                                                                                             |
    +---------------------------------------------------------------------------------------------------------+
    | IRIS for UNIX (Ubuntu Server LTS for ARM64 Containers) 2022.3 (Build 606U) Mon Jan 30 2023 09:07:49 EST  |
    +---------------------------------------------------------------------------------------------------------+
    1 row in set
    Time: 0.050s
    [SQL]demo@localhost:USER>

And you can start an IRIS terminal:

    docker exec -it iris-sql iriscli
    Node: fd7911f0b130, Instance: IRIS
    USER>

And you can use the IRIS_NAMESPACE variable to create a new namespace. Let's stop and kill the container we created:

    docker stop iris-sql

And launch a new one with the namespace DEMO introduced:

    docker run --rm --name iris-sql -d -p 9091:1972 -p 9092:52773 -e IRIS_PASSWORD=demo -e IRIS_USERNAME=demo -e IRIS_NAMESPACE=DEMO intersystemsdc/iris-community

Entering the SQL terminal:

    $ irissqlcli iris://demo:demo@localhost:9091/DEMO
    Server: InterSystems IRIS Version 2022.3.0.606 xDBC Protocol Version 65
    Version: 0.5.1
    [SQL]demo@localhost:DEMO> exit
    Goodbye!

Entering the IRIS terminal:

    $ docker exec -it iris-sql iriscli -U DEMO
    Node: 6c52cb612bc0, Instance: IRIS
    DEMO>

PS: if you run the container locally you can also connect the SQL terminal via embedded python as:

    $ docker exec -it iris-sql irissqlcli iris+emb:///DEMO
    Server: IRIS for UNIX (Ubuntu Server LTS for ARM64 Containers) 2022.3 (Build 606U) Mon Jan 30 2023 09:07:49 EST
    Version: 0.5.2
    [SQL]irisowner@/usr/irissys/:DEMO> select $username
    +--------------+
    | Expression_1 |
    +--------------+
    | irisowner    |
    +--------------+
    1 row in set
    Time: 0.047s
    [SQL]irisowner@/usr/irissys/:DEMO>

Credit goes to @Dmitry.Maslennikov

DC Community images you can use

The latest stable releases of InterSystems IRIS:
intersystemsdc/iris-community - InterSystems IRIS Community Edition
intersystemsdc/irishealth-community - InterSystems IRIS For Health Community Edition
intersystemsdc/iris-ml-community - InterSystems IRIS Community Edition with IntegratedML package
intersystemsdc/irishealth-ml-community - InterSystems IRIS Community Edition for Health with IntegratedML package

And the preview releases:
intersystemsdc/iris-community:preview
intersystemsdc/irishealth-community:preview
intersystemsdc/iris-ml-community:preview
intersystemsdc/irishealth-ml-community:preview

Happy coding!

UPDATE. Env variables without underscores are supported too:

    IRISUSERNAME=user to create
    IRISPASSWORD=with password
    IRISNAMESPACE=create namespace if it doesn't exist

A nice extension to this: run IRIS with an IPM package on board.
Here is one command to start IRIS and install webterminal:

    docker run --rm --name iris-ce -d -p 9091:1972 -p 9092:52773 -e IRIS_PASSWORD=demo -e IRIS_USERNAME=demo intersystemsdc/iris-community -a "echo 'zpm \"install webterminal\"' | iriscli"
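Putting the pieces together, here is a hedged sketch of a single command that combines the three environment variables with the -a hook from the comment above, so the container comes up with a custom user, a DEMO namespace, and an IPM package already installed. The container name iris-demo is just an illustrative choice; the image, ports and webterminal package are as used above.

    # Sketch only: combine IRIS_USERNAME/IRIS_PASSWORD/IRIS_NAMESPACE with the -a hook.
    # Assumptions: image tag and webterminal package as shown in this article;
    # adjust the ports and names for your environment.
    docker run --rm --name iris-demo -d \
      -p 9091:1972 -p 9092:52773 \
      -e IRIS_USERNAME=demo \
      -e IRIS_PASSWORD=demo \
      -e IRIS_NAMESPACE=DEMO \
      intersystemsdc/iris-community \
      -a "echo 'zpm \"install webterminal\"' | iriscli"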
Article
Maria Nesterenko · Apr 20, 2023

Using InterSystems IRIS Cloud SQL and IntegratedML to build a sleep analysis application, Sheep's Galaxy

Many factors affect a person's quality of life, and one of the most important is sleep. The quality of our sleep determines our ability to function during the day and affects our mental and physical health. Good quality sleep is critical to our overall health and well-being. Therefore, by analyzing indicators preceding sleep, we can determine the quality of our sleep. This is precisely the functionality of the Sheep's Galaxy application.
Announcement
Fabiano Sanches · Sep 2, 2022

Future updates for InterSystems IRIS, IRIS for Health 2022.2 Community Editions developer preview 7

In addition to the new supported platforms (Ubuntu 22.04 LTS and RHEL 9), Community Edition limits will soon be updated to:

Maximum cores: 20
Maximum connections: 8

NOTE: These limits aren't available yet in the latest developer preview build, 2022.2.0.322.0. They are expected in Developer Preview 7, to be released next week.

After installing version 2022.2.0.322.0:

    Product=Server
    License Type=Concurrent User
    Server=Single
    Platform=IRIS Community
    Licensed Users=5
    Licensed Cores=8
    Authorized Cores=8
    Extended feature codes=3A1F1B02
    Interoperability enabled.
    BI User enabled.
    BI Development enabled.
    HealthShare enabled.
    Analytics Run enabled.
    Analytics Analyzer enabled.
    Analytics Architect enabled.
    NLP enabled.
    HealthShare Foundation enabled.
    Analytics VR Format enabled.
    Analytics VR Data Define enabled.
    IntegratedML enabled.
    InterSystems IRIS enabled.
    Columnar Storage enabled.
    Non-Production

Not "Maximum cores: 20, Maximum connections: 8".

You're right, it's not yet in the developer preview. It should be released next week, as part of Developer Preview 7. Thank you for the heads up.
Article
Eduard Lebedyuk · Mar 7, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part II: GitLab workflow

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:Git 101Git flow (development process)GitLab installationGitLab WorkflowContinuous DeliveryGitLab installation and configurationGitLab CI/CDIn the previous article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. Still, our focus was on the implementation part of software development, but this part presents:GitLab Workflow - a complete software life cycle process - from idea to user feedbackContinuous Delivery - software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently.GitLab WorkflowGitLab Workflow is a logical sequence of possible actions to be taken during the entire lifecycle of the software development process.GitLab Workflow takes into account the GitLab Flow, which we discussed in a previous article. Here's how it looks like:Idea: every new proposal starts with an idea.Issue: the most effective way to discuss an idea is creating an issue for it. Your team and your collaborators can help you to polish and improve it in the issue tracker.Plan: once the discussion comes to an agreement, it's time to code. But first, we need to prioritize and organize our workflow by assigning issues to milestones and issue board.Code: now we're ready to write our code, once we have everything organized.Commit: once we're happy with our draft, we can commit our code to a feature-branch with version control. GitLab flow was explained in detail in the previous article.Test: run our scripts using GitLab CI, to build and test our application.Review: once our script works and our tests and builds succeeds, we are ready to get our code reviewed and approved.Staging: now it's time to deploy our code to a staging environment to check if everything worked as we were expecting or if we still need adjustments.Production: when we have everything working as it should, it's time to deploy to our production environment!Feedback: now it's time to look back and check what stage of our work needs improvement.Again, the process itself is not new (or unique to GitLab for that matter) and can be achieved with other tools of choice.Let's discuss several of these stages and what they entail. There is also documentation available.Issue and PlanThe beginning stages of GitLab workflow are centered on an issue - a feature or a bug or some other kind of semantically separate piece of work.The issue has several purposes such as:Management: an issue has a due date, an assigned person, due date, time spent and estimates, etc. to help track with issue resolving.Administrative: an issue is a part of a milestone, kanban board that allows us to track our software as it progresses from version to version.Development: an issue has a discussion and commits associated with it.Planning stage allows us to group issues by their priority, milestone, kanban board and have an overview for that.Development was discussed in the previous part, just follow any git flow you wish. 
After we developed our new feature and merged it into master - what happens next?Continuous DeliveryContinuous Delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.Continuous Delivery in GitLabIn GitLab continuous delivery configuration is defined on a per-repository basis as a YAML config file.Continuous delivery configuration is a series of consecutive stages.Each stage has one or several scripts that are executed in parallel.Script defines one action and what conditions should be met to execute it:What to do (run OS command, run a container)?When to run the script:What triggers it (commit to a specific branch)?Do we run it if previous stages failed?Run manually or automatically?In what environment to run the script?What artifacts to save after executing the scripts (they are uploaded from the environment into GitLab for easier access)?Environment - is a configured server or container in which you can run your scripts.Runners execute scripts in specific environments. They are connected to your GitLab and execute scripts as required.Runner can be deployed on a server, a container or even your local machine.How does Continuous Delivery happen?New commit is pushed into the repository.GitLab checks Continuous Delivery configuration.Continuous Delivery configuration contains all possible scripts for all cases so they are filtered to a set of scripts that should be run for this specific commit (for example a commit to master branch triggers only actions related to a master branch). This set is called a pipeline.Pipeline is executed in a target environment, the results of the execution are saved and displayed in GitLab.For example, here's one pipeline executed after a commit to a master branch:It consists of four stages, executed consecutivelyLoad stage loads code into a serverTest stage runs unit testsPackage stage consists of two scripts run in parallel:Build clientExport server code (for information purposes mainly)Deploy stage moves built client into the web-server directory.As we can see, each script has run successfully, if one of the scripts failed by default later scripts would not be run (but we can change this behavior):If we open the script we can see the log and determine why it failed: Running with gitlab-runner 10.4.0 (857480b6) on test runner (ab34a8c5) Using Shell executor... Running on gitlab-test... Fetching changes... Removing diff.xml Removing full.xml Removing index.html Removing tests.html HEAD is now at a5bf3e8 Merge branch '4-versiya-1-0' into 'master' From http://gitlab.eduard.win/test/testProject * [new branch] 5-versiya-1-1 -> origin/5-versiya-1-1 a5bf3e8..442a4db master -> origin/master d28295a..42a10aa preprod -> origin/preprod 3ac4b21..7edf7f4 prod -> origin/prod Checking out 442a4db1 as master... 
    Skipping Git submodules setup
    $ csession ensemble "##class(isc.git.GitLab).loadDiff()"
    [2018-03-06 13:58:19.188] Importing dir /home/gitlab-runner/builds/ab34a8c5/0/test/testProject/
    [2018-03-06 13:58:19.188] Loading diff between a5bf3e8596d842c5cc3da7819409ed81e62c31e3 and 442a4db170aa58f2129e5889a4bb79261aa0cad0
    [2018-03-06 13:58:19.192] Variable modified var=$lb("MyApp/Info.cls")
    Load started on 03/06/2018 13:58:19
    Loading file /home/gitlab-runner/builds/ab34a8c5/0/test/testProject/MyApp/Info.cls as udl
    Load finished successfully.
    [2018-03-06 13:58:19.241] Variable items var="MyApp.Info.cls" var("MyApp.Info.cls")=""
    Compilation started on 03/06/2018 13:58:19 with qualifiers 'cuk /checkuptodate=expandedonly'
    Compiling class MyApp.Info
    Compiling routine MyApp.Info.1
    ERROR: MyApp.Info.cls(version+2) #1003: Expected space : '}' : Offset:14 [zversion+1^MyApp.Info.1]
     TEXT: quit, "1.0" }
    Detected 1 errors during compilation in 0.010s.
    [2018-03-06 13:58:19.252] ERROR #5475: Error compiling routine: MyApp.Info.1. Errors: ERROR: MyApp.Info.cls(version+2) #1003: Expected space : '}' : Offset:14 [zversion+1^MyApp.Info.1]
      > ERROR #5030: An error occurred while compiling class 'MyApp.Info'
    ERROR: Job failed: exit status 1

A compilation error caused our script to fail.

Conclusion

GitLab supports all main stages of software development. Continuous delivery can help you to automate the tasks of building, testing and deploying your software.

Links

Part I: Git
Introduction to GitLab workflow
GitLab CI/CD documentation
GitLab flow
Code for this article

What's next?

In the next article, we'll:
Install GitLab.
Connect it to several environments with InterSystems products installed.
Write a Continuous Delivery configuration.

Let's discuss how our Continuous Delivery should work. First of all, we need several environments and branches that correspond to them. Code goes into a given branch and is delivered to the corresponding target environment:

Environment | Branch | Delivery | Who can commit | Who can merge
Test | master | Automatic | Developers, Owners | Developers, Owners
Preprod | preprod | Automatic | No one | Owners
Prod | prod | Semiautomatic (press button to deliver) | No one | Owners

And as an example, we'll develop one new feature using GitLab flow and deliver it using GitLab CD:
A feature is developed in a feature branch.
The feature branch is reviewed and merged into the master branch.
After a while (several features merged), master is merged into preprod.
After a while (user testing, etc.), preprod is merged into prod.

Here's how it would look like:

Development and testing
Developer commits the code for the new feature into a separate feature branch.
After the feature becomes stable, the developer merges our feature branch into the master branch.
Code from the master branch is delivered to the Test environment, where it's loaded and tested (see the sketch below for what that runner step can look like).

Delivery to the Preprod environment
Developer creates a merge request from the master branch into the preprod branch.
Repository Owner after some time approves the merge request.
Code from the preprod branch is delivered to the Preprod environment.

Delivery to the Prod environment
Developer creates a merge request from the preprod branch into the prod branch.
Repository Owner after some time approves the merge request.
Repository Owner presses the "Deploy" button.
Code from the prod branch is delivered to the Prod environment.

Or the same but in graphic form:

The article is considered an InterSystems Data Platform Best Practice.
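For readers who want to see what the "loaded and tested" step can look like on the runner, here is a minimal shell sketch based on the commands visible in the job log above. It assumes the isc.git.GitLab utility from this series is installed in an instance reachable via csession; the instance name "ensemble" and the methods loadDiff() and test() are taken from the logs in these articles, everything else is illustrative.

    #!/bin/sh
    # Hedged sketch of the Test-environment runner scripts for two CI stages.
    # Assumption: instance name "ensemble" and the isc.git.GitLab hooks, as in the job log.
    set -e

    # Load stage: import only the classes changed between the two commits
    csession ensemble "##class(isc.git.GitLab).loadDiff()"

    # Test stage: run the unit tests configured for the project
    csession ensemble "##class(isc.git.GitLab).test()"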
Article
Eduard Lebedyuk · May 10, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part VII: CD using containers

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:Git 101Git flow (development process)GitLab installationGitLab WorkflowContinuous DeliveryGitLab installation and configurationGitLab CI/CDWhy containers?Containers infrastructureCD using containersIn the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.In the third article, we covered GitLab installation and configuration and connecting your environments to GitLabIn the fourth article, we wrote a CD configuration.In the fifth article, we talked about containers and how (and why) they can be used.In the sixth article let's discuss main components you'll need to run a continuous delivery pipeline with containers and how they all work together.In this article, we'll build Continuous Delivery configuration discussed in the previous articles.WorkflowIn our Continuous Delivery configuration we would:Push code into GitLab repositoryBuild docker imageTest itPublish image to our docker registrySwap old container with the new version from the registryOr in graphical format:Let's start.BuildFirst, we need to build our image.Our code would be, as usual, stored in the repository, CD configuration in gitlab-ci.yml but in addition (to increase security) we would store several server-specific files on a build server.GitLab.xmlContains CD hooks code. It was developed in the previous article and available on GitHub. This is a small library to load code, run various hooks and test code. As a preferable alternative, you can use git submodules to include this project or something similar into your repository. Submodules are better because it's easier to keep them up to date. One other alternative would be tagging releases on GitLab and loading them with ADD command.iris.keyLicense key. Alternatively, it can be downloaded during container build rather than stored on a server. It's rather insecure to store in the repository.pwd.txtFile containing default password. Again, it's rather insecure to store it in the repository. Also, if you're hosting prod environment on a separate server it could have a different default password.load_ci.scriptInitial script, it:Enables OS authenticationLoads GitLab.xmlInitializes GitLab utility settingsLoads the code set sc = ##Class(Security.System).Get("SYSTEM",.Properties) write:('sc) $System.Status.GetErrorText(sc) set AutheEnabled = Properties("AutheEnabled") set AutheEnabled = $zb(+AutheEnabled,16,7) set Properties("AutheEnabled") = AutheEnabled set sc = ##Class(Security.System).Modify("SYSTEM",.Properties) write:('sc) $System.Status.GetErrorText(sc) zn "USER" do ##class(%SYSTEM.OBJ).Load(##class(%File).ManagerDirectory() _ "GitLab.xml","cdk") do ##class(isc.git.Settings).setSetting("hooks", "MyApp/Hooks/") do ##class(isc.git.Settings).setSetting("tests", "MyApp/Tests/") do ##class(isc.git.GitLab).load() halt Note that the first line is intentionally left empty. As some settings can be server-specific, it's stored not in the repository, but rather separately. If this initial hook is always the same, you can just store it in the repository. 
gitlab-ci.yml Now, to Continuous Delivery configuration: build image: stage: build tags: - test script: - cp -r /InterSystems/mount ci - cd ci - echo 'SuperUser' | cat - pwd.txt load_ci.script > temp.txt - mv temp.txt load_ci.script - cd .. - docker build --build-arg CI_PROJECT_DIR=$CI_PROJECT_DIR -t docker.domain.com/test/docker:$CI_COMMIT_REF_NAME . What is going on here? First of all, as docker build can access only subdirectories of a base build directory - in our case repository root, we need to copy our "secret" directory (the one with GitLab.xml, iris.key, pwd.txt and load_ci.script) into the cloned repository. Next, first terminal access requires a user/pass so we'd add them to load_ci.script (that's what empty line at the beginning of load_ci.script is for btw). Finally, we build docker image and tag it appropriately: docker.domain.com/test/docker:$CI_COMMIT_REF_NAME where $CI_COMMIT_REF_NAME is the name of a current branch. Note that the first part of the image tag should be named same as project name in GitLab, so it could be seen in GitLab Registry tab (instructions on tagging are available in Registry tab). Dockerfile Building docker image is done using Dockerfile, here it is: FROM docker.intersystems.com/intersystems/iris:2018.1.1.611.0 ENV SRC_DIR=/tmp/src ENV CI_DIR=$SRC_DIR/ci ENV CI_PROJECT_DIR=$SRC_DIR COPY ./ $SRC_DIR RUN cp $CI_DIR/iris.key $ISC_PACKAGE_INSTALLDIR/mgr/ \ && cp $CI_DIR/GitLab.xml $ISC_PACKAGE_INSTALLDIR/mgr/ \ && $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/changePassword.sh $CI_DIR/pwd.txt \ && iris start $ISC_PACKAGE_INSTANCENAME \ && irissession $ISC_PACKAGE_INSTANCENAME -U%SYS < $CI_DIR/load_ci.script \ && iris stop $ISC_PACKAGE_INSTANCENAME quietly We start from the basic iris container. First of all, we copy our repository (and "secret" directory) inside the container. Next, we copy license key and GitLab.xml to mgr directory. Then we change the password to the value from pwd.txt. Note that pwd.txt is deleted in this operation. After that, the instance is started and load_ci.script is executed. Finally, iris instance is stopped. Here's the job log (partial, skipped load/compilation logs): Running with gitlab-runner 10.6.0 (a3543a27) on docker 7b21e0c4 Using Shell executor... Running on docker... Fetching changes... Removing ci/ Removing temp.txt HEAD is now at 5ef9904 Build load_ci.script From http://gitlab.eduard.win/test/docker 5ef9904..9753a8d master -> origin/master Checking out 9753a8db as master... Skipping Git submodules setup $ cp -r /InterSystems/mount ci $ cd ci $ echo 'SuperUser' | cat - pwd.txt load_ci.script > temp.txt $ mv temp.txt load_ci.script $ cd .. $ docker build --build-arg CI_PROJECT_DIR=$CI_PROJECT_DIR -t docker.eduard.win/test/docker:$CI_COMMIT_REF_NAME . Sending build context to Docker daemon 401.4kB Step 1/6 : FROM docker.intersystems.com/intersystems/iris:2018.1.1.611.0 ---> cd2e53e7f850 Step 2/6 : ENV SRC_DIR=/tmp/src ---> Using cache ---> 68ba1cb00aff Step 3/6 : ENV CI_DIR=$SRC_DIR/ci ---> Using cache ---> 6784c34a9ee6 Step 4/6 : ENV CI_PROJECT_DIR=$SRC_DIR ---> Using cache ---> 3757fa88a28a Step 5/6 : COPY ./ $SRC_DIR ---> 5515e13741b0 Step 6/6 : RUN cp $CI_DIR/iris.key $ISC_PACKAGE_INSTALLDIR/mgr/ && cp $CI_DIR/GitLab.xml $ISC_PACKAGE_INSTALLDIR/mgr/ && $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/changePassword.sh $CI_DIR/pwd.txt && iris start $ISC_PACKAGE_INSTANCENAME && irissession $ISC_PACKAGE_INSTANCENAME -U%SYS < $CI_DIR/load_ci.script && iris stop $ISC_PACKAGE_INSTANCENAME quietly ---> Running in 86526183cf7c . 
Waited 1 seconds for InterSystems IRIS to start This copy of InterSystems IRIS has been licensed for use exclusively by: ISC Internal Container Sharding Copyright (c) 1986-2018 by InterSystems Corporation Any other use is a violation of your license agreement %SYS> 1 %SYS> Using 'iris.cpf' configuration file This copy of InterSystems IRIS has been licensed for use exclusively by: ISC Internal Container Sharding Copyright (c) 1986-2018 by InterSystems Corporation Any other use is a violation of your license agreement 1 alert(s) during startup. See messages.log for details. Starting IRIS Node: 39702b122ab6, Instance: IRIS Username: Password: Load started on 04/06/2018 17:38:21 Loading file /usr/irissys/mgr/GitLab.xml as xml Load finished successfully. USER> USER> [2018-04-06 17:38:22.017] Running init hooks: before [2018-04-06 17:38:22.017] Importing hooks dir /tmp/src/MyApp/Hooks/ [2018-04-06 17:38:22.374] Executing hook class: MyApp.Hooks.Global [2018-04-06 17:38:22.375] Executing hook class: MyApp.Hooks.Local [2018-04-06 17:38:22.375] Importing dir /tmp/src/ Loading file /tmp/src/MyApp/Tests/TestSuite.cls as udl Compilation started on 04/06/2018 17:38:22 with qualifiers 'c' Compilation finished successfully in 0.194s. Load finished successfully. [2018-04-06 17:38:22.876] Running init hooks: after [2018-04-06 17:38:22.878] Executing hook class: MyApp.Hooks.Local [2018-04-06 17:38:22.921] Executing hook class: MyApp.Hooks.Global Removing intermediate container 39702b122ab6 ---> dea6b2123165 [Warning] One or more build-args [CI_PROJECT_DIR] were not consumed Successfully built dea6b2123165 Successfully tagged docker.domain.com/test/docker:master Job succeeded Note that I'm using GitLab Shell executor and not Docker executor. Docker executor is used when you need something from inside of the image, for example, you're building an Android application in java container and you only need an apk. In our case, we need a whole container and for that, we need Shell executor. So we're running Docker commands via GitLab Shell executor. Run We have our image, next let's run it. In the case of feature branches, we can just destroy old container and start the new one. In the case of the environment, we can run a temporary container and replace environment container in case tests succeed (that is left as an exercise to the reader). Here is the script. destroy old: stage: destroy tags: - test script: - docker stop iris-$CI_COMMIT_REF_NAME || true - docker rm -f iris-$CI_COMMIT_REF_NAME || true This script destroys currently running container and always succeeds (by default docker fails if it tries to stop/remove non-existing container). Next, we start the new image and register it as an environment. Nginx container automatically proxies requests using VIRTUAL_HOST environment variable and expose directive (to know which port to proxy). run image: stage: run environment: name: $CI_COMMIT_REF_NAME url: http://$CI_COMMIT_REF_SLUG. docker.domain.com/index.html tags: - test script: - docker run -d --expose 52773 --env VIRTUAL_HOST=$CI_COMMIT_REF_SLUG.docker.eduard.win --name iris-$CI_COMMIT_REF_NAME docker.domain.com/test/docker:$CI_COMMIT_REF_NAME --log $ISC_PACKAGE_INSTALLDIR/mgr/messages.log Tests Let's run some tests. 
test image: stage: test tags: - test script: - docker exec iris-$CI_COMMIT_REF_NAME irissession iris -U USER "##class(isc.git.GitLab).test()" Publish Finally, let's publish our image in the registry publish image: stage: publish tags: - test script: - docker login docker.domain.com -u dev -p 123 - docker push docker.domain.com/test/docker:$CI_COMMIT_REF_NAME User/pass could be passed using GitLab secret variables. Now we can see the image in GitLab: And other developers can pull it from the registry. On environments tab all our environments are available for easy browsing: Conclusion In this series of articles, I covered general approaches to the Continuous Delivery. It is an extremely broad topic and this series of articles should be seen more as a collection of recipes rather than something definitive. If you want to automate building, testing and delivery of your application Continuous Delivery in general and GitLab in particular is the way to go. Continuous Delivery and containers allows you to customize your workflow as you need it. Links Code for the articleTest projectComplete CD configuration What's next That's it. I hope I covered the basics of continuous delivery and containers. There's a bunch of topics I did not talked about (so maybe later), especially towards the containers: Data could be persisted outside of container, here's the documentation on thatOrchestration platforms like kubernetesInterSystems Cloud ManagerEnvironment management - creating temporary environments for testing, removing old environments after feature branch mergingDocker compose for multi-container deploymentsDecreasing docker image size and build times... After loading your classes you should re-scramble the password, so you don't accidentally distribute an image with a defined/known instance password. On non-prod servers we don't need that - developer/ci server should be able to pull the image and work with it.On prod Durable %SYS should be used with secret password and not changed with each new app version, but rather through a separate process (once a year, etc). the pwd should be set at runtime of the container (we have the parameters for that). No matter if prod or non-prod. If you have any pwd fixed in the image, it introduces an unnecessary risk. What risk?There are two distinct cases - where we have users/passwords in place or we don't.In the first case container should use these passwords (via Durable %SYS) and they shouldn't be be superseded by anything happening with a container change.In the second case we have some empty application - no users, no data and so specifying random password only adds unnecessary steps down the line.There is one case where we need to force our container user to create new password - when we are:supplying a complete applicationdon't have control over how it is deployedpasswords are stored inside the containerin this case yes (when all conditions are met), password should be scrambled, but this situation is wrong on itself (mainly in storing passwords inside the container), and should be resolved by other means. great job. Wow, Thank you very much Thank you for sharing this! We followed same process overall, but handled code promotion differently. Our setup is to run IRIS on Kubernetes with different volumes for namespaces (code, globals) & journaling. Given this requirement, we can not copy/package code into the docker image. So we made use of post install script option to pull the code and import as IRIS container starts. 
Great to hear that this project is being used in the field!
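The "What's next" list above mentions persisting data outside of the container, and the comment thread refers to Durable %SYS for production passwords; in InterSystems containers that persistence is normally done with the Durable %SYS feature. Below is a minimal hedged sketch of running the image built in this article with a durable volume; the host path, container name and ISC_DATA_DIRECTORY value are assumptions to adapt to your deployment.

    # Hedged sketch: run the application image with Durable %SYS so databases,
    # journals and settings survive container replacement.
    # Assumptions: image tag from this article, host directory /data/durable.
    docker run -d --name iris-master \
      --volume /data/durable:/durable \
      --env ISC_DATA_DIRECTORY=/durable/iconfig \
      --expose 52773 \
      docker.domain.com/test/docker:master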
Announcement
RB Omo · May 21, 2018

Software Defined Data Centers (SDDC) and Hyper-Converged Infrastructure (HCI) – Important Considerations for InterSystems Clients

This document describes considerations around SDDC and HCI solutions for InterSystems clients.

A growing number of IT organizations are exploring the potential use of SDDC and HCI solutions. These solutions appear attractive and are marketed as a simplification of IT management with potential cost reductions across heterogeneous data centers and cloud infrastructure options. The potential benefits to IT organizations are significant, and many InterSystems clients are embracing SDDC, HCI, or both.

If you are considering SDDC or HCI solutions, please contact your Sales Account Manager or Sales Engineer to schedule a call with a Technical Architect. This is important to ensure your success.

These solutions are highly configurable, and organizations can choose from many permutations of software and hardware components. We have seen our clients use a variety of SDDC and HCI solutions, and through those experiences we have learned that solution configurations must be considered carefully to avoid risk. In several cases, client implementations did not match the performance and resiliency capabilities required for mission-critical transactional database systems, resulting in poor application performance and unexpected downtime. Where the goal is to provide mission-critical transactional database systems with high resiliency and consistently low-latency storage performance, component selection and configuration require careful consideration and planning for your situation, including:

- Selecting the right components
- Properly configuring those components
- Establishing appropriate operational practices

SDDC and HCI offer flexibility and ease of management, and they operate within or alongside the hypervisor layer between the operating system and the physical storage layers. This adds varying degrees of overhead. When misconfigured, it can radically increase disk latency - which is disastrous for application performance.

Design Considerations for InterSystems IRIS, Caché, and Ensemble

The following list of minimum requirements and design considerations is based on our internal testing of SDDC and HCI solutions. Note that this is not a reference architecture; your application-specific requirements will depend on your situation and performance targets.

Networking
- Two or more 10Gb NIC interfaces per node, dedicated to the exclusive use of storage traffic.
- Two local non-blocking line-rate 10Gb switches for switch connectivity resiliency.
- Optionally, 25, 40, 50, or 100Gb instead of 10Gb, to future-proof the investment in HCI and to meet the specific benchmarked and measured application requirements.

Computing
- At least a six-node cluster for higher resiliency and predictable performance during maintenance and failure scenarios.
- Intel Scalable Gold or Platinum processors or later, at 2.2GHz or higher.
- RAM installed in groups of 6 x DDR4-2666 DIMMs per CPU socket (384GB minimum).

Storage
- All-flash storage. This is the only recommended option; InterSystems strongly recommends against hybrid or tiered HCI storage for production workloads.
- A minimum of two disk groups per physical node.
- Each disk group should support at least three capacity drives.
- Exclusive use of write-intensive 12Gbps SAS SSDs or NVMe SSDs.
- For all-flash solutions with cache and capacity tiers, use NVMe for the cache tier and write-intensive 12Gbps SAS for the capacity tier.
- Use of LVM PE striping with Linux virtual machines, which spreads IO across multiple disk groups (contact InterSystems for guidance; a minimal sketch of such a layout follows this announcement).
- Use of Async IO with the rtkaio library for all databases and write image journal (WIJ) files with Linux virtual machines. This bypasses file-system caching and reduces write latency (see the documentation or contact the WRC for assistance with properly enabling Async IO on Linux).

These minimum recommendations have been shown to alleviate the overhead of SDDC and HCI, but they do not guarantee application performance. As with any new technology, testing your own application for performance and resiliency is paramount to a successful deployment.

So again, if you are considering SDDC or HCI solutions, please contact your Sales Account Manager or Sales Engineer to schedule a call with a Technical Architect. This is important to ensure your success.
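As a rough illustration of the LVM PE striping recommendation above - this is a minimal sketch, not an InterSystems reference configuration; device names, volume group names, stripe count, and sizes are assumptions to adapt to your own disk-group layout:

# Create a volume group from the virtual disks presented to the VM (example devices)
vgcreate vg_irisdata /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a logical volume striped across all four physical volumes
# (-i = number of stripes, -I = stripe size)
lvcreate -n lv_irisdata -L 500G -i 4 -I 4M vg_irisdata

# Create a filesystem and mount it as the database directory
mkfs.xfs /dev/vg_irisdata/lv_irisdata
mkdir -p /iris/data
mount /dev/vg_irisdata/lv_irisdata /iris/data

Enabling Async IO with the rtkaio library for databases and the WIJ is done through instance configuration rather than at the LVM layer; see the documentation or contact the WRC as noted above.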
Article
Eduard Lebedyuk · Jul 6, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part VIII: CD using ICM

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD
- Why containers?
- Containers infrastructure
- CD using containers
- CD using ICM

In this article, we'll build Continuous Delivery with InterSystems Cloud Manager. ICM is a cloud provisioning and deployment solution for applications based on InterSystems IRIS. You define the desired deployment configuration, and ICM provisions it automatically. For more information, take a look at First Look: ICM.

Workflow

In our Continuous Delivery configuration we would:

- Push code into the GitLab repository
- Build a docker image
- Publish the image to a docker registry
- Test it on a test server
- If tests pass, deploy on a production server

Or in graphical format:

As you can see, it's all pretty much the same, except we would be using ICM instead of managing Docker containers manually.

ICM configuration

Before we can start upgrading containers, they should be provisioned. For that we need to define defaults.json and definitions.json, describing our architecture. I'll provide these two files for a LIVE server; the definitions for a TEST server are the same, and the defaults are the same except for the Tag and SystemMode values.

defaults.json:

{
  "Provider": "GCP",
  "Label": "gsdemo2",
  "Tag": "LIVE",
  "SystemMode": "LIVE",
  "DataVolumeSize": "10",
  "SSHUser": "sample",
  "SSHPublicKey": "/icmdata/ssh/insecure.pub",
  "SSHPrivateKey": "/icmdata/ssh/insecure",
  "DockerImage": "eduard93/icmdemo:master",
  "DockerUsername": "eduard93",
  "DockerPassword": "...",
  "TLSKeyDir": "/icmdata/tls",
  "Credentials": "/icmdata/gcp.json",
  "Project": "elebedyu-test",
  "MachineType": "n1-standard-1",
  "Region": "us-east1",
  "Zone": "us-east1-b",
  "Image": "rhel-cloud/rhel-7-v20170719",
  "ISCPassword": "SYS",
  "Mirror": "false"
}

definitions.json:

[
  {
    "Role": "DM",
    "Count": "1",
    "ISCLicense": "/icmdata/iris.key"
  }
]

Inside the ICM container the /icmdata folder is mounted from the host, and:

- TEST server definitions are placed in the /icmdata/test folder
- LIVE server definitions are placed in the /icmdata/live folder

After obtaining all required keys:

keygenSSH.sh /icmdata/ssh
keygenTLS.sh /icmdata/tls

And placing the required files in /icmdata:

- iris.key
- gcp.json (for deployment to Google Cloud Platform)

Call ICM to provision your instances:

cd /icmdata/test
icm provision
icm run
cd /icmdata/live
icm provision
icm run

This provisions one TEST and one LIVE server, with one standalone InterSystems IRIS instance on each. Please refer to ICM First Look for a more detailed guide.

Build

First, we need to build our image. Our code would be, as usual, stored in the repository, and the CD configuration in gitlab-ci.yml, but in addition (to increase security) we would store several server-specific files on a build server.

iris.key

License key. Alternatively, it can be downloaded during the container build rather than stored on a server. It is rather insecure to store it in the repository.

pwd.txt

File containing the default password. Again, it is rather insecure to store it in the repository. Also, if you're hosting the prod environment on a separate server, it could have a different default password.
load_ci_icm.script

Initial script, which:

- Loads the installer
- The installer does the application initialization
- Loads the code

set dir = ##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
do ##class(%SYSTEM.OBJ).Load(dir _ "Installer/Global.cls","cdk")
do ##class(Installer.Global).init()
halt

Note that the first line is intentionally left empty.

Several things are different from the previous examples. First of all, we are not enabling OS authentication, as ICM interacts with the container instead of GitLab directly. Second, I'm using an Installer manifest to initialize our application, to show a different approach to initialization. Read more on the Installer in this article. Finally, we'll publish our image on Docker Hub as a private repo.

Installer/Global.cls

Our installer manifest looks like this:

<Manifest>
  <Log Text="Creating namespace ${Namespace}" Level="0"/>
  <Namespace Name="${Namespace}" Create="yes" Code="${Namespace}" Ensemble="" Data="IRISTEMP">
    <Configuration>
      <Database Name="${Namespace}" Dir="${MGRDIR}/${Namespace}" Create="yes" MountRequired="true" Resource="%DB_${Namespace}" PublicPermissions="RW" MountAtStartup="true"/>
    </Configuration>
    <Import File="${Dir}MyApp" Recurse="1" Flags="cdk" IgnoreErrors="1" />
  </Namespace>
  <Log Text="Mapping to USER" Level="0"/>
  <Namespace Name="USER" Create="no" Code="USER" Data="USER" Ensemble="0">
    <Configuration>
      <Log Text="Mapping MyApp package to USER namespace" Level="0"/>
      <ClassMapping From="${Namespace}" Package="MyApp"/>
    </Configuration>
    <CSPApplication Url="/" Directory="${Dir}client" AuthenticationMethods="64" IsNamespaceDefault="false" Grant="%ALL" />
    <CSPApplication Url="/myApp" Directory="${Dir}" AuthenticationMethods="64" IsNamespaceDefault="false" Grant="%ALL" />
  </Namespace>
</Manifest>

And it implements the following changes:

- Creates the application namespace.
- Creates the application code database (data would be stored in the USER database).
- Loads the code into the application code database.
- Maps the MyApp package to the USER namespace.
- Creates two web applications: one for HTML and one for REST.

gitlab-ci.yml

Now, to the Continuous Delivery configuration:

build image:
  stage: build
  tags:
    - master
  script:
    - cp -r /InterSystems/mount ci
    - cd ci
    - echo 'SuperUser' | cat - pwd.txt load_ci_icm.script > temp.txt
    - mv temp.txt load_ci.script
    - cd ..
    - docker build --build-arg CI_PROJECT_DIR=$CI_PROJECT_DIR -t eduard93/icmdemo:$CI_COMMIT_REF_NAME .

What is going on here? First of all, as docker build can access only subdirectories of the base build directory - in our case the repository root - we need to copy our "secret" directory (the one with iris.key, pwd.txt, and load_ci_icm.script) into the cloned repository. Next, the first terminal access requires a user/pass, so we prepend them to load_ci.script (that's what the empty line at the beginning of load_ci_icm.script is for, by the way; a sketch of the generated load_ci.script follows below). Finally, we build the docker image and tag it appropriately: eduard93/icmdemo:$CI_COMMIT_REF_NAME, where $CI_COMMIT_REF_NAME is the name of the current branch. Note that the first part of the image tag should be named the same as the project name in GitLab, so it can be seen in the GitLab Registry tab (instructions on tagging are available in the Registry tab).
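To make the cat concatenation above more tangible, here is a sketch of what the generated load_ci.script could look like - the password value is purely illustrative (whatever your pwd.txt contains), and the exact line layout depends on whether pwd.txt ends with a newline:

SuperUser
SYS
set dir = ##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
do ##class(%SYSTEM.OBJ).Load(dir _ "Installer/Global.cls","cdk")
do ##class(Installer.Global).init()
halt

irissession consumes the first two lines as the username and password prompts and then executes the remaining ObjectScript commands; the intentionally empty first line of load_ci_icm.script keeps the password and the first command from running together on a single line when pwd.txt has no trailing newline.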
Dockerfile

Building the docker image is done using a Dockerfile, here it is:

FROM intersystems/iris:2018.1.1-released

ENV SRC_DIR=/tmp/src
ENV CI_DIR=$SRC_DIR/ci
ENV CI_PROJECT_DIR=$SRC_DIR

COPY ./ $SRC_DIR

RUN cp $CI_DIR/iris.key $ISC_PACKAGE_INSTALLDIR/mgr/ \
 && cp $CI_DIR/GitLab.xml $ISC_PACKAGE_INSTALLDIR/mgr/ \
 && $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/changePassword.sh $CI_DIR/pwd.txt \
 && iris start $ISC_PACKAGE_INSTANCENAME \
 && irissession $ISC_PACKAGE_INSTANCENAME -U%SYS < $CI_DIR/load_ci.script \
 && iris stop $ISC_PACKAGE_INSTANCENAME quietly

We start from the basic iris container. First of all, we copy our repository (and the "secret" directory) inside the container. Next, we copy the license key to the mgr directory. Then we change the password to the value from pwd.txt. Note that pwd.txt is deleted in this operation. After that, the instance is started and load_ci.script is executed. Finally, the iris instance is stopped.

Note that I'm using the GitLab Shell executor and not the Docker executor. The Docker executor is used when you need something from inside the image - for example, you're building an Android application in a Java container and you only need an apk. In our case, we need a whole container, and for that we need the Shell executor. So we're running Docker commands via the GitLab Shell executor.

Publish

Now, let's publish our image on Docker Hub:

publish image:
  stage: publish
  tags:
    - master
  script:
    - docker login -u eduard93 -p ${DOCKERPASSWORD}
    - docker push eduard93/icmdemo:$CI_COMMIT_REF_NAME

Note the ${DOCKERPASSWORD} variable - it's a GitLab secret variable. We can add them in GitLab > Project > Settings > CI/CD > Variables.

Job logs also do not contain the password value:

Running with gitlab-runner 10.6.0 (a3543a27)
  on icm 82634fd1
Using Shell executor...
Running on docker...
Fetching changes...
Removing ci/
HEAD is now at 8e24591 Add deploy to LIVE
Checking out 8e245910 as master...
Skipping Git submodules setup
$ docker login -u eduard93 -p ${DOCKERPASSWORD}
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
$ docker push eduard93/icmdemo:$CI_COMMIT_REF_NAME
The push refers to repository [docker.io/eduard93/icmdemo]
master: digest: sha256:d1612811c11154e77c84f0c08a564a3edeb7ddbbd9b7acb80754fda97f95d101 size: 2620
Job succeeded

And on Docker Hub we can see our new image.

Run

We have our image; next, let's run it on our test server. Here is the script:

run image:
  stage: run
  environment:
    name: $CI_COMMIT_REF_NAME
  tags:
    - master
  script:
    - docker exec icm sh -c "cd /icmdata/test && icm upgrade -image eduard93/icmdemo:$CI_COMMIT_REF_NAME"

With ICM we need to run only one command (icm upgrade) to upgrade an existing deployment. We call it by running "docker exec icm sh -c ...", which executes the specified command inside the icm container. First we move into /icmdata/test, where our ICM deployment definition for the TEST server is defined. After that we call icm upgrade to replace the currently running container with the new one.

Test

Let's run some tests.

test image:
  stage: test
  tags:
    - master
  script:
    - docker exec icm sh -c "cd /icmdata/test && icm session -namespace USER -command 'do \$classmethod(\"%UnitTest.Manager\",\"RunTest\",\"MyApp/Tests\",\"/nodelete\")' | tee /dev/stderr | grep 'All PASSED' && exit 0 || exit 1"

Again, we're executing one command inside our icm container. icm session executes a command on a deployed node. The command runs the unit tests.
After that, it pipes all output to the screen (via tee) and also to grep, which looks for the unit test results and exits the process successfully or with an error.

Deploy

Deployment on the production server is exactly the same as deployment on test, except that it uses the directory with the LIVE deployment definition. If the tests fail, this stage is not executed.

deploy image:
  stage: deploy
  environment:
    name: $CI_COMMIT_REF_NAME
  tags:
    - master
  script:
    - docker exec icm sh -c "cd /icmdata/live && icm upgrade -image eduard93/icmdemo:$CI_COMMIT_REF_NAME"

Conclusion

ICM gives you a simple, intuitive way to provision cloud infrastructure and deploy services on it, helping you get into the cloud now without major development or retooling. The benefits of infrastructure as code (IaC) and containerized deployment make it easy to deploy InterSystems IRIS-based applications on public cloud platforms such as Google, Amazon, and Azure, or on your private VMware vSphere cloud. Define what you want, issue a few commands, and ICM does the rest. Even if you are already using cloud infrastructure, containers, or both, ICM dramatically reduces the time and effort required to provision and deploy your application by automating numerous otherwise manual steps.

Links

- Code for the article
- Test project
- ICM Documentation
- First Look: ICM