Announcement
Anastasia Dyubaylo · Mar 25, 2021
Hey Developers,
We're pleased to invite you to the upcoming webinar in English called "InterSystems IRIS & the industry 4.0 roadmap - Smart Factory Starter Pack"!
🗓 Date & time: March 31, 02:00 PM CEST
🗣 Speakers:
@Marco.denHartog, CTO, ITvisors
@Aldo.Verlinde9118, Sales Engineer, InterSystems
Manufacturing organizations today are rapidly transforming their existing factories into “smart factories.” In a smart factory, data from Operational Technologies (OT) systems and real-time signals from the shop floor are combined with enterprise IT and analytics applications. This enables manufacturers to improve quality and efficiency, respond faster to events, and predict and avoid problems before they occur, among many other benefits.
In this webinar, we discuss the 5 maturity levels on the road to Industry 4.0 from a data-driven point of view and how InterSystems IRIS and the Smart Factory Starter Pack help manufacturers at each level:
Data Collection: Easy connections to many different types of sources
Data Unification: Clean data, data management, bridging the OT/IT gap, …
Data Exploration: Variety of tools, connectivity (!), translytical database, …
Operationalization: Minimize operator overhead & errors, two-way connections (ERP, MES, order flow), flexible & easy to use database, …
Industry 4.0: Modelling production processes & equipment, predicting quality & performance, continuous improvement cycle
Discover how IRIS and the Smart Factory Starter Pack empower manufacturers in their smart factory initiatives.
Note: The language of the webinar is English.
➡️ JOIN THE WEBINAR!
Announcement
Anastasia Dyubaylo · Apr 15, 2021
Hi Developers,
Please welcome the new video specially recorded for the Developer Tools programming contest:
⏯ Deploying InterSystems IRIS docker solutions to GKE cloud in 5 minutes
This video demonstrates how to take a template of the InterSystems IRIS data platform, build a solution using Docker, and deploy it as a working service on Google Cloud with an arbitrary DNS name and HTTPS enabled, using GKE and Cloud Run technology.
⬇️ iris-google-run-deploy-template
This demo supports the programming contest happening now and lets all participants use the secret key to deploy their contest solutions on the InterSystems account at name.contest.community.intersystems.com.
The key can be obtained in the Docker channel on Discord.
🗣 Presenter: @Evgeny.Shvarov, Developer Ecosystem Manager, InterSystems
Stay tuned! 👍🏼
Announcement
Evgeny Shvarov · Nov 30, 2022
Hi Developers!
Here is the score of technical bonuses for participants' applications in the InterSystems IRIS for Health Contest: FHIR for Women's Health!
Bonus categories and their nominal points: Women's Health Topic (5), Women's Health Dataset (3), IRIS For Health or FHIR Cloud Usage (2), Healthcare Interoperability (4), Embedded Python (3), Docker (2), ZPM (2), Online Demo (2), Code Quality (1), First Article on DC (2), Second Article on DC (1), Video on YouTube (3), First Time Contribution (3). Nominal Total Bonus: 33.

| Project | Points awarded | Total Bonus |
| --- | --- | --- |
| FemTech Reminder | 5, 2, 4, 2, 2, 2, 1, 2, 3, 3 | 28 |
| FHIR Questionnaires | 2, 2, 2, 1, 2, 3 | 12 |
| ehh2022-diabro | 2, 3, 3 | 8 |
| Dia-Bro-App | 2, 2, 2, 2, 1, 3, 3 | 15 |
| Dexcom Board | 2, 3, 3 | 8 |
| Beat Savior | 2, 3, 3 | 8 |
| Contest-FHIR | 2, 3, 1, 2, 1 | 9 |
| NeuraHeart | 2, 2, 2, 3, 3 | 12 |
| Pregnancy Symptoms Tracker | 5, 2, 2, 2, 2, 1, 2, 1, 3 | 20 |
| fhir-healthy-pregnancy | 5, 2, 2, 1, 3, 3 | 16 |
| iris-fhir-app | 2, 4, 2, 2, 2, 1, 2 | 15 |
Bonuses are subject to change as applications are updated.
Please claim your bonuses here in the comments below or in the Discord chat.

Hello Evgeny, thank you for this information. If I'm not mistaken, we (FemTech Reminder) have the opportunity to get the following bonuses: ZPM, IRIS For Health or FHIR Cloud Usage.

Could you please check? ehh2022-diabro had a working online demo. Our deployment on https://portal.events.isccloud.io/ has been deleted for some reason without our knowledge :'(

NeuraHeart uses Docker for our deployment, and we use a Python microservice for running machine learning tasks.

Hi! You got the Docker bonus! But Embedded Python is a new InterSystems technology, and you don't use it. Read more about Embedded Python: https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_epython

Hi @Maksym.Shcherban! My apologies for that! The lifetime of all deployments on https://portal.events.isccloud.io/ is 7 days by default. Could you create a new one? Also, if you need the deployment to stay alive longer, please reach out to me in a direct message.

Points adjusted, thank you!

Hey team, I would like to request the 3 points for the YouTube video for the application Pregnancy Symptoms Tracker. Also, another check ✅ for Code Quality, as you can see here: https://community.objectscriptquality.com/dashboard?id=intersystems_iris_community%2Fpregnancy-symp-tracker-fhir-app

Updated
Discussion
Olga Zavrazhnova · Oct 31, 2022
Hi Community,
Watch the recording of the Second Community Roundtable: "What is the best source control system for development with InterSystems IRIS?"
Some great discussions were started during this roundtable, and we invite you to continue them in the comments to this post.
Tell us which source control system you use and why!
Announcement
Olga Zavrazhnova · Oct 18, 2022
Hi Community,
Let's meet virtually at our Second Community Roundtable! This will be a 45-minute friendly discussion on a given topic: What is the best source control system for development with InterSystems IRIS?
>> Register here <<
Speaker: @Evgeny.Shvarov Co-speakers: @Dmitry.Maslennikov , @Timothy.Leavitt and @George.James
📅 Date: October 27 | 🕑 Time: 9:00 am ET | 3:00 pm CEST
>> Register here <<
Do you have specific questions about the topic you wish to discuss at the roundtable? Please share them in the comments!

Hi All - just a friendly reminder that the Roundtable is happening today! You will have to register here to enter the call. Excited to see you soon!
Announcement
Eduard Lebedyuk · Oct 3, 2019
We're back with more AI, more ML and more presenters!
We are pleased to invite you to the upcoming webinar in English: AI Robotization (Python, R, Interoperability) for InterSystems IRIS on November 7 at 11:00 EST!
Machine Learning (ML) Toolkit is a suite of extensions for Machine Learning and Artificial Intelligence in-platform with InterSystems IRIS. During this webinar we will present an approach to robotizing AI/ML, i.e. making it run autonomously and adaptively within constraints and rules that you define. Self-learning neural nets, self-monitored analytical processes, agent-based systems of analytical processes, and distributed AI/ML computations are all topics to expect in our webinar.
Three demos are on the agenda:
Realtime predictive maintenance
Self-monitored sentiment analysis
Distributed AI/ML computations involving cloud services
The webinar is for an expert audience (Data Science, Data Engineering, Robotic Process Automation) as well as for those just discovering the world of data science. As preparation, we recommend revisiting our previous webinar's recording.
Time: Nov 7, 2019 11:00 AM in Eastern Time (US and Canada)
Update: download the presentation.
Tomorrow!
And now this webinar recording is available on the InterSystems Developers YouTube Channel:
Enjoy!
Article
Evgeny Shvarov · Jan 25, 2020
Hi Developers!
"objectscript.conn" :{
"ns": "IRISAPP",
"active": true,
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
I want to share with you a nice new feature I came across in a new 0.8 release of VSCode ObjectScript plugin by @Dmitry.Maslennikov and CaretDev.
The release comes with a new configuration setting "docker-compose" which solves the issue with ports you need to set up to make your VSCode Editor connect to IRIS. It was not very convenient if you had more than one docker container with IRIS running on the same machine. Now, this is solved!
Read below how it works now.
The concept of using docker locally for development with IRIS supposes that you have a Dockerfile and docker-compose.yml in the repository, which you run to build the environment of the project and load all the ObjectScript code into an IRIS container pulled from Docker Hub. And you also have the VSCode .vscode/settings.json file in the repository, where you point to the IRIS web-server port you connect to (along with other connection settings such as URL, Namespace, and login credentials).
The question is - what is the port VSCode should connect to?
You can use port 52773, which is the default IRIS web-server port. But if you try to launch a second docker container with it, the launch will fail, because two docker containers on the same machine cannot wait for connections on the same port. You can, however, expose a different external port for each Docker container, and this can be set up via the docker-compose.yml file. Here is an example (note the mapped port 52791):
version: '3.6'
services:
  iris:
    build: .
    restart: always
    ports:
      - 52791:52773
    volumes:
      - ~/iris.key:/usr/irissys/mgr/iris.key
      - ./:/irisdev/app
So you look into docker-compose.yml and type the same port into .vscode/settings.json:
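With the 52791:52773 mapping above, the relevant connection settings might look roughly like this (a sketch; "port" is the standard objectscript.conn setting, and the other values are taken from the examples in this article):

"objectscript.conn": {
    "ns": "IRISAPP",
    "active": true,
    "port": 52791
}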
But where is the problem, you ask?
The problem comes when you expose your project as a library or demo and invite people to run and edit the code with VSCode: you don't want them to set up the port manually, you want them to clone the repo, run docker, and be able to collaborate immediately. So here comes the question: what port should you map your project to in docker-compose so that it won't conflict with anything in someone else's environment?
Even if you don't expose the project to anyone and use it only yourself: what port do you put in the .vscode/settings.json connection settings?
The answer is that when you launch a new docker container with IRIS, you see the error message that the port is already taken, and you either stop other containers or invent a new port that is probably not taken and try it in docker-compose and settings.json.
Boring. A time-consuming, useless operation. Nobody likes it.
And you impose the same routine on everyone if you publish the library.
The relief comes with the new 0.8 VSCode ObjectScript release where you can introduce docker-compose section, which solves the issue forever:
"objectscript.conn" :{
"ns": "IRISAPP",
"active": true,
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
It contains service and internalPort parameters, which tell VSCode that, to find the port to connect to, it should look into the docker-compose.yml file in the same repo, find the "iris" service section there, and take the external port that is mapped to the internal port 52773.
Yey!
And what is even cooler, docker-compose has a mode in which you don't pin an external port at all: you list only the internal port, which means Docker will use a random available external port.
iris:
  build:
    context: .
    dockerfile: Dockerfile-zpm
  restart: always
  ports:
    - 51773
    - 52773
    - 53773
  volumes:
    - ~/iris.key:/usr/irissys/mgr/iris.key
    - ./:/irisdev/app
Yey, two times! Because this means you no longer need to think about which IRIS web-server port VSCode connects to at all.
How does it work in this case? We run docker-compose up, Docker picks a random web-server port and runs IRIS with it, VSCode gets this port from Docker and connects to IRIS with it, and you are able to edit and compile code immediately. No additional settings.
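Under the hood, this is the same information you could get from Docker by hand; something like the following (a sketch, with illustrative output):

$ docker-compose port iris 52773
0.0.0.0:32769

Here 32769 is the random external port Docker picked for the container's internal 52773.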
Profit!
And you can reproduce the same fantastic feelings with the following template, which I recently updated for this new feature of VSCode ObjectScript 0.8: it has the updated settings.json and docker-compose.yml. To test the template, run the following commands in a terminal (tested on Mac). You also need git and Docker Desktop installed.
$ git clone https://github.com/intersystems-community/objectscript-docker-template.git
$ cd objectscript-docker-template
$ docker-compose up -d
Open this folder in VSCode (make sure you have the VSCode ObjectScript plugin installed):
Check if VSCode is connected - click on VSCode status line:
Reconnect VSCode if needed.
You can open IRIS Terminal in VSCode if needed with the ObjectScript menu.
Done! Now you are able to run, compile and debug the code!
Happy coding!

Thanks for sharing this. As an addition, if you would like to use a different file name from the default docker-compose.yml for your configuration, you can set it as well:
"objectscript.conn" :{
"ns": "IRISAPP",
"active": true,
"docker-compose": {
"service": "iris",
"internalPort": 52773,
"file": "docker-compose.yml"
}
} Hi Dmitriy,
I'm testing it in Windows 10 Pro... everything works except the terminal... when I try to open it in VSCode
I got the following error:
The terminal process command 'docker-compose exec iris /bin/bash -c 'command -v ccontrol >/dev/null 2>&1 && ccontrol session $ISC_PACKAGE_INSTANCENAME -U IRISAPP || iris session $ISC_PACKAGE_INSTANCENAME -U IRISAPP'' failed to launch (exit code: {2})
I've tested the command directly in the PowerShell console, executing:
docker-compose exec iris /bin/bash -c 'command -v ccontrol >/dev/null 2>&1 && ccontrol session $ISC_PACKAGE_INSTANCENAME -U IRISAPP || iris session $ISC_PACKAGE_INSTANCENAME -U IRISAPP'
and it works... but not through the "Open Terminal in Docker" option, which is the more user-friendly way...
Hi Jose,
Interesting. I don't have Windows, but as far as I can see, exit code 2 means
"The system cannot find the file specified."
That's strange, since it successfully called docker-compose when it looked for the port.
You can also try to open an ordinary terminal in VSCode (Menu, View->Terminal) and put the full command there to see how it works.

That's what seemed strange to me... I already did it within the VSCode Terminal directly and it works, just as it does in PowerShell... it fails only when it goes through the "Open Terminal in Docker" option...
It works with Windows Powershell and with Powershell core (within VSCode Terminal and in PS Console):
Just in case it gives you any clue... when I enable the connection, it seems that it tries to connect through the terminal or something, because I get an error as well.
This is when I just Toggle Disable and then Enable (see the error message in the bottom right):
It says: The terminal process command 'docker-compose exec iris /bin/bash -c 'command -v ccontrol >/dev/null 2>&1 && ccontrol session $ISC_PACKAGE_INSTANCENAME -U IRISAPP || iris session $ISC_PACKAGE_INSTANCENAME -U IRISAPP'' failed to launch (exit code: {2})

Yep, it tries to open the terminal automatically when the docker-compose option is used.

You need a Windows 10 machine... ...if you want me to gather any other info for you, just let me know.

💡 This article is considered as InterSystems Data Platform Best Practice.
Article
Peter Steiwer · Feb 25, 2019
AnalyzeThis is a tool for getting a personalized preview of your own data inside of InterSystems BI. This allows you to get first-hand experience with InterSystems BI and understand the power and value it can bring to your organization. In addition to getting a personalized preview of InterSystems BI through an import of a CSV file with your data, Classes and SQL Queries are now supported as Data Sources in v1.1.0!
At the 2018 Global Summit, AnalyzeThis (link to original DC article) was announced and released to InterSystems Open Exchange. Version 1.1.0 is now available through InterSystems Open Exchange. This release adds both Classes and SQL Queries as supported Data Sources. This allows you to install AnalyzeThis into an existing namespace and quickly select an existing class to start understanding the insights that InterSystems BI can bring to your organization.

Hi Peter!
It's really a great app, I'm using every week to quickly analyze CSVs and make reports.
The question: suppose I built a cube, pivots, and dashboards against the particular CSV file. And then I'm getting the new csv with the same format but different data.
What is my approach to using the data from CSV with what I already built for CSV and to avoid building everything from scratch? Hi @Evgeny.Shvarov
AnalyzeThis.Utils.cls has a method called RefreshCube.
At one point, this was briefly in the UI. We took out the UI reference since we weren't (and still aren't) sure how this should fit into AnalyzeThis. Historically the goal was to be able to quickly get a personalized preview of DeepSee. It was not intended to be an automatic cube generator. The goal was to let people see the benefit of DeepSee, but then allow them to create their own cube and no longer rely on the generated cube from AnalyzeThis. However, I do know of multiple cases where people are using it as you describe and would like to refresh the data once a new CSV has been produced with updated data.
Please let me know your thoughts on the pros vs cons of using the AnalyzeThis-generated cube vs using your own cube once you have a model you like and would like to continue using.

Hi Peter! Thanks for the answer.
I don't see any Cons.
Speaking about pros: I have a monthly updated CSV with the same format but different data. I used AnalyzeThis to build a cube, and I added some extra pivots and dashboards myself; next month I don't want to repeat all that work (I really spent a good hour building new pivots and dashboards) but just want to run all the pivots and dashboards against the new data.
So, if you have a regular weekly or monthly CSV, you can successfully use AnalyzeThis to build a cube quickly and then just rerun everything for the new data without any manual work.
Only pros here. If you could add a UI and a programmatic way to reload or append the data for the cube (without changing it), that would be just great.

💡 The article is considered as InterSystems Data Platform Best Practice.
Article
Mikhail Khomenko · Nov 18, 2019
Most of us are more or less familiar with Docker. Those who use it like it for the way it lets us easily deploy almost any application, play with it, break something and then restore the application with a simple restart of the Docker container.

InterSystems also likes Docker. The InterSystems OpenExchange project contains a number of examples that run InterSystems IRIS images in Docker containers that are easy to download and run. You'll also find other useful components, such as the Visual Studio IRIS plugin.

It's easy enough to run IRIS in Docker with additional code for specific use cases, but if you want to share your solutions with others, you'll need some way to run commands and repeat them after each code update. In this article, we'll see how to use Continuous Integration/Continuous Delivery (CI/CD) practices to simplify that process.
Setting Up
We’ll start with a simple REST API application based on IRIS. The details of the application can be found in the video Creating REST API with InterSystems IRIS, ObjectScript and Docker. Let’s see how we could share similar applications with others using CI/CD.
Initially, we’ll clone the code into a personal GitHub repository. If you don’t have an account on GitHub, sign up for one. For convenience, add access via SSH so you don’t need to enter a password with each pull or push. Then go to the intersystems-community/objectscript-rest-docker-template project page on GitHub and click the "Use this Template" button to create your own version of the repo based on the template. Give it a name like “my-objectscript-rest-docker-template”.
Now pull the project to your local machine:
$ git clone git@github.com:<your_account>/my-objectscript-rest-docker-template.git
Next, we’ll add a REST endpoint in the spirit of “hello, world!”.
Endpoints are defined in the src/cls/Sample/PersonREST.cls class. Our endpoint will look like this (defined before the first <Route>):
<Route Url="/helloworld" Method="GET" Call="HelloWorld"/><Route Url="/all" Method="GET" Call="GetAllPersons"/>...
It calls the HelloWorld method:
ClassMethod HelloWorld() As %Status
{
    Write "Hello, world!"
    Quit $$$OK
}
Now we need to consider how this works when pushing to a remote repository. We need to:
Build a Docker image.
Save the Docker image.
Run the container based on this image.
We’ll use the CircleCI service, which is already integrated with GitHub, to build the Docker image. And we’ll use Google Cloud, which allows you to store Docker images and run containers based on them in Kubernetes. Let’s delve into this a little.
Google Cloud Prerequisites
Let’s assume you’ve registered for an account with Google Cloud, which provides a free tier of services. Create a project with the name "Development", then create a Kubernetes cluster by clicking the "Create cluster" button:
For the demo, select "Your first cluster" on the left. Choose a newer version of Kubernetes and a machine type of n1-standard-1. For our purposes, one machine should be enough.
Click the Create button, then set up a connection to the cluster. We’ll use the kubectl and gcloud utilities:
$ gcloud init
[2] Create a new configuration
Configuration name: "development"
[2] Log in with a new account
Pick cloud project to use
Configure a default Compute Region and Zone? (Y/n)? y
Here europe-west1-b was chosen

$ gcloud container clusters get-credentials dev-cluster --zone europe-west1-b --project <project_id>
You can get the last command by clicking the "Connect" button:
Check the status from kubectl:
$ kubectl config current-context
gke_possible-symbol-254507_europe-west1-b_dev-cluster

$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
gke-dev-cluster-pool-2-8096d93c-fw5w   Ready    <none>   17m   v1.14.7-gke.10
Now create a directory called k8s/ under the root project directory to hold the three files that describe the future application in Kubernetes: Namespace, which describes the workspace, Deployment, and Service:
$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: iris

$ cat deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: iris-rest
  namespace: iris
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: iris
  template:
    metadata:
      labels:
        app: iris
    spec:
      containers:
      - image: eu.gcr.io/iris-rest:v1
        name: iris-rest
        ports:
        - containerPort: 52773
          name: web

$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: iris-rest
  namespace: iris
spec:
  selector:
    app: iris
  ports:
  - protocol: TCP
    port: 52773
    targetPort: 52773
  type: LoadBalancer
Send those definitions from your k8s/ directory to the Google Kubernetes Engine (GKE):
$ kubectl apply -f namespace.yaml
$ kubectl apply -f deployment.yaml -f service.yaml
Things won’t be working correctly yet, since we haven’t yet sent the eu.gcr.io/iris-rest:v1 image to the Docker registry, so we see an error:
$ kubectl -n iris get po
NAME                         READY   STATUS         RESTARTS   AGE
iris-rest-64cdb48f78-5g9hb   0/1     ErrImagePull   0          50s

$ kubectl -n iris get svc
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
iris-rest   LoadBalancer   10.0.13.219   <pending>     52773:31425/TCP   20s
When Kubernetes sees a LoadBalancer service, it tries to create a balancer in the Google Cloud environment. If it succeeds, the service will get a real IP address instead of External IP = <pending>.
Before leaving Kubernetes for a bit, let's give CircleCI the ability to push Docker images into the registry and restart Kubernetes deployments by creating a service account. Give your service account EDITOR permission to the project. You’ll find information here on creating and storing a service account.
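If you prefer doing this from the CLI, a service account with EDITOR permission can be created roughly like this (a sketch: the account name circleci-deployer is made up, and roles/editor corresponds to the EDITOR permission mentioned above):

$ gcloud iam service-accounts create circleci-deployer --display-name="CircleCI deployer"
$ gcloud projects add-iam-policy-binding <project_id> --member="serviceAccount:circleci-deployer@<project_id>.iam.gserviceaccount.com" --role="roles/editor"
$ gcloud iam service-accounts keys create key.json --iam-account="circleci-deployer@<project_id>.iam.gserviceaccount.com"

The contents of key.json are what goes into the GCLOUD_SERVICE_KEY variable described below.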
A bit later, when we create and set up the project in CircleCI, you'll need to add three environment variables; two of them, GOOGLE_PROJECT_ID and GCLOUD_SERVICE_KEY, appear in the CircleCI configuration shown below.
The names of these variables speak for themselves. The value of GCLOUD_SERVICE_KEY is the JSON structure Google sends you when you press "Create key" and select a key in the JSON format after creating the Service Account:
CircleCI
Let's turn our attention to CircleCI now, where we'll register using our GitHub account (click Sign Up, then Sign Up with GitHub). After registration, you'll see the dashboard with projects from your GitHub repository listed on the Add Project tab. Click the Set Up Project button for "my-objectscript-rest-docker-template" or whatever you named the repository created from the objectscript-rest-docker-template repo:
Note: all CircleCI screenshots are made as of October 2019. Changes may occur in new versions.
The page that opens tells you how to make your project work with CircleCI. The first step is to create a folder called .circleci and add a file named config.yml to it. The structure of this configuration file is well described in the official documentation. Here are the basic steps the file will contain:
Pull the repository
Build the Docker image
Authenticate with Google Cloud
Upload image to Google Docker Registry
Run the container based on this image in GKE
With any luck, we’ll find some already created configurations (called orbs) we can use. There are certified orbs and third-party ones. The certified GCP-GKE orb has a number of limitations, so let's take a third-party orb — duksis — that meets our needs. Using it, the configuration file turns into (replace names — for example, the cluster name — with correct ones for your implementation):
$ cat .circleci/config.yml
version: 2.1
orbs:
  gcp-gke: duksis/gcp-gke@0.1.9
workflows:
  main:
    jobs:
      - gcp-gke/publish-and-rollout-image:
          google-project-id: GOOGLE_PROJECT_ID
          gcloud-service-key: GCLOUD_SERVICE_KEY
          registry-url: eu.gcr.io
          image: iris-rest
          tag: ${CIRCLE_SHA1}
          cluster: dev-cluster
          namespace: iris
          deployment: iris-rest
          container: iris-rest
The initial configuration of the publish-and-rollout-image task can be viewed on the project page.
We don’t actually need the final three notification steps of this orb, which is good because they won’t work anyway without some additional variables. Ideally, you can prepare your own orb once and use it many times, but we won’t get into that now.
Note that the use of third-party orbs has to be specifically allowed on the "Organization settings" tab in CircleCI:
Finally, it’s time to send all our changes to GitHub and CircleCI:
$ git add .circleci/ k8s/ src/cls/Sample/PersonREST.cls
$ git commit -m "Deploy project to GKE using CircleCI"
$ git push
Let’s check the CircleCI dashboard:
If you forgot to add Google Service Account keys, here’s what you’ll soon see:
So don’t forget to add those environment variables as described in the end of the Google Cloud Prerequisites section. If you forgot, update that information, then click "Rerun workflow."
If the build is successful you’ll see a green bar:
You can also check the Kubernetes pod state separately from the CircleCI Web UI:
$ kubectl -n iris get po -w
NAME                          READY   STATUS             RESTARTS   AGE
iris-rest-64chdb48f78-q5sbw   0/1     ImagePullBackOff   0          15m
…
iris-rest-5c9c86c768-vt7c9    1/1     Running            0          23s
That last line — 1/1 Running — is a good sign.
Let’s test it. Remember, your IP address will differ from mine. Also, you’ll have to figure out about passwords over HTTP yourself as it’s out of scope for this article.
$ kubectl -n iris get svc
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)           AGE
iris-rest   LoadBalancer   10.0.4.242   23.251.143.124   52773:30948/TCP   18m

$ curl -XGET -u _system:SYS 23.251.143.124:52773/person/helloworld
Hello, world!

$ curl -XPOST -H "Content-Type: application/json" -u _system:SYS 23.251.143.124:52773/person/ -d '{"Name":"John Dou"}'

$ curl -XGET -u _system:SYS 23.251.143.124:52773/person/all
[{"Name":"John Dou"},]
It seems the application works. You can continue with the tests described on the project page.
In sum, the combination of GitHub, CircleCI, and Google Kubernetes Engine looks quite promising for testing and deployment of IRIS applications, even though it's not completely free. Also, don't forget that a running Kubernetes cluster can gradually eat your virtual (and then real) money. We are not responsible for any charges you may incur.
💡 This article is considered as InterSystems Data Platform Best Practice.
Announcement
Anastasia Dyubaylo · Dec 6, 2019
Hi Community,
This week we have two new videos from Global Summit 2019. Please welcome:
1. An ML Toolkit for InterSystems IRIS: Co-Innovation in Healthcare
Presenters: 🗣 @Paul.Hegel5642, Senior Developer, Inland Imaging 🗣 David Paxton, Senior Integration Developer, Inland Imaging 🗣 Shiao-Bin Soong, Developer, InterSystems 🗣 David Lepzelter, Data Scientist, InterSystems 🗣 @Sergey.Lukyanchikov, Sales Engineer, InterSystems 🗣 @Eduard.Lebedyuk, Sales Engineer, InterSystems
2. An ML Toolkit for InterSystems IRIS: Co-Innovation in Banking
Presenters: 🗣 Gulzahida Sadyrova, Head of Digital, Halyk Bank (People's Bank of Kazakhstan) 🗣 Arman Zhaulbaev, Data & Analytics Manager, Halyk Bank (People's Bank of Kazakhstan) 🗣 @Sergey.Lukyanchikov, Sales Engineer, InterSystems 🗣 @Eduard.Lebedyuk, Sales Engineer, InterSystems
Python and R are the de facto standard languages for data science, due to their ease of use and huge array of third-party libraries for machine learning and analytics. The ML Toolkit, available on OpenExchange, enables developers to edit and invoke Python/R code from within InterSystems IRIS. These videos provide an introduction to the ML Toolkit and demonstrate using InterSystems IRIS as both a standalone development platform and an orchestration tool for predictive modeling.
Takeaway: The ML Toolkit enables machine learning and other complex application development in the R and Python languages.
You can find additional materials for these videos in this InterSystems Online Learning Course.
Enjoy watching these videos! 👍🏼
Article
Evgeny Shvarov · Jan 17, 2020
Hi Developers!
Recently we published on Docker Hub images for InterSystems IRIS Community Edition and InterSystems IRIS Community for Health containers.
What is that?
There is a repository that publishes them, and in fact, they are the same IRIS Community Edition containers you have in the official InterSystems listing, but with the ObjectScript Package Manager (ZPM) client preloaded.
So if you run one of these containers with IRIS CE or IRIS CE for Health, you can immediately start using ZPM and install packages from the Community Registry or any other registry.
What does this mean for you?
It means that anyone can deploy any of your InterSystems ObjectScript applications in 3 commands:
run IRIS container;
open terminal;
install your application as ZPM package.
It is safe, fast and cross-platform.
It's really handy if you want to test a new interesting ZPM package and not harm any of your systems.
Suppose you have Docker Desktop installed. You can run the image, which will pull the latest container if you don't have it locally:
$ docker run --name iris-ce -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm-dev
Or the following for InterSystems IRIS for Health:
$ docker run --name iris-ce -d --publish 52773:52773 intersystemsdc/irishealth-community:2019.4.0.383.0-zpm-dev
Open terminal to it:
$ docker exec -it iris-ce iris session iris
Node: e87717c3d95d, Instance: IRIS
USER>
Install ZPM module:
USER>zpm
zpm: USER>install objectscript-math
[objectscript-math] Reload START
[objectscript-math] Reload SUCCESS
[objectscript-math] Module object refreshed.
[objectscript-math] Validate START
[objectscript-math] Validate SUCCESS
[objectscript-math] Compile START
[objectscript-math] Compile SUCCESS
[objectscript-math] Activate START
[objectscript-math] Configure START
[objectscript-math] Configure SUCCESS
[objectscript-math] Activate SUCCESS
zpm: USER>q
USER>w ##class(Math.Math).LeastCommonMultiple(134,382)
25594
USER>
Happy coding with ObjectScript and ZPM!
Since the docker pull command is not necessary, I omitted it from the text, but here it is just in case:
For InterSystems IRIS:
docker pull intersystemsdc/iris-community:2019.4.0.383.0-zpm-dev
And for InterSystems IRIS for Health:
docker pull intersystemsdc/irishealth-community:2019.4.0.383.0-zpm-dev
Announcement
Anastasia Dyubaylo · Oct 17, 2019
Hi Developers,
New Coding Talk, recorded by @Evgeny.Shvarov, is available on InterSystems Developers YouTube:
🎯 Creating REST API with InterSystems IRIS, ObjectScript and Docker
In this video, you will learn how to create a REST API with InterSystems IRIS and ObjectScript from scratch, using a GitHub template, VSCode, and a Docker container.
Please check the additional links:
GitHub Template
ObjectScript Docker Template App on Open Exchange
Feel free to ask your questions in the comments to this post.
Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Oct 22, 2019
Hi Community,
We're pleased to invite you to Europe's biggest data science gathering, the Data Natives Conference 2019, on 25-26 November in Berlin, Germany! Join us and learn how InterSystems technology can support your AI & ML initiatives to help you shape our world! 🔥
What's more?
Take part in @Benjamin.DeBoe's keynote "From data swamps to clean data lakes", in which he will present best practices for sustainable AI/ML. And visit the InterSystems booth on the 1st floor to discuss real-world use cases and state-of-the-art tools with our experts!
So, remember:
⏱ Time: November 25-26, 2019
📍Venue: Kühlhaus Berlin, Berlin, Germany
✅ Registration: GET YOUR TICKET HERE
We look forward to welcoming you in Berlin!
Article
Evgeny Shvarov · Nov 18, 2019
Hi Developers!
Recently we announced two new challenges on Global Masters: 'Bugs Bounty' and 'Pull Requests'.
And we are getting a lot of submissions to these challenges that are not what we expect there, so I hope this post will shed some light on these quests.
'Bugs Bounty'
Ok! What are we expecting from 'Bugs bounty'?
There are a lot of Open Exchange solutions that come with public open-source repositories on Github: project and repo, another project and the repo, another one and its repo, and many more on Open Exchange.
The idea of the challenge is to install a solution or tool that has a GitHub repo, test it, and, if you find a bug, submit an issue, e.g., to the repo of this project.
Then submit the link to the issue to the 'Bugs Bounty' challenge on Global Masters; we will check the link and send you the points.
If the repo maintainer closes the issue, send it to us again and get even more points!
That's it! What is the next one?
'Pull Request'
This challenge is somewhat related to 'Bugs Bounty': it could be a fix to a bug you found in the previous challenge. Let me explain it a bit.
We invite you not only to find bugs but also to fix them! The way to do that on GitHub is with Pull Requests (PRs).
How do you make a PR? You fork the repo, clone it to your local machine, fix the bug, commit and push the changes to your fork, and then create a pull request; a minimal command-line sketch follows below. Check the video about GitHub Pull Requests and the video on How to Make GitHub Pull Requests with InterSystems IRIS.
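A minimal command-line sketch of that workflow (the repo and branch names here are made up for illustration):

$ git clone git@github.com:<your_account>/objectscript-math.git
$ cd objectscript-math
$ git checkout -b fix-division-bug      # create a branch for your fix
# ... edit the code and fix the bug ...
$ git add .
$ git commit -m "Fix division bug"
$ git push -u origin fix-division-bug
# Then open your fork on GitHub and click "Compare & pull request"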
Once you've created the PR, you can submit the link to it in the challenge and collect your GM points. Check the example of a PR.
Moreover, if the repo maintainer accepts your pull request, you get even more points!
And you can fix not only the bugs you find yourself; you can submit a pull request for any issue listed in a repo that you know how to fix.
And it doesn't have to be a bug; it could be an enhancement.
I hope this clears things up, and we are looking forward to new issues posted and pull requests submitted!
Article
Eduard Lebedyuk · Sep 26, 2022
Welcome to the next chapter of [my CI/CD series](https://community.intersystems.com/post/continuous-delivery-your-intersystems-solution-using-gitlab-index), where we discuss possible approaches toward software development with InterSystems technologies and GitLab.
Today, let's talk about interoperability.
# Issue
When you have an active interoperability production, you have two separate process flows: a working production that processes messages and a CI/CD process flow that updates code, production configuration and [system default settings](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other_default_settings).
Clearly, the CI/CD process affects interoperability. But the questions are:
- What exactly happens during an update?
- What do we need to do to minimize or eliminate production downtime during an update?
# Terminology
- Business Host (BH) - one configurable element of Interoperability Production: Business Service (BS), Business Process (BP, BPL), or Business Operation (BO).
- Business Host Job (Job) - InterSystems IRIS job that runs Business Host code and is managed by Interoperability production.
- Production - interconnected collection of Business Hosts.
- System Default Settings (SDS) - values that are specific to the environment where InterSystems IRIS is installed.
- Active Message - a request which is currently being processed by one Business Host Job. One Business Host Job can have a maximum of one Active Message. Business Host Job, which does not have an Active Message, is idle.
# What's going on?
Let's start with the Production Lifecycle.
## Production Start
First of all, Production can be started. Only one production per namespace can run simultaneously, and in general (unless you really know what and why you're doing it), only one production should be run per namespace, ever. Switching back and forth in one namespace between two or more different productions is not recommended. Starting production starts all enabled Business Hosts defined in the production. Failure of some Business Hosts to start does not affect Production start.
Tips:
- Start the production from the System Management Portal or by calling: `##class(Ens.Director).StartProduction("ProductionName")`
- Execute arbitrary code on Production start (before any Business Host Job is started) by implementing an `OnStart` method
- Production start is an auditable event. You can always see who did that and when in the Audit Log.
## Production Update
After Production has been started, [Ens.Director](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Director) continuously monitors the running production. Two production states exist: the _target state_, defined in the production class and System Default Settings, and the _running state_: the currently running jobs with the settings applied when those jobs were created. If the target and running states are identical, everything is good, but if there's a difference, production could (and should) be updated. Usually, you see that as a red `Update` button on the Production Configuration page in the System Management Portal.
Updating production means an attempt to get the current Production state to match the target Production state.
When you run `##class(Ens.Director).UpdateProduction(timeout=10, force=0)` to update the production, it does the following for each Business Host:
1. Compares active settings to production/SDS/class settings
2. If, and only if, (1) shows a mismatch, the Business Host is marked as out-of-date and requiring an update.
After running this for each Business Host, `UpdateProduction` builds the set of changes:
- Business Hosts to stop
- Business Hosts to start
- Production settings to update
And after that, applies them.
This way, “updating” settings without changing anything results in no production downtime.
Tips:
- Update the production from the System Management Portal or by calling: `##class(Ens.Director).UpdateProduction(timeout=10, force=0)`
- Default System Management Portal update timeout is 10 seconds. If you know that processing your messages takes more than that, call `##class(Ens.Director).UpdateProduction` with a larger timeout.
- Update Timeout is a production setting, and you can change it to a larger value. This setting applies to the System Management Portal.
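For example, to give in-flight messages up to ten minutes to complete, a call like this works (a sketch based on the signature quoted above):

```objectscript
// Allow active messages up to 600 seconds to complete before the update
// gives up; force=0 means no Business Host Job is killed mid-message.
Set sc = ##class(Ens.Director).UpdateProduction(600, 0)
If 'sc Do $System.Status.DisplayError(sc)
```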
## Code Update
`UpdateProduction` DOES NOT UPDATE the BHs with out-of-date code. This is a safety-oriented behavior, but if you want to automatically update all running BHs if the underlying code changes, follow these steps:
First, load and compile like this:
```objectscript
do $system.OBJ.LoadDir(dir, "", .err, 1, .load)
do $system.OBJ.CompileList(load, "curk", .errCompile, .listCompiled)
```
Now, `listCompiled` would contain all items which were actually compiled (use [git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls) to minimize loaded set) due to the `u` flag. Use this `listCompiled` to get a $lb of all classes which were compiled:
```objectscript
set classList = ""
set class = $o(listCompiled(""))
while class'="" {
set classList = classList _ $lb($p(class, ".", 1, *-1))
set class=$o(listCompiled(class))
}
```
And after that, calculate a list of BHs which need a restart:
```sql
SELECT %DLIST(Name) bhList
FROM Ens_Config.Item
WHERE 1=1
AND Enabled = 1
AND Production = :production
AND ClassName %INLIST :classList
```
Finally, after obtaining `bhList` stop and start affected hosts:
```objectscript
for stop = 1, 0 {
for i=1:1:$ll(bhList) {
set host = $lg(bhList, i)
set sc = ##class(Ens.Director).TempStopConfigItem(host, stop, 0)
}
set sc = ##class(Ens.Director).UpdateProduction()
}
```
## Production Stop
Productions can be stopped, which means sending a request to all Business Host Jobs to shut down (safely, after they are done with their active messages, if any).
Tips:
- Stop the production from the System Management Portal or by calling: `##class(Ens.Director).StopProduction(timeout=10, force=0)`
- Default System Management Portal stop timeout is 120 seconds. If you know that processing your messages takes more than that, call `##class(Ens.Director).StopProduction` with a larger timeout.
- Shutdown Timeout is a production setting. You can change that to a larger value. This setting applies to the System Management Portal.
- Execute arbitrary code on Production stop by implementing an `OnStop` method
- Production stop is an auditable event; you can always see who did that and when in the Audit Log.
The important thing here is that Production is a sum total of the Business Hosts:
- Starting production means starting all enabled Business Hosts.
- Stopping production means stopping all running Business Hosts.
- Updating production means calculating a subset of Business Hosts which are out of date, so they are first stopped and immediately after that started again. Additionally, a newly added Business Host is only started, and a Business Host deleted from production is just stopped.
That brings us to the Business Hosts lifecycle.
## Business Host Start
Business Hosts are composed of identical Business Host Jobs (according to the Pool Size setting value). Starting a Business Host means starting all of its Business Host Jobs. They are started in parallel.
Individual Business Host Job starts like this:
1. Interoperability JOBs a new process that would become a Business Host Job.
2. The new process registers as an Interoperability job.
3. Business Host code and Adapter code is loaded into process memory.
4. Settings related to a Business Host and Adapter are loaded into memory. The order of precedence is:
a. Production Settings (overrides System Default and Class Settings).
b. System Default Settings (overrides Class Settings).
c. Class Settings.
5. Job is ready and starts accepting messages.
After (4) is done, the Job can't change settings or code, so when you import new/same code and new/same system default settings, it does not affect currently running Interoperability jobs.
## Business Host Stop
Stopping a Business Host Job means:
1. Interoperability orders Job to stop accepting any more messages/inputs.
2. If there’s an active message, Business Host Job has timeout seconds to process it (by completing it - finishing `OnMessage` method for BO, `OnProcessInput` for BS, state `S` method for BPL BPs, and `On*` method for BPs).
3. If an active message has not been processed till the timeout and `force=0`, production update fails for that Business Host (and you’ll see a red Update button in the SMP).
4. Stop succeeds if anything on this list is true:
- No active message
- Active message was processed before the `timeout`
- Active message was not processed before the timeout BUT `force=1`
5. Job is deregistered with Interoperability and halts.
## Business Host Update
Business host update means stopping currently running Jobs for the Business Host and starting new Jobs.
## Business Rules, Routing Rules, and DTLs
All Business Hosts immediately start using new versions of Business Rules, Routing Rules, and DTLs as they become available. A restart of a Business Host is not required in this situation.
# Offline updates
Sometimes, however, Production updates require downtime of individual Business Hosts.
## Rules depend on new code
Consider the situation. You have a current Routing Rule X which routes messages to either Business Process A or B based on arbitrary criteria.
In a new commit, you add, simultaneously:
- Business Process C
- A new version of Routing Rule X, which routes messages to A, B, or C.
In this scenario, you can't just load the rule first and update the production second, because the newly compiled rule would immediately start routing messages to Business Process C, which InterSystems IRIS might not have compiled yet, or which Interoperability has not yet updated to use.
In this case, you need to disable the Business Host with a Routing Rule, update the code, update production and enable the Business Host again.
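A sketch of that disable/update/enable sequence, assuming `Ens.Director.EnableConfigItem` from ENSLIB and a hypothetical host name "RouterX":

```objectscript
// Disable the router (and update production) so the old rule stops taking messages
Set sc = ##class(Ens.Director).EnableConfigItem("RouterX", 0, 1)
// ... load and compile the new code here, e.g. $System.OBJ.LoadDir + CompileList ...
// Re-enable the router; it starts with the new rule and the compiled Process C
Set sc = ##class(Ens.Director).EnableConfigItem("RouterX", 1, 1)
```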
Notes:
- If you update a production using a [production deployment file](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_deploying), it will automatically disable/enable all affected BHs.
- For InProc invoked hosts, the compilation invalidates the cache of the particular host held by the caller.
## Dependencies between Business Hosts
Dependencies between Business Hosts are critical. Imagine you have Business Processes A and B, where A sends messages to B.
In a new commit, you add, simultaneously:
- A new version of Process A, which sets a new property X in a request to B
- A new version of Process B which can process a new property X
In this scenario, we MUST update Process B first and A second. You can do this in one of two ways:
- Disable Business Hosts for the duration of the update
- Split the update into two: first, update Process B only, and after that, in a separate update, start sending messages to it from Process A.
A more challenging variation on this theme, where new versions of Processes A and B are incompatible with old versions, requires Business Host downtime.
## Queues
If you know that after the update, a Business Host will not be able to process old messages, you need to guarantee that the Business Host Queue is empty before the update. To do that, disable all Business Hosts that send messages to the Business Host and wait till its queue becomes empty.
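A minimal sketch of that wait, assuming the `Ens.Queue.GetCount` queue-size call and a hypothetical host name "TargetBH" (its senders are already disabled at this point):

```objectscript
// Poll the Business Host queue for up to 5 minutes, until it is empty
For i=1:1:300 {
    Quit:'##class(Ens.Queue).GetCount("TargetBH")
    Hang 1
}
```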
## State change in BPL Business Processes
First, a little intro into how BPL BPs work. After you compile a BPL BP, two classes get created in a package with the same name as the full BPL class name:
- `Thread1` class contains methods S1, S2, ... SN, which correspond to activities within BPL
- `Context` class has all context variables and also the next state which BPL would execute (i.e., `S5`)
Also, the BPL class is persistent and stores the requests currently being processed.
BPL works by executing `S` methods in a `Thread` class and correspondingly updating the BPL class table, `Context` table, and `Thread1` table where one message "being processed" is one row in a BPL table. After the request is processed, BPL deletes the BPL, `Context`, and `Thread` entries. Since BPL BPs are asynchronous, one BPL job can simultaneously process many requests by saving information between `S` calls and switching between different requests.
For example, BPL processed one request till it got to a `sync` activity - waiting for an answer from BO. It would save the current context to disk, with `%NextState` property (in `Thread1` class) set to response activity `S` method, and work on other requests until BO answers. After BO answers, BPL would load Context into memory and execute the method corresponding to a state saved in `%NextState` property.
Now, what happens when we update the BPL?
First, we need to check that at least one of the two conditions is satisfied:
- During the update, the Context table is empty, meaning no active messages are being worked on.
- The New States are the same as the old States, or new States are added after the old States.
If at least one condition is satisfied, we are good to go. There are either no pre-update requests for post-update BPL to process, or States are added at the end, meaning old requests can also go there (assuming that pre-update requests are compatible with post-update BPL activities and processing).
But what if you have active requests in processing and BPL changes state order? Ideally, if you can wait, disable BPL callers and wait till the Queue is empty. Validate that the Context table is also empty. Remember that the Queue shows only unprocessed requests, and the Context table stores requests which are being worked on, so you can have a situation where a very busy BPL shows zero Queue size, and that's normal.
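One way to validate that is a direct count over the Context table (a sketch; `MyPkg_MyBPL` stands for the SQL schema of a hypothetical BPL class `MyPkg.MyBPL`):

```sql
-- The Queue holds unprocessed requests; in-flight ones live in the Context table.
-- Zero rows here means no request is currently mid-state in this BPL.
SELECT COUNT(*) FROM MyPkg_MyBPL.Context
```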
After that, disable the BPL, perform the update and enable all previously disabled Business Hosts.
If that's not possible (usually in a case where there is a very long BPL, i.e., I remember updating one that took around a week to process a request, or the update window is too short), use [BPL versioning](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EBPLR_process).
Alternatively, you can write an update script. In this update script, map old next states to new next states and run it on `Thread1` table so that updated BPL can process old requests. BPL, of course, must be disabled for the duration of the update.
That said, it's an extremely rare situation, and usually, you don't have to do this, but if you ever need to do that, that's how.
# Conclusion
Interoperability implements a sophisticated algorithm to minimize the number of actions required to actualize Production after the underlying code change. Call UpdateProduction with a safe timeout on every SDS update. For every code update, you need to decide on an update strategy.
Minimizing the amount of compiled code by using [git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls) helps with the compilation time, but "updating" the code with itself and recompiling it or "updating" the settings with the same values does not trigger or require a Production Update.
Updating and compiling Business Rules, Routing Rules, and DTLs makes them immediately accessible without a Production Update.
Finally, Production Update is a safe operation and usually does not require downtime.
# Links
- [Ens.Director](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Director)
- [Building git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls)
- [System Default Settings](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other_default_settings)
The author would like to thank @James.MacKeith, @Dmitry.Zasypkin, and @Regilo.Souza for their invaluable help with this article.