Introduction
Several resources tell us how to run IRIS in a Kubernetes cluster, such as Deploying an InterSystems IRIS Solution on EKS using GitHub Actions and Deploying InterSystems IRIS solution on GKE Using GitHub Actions. These methods work, but they require you to create Kubernetes manifests and Helm charts, which can be rather time-consuming.
To simplify IRIS deployment, InterSystems developed an amazing tool called InterSystems Kubernetes Operator (IKO). A number of official resources explain IKO usage in detail, such as New Video: InterSystems IRIS Kubernetes Operator and InterSystems Kubernetes Operator.
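Purely as an illustration of how little manifest work IKO leaves us, below is a minimal, hypothetical sketch that submits an IrisCluster custom resource from a script using the Kubernetes Python client (normally you would simply run kubectl apply -f on an equivalent YAML file). The CRD group and version, the spec fields, the secret name, and the image reference are assumptions that vary by IKO version; treat them as placeholders and check the IKO documentation before relying on any of them.

# Sketch: create a minimal IrisCluster custom resource with the Kubernetes Python client.
# Assumes IKO is already installed in the cluster and the current kubectl context points at it.
# The spec below is illustrative only; field names differ across IKO versions.
from kubernetes import client, config

config.load_kube_config()                       # use the current kubectl context
api = client.CustomObjectsApi()

iriscluster = {
    "apiVersion": "intersystems.com/v1alpha1",  # assumed CRD group/version
    "kind": "IrisCluster",
    "metadata": {"name": "sample-iris"},
    "spec": {
        "licenseKeySecret": {"name": "iris-key-secret"},  # hypothetical secret name
        "topology": {
            "data": {"image": "containers.intersystems.com/intersystems/iris:<tag>"},  # <tag> is a placeholder
        },
    },
}

api.create_namespaced_custom_object(
    group="intersystems.com",
    version="v1alpha1",
    namespace="default",
    plural="irisclusters",
    body=iriscluster,
)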


In this article, we'll look at one of the ways to monitor the InterSystems IRIS data platform (IRIS) deployed in Google Kubernetes Engine (GKE). GKE integrates easily with Cloud Monitoring, which simplifies our task. As a bonus, the article shows how to display metrics from Cloud Monitoring in Grafana.

Note that the Google Cloud Platform services used in this article are not free (price list), but you can leverage the free tier. This article assumes that you already have a project in Google Cloud Platform (referred to as <your_project_id>) and have permission to use it.
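As a quick, hedged illustration of the Cloud Monitoring side, the sketch below pulls one hour of a built-in GKE metric with the google-cloud-monitoring Python client. It assumes the Monitoring API is enabled for <your_project_id> and that application-default credentials (for example, from gcloud auth application-default login) can read metrics; the metric filter is just a generic GKE example, not an IRIS-specific metric.

# Sketch: read one hour of a built-in GKE metric from Cloud Monitoring.
# Assumes the Monitoring API is enabled and application-default credentials are configured.
import time
from google.cloud import monitoring_v3

PROJECT_ID = "<your_project_id>"  # the project referred to in this article

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},  # the last hour
    }
)

# Example filter: container CPU usage reported by GKE; replace it with the metric you care about.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "kubernetes.io/container/cpu/core_usage_time"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    print(series.resource.labels.get("pod_name"), len(series.points), "points")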

Question
Sai Sai · Sep 15
Arranging Project

I want to start this project and would like to know the best practices you use to arrange a project. I have done a lot of MVC projects and APIs in C#, but InterSystems is new to me. Please give me some suggestions on how to arrange the objects. For example, where should I store the production objects such as services, processes, and operations? Should they go under a folder named for each resource? What are the base classes, and how should I store them? Basically, please give me some idea of how to arrange them.

 

Thank you 

Sai


Hello,

We have a need to track database changes over time, down to the SQL level of granularity if possible. For example: user xyz runs routine ^abc, and we get something similar to a changelog telling us that table A had this value updated, a row inserted, and so on.

Is that possible using IRIS-level tools (Audit Log, Journal File, etc.)? Is there a way to convert the global sets and kills from the journals into SQL-level changes?

 

InterSystems Official
Steven LeBlanc · Aug 21, 2020
Introducing InterSystems Container Registry

I am pleased to announce the availability of InterSystems Container Registry. This provides a new distribution channel for customers to access container-based releases and previews. All Community Edition images are available in a public repository with no login required. All fully released images (IRIS, IRIS for Health, Health Connect, System Alerting and Monitoring, InterSystems Cloud Manager) and utility images (such as arbiter, Web Gateway, and PasswordHash) require a login token, generated from your WRC account credentials.
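For orientation, here is a hypothetical sketch of pulling a Community Edition image from the registry, driven from a small Python script via the Docker CLI. The registry host containers.intersystems.com and the intersystems/iris-community repository match the registry described above, but the tag is a placeholder you would replace with a real release; fully released images additionally require docker login with a token generated from your WRC account.

# Sketch: pull an InterSystems Community Edition image from InterSystems Container Registry.
# Community Edition repositories are public, so no login is needed for this pull.
# Fully released images would first need: docker login containers.intersystems.com
import subprocess

REGISTRY = "containers.intersystems.com"
IMAGE = f"{REGISTRY}/intersystems/iris-community:<tag>"  # <tag> is a placeholder for a real release tag

# Equivalent to running `docker pull containers.intersystems.com/intersystems/iris-community:<tag>`
subprocess.run(["docker", "pull", IMAGE], check=True)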


Some changes to the IRIS configuration require a restart of IRIS.
This is no big issue as long as I have access to the server command line with sufficient privileges.

In a container, this is not always a given.
Stopping IRIS from the terminal/session prompt is no problem,
but the restart afterwards is.

Note 1: stopping and restarting the container is not an option, as it might be removed by the --rm option in docker run.
Note 2: the target is Linux (mainly in Docker); Windows is excluded.
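As a hedged sketch of the idea being asked about (not a confirmed solution), the snippet below stops and restarts the instance from inside the container using the iris command-line utility, wrapped in a small Python script. It assumes the instance is named IRIS, that the shell has sufficient OS privileges, and, crucially, that the container's entrypoint keeps running while IRIS itself is down; whether that last assumption holds is exactly the open question here.

# Sketch: bounce the IRIS instance from inside a running container.
# Assumes an instance named IRIS and enough privileges to run the `iris` utility.
# Whether the container stays alive while IRIS is stopped depends on its entrypoint.
import subprocess

INSTANCE = "IRIS"  # assumed instance name

# Shut the instance down without interactive prompts...
subprocess.run(["iris", "stop", INSTANCE, "quietly"], check=True)
# ...and bring it back up in the same container.
subprocess.run(["iris", "start", INSTANCE], check=True)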


 

I'm almost running out of disk space, so I want to move one database to a different hard drive.
It's a rather simple but lengthy action during a shutdown of IRIS.
But is this somehow possible at runtime in a stand-alone installation?
I'm looking for a kind of "local drive failover".

Note: Mirror or Shadow is not an option.


As we all well know, InterSystems IRIS has an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including the use of parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature developments that started back in Caché and have been carried over into IRIS actively use the multi-model features of this DBMS, which here means allowing different data models to coexist within a single database. For example, the HIS qMS database contains semantic relational (electronic medical records) as well as traditional relational (interaction with PACS) and hierarchical data models (laboratory data and integration with other systems). Most of the listed models are implemented using SP.ARM's qWORD tool (a mini-DBMS based on direct access to globals). Therefore, unfortunately, it is not possible to use the new capabilities of parallel query processing for scaling, since these queries do not use IRIS SQL access.

Meanwhile, as the size of the database grows, most of the problems inherent in large relational databases arise for non-relational ones as well. This is a major reason why we are interested in parallel data processing as one of the tools that can be used for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have been dealing with over the years when solving tasks that are rarely mentioned in discussions of Big Data. I am going to be focusing on the technological transformation of databases, or, rather, technologies for transforming databases.


This article is a continuation of Deploying InterSystems IRIS solution on GKE Using GitHub Actions, in which, with the help of a GitHub Actions pipeline, our zpm-registry was deployed in a Google Kubernetes cluster created by Terraform. To avoid repeating ourselves, we'll take as a starting point that:
