*** archived ***

This question has come up several times, and I have seen mixed answers but no quick example.

My personal preference is to use the CPIPE device, as you get back exactly the output you would see at the command-line interface of your OS.
The tricky thing is to stop reading in time.
The example just displays what you normally see in your console.
It becomes useful if you are looking for things that you can't get from any $system.whatever() call.
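For illustration, here is a minimal sketch of reading a command's output through a CPIPE device; the command, device name, and timeout are just examples, and reading stops by trapping the <ENDOFFILE> error:

 set dev="|CPIPE|1",cmd="df -h"
 open dev:cmd:10                  ; open the command pipe with a 10-second timeout
 try {
     for {
         use dev read line        ; READ raises <ENDOFFILE> once the output is exhausted
         use 0 write line,!       ; echo each line to the principal device
     }
 } catch ex {
     ; reaching <ENDOFFILE> is the expected way to stop reading
 }
 close dev
 use 0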


Container Images

In this second post on container fundamentals, we take a look at what container images are.

What is a container image?

A container image is merely a binary representation of a container.

A running container, or simply a container, is the runtime state of the related container image.

Please see the first post that explains what a container is.

InterSystems Official · Aug 21, 2020
Introducing InterSystems Container Registry

I am pleased to announce the availability of InterSystems Container Registry. This provides a new distribution channel for customers to access container-based releases and previews. All Community Edition images are available in a public repository with no login required. All full released images (IRIS, IRIS for Health, Health Connect, System Alerting and Monitoring, InterSystems Cloud Manager) and utility images (such as arbiter, Web Gateway, and PasswordHash) require a login token, generated from your WRC account credentials.


Released with no formal announcement in IRIS preview release 2019.4 is the /api/monitor service exposing IRIS metrics in Prometheus format. Big news for anyone wanting to use IRIS metrics as part of their monitoring and alerting solution. The API is a component of the new IRIS System Alerting and Monitoring (SAM) solution that will be released in an upcoming version of IRIS.
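As a quick illustration (not from the original announcement), the endpoint can be queried like any other REST API; here is a minimal ObjectScript sketch, where the host and web-server port are assumptions to adjust for your instance:

 set req=##class(%Net.HttpRequest).%New()
 set req.Server="localhost"                 ; assumption: host of the IRIS web server
 set req.Port=52773                         ; assumption: default private web server port
 if req.Get("/api/monitor/metrics") {
     do req.HttpResponse.OutputToDevice()   ; writes the Prometheus-format metric lines
 }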


If a picture is worth a thousand words, what's a video worth? Certainly more than typing a post.

Please check out my "Coding talks" on InterSystems Developers YouTube:

1. Analysing InterSystems IRIS System Performance with Yape. Part 1: Installing Yape

https://www.youtube.com/embed/3KClL5zT6MY

Running Yape in a container.

2. Yape Container SQLite iostat InterSystems

https://www.youtube.com/embed/cuMLSO9NQCM

Extracting and plotting pButtons data including timeframes and iostat.


Last time we deployed a simple IRIS application to the Google Cloud. Now we’re going to deploy the same project to Amazon Web Services using its Elastic Kubernetes Service (EKS).

We assume you’ve already forked the IRIS project to your own private repository. It’s called <username>/my-objectscript-rest-docker-template in this article. <root_repo_dir> is its root directory.

Before getting started, install the AWS command-line interface and, for Kubernetes cluster creation, eksctl, a simple CLI utility. For AWS you can try to use aws2, but you'll need to configure aws2 usage in the kube config file as described here.


Last time we launched an IRIS application in the Google Cloud using its GKE service.

And, although creating a cluster manually (or through gcloud) is easy, the modern Infrastructure-as-Code (IaC) approach advises that the description of the Kubernetes cluster should be stored in the repository as code as well. How to write this code is determined by the tool that’s used for IaC.

In the case of Google Cloud, there are several options, among them Deployment Manager and Terraform. Opinions are divided as to which is better: if you want to learn more, read this Reddit thread Opinions on Terraform vs. Deployment Manager? and the Medium article Comparing GCP Deployment Manager and Terraform.


Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good.

Inevitably, though, you’ll need to make changes.


Let's say I want to execute this Caché script (saved as test.txt) from the OS terminal:

zn "USER"
write 1
zn "%SYS"
write 2
halt

Executing the following command in a terminal:

csession cache < test.txt

Would yield this output:

$ csession cache < test.txt

Node: gitlab-test, Instance: CACHE

USER>

USER>
1
USER>

%SYS>
2
%SYS>
Job succeeded

Is there a better way to run these scripts?

Currently I have two problems:


In an earlier article (I hope you've read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why then would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don't need to rely on some external, albeit cool, service.

In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).


Introduction
Several resources tell us how to run IRIS in a Kubernetes cluster, such as Deploying an InterSystems IRIS Solution on EKS using GitHub Actions and Deploying InterSystems IRIS solution on GKE Using GitHub Actions. These methods work but they require that you create Kubernetes manifests and Helm charts, which might be rather time-consuming.
To simplify IRIS deployment, InterSystems developed an amazing tool called InterSystems Kubernetes Operator (IKO). A number of official resources explain IKO usage in detail, such as New Video: Intersystems IRIS Kubernetes Operator and InterSystems Kubernetes Operator.


A lot of developers like to work with Studio and have been looking into source code version control, such as Git, or into enabling modern development workflows like CI/CD or DevOps processes.

This article describes an elementary solution to get you started with CI/CD and DevOps, even if you are not yet ready to move to Atelier or the forthcoming VS Code approach, which enables client-side source code version control.


Most of us are more or less familiar with Docker. Those who use it like it for the way it lets them easily deploy almost any application, play with it, break something, and then restore the application with a simple restart of the Docker container.


Adding VSCode into your IRIS container

One of the easiest ways to set up repeatable development environments is to spin up containers for them. I find that when iterating quickly, it is very convenient to host a VS Code instance within my development container. Thus, I have created a quick container script to add a browser-based VS Code into an IRIS container. This should work for most 2021.1+ containers. My code repository can be found here.

Article · Jul 21, 2022 11m read
ECP With Docker

Hi community,

This is the third article in the series about initializing IRIS instances with Docker. This time, we will focus on Enterprise Cache Protocol (ECP).

In a very simplified way, ECP allows configuring some IRIS instances as application servers and others as data servers. Detailed technical information can be found in the official documentation.

This article aims to describe:


Some changes in IRIS configuration require a restart of IRIS.
This is no big issue as long as I have access to the server command line with sufficient privileges.

In a container, this is not always available.
Stopping IRIS from the terminal/session prompt is no problem.
But restarting it afterwards is.

Note 1: Stopping and starting the container is not an option, as the container might be removed by the --rm option of docker run.
Note 2: The target is Linux (mainly in Docker); Windows is excluded.

GitHub


As we all well know, InterSystems IRIS has an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including the use of parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature developments that started back in Caché and have been carried over into IRIS actively use the multi-model features of this DBMS, which are understood as allowing the coexistence of different data models within a single database. For example, the HIS qMS database contains both semantic relational (electronic medical records) as well as traditional relational (interaction with PACS) and hierarchical data models (laboratory data and integration with other systems). Most of the listed models are implemented using SP.ARM's qWORD tool (a mini-DBMS that is based on direct access to globals). Therefore, unfortunately, it is not possible to use the new capabilities of parallel query processing for scaling, since these queries do not use IRIS SQL access.

Meanwhile, as the size of the database grows, most of the problems inherent to large relational databases hold true for non-relational ones as well. This is a major reason why we are interested in parallel data processing as one of the tools that can be used for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have been dealing with over the years when solving tasks that are rarely mentioned in discussions of Big Data. I am going to be focusing on the technological transformation of databases, or, rather, technologies for transforming databases.
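To make the idea of parallel data processing a bit more concrete, here is a minimal, purely illustrative ObjectScript sketch using the Work Queue Manager ($system.WorkMgr); the class, method, and chunk count are hypothetical and are not the qWORD approach discussed in this article:

 set queue=$system.WorkMgr.%New()                 ; work queue backed by a pool of worker jobs
 for i=1:1:8 {
     ; hypothetical worker method transforming the i-th of 8 slices of a source global
     set sc=queue.Queue("##class(Demo.Transform).Chunk",i,8)
 }
 set sc=queue.WaitForComplete()                   ; block until every queued chunk has finished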

Question · Aug 25, 2019
Automation with Ansible

I am trying to modernize tasks I have to do on Caché, like changing global variables on different servers and in different namespaces...

Currently, I have a bash script that uses ssh to connect to each server and runs a script like this on each of them:

echo "zn \"namespace\"
s ^Var=1
"|csession ensapp


Hi All,

With this article, I would like to show you how easily and dynamically System Alerting and Monitoring (or SAM for short) can be configured. The use case could be a fast and agile CI/CD provisioning pipeline where you run not only your unit tests but also stress tests, and you want to be able to see quickly whether those tests succeed and how they stress the system and your application (the InterSystems IRIS back-end SAM API is extensible for your APM implementation).


We are currently implementing the Data Innovations Instrument Manager product. In setting up our backup process we want to use Veeam snapshots. The application runs in a Caché 2016.1/Windows Server 2016 instance. We are running an HA primary/secondary/arbiter config. The statement below is from DI. I am curious to see what others who have implemented the DI Instrument Manager in the same or a similar config have in place for backup.

"DI recommends is recommending that we not perform snapshots, but if you do choose to do so, here is some important information to consider.


This is a continuation of my story about the development of my project isc-tar, which started in the first part.

Just having tests is not enough; it does not mean that you will run them after every change. Running tests should be automated, and when you cover all your functionality with tests, everything should keep working after any change in any place. Continuous Integration (CI) helps to keep the code and the deployment procedure with as few bugs as possible and automates routine procedures like publishing releases.

I use GitHub to store the source code. Some time ago, GitHub started working on its own CI/CD platform, named GitHub Actions. It is not widely available yet; you have to be signed up as a beta tester for this feature, as I did. GitHub Actions takes quite a different approach to build workflows. Importantly, GitHub Actions allows you to use Docker, and it's quite easy to customize the available actions. Interestingly, GitHub Actions is much bigger than classic CI offerings like Travis, CircleCI, or GitLab CI. You can find more in the official documentation.
