Introduction
Several resources tell us how to run IRIS in a Kubernetes cluster, such as Deploying an InterSystems IRIS Solution on EKS using GitHub Actions and Deploying InterSystems IRIS solution on GKE Using GitHub Actions. These methods work, but they require you to create Kubernetes manifests and Helm charts, which can be rather time-consuming.
To simplify IRIS deployment, InterSystems developed an amazing tool called InterSystems Kubernetes Operator (IKO). A number of official resources explain IKO usage in detail, such as New Video: InterSystems IRIS Kubernetes Operator and InterSystems Kubernetes Operator.
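
For a sense of how IKO simplifies things, here is a rough sketch of an operator-based deployment from the command line. The chart path, release name, and iris-cluster.yaml file are placeholders; the exact chart location and the fields of the IrisCluster resource depend on the IKO release you download from InterSystems, so treat the official documentation as authoritative.

# Install the operator from the Helm chart shipped with the IKO distribution
# (the chart directory name varies by IKO version)
helm install intersystems iris_operator/chart/iris-operator

# Instead of hand-written manifests, describe the deployment in a single
# IrisCluster custom resource (iris-cluster.yaml is a placeholder name)
kubectl apply -f iris-cluster.yaml

# Watch the operator create the IRIS pods
kubectl get pods -w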

This article is a continuation of Deploying InterSystems IRIS solution on GKE Using GitHub Actions, in which our zpm-registry was deployed, with the help of a GitHub Actions pipeline, in a Google Kubernetes cluster created by Terraform. To avoid repetition, we’ll take the following as a starting point:

Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good. 
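
For orientation, the README’s step-by-step installation boils down to a short terminal session like the one sketched below. This is only an approximation: the repository URL, the /tmp/samples-bi path, and the SAMPLES namespace are assumptions, and the Build.SampleBI class name reflects the repository at the time of writing, so follow the README for the authoritative steps.

# Fetch the samples onto your instance (path is illustrative)
git clone https://github.com/intersystems/Samples-BI /tmp/samples-bi

# In an IRIS terminal session, load and build the sample cubes
# (assumes a SAMPLES namespace has already been created)
iris session IRIS
USER> set $namespace = "SAMPLES"
SAMPLES> do $system.OBJ.Load("/tmp/samples-bi/buildsample/Build.SampleBI.cls", "ck")
SAMPLES> do ##class(Build.SampleBI).Build()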

Inevitably, though, you’ll need to make changes.

In an earlier article (we hope you’ve read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why, then, would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, and it’s worth exploring. With GitHub Actions, you don’t need to rely on an external, albeit cool, service.

In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).
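
To make the goal concrete, the commands below are roughly what the pipeline will automate; in the article itself they live in a workflow file under .github/workflows and run inside GitHub-hosted actions. The GCP project, image name, cluster name, zone, and manifest directory are all placeholders.

# Build and push the zpm-registry image (project and image names are placeholders)
docker build -t gcr.io/my-gcp-project/zpm-registry:latest .
docker push gcr.io/my-gcp-project/zpm-registry:latest

# Point kubectl at the GKE cluster and roll out the Kubernetes manifests
gcloud container clusters get-credentials my-cluster --zone europe-west1-b
kubectl apply -f k8s/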

Last time we launched an IRIS application in the Google Cloud using its GKE service.

And although creating a cluster manually (or through gcloud) is easy, the modern Infrastructure-as-Code (IaC) approach advises that the description of the Kubernetes cluster be stored in the repository as code as well. How that code is written is determined by the tool you use for IaC.

In the case of Google Cloud, there are several options, among them Deployment Manager and Terraform. Opinions are divided as to which is better: if you want to learn more, read the Reddit thread Opinions on Terraform vs. Deployment Manager? and the Medium article Comparing GCP Deployment Manager and Terraform.
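
To illustrate the difference, here is a minimal sketch. The cluster name, zone, and node count are placeholders, not the configuration used in this article.

# "Manual" one-off creation through gcloud
gcloud container clusters create my-gke-cluster --zone europe-west1-b --num-nodes 2

# The IaC alternative: describe the same cluster in Terraform configuration
# files kept in the repository, then create it reproducibly
terraform init
terraform plan
terraform apply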

Last time we deployed a simple IRIS application to the Google Cloud. Now we’re going to deploy the same project to Amazon Web Services using its Elastic Kubernetes Service (EKS).

We assume you’ve already forked the IRIS project to your own private repository. It’s called <username>/my-objectscript-rest-docker-template in this article. <root_repo_dir> is its root directory.

Before getting started, install the AWS command-line interface and, for Kubernetes cluster creation, eksctl, a simple CLI utility. For AWS you can also try aws2, but then you’ll need to configure aws2 usage in your kubeconfig file, as described here.
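
As a quick sanity check before the real work, something like the following should run; the cluster name, region, and node count are placeholders rather than the settings used later in this article.

# Verify the CLIs are installed (versions will differ)
aws --version
eksctl version

# Create an EKS cluster (creation takes a while)
eksctl create cluster --name my-eks-cluster --region eu-west-1 --nodes 2

# eksctl writes the credentials into your kubeconfig, so kubectl works right away
kubectl get nodes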

Most of us are more or less familiar with Docker. Those who use it like the way it lets them easily deploy almost any application, play with it, break something, and then restore the application with a simple restart of the Docker container.

Hi, all!
As far as I know, InterSystems recommends using Huge Pages. And if the number of Huge Pages is sufficient, we'll see something like this in cconsole.log during Caché startup:

12/29/17-14:40:50:360 (3625) 0 Allocated 4630MB shared memory using Huge Pages: 4096MB global buffers, 256MB routine buffers

But if the number of Huge Pages isn't sufficient to hold all the global and routine buffers, Caché won't use Huge Pages.
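
For context, here is a minimal sketch of how the Huge Pages allocation is usually checked and adjusted on Linux. The page count below is derived from the 4630 MB figure in the log line above and assumes the typical 2 MB Huge Page size, so treat the numbers as illustrative.

# Check the Huge Page size and current allocation
grep Huge /proc/meminfo

# ~4630 MB of shared memory with 2 MB pages needs at least 4630 / 2 = 2315 pages;
# round up for a little headroom (illustrative value)
sysctl -w vm.nr_hugepages=2400

# Make the setting persistent across reboots
echo "vm.nr_hugepages = 2400" >> /etc/sysctl.conf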
