Article
· Aug 4, 2021 3m read
IRIS Mirror in the cloud (AWS)

I have been working on redesigning a Health Connect production that runs on a mirrored instance of HealthShare 2019. We were told to take advantage of containers. We got to work on IRIS 2020.1 and split the database part from the Interoperability part. We ran the IRIS mirror on EC2 instances and used containers to run the IRIS interoperability application. Eventually we decided to run the data tier in containers as well.

Equipped with science and technology, human beings have come a long way on the strength of great inventions such as the steam engine and the aeroplane. Over the decades, however, people have gradually come to recognize that a single invention can no longer launch an industrial boom on its own. That is why, and when, technologies grow up together with a community, which is where we are now. A technology ecosystem is born from the power of a system and grows with the power of systems science, as with InterSystems, whose very name carries the letters "s-y-s-t-e-m". Graduated with an M.S.

Challenges of real-time AI/ML computations

We will start with examples that we, the Data Science practice at InterSystems, have encountered:

  • A “high-load” customer portal is integrated with an online recommendation system. The plan is to reconfigure promo campaigns at the level of the entire retail network (we will assume that a “segment-tactic” matrix will be used instead of a “flat” promo campaign master). What will happen to the recommender mechanisms? What will happen to the data feeds and updates into the recommender mechanisms (the volume of input data having increased 25,000 times)? What will happen to the recommendation rule generation setup (the need to reduce the recommendation rule filtering threshold by a factor of 1,000 because of a thousandfold increase in the volume and “assortment” of the rules generated)?
  • An equipment health monitoring system uses “manual” data sample feeds. Now it is connected to a SCADA system that transmits thousands of process parameter readings every second. What will happen to the monitoring system (will it be able to handle equipment health monitoring on a second-by-second basis)? What will happen once the input data receives a new block of several hundred columns with readings from sensors recently added to the SCADA system (will it be necessary, and for how long, to shut down the monitoring system to integrate the new sensor data into the analysis)?
  • A complex of AI/ML mechanisms (recommendation, monitoring, forecasting) depends on each other’s results. How many man-hours will it take every month to adapt those AI/ML mechanisms’ functioning to changes in the input data? What is the overall “delay” in supporting business decision making by the AI/ML mechanisms (the refresh frequency of supporting information against the feed frequency of new input data)?

Article
· Apr 26, 2021 3m read
SSH for IRIS container

Why SSH?

If you do not have direct access to the server that runs your IRIS Docker container,
you may still require access to the container outside of "iris session" or "WebTerminal".
With an SSH terminal (PuTTY, KiTTY, ...) you get access inside Docker, and then, depending
on your needs, you can run "iris session iris" or display/manipulate files directly.

Note:
This is not meant to be the default access method for the average application user,
but rather an emergency backdoor for System Management, Support, and Development.
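
As an illustration of the idea (not part of the original article), here is a minimal Python sketch using paramiko, assuming the container exposes an SSH daemon on port 2222 of the Docker host and accepts key-based logins; the host name, port, and user name are hypothetical.

import os
import paramiko

# Minimal sketch: connect to an SSH daemon assumed to run inside the IRIS container
# (the host "docker-host.example.com", port 2222, and user "irisowner" are hypothetical).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("docker-host.example.com", port=2222,
               username="irisowner",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

# Run a non-interactive command inside the container; for an interactive
# "iris session iris" you would use an SSH terminal such as PuTTY or KiTTY.
stdin, stdout, stderr = client.exec_command("iris list")
print(stdout.read().decode())
client.close()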

Like hardware hosts, virtual hosts in public and private clouds can develop resource bottlenecks as workloads increase. If you are using and managing InterSystems IRIS instances deployed in public or private clouds, you may have encountered a situation in which addressing performance or other issues requires increasing the capacity of an instance's host (that is, vertically scaling).
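
For instance, on AWS the resize itself can be scripted. The following is a minimal boto3 sketch (not taken from the article), with a hypothetical instance ID and target instance type, assuming the instance can tolerate a stop/start cycle.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Vertical scaling of an EC2 host: stop the instance, change its type, start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "r5.2xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])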

What is Distributed Artificial Intelligence (DAI)?

Attempts to find a “bullet-proof” definition have not produced a result: it seems the term is slightly “ahead of its time”. Still, we can analyze the term itself semantically, deriving that distributed artificial intelligence is the same AI (see our effort to suggest an “applied” definition) but partitioned across several computers that are not clustered together (neither data-wise, nor via applications, nor by providing access to particular computers in principle). I.e., ideally, distributed artificial intelligence should be arranged in such a way that none of the computers participating in that “distribution” has direct access to the data or applications of another computer: the only alternative becomes the transmission of data samples and executable scripts via “transparent” messaging. Any deviation from that ideal leads to the advent of “partially distributed artificial intelligence”, an example being distributed data with a central application server, or its inverse. One way or the other, we obtain as a result a set of “federated” models (i.e., models each trained on their own data sources, or each trained by their own algorithms, or “both at once”).

Purpose

Most CloudFormation articles are Linux-based (no wonder), but there seems to be a demand for automation on Windows as well. Based on this original article by Anton, I implemented an example of deploying a mirror cluster to Windows servers using CloudFormation. I also added a simple walkthrough.
The complete source code can be found here.

Update (March 1, 2021): I added a one-liner way to connect to the Windows shell with public key authentication via a bastion host.
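
If you prefer to launch the stack from Python rather than the console or the aws CLI, a minimal boto3 sketch might look like the following; the stack name, template URL, and parameter names are hypothetical and would need to match the template in the linked repository.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="iris-windows-mirror",                               # hypothetical
    TemplateURL="https://s3.amazonaws.com/my-bucket/mirror.yaml",  # hypothetical
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "my-keypair"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)

# Wait until the mirror stack has finished creating.
cfn.get_waiter("stack_create_complete").wait(StackName="iris-windows-mirror")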

This article is a continuation of Deploying InterSystems IRIS solution on GKE Using GitHub Actions, in which, with the help of a GitHub Actions pipeline, our zpm-registry was deployed to a Google Kubernetes cluster created by Terraform. To avoid repeating ourselves, we’ll take as a starting point that:

Hi All,

With this article, I would like to show you how easily and dynamically System Alerting and Monitoring (or SAM for short) can be configured. The use case could be a fast and agile CI/CD provisioning pipeline in which you run not only your unit tests but also stress tests, and you want to see quickly whether those tests succeed and how they stress the system and your application (the InterSystems IRIS backend SAM API is extensible for your own APM implementation).
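
As a quick illustration (not from the article itself): SAM scrapes the /api/monitor/metrics REST endpoint that IRIS exposes in Prometheus format. Here is a minimal Python sketch to peek at what gets collected, assuming an instance reachable at the hypothetical host "iris-host" on the default web server port 52773.

import requests

# Fetch the Prometheus-format metrics that SAM scrapes from an IRIS instance
# ("iris-host" is a hypothetical host name; 52773 is the default web server port).
resp = requests.get("http://iris-host:52773/api/monitor/metrics", timeout=5)
resp.raise_for_status()

for line in resp.text.splitlines()[:10]:  # show the first few samples
    print(line)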

Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good.

Inevitably, though, you’ll need to make changes.

In an earlier article (I hope you’ve read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why, then, would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don’t need to rely on an external, albeit cool, service.

In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).

Last time we deployed a simple IRIS application to the Google Cloud. Now we’re going to deploy the same project to Amazon Web Services using its Elastic Kubernetes Service (EKS).

We assume you’ve already forked the IRIS project to your own private repository. It’s called <username>/my-objectscript-rest-docker-template in this article. <root_repo_dir> is its root directory.

Before getting started, install the AWS command-line interface and, for Kubernetes cluster creation, eksctl, a simple CLI utility. For AWS you can try to use aws2, but you’ll need to configure aws2 usage in the kube config file as described here.
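
Once eksctl has created the cluster, you can also confirm from Python that the control plane is up. Below is a minimal boto3 sketch (not part of the article), with a hypothetical cluster name and region.

import boto3

eks = boto3.client("eks", region_name="eu-west-1")

# "my-iris-cluster" is a hypothetical name for the cluster created by eksctl.
cluster = eks.describe_cluster(name="my-iris-cluster")["cluster"]
print("Status:  ", cluster["status"])
print("Endpoint:", cluster["endpoint"])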

Most of us are more or less familiar with Docker. Those who use it like it for the way it lets them easily deploy almost any application, play with it, break something, and then restore the application with a simple restart of the Docker container.

Last time we launched an IRIS application in the Google Cloud using its GKE service.

And, although creating a cluster manually (or through gcloud) is easy, the modern Infrastructure-as-Code (IaC) approach advises that the description of the Kubernetes cluster should be stored in the repository as code as well. How to write this code is determined by the tool that’s used for IaC.

In the case of Google Cloud, there are several options, among them Deployment Manager and Terraform. Opinions are divided as to which is better: if you want to learn more, read this Reddit thread Opinions on Terraform vs. Deployment Manager? and the Medium article Comparing GCP Deployment Manager and Terraform.

Loading your IRIS data into your Google Cloud BigQuery data warehouse and keeping it current can be a hassle with bulky commercial third-party off-the-shelf ETL platforms, but it is made dead simple using the iris2bq utility.

Let's say IRIS is contributing to the workload for a hospital system: routing DICOM images, ingesting HL7 messages, posting FHIR resources, or pushing CCDAs to the next provider in a transition of care. Natively, IRIS persists these objects at various stages of the pipeline through the nature of the business processes and anything you included along the way. Let's send that data up to Google BigQuery to augment and complement the rest of our data warehouse, and ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) to our heart's desire.

A reference architecture diagram may be worth a thousand words, but 3 bullet points may work out a little bit better:

  • It exports the data from IRIS into DataFrames.
  • It saves them into GCS as .avro files to keep the schema along with the data: this avoids having to specify or create the BigQuery table schema beforehand.
  • It starts BigQuery jobs to import those .avro files into the respective BigQuery tables you specify (a minimal sketch of this step follows below).
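
To make the last bullet concrete, here is a minimal sketch of what such a load job looks like with the google-cloud-bigquery client library (iris2bq does this for you; the bucket, dataset, and table names below are hypothetical).

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)

# Load the .avro files exported to GCS into the target BigQuery table;
# the Avro schema travels with the data, so no table schema is declared here.
load_job = client.load_table_from_uri(
    "gs://my-bucket/iris-exports/patients-*.avro",  # hypothetical bucket/prefix
    "my_project.my_dataset.patients",               # hypothetical table ID
    job_config=job_config,
)
load_job.result()  # wait for the BigQuery job to finish

table = client.get_table("my_project.my_dataset.patients")
print(f"{table.num_rows} rows loaded")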

Database systems have very specific backup requirements that, in enterprise deployments, require forethought and planning. For database systems, the operational goal of a backup solution is to create a copy of the data in a state equivalent to the application being shut down gracefully. Application-consistent backups meet these requirements, and Caché provides a set of APIs that facilitate integration with external solutions to achieve this level of backup consistency.

Article
· Jan 18, 2019 2m read
Free IRIS Community Edition in AWS

Good news!! You can now use the free InterSystems IRIS Community Edition in the AWS Cloud.

Hello,

It's very common for people new to InterSystems IRIS to want to start working on a personal project in a completely free environment. If you are one of them, good news!! You can now use the free InterSystems IRIS Community Edition in the AWS Cloud.

Article
· Jan 31, 2018 3m read
Container - What is a Container?

Containers

With the launch of the InterSystems IRIS Data Platform, we now provide our product in a Docker container as well. But what is a container?

The fundamental container definition is that of a sandbox for a process.

Containers are software-defined packages that have some similarities to virtual machines (VMs); for example, they can be executed.

Containers provide isolation without full OS emulation and are therefore much lighter than a VM.

Google Cloud Platform (GCP) provides a feature-rich Infrastructure-as-a-Service (IaaS) cloud offering fully capable of supporting all InterSystems products, including the latest InterSystems IRIS Data Platform. As with any platform or deployment model, care must be taken to ensure all aspects of an environment are considered, such as performance, availability, operations, and management procedures. Specifics of each of those areas will be covered in this article.

For Global Summit 2016, I set out to showcase a Reference Architecture I had been working on for a National Provider Directory solution with State Level Instances and a National Instance all running HealthShare Provider Directory and all running on AWS Infrastructure.

In short, I wanted to highlight:

  • The implementation of Amazon Web Services to provision the infrastructure, including the auto-creation of the state-level instances through CloudFormation.
  • The use of the HSPD Broadcast functionality to notify upstream systems of changes in Master Provider Data.
  • The implementation of a transformation of the standard Broadcast Object to HL7 MFN for interoperability.
  • The principles of Master Data Management applied to the Provider Directory.

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers
  • CD using ICM

In this article, we'll build Continuous Delivery with InterSystems Cloud Manager. ICM is a cloud provisioning and deployment solution for applications based on InterSystems IRIS. It allows you to define the desired deployment configuration, and ICM provisions it automatically. For more information, take a look at First Look: ICM.

++ Update: August 1, 2018

The use of the InterSystems Virtual IP (VIP) address built into Caché database mirroring has certain limitations. In particular, it can only be used when mirror members reside in the same network subnet. When multiple data centers are used, network subnets are often not “stretched” beyond the physical data center due to the added network complexity (a more detailed discussion here). For similar reasons, Virtual IP is often not usable when the database is hosted in the cloud.

Network traffic management appliances such as load balancers (physical or virtual) can be used to achieve the same level of transparency, presenting a single address to the client applications or devices. The network traffic manager automatically redirects clients to the current mirror primary’s real IP address. The automation is intended to meet the needs of both HA failover and DR promotion following a disaster.
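
To illustrate the polling idea only (this is not the article's mechanism, and the status URL below is hypothetical), the health probe a load balancer performs could be sketched in Python as follows: each mirror member is assumed to expose an HTTP status page that returns 200 only while that member is the primary.

import requests

# Hypothetical per-member status URLs; each is assumed to return HTTP 200
# only while that mirror member is acting as primary.
MEMBERS = {
    "member-a": "http://10.0.1.10:8080/mirror-status",
    "member-b": "http://10.0.2.10:8080/mirror-status",
}

def current_primary():
    """Return the name of the mirror member currently acting as primary, if any."""
    for name, url in MEMBERS.items():
        try:
            if requests.get(url, timeout=2).status_code == 200:
                return name
        except requests.RequestException:
            continue  # member unreachable; try the next one
    return None

print("Current primary:", current_primary())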

Container Images

In this second post on containers fundamentals, we take a look at what container images are.

What is a container image?

A container image is merely a binary representation of a container.

A running container, or simply a container, is the runtime state of the related container image.

Please see the first post that explains what a container is.

Last week saw the launch of the InterSystems IRIS Data Platform in sunny California.

For the engaging eXPerience Labs (XP-Labs) training sessions, my first customer and favourite department (Learning Services) was working hard assisting and supporting us all behind the scenes.
