We are ridiculously good at mastering data. The data is clean, multi-sourced, and related, and we publish it with low enough latency that it stays current. We chose the HL7 Reference Information Model (RIM) to land the data and enable its exchange through Fast Healthcare Interoperability Resources (FHIR®).

Equipped with science and technology, humanity has come a long way through great inventions such as the steam engine and the airplane. Yet over the decades, people have gradually recognized that a single invention can no longer launch an industrial boom on its own. That is why, and when, technologies grow up together with a community. So where are we now? An ecosystem of technology is born with the power of a system and grows with the power of systems science, much like InterSystems, whose very name carries the letters "s-y-s-t-e-m".

Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good.
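
If you take the manual (non-Docker) route, the build itself comes down to a few Terminal commands. Here is a minimal sketch, assuming the Samples BI repository has been cloned to /tmp/Samples-BI and that you install into the USER namespace (the path, the namespace, and the build class name are illustrative, so follow the README of the version you download):

    zn "USER"                                                                  ; switch to the target namespace
    do $System.OBJ.Load("/tmp/Samples-BI/buildsample/Build.SampleBI.cls","ck") ; load and compile the build class
    do ##class(Build.SampleBI).Build("/tmp/Samples-BI/")                       ; create the sample data, cubes, and dashboards

After Build() finishes, the cubes and dashboards behind those beautiful charts and tables are ready to explore in the Management Portal.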

Inevitably, though, you’ll need to make changes.

Container Images

In this second post on container fundamentals, we take a look at what container images are.

What is a container image?

A container image is merely a binary representation of a container.

A running container, or simply a container, is the runtime state of the related container image.

Please see the first post that explains what a container is.

This article is a continuation of Deploying InterSystems IRIS solution on GKE Using GitHub Actions, in which our zpm-registry was deployed, with the help of a GitHub Actions pipeline, to a Google Kubernetes Engine cluster created by Terraform. To avoid repeating ourselves, we’ll take as a starting point that:

Overview

Predictable storage IO performance with low latency is vital for the scalability and reliability of your applications. This set of benchmarks is intended to inform IRIS users who are considering deploying applications on AWS about EBS gp3 volume performance.

Summary

  • An LVM stripe can increase IOPS and throughput beyond single EBS volume performance limits.
  • An LVM stripe lowers read latency.

Like hardware hosts, virtual hosts in public and private clouds can develop resource bottlenecks as workloads increase. If you are using and managing InterSystems IRIS instances deployed in public or private clouds, you may have encountered a situation in which addressing performance or other issues requires increasing the capacity of an instance's host (that is, vertically scaling).

We have a rule to disable a user account if it has not logged in for a certain number of days. The IRIS audit database logs many events, such as login failures, and it can be configured to log successful logins as well. We have IRIS clusters with many IRIS instances, so I like to run queries against audit data from ALL IRIS instances and identify user accounts which have not logged into ANY IRIS instance.
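
As a per-instance starting point, here is a minimal sketch, run in the %SYS namespace, that reports each user's most recent successful login based on the standard %System/%Login/Login audit event (column and event names should be verified against your IRIS version):

    set $namespace = "%SYS"                                        ; audit data is queried from %SYS
    set sql = "SELECT Username, MAX(UTCTimeStamp) AS LastLogin FROM %SYS.Audit"
    set sql = sql_" WHERE EventSource = '%System' AND EventType = '%Login' AND Event = 'Login'"
    set sql = sql_" GROUP BY Username"
    set rs = ##class(%SQL.Statement).%ExecDirect(, sql)            ; run the query dynamically
    while rs.%Next() { write rs.Username, ?30, rs.LastLogin, ! }   ; one line per account

Each instance only knows about its own logins, so the result sets from every instance still have to be gathered in one place and reduced to the accounts whose latest login everywhere is older than the threshold.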

Most of us are more or less familiar with Docker. Those who use it like the way it lets you easily deploy almost any application, play with it, break something, and then restore the application with a simple restart of the Docker container.

Last time we deployed a simple IRIS application to the Google Cloud. Now we’re going to deploy the same project to Amazon Web Services using its Elastic Kubernetes Service (EKS).

We assume you’ve already forked the IRIS project to your own private repository. It’s called <username>/my-objectscript-rest-docker-template in this article. <root_repo_dir> is its root directory.

Before getting started, install the AWS command-line interface and, for Kubernetes cluster creation, eksctl, a simple CLI utility. For AWS you can try to use aws2, but you’ll need to configure aws2 usage in the kube config file, as described here.

Last time we launched an IRIS application in the Google Cloud using its GKE service.

And, although creating a cluster manually (or through gcloud) is easy, the modern Infrastructure-as-Code (IaC) approach advises that the description of the Kubernetes cluster should be stored in the repository as code as well. How to write this code is determined by the tool that’s used for IaC.

In the case of Google Cloud, there are several options, among them Deployment Manager and Terraform. Opinions are divided as to which is better: if you want to learn more, read this Reddit thread Opinions on Terraform vs. Deployment Manager? and the Medium article Comparing GCP Deployment Manager and Terraform.

In an earlier article (we hope you’ve read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why then would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don’t need to rely on an external, albeit cool, service.

In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).

This summer the Database Platforms department here at InterSystems tried out a new approach to our internship program. We hired 10 bright students from some of the top colleges in the US and gave them the autonomy to create their own projects which would show off some of the new features of the InterSystems IRIS Data Platform. The team consisting of Ruchi Asthana, Nathaniel Brennan, and Zhe “Lily” Wang used this opportunity to develop a smart review analysis engine, which they named Lumière. As they explain:

RESTful API Call From Cache to Particle.io Electron

Tom Fitzgibbon | Multidata | 212-967-6700 x537 | tom@mul.com

Summary: Simple Blink Tutorial for Particle.io Electron Device from Cache

The Electron is a tiny ARM-based device ($40-$60) that connects to Particle’s worldwide leased 2G/3G network (about $3/mo) and runs off an included LiPo battery. Using Cache’s %Net.HttpRequest you can send and receive data, control hardware, and read sensors.
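
For illustration, a minimal %Net.HttpRequest sketch that calls a Particle cloud function on the device might look like the following. The SSL configuration name ("Particle"), the access token, the device ID, and the function name ("led") with its argument are placeholders, not values from this tutorial, and the form field names should be checked against the Particle API reference:

    set req = ##class(%Net.HttpRequest).%New()
    set req.Server = "api.particle.io"               ; Particle cloud REST API
    set req.Https = 1
    set req.SSLConfiguration = "Particle"            ; an SSL/TLS configuration created in the Management Portal
    do req.InsertFormData("access_token", "<your-access-token>")
    do req.InsertFormData("arg", "on")               ; argument handed to the device function
    do req.Post("/v1/devices/<device-id>/led")       ; call the cloud function named "led" on your Electron
    write req.HttpResponse.StatusCode, !             ; HTTP status returned by the Particle cloud
    write req.HttpResponse.Data.Read(), !            ; JSON body of the response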

Step by Step (about 1 hour)

1) Get the Electron from store.particle.io.

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers
  • CD using ICM

In this article, we'll build Continuous Delivery with InterSystems Cloud Manager. ICM is a cloud provisioning and deployment solution for applications based on InterSystems IRIS. You define the desired deployment configuration, and ICM provisions it automatically. For more information, take a look at First Look: ICM.

Hi All,

With this article, I would like to show you how easily and dynamically System Alerting and Monitoring (or SAM for short) can be configured. The use case could be a fast, agile CI/CD provisioning pipeline in which you run not only unit tests but also stress tests, and you want to see quickly whether those tests succeed and how they stress the systems and your application (the InterSystems IRIS backend SAM API is extensible for your APM implementation).

Google Cloud Platform (GCP) provides a feature-rich Infrastructure-as-a-Service (IaaS) cloud offering fully capable of supporting all InterSystems products, including the latest InterSystems IRIS Data Platform. As with any platform or deployment model, care must be taken to ensure all aspects of an environment are considered, such as performance, availability, operations, and management procedures. Specifics of each of those areas will be covered in this article.

Presenter: Luca Ravazzolo
Task: Define, provision, and configure a cloud infrastructure
Approach: Use open-source tools for DevOps automation

This session demonstrates how a cloud infrastructure of clustered servers can be defined, provisioned and configured efficiently with a simple yet powerful provisioning technology.

Content related to this session, including slides, video and additional learning content can be found here.

Purpose

Most CloudFormation articles are Linux-based (no wonder), but there seems to be demand for automation on Windows as well. Based on this original article by Anton, I implemented an example of deploying a mirror cluster to Windows servers using CloudFormation. I also added a simple walkthrough.
The complete source code can be found here.

Update (March 1, 2021): I added a one-liner way to connect to the Windows shell by public key authentication via a bastion host.

Presenter: Mark Bolinsky
Task: Provide failover for distributed systems without using a VIP
Approach: Demonstrate using InterSystems’ database mirroring with external traffic managers such as F5 LTM/GTM

With distributed environments, and even public cloud environments, the use of a VIP is sometimes not desirable or even possible given the network topology or deployment. This session will demonstrate integrating database mirroring with external traffic managers such as F5 LTM/GTM, using API-based triggers in InterSystems products to interface with the F5 appliances. This not only provides automated redirection for the local mirror members, but also provides automated client redirection to asynchronous DR mirror members.

Content related to this session, including slides, video and additional learning content can be found here.

What is Distributed Artificial Intelligence (DAI)?

Attempts to find a “bullet-proof” definition have not produced a result: it seems the term is slightly “ahead of its time”. Still, we can analyze the term itself semantically, deriving that distributed artificial intelligence is the same AI (see our effort to suggest an “applied” definition), only partitioned across several computers that are not clustered together (neither data-wise, nor via applications, nor by providing access to particular computers in principle). That is, ideally, distributed artificial intelligence should be arranged so that none of the computers participating in that “distribution” has direct access to the data or applications of another computer: the only alternative is the transmission of data samples and executable scripts via “transparent” messaging. Any deviation from that ideal leads to the advent of “partially distributed artificial intelligence”: an example is distributed data with a central application server, or its inverse. One way or the other, we obtain as a result a set of “federated” models (i.e., models each trained on their own data sources, or each trained by their own algorithms, or “both at once”).

Presenter: Tony Pepper
Task: Host an application based on InterSystems’ technology in a public cloud environment
Approach: Provide a checklist of things to think about before you deploy

Are you looking at hosting your applications in the public cloud? This talk will highlight what you need to think about when deploying InterSystems technology in any public cloud environment.

Content related to this session, including slides, video and additional learning content can be found here.

Challenges of real-time AI/ML computations

We will start with examples that we, the Data Science practice at InterSystems, have faced:

  • A “high-load” customer portal is integrated with an online recommendation system. The plan is to reconfigure promo campaigns at the level of the entire retail network (we will assume that a “segment-tactic” matrix will be used instead of a “flat” promo campaign master). What will happen to the recommender mechanisms? What will happen to the data feeds and updates into the recommender mechanisms (the volume of input data having increased 25,000 times)? What will happen to the recommendation rule generation setup (the need to reduce the recommendation rule filtering threshold a thousandfold due to a thousandfold increase in the volume and “assortment” of the rules generated)?
  • An equipment health monitoring system uses “manual” data sample feeds. Now it is connected to a SCADA system that transmits thousands of process parameter readings each second. What will happen to the monitoring system (will it be able to handle equipment health monitoring on a second-by-second basis)? What will happen once the input data receives a new block of several hundred columns of sensor readings recently implemented in the SCADA system (will it be necessary, and for how long, to shut down the monitoring system to integrate the new sensor data into the analysis)?
  • A complex of AI/ML mechanisms (recommendation, monitoring, forecasting) depends on each other’s results. How many man-hours will it take every month to adapt those AI/ML mechanisms to changes in the input data? What is the overall “delay” in supporting business decision making by the AI/ML mechanisms (the refresh frequency of supporting information versus the feed frequency of new input data)?

Enterprises need to grow and manage their global computing infrastructures rapidly and efficiently while simultaneously optimizing and managing capital costs and expenses. Amazon Web Services (AWS) and its Elastic Compute Cloud (EC2) computing and storage services meet the needs of the most demanding Caché-based applications by providing a highly robust global computing infrastructure.

These days, the vast majority of applications are deployed on public cloud services. The advantages are numerous, including a reduction in the human and material resources needed, the ability to grow quickly and cheaply, greater availability and reliability, elastic scalability, and options to improve the protection of digital assets. One of the most favored options is Google Cloud. It lets us deploy our applications using virtual machines (Compute Engine), Docker containers (Cloud Run), or Kubernetes (Kubernetes Engine). The first option does not use Docker.
