In the previous article, we looked at one way to create a custom operator that manages the IRIS instance state. This time, we’re going to take a look at a ready-to-go operator: InterSystems Kubernetes Operator (IKO). The official documentation will help us navigate the deployment steps.
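To give a feel for what IKO manages, here is a minimal sketch of an IrisCluster manifest. It assumes IKO is already installed in the cluster; the resource name, secret name, and image tag are placeholders, so check the official documentation for the exact fields your IKO version expects.

```yaml
# iriscluster.yaml: a minimal IrisCluster resource for IKO (sketch only).
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample                            # placeholder name
spec:
  licenseKeySecret:
    name: iris-key-secret                 # Secret holding the IRIS license key
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2020.4.0.524.0  # placeholder tag
```

Applying a manifest like this with `kubectl apply -f iriscluster.yaml` should leave the operator to create the underlying StatefulSets, services, and persistent volumes for you.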
IAM (InterSystems API Manager) is a great tool for monitoring your traffic. If you are trying to use it in your Kubernetes cluster, you may have tried a deployment similar to this one:
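A stripped-down Deployment of that general shape might look like the following sketch; the image tag is a placeholder, and the ports shown are Kong's defaults (IAM is built on Kong), not a supported configuration:

```yaml
# iam-deployment.yaml: a naive IAM Deployment (illustrative sketch only).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iam
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iam
  template:
    metadata:
      labels:
        app: iam
    spec:
      containers:
        - name: iam
          image: intersystems/iam:latest   # placeholder tag
          ports:
            - containerPort: 8000          # proxy traffic (Kong default)
            - containerPort: 8002          # management portal (Kong default)
```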
In an earlier article (I hope you’ve read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why, then, would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don’t need to rely on an external, albeit cool, service.
In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).
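The end result is a workflow file along these lines. This is a sketch under stated assumptions, not the exact pipeline from the article: the project ID, cluster name, zone, the `GCP_SA_KEY` secret, and the `zpm-registry` StatefulSet and container names are all placeholders.

```yaml
# .github/workflows/deploy.yml: build the ZPM-registry image and roll it out to GKE.
name: Deploy ZPM registry to GKE
on:
  push:
    branches: [ master ]
env:
  PROJECT_ID: my-gcp-project               # placeholder GCP project
  CLUSTER: zpm-registry-cluster            # placeholder GKE cluster
  ZONE: europe-west1-b                     # placeholder zone
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@v0
        with:
          project_id: ${{ env.PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}   # service-account JSON key
      - name: Build and push the image
        run: |
          gcloud auth configure-docker --quiet
          docker build -t gcr.io/$PROJECT_ID/zpm-registry:${{ github.sha }} .
          docker push gcr.io/$PROJECT_ID/zpm-registry:${{ github.sha }}
      - name: Roll out to the cluster
        run: |
          gcloud container clusters get-credentials $CLUSTER --zone $ZONE
          kubectl set image statefulset/zpm-registry zpm-registry=gcr.io/$PROJECT_ID/zpm-registry:${{ github.sha }}
```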
Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good.
In this InterSystems IRIS 2020.1 Tech Talk, we focus on DevOps. We'll talk about InterSystems System Alerting and Monitoring, which offers unified cluster monitoring in a single pane for all your InterSystems IRIS instances. It is built on Prometheus and Grafana, two of the most respected open source offerings available.
Next, we'll dive into the InterSystems Kubernetes Operator, a special controller for Kubernetes that streamlines InterSystems IRIS deployments and management. It's the easiest way to deploy an InterSystems IRIS cluster on-prem or in the Cloud, and we'll show how you can configure mirroring, ECP, sharding and compute nodes, and automate it all.
Finally, we'll discuss how to speed test InterSystems IRIS using the open source Ingestion Speed Test. This tool is available on InterSystems Open Exchange for your own testing and benchmarking.
With this article, I would like to show you how easily and dynamically System Alerting and Monitoring (or SAM for short) can be configured. One use case is a fast and agile CI/CD provisioning pipeline where you run not only unit tests but also stress tests, and you want to see quickly whether those tests succeed and how they stress the system and your application (the InterSystems IRIS backend SAM API is extendable for your APM implementation).
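Under the hood, SAM is built on Prometheus and scrapes each registered instance's built-in metrics endpoint, /api/monitor/metrics. As a rough illustration of what a target configuration amounts to, here is a hand-written Prometheus scrape job; the host name is a placeholder and 52773 is the default IRIS web server port:

```yaml
# prometheus.yml (fragment): scrape one IRIS instance's metrics endpoint.
scrape_configs:
  - job_name: iris
    metrics_path: /api/monitor/metrics   # IRIS's built-in Prometheus endpoint
    static_configs:
      - targets: ['iris-host:52773']     # placeholder host, default web port
```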
This is the third article in the series about initializing IRIS instances with Docker. This time, we will focus on Enterprise Cache Protocol (ECP).
Put very simply, ECP allows you to configure some IRIS instances as application servers and others as data servers. Detailed technical information can be found in the official documentation.
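To make the topology concrete, here is a minimal docker-compose sketch with one data server and one application server. The image tag and merge-file paths are placeholders; the referenced CPF merge files are assumed to enable the %Service_ECP service on the data server and register it under [ECPServers] on the application server, and ECP itself requires a licensed (non-Community) edition.

```yaml
# docker-compose.yml: one ECP data server and one application server (sketch).
version: "3.7"
services:
  data:
    image: containers.intersystems.com/intersystems/iris:2021.1.0.215.0  # placeholder tag
    environment:
      - ISC_CPF_MERGE_FILE=/merge/data.cpf   # merge file enabling %Service_ECP
    volumes:
      - ./merge:/merge
  app:
    image: containers.intersystems.com/intersystems/iris:2021.1.0.215.0
    environment:
      - ISC_CPF_MERGE_FILE=/merge/app.cpf    # merge file with an [ECPServers] entry
    volumes:
      - ./merge:/merge
    depends_on:
      - data
```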
In case you're planning on deploying IRIS For Health, or any of our containerized products, via the IKO on OpenShift, I wanted to share some of the hurdles we had to overcome.
As with any IKO-based installation, we first need to deploy the IKO itself. However, we were getting this error:
In this article, I’m excited to introduce CodeInspector, a tool designed to simplify code validation by applying custom rules tailored to your development requirements. Whether you're managing a large codebase or working in an agile environment, CodeInspector helps ensure code quality by offering flexibility and adaptability to specific project needs.
In the modern world, the most valuable asset for companies is their data. Everything from business processes and applications to transactions is based on data which defines the success of the organization's operations, analysis, and decisions. In this scenario, the data structures need to be ready for frequent changes, yet in a managed and governed way. Otherwise, we will inevitably lose money, time, and quality of corporate solutions.
1. A deployment may consist of two high availability instances and two disaster recovery instances in a different data center.
The corresponding UAT environment could replicate this, giving a total of eight instances. How do you confirm CPF and scheduled-task alignment across ALL instances?
Last time we deployed a simple IRIS application to the Google Cloud. Now we’re going to deploy the same project to Amazon Web Services using its Elastic Kubernetes Service (EKS).
We assume you’ve already forked the IRIS project to your own private repository. It’s called <username>/my-objectscript-rest-docker-template in this article. <root_repo_dir> is its root directory.
Before getting started, install the AWS command-line interface and, for Kubernetes cluster creation, eksctl, a simple CLI utility. For AWS, you can try using aws2, but you’ll need to configure aws2 usage in the kube config file, as described here.
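eksctl can create the cluster either from command-line flags or from a declarative config file. A minimal example of the latter is sketched below; the cluster name, region, and node sizing are placeholders to adjust:

```yaml
# cluster.yaml: a minimal eksctl cluster definition.
# Create the cluster with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: iris-eks-cluster      # placeholder name
  region: eu-west-1           # placeholder region
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
```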
https://www.youtube.com/embed/JoSS4QYaELc
https://www.youtube.com/embed/rRJ8_O4Y3gs
https://www.youtube.com/embed/yDRZwK3maeQ
Watch this video to learn about observability of your InterSystems IRIS application with InterSystems System Alerting and Monitoring (SAM) and modern DevOps tooling:
https://www.youtube.com/embed/pJUm5GABMXU
This is a continuation of my story about the development of my isc-tar project, which began in the first part.
Just having tests is not enough; it does not guarantee that you will actually run them after every change. Running tests should be automated, and when all of your functionality is covered by tests, everything should keep working after a change in any place. Continuous Integration (CI) helps keep the code and the deployment procedure as bug-free as possible and automates routine tasks like publishing releases.
I use GitHub to store the source code. Some time ago, GitHub started working on its own CI/CD platform, named GitHub Actions. It is not widely available yet; you have to sign up as a beta tester for this feature, as I did. GitHub Actions takes quite a different approach to build workflows. What is important is that GitHub Actions allows you to use Docker, and it’s quite easy to customize the available actions. Interestingly, GitHub Actions is really much bigger than a classic CI system like Travis, CircleCI, or GitLab CI. You can find more in the official documentation.
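For a sense of how a Docker-based workflow reads in today's YAML syntax, here is a minimal sketch; the image tag and the assumption that the image runs tests by default are illustrative, not the actual isc-tar pipeline:

```yaml
# .github/workflows/ci.yml: build the project image and run the tests inside it.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the test image
        run: docker build -t isc-tar-test .   # illustrative tag
      - name: Run the tests in a container
        run: docker run --rm isc-tar-test     # assumes the image runs tests by default
```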
The Certification Team of InterSystems Learning Services is developing an InterSystems IRIS Developer Professional certification exam, and we are reaching out to our community for feedback that will help us evaluate and establish the contents of this exam.
Flyway is an open-source product used for database migrations, DDL version control, automating database procedures, and more. It is the most widely used tool for database DevOps automation. Would you consider creating IRIS support for Flyway?
I just recently announced my project isc-tar. But sometimes what’s behind the scenes is no less interesting: how it was built, how it works, and what happens around the project. Here is the story:
Windows Subsystem for Linux (WSL) is a feature of Windows that allows you to run a Linux environment on your Windows machine, without the need for a separate virtual machine or dual booting.
WSL is designed to provide a seamless and productive experience for developers who want to use both Windows and Linux at the same time.
In the previous article, we combined ZPM with Config-API to load a configuration on module loading/installation.
That could be useful for small applications, but for a large application, it's not convenient.
https://www.youtube.com/embed/K2xm6LIVA6U