Announcement
Anastasia Dyubaylo · Jul 5, 2021
Hey Developers,
We have some good news for you:
💥 InterSystems AI contest participants can use Embedded Python in their solutions! So if you are not yet a member of the Embedded Python Early Access Program (EAP), now is the time!
Email python-interest@intersystems.com and you'll get FREE access to the InterSystems IRIS Embedded Python features.
In addition, we invite all EAP participants to the special Embedded Python kick-off webinar tomorrow, July 6 at 10:00 AM EDT – an easy start on how to use Embedded Python! You'll see a demonstration of the new features of the data platform, example applications, and of course rewards.
After you become an EAP member, you will receive a special link to join the kick-off webinar:
➡️ RSVP: python-interest@intersystems.com
So,
Now any Python developer can easily join the current InterSystems AI contest!
Duration: June 28 - July 25, 2021
Total prize: $8,750
Don't miss it!
Hey, become a member of the Early Access Program and you will get access to a private EAP channel on our DC Discord Server!
Don't miss such an opportunity!
Note: Didn't find yourself in a private channel? Chat to me in Discord ;)
Article
Piyush Adhikari · Oct 19, 2022
I am demonstrating a use case of how we can create an IRIS Interoperability production for use with an external language. Within Interoperability, InterSystems IRIS has a framework called Production Extension (PEX) that lets us create productions, program them in external languages such as Java or Python, and develop custom inbound and outbound adapters to communicate with other applications. In this demo, I will use a PEX-based production created by @Guillaume.Rongier7183 in Python, together with 'task-specific' business operations created by @Lucas.Enard2487, to interoperate with HuggingFace, a third-party open-source machine learning platform, and bring its machine learning capabilities into InterSystems IRIS via the PEX interoperability framework and Python.
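To give a feel for the framework (a generic sketch, not code from the demo repository; class and method names follow the community iris-pex-embedded-python package and may differ), a Python business operation looks roughly like this:

from grongier.pex import BusinessOperation

class MLOperation(BusinessOperation):
    # Receives a message from the production (e.g. an msg.MLRequest-style request);
    # this is where the HuggingFace model would be called before returning a response.
    def on_message(self, request):
        return request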
Walkthrough video: https://www.loom.com/share/239a15e8c510406faac1bcdea8030d1d
Prerequisite
Docker and Docker Compose
Installation and testing
Clone the repository https://github.com/LucasEnard/Contest-Sustainability
Open the folder where the above repo is cloned, open a command prompt, and start the Docker container using the following command: docker-compose up
The container may start 'unhealthy'; it is also good practice to start the container in detached mode with the command: docker-compose up -d
Identify the port that the localhost is running on by using Docker command line and run the IRIS instance via the docker container
Open the production by clicking on the following link: http://localhost:port/csp/irisapp/EnsPortal.ProductionConfig.zen?PRODUCTION=INFORMATION.QuickFixProduction
There are a few business operations, each for a particular HuggingFace model. Enable an operation and go to its settings.
In the 'Python' section, fill in the fields for the model you would like to use.
For reference, see the content at the following link: https://github.com/LucasEnard/Contest-Sustainability#521-settings
Then click 'Apply'.
Go to 'Actions' and run a test.
In the dialogue box that appears next, select the request type Grongier.PEX.Message.
Set the classname to msg.MLRequest.
The JSON must include the arguments needed by the model.
The following link lists JSON content for some of the models: https://github.com/LucasEnard/Contest-Sustainability#522-testing
Click on ‘Invoke Testing Service’ to see test results.
Click on ‘Visual Trace’ and click on messages to see the response and content.
Exception/s
One exception I came across while running this demo on my machine was the docker container starting as ‘unhealthy’.
Adding the HEALTHCHECK statement as follows in the Dockerfile seems to fix the issue.
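One possible form of such a statement (a sketch, not necessarily the exact one used in this repository; the instance name and timings are assumptions) is to report the container healthy once the IRIS instance inside it is running:

HEALTHCHECK --interval=10s --timeout=5s --start-period=60s --retries=5 \
  CMD iris qlist IRIS | grep -q running || exit 1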
References
Lucas, E. (2022). Contest-Sustainability. [online] GitHub. Available at: https://github.com/LucasEnard/Contest-Sustainability [Accessed 20 Sep. 2022].
Enard, L. (n.d.). Sustainable Machine Learning for the InterSystems Interoperability Contest. [online] InterSystems Developer Community. Available at: https://community.intersystems.com/post/sustainable-machine-learning-intersystems-interoperability-contest [Accessed 20 Sep. 2022].
This is great @Piyush.Adhikari - nice example of InterSystems IRIS for Machine Learning!
Hey @Piyush.Adhikari! Congrats on your first contribution to the Developer Community! Welcome to the club :)
Thank you for using my work and for the credit. I'm glad you found a way to play and learn with it.
Interesting article, thanks for sharing. Nice tip re: HEALTHCHECK.
Thanks @Piyush.Adhikari
Announcement
Anastasia Dyubaylo · Jan 28, 2020
Hi Community!
New "Coding Talk" video is already on InterSystems Developers YouTube:
⏯ How to Install and Use ObjectScript Package Manager with InterSystems IRIS
In this screencast, @Evgeny Shvarov describes how to install and use the ObjectScript Package Manager (ZPM) with InterSystems IRIS.
➡️ Download ZPM from Open Exchange
You can use InterSystems IRIS Community Edition to work with Package Manager: InterSystems IRIS on Docker
Packages we tested in the video (an example install command follows the list):
ObjectScript Math
ZPM command: install objectscript-math
WebTerminal
ZPM command: install webterminal
Samples-BI
ZPM command: install samples-bi
DeepSeeWeb
ZPM command: install dsw
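For example, any of the packages above can be installed from an IRIS terminal in a namespace where ZPM is available, with a single command (shown here for WebTerminal):

USER>zpm "install webterminal"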
And...
You're very welcome to watch all Coding Talks in a dedicated "Coding Talks" playlist on our InterSystems Developers YouTube Channel.
Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Mar 28, 2019
Hi Everyone!
Please meet InterSystems at hub.berlin - Europe's interactive business festival for digital movers and makers on 10 - 11 April 2019 in Berlin.
We look forward to two days of inspirational lectures and intensive technical discussions, and we invite you and your colleagues to our InterSystems booth for a personal conversation. In addition, we'll present a keynote and host a masterclass session.
See the details below.
InterSystems Keynote | 10 April 2019, 11:30 – 11:50
Interoperability enables the next wave of intelligent service-rich applications | Thomas Dyar
The next generation of digital solutions will be increasingly complex, as they leverage more data and a growing array of intelligent services. I will review how these trends shift the challenges from custom software and model development to integrating myriad services and ensuring they interoperate. With traditional programming environments and even dedicated low-code platforms, managing more than a few connected services becomes complex and unwieldy. Also, the volume, velocity and variety of data compounds these integration problems since traditional databases cannot efficiently provide transactional and analytic workload support at scale. Then we will see how developers in logistics, finance and healthcare fields are composing data-intensive, intelligent applications that overcome these obstacles, using new methodologies and platforms specifically designed for this latest phase of technology evolution.
More info here.
InterSystems Masterclass | 10 April 2019, 14:10 – 15:30
Learn how to build and scale intelligent service-rich applications with less custom code | @Benjamin.DeBoe, @Stefan.Wittmann, Thomas Dyar
The new landscape of intelligent services lets you build and scale applications and services through integration and interoperability, while relying less on custom development. Thomas Dyar shows you how you can leverage the wide array of services available in your big data applications, highlighting the critical aspects of interoperability when combining multiple services that consume disparate types of data. Then you'll build an intelligent application using Spark for machine learning and the InterSystems IRIS data platform for service coordination and data management, among other technologies.
What you'll learn and how you can apply it:
Explore intelligent services concepts and best practices for designing analytic applications
Learn design patterns for ingesting, collecting, storing, analyzing, and visualizing big data
Build a data-intensive application using technologies such as InterSystems IRIS and SparkML
And...
If you still do not have a ticket for hub.berlin: with the promo code hb19-intersystems you get a 20% discount on the regular price of the ticket.
Register now and see you at the event!
Question
Evgeny Shvarov · Jul 23, 2019
Hi developers!
Just want to check with you on best practices for this. You collaborate on an InterSystems IRIS repository: you fork it, make changes, commit, push, create a pull request, discuss (if needed), and your PR is accepted. What's next? Do you delete the repository you forked?
Something to note: if you delete the repo, the pull request will show up as "unknown repository" and any history attached to that repo will be lost. Also, any references to it will of course be broken. But deleting the branch is encouraged by GitHub and won't break any references. For me, I don't like broken links and references, but of course there's the argument of wanting a clean profile instead of 1000 old forked repos :)
Good point David, thanks!
Keep the fork for the next time you contribute.
The issue is, if you keep it, GitHub shows that it is "behind the remote origin". How do you fix this? Or how do you deal with this?
I typically leave it for some amount of time. I sometimes go through my repos and delete the stale forks.
Even though it does have the broken links back to the deleted repo, the PR merge will show the commit history in the new repo, which I think is the important part.
Neither alternative - I'd usually __archive__ the repository instead.
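For reference, bringing a fork back in line with its upstream (the "behind the remote origin" situation mentioned above) typically looks like this; the upstream URL is a placeholder:

git remote add upstream https://github.com/intersystems-community/example-repo.git
git fetch upstream
git checkout master
git merge upstream/master
git push origin master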
Announcement
Andreas Dieckow · Oct 17, 2019
With the recent release of macOS 10.15, Apple has tightened its control mechanism, called Gatekeeper, so that it now requires executables to be notarized. InterSystems products are not currently supported for use on macOS 10.15, and the executables have not been notarized. (As a reminder, InterSystems products are supported on macOS as a development platform only.)
InterSystems is working to provide compatibility with macOS 10.15 for future releases of InterSystems IRIS, InterSystems IRIS for Health, Caché, and Ensemble. Until that time, we recommend not running InterSystems products on macOS 10.15. Another option is running InterSystems IRIS in a container on macOS, including 10.15.
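For example, a containerized IRIS instance can be started on macOS roughly like this (the image name and web port shown are the commonly used community defaults, given here only as an illustration):

docker run -d --name iris -p 52773:52773 intersystemsdc/iris-community:latest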
If you have any questions regarding this advisory, please contact the Worldwide Response Center.
Announcement
Anastasia Dyubaylo · Aug 14, 2019
Hi Community!
New "Coding Talk" video is already on InterSystems Developers YouTube:
How to Submit Your InterSystems Solution, Connector or Library to Open Exchange
In this screencast, presented by @Evgeny Shvarov, you will learn how to submit a GitHub application to InterSystems Open Exchange.
Learn more about how to publish your applications on InterSystems Open Exchange in this post.
And...
You're very welcome to watch all Coding Talks in a dedicated "Coding Talks" playlist on our InterSystems Developers YouTube Channel.
Stay tuned!
Article
Evgeny Shvarov · Jun 8, 2023
Hi Community!
Just want to share with you an exercise I made to create "my own" chat with GPT in Telegram.
It became possible because of two components on Open Exchange: Telegram Adapter by @Nikolay.Soloviev and IRIS Open-AI by @Francisco.López1549
So with this example you can set up your own chat with ChatGPT in Telegram.
Let's see how to make it work!
Prerequisites
Create a bot using the @BotFather account and get the Bot Token. Then add the bot to a Telegram chat or channel and give it admin rights. Learn more at https://core.telegram.org/bots/api
Open (create if you don't have it) an account on https://platform.openai.com/ and get your Open AI API Key and Organization id.
Make sure you have IPM installed in your InterSystems IRIS. If not, here is a one-liner to install it:
USER> s r=##class(%Net.HttpRequest).%New(),r.Server="pm.community.intersystems.com",r.SSLConfiguration="ISC.FeatureTracker.SSL.Config" d r.Get("/packages/zpm/latest/installer"),$system.OBJ.LoadStream(r.HttpResponse.Data,"c")
Or you can use community docker image with IPM onboard like this:
$ docker run --rm --name iris-demo -d -p 9092:52797 -e IRIS_USERNAME=demo -e IRIS_PASSWORD=demo intersystemsdc/iris-community:latest
$ docker exec -it iris-demo iris session iris -U USER
USER>
Installation
Install the IPM package in a namespace with Interoperability enabled.
USER>zpm "install telegram-gpt"
Usage
Open the production.
Put your bot's Telegram Token into both the Telegram business service and the Telegram business operation:
Also initialize the St.OpenAi.BO.Api.Connect operation with your ChatGPT API key and Organization id:
Start the production.
Ask any question in the Telegram chat. You'll get an answer via ChatGPT. Enjoy!
And in visual trace:
Details
This example uses version 3.5 of OpenAI's ChatGPT. The model can be changed in the data-transformation rule via the Model parameter.
pic 1 didn't show up
How about now?
It looks like the organization field for the Open AI integration is not mandatory, so only the Telegram Token and ChatGPT key are needed.
Great!!! Good job
Thank you, @Francisco.López1549! And thanks for introducing the chatGPT package to the community!
In a new version it can also be installed as:
USER>zpm "install telegram-gpt -D TgToken=your_telegram_token -D GPTKey=your_ChatGPT_key"
so you can pass the Telegram Token and ChatGPT API key as production parameters.
A new version is coming soon... New features 😉
Looking forward to it!
Announcement
Jacquie Clermont · Nov 30, 2022
Hi Community:
Pleased to let you know that in Forrester's latest "Wave" report on analytical data platforms, we have been designated a "leader."
You can learn more from this InterSystems Press Release, or even better, read The Forrester Wave™: Translytical Data Platforms, Q4 2022.
Article
Megumi Kakechi · May 4, 2023
InterSystems FAQ rubric
In Windows, monitor the processes with the following image names (an example check command follows the list).
[irisdb.exe]
Contains important system processes. * Please refer to the attachment for how to check which important system processes should be monitored.
[IRISservice.exe]
This is the process for managing IRIS instances via Windows services. When this process ends, it does not directly affect the IRIS instance itself, but stopping IRIS (stopping the service) is no longer possible.
[ctelnetd.exe]
This daemon process starts when the %Service_Telnet service is enabled and provides access to IRIS via Telnet. Once this process ends, Telnet access to the IRIS instance is no longer possible.
[iristrmd.exe]
This daemon process starts when the %Service_Console service is enabled (it is enabled by default) and provides access to IRIS from the server's local terminal (from the server's IRIS launcher to the terminal). After this process ends, local terminal access to the IRIS instance is no longer possible.
[iristray.exe]
The process for the IRIS Launcher that appears in the system tray. The presence or absence of this process does not affect the IRIS instance, so there is no particular need to monitor it.
[licmanager.exe]
License server process is started when using multi-server type licenses.
[httpd.exe]
Apache process for the management portal.
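As an illustration (not part of the original FAQ), the presence of these processes can be checked from a Windows command prompt using the standard tasklist filter, for example:

tasklist /FI "IMAGENAME eq irisdb.exe"
tasklist /FI "IMAGENAME eq IRISservice.exe"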
To obtain a list of processes inside IRIS, select the following menu in the Management Portal: [Management Portal] > [System Operation] > [Process].
You can also check the process list from the command line by launching a terminal.
do ALL^%SS
Please see the following document for details: IRIS Core Processes
Announcement
Shane Nowack · Feb 14, 2023
Hello InterSystems HL7 Community,
InterSystems Certification is developing a certification exam for InterSystems HL7 interface specialists and, if you match the exam candidate description given below, we would like you to beta test the exam. The dates the exam will be available for beta testing are March 28 - April 30, 2023. Interested beta testers should sign up now by emailing certification@intersystems.com (see below for more details).
Note: The InterSystems HL7 Interface Specialist certification exam is version 2.0 of our HealthShare Health Connect HL7 Interface Specialist certification exam, which will be retired at the time of the release of this exam. The expiration dates for individuals currently holding the HealthShare Health Connect HL7 Interface Specialist credential will NOT change from when the credential was earned. However, the name on your digital badge will be updated to reflect the new name at the time of release.
FAQs
What are my responsibilities as a beta tester?
You will be assigned the exam and will need to take it within one month of the beta release. The exam will be administered in an online proctored environment, free of charge (the standard fee of $150 per exam is waived for all beta testers), and then the InterSystems Certification Team will perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and after the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification!
Note: Beta test scores are completely confidential.
Can I beta test the exam if I am already a certified HealthShare Health Connect HL7 Interface Specialist?
Yes! If you receive a passing score on your beta test, your credential expiration date will be updated to be 5 years from the release date of the new exam.
Exam Details
Exam title: InterSystems HL7 Interface Specialist
Candidate description: An IT professional who:
designs, builds, and performs basic troubleshooting of HL7 interfaces with InterSystems products, and
has at least six months full-time experience in the technology.
Number of questions: 68
Time allotted to take exam: 2 hours
Recommended preparation:
Classroom course Building and Managing HL7 Integrations or equivalent experience. Online courses Configuring Validation of HL7 V2 Messages in Productions, Building Basic HL7 V2 Integrations with InterSystems, and Using HL7 V2 Bracket and Parentheses Syntax To Identify Virtual Properties are recommended, as well as experience searching InterSystems Documentation.
Recommended practical experience:
6 months - 1 year designing, building, and performing basic troubleshooting of HL7 interfaces with InterSystems products version 2019.1 or higher.
Exam practice questions
A set of practice questions will be sent to you via email when we notify you of the beta release.
Exam format
The questions are presented in two formats: multiple choice and multiple response.
System requirements for beta testing
Version 6.1.34.2 or later of Questionmark Secure
Adobe Acrobat set as the default PDF viewer on your system
Exam topics and content
The exam contains question items that cover the areas for the stated role as shown in the KSA (Knowledge, Skills, Abilities) chart immediately below.
KSA Group | KSA Group Description | KSA Description | Target items
T1
Designs HL7 productions
Interprets interface requirements
Determines productions and namespaces needed
Determines appropriate production components and the flow of messages
Determines production needs from interface specifications
Determines data transformation needs
Determines validation settings
Designs routing rules
Chooses production architecture components
Identifies basic functionality of production components
Identifies adapters used by built-in HL7 components
Identifies the components in a production diagram
Names production components, rules, and DTLs according to conventions
Designs custom schemas
Identifies custom segments in custom schema categories
Determines where sample messages deviate from schema requirements
T2
Builds HL7 productions
Adds production components to build interfaces
Adds production components to productions
Imports and exports productions and their components using the deploy tool
Creates and applies routing rules
Creates and interprets rule sets
Accesses HL7 message contents using expression editor
Identifies how constraints affect code completion in the expression editor
Uses virtual property path syntax to implement rule conditions
Uses virtual property syntax and utility functions to retrieve HL7 data
Applies foreach actions
Determines problems within routing rules
Applies key configuration settings in productions
Identifies key configuration settings in business services and operations
Maps key settings to correct components
Configures pool size and actor pool size settings to ensure FIFO
Configures alert configuration settings
Configures failure timeout setting to ensure FIFO
Configures and uses credentials
Identifies behavior caused by using system default settings
Uses DTL Editor to build DTLs
Configures source and target message classes
Adds functions to DTL expressions
Differentiates between Create New versus Create Copy settings
Applies foreach actions
Applies if actions
Applies group actions
Applies switch actions
Tests DTLs
Creates custom schemas
Determines custom schema requirements
Creates new custom schemas based on requirements
Identifies segment characteristics from message structure
Applies ACK/NACK functionality
Selects appropriate ACK mode settings
Identifies default ACK/NACK settings for business service
Determines reply code actions for desired behaviors
Manages messages
Purges messages manually
Purges messages automatically
Ensures purge tasks are running
T3
Troubleshoots HL7 productions
Identifies and uses tools for troubleshooting
Uses production configuration page
Configures Archive I/O setting
Identifies the name of the central alert component
Uses bad message handler
Uses Jobs page, Messages page, Production Monitor page, and Queues page
Identifies root cause of production problem states
Tests message routing rules using testing tool
Uses Visual Trace
Locates session ID of a message
Interprets information displayed in Visual Trace
Interprets different icons in the Visual Trace
Locates information in tabs of Visual Trace
Determines causes of alerts
Troubleshoots production configuration problems
Uses Message Viewer
Optimizes search options
Searches messages using Basic Criteria
Searches messages using Extended Criteria
Uses search tables in productions
Resends Messages
Troubleshoots display problems in Message Viewer
Uses logs for debugging
Uses Business Rule Log
Uses the Event Log to examine log entries
Identifies auditable events
Searches the Event Log
Interested in participating? Email certification@intersystems.com now to sign up!
Is it still possible to participate?
Article
Eduard Lebedyuk · Apr 6, 2018
In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:
Git 101
Git flow (development process)
GitLab installation
GitLab Workflow
Continuous Delivery
GitLab installation and configuration
GitLab CI/CD
Why containers?
Containers infrastructure
GitLab CI/CD using containers
In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.
In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.
In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.
In the fourth article, we wrote a CD configuration.
In the fifth article, we talked about containers and how (and why) they can be used.
In this article, let's discuss the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.
Our configuration would look like this:
Here we can see the separation of the three main stages:
Build
Ship
Run
Build
In the previous parts, the build was often incremental - we calculated the difference between the current environment and the current codebase and modified our environment to correspond to the codebase. With containers, each build is a full build. The result of a build is an image that can be run anywhere with no dependencies.
Ship
After our image is built and has passed the tests, it is uploaded into the registry - a specialized server that hosts docker images. There it can replace the previous image with the same tag. For example, due to a new commit to the master branch we build a new image (project/version:master), and if the tests pass we can replace the image in the registry with the new one under the same tag, so everyone who pulls project/version:master gets the new version.
Run
Finally, our images are deployed. A CI solution such as GitLab can control that, or a specialized orchestrator, but the point is the same - some images are executed, periodically checked for health and updated if a new image version becomes available.
Check out the docker webinar explaining these different stages.
Alternatively, from the commit point of view:
In our delivery configuration we would:
Push code into the GitLab repository
Build a docker image
Test it
Publish the image to our docker registry
Swap the old container with the new version from the registry
To do that we'll need:
Docker
Docker registry
Registered domain (optional but preferable)
GUI tools (optional)
Docker
First of all, we need to run docker somewhere. I'd recommend starting with one server running a more mainstream Linux flavor like Ubuntu, RHEL or SUSE. Don't use cloud-oriented distributions like CoreOS, RancherOS etc. - they are not really aimed at beginners. Don't forget to switch the storage driver to devicemapper. If we're talking about big deployments, then using container orchestration tools like Kubernetes, Rancher or Swarm can automate most tasks, but we're not going to discuss them (at least in this part).
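To make the Build, Ship and Run stages concrete, the cycle looks roughly like this in shell terms (a sketch only; the registry host, image tag and container name are placeholders):

docker build -t docker.domain.com/project/version:master .
docker push docker.domain.com/project/version:master
# on the server that runs the application:
docker pull docker.domain.com/project/version:master
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp docker.domain.com/project/version:master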
Docker registry
That's the first container we need to run, and it is a stateless, scalable server-side application that stores and lets you distribute Docker images. You should use the Registry if you want to:
tightly control where your images are being stored
fully own your image distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
Here's the registry documentation.
Connecting registry and GitLab
Note: GitLab includes an in-built registry. You can run it instead of an external registry. Read the GitLab docs linked in this paragraph.
To connect your registry to GitLab, you'll need to run your registry with HTTPS support - I use Let's Encrypt to get certificates, and I followed this Gist to get certificates and pass them into a container. After making sure that the registry is available over HTTPS (you can check from a browser), follow these instructions on connecting the registry to GitLab. These instructions differ based on what you need and your GitLab installation; in my case, configuration consisted of adding the registry certificate and key (properly named and with correct permissions) to /etc/gitlab/ssl and these lines to /etc/gitlab/gitlab.rb:
registry_external_url 'https://docker.domain.com'
gitlab_rails['registry_api_url'] = "https://docker.domain.com"
And after reconfiguring GitLab I could see a new Registry tab where we're provided with the information on how to properly tag newly built images, so they would appear here.
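For reference, the registry container itself can be started along these lines (a sketch; the certificate paths and domain are placeholders, while the TLS environment variables are the standard options of the official registry:2 image):

docker run -d --restart=always --name registry -p 443:5000 \
  -v /etc/letsencrypt/live/docker.domain.com:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain.pem \
  -e REGISTRY_HTTP_TLS_KEY=/certs/privkey.pem \
  registry:2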
Domain
In our Continuous Delivery configuration, we would automatically build an image per branch and if the image passes tests then we'd publish it in the registry and run it automatically, so our application would be automatically available in all "states", for example, we can access:
Several feature branches at <featureName>.docker.domain.com
Test version at master.docker.domain.com
Preprod version at preprod.docker.domain.com
Prod version at prod.docker.domain.com
To do that, we need a domain name and a wildcard DNS record that points *.docker.domain.com to the IP address of docker.domain.com. Another option would be to use different ports.
Nginx proxy
As we have several feature branches we need to redirect subdomains automatically to the correct container. To do that we can use Nginx as a reverse proxy. Here's a guide.
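One common way to do this (not necessarily the one in the linked guide) is the nginx-proxy container, which watches the Docker socket and generates virtual host configuration for running containers automatically:

docker run -d --name nginx-proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

Containers started with an environment variable such as VIRTUAL_HOST=master.docker.domain.com are then routed by subdomain.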
GUI tools
To start working with containers you can use either command line or one of the GUI interfaces. There are many available, for example:
Rancher
MicroBadger
Portainer
Simple Docker UI
...
They allow you to create containers and manage them from the GUI instead of the CLI. Here's what Rancher looks like:
GitLab runner
Same as before, to execute scripts on other servers we'll need to install a GitLab runner. I discussed that in the third article.
Note that you'll need to use the Shell executor and not the Docker executor. The Docker executor is used when you need something from inside the image - for example, you're building an Android application in a Java container and you only need an apk. In our case, we need a whole container, and for that we need the Shell executor.
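For reference, a runner registered with the Shell executor ends up with an entry in /etc/gitlab-runner/config.toml roughly like this (a sketch; the name, URL and token are placeholders):

[[runners]]
  name = "test-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"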
Conclusion
It's easy to start running containers and there are many tools to choose from.
Continuous Delivery using containers differs from the usual Continuous Delivery configuration in several ways:
Dependencies are satisfied at build time, and after the image is built you don't need to think about dependencies.
Reproducibility - you can easily reproduce any existing environment by running the same container locally.
Speed - as containers do not contain anything except what you explicitly added, they can be built faster; more importantly, they are built once and used whenever required.
Efficiency - as above, containers produce less overhead than, for example, VMs.
Scalability - with orchestration tools you can automatically scale your application to the workload and consume only the resources you need right now.
What's next
In the next article, we'll create a CD configuration that leverages an InterSystems IRIS Docker container.
Announcement
Evgeny Shvarov · Apr 11, 2018
Hi, Community!
Continuous Delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production.
Join us at 07:00 UTC, April 24th for a webinar with a live demo "Git flows and Continuous Delivery" by @Eduard.Lebedyuk
The language of the webinar is Russian.
Also, see the related articles on DC.
This webinar recording is available in a dedicated Webinars in Russian playlist on InterSystems Developers YouTube Channel. Enjoy it!
Article
Eduard Lebedyuk · Mar 20, 2018
In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:
Git 101
Git flow (development process)
GitLab installation
GitLab Workflow
Continuous Delivery
GitLab installation and configuration
GitLab CI/CD
In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.
In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.
In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab
In this article we'll finally write a CD configuration.
Plan
Environments
First of all, we need several environments and branches that correspond to them:
Environment | Branch  | Delivery                                | Who can commit     | Who can merge
Test        | master  | Automatic                               | Developers, Owners | Developers, Owners
Preprod     | preprod | Automatic                               | No one             | Owners
Prod        | prod    | Semiautomatic (press button to deliver) | No one             | Owners
Development cycle
And as an example, we'll develop one new feature using GitLab flow and deliver it using GitLab CD.
Feature is developed in a feature branch.
Feature branch is reviewed and merged into the master branch.
After a while (several features merged) master is merged into preprod
After a while (user testing, etc.) preprod is merged into prod
Here's how it would look (I have marked the parts that we need to develop for CD in italics):
Development and testing
Developer commits the code for the new feature into a separate feature branch
After feature becomes stable, the developer merges our feature branch into the master branch
Code from the master branch is delivered to the Test environment, where it's loaded and tested
Delivery to the Preprod environment
Developer creates merge request from master branch into the preprod branch
Repository Owner after some time approves merge request
Code from the preprod branch is delivered to the Preprod environment
Delivery to the Prod environment
Developer creates merge request from preprod branch into the prod branch
Repository Owner after some time approves merge request
Repository owner presses "Deploy" button
Code from prod branch is delivered to the Prod environment
Or the same but in a graphic form:
Application
Our application consists of two parts:
REST API developed on InterSystems platform
Client JavaScript web application
Stages
From the plan above we can determine the stages that we need to define in our Continuous Delivery configuration:
Load - to import server-side code into InterSystems IRIS
Test - to test client and server code
Package - to build client code
Deploy - to "publish" client code using web server
Here's how it looks like in .gitlab-ci.yml configuration file:
stages:
- load
- test
- package
- deploy
Scripts
Load
Next, let's define the scripts. Scripts docs. First, let's define a script called load server that loads the server-side code:
load server:
environment:
name: test
url: http://test.hostname.com
only:
- master
tags:
- test
stage: load
script: csession IRIS "##class(isc.git.GitLab).load()"
What happens here?
load server is a script name
next, we describe the environment where this script runs
only: master - tells GitLab that this script should be run only when there's a commit to master branch
tags: test specifies that this script should only run on a runner which has test tag
stage specifies stage for a script
script defines code to execute. In our case, we call classmethod load from isc.git.GitLab class
Important note
For InterSystems IRIS replace csession with iris session.
For Windows use: irisdb -s ../mgr -U TEST "##class(isc.git.GitLab).load()"
Now let's write the corresponding isc.git.GitLab class. All entry points in this class look like this:
ClassMethod method()
{
try {
// code
halt
} catch ex {
write !,$System.Status.GetErrorText(ex.AsStatus()),!
do $system.Process.Terminate(, 1)
}
}
Note that this method can end in two ways:
by halting current process - that registers in GitLab as a successful completion
by calling $system.Process.Terminate - which terminates the process abnormally and GitLab registers this as an error
That said, here's our load code:
/// Do a full load
/// do ##class(isc.git.GitLab).load()
ClassMethod load()
{
try {
set dir = ..getDir()
do ..log("Importing dir " _ dir)
do $system.OBJ.ImportDir(dir, ..getExtWildcard(), "c", .errors, 1)
throw:$get(errors,0)'=0 ##class(%Exception.General).%New("Load error")
halt
} catch ex {
write !,$System.Status.GetErrorText(ex.AsStatus()),!
do $system.Process.Terminate(, 1)
}
}
Two utility methods are called:
getExtWildcard - to get a list of relevant file extensions
getDir - to get repository directory
How can we get the directory?
When GitLab executes a script, it first sets a lot of environment variables. One of them is CI_PROJECT_DIR - the full path where the repository is cloned and where the job is run. We can easily get it in our getDir method:
ClassMethod getDir() [ CodeMode = expression ]
{
##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
}
Tests
Here's test script:
load test:
environment:
name: test
url: http://test.hostname.com
only:
- master
tags:
- test
stage: test
script: csession IRIS "##class(isc.git.GitLab).test()"
artifacts:
paths:
- tests.html
What changed? The name and script code, of course, but an artifact was also added. An artifact is a list of files and directories that are attached to a job after it completes successfully. In our case, after the tests are completed, we can generate an HTML page redirecting to the test results and make it available from GitLab.
Note that there's a lot of copy-paste from the load stage - the environment is the same. Script parts, such as environments, can be defined separately as labels and attached to a script. Let's define the test environment:
.env_test: &env_test
environment:
name: test
url: http://test.hostname.com
only:
- master
tags:
- test
Now our test script looks like this:
load test:
<<: *env_test
script: csession IRIS "##class(isc.git.GitLab).test()"
artifacts:
paths:
- tests.html
Next, let's execute the tests using UnitTest framework.
/// do ##class(isc.git.GitLab).test()
ClassMethod test()
{
try {
set tests = ##class(isc.git.Settings).getSetting("tests")
if (tests'="") {
set dir = ..getDir()
set ^UnitTestRoot = dir
$$$TOE(sc, ##class(%UnitTest.Manager).RunTest(tests, "/nodelete"))
$$$TOE(sc, ..writeTestHTML())
throw:'..isLastTestOk() ##class(%Exception.General).%New("Tests error")
}
halt
} catch ex {
do ..logException(ex)
do $system.Process.Terminate(, 1)
}
}
The tests setting, in this case, is a path relative to the repository root where unit tests are stored. If it's empty, then we skip the tests. The writeTestHTML method is used to output HTML with a redirect to the test results:
ClassMethod writeTestHTML()
{
set text = ##class(%Dictionary.XDataDefinition).IDKEYOpen($classname(), "html").Data.Read()
set text = $replace(text, "!!!", ..getURL())
set file = ##class(%Stream.FileCharacter).%New()
set name = ..getDir() _ "tests.html"
do file.LinkToFile(name)
do file.Write(text)
quit file.%Save()
}
ClassMethod getURL()
{
set url = ##class(isc.git.Settings).getSetting("url")
set url = url _ $system.CSP.GetDefaultApp("%SYS")
set url = url_"/%25UnitTest.Portal.Indices.cls?Index="_ $g(^UnitTest.Result, 1) _ "&$NAMESPACE=" _ $zconvert($namespace,"O","URL")
quit url
}
ClassMethod isLastTestOk() As %Boolean
{
set in = ##class(%UnitTest.Result.TestInstance).%OpenId(^UnitTest.Result)
for i=1:1:in.TestSuites.Count() {
#dim suite As %UnitTest.Result.TestSuite
set suite = in.TestSuites.GetAt(i)
return:suite.Status=0 $$$NO
}
quit $$$YES
}
XData html
{
<html lang="en-US">
<head>
<meta charset="UTF-8"/>
<meta http-equiv="refresh" content="0; url=!!!"/>
<script type="text/javascript">
window.location.href = "!!!"
</script>
</head>
<body>
If you are not redirected automatically, follow this <a href='!!!'>link to tests</a>.
</body>
</html>
}
Package
Our client is a simple HTML page:
<html>
<head>
<script type="text/javascript">
function initializePage() {
var xhr = new XMLHttpRequest();
var url = "${CI_ENVIRONMENT_URL}:57772/MyApp/version";
xhr.open("GET", url, true);
xhr.send();
xhr.onloadend = function (data) {
document.getElementById("version").innerHTML = "Version: " + this.response;
};
var xhr = new XMLHttpRequest();
var url = "${CI_ENVIRONMENT_URL}:57772/MyApp/author";
xhr.open("GET", url, true);
xhr.send();
xhr.onloadend = function (data) {
document.getElementById("author").innerHTML = "Author: " + this.response;
};
}
</script>
</head>
<body onload="initializePage()">
<div id = "version"></div>
<div id = "author"></div>
</body>
</html>
And to build it, we need to replace ${CI_ENVIRONMENT_URL} with its value. Of course, a real-world application would probably require npm, but this is just an example. Here's the script:
package client:
<<: *env_test
stage: package
script: envsubst < client/index.html > index.html
artifacts:
paths:
- index.html
Deploy
And finally, we deploy our client by copying index.html into webserver root directory.
deploy client:
<<: *env_test
stage: deploy
script: cp -f index.html /var/www/html/index.html
That's it!
Several environments
What to do if you need to execute the same (similar) script in several environments? Script parts can also be labels, so here's a sample configuration that loads code in test and preprod environments:
stages:
- load
- test
.env_test: &env_test
environment:
name: test
url: http://test.hostname.com
only:
- master
tags:
- test
.env_preprod: &env_preprod
environment:
name: preprod
url: http://preprod.hostname.com
only:
- preprod
tags:
- preprod
.script_load: &script_load
stage: load
script: csession IRIS "##class(isc.git.GitLab).loadDiff()"
load test:
<<: *env_test
<<: *script_load
load preprod:
<<: *env_preprod
<<: *script_load
This way we can avoid copy-pasting the code.
Complete CD configuration is available here. It follows the original plan of moving code between test, preprod and prod environments.
Conclusion
Continuous Delivery can be configured to automate any required development workflow.
Links
Hooks repository (and sample configuration)
Test repository
Scripts docs
Available environment variables
What's next
In the next article, we'll create a CD configuration that leverages an InterSystems IRIS Docker container.
Hi Eduard,
I had a look at your continuous delivery articles and found them awesome! I tried to set up a similar environment but I'm struggling with a detail... Hope you'll be able to help me out.
I currently have a working gitlab-runner installed on my Windows laptop with a working Ensemble 2018.1.1 with the isc.gitlab package you provided.
C:\Users\gena6950>csession ENS2K18 -U ENSCHSPROD1 "##class(isc.git.GitLab).test()"
===============================================================================
Directory: C:\Gitlab-Runner\builds\ijGUv41q\0\ciussse-drit-srd\ensemble-continuous-integration-tests\Src\ENSCHS1\Tests\Unit\
===============================================================================
[...]
Use the following URL to view the result:
http://10.225.31.79:8971/csp/sys/%25UnitTest.Portal.Indices.cls?Index=23&$NAMESPACE=ENSCHSPROD1
All PASSED
C:\Users\gena6950>
I had to manually "alter" the .yml file because of a new bug with parentheses in the gitlab-runner shell commands (see https://gitlab.com/gitlab-org/gitlab-runner/issues/1941). Relevant parts of this file are below (the file itself is larger but I think it's irrelevant). I put an "echo" there to see how the command was received by the runner.
stages:
  - load
  - test
  - package
variables:
  LOAD_DIFF_CMD: "##class(isc.git.GitLab).loadDiff()"
  TEST_CMD: "##class(isc.git.GitLab).test()"
  PACKAGE_CMD: "##class(isc.git.GitLab).package()"
.script_test: &script_test
  stage: test
  script:
    - echo csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
    - csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
  artifacts:
    paths:
      - tests.html
And this is the output seen by GitLab:
Running with gitlab-runner 11.7.0 (8bb608ff) on Laptop ACGendron ijGUv41q
Using Shell executor...
Running on CH05CHUSHDP1609...
Fetching changes...
Removing tests.html
HEAD is now at b1ef284 Ajout du fichier de config du pipeline Gitlab
Checking out b1ef284e as master...
Skipping Git submodules setup
$ echo csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
csession ENS2K18 -U ENSCHSPROD1 "##class(isc.git.GitLab).test()"
$ csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
<NOTOPEN>
ERROR: Job failed: exit status 1
I'm pretty sure it must be a small thing but I can't put my finger on it!
Hope you'll be able to help!
Kind regards,
Andre-Claude
I have not tested the code on Windows, but here's my idea. As you can see in the code of the test method, in case of exceptions I end all my processes with
do $system.Process.Terminate(, 1)
it seems this path is getting hit.
How to fix this exception:
Check that the test method actually gets called - write to a global in the first line.
In the exception handler, add do ex.Log() and check the application error log to see exception details.
Thank you, I will have a look at that. Also, I'm replicating my setup on a Red Hat gitlab-runner; I'll update this post if I find my way out of this on Windows. I also noticed that the "ENVIRONMENT" variables were not passed in a way that csession understands. The $system.Util.GetEnviron("CI_PROJECT_DIR") and $system.Util.GetEnviron("CI_COMMIT_SHA") calls both return an empty string. Perhaps the <NOTOPEN> is related to the way stdout is read on Windows.
Thanks for this detailed article, but I have a few questions in mind (you have probably faced/answered them when you implemented this solution):
1) Are these CI/CD builds on the corresponding servers clean builds? If the commit is about deleting 4-5 classes, then unless we do a "Delete Exclusive/all files" on the servers, a regular load from the sandbox may not overwrite/delete the files that are present on the server where we are building.
2) Are the Ensemble production classes stored as one production file but in different versions (with different settings) in the respective branches? Or is there a dedicated production file for each branch/server, so developers merge items (Business Service, Process, etc.) as they move them from one branch to another? What is the best approach that supports this model?
Per your second question, best practice is generally to use System Defaults, which are set in your namespace and store the production settings (rather than storing them in the production class). This allows you to avoid having differences in the production class between branches.
Interesting questions. There are several ways to achieve clean and pseudo-clean builds:
Containers. Clean builds every time. The next articles in the series explore how containers can be used for CI/CD.
Hooks. Currently I have implemented one-time and every-time hooks before and after the build. They can be used to do deletion, configuration, etc.
Recreate. Add an action to delete before the build: DBs, namespaces, roles, web apps, anything else you created.
I agree with @Benjamin.Spead here. System default settings are the way to go. If you're working outside of the Ensemble architecture, you can create a small settings class which gets the data from a global/table and use that. Example.
I agree, but unfortunately Portal's edit forms for config items always apply settings into the production class (the XData block). Even worse, Portal ignores the source control status of the production class, so you can't prevent these changes. Portal users have to go elsewhere to add/edit System Default values. It's far from obvious, nor is it easy. And because they don't have to (i.e. the users can just make the edit in the Portal panel and save it), nobody does. We raised all this years ago, but so far it's remained unaddressed.
See also https://community.intersystems.com/post/system-default-settings-versus-production-settings#answer-429986
Solution for this issue was posted here.
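For illustration of the System Default Settings approach recommended above (a sketch, not from the thread; it assumes the standard Ens.Config.DefaultSettings class, and the production, item and setting names are hypothetical):

// create a system default setting programmatically instead of storing it in the production class
set setting = ##class(Ens.Config.DefaultSettings).%New()
set setting.ProductionName = "MyPackage.MyProduction"
set setting.ItemName = "MyOperation"
set setting.SettingName = "Port"
set setting.SettingValue = "8080"
write setting.%Save()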
Article
Eduard Lebedyuk · Mar 26, 2018
In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:
Git 101
Git flow (development process)
GitLab installation
GitLab Workflow
Continuous Delivery
GitLab installation and configuration
GitLab CI/CD
Why containers?
GitLab CI/CD using containers
In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.
In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.
In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.
In the fourth article, we wrote a CD configuration.
In this article, let's talk about containers and how (and why) they can be used.
This article assumes familiarity with the concepts of docker and container. Check out these articles by @Luca.Ravazzolo if you want to read about containers and images.
Advantages
There are many advantages to using containers:
Portability
Efficiency
Isolation
Lightweight
Immutability
Let's talk about them in detail.
Portability
A container wraps up an application with everything it needs to run, like configuration files and dependencies. This enables you to easily and reliably run applications on different environments such as your local desktop, physical servers, virtual servers, testing, staging and production environments, and public or private clouds. Another part of portability is that once you have built your Docker image and checked that it runs correctly, it will run anywhere else that runs docker, which today means Windows, Linux and macOS servers.
Efficiency
You really only need your application process to run, not all the system software. Containers provide exactly that - they run only the processes you explicitly need and nothing else. Since containers do not require a separate operating system, they use up fewer resources. While a VM often measures several gigabytes in size, a container usually measures only a few hundred megabytes, making it possible to run many more containers than VMs on a single server. Since containers have a higher utilization level with regard to the underlying hardware, you require less hardware, resulting in a reduction of bare metal costs as well as datacenter costs.
Isolation
Containers isolate your application from everything else, and while several containers can run on the same server, they can be completely independent of each other. Any interaction between containers should be explicitly declared as such. If one container fails, it doesn't affect others and can be quickly restarted. Security also benefits from such isolation. For example, exploiting a web server vulnerability on bare metal can potentially give an attacker access to the whole server, but in the case of containers, the attacker would only get access to the web server container.
Lightweight
Since containers do not require a separate OS, they can be started, stopped or rebooted in a matter of seconds, which speeds up all related development pipelines and time to production. You can start working sooner and spend zero time on configuring.
Immutability
Immutable infrastructure is comprised of immutable components that are replaced for every deployment, rather than being updated in place. Those components are started from a common image that is built once per deployment and can be tested and validated.
Immutability reduces inconsistency and allows replication and moving between different states of your application with ease. More on immutability.
New possibilities
All these advantages allow us to manage our infrastructure and workflow in entirely new ways.
Orchestration
There is a problem with bare metal or VM environments - they gain individuality, which brings many, usually unpleasant, surprises down the road. The answer to that is Infrastructure as Code - management of infrastructure in a descriptive model, using the same versioning that the DevOps team uses for source code.
With Infrastructure as Code, a deployment command always sets the target environment into the same configuration, regardless of the environment's starting state. It is achieved by either automatically configuring an existing target or by discarding the existing target and recreating a fresh environment.
Accordingly, with Infrastructure as Code, teams make changes to the environment description and version the configuration model, which is typically in well-documented code formats such as JSON. The release pipeline executes the model to configure target environments. If the team needs to make changes, they edit the source, not the target.
All this is possible and much easier to do with containers. Shutting down a container and starting a new one takes a few seconds, while provisioning a new VM takes a few minutes. And I'm not even talking about rolling back a server into a clean state.
Scaling
From the previous point, you may get the idea that infrastructure as code is static by itself. That's not so, as orchestration tools can also provide horizontal scaling (provisioning more of the same) based on the current workload. You should only run what is currently required and scale your application accordingly. This can also reduce costs.
Conclusion
Containers can streamline your development pipeline. Elimination of inconsistencies between environments allows for easier testing and debugging. Orchestration allows you to build scalable applications. Deployment or rollback to any point of immutable history is possible and easy.
Organizations want to work at a higher level, where all the issues listed above are already solved and where schedulers and orchestrators handle more things in an automated way for us.
What's next
In the next article, we'll talk about provisioning with containers and create a CD configuration that leverages an InterSystems IRIS Docker container.