Announcement
Jeff Fried · Nov 4, 2019
The 2019.1.1 versions of InterSystems IRIS and IRIS for Health are now Generally Available!
These are maintenance releases in the EM (Extended Maintenance) stream. The changes are reflected in the 2019.1 documentation, which is available online and features a new look including a card-style TOC layout.
The build number for these releases is 2019.1.1.612.0. A full set of kits and containers for both products is available from the WRC Software Distribution site, including Community Editions of InterSystems IRIS and IRIS for Health.
This release also adds support for Red Hat Enterprise Linux 8, in addition to the previously supported platforms detailed in the 2019.1 Supported Platforms document.
InterSystems IRIS Data Platform 2019.1.1 includes maintenance updates in a number of areas, as described in the online documentation here.
It also includes three new features, described in the online documentation here:
Support for the InterSystems API Manager
X12 element validation
In-place conversion from Caché and Ensemble to InterSystems IRIS
IRIS for Health 2019.1.1 includes maintenance updates in a number of areas, as described in the online documentation here.
It also includes two new features, described in the online documentation here:
Support for the InterSystems API Manager
X12 element validation
Article
Mikhail Khomenko · Feb 11, 2020
In an earlier article (we hope you’ve read it), we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why then would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don’t need to rely on some external, albeit cool, service.
In this article we’re going to try using GitHub Actions to deploy the server part of InterSystems Package Manager, ZPM-registry, on Google Kubernetes Engine (GKE).
As with all systems, the build/deploy process essentially comes down to “do this, go there, do that,” and so on. With GitHub Actions, each such action is a job that consists of one or more steps, together known as a workflow. GitHub will search for a description of the workflow in the YAML file (any filename ending in .yml or .yaml) in your .github/workflows directory. See Core concepts for GitHub Actions for more details.
All further actions will be performed in the fork of the ZPM-registry repository. We’ll call this fork "zpm-registry" and refer to its root directory as "<root_repo_dir>" throughout this article. To learn more about the ZPM application itself see Introducing InterSystems ObjectScript Package Manager and The Anatomy of ZPM Module: Packaging Your InterSystems Solution.
All code samples are stored in this repository to simplify copying and pasting. The prerequisites are the same as in the article Automating GKE creation on CircleCI builds.
We’ll assume you’ve read the earlier article and already have a Google account, and that you’ve created a project named "Development," as in the previous article. In this article, its ID is shown as <PROJECT_ID>. In the examples below, change it to the ID of your own project.
Keep in mind that Google isn’t free, although it has a free tier. Be sure to control your expenses.
Workflow Basics
Let’s get started.
A simple and useless workflow file might look like this:
$ cd <root_repo_dir>
$ mkdir -p .github/workflows
$ cat <root_repo_dir>/.github/workflows/workflow.yaml
name: Traditional Hello World
on: [push]
jobs:
  courtesy:
    name: Greeting
    runs-on: ubuntu-latest
    steps:
    - name: Hello world
      run: echo "Hello, world!"
On every push to the repository, GitHub executes a job named "Greeting," which consists of a single step: printing a welcome phrase. The job runs on a GitHub-hosted virtual machine called a runner, with the latest version of Ubuntu installed.
After pushing this file to the repository, you should see on the GitHub Code tab that everything went well:
If the job had failed, you’d see a red X instead of a green checkmark. To see more, click on the green checkmark and then on Details. Or you can immediately go to the Actions tab:
You can learn all about the workflow syntax in the help document Workflow syntax for GitHub Actions.
If your repository contains a Dockerfile for the image build, you could replace the "Hello world" step with something more useful like this example from starter-workflows:
steps:
- uses: actions/checkout@v2
- name: Build the Docker image
  run: docker build . --file Dockerfile --tag my-image:$(date +%s)
Notice that a new step, "uses: actions/checkout@v2", was added here. Judging by the name "checkout", it clones the repository, but where can you find out more?
As in the case of CircleCI, many useful steps don’t need to be rewritten. Instead, you can take them from the shared resource called the Marketplace. Look there for the desired action, and note that it’s better to take those marked "By actions" (hovering over the badge shows "Creator verified by GitHub").
The "uses" clause in the workflow reflects our intention to use a ready-made module, rather than writing one ourselves.
The implementations of the actions themselves can be written in almost any language, but JavaScript is preferred. If your action is written in JavaScript (or TypeScript), it will be executed directly on the Runner machine. For other implementations, the Docker container you specify will run with the desired environment inside, which is obviously somewhat slower. You can read more about actions in the aptly titled article, About actions.
The checkout action is written in TypeScript. The Terraform action in our example, by contrast, is a regular bash script launched in a Docker Alpine container.
There’s a Dockerfile in our cloned repository, so let's try to apply our new knowledge. We’ll build the image of the ZPM registry and push it into the Google Container Registry. In parallel, we’ll create the Kubernetes cluster in which this image will run, and we’ll use Kubernetes manifests to do this.
Here’s what our plan, in a language that GitHub understands, will look like (but keep in mind that this is a bird's eye view with many lines omitted for simplification, so don’t actually use this config):
name: Workflow description
# Trigger condition. In this case, only on push to 'master' branch
on:
  push:
    branches:
    - master
# Here we describe environment variables available
# for all further jobs and their steps.
# These variables can be initialized on the GitHub Secrets page.
# We add "${{ secrets }}" to refer to them
env:
  PROJECT_ID: ${{ secrets.PROJECT_ID }}
# Define a jobs list. Job and step names can be arbitrary, but
# it's better to make them meaningful
jobs:
  gcloud-setup-and-build-and-publish-to-GCR:
    name: Setup gcloud utility, Build ZPM image and Publish it to Container Registry
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Setup gcloud cli
    - name: Configure docker to use the gcloud as a credential helper
    - name: Build ZPM image
    - name: Publish ZPM image to Google Container Registry
  gke-provisioner:
    name: Provision GKE cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Terraform init
    - name: Terraform validate
    - name: Terraform plan
    - name: Terraform apply
  kubernetes-deploy:
    name: Deploy Kubernetes manifests to GKE cluster
    needs:
    - gcloud-setup-and-build-and-publish-to-GCR
    - gke-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Replace placeholders with values in statefulset template
    - name: Setup gcloud cli
    - name: Apply Kubernetes manifests
This is the skeleton of the working config; the muscles, that is, the real actions for each step, are still missing. A step can run a simple console command ("run", or "run |" for several commands):
- name: Configure docker to use gcloud as a credential helper
  run: |
    gcloud auth configure-docker
You can also launch actions as a module with "uses":
- name: Checkout
  uses: actions/checkout@v2
By default, all jobs run in parallel, and the steps in them are done in sequence. But by using "needs", you can specify that one job should wait for the rest to complete:
needs:
- gcloud-setup-and-build-and-publish-to-GCR
- gke-provisioner
By the way, in the GitHub Web interface, such waiting jobs appear only after the jobs they’re waiting for have finished.
The "gke-provisioner" job mentions Terraform, which we examined in the previous article. The preliminary settings for its operation in the GCP environment are repeated for convenience in a separate markdown file. Here are some additional useful links:
Terraform Apply Subcommand documentation
Terraform GitHub Actions repository
Terraform GitHub Actions documentation
In the "kubernetes-deploy" job, there is a step called "Apply Kubernetes manifests". We’re going to use manifests as mentioned in the article Deploying InterSystems IRIS Solution into GCP Kubernetes Cluster GKE Using CircleCI, but with a slight change.
In the previous articles, the IRIS application was stateless. That is, when the pod restarts, all data reverts to its initial state. This is great, and often it’s exactly what you need, but the ZPM registry must somehow persist the packages that were loaded into it, no matter how many times it restarts. A Deployment allows you to do this, of course, but not without limitations.
For stateful applications, it’s better to choose the StatefulSet resource. Pros and cons can be found in the GKE documentation topic on Deployments vs. StatefulSets and the blog post Kubernetes Persistent Volumes with Deployment and StatefulSet.
The StatefulSet resource is in the repository. Here’s the part that’s important for us:
volumeClaimTemplates:
- metadata:
    name: zpm-registry-volume
    namespace: iris
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
The code creates a 10GB read/write disk that can be mounted by a single Kubernetes worker node. This disk (and the data on it) will survive the restart of the application. It can also survive the removal of the entire StatefulSet, but for this you need to set the correct Reclaim Policy, which we won’t cover here.
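For reference only, here is a minimal sketch of setting that policy on an existing volume; the volume name is a placeholder you would look up with "kubectl get pv":

```shell
# Hypothetical example: switch a PersistentVolume from the default
# "Delete" reclaim policy to "Retain", so the underlying GCP disk
# (and the packages on it) survives deletion of the claim.
# <PV_NAME> is a placeholder: look it up with `kubectl get pv`.
kubectl patch pv <PV_NAME> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Note that with "Retain" you become responsible for deleting the disk yourself once it’s no longer needed.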
Before breathing life into our workflow, let's add a few more variables to GitHub Secrets:
The following table explains the meaning of these settings, including the two service account keys:

Name                     Meaning                                        Example
GCR_LOCATION             Global GCR location                            eu.gcr.io
GKE_CLUSTER              GKE cluster name                               dev-cluster
GKE_ZONE                 GKE cluster zone                               europe-west1-b
IMAGE_NAME               Image name in the registry                     zpm-registry
PROJECT_ID               GCP Project ID                                 possible-symbol-254507
SERVICE_ACCOUNT_KEY      JSON key GitHub uses to connect to GCP.        ewogICJ0eXB...
                         Important: it has to be base64-encoded
                         (see note below)
TF_SERVICE_ACCOUNT_KEY   JSON key Terraform uses to connect to GCP      {…}
                         (see note below)
For SERVICE_ACCOUNT_KEY, if your JSON-key has a name, for instance, key.json, run the following command:
$ base64 key.json | tr -d '\n'
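As a quick sanity check, you can verify that the single-line value decodes back to the original file byte for byte. This sketch generates a stand-in key.json, so it’s safe to run anywhere:

```shell
# Stand-in for a real GCP service account key (assumption: yours is a
# JSON file named key.json)
printf '{"type": "service_account"}' > key.json

# Encode without line wrapping: GitHub Secrets needs a single line
ENCODED=$(base64 key.json | tr -d '\n')

# Decoding must reproduce the original file exactly
printf '%s' "$ENCODED" | base64 -d | diff - key.json && echo "round-trip OK"
```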
For TF_SERVICE_ACCOUNT_KEY, note that its rights are described in Automating GKE creation on CircleCI builds.
One small note about SERVICE_ACCOUNT_KEY: if you, like me, initially forgot to convert it to base64 format, you’ll see a screen like this:
Now that we’ve looked at the workflow backbone and added the necessary variables, we’re ready to examine the full version of the workflow (<root_repo_dir>/.github/workflows/workflow.yaml):
name: Build ZPM-registry image, deploy it to GCR. Run GKE. Run ZPM-registry in GKE
on:
  push:
    branches:
    - master
# Environment variables.
# ${{ secrets }} are taken from GitHub -> Settings -> Secrets
# ${{ github.sha }} is the commit hash
env:
  PROJECT_ID: ${{ secrets.PROJECT_ID }}
  SERVICE_ACCOUNT_KEY: ${{ secrets.SERVICE_ACCOUNT_KEY }}
  GOOGLE_CREDENTIALS: ${{ secrets.TF_SERVICE_ACCOUNT_KEY }}
  GITHUB_SHA: ${{ github.sha }}
  GCR_LOCATION: ${{ secrets.GCR_LOCATION }}
  IMAGE_NAME: ${{ secrets.IMAGE_NAME }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}
  GKE_ZONE: ${{ secrets.GKE_ZONE }}
  K8S_NAMESPACE: iris
  STATEFULSET_NAME: zpm-registry
jobs:
  gcloud-setup-and-build-and-publish-to-GCR:
    name: Setup gcloud utility, Build ZPM image and Publish it to Container Registry
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Setup gcloud cli
      uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
      with:
        version: '275.0.0'
        service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
    - name: Configure docker to use the gcloud as a credential helper
      run: |
        gcloud auth configure-docker
    - name: Build ZPM image
      run: |
        docker build -t ${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}:${GITHUB_SHA} .
    - name: Publish ZPM image to Google Container Registry
      run: |
        docker push ${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}:${GITHUB_SHA}
  gke-provisioner:
    # Inspired by:
    ## https://www.terraform.io/docs/github-actions/getting-started.html
    ## https://github.com/hashicorp/terraform-github-actions
    name: Provision GKE cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Terraform init
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'init'
        tf_actions_working_dir: 'terraform'

    - name: Terraform validate
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'validate'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'apply'
        tf_actions_working_dir: 'terraform'
  kubernetes-deploy:
    name: Deploy Kubernetes manifests to GKE cluster
    needs:
    - gcloud-setup-and-build-and-publish-to-GCR
    - gke-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Replace placeholders with values in statefulset template
      working-directory: ./k8s/
      run: |
        cat statefulset.tpl |\
        sed "s|DOCKER_REPO_NAME|${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}|" |\
        sed "s|DOCKER_IMAGE_TAG|${GITHUB_SHA}|" > statefulset.yaml
        cat statefulset.yaml
    - name: Setup gcloud cli
      uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
      with:
        version: '275.0.0'
        service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
    - name: Apply Kubernetes manifests
      working-directory: ./k8s/
      run: |
        gcloud container clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE} --project ${PROJECT_ID}
        kubectl apply -f namespace.yaml
        kubectl apply -f service.yaml
        kubectl apply -f statefulset.yaml
        kubectl -n ${K8S_NAMESPACE} rollout status statefulset/${STATEFULSET_NAME}
Before you push to the repository, take the Terraform code from the terraform directory of the github-gke-zpm-registry repository, replace the placeholders as noted in the main.tf comments, and put it inside the terraform/ directory. Remember that Terraform uses a remote bucket that has to be created beforehand, as noted in the Automating GKE creation on CircleCI builds article.
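If that remote state bucket doesn’t exist yet, here is a minimal sketch of the one-time setup. The bucket name and location are assumptions; they must match the backend "gcs" configuration in the Terraform code:

```shell
# One-time creation of a GCS bucket for Terraform remote state.
# The name is an example; it must be globally unique, and prefixing
# with <PROJECT_ID> is a common way to get there.
gsutil mb -l EU gs://<PROJECT_ID>-terraform-state

# Versioning makes it possible to roll back a corrupted state file
gsutil versioning set on gs://<PROJECT_ID>-terraform-state
```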
Likewise, the Kubernetes code should be taken from the k8s directory of the github-gke-zpm-registry repository and put inside the k8s/ directory. These sources were omitted from this article to save space.
Then you can trigger a deploy:
$ cd <root_repo_dir>/
$ git add .github/workflows/workflow.yaml k8s/ terraform/
$ git commit -m "Add GitHub Actions deploy"
$ git push
After pushing the changes to our forked ZPM repository, we can take a look at the implementation of the steps we described:
There are only two jobs so far. The third, "kubernetes-deploy", will appear after the completion of the jobs it depends on.
Note that building and publishing Docker images takes some time:
And you can check the result in the GCR console:
The "Provision GKE cluster" job takes longer the first time as it creates the GKE cluster. You’ll see a waiting screen for a few minutes:
But, finally, it finishes and you can be happy:
The Kubernetes resources are also happy:
$ gcloud container clusters get-credentials <CLUSTER_NAME> --zone <GKE_ZONE> --project <PROJECT_ID>
$ kubectl get nodes
NAME                                                  STATUS   ROLES    AGE     VERSION
gke-dev-cluster-dev-cluster-node-pool-98cef283-dfq2   Ready    <none>   8m51s   v1.13.11-gke.23
$ kubectl -n iris get po
NAME             READY   STATUS    RESTARTS   AGE
zpm-registry-0   1/1     Running   0          8m25s
It's a good idea to wait for Running status, then check other things:
$ kubectl -n iris get sts
NAME           READY   AGE
zpm-registry   1/1     8m25s
$ kubectl -n iris get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)           AGE
zpm-registry   LoadBalancer   10.23.248.234   104.199.6.32   52773:32725/TCP   8m29s
Even the disks are happy:
$ kubectl get pv -oyaml | grep pdName
    pdName: gke-dev-cluster-5fe434-pvc-5db4f5ed-4055-11ea-a6ab-42010af00286
And happiest of all is the ZPM registry (using the EXTERNAL-IP from the "kubectl -n iris get svc" output):
$ curl -u _system:SYS 104.199.6.32:52773/registry/_ping
{"message":"ping"}
Sending the login/password over plain HTTP is a shame, but I hope to do something about this in future articles.
By the way, you can find more information about endpoints in the source code: see the XData UrlMap section.
We can test this repo by pushing a package to it. There’s a cool ability to push a package with just a direct GitHub link. Let’s try it with the math library for InterSystems ObjectScript. Run this from your local machine:
$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[]
$ curl -i -XPOST -u _system:SYS -H "Content-Type: application/json" -d '{"repository":"https://github.com/psteiwer/ObjectScript-Math"}' 'http://104.199.6.32:52773/registry/package'
HTTP/1.1 200 OK
$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[{"name":"objectscript-math","versions":["0.0.4"]}]
Restart a pod to be sure that the data is in place:
$ kubectl -n iris scale --replicas=0 sts zpm-registry
$ kubectl -n iris scale --replicas=1 sts zpm-registry
$ kubectl -n iris get po -w
Wait for the pod to be running again. Then, here’s what I hope you’ll see:
$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[{"name":"objectscript-math","versions":["0.0.4"]}]
Let’s install this math package from the new registry on a local IRIS instance. Choose one that already has the ZPM client installed:
$ docker exec -it $(docker run -d intersystemsdc/iris-community:2019.4.0.383.0-zpm) bash
$ iris session iris
USER>write ##class(Math.Math).Factorial(5)
<CLASS DOES NOT EXIST> *Math.Math
USER>zpm
zpm: USER>list
zpm: USER>repo -list
registry
        Source: https://pm.community.intersystems.com
        Enabled?        Yes
        Available?      Yes
        Use for Snapshots?      Yes
        Use for Prereleases?    Yes
zpm: USER>repo -n registry -r -url http://104.199.6.32:52773/registry/ -user _system -pass SYS
zpm: USER>repo -list
registry
        Source: http://104.199.6.32:52773/registry/
        Enabled?        Yes
        Available?      Yes
        Use for Snapshots?      Yes
        Use for Prereleases?    Yes
        Username:       _system
        Password:       <set>
zpm: USER>repo -list-modules -n registry
objectscript-math 0.0.4
zpm: USER>install objectscript-math
[objectscript-math]     Reload START
...
[objectscript-math]     Activate SUCCESS
zpm: USER>quit
USER>write ##class(Math.Math).Factorial(5)
120
Congratulations!
Don’t forget to remove the GKE cluster when you don’t need it anymore:
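One way to do that, assuming the same placeholders used throughout this article (since Terraform created the cluster, running "terraform destroy" from the terraform/ directory is an equally valid route):

```shell
# Delete the GKE cluster and its nodes; the flags mirror the
# get-credentials call used in the workflow
gcloud container clusters delete <GKE_CLUSTER> \
  --zone <GKE_ZONE> --project <PROJECT_ID>
```

Keep in mind that the persistent disk created for the registry may need to be deleted separately.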
Conclusion
There are not many references to GitHub Actions within the InterSystems community. I found only one mention from guru @mdaimor. But GitHub Actions can be quite useful for developers storing code on GitHub. Native actions are supported only in JavaScript, but this could be dictated by a desire to describe steps in code most developers are familiar with. In any case, you can use Docker actions if you don’t know JavaScript.
Regarding the GitHub Actions UI, along the way I discovered a couple of inconveniences that you should be aware of:
You cannot check what is going on inside a step until it has finished; an in-progress step isn’t clickable, as with the "Terraform apply" step.
While you can rerun a failed workflow, I didn’t find a way to rerun a successful workflow.
A workaround for the second point is to use the command:
$ git commit --allow-empty -m "trigger GitHub actions"
You can learn more about this in the StackOverflow question How do I re-run GitHub Actions?
💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Anastasia Dyubaylo · Jan 24, 2020
Dear Community,
We're pleased to invite you to the InterSystems Iberia Summit 2020, which will take place from February 18th to 19th in Barcelona, Spain!
You're more than welcome to join us at the Melia Barcelona Sarriá Hotel!
InterSystems Local Summits are held in different countries and are the premier event for the local InterSystems technology communities – a gathering of companies and developers at the forefront of their respective industries. These events attract a wide range of attendees, from C-level executives to experts, managers, directors, and developers. Attendees gather to network with peers, connect with InterSystems partners, learn best practices and get a firsthand look at upcoming features and future innovations from InterSystems.
Join InterSystems Iberia Summit 2020 and be kept abreast of the latest developments!
Don’t miss out and register for the event by clicking right here.
Can’t wait to see you in Barcelona! 😉
So, remember:
⏱ Time: February 18-19, 2020
📍Venue: Melia Barcelona Sarriá Hotel, Barcelona, Spain
✅ Registration: SAVE YOUR SEAT TODAY
I'm going to Barcelona this year. Catch me there if you would like to see VSCode-ObjectScript in action, if you would like to discuss new features you want to see there, or if you face any issues.
See you there!
For football fans, we will get dinner at Camp Nou, FC Barcelona's stadium :-D May I say "Messi, could you please pass me the bread?"
Great, I have some questions about VSCode-ObjectScript. It is the best occasion to ask the creator of the application directly. Don’t miss a chance to hat-trick on vodka with Messi ⚽️⚽️⚽️
Published the "Developer Resources" slides from the InterSystems Iberia Summit in Barcelona.
Announcement
Evgeny Shvarov · Mar 4, 2020
Hi Developers!
In March we are starting our first InterSystems IRIS Programming Contest! It's a competition in creating open-source solutions using InterSystems IRIS Data Platform.
The topic for the first contest is InterSystems IRIS, Docker and ObjectScript!
The contest will last three weeks: March 9-31, 2020.
Prizes:
There will be money prizes for Experts Nomination - winners will be determined by a specially selected jury:
🥇 1st place - $2,000
🥈 2nd place - $1,000
🥉 3rd place - $500
Also, there will be Community Nomination - an application that will receive the most votes in total:
🏆 1st place - $1,000
And we provide winners with high-level badges on Global Masters.
If several participants score the same number of votes, they are all considered winners, and the money prize is shared among them.
General requirements:
1. The application should be posted as Open Source under a certain license (e.g. MIT License).
2. The application should be approved and published on Open Exchange.
3. The application should use InterSystems IRIS or InterSystems IRIS for Health.
4. Both existing and new applications can participate in the contest.
5. One contestant can upload an unlimited number of applications.
Who can participate?
Any member registered in the Developer Community, from any country, can participate in the contest, except InterSystems employees.
Judgment:
Judges for the Experts Nomination are InterSystems Product Managers, Developer Community Moderators and Global Masters advocates with VIP, Ambassador & Expert levels.
Each judge can vote for only one application. Votes carry different weights:
PM vote - 3 points
Moderator vote - 2 points
VIP GM Advocate vote - 2 points
Ambassador GM Advocate vote - 1 point
Expert GM Advocate vote - 1 point
Judges for the Community Nomination are any registered community members who have posted at least once; each has one vote worth one point.
Judges can participate in a contest, but cannot vote for their own applications.
Judgment criteria
In the Experts Nomination, we will choose the best application which:
Makes the world a better place or makes the life of a developer better;
Has the best functionality - how much the application/library does;
Has readable, high-quality ObjectScript code.
Contest Period:
March 9-22, 2020: Two weeks to upload your applications to Open Exchange (also during this period, you can edit your projects).
March 23-29, 2020: One week to vote.
All winners will be announced on March 30th, 2020.
The Topic
➡️ InterSystems ObjectScript and InterSystems IRIS in Docker Container ⬅️
We will choose the best application built using InterSystems ObjectScript that can be launched on either InterSystems IRIS Community Edition (IRIS CE) or InterSystems IRIS for Health Community Edition (IRIS CE4H).
InterSystems IRIS CE Docker and ObjectScript requirement:
If we clone or download the application it should be runnable e.g. with:
$ docker-compose up -d
The application could be implemented as a CLI, executed in the IRIS terminal, e.g.:
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
USER>w ##class(Your.Application).Run()
Here is the sample application.
For a given example the test will be:
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
IRISAPP>w ##class(Contest.ObjectScript).TheQuestionOfTheUniverse()
The answer is 42
IRISAPP>
The README.md file should contain a section describing how the CLI functionality can be tested.
We will accept for the contest GitHub repositories which are recognized by GitHub as ObjectScript, e.g. like here:
To make GitHub show this indication, store your ObjectScript code in .cls files.
Use the ObjectScript Contest Template: create your repository and replace the files in the /src folder with your solution, use it as a GitHub template for your new GitHub repository, or import the set of Docker-enabled files into your repository. Learn more.
Watch the related video on how to make a repository from GitHub Template.
Here are a few examples of applications that fit the topic of the contest and match IRIS Community Edition running in Docker requirements: Python Gateway, Healthcare XML, Document Template, Game of Life, ForEach, ObjectScript Template.
We are starting in March, please ask your questions!
How to apply for the Contest
Log in to the Open Exchange, open your applications section.
Open the application which you want to apply for the contest and click Apply for Contest.
Make sure the status is 'Published'.
The application will go for review, and if it fits the topic of the contest, it will be listed on the Contest Board.
Stay tuned, the post will be updated.
These two applications also meet the requirements and the topic: ObjectScript Math and JSONManyToMany by @Peter.Steiwer. Both have an ObjectScript CLI, and you can launch them with
$ docker-compose up -d
and then run and test it with:
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
USER>zn "IRISAPP"
IRISAPP>w ##class(Math.Math).LeastCommonMultiple(134,382)
Just to give you a few ideas of what you could submit for the contest, check Rosetta Code: there are still plenty of popular algorithms that have implementations in many languages but not in ObjectScript.
We are starting tomorrow!
How to apply for the Programming Contest
Log in to Open Exchange, open your applications section.
Open the application which you want to apply for the contest and click Apply for Contest.
Make sure the status is 'Published'.
The application will go for review, and if it fits the topic of the contest, it will be listed on the Contest Board.
Fixed the link of the contest.
For those who are starting with ObjectScript, here are some recommendations from the InterSystems Online Learning team:
ObjectScript First Look — brief hands-on introduction to ObjectScript.
ObjectScript Tutorial — provides an interactive introduction to the ObjectScript language.
Using ObjectScript — provides an overview of and details about the ObjectScript programming language.
ObjectScript Reference — provides reference material for ObjectScript.
Orientation Guide for Server-Side Programming — presents the essentials for programmers who write server-side code using InterSystems products.
InterSystems ObjectScript Basics — an interactive course covering ObjectScript basics.
Also tagging our Online Learning experts @Michelle.Spisak and @jennifer.ames here to share more details.
Hi Developers,
Wondering whether you can vote for the solution you like? You can, if you contribute to DC!
Everyone contributing articles, questions and replies is very welcome to give the vote in the Community Nomination. An application that will receive the most votes in total will win in this nomination.
Prizes are waiting for you, stay tuned! 🚀
Hey Developers,
The first application is already in the Contest Board!
Who's next?
We prepared the template repository especially for the contest. The post has been updated too.
We have two winning nominations:
🏆 Experts Nomination - the best application that will be selected by a special jury of InterSystems Experts.
🏆 Community Nomination - the best application that will be chosen by a majority vote of all DC Contributors.
So ladies and gentlemen, upload your apps and vote!
Want more?
Enjoy watching the new video on the InterSystems Developers YouTube, specially recorded by @Evgeny.Shvarov for the IRIS contest:
⏯ How to Create and Submit an Application for InterSystems IRIS Online Programming Contest 2020
Stay tuned!
Mmhh. Perhaps, I'm thinking about it ...
I want to describe the idea more.
Rosetta Code is a site with implementations of the same algorithms in different languages.
E.g. check Python - and see hundreds of algorithms implemented.
And check ObjectScript: not that many, but some algorithms have implementations in both Python and ObjectScript.
E.g., here is an MD5 implementation in ObjectScript and in Python.
What you could do is:
Take the algorithm implemented in Python or other languages and introduce the implementation in ObjectScript.
And you are very welcome to participate in the contest with such a project.
Hi Lorenzo,
You're very welcome to join our contest! Don't doubt it 😉
Hey Developers,
One more application is already in the game. Please check our Contest Board!
Full speed ahead! 🔥 Hey Community,
Our Contest Board has been updated again. Two more applications are already participating in the IRIS contest!
Make your bets! 😉
Please welcome the new video, specially recorded for the IRIS contest:
⏯ How to Enable Docker and VSCode to Your InterSystems IRIS Solution
This video describes how you can add InterSystems IRIS Docker and VSCode environment to your current repository with InterSystems ObjectScript using iris-docker-dev-kit.
Enjoy!
Developers!
You have 6 days to submit your application for the InterSystems IRIS Online contest!
Don't hesitate to submit even if you haven't finished - you'll be able to fix bugs and make improvements during the voting week too!
Hey Developers!
Only 5 days left to upload your apps to our IRIS Contest!
Show your best ObjectScript skills on InterSystems IRIS and earn some $ and glory! 🔥
Just a few days left to send your app for the contest. I'm already in. ))
OK!
For now, we have 8 applications in the contest! And 3 days left!
Competition is growing! Looking forward to more solutions!
More and more nominees for winning! Already 8 applications on the Contest Board!
Who's next? 🚀
Yeah, +1 with Dmitriy.
I wish to read nice ObjectScript code!
Voting for the best application will begin very soon! Only 3 days left before the end of registration for the IRIS contest.
Don't miss your chance to win! 🏆 Hey Community,
Now we have 11 applications in our Contest Board! And only 2 days left before the start of voting.
Hurry up to upload your application!
Last call! Registration for the IRIS Contest ends today!
Now we have 15 apps - make your bets! 😉
The voting has started!
Choose your best InterSystems IRIS application!
Announcement
Olga Zavrazhnova · Nov 8, 2019
Hi Developers,
New challenge on Global Masters: review InterSystems IRIS on G2 and get 3,000 points! The InterSystems IRIS page on G2 is new, and we would love for your voices and experience to be shared with a worldwide audience! Write a review on G2 using this link, complete the challenge on Global Masters, and get 3,000 points after your review is published!
Please check the additional information about Global Masters:
How to join InterSystems Global Masters
Global Masters Badges Descriptions
Global Masters Levels Descriptions
Changes in Global Masters Program
How to earn points on Global Masters
If you have not joined InterSystems Global Masters Advocacy Hub yet, let's get started right now!
Feel free to ask your questions in the comments to this post.
Announcement
Evgeny Shvarov · Dec 2, 2019
Hi Developers!
For those who want to participate in the Advent of Code 2019 and code with ObjectScript in IRIS, I created a very simple but handy GitHub template.
Use the green button
to copy the template into your own repo, then clone the repo and run the following in the repo folder:
docker-compose up -d
You will get InterSystems IRIS 2019.4 Community Edition running with template classes that load input data from files, plus a Day 1 solution.
The template is also set up so you can start crafting Advent of Code 2019 solutions and edit, compile, and debug ObjectScript with the VSCode add-on.
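For context, Day 1 of Advent of Code 2019 asks you to sum up fuel requirements, computed as mass \ 3 - 2 for each module. A minimal ObjectScript sketch of such a solution could look like this (the method and file handling are illustrative, not the template's actual code):

```objectscript
/// Illustrative Day 1 solver: reads one module mass per line and sums the fuel.
ClassMethod SolveDay1(pFile As %String) As %Integer
{
    set total = 0
    set stream = ##class(%Stream.FileCharacter).%New()
    do stream.LinkToFile(pFile)
    while 'stream.AtEnd {
        set mass = +stream.ReadLine()
        // Integer-divide the mass by 3 and subtract 2 to get the fuel
        set total = total + (mass \ 3 - 2)
    }
    quit total
}
```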
Happy coding with Advent of Code 2019!
Article
Evgeny Shvarov · Jan 12, 2020
Hi Developers!
Suppose you published your application on Open Exchange with version 1.00. Then you added a new outstanding feature and want to make a new release.
You can also make a new release of your application on Open Exchange.
Why make releases on Open Exchange?
This is the way for you to highlight the new features of your application. When you publish a new release, the following happens:
Release notes appear on the News page of Open Exchange
The version of your app changes
The Version History tab is updated
All followers of you, your application, or your company receive an email notification
The weekly and monthly Open Exchange digests on OEX and the Developers Community will include a note on your release
How to make a new release
Open your published application page and click Settings-> Edit:
Make changes to the description or tags if the new release brings such changes, and click Save. Save even if you don't have any changes to the app's properties.
Then click 'Send for Approval' to update the version and submit release notes:
You will see the window with version number and release notes.
We automatically bump the minor version of the current version number, but it's up to you which version to release, or even whether to change the version number at all. Release notes support Markdown, so prepare the markdown text in any editor that supports it (e.g. VSCode) and copy-and-paste it here. Then click the Send button:
The markdown I submitted here:
## InterSystems IRIS docker image update
In this release I updated an [InterSystems Docker](https://hub.docker.com/publishers/intersystems) image with the new 2019.4 release
Once the version is approved, the release notes will be sent to all your subscribers and published on the News page:
Please submit your comments below if you have any questions and also submit suggestions and bug reports here.
Make new releases of your InterSystems applications on Open Exchange and stay tuned!
Announcement
Jamie Kantor · Jan 14, 2020
Hi, there, everyone.
We here in the certification team have been getting some questions about Caché developers taking the InterSystems IRIS Core Solutions Developer exam. I thought now would be a good time to clear up some doubts the community may have.
Even if you haven’t been working yet in InterSystems IRIS, the exam may well suit you already if you currently have experience in Caché. By looking at the Exam Details, you’ll see that there is only one IRIS-specific topic. Our exam designers did that on purpose because we knew that many of our partners and customers were considering or in the process of moving to IRIS as we developed the exam. We knew that most of our developers wouldn’t benefit from an exam that focused solely on new IRIS features. So, almost all of the exam is based on functionality you may already be familiar with.
However, this isn’t a Caché exam because the exam topics represent how a developer would use InterSystems IRIS today. Those of you who have been using InterSystems technology for a longer time will know that InterSystems has evolved a rich selection of features that developers can choose from. So, when we worked with our internal and community experts to design the exam, we made sure to select the IRIS areas and features that favor contemporary application development. That wasn’t an easy task because we know that many InterSystems developers have their favorite approaches to application development or their team may already have pre-established coding practices that cause them to use InterSystems technologies in a very specific way. In any case, we took these (and several other) ideas into consideration and think we got the balance right.
So, if you are a Caché developer and you want to know if the InterSystems IRIS Core Solutions Developer Specialist exam is right for you, we invite you to read the Exam Content and Topics in the Exam Details. We also strongly encourage you to review the sample questions to see the style and approaches to the topics. You might want to look up some of the listed features in our documentation as well.
Finally, our certification team is here to answer any questions you may have about this or any other exam at certification@intersystems.com.
Thanks and Best Regards,
James Kantor - Certification Manager, InterSystems
Announcement
Olga Zavrazhnova · Jan 22, 2020
Hey Developers,
We invite you to review InterSystems Caché or IRIS on Gartner Peer Insights and get two $25 VISA cards - one from Gartner Peer Insights and one from InterSystems! We ran this promotion in 2018-2019, and we're glad to announce it again for 2020!
See the rules below.
✅ #1: Follow this unique link and submit a review for IRIS or Caché.
✅ #2: Make a screenshot of the headline and text of your review.
✅ #3: Upload a screenshot in this challenge on Global Masters. After your review is published, you will get two $25 VISA cards!
Note:
• Use the unique link mentioned above to qualify for the gift cards. The gift cards are granted after the review is published on Gartner Peer Insights.
• Quantities are limited, and Gartner must approve the survey to qualify for the gift card. Gartner will not approve reviews from resellers, systems integrators or MSP/ISV’s of InterSystems. Survey must be completed by Dec. 31, 2020.
• The survey takes about 10-15 minutes. Gartner will authenticate the identity of the reviewer, but the published reviews are anonymous. You can check the status of your review and gift card in your Gartner Peer Insights reviewer profile at any time.
Done? Awesome! Your cards are on the way!
Hello Olga,
Are InterSystems employees allowed to write a review, or is it only for clients and partners using InterSystems technologies? Even if it is not for the reward, at least we could review the products we are working on, for our company's fame.
Regards,
Jacques
Hi Jacques!
InterSystems employees are not allowed to review for obvious reasons ;)
But for our company's fame and to help the Developers Community, you are always welcome to contribute articles and submit apps on Open Exchange )
Hello Evgeny,
True ^^ It would have been nice to say nice things on Gartner. I'm following Open Exchange carefully, and hope to have a good idea to develop and share on it.
Cheers,
Jacques
Hi Community, quick update!
Gartner launched this promotion again, and if you have not published a review yet, you can get a $25 VISA gift card from Gartner! Take a few minutes and write your review of an InterSystems product:
IRIS or Caché (don't forget to upload a screenshot of your review on Global Masters to also get a $25 VISA card from InterSystems after your review is published! Upload it in this challenge)
Ensemble
HealthShare
TrakCare
Gartner will ask you to sign in with LinkedIn or create a free Gartner account so they can verify your identity. Note! Quantities are limited, and Gartner must approve the survey to qualify for the gift card. Gartner will not approve reviews from resellers, systems integrators, or MSP/ISVs of InterSystems. The survey must be completed by Dec. 31, 2020.
Unfortunately, Gartner won't accept your review if you work for a company that is linked to InterSystems in any way, such as a VAR.
Hi David, yes, unfortunately. I added this info to the description to notify people.
Article
Evgeny Shvarov · Feb 21, 2020
Hi Developers!
Another way to start using InterSystems ObjectScript Package Manager is to use prebuilt container images of InterSystems IRIS Community Edition and InterSystems IRIS for Health Community Edition.
We deploy these IRIS images on DockerHub, and you can run one with the following command:
docker run --rm -p 52773:52773 --init --name my-iris -d intersystemsdc/iris-community:2019.4.0.383.0-zpm
Launch a terminal with:
docker exec -it my-iris iris session IRIS
And install a ZPM module as follows:
USER>zpm
zpm: USER>install objectscript-math
[objectscript-math] Reload START
[objectscript-math] Reload SUCCESS
[objectscript-math] Module object refreshed.
[objectscript-math] Validate START
[objectscript-math] Validate SUCCESS
[objectscript-math] Compile START
[objectscript-math] Compile SUCCESS
[objectscript-math] Activate START
[objectscript-math] Configure START
[objectscript-math] Configure SUCCESS
[objectscript-math] Activate SUCCESS
zpm: USER>
Use the same commands for InterSystems IRIS for Health with the tag: intersystemsdc/irishealth-community:2019.4.0.383.0-zpm
The images are published in the IRIS Community Edition and IRIS for Health Community Edition repositories on Docker Hub.
We will update tags with every new release of IRIS and ZPM.
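Once installed, the module can be called right away from the same terminal session. As a sketch (assuming the package exposes a Math.Math class with a Factorial method; check the package source for the actual class and method names):

```objectscript
USER>write ##class(Math.Math).Factorial(5)
```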
Happy coding!
Question
Scott Roth · Oct 14, 2019
I am currently evaluating source control systems that we can use for MS SQL, MS Visual Studio, and InterSystems IRIS. For both MS SQL and MS Visual Studio, we have the option of either Azure or GitHub. I understand that when we upgrade to IRIS 2019.1 we have options for source control, and at previous Global Summits I have heard GitHub discussed. So why can't I use GitHub for both MS SQL/MS Visual Studio and IRIS?
A couple of questions come to mind starting to think about Source Control
When integrating source control in an IRIS environment, is that source control used only for code written in Studio, or can it also be used for DTLs, Business Processes, the Schema Editor, etc.?
Has anyone integrated IRIS with GitHub? Can you please provide examples?
How secure is the Source Control if you integrate it with GitHub?
Just trying to figure out the better route and if we can kill two birds with one stone.
Thanks
Scott Roth
There is one exception though: if you are using some IRIS UI tools, e.g. to develop productions, you need to export/import these artifacts as files so they can be committed to GitHub, preferably automatically (e.g. on each Save operation).
Scott, could you please specify whether you are going to use Visual Studio or Visual Studio Code? Both are from Microsoft, but they are completely different products.
I know nothing about MS SQL. Visual Studio (not Code) does not officially support IRIS. There was one project, but it is already closed.
Visual Studio Code (VSCode) itself supports GitHub by default very easily: just edit files locally, then commit and push changes.
VSCode-ObjectScript is my extension for VSCode; it adds support for Caché, Ensemble, and IRIS, any version from 2016.2 (where Atelier support was added). It perfectly supports classes and routines. As for "DTL, Business Process, Schema Editor, etc.", it does not have native support for them, but all of that is based on classes, so you can edit them as classes.
IRIS itself does not need to have support for GitHub; this is a task for an editor.
How secure is the Source Control if you integrate it with GitHub?
What do you mean here? Developers should manually choose what to commit to source control. It is not an automated task.
Hi Scott!
There is no need to integrate IRIS with GitHub. It's more about how the IDE you are using to develop IRIS solutions is integrated with GitHub. And the majority of modern IDEs are integrated with GitHub already: VSCode comes with Git/GitHub integration out of the box (and I believe Visual Studio does too, since GitHub is now part of Microsoft).
If the question is how to develop IRIS solutions with the code managed in GitHub, there are a lot of approaches. You can check these videos I made, which illustrate:
How to create IRIS Application in Github, develop it in VSCode and commit changes into the repo
How to collaborate and make changes to an already existing project in Github repo using Github Flow
And:
Atelier can be integrated with Git
Studio also has integration with Git.
We own our code and cannot allow it to be on another party's site (GitHub), so our tech stack is much more interesting to work with.
It's not a problem at all. You can use on-premises versions of GitHub, GitLab, Bitbucket, or anything else, depending on your budget and needs.
It is a problem if you maintain full ownership of your code. To make sure you maintain full ownership of your code, you would have to use your own in-house repo, is what I meant. Not my policy, it's my company's.
So, company policy forces you to keep all the source code only in Caché? You can install your own source control server, even GitHub. It will be completely your own server, anywhere you decide, with no ability to connect from outside if you need that.
So, yes, I'm still sure it's not a problem at all. I worked at a company with two separate networks: one for development, completely isolated with no internet access, and another connected to the outside world. We had to use two PCs for our work, and we were still able to use source control.
Announcement
Michelle Spisak · Oct 22, 2019
The Learning Services Online Learning team has posted new videos to help you learn the benefits of InterSystems IRIS. Take a peek to see what you stand to gain from making the switch to InterSystems IRIS!
Why Multi-Model?
Stefan Wittmann presents use cases for the multi-model data access of the InterSystems IRIS data platform. He shows the multi-model architecture that allows you to use the data model that best fits each task in your application — relational, object, or even direct/native access — all accessible through the language of your choice.
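To sketch the idea with an invented Demo.Person persistent class, the same stored data can be reached through all three models (slot positions in the storage global depend on the class's storage definition, so treat the last line as illustrative):

```objectscript
// Object access: open an instance by ID
set person = ##class(Demo.Person).%OpenId(1)
write person.Name,!
// Relational access: the same class projected as an SQL table
&sql(SELECT Name INTO :name FROM Demo.Person WHERE ID = 1)
write name,!
// Direct/native access: read the underlying storage global
write $listget(^Demo.PersonD(1), 2),!
```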
The Speed and Power of InterSystems IRIS
InterSystems IRIS powers many of the world’s most powerful applications — applications that require both speed and power for ingesting massive amounts of data, in real time, at scale. Learn about these features and more in this video!
Article
Peter Steiwer · Jan 10, 2020
When using Related Cubes in InterSystems IRIS BI, cubes must be built in the proper order. The One side must be built before the Many side. This is because during build time for the Many side, it looks up the record on the One side and creates a link. If the referenced record is not found on the One side, a Missing Relationship build error is generated. The One side is going to be the independent side of the relationship, AKA the side of the relationship that is referenced by the Many side or the Dependent cube. For example: Patients contain a reference to their Doctor. The Doctor does not contain references to each of their Patients. Doctors is the One, or Independent side. Patients is the Many, or Dependent side. For more information about setting up Cube Relationships, please see the documentation.
WARNING: If you rebuild the One side without rebuilding the Many side, the Many side may point to the wrong record. It is not guaranteed that a record in your cube will always have the same ID. The relationship link that is created is based on ID. YOU MUST REBUILD THE MANY SIDE AFTER BUILDING THE ONE SIDE.
To ensure your cubes are always built in the proper order, you can use the Cube Manager.
When debugging Build Errors, please also debug them in the Build Order. This is because errors can cascade and you don't want to spend time debugging an error just to find out it is because a different error happened first.
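By hand, building in the proper order for the sample cubes used below looks like this (the Doctors cube name is an assumption based on its fact class; adjust to your own model):

```objectscript
// Build the One (independent) side first
do ##class(%DeepSee.Utils).%BuildCube("RelatedCubes/Doctors")
// Then rebuild the Many (dependent) side so its relationship links stay valid
do ##class(%DeepSee.Utils).%BuildCube("RelatedCubes/Patients")
```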
Understanding the Missing Relationship Build Error Message
SAMPLES>do ##class(%DeepSee.Utils).%PrintBuildErrors("RELATEDCUBES/PATIENTS")
1 Source ID: 1 Time: 01/03/2020 15:30:42
ERROR #5001: Missing relationship reference in RelatedCubes/Patients: source ID 1 missing reference to RxPrimaryCarePhysician 1744
Here is an example of what the Missing relationship build error looks like. We will extract some of these values from the message to understand what is happening.
Missing relationship reference in [Source Cube]: source ID [Source ID] missing reference to [Related Cube Reference] [Related Source ID]
In our error message, we have the following values:
Source Cube = RelatedCubes/Patients
Source ID = 1
Related Cube Reference = RxPrimaryCarePhysician
Related Source ID = 1744
Most of these are pretty straightforward except for the Related Cube Reference. Sometimes the name is obvious, other times it is not. Either way, we can do a little bit of work to find the cube this reference points to.
Step 1) Find the Fact Class for the Source Cube.
SAMPLES>w ##class(%DeepSee.Utils).%GetCubeFactClass("RelatedCubes/Patients")
BI.Model.RelCubes.RPatients.Fact
Step 2) Run an SQL query to get the Fact class the Related Cube Reference is pointing to:
SELECT Type FROM %Dictionary.PropertyDefinition where ID='[Source Cube Fact Class]||[Related Cube Reference]'
example:
SELECT Type FROM %Dictionary.PropertyDefinition where ID='BI.Model.RelCubes.RPatients.Fact||RxPrimaryCarePhysician'
Which returns a value of: BI.Model.RelCubes.RDoctors.Fact
Step 3) Now that we have the Related Cube Fact Class, we can run an SQL query to see if this Related Source ID does not have an associated fact in our Related Cube Fact Table.
SELECT * FROM BI_Model_RelCubes_RDoctors.Fact WHERE %SourceId=1744
Please note that we had to use the SQL table name instead of the class name here. It can typically be derived by replacing every "." except the final ".Fact" with "_".
In this case, 0 rows were returned. This means it is still the case that the required related fact does not exist in the related cube. Sometimes, after spending the time to get to this point, a synchronization may have pulled this new data in. At that point, the Build Error may no longer be true, but it has not yet been cleared out of the Build Errors global. Regular synchronization does not clean entries in this global that have been fixed. The only way to clean the Build Errors global is to run a build against the cube or to run the following method:
Do ##class(%DeepSee.Utils).%FixBuildErrors("CUBE NAME WITH ERRORS")
If we now had data for the previous SQL query, the %FixBuildErrors method should fix the record and clear the error.
Step 4) Since we do not have this record in our Related Cube Fact Table, we should check the Related Cube Source Table to see if the record exists. First we have to find the Related Source Class by viewing the SOURCECLASS parameter of the Related Cube Fact Class:
SAMPLES>w ##class(BI.Model.RelCubes.RDoctors.Fact).#SOURCECLASS
BI.Study.Doctor
Step 5) Now that we have the Related Source Class, we can query the Related Source Table to see if the Related Source ID exists:
SELECT * FROM BI_Study.Doctor WHERE %ID=1744
If this query returns results, you should determine why this record does not exist in the Related Cube Fact Table. This could simply be because it has not yet synchronized. It could also have gotten an Error while building this fact. If this is the case, you need to remember to diagnose all Build Errors in the proper Build Order. It can often be the case that lots of errors cascade from one error.
If this query does not return results, you should determine why this record is missing from the Related Source Table. Perhaps some records have been deleted on the One side but records on the Many side have not yet been reassigned or deleted. Perhaps the Cube Relationship is configured incorrectly and the Related Source ID is not the correct value and the Cube Relationship definition should be changed.
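After the underlying data has been corrected, the cleanup cycle described above can be run from the terminal using the same two utilities:

```objectscript
// List the outstanding build errors for the dependent cube
do ##class(%DeepSee.Utils).%PrintBuildErrors("RELATEDCUBES/PATIENTS")
// Fix the affected facts and clear the errors that are now resolved
do ##class(%DeepSee.Utils).%FixBuildErrors("RELATEDCUBES/PATIENTS")
```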
This guide is a good place to start, but please feel free to contact the WRC. The WRC can help debug and diagnose this with you.
Announcement
Anastasia Dyubaylo · Sep 4, 2019
Hey Developers!
We are pleased to invite you to the upcoming webinar "InterSystems MLToolkit: AI Robotization" on 18th of September at 10:00 (GMT+3)!
The Machine Learning (ML) Toolkit is a set of extensions to implement machine learning and artificial intelligence on the InterSystems IRIS Data Platform. As part of this webinar, InterSystems Sales Engineers @Sergey Lukyanchikov and @Eduard Lebedyuk plan to present an approach to the robotization of these tasks, i.e. to ensure that their autonomous adaptive execution proceeds within the parameters and rules you specify. Self-learning neural networks, self-monitoring analytical processes, and agency of analytical processes are the main subjects of this webinar.
The webinar is aimed both at experts in Data Science, Data Engineering, and Robotic Process Automation, and at those who are just discovering the world of artificial intelligence and machine learning.
We are waiting for you at our event!
Date: 18 September, 10:00 – 11:00 (GMT+3).Note: The language of the webinar is Russian.
Register for FREE today!
Announcement
Anastasia Dyubaylo · Sep 13, 2019
Hi Everyone!
New video, recorded by @Stefan.Wittmann, is already on InterSystems Developers YouTube:
JSON and XML persistent data serialization in InterSystems IRIS
Need to work with JSON or XML data?
InterSystems IRIS supports multiple inheritance and provides several built-in tools to easily convert between XML, JSON, and objects as you go.
Learn more about the multi-model development capabilities of InterSystems IRIS on Learning Services sites.
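As a small illustration of what the video covers, a persistent class can opt into both serializations at once simply through inheritance (the class and property names here are invented for the sketch):

```objectscript
Class Demo.Event Extends (%Persistent, %JSON.Adaptor, %XML.Adaptor)
{
Property Title As %String;
Property StartTime As %TimeStamp;
}
```

An instance can then be written out with do event.%JSONExport() for JSON or do event.XMLExport() for XML, with no conversion code of your own.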
Enjoy watching the video!
Can confirm that the %JSON.Adaptor tool is extremely useful! This was such a great addition to the product. In Application Services, we've used it to build a framework which allows us not only to expose our persistent classes via REST but also to authorize different levels of access for different representations of each class (for example, all the properties vs. just the Name and the Id). The "Mappings and Parameters" feature is especially useful: https://irisdocs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_adaptor
Also, @Stefan, are you writing backwards while you talk? That's impressive.
Anyone who is doubting multiple inheritance is insane. Although calling this kind of inheritance 'mixin classes' helps, I've noticed - mixing in additional features.
https://hackaday.com/tag/see-through-whiteboard/