Article
Claudio Devecchi · Jun 20, 2023
In this article, I will share the topic that @Rochael.Ribeiro and I presented at the Global Summit 2023, in the Tech Exchange room.
On that occasion, we talked about the following topics:
Open Exchange Tools for Fast APIs
Open API Specification
Traditional versus Fast Api development
Composite API (Interoperability)
Spec-First or Api-First Approach
Api Governance & Monitoring
Demo (video)
Open Exchange Tools for Fast APIs
As we are talking about fast, modern API development (REST/JSON), we will use two InterSystems Open Exchange tools:
The first is a framework for rapid development of APIs which we will detail in this article.
https://openexchange.intersystems.com/package/IRIS-apiPub
The second is Swagger, used as a user interface for specifying and documenting the REST APIs developed on the IRIS platform, as well as for calling and executing them. It is based on the Open API Specification (OAS) standard, described below:
https://openexchange.intersystems.com/package/iris-web-swagger-ui
What is the Open API Specification (OAS)?
It is a standard used worldwide to define, document, and consume APIs. In most cases, APIs are designed even before implementation; I'll talk more about that in the topics below.
It is important because it defines and documents REST APIs for use on both the provider and the consumer side. The same standard also speeds up testing and calling APIs in market REST client tools such as Swagger, Postman, Insomnia, etc.
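To make this concrete, here is a minimal OAS 3.0 document of the kind these tools consume, sketched as a Python dict so it can be dumped as JSON. Everything in it (the path, parameter, and schema) is hypothetical, for illustration only:

```python
import json

# A minimal, hypothetical OAS 3.0 document describing one GET operation.
oas = {
    "openapi": "3.0.3",
    "info": {"title": "Sample API", "version": "1.0.0"},
    "paths": {
        "/greet/{name}": {
            "get": {
                "summary": "Greet a user by name",
                "parameters": [{
                    "name": "name", "in": "path", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {
                        "description": "Greeting",
                        "content": {"application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {"message": {"type": "string"}},
                            }
                        }},
                    }
                },
            }
        }
    },
}

# Tools like Swagger UI, Postman, and Insomnia can import this JSON directly.
spec_json = json.dumps(oas, indent=2)
```

Because the payload schemas are part of the document, a client tool can generate sample requests and responses without ever seeing the implementation.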
The traditional way to publish an API using IRIS
Imagine we have to build and publish a Rest API from an existing IRIS method (picture below).
In the traditional way:
1: We have to think about how consumers will call it. For example: which path and verb will be used, and what the response will look like (a JSON object or plain text).
2: Build a new method in a %CSP.REST class that will handle the HTTP request and call the existing method.
3: Transform the method's response into the HTTP response intended for the end user.
4: Think about how we're going to return the success code and how we're going to handle exceptions.
5: Map the route for our new method.
6: Provide the API documentation to the end user, probably by building the OAS content manually.
7: And if, for example, we have a request or response payload (an object), implementation time increases, because it must also be documented in the OAS.
How can we be faster?
By simply tagging the IRIS method with the [WebMethod] attribute. Whatever the method is, the framework takes care of publishing it, using the OAS 3.x standard.
Why is the OAS 3.x standard so important?
Because it also documents in detail all the properties of the input and output payloads.
This way, any REST client tool on the market (Insomnia, Postman, Swagger, etc.) can instantly couple to the APIs and provide sample content to call them easily.
Using Swagger we will already visualize our API (image above) and call it. This is also very useful for testing.
API customization
But what if I need to customize my API?
For example: instead of the method name, I want the path to be something else. And I want the input parameters to be part of the path, not a query parameter.
We define a specific notation on top of the method, where we can complement the meta-information that the method itself does not provide.
In this example we are defining another path for our API and complementing the information for the end user to have a more friendly experience.
Projection Map for Rest API
This framework supports numerous types of parameters.
In this map we can highlight the complex types (objects). They are automatically exposed as a JSON payload, and each property is properly documented (OAS) for the end user.
Interoperability (Composite APIs)
By supporting complex types, you can also expose Interoperability Services.
It is a favorable scenario for building composite APIs, which use the orchestration of multiple external components (outbounds).
This means that objects or messages used as request or response will be automatically published and read by tools like swagger.
And it's an excellent way to test interoperability components, because a payload template is usually preloaded so that the user knows which properties the API uses.
First the developer can focus on testing, then shape the API through customization.
Spec-first or Api-first Approach
Another concept widely used today is having the API defined even before its implementation.
With this framework it is possible to import an Open API spec. It creates the method structure (spec) automatically, leaving only the implementation to be written.
API Governance and Monitoring
For API governance, it is also recommended to use IAM alongside it.
In addition to having multiple plugins, IAM can quickly couple to APIs through the OAS standard.
apiPub offers additional tracing for APIs (see demo video)
Demo
Download & Documentation
InterSystems Open Exchange: https://openexchange.intersystems.com/?search=apiPub
Complete documentation: https://github.com/devecchijr/apiPub
Very rich material on a new development method, bringing convenience and innovation. This will help a lot with the agility of API development.
Congratulations, Claudio, for sharing this knowledge. Thank you @Thiago.Simoes
Announcement
Anastasia Dyubaylo · Jul 11, 2023
Hi Community,
Let's meet together at the online meetup with the winners of the InterSystems Grand Prix Contest 2023 – a great opportunity to have a discussion with the InterSystems Experts team as well as our contestants.
Winners' demo included!
Date & Time: Thursday, July 13, 11 am EDT | 5 pm CEST
Join us to learn more about winners' applications and to have a talk with our experts.
➡️ REGISTER TODAY
See you all at our virtual meetup!
Article
Niyaz Khafizov · Jul 6, 2018
Hi all. Yesterday I tried to connect Apache Spark, Apache Zeppelin, and InterSystems IRIS. During the process, I experienced troubles connecting it all together and I did not find a useful guide. So, I decided to write my own.
Introduction
Let's see what Apache Spark and Apache Zeppelin are and find out how they work together. Apache Spark is an open-source cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance, so it is very useful when you need to work with Big Data. Apache Zeppelin is a notebook that provides a nice UI for working with analytics and machine learning. Together, they work like this: IRIS provides data, Spark reads the provided data, and in a notebook we work with the data.
Note: I have done the following on Windows 10.
Apache Zeppelin
Now, we will install all the necessary programs. First of all, download Apache Zeppelin from its official site. I used zeppelin-0.8.0-bin-all.tgz. It includes Apache Spark, Scala, and Python. Unzip it to any folder. After that, you can launch Zeppelin by calling \bin\zeppelin.cmd from the root of your Zeppelin folder. Wait until the "Done, zeppelin server started" string appears, then open http://localhost:8080 in your browser. If everything is okay, you will see the "Welcome to Zeppelin!" message.
Note: I assume that InterSystems IRIS is already installed. If not, download and install it before the next step.
Apache Spark
So, we have a browser window open with a Zeppelin notebook. In the upper-right corner, click anonymous, and then click Interpreter. Scroll down and find spark.
Next to spark, find the edit button and click on it. Scroll down and add dependencies on intersystems-spark-1.0.0.jar and intersystems-jdbc-3.0.0.jar. I installed InterSystems IRIS to the C:\InterSystems\IRIS\ directory, so the artifacts I need to add are at:
My files are here:
And save it.
Check that it works
Let us check it. Create a new note, and in a paragraph paste the following code:
var dataFrame = spark.read
    .format("com.intersystems.spark")
    .option("url", "IRIS://localhost:51773/NAMESPACE")
    .option("user", "UserLogin")
    .option("password", "UserPassword")
    .option("dbtable", "Sample.Person") // dbtable - the name of your table
    .load()
URL - the IRIS address. It is formed as follows: IRIS://ipAddress:superserverPort/namespace
The IRIS protocol is a JDBC connection over TCP/IP that offers a Java shared memory connection;
ipAddress - the IP address of the InterSystems IRIS instance. If you are connecting locally, use 127.0.0.1 instead of localhost;
superserverPort - the superserver port number for the IRIS instance, which is not the same as the webserver port number. To find the superserver port number, in the Management Portal go to System Administration > Configuration > System Configuration > Memory and Startup;
namespace - an existing namespace in the InterSystems IRIS instance. In this demo, we connect to the USER namespace.
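Putting those pieces together, the connection URL can be assembled with a tiny helper (a Python sketch; 51773 is the superserver port used in this example):

```python
def iris_url(host: str, superserver_port: int, namespace: str) -> str:
    """Build the Spark connection URL in the form IRIS://host:port/namespace."""
    return f"IRIS://{host}:{superserver_port}/{namespace}"

url = iris_url("127.0.0.1", 51773, "USER")
print(url)  # IRIS://127.0.0.1:51773/USER
```

The resulting string is what goes into the `.option("url", ...)` call above.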
Run the paragraph. If everything is okay, you will see FINISHED.
My notebook:
Conclusion
In conclusion, we found out how Apache Spark, Apache Zeppelin, and InterSystems IRIS can work together. In my next articles, I will write about data analysis.
Links
The official site of Apache Spark
Apache Spark documentation
IRIS Protocol
Using the InterSystems Spark Connector
💡 This article is considered an InterSystems Data Platform Best Practice.
Article
sween · Jul 29, 2021
We are ridiculously good at mastering data. The data is clean, multi-sourced, related, and we publish it with decay levels that guarantee the data is current. We chose the HL7 Reference Information Model (RIM) to land the data, and enable exchange of the data through Fast Healthcare Interoperability Resources (FHIR®).
We are also a high performing, full stack team, and like to keep our operational resources on task, so managing the underlying infrastructure to host the FHIR® data repository for purposes of ingestion and consumption is not in the cards for us. For this, we chose the [FHIR® Accelerator Service](https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=FAS) to handle storage, credentials, back up, development, and FHIR® interoperability.
Our data is marketable, and well served as an API, so we will **monetize** it. This means we need to package our data/API up for appropriate sale — which includes: a developer portal, documentation, sample code, testing tools, and other resources to get developers up and running quickly against our data. We need to focus on making our API as user-friendly as possible, and give ourselves some tooling to ward off abuse and protect our business against denial-of-service attacks. For the customers using our data, we chose to use [Google Cloud's Apigee Edge](https://apigee.google.com/edge).

> ### With our team focused and our back office entirely powered as services, we are set to make **B I L L I O N S**, and this is an account as to how.
# Provisioning
High level tasks for provisioning in the [FHIR® Accelerator Service](https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=FAS) and [Google Cloud's Apigee Edge](https://apigee.google.com/edge).
## FHIR® Accelerator Service
Head over to the AWS Marketplace and subscribe to the InterSystems FHIR® Accelerator Service, or sign up for a trial account directly [here](https://portal.trial.isccloud.io/account/signup).
After your account has been created, create a FHIR® Accelerator deployment for use to store and sell your FHIR® data.

After a few minutes, the deployment will be ready for use and available to complete the following tasks:
1. Create an API Key in the Credentials section and record it.
2. Record the newly created FHIR® endpoint from the Overview section.


## Google Cloud Apigee Edge
Within your Google Cloud account, create a project and enable it for use with Apigee Edge. To understand a little bit of the magic that is going on with the following setup: we are enabling a Virtual Network to be created, a Load Balancer, SSL/DNS for our endpoint, and making some choices on whether or not it's going to be publicly accessible.
> Fair warning here, if you create this as an evaluation and start making M I L L I O N S, it cannot be converted to a paid plan later on to continue on to making B I L L I O N S.



## Build the Product
Now, let's get on to building the product for the two initial customers of our data, Axe Capital and Taylor Mason Capital.

### Implement Proxy
Our first piece of the puzzle here is the mechanics of our proxy from Apigee to the FHIR® Accelerator Service. At its core, we are implementing a basic reverse proxy that backs the Apigee Load Balancer with our FHIR® API. Remember that we created all of the Apigee infrastructure during the setup process when we enabled the GCP Project for Apigee.

### Configure the Proxy
Configuring the proxy basically means defining a number of policies applied to the traffic/payload as it flows through the proxy (PreFlow/PostFlow), to shape the interaction and the safety of how customers/applications behave against the API.
In the below, we configure a series of policies that:
1. Add CORS Headers.
2. Remove the API Key from the query string.
3. Add the FHIR® Accelerator API key to the headers.
4. Impose a Quota/Limit.

A mix of XML directives and a user interface to configure the policy is available as below.
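Conceptually, the quota policy in step 4 enforces a request ceiling per time window. A simplified fixed-window counter in Python illustrates the idea (this is a sketch of the concept, not Apigee's actual implementation; the key name and limits are hypothetical):

```python
import time
from collections import defaultdict

class FixedWindowQuota:
    """Allow at most `limit` requests per `window` seconds, per API key."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, api_key, now=None):
        now = time.time() if now is None else now
        bucket = (api_key, int(now // self.window))
        self.counts[bucket] += 1
        return self.counts[bucket] <= self.limit

quota = FixedWindowQuota(limit=30, window=60)
results = [quota.allow("axe-capital", now=100.0) for _ in range(35)]
print(results.count(True), results.count(False))  # 30 allowed, 5 rejected
```

When the window rolls over, the counter starts fresh, which matches the behavior we test later in the rate-limit demo.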

### Add a Couple of Developers, Axe and Taylor
We need to add some developers next, which is as simple as adding users to any directory. This is required to enable the applications that are created in the next step and supplied to our customers.

### Configure the Apps, one per customer
Applications are where we break apart our *product* and logically divide it up among our customers; here we will create one app per customer. An important note: in this demonstration, this is where the API key for a particular customer is assigned, after we assign the developer to the app.

### Create the Developer Portal
The Developer Portal is the "**clown suit**" and front door for our customers, where they can interact with what they are paying for. It comes packed with some powerful customization, a specific URL for the product it is attached to, and allows the import of a swagger/openapi spec so developers can interact with the API through a Swagger-based UI. Lucky for us, the Accelerator Service comes with a Swagger definition, so we just have to know where to look for it and make some modifications so that the definitions work against our authentication scheme and URL. We don't spend a lot of time here in the demonstration, but you should if you plan on setting yourself apart for the paying customers.

### Have Bobby Send a Request
Let's let Bobby Axelrod run up a tab by sending his first requests to our super awesome data wrapped up in FHIR®. Keep in mind that the key and the endpoint being used are assigned by Apigee Edge, but access to the FHIR® Accelerator Service happens through the single key we supplied in the API proxy.


### Rate Limit Bobby with a Quota
Let's just say one of our customers has a credit problem, so we want to limit the use of our data on a rate basis. If you recall, we specified a rate of 30 requests a minute when we set up the proxy, so let's test that below.

### Bill Axe Capital
I will get ahead of your expectations here so you won't be too disappointed by how rustic the billing demonstration is. It does, however, employ a technique to generate a report for invoicing that removes things that may or may not be the customer's fault in the proxy integration. For instance, if you recall from the rate-limit demo above, we sent in 35 requests but limited things to 30; a quick filter in the billing report removes those and shows we are competent enough to bill only for our customer's actual utilization.
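The filtering idea can be sketched in a few lines of Python (the log entries, app name, and per-request price are all hypothetical):

```python
# Hypothetical request log: one entry per proxied call, with the HTTP status
# that Apigee returned (429 = rejected by the quota policy, so not billable).
request_log = (
    [{"app": "axe-capital", "status": 200}] * 30
    + [{"app": "axe-capital", "status": 429}] * 5
)

def billable(entries):
    """Keep only requests that actually reached the FHIR API (non-429)."""
    return [e for e in entries if e["status"] != 429]

price_per_request = 0.002  # hypothetical rate, in dollars
invoice = len(billable(request_log)) * price_per_request
print(f"Billable requests: {len(billable(request_log))}, invoice: ${invoice:.2f}")
```

Of the 35 requests sent, only the 30 that passed the quota show up on the invoice.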

To recap, monetizing our data included:
* Safety against abuse and DDOS protection.
* Developer Portal and customization for the customer.
* Documentation through Swagger UI.
* Control over the requests Pre/Post our API
... and a way to invoice for **B I L L I O N S**.
This is very cool. Well done. 💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Jeff Fried · Jan 27, 2020
Preview releases are now available for the 2020.1 version of InterSystems IRIS and IRIS for Health!
Kits and Container images are available via the WRC's preview download site.
The build number for these releases is 2020.1.0.199.0. (Note: first release was build 197, updated to 199 on 2/12/20)
InterSystems IRIS Data Platform 2020.1 has many new capabilities including:
Kernel Performance enhancements, including reduced contention for blocks and cache lines
Universal Query Cache - every query (including embedded & class ones) now gets saved as a cached query
Universal Shard Queue Manager - for scale-out of query load in sharded configurations
Selective Cube Build - to quickly incorporate new dimensions or measures
Security improvements, including hashed password configuration
Improved TSQL support, including JDBC support
Dynamic Gateway performance enhancements
Spark connector update
MQTT support in ObjectScript
(NOTE: this preview build does not include TLS 1.3 and OpenLDAP updates, which are planned for General Availability)
InterSystems IRIS for Health 2020.1 includes all of the enhancements of InterSystems IRIS. In addition, this release includes:
In-place conversion to IRIS for Health
HL7 Productivity Toolkit including Migration Tooling and Cloverleaf conversion
X12 enhancements
FHIR R4 base standard support
As this is an EM (Extended Maintenance) release, customers may want to know the differences between 2020.1 and 2019.1. These are listed in the release notes:
InterSystems IRIS 2020.1 release notes
IRIS for Health 2020.1 release notes
Draft documentation can be found here:
InterSystems IRIS 2020.1 documentation
IRIS for Health 2020.1 documentation
The platforms on which InterSystems IRIS and IRIS for Health 2020.1 are supported for development and production are detailed in the Supported Platforms document.

Jeffrey, thank you for the info. Do you already know that the Supported Platforms document link is broken? (404)

Hi Jeff! What are the Docker image tags for Community Editions?

I've just uploaded the Community Editions to the Docker Store (2/13 - updated with new preview build):
docker pull store/intersystems/iris-community:2020.1.0.199.0
docker pull store/intersystems/irishealth-community:2020.1.0.199.0

Thanks, Steve! Will native install kits be available for the Community Editions as well?

Yes, full kit versions of the 2020.1 Community Edition Preview are available through the WRC download site as well.

I'm getting this error when I attempt to access the link ...
Jeffrey,
If you don't use that link and first log into the WRC application: https://wrc.intersystems.com/wrc/enduserhome.csp
Can you then go to: https://wrc.intersystems.com/wrc/coDistribution2.csp
Then select Preview? Some customers have had problems with the distribution pages because their site restricts access to some JS code we get from a third party.

I get the same result using your suggested method, Brendan. I'm not technically a customer; I work for a Services Partner of ISC. I am a DC Moderator though (if that carries any weight), so it would be nice to keep abreast of the new stuff.

OK, I needed to do one more click: your Org does not have a support contract, so you can't have access to these pages, sorry. Maybe Learning Services could help you out, but I can't grant you access to the kits on the WRC.

Hello,
I took this for a spin and noticed that the new Prometheus metrics are not available on it like they were in 2019.4 ? ( ie: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api ).
Am I missing something or is the metrics api still under consideration to make it into this build ?
The correct link is https://docs.intersystems.com/iris20201/csp/docbook/platforms/index.html. I fixed the typo in the post. Thanks for pointing that out!

Seems to be there for me...
Hello Jeffrey,
We're currently working with IRIS for Health 2020.1 build 197, and we were wondering what fixes or additions went into the latest build 199. InterSystems used to publish all fixes with each FT build version; is there such a list?
Thank you
Yuriy
The Preview has been updated with build 2020.1.0.199.0. This includes a variety of changes, primarily corrections for issues found under rare conditions in install, upgrade, and certain distributed configurations. None of these changes impacts any published API.
Thank you for working with the preview and for your feedback!

Hi Yuriy -
Thanks for pointing this out. We did not prepare a list for this, but I did make a comment on this thread, including verifying that none of these changes impacts any published API. If there is a change resolving an issue you reported through the WRC, you'll see that this is resolved via the normal process. We will be publishing detailed changenotes with the GA release.
-Jeff
Article
sween · Mar 4, 2024
If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want access to the metrics of your deployments and send them to your own Observability platform, here is a quick and dirty way to get it done by sending the metrics to Google Cloud Platform Monitoring (formerly StackDriver).
The Cloud portal does contain a representation of some top-level metrics for at-a-glance viewing, powered by a metrics endpoint that is exposed to you, though without some inspection you would not know it was there.
🚩 This approach is most likely taking advantage of a "to be named feature", so with that being said, it is not future-proof and definitely not supported by InterSystems.
So what if you wanted a more comprehensive set exported? This technical article/example shows a technique to scrape and forward metrics to observability; it can be modified to suit your needs, to scrape ANY metrics target and send it to ANY observability platform using the OpenTelemetry Collector.
The mechanics leading up to the above result can be accomplished in many ways, but here we are standing up a Kubernetes pod to run a Python script in one container and OTel in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article, k8s is the actor pulling this off with Python.
Steps:
Prereqs
Python
Container
Kubernetes
Google Cloud Monitoring
Prerequisites:
An active subscription to IRIS® Cloud SQL
One Deployment, running, optionally with Integrated ML
Secrets to supply to your environment
Environment Variables
Obtain Secrets
I dropped this in as a teaser, as it is a bit involved and somewhat off target from the point, but these are the values you will need to generate the secrets.
ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
☝ These are your credentials for https://portal.live.isccloud.io
ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_API 'api'
☝ These you have to dig out of development tools for your browser.
`aud` = clientid
`iss` = userpoolid
`api` = request url
ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'
☝ This can be derived from the Cloud Service Portal
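One way to dig the `iss` and `aud` values out is to base64-decode the payload segment of the ID token visible in your browser's developer tools. A sketch using a synthetic token (real tokens are signed and much longer; the claim values below are made up):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a synthetic token for demonstration; only the middle segment matters here.
claims = {"iss": "https://cognito-idp.us-east-2.amazonaws.com/us-east-2_example",
          "aud": "example-client-id"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{payload}.signature"

decoded = jwt_claims(token)
print(decoded["iss"], decoded["aud"])
```

The `aud` claim maps to the client id and the `iss` claim to the user pool id, per the table above.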
Python:
Here is the python hackery to pull the metrics from the Cloud Portal and export them locally as metrics for the otel collector to scrape:
iris_cloudsql_exporter.py
```python
import time
import os
import json
import requests
from warrant import Cognito
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY
from prometheus_client.parser import text_string_to_metric_families


class IRISCloudSQLExporter(object):

    def __init__(self):
        self.access_token = self.get_access_token()
        self.portal_api = os.environ['IRIS_CLOUDSQL_API']
        self.portal_deploymentid = os.environ['IRIS_CLOUDSQL_DEPLOYMENTID']

    def collect(self):
        headers = {
            'Authorization': self.access_token,  # needs to be refresh_token, eventually
            'Content-Type': 'application/json'
        }
        metrics_response = requests.get(
            self.portal_api + '/metrics/' + self.portal_deploymentid,
            headers=headers
        )
        metrics = metrics_response.content.decode("utf-8")
        for iris_metrics in text_string_to_metric_families(metrics):
            for sample in iris_metrics.samples:
                # each sample is a (name, labels, value) tuple
                labels_string = "{1}".format(*sample).replace('\'', "\"")
                labels_dict = json.loads(labels_string)
                labels = list(labels_dict)  # label names
                g = GaugeMetricFamily("{0}".format(*sample), 'Help text', labels=labels)
                if labels:
                    g.add_metric(list(labels_dict.values()), "{2}".format(*sample))
                else:
                    g.add_metric([""], "{2}".format(*sample))
                yield g

    def get_access_token(self):
        user_pool_id = os.environ['IRIS_CLOUDSQL_USERPOOLID']  # isc iss
        username = os.environ['IRIS_CLOUDSQL_USER']
        password = os.environ['IRIS_CLOUDSQL_PASS']
        clientid = os.environ['IRIS_CLOUDSQL_CLIENTID']  # isc aud
        u = Cognito(
            user_pool_id=user_pool_id,
            client_id=clientid,
            user_pool_region="us-east-2",  # needed by warrant; should be derived from the pool id
            username=username
        )
        u.authenticate(password=password)
        return u.id_token


if __name__ == '__main__':
    start_http_server(8000)
    REGISTRY.register(IRISCloudSQLExporter())
    while True:
        REGISTRY.collect()
        print("Polling IRIS CloudSQL API for metrics data....")
        time.sleep(120)
```
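For reference, each sample the collector iterates over is essentially a (name, labels, value) triple parsed from the Prometheus text format. A simplified, stdlib-only illustration of that parsing (the exporter above uses `prometheus_client.parser`; this sketch ignores escaping edge cases and the sample metric name is made up):

```python
import re

def parse_sample(line):
    """Parse one simple Prometheus exposition line into (name, labels, value).

    Simplified: assumes no escaped quotes or commas inside label values.
    """
    m = re.match(r'(\w+)(?:\{(.*)\})?\s+(\S+)', line)
    name, raw_labels, value = m.group(1), m.group(2) or "", float(m.group(3))
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
    return name, labels, value

print(parse_sample('iris_cpu_usage{id="system"} 4.2'))
# ('iris_cpu_usage', {'id': 'system'}, 4.2)
```

This is exactly the shape the `collect()` loop turns back into `GaugeMetricFamily` objects for the OTel collector to scrape.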
Docker:
Dockerfile
```dockerfile
FROM python:3.8

ADD src /src
WORKDIR /src

ENV PYTHONPATH '/src/'
ENV PYTHONUNBUFFERED=1
ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
ENV IRIS_CLOUDSQL_API 'api'
ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'

RUN pip install prometheus_client
RUN pip install requests
RUN pip install -r requirements.txt

CMD ["python", "/src/iris_cloudsql_exporter.py"]
```
```shell
docker build -t iris-cloudsql-exporter .
docker image tag iris-cloudsql-exporter sween/iris-cloudsql-exporter:latest
docker push sween/iris-cloudsql-exporter:latest
```
Deployment:
k8s; Create a namespace:
kubectl create ns iris
k8s; Add the secret:
kubectl create secret generic iris-cloudsql -n iris \
--from-literal=user=$IRIS_CLOUDSQL_USER \
--from-literal=pass=$IRIS_CLOUDSQL_PASS \
--from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
--from-literal=api=$IRIS_CLOUDSQL_API \
--from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
--from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID
otel, Create Config:
```yaml
apiVersion: v1
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'IRIS CloudSQL'
              # Override the global default and scrape targets from this job every 30 seconds.
              scrape_interval: 30s
              scrape_timeout: 30s
              metrics_path: /
              static_configs:
                - targets: ['192.168.1.96:5000']
    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris
```
k8s; Load the otel config as a configmap:
kubectl -n iris create configmap otel-config --from-file config.yaml
k8s; deploy a load balancer (definitely optional), MetalLB. I do this to scrape and inspect from outside of the cluster.
```shell
cat <<EOF | kubectl apply -n iris -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 8000
EOF
```
gcp; we need keys to Google Cloud; the service account needs to be scoped with
roles/monitoring.metricWriter
kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json
k8s; the deployment/pod itself, two containers:
deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-cloudsql-exporter
  labels:
    app: iris-cloudsql-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris-cloudsql-exporter
  template:
    metadata:
      labels:
        app: iris-cloudsql-exporter
    spec:
      containers:
        - name: iris-cloudsql-exporter
          image: sween/iris-cloudsql-exporter:latest
          ports:
            - containerPort: 5000
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
            - name: IRIS_CLOUDSQL_USERPOOLID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: userpoolid
            - name: IRIS_CLOUDSQL_CLIENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: clientid
            - name: IRIS_CLOUDSQL_USER
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: user
            - name: IRIS_CLOUDSQL_PASS
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: pass
            - name: IRIS_CLOUDSQL_API
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: api
            - name: IRIS_CLOUDSQL_DEPLOYMENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: deploymentid
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.92.0
          args:
            - --config
            - /etc/otel/config.yaml
          volumeMounts:
            - mountPath: /etc/otel/
              name: otel-config
            - name: gmp-sa
              mountPath: /gmp
              readOnly: true
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
      volumes:
        - name: gmp-sa
          secret:
            secretName: gmp-test-sa
        - name: otel-config
          configMap:
            name: otel-config
```
kubectl -n iris apply -f deployment.yaml
Running
Assuming nothing is amiss, let's peruse the namespace and see how we are doing.
✔ 2 config maps, one for GCP, one for otel
✔ 1 load balancer
✔ 1 pod, 2 containers, successful scrapes
Google Cloud Monitoring
Check Google Cloud Monitoring to see that the metrics are arriving okay, and be awesome in observability!
Announcement
AYUSH Shetty · May 18
I am writing to express my interest in the "IRIS Ensemble Integration" position. I have 2 years of experience as an Ensemble/IRIS developer, working with Ensemble and IRIS for integration, server management, and application development. I am looking for more opportunities to work with IRIS, Caché, and ObjectScript.
Article
Ben Spead · May 21
For over 15 years I have been playing with ways to speed up the way I use InterSystems systems and technology via AutoHotkey scripting. As a power keyboard user (I avoid my mouse when possible), I found it very helpful to set up hotkeys to get to my most frequently accessed systems and research utilities as quickly as possible. While I have used this approach for many years, this is the first time that I am introducing my approach and a customer-facing hotkey script to the D.C. and OEx...
ISCLauncher is a hotkey program based on AutoHotkey (a Windows scripting language) which provides quick access to a number of useful InterSystems resources and online systems (Windows OS only). Use it to quickly access the following InterSystems resources:
Documentation Search
Developer Community Search
D.C. A.I.
Online Learning
WRC Issues
iService Issues
CCR records
... plus more!
To try it out for yourself, use the "Download" button on the ISCLauncher Open Exchange listing, which will pull down a ZIP file from which you can extract the contents. Run ISCLauncher.exe and you will see the following in your Windows SysTray:
To pull up the Help screen so you can see all of the things that it can do, once you are running ISCLauncher, press [Ctrl]+[Windows]+[?]:
The power of ISCLauncher is that it can turn plain text into a hyperlink. For example, if you have an ID from a WRC, iService, or CCR record in an email, chat, or notes, simply highlight the record ID and use the appropriate hotkey to jump directly to that record. See a demo of a CCR lookup below ([Ctrl]+[Windows]+[c]):
To access a record even faster, use ISC Uber-Key ( [Ctrl]+[Windows]+[Space] ) to try to automatically determine the record type and navigate immediately there (credit to @Chad.Severtson for the original ISC Uber-Key code from years ago!).
In addition to WRC, iService and CCR records - do quick searches against things like InterSystems Documentation ([Ctrl]+[Windows]+[b] ) or the Developer Community ([Ctrl]+[Windows]+[d]):
Make this tool even more powerful by adding your own hotkeys for things that you frequently type or open on your desktop. For some inspiration, here is my personal launcher which I have tuned over the years:
Have ideas on how to make this even more powerful? Add comments below. Also, once this is in GitHub, feel free to create PRs with your suggestions.
Announcement
Anastasia Dyubaylo · Jun 2
Hi Community,
It's time to announce the winners of the InterSystems FHIR and Digital Health Interoperability Contest!
Thanks to all our amazing participants who submitted 11 applications 🔥
Now it's time to announce the winners!
Experts Nomination
🥇 1st place and $5,000 go to the FHIRInsight app by @José.Pereira, @henry, @Henrique
🥈 2nd place and $2,500 go to the iris-fhir-bridge app by @Muhammad.Waseem
🥉 3rd place and $1,000 go to the health-gforms app by @Yuri.Gomes
🏅 4th place and $500 go to the fhir-craft app by @Laura.BlázquezGarcía
🏅 5th place and $300 go to the CCD Data Profiler app by @Landon.Minor
🌟 $100 go to the IRIS Interop DevTools app by @Chi.Nguyen-Rettig
🌟 $100 go to the hc-export-editor app by @Eric.Fortenberry
🌟 $100 go to the iris-medbot-guide app by @shan.yue
🌟 $100 go to the Langchain4jFhir app by @ErickKamii
🌟 $100 go to the ollama-ai-iris app by @Oliver.Wilms
Community Nomination
🥇 1st place and $1,000 go to the iris-medbot-guide app by @shan.yue
🥈 2nd place and $600 go to the FHIRInsight app by @José.Pereira, @henry, @Henrique
🥉 3rd place and $300 go to the FhirReportGeneration app by @XININGMA
🏅 4th place and $200 go to the iris-fhir-bridge app by @Muhammad.Waseem
🏅 5th place and $100 go to the fhir-craft app by @Laura.BlázquezGarcía
Our sincerest congratulations to all the winners!
Join the fun next time ;)
Announcement
Bob Kuszewski · Jun 20
InterSystems is pleased to announce that IAM 3.10 has been released. IAM 3.10 is the first significant release in about 18 months, so it includes many new features that are not available in IAM 3.4, including:
Added support for incremental config sync for hybrid mode deployments. Instead of sending the entire entity config to data planes on each config update, incremental config sync lets you send only the changed configuration to data planes.
Added the new configuration parameter admin_gui_csp_header to Gateway, which controls the Content-Security-Policy (CSP) header served with Kong Manager. This defaults to off, and you can opt in by setting it to on. You can use this setting to strengthen security in Kong Manager.
AI RAG Injector (ai-rag-injector): Added the AI RAG Injector plugin, which allows automatically injecting documents to simplify building RAG pipelines.
AI Sanitizer (ai-sanitizer): Added the AI Sanitizer plugin, which can sanitize PII in requests before the requests are proxied by the AI Proxy or AI Proxy Advanced plugins.
Kafka Consume (kafka-consume): Introduced the Kafka Consume plugin, which adds Kafka consumption capabilities to Kong Gateway.
Redirect (redirect): Introduced the Redirect plugin, which lets you redirect requests to another location.
… and many more
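As an illustration of how one of the new plugins might be enabled, here is a sketch of a declarative Kong Gateway configuration wiring the Redirect plugin onto a route. The service, route, and config field values are placeholders, and the exact plugin config fields are assumptions based on typical Kong plugin schemas; consult the Kong Gateway 3.10 plugin documentation for the authoritative names.

```yaml
# Sketch of a declarative config (kong.yml) -- names and fields are assumptions.
_format_version: "3.0"
services:
  - name: legacy-service
    url: http://legacy.internal:8080
    routes:
      - name: legacy-route
        paths:
          - /old-api
        plugins:
          - name: redirect
            config:
              status_code: 301
              location: https://new.example.com/api
```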
Customers upgrading from earlier versions of IAM must get a new IRIS license key to use IAM 3.10: Kong has changed its licensing in a way that requires us to provide you with new license keys. When upgrading IAM, install the new IRIS license key on your IRIS server before starting IAM 3.10.
IAM 2.8 has reached its end-of-life and current customers are strongly encouraged to upgrade as soon as possible. IAM 3.4 will reach end-of-life in 2026, so start planning that upgrade soon.
IAM is an API gateway between your InterSystems IRIS servers and applications, providing tools to effectively monitor, control, and govern HTTP-based traffic at scale. IAM is available as a free add-on to your InterSystems IRIS license.
IAM 3.10 can be downloaded from the Components area of the WRC Software Distribution site.
Follow the Installation Guide for guidance on how to download, install, and get started with IAM. The complete IAM 3.10 documentation gives you more information about IAM and using it with InterSystems IRIS. Our partner Kong provides further details in the Kong Gateway (Enterprise) 3.10 documentation.
IAM is only available in OCI (Open Container Initiative) a.k.a. Docker container format. Container images are available for OCI compliant run-time engines for Linux x86-64 and Linux ARM64, as detailed in the Supported Platforms document.
The build number for this release is IAM 3.10.0.2.
This release is based on Kong Gateway (Enterprise) version 3.10.0.2.
Announcement
Irène Mykhailova · Jun 23
Hi Community!
We have great news for those of you who are interested in what's happening at InterSystems Ready 2025 but couldn't attend in person. All the keynotes are being streamed! Moreover, you can watch them afterward if they air at an inopportune time for you.
Keynotes from Day 1 are already ready 😉
And don't forget to check out the rest of the keynotes:
Keynotes from day 2
Keynotes from day 3
It promises to be epic!
Announcement
Fabiano Sanches · Jun 27
Reference: Build 2025.1.0.1.24372U.25e14d55
Overview
This release introduces significant enhancements to security, analytics capabilities, and user experience, along with important operational improvements aimed at reducing downtime and improving reliability.
New Features and Enhancements
Analytics
- Adaptive Analytics in Data Fabric Studio: InterSystems Data Fabric Studio now includes Adaptive Analytics as an optional feature, offering advanced analytics capabilities directly within your workflow.

Security
- Enhanced Firewall Management: The firewall management page now supports creating explicit inbound and outbound firewall rules specifically for port 22, providing greater security and access control.
- Custom APIs Security Update: Custom APIs have transitioned from ID tokens to access tokens, strengthening security by improving authentication mechanisms.
- Enforcement of HTTPS for Custom APIs: Custom APIs no longer support HTTP; all communication is now exclusively over HTTPS, ensuring encrypted and secure data transmission.
- General Security Improvements: Multiple security enhancements have been applied, reinforcing the security posture across the platform.

User Experience
- New Feature Announcements and Widgets: Additional widgets have been introduced to communicate new features, announcements, and important updates directly within the Cloud Service Portal.

Operations
- Improved Timezone Change Performance: Downtime associated with the timezone-change operation on production environments has been significantly reduced, from approximately 2 minutes to about 15 seconds, minimizing impact on operations.
Recommended Actions
Explore Adaptive Analytics within Data Fabric Studio to enhance your data-driven decision-making capabilities.
Review firewall settings to leverage the new inbound/outbound port 22 rules. The first deploy you perform will define the rules, so make sure to review the outbound rules.
Ensure Custom APIs use updated SDKs that utilize access tokens instead of ID tokens, and confirm HTTPS-only configurations are correctly applied.
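As a sketch of the client-side implication of the last action item, the helper below refuses plain-HTTP endpoints and attaches an OAuth access token (rather than an ID token) as a bearer credential. The endpoint URL and token are placeholders, not real InterSystems APIs, and this is only an illustration of the HTTPS-only, access-token pattern.

```python
from urllib.parse import urlparse

def build_request(url: str, access_token: str) -> dict:
    """Return request metadata for a Custom API call, enforcing HTTPS."""
    if urlparse(url).scheme != "https":
        raise ValueError("Custom APIs are HTTPS-only; refusing plain HTTP")
    return {
        "url": url,
        # The access token (not an ID token) goes in the Authorization header.
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

req = build_request("https://api.example.com/v1/data", "dummy-token")
print(req["headers"]["Authorization"])  # Bearer dummy-token
```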
Support
For assistance, open a support case via iService or directly through the InterSystems Cloud Service Portal.
Thank you for choosing InterSystems Cloud Services.
Announcement
Thomas Dyar · Dec 14, 2021
Preview releases are now available for the 2021.2 version of InterSystems IRIS, IRIS for Health, and HealthShare Health Connect.
As this is a preview release, we are eager to learn from your experiences with this new release ahead of its General Availability release next month. Please share your feedback through the Developer Community so we can build a better product together.
InterSystems IRIS Data Platform 2021.2 makes it even easier to develop, deploy and manage augmented applications and business processes that bridge data and application silos. It has many new capabilities including:
Enhancements for application and interface developers, including:
Embedded Python
Interoperability Productions in Python
Updates to Visual Studio Code ObjectScript Extension Pack
New Business Services and Operations allow users to set up and run SQL queries with minimal custom coding
Enhancements for Analytics and AI, including:
New SQL LOAD command efficiently loads CSV and JDBC source data into tables
Enhancements to Adaptive Analytics
Enhancements for Cloud and Operations tasks, including:
New Cloud Connectors make it simple to access and use cloud services within InterSystems IRIS applications
IKO enhancements improve manageability of Kubernetes resources
Enhancements for database and system administrators, including:
Online Shard Rebalancing automates distribution of data across nodes without interrupting operations
Adaptive SQL engine uses fast block sampling and automation to collect advanced table statistics and leverages runtime information for improved query planning
Storage needs for InterSystems IRIS are reduced with new stream and journal file compression settings
Support for TLS 1.3 and OpenSSL 1.1.1, using system-provided libraries
New ^TRACE utility reports detailed process statistics such as cache hits and reads
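As a taste of the new SQL LOAD command mentioned above, a minimal use might look like the following. The file path, table, and column names are placeholders; see the 2021.2 SQL reference for the full syntax and options.

```sql
-- Bulk-load a CSV file into an existing table (names/paths are examples).
LOAD DATA FROM FILE '/tmp/patients.csv'
INTO Sample.Patients (Name, DOB, City)
```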
More details on all of these features can be found in the product documentation:
InterSystems IRIS 2021.2 documentation and release notes
InterSystems IRIS for Health 2021.2 documentation and release notes
HealthShare Health Connect 2021.2 documentation and release notes
InterSystems IRIS 2021.2 is a Continuous Delivery (CD) release, which now comes with classic installation packages for all supported platforms, as well as container images in OCI (Open Container Initiative) a.k.a. Docker container format. Container images are available for OCI compliant run-time engines for Linux x86-64 and Linux ARM64, as detailed in the Supported Platforms document.
Full installation packages for each product are available from the WRC's product download site. Using the "Custom" installation option enables users to pick the options they need, such as InterSystems Studio and IntegratedML, to right-size their installation footprint.
Installation packages and preview keys are available from the WRC's preview download site.
Container images for the Enterprise Edition, Community Edition and all corresponding components are available from the InterSystems Container Registry using the following commands:
docker pull containers.intersystems.com/intersystems/iris:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/iris-ml:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/irishealth:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/irishealth-ml:2021.2.0.617.0
For a full list of the available images, please refer to the ICR documentation.
Alternatively, tarball versions of all container images are available via the WRC's preview download site.
The build number for this preview release is 2021.2.0.617.0.

Interoperability productions with Python and Cloud connectors? YEEEESSSSSSS.
However, containers.intersystems.com is giving up bad credentials.... or am I the sole brunt of cruelty here?
```
(base) sween @ dimsecloud-pop-os ~
└─ $ ▶ docker login -u="ron.sweeney@integrationrequired.com" containers.intersystems.com
Password:
Error response from daemon: Get https://containers.intersystems.com/v2/: unauthorized: BAD_CREDENTIAL
```
I was able to get in, for example:
$ docker-ls tags --registry https://containers.intersystems.com intersystems/irishealth ...
requesting list . done
repository: intersystems/irishealth
tags:
- 2019.1.1.615.1
- 2020.1.0.217.1
- 2020.1.1.408.0
- 2020.2.0.211.0
- 2020.3.0.221.0
- 2020.4.0.547.0
- 2021.1.0.215.0
- 2021.2.0.617.0

No problem:
download from wrc>previews some .tar.gz
docker load -i <downloaded> ...
off it goes with Docker run or Dockerfile + docker-compose
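The tarball workflow above can be wrapped in a minimal docker-compose.yml sketch. The image name and tag are assumptions and must match whatever `docker load` reported; the host ports and container name are just examples.

```yaml
# Sketch only -- image name/tag must match what `docker load` reported.
version: "3.7"
services:
  iris:
    image: intersystems/irishealth:2021.2.0.617.0
    container_name: my-irishealth
    ports:
      - "9091:1972"    # SuperServer port
      - "9092:52773"   # Web/management port
```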
And here it is, containers.intersystems.com gone
$ docker pull containers.intersystems.com/intersystems/irishealth-community:2021.2.0.617.0
Error response from daemon: Get "https://containers.intersystems.com/v2/": Service Unavailable
Could you push those images to Docker Hub, as usual before? It's more stable.

Hi Dmitry,
Thanks for the heads-up, we are working to bring containers.intersystems.com back online. The docker hub listings will be updated with the 2021.2 preview images in the next day or so, and we will update this announcement when they're available!
Kind Regards,
Thomas Dyar

Good news -- containers.intersystems.com is back online. Please let us know if you encounter any issues!
Regards,
Thomas Dyar

And images with ZPM package manager 0.3.2 are available accordingly:
intersystemsdc/iris-community:2021.2.0.617.0-zpm
intersystemsdc/iris-ml-community:2021.2.0.617.0-zpm
intersystemsdc/iris-community:2021.1.0.215.3-zpm
intersystemsdc/irishealth-community:2021.1.0.215.3-zpm
intersystemsdc/irishealth-ml-community:2021.1.0.215.3-zpm
And to launch IRIS do:
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-ml-community:2021.2.0.617.0-zpm
And for terminal do:
docker exec -it my-iris iris session IRIS
and to start the control panel:
http://localhost:9092/csp/sys/UtilHome.csp
To stop and destroy container do:
docker stop my-iris
And the FROM clause in dockerfile can look like:
FROM intersystemsdc/iris-community:2021.2.0.617.0-zpm
Or to take the latest image:
FROM intersystemsdc/iris-community

Excellent! That's comfort. Available on Docker Hub too.

Will we have an arm64 version?

I was trying to install the preview version on Ubuntu 20.04.3 LTS ARM64 (in a VM on a Mac M1).
But irisinstall gave the following error. Installing zlib1g-dev did not solve the problem. Could anyone suggest what I'm missing?
-----
Your system type is 'Ubuntu LTS (ARM64)'.
zlib1g version 1 is required.
** Installation aborted **

Based on the message alone, you would need:
sudo apt install zlib1g
Instead of:
sudo apt install zlib1g-dev

Thanks. But it looks like zlib1g is already installed:
$ sudo apt install zlib1g
Reading package lists... Done
Building dependency tree
Reading state information... Done
zlib1g is already the newest version (1:1.2.11.dfsg-2ubuntu1.2).
0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

I know this is not the proper way to do it, but I worked around this issue by deleting the line for zlib1g in the file package/requirements_check/requirements.lnxubuntu2004arm64.isc.
Looks like the instance is working fine, so I suspect there is something wrong with requirement checking in installation.
Question
Ben Spead · Feb 3, 2022
(I wasn't able to find this in the docs or the Community, so feel free to point me to a reference that I missed)
How can I determine the effective User and Group that will be used when an InterSystems IRIS process is doing file I/O on the file system for UNIX? Bonus points if you can tell me how to do it from within InterSystems IRIS as well as from the host OS.
Thanks! From the OS side in AIX, I can see it in parameters.isc (example from a QA env I'm playing with)
security_settings.iris_user: irisusr
security_settings.iris_group: irisusr
security_settings.manager_user: irisusr
security_settings.manager_group: irisusr
I do not recall how to see it in IRIS itself (or if it's even possible), but I remember wanting to figure out how to change the values after installation (due to someone goofing up an entry on a dev environment), and without a lot of effort it is pretty difficult.

Thank you @Craig.Regester for the response. Looking up that file in the docs (https://docs.intersystems.com/iris20201/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_unixdist), it tells me that "For security reasons, the parameters.isc file is accessible only by the root user." I am pretty sure there is a way to tell what it is currently running as without needing that level of access. But it is good to know about this in cases where root access is an option.

Interesting question. I didn't see anything for this in the class reference either. I'll be following this post.

##class(%SYS.ProcessQuery).GetOSUsername()

You could try:

ps -ax -o uname,cmd | grep "irisdb WD" | grep -v "grep"

-ax will get all the running processes, including ones that aren't running on a terminal, and retrieve the username and the command that started each process. We then use grep to filter out the exact process we need.

Thanks @Jean.Millette and @Timothy.Leavitt for the help with this!

Great way to do it within InterSystems IRIS - thanks Tim!

Perfect!! Exactly what I was looking for - thank you @Sarmishta.Velury (and @Jean.Millette and @Timothy.Leavitt)

If you're doing file I/O, why not %File.GetOwnerGroup()? Or is this something you need to know before you open (or create) a file?

@Jeffrey.Drumm - it is important to have this run from a process that is already running in the InterSystems IRIS instance (or something spun off from the SuperServer), so that it doesn't pick up your UNIX username if you enter via console. So being able to run an API like Tim's will work nicely, as I can run it via the Output window from Studio:

Understood ... but I'm curious as to how IRIS is getting my environment:
w ##class(%SYSTEM.Util).GetEnviron("USER")
jdrumm
w ##class(%SYS.ProcessQuery).GetOSUsername()
irisusr
No idea, but I don't think it's tied to the IRIS name. When I am logged into Studio as bspead I see the following:
[quote]
w ##class(%SYSTEM.Util).GetEnviron("USER")
root
w ##class(%SYS.ProcessQuery).GetOSUsername()
cacheusr
[/quote]
Announcement
Evgeny Shvarov · Jun 1, 2022
Hi developers!
Here is the score of technical bonuses for participants' applications in the InterSystems Grand Prix 2022 programming Contest!
Project
InterSystems FHIR
IntegratedML
Native API
Interoperability
Production EXtension
Embedded Python
AtScale
Tableau, PowerBI, Logi
InterSystems IRIS BI
Docker
ZPM
Online Demo
Unit Testing
First Article on DC
Second Article on DC
Code Quality
Video on YouTube
Total Bonus
Nominal
5
4
3
3
4
5
4
9
3
2
2
2
2
2
1
1
3
55
db-migration-using-SQLgateway
2
2
2
6
CrossECP-IRIS
2
2
2
1
7
M-N-Contests
2
2
2
2
1
1
3
13
cryptocurrency-rate-forecasting
0
FHIR Patient Viewer
5
2
3
10
IRIS import manager
3
2
2
7
test-data
3
5
2
2
2
2
2
1
19
Docker InterSystems Extension
2
3
5
apptools-infochest
2
2
2
2
8
iris-mail
3
5
2
2
2
2
1
17
production-monitor
3
2
2
7
iris-megazord
5
3
2
2
2
2
2
1
3
22
apptools-admin
2
2
2
6
webterminal-vscode
2
2
ESKLP
3
2
2
7
Disease Predictor
4
2
2
2
1
3
14
iris-fhir-client
5
5
2
2
2
2
1
3
22
Water Conditions in Europe
5
4
9
3
2
2
2
1
3
31
FHIR Pseudonymization Proxy
5
3
2
2
12
ObjectScript-Syntax-For-GitLab
2
2
1
5
CloudStudio
2
3
5
FIT REST Operation Framework
3
2
2
7
Bonuses are subject to change as applications are updated.
Please claim your bonuses here in the comments below or in the Discord chat.

My app Disease Predictor has a YouTube video. It is already added at the end of the OEX app page description (https://openexchange.intersystems.com/package/Disease-Predictor), so I claim the YouTube bonus.

I've added a YouTube video to the description of the OEX application https://openexchange.intersystems.com/package/Water-Conditions-in-Europe. Also, two related articles were posted to DC and linked to the application; please count them.
And the last question is about bonuses for Tableau, PowerBI, Logi.
According to the rules, 3 points count for each of those systems. I've used all three in my contest app.

I've added a YouTube video and a second article for the iris-fhir-client app (https://openexchange.intersystems.com/package/iris-fhir-client), so please consider them.
Thanks!

Added zpm/docker support (though struggling a bit with M1 Docker, I think).

Hi @Craig.Regester! This is great!
Please submit the app with the ZPM option on OEx to make it available for everyone!