Announcement
Evgeny Shvarov · Aug 21, 2018
Hi Community! This year we will have a special section on Flash Talks, which gives you an opportunity to introduce your tool or solution at InterSystems Global Summit 2018!

What are Flash Talks? It's a 15-minute session you have on the Technology Exchange stage: 10 minutes for your pitch, 5 minutes for Q&A. The session WILL BE live streamed on the Developer Community YouTube Channel.

Developer Community Flash Talks! Today, 10/02, Flash Talks Stage @ InterSystems Global Summit 2018!
2 pm Open source approaches to work with Documents by @Eduard.Lebedyuk, InterSystems
2-15 InterSystems IRIS on Kubernetes by @Dmitry.Maslennikov
2-30 Visual Studio Code IDE for InterSystems Data Platforms by @John.Murray, George James Software
2-45 Static Analysis for ObjectScript with CacheQuality by @Daniel.Tamajon, Lite Solutions
3-00 InterSystems Open Exchange by @Evgeny.Shvarov, InterSystems
3-15 Q&A Session on Developer Community, Global Masters, and Open Exchange

Well! We already have two slots taken!

One is: Static Analysis for ObjectScript with CacheQuality. It's a CI-compatible static analysis tool for ObjectScript codebases which finds bugs and enforces coding guidelines using a set of managed rules from Lite Solutions (which would now sound better as IRISQuality, @Daniel.Tamajon? ;)

Another slot is a secret topic, which will be announced just before the Global Summit.

So we have two more slots; book your session on InterSystems IRIS Flash Talks from the Developer Community!

Yet another topic for InterSystems IRIS Flash Talks: InterSystems IRIS on Kubernetes by @Dmitry.Maslennikov.

A few days before Global Summit 2018! And I can announce yet another presenter: @Eduard.Lebedyuk, InterSystems. Title: Open source approaches to work with Documents.

And we have the day and time! Find the Developer Community Flash Talks "Share Your InterSystems IRIS Solution!" on Tuesday, the 2nd of October, on the Flash Talks Stage from 2 pm to 3:30 pm.

And we have another flash talk topic: "Visual Studio Code IDE for InterSystems Data Platforms" from @John.Murray! So many exciting topics! Looking forward to it!

And one more topic from me: InterSystems Open Exchange, a marketplace of solutions, tools, and adapters for InterSystems Data Platforms!

The updated agenda is here and in the topic: Tuesday 10/02, at Tech Exchange, Global Summit 2018!
2 pm Open source approaches to work with Documents by @Eduard.Lebedyuk, InterSystems
2-15 InterSystems IRIS on Kubernetes by @Dmitry.Maslennikov
2-30 Visual Studio Code IDE for InterSystems Data Platforms by @John.Murray, George James Software
2-45 Static Analysis for ObjectScript with CacheQuality by @Daniel.Tamajon, Lite Solutions
3-00 InterSystems Open Exchange by @Evgeny.Shvarov, InterSystems
3-15 Q&A Session on Developer Community, Global Masters, and Open Exchange

This is a live broadcast recording from Developer Community Flash Talks.

Great stuff. Thanks. Where can I get the sample files and/or some instructions on the use of Kubernetes as demonstrated?

I bet @Dmitry.Maslennikov can share the info. Dmitry?

I have done Kubernetes deployments on a couple of projects, but both of them are private, so I can't share them. If you need some help, you can contact me directly, and I can share some ideas about how it can be done.

Thanks, Dmitry.
Announcement
Anastasia Dyubaylo · Nov 2, 2018
Hi Developers!

A new video from Global Summit 2018 is available now on the InterSystems Developers YouTube Channel: Unit Test Coverage in InterSystems ObjectScript.

InterSystems is sharing a tool they use for measuring test coverage. Watch this session recording and learn how you can measure the effectiveness of your existing unit tests, identify weak areas, enable improvement, and track results over time.

Takeaway: I know how to use the tool InterSystems provides to improve my unit tests.

Presenter: @Timothy.Leavitt, Developer on the HealthShare Development Team.

Note: The Test Coverage Tool is also available on InterSystems Open Exchange.

And... content related to this session, including slides, video, and additional learning content can be found here.

Don't forget to subscribe to our InterSystems Developers YouTube Channel.

Enjoy and stay tuned!
Announcement
Evgeny Shvarov · Jul 6, 2023
Hi Developers!
Here are the bonus results for the applications in the InterSystems Grand Prix Programming Contest 2023:
Bonus categories and nominal points:

| Bonus | Nominal |
| --- | --- |
| LLM AI or LangChain | 6 |
| FHIR SQL Builder | 5 |
| FHIR | 3 |
| IntegratedML | 4 |
| Native API | 3 |
| Embedded Python | 4 |
| Interoperability | 3 |
| PEX | 2 |
| Adaptive Analytics | 3 |
| Tableau, PowerBI, Logi | 3 |
| IRIS BI | 3 |
| Columnar Index | 1 |
| Docker | 2 |
| ZPM | 2 |
| Online Demo | 2 |
| Unit Testing | 2 |
| Community Idea Implementation | 4 |
| First Article on DC | 2 |
| Second Article on DC | 1 |
| Code Quality | 1 |
| First Time Contribution | 3 |
| Video on YouTube | 3 |
| Total Bonus (maximum) | 62 |

Bonuses awarded per project (individual bonus values as listed in the original table, followed by the total):

| Project | Awarded bonuses | Total Bonus |
| --- | --- | --- |
| oex-mapping | 4, 3, 2, 2, 2, 2, 2, 1, 1, 3 | 22 |
| appmsw-warm-home | 2, 2, 2, 2, 1 | 9 |
| RDUH Interface Analyst HL7v2 Browser Extension | 3, 3 | 6 |
| irisapitester | 4, 2, 2, 2, 1, 1, 3 | 15 |
| oex-vscode-snippets-template | 2, 2, 4, 1 | 9 |
| IRIS FHIR Transcribe Summarize Export | 6, 3, 4, 2, 2, 2, 2, 1, 1, 3, 3 | 29 |
| IntegratedMLandDashboardSample | 4, 3, 2, 2, 1 | 12 |
| iris-user-manager | 2, 2, 1 | 5 |
| irisChatGPT | 6, 5, 4, 2, 2, 2, 2, 1, 1, 3 | 28 |
| fhir-chatGPT | 6, 3, 4, 2, 1 | 16 |
| iris-fhir-generative-ai | 6, 3, 4, 3, 2, 2, 2, 2, 1, 1, 3 | 29 |
| IRIS Data Migration Manager | - | 0 |
| password-app-iris-db | 3, 2, 2, 2, 3, 3 | 15 |
| interoperability_GPT | 6, 4, 3, 2, 1 | 16 |
| FHIR Editor | 3, 2 | 5 |
| Recycler | 3 | 3 |
| ZProfile | 2, 2, 2, 2, 3 | 11 |
| DevBox | 6, 2, 3 | 11 |
| FHIR - AI and OpenAPI Chain | 6, 3, 2, 2, 2, 2, 1, 1, 3, 3 | 25 |
| IntegratedML-IRIS-PlatformEntryPrediction | 4, 3, 3 | 10 |
Please apply for new implementations and corrections here in the comments or in Discord.

Hi @Evgeny.Shvarov!
I used Java to connect to IRIS in the application and associated an article with it, but I did not see it in the bonus points. Can they be added?

Hi Zhang! We don't have points for using Java. What bonus are you talking about? If you mean Native API, you haven't used it. You used only JDBC in your project, without the Native SDK.

Hi @Evgeny.Shvarov, thanks for publishing the bonuses. Please note that I have added FHIR SQL Builder functionality in my new release of the irisChatGPT application, so please consider it. Thanks.

Hi Muhammad! Your points were added to the table! Thank you!

Hi @Semion.Makarov,
I added a BI dashboard to do analytics on the app logs of iris-fhir-generative-ai in release 1.0.9, and a second article explaining that analytics.
So, I'd like to ask for the IRIS BI and Second Article bonuses.
PS: Sorry for publishing this so late, but I only had this idea late on Sunday. 😄
Thanks!

Hi Jose! I've applied these bonuses to your app.
Article
Alex Woodhead · Jun 15, 2023
This is a demonstration example for the current Grand Prix contest, showing the use of a more complex parameter template to test the AI.
Interview Questions
There is documentation available. A recruitment consultant wants to quickly challenge candidates with technical questions relevant to a role.
Can they automate making a list of questions and answers from the available documentation?
Interview Answers and Learning
One of the most effective ways to cement new facts into accessible long-term memory is phased recall.
In essence, you take a block of text information and reorganize it into a series of self-contained questions and facts.
Now imagine two questions:
What day of the week is the trash-bin placed outside for collection?
When is the marriage anniversary?
Quickly recalling correct answers can mean a happier life!!
Recalling the answer to each question IS the mechanism to enforce a fact into memory.
Phased recall re-asks each question with longer and longer time gaps when the correct answer is recalled. For example:
You consistently get the right answer: The question is asked again tomorrow, in 4 days, in 1 week, in 2 weeks, in 1 month.
You consistently get the answer wrong: The question will be asked every day until it starts to be recalled.
Since challenging answers are easy to spot, it is productive to re-work the difficult ones to make them more memorable.
There is a free software package called Anki that provides this full phased recall process for you.
If you can automate the creation of questions and answers into a text file, then Anki will create new flashcards for you.
Hypothesis
We can use LangChain to transform InterSystems PDF documentation into a series of Questions and answers to:
Make interview questions and answers
Make Learner Anki flash cards
Create a new virtual environment
mkdir chainpdf
cd chainpdf
python -m venv .
scripts\activate
pip install openai
pip install langchain
pip install wget
pip install lancedb
pip install tiktoken
pip install pypdf
set OPENAI_API_KEY=[ Your OpenAI Key ]
python
Prepare the docs
import glob
import wget;
url='https://docs.intersystems.com/irisforhealth20231/csp/docbook/pdfs.zip';
wget.download(url)
# extract docs
import zipfile
with zipfile.ZipFile('pdfs.zip','r') as zip_ref:
    zip_ref.extractall('.')
Extract PDF text
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts.prompt import PromptTemplate
from langchain import OpenAI
from langchain.chains import LLMChain
# To limit for the example
# From the documentation site I could see that documentation sets
# GCOS = Using ObjectScript
# RCOS = ObjectScript Reference
pdfFiles=['./pdfs/pdfs/GCOS.pdf','./pdfs/pdfs/RCOS.pdf']
# The prompt will be really big and need to leave space for the answer to be constructed
# Therefore reduce the input string
text_splitter = CharacterTextSplitter(
separator = "\n\n",
chunk_size = 200,
chunk_overlap = 50,
length_function = len,
)
# split document text into chunks
documentsAll=[]
for file_name in pdfFiles:
    loader = PyPDFLoader(file_name)
    pages = loader.load_and_split()
    # Strip unwanted padding
    for page in pages:
        del page.lc_kwargs
        page.page_content=("".join((page.page_content.split('\xa0'))))
    documents = text_splitter.split_documents(pages)
    # Ignore the cover pages
    for document in documents[2:]:
        # skip table of contents
        if document.page_content.__contains__('........'):
            continue
        documentsAll.append(document)
Prep search template
_GetDocWords_TEMPLATE = """From the following documents create a list of distinct facts.
For each fact create a concise question that is answered by the fact.
Do NOT restate the fact in the question.
Output format:
Each question and fact should be output on a separate line delimited by a comma character
Escape every double quote character in a question with two double quotes
Add a double quote to the beginning and end of each question
Escape every double quote character in a fact with two double quotes
Add a double quote to the beginning and end of each fact
Each line should end with {labels}
The documents to reference to create facts and questions are as follows:
{docs}
"""
PROMPT = PromptTemplate(
input_variables=["docs","labels"], template=_GetDocWords_TEMPLATE
)
llm = OpenAI(temperature=0, verbose=True)
chain = LLMChain(llm=llm, prompt=PROMPT)
Process each document and place output in file
# open an output file
with open('QandA.txt','w') as file:
    # iterate over each text chunk
    for document in documentsAll:
        # set the label for Anki flashcard
        source=document.metadata['source']
        if source.__contains__('GCOS.pdf'):
            label='Using ObjectScript'
        else:
            label='ObjectScript Reference'
        output=chain.run(docs=document,labels=label)
        file.write(output+'\n')
        file.flush()
There were some retry and force-close messages during the loop.
I anticipate this is the OpenAI API limiting requests to fair use.
Alternatively, a local LLM could be applied instead.
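If those rate-limit retries become a problem, one option is to wrap the chain call in a simple exponential backoff. The following is a minimal sketch of my own (not part of the original experiment); it assumes the same chain object and the document/label variables defined above:

import time

def run_with_backoff(chain, doc, label, max_retries=5):
    # call chain.run, retrying with exponential backoff on transient errors
    delay = 2  # seconds
    for attempt in range(max_retries):
        try:
            return chain.run(docs=doc, labels=label)
        except Exception as err:  # e.g. a rate-limit error from the OpenAI client
            print(f"Attempt {attempt + 1} failed: {err}; sleeping {delay}s")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Exceeded maximum retries")

The loop above could then call run_with_backoff(chain, document, label) instead of calling chain.run directly.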
Examine the output file
"What are the contexts in which ObjectScript can be used?", "You can use ObjectScript in any of the following contexts: Interactively from the command line of the Terminal, As the implementation language for methods of InterSystems IRIS object classes, To create ObjectScript routines, and As the implementation language for Stored Procedures and Triggers within InterSystems SQL.", Using ObjectScript,
"What is a global?", "A global is a sparse, multidimensional database array.", Using ObjectScript,
"What is the effect of the ##; comment on INT code line numbering?", "It does not change INT code line numbering.", Using ObjectScript,
"What characters can be used in an explicit namespace name after the first character?", "letters, numbers, hyphens, or underscores", Using ObjectScript
"Are string equality comparisons case-sensitive?", "Yes" Using ObjectScript,
"What happens when the number of references to an object reaches 0?", "The system automatically destroys the object.",Using ObjectScript
Question: "What operations can take an undefined or defined variable?", Fact: "The READ command, the $INCREMENT function, the $BIT function, and the two-argument form of the $GET function.", Using ObjectScript, a
While the model makes a good attempt at the requested formatting, there is some deviation.
By reviewing manually, I can pick some questions and answers to continue the experiment.
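Part of that review can be semi-automated by running the raw output through Python's csv module and keeping only the lines that parse into exactly three fields (question, fact, label). This is a small sketch under my own assumptions (the file names and the three-field rule are mine, not from the original workflow):

import csv

kept, rejected = [], []
with open('QandA.txt', newline='') as raw:
    for row in csv.reader(raw, skipinitialspace=True):
        # keep rows that parse cleanly into question, fact, label
        cells = [cell.strip() for cell in row if cell.strip()]
        if len(cells) == 3:
            kept.append(cells)
        elif cells:
            rejected.append(cells)

with open('QandA_reviewed.txt', 'w', newline='') as out:
    csv.writer(out).writerows(kept)

print(f"kept {len(kept)} cards, {len(rejected)} lines need manual review")

Anything that lands in rejected still needs a manual pass, but the clean lines can go straight to Anki.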
Importing FlashCards into Anki
Reviewed text file:
"What are the contexts in which ObjectScript can be used?", "You can use ObjectScript in any of the following contexts: Interactively from the command line of the Terminal, As the implementation language for methods of InterSystems IRIS object classes, To create ObjectScript routines, and As the implementation language for Stored Procedures and Triggers within InterSystems SQL.", "Using ObjectScript","What is a global?", "A global is a sparse, multidimensional database array.", "Using ObjectScript","What is the effect of the ##; comment on INT code line numbering?", "It does not change INT code line numbering.", "Using ObjectScript","What characters can be used in an explicit namespace name after the first character?", "letters, numbers, hyphens, or underscores", "Using ObjectScript""Are string equality comparisons case-sensitive?", "Yes", "Using ObjectScript","What happens when the number of references to an object reaches 0?", "The system automatically destroys the object.","Using ObjectScript""What operations can take an undefined or defined variable?", "The READ command, the $INCREMENT function, the $BIT function, and the two-argument form of the $GET function.", "Using ObjectScript"
Creating new Anki card deck
Open Anki and select File -> Import
Select the reviewed text file
Optionally create a new Card Deck for "Object Script"
A basic card type is fine for this format
There was mention of a "Field 4", so the records should be checked.
Anki import success
Let's Study
Now choose the reinforcement schedule
Happy Learning !!
References
Anki software is available from https://apps.ankiweb.net/
Article
Claudio Devecchi · Jun 20, 2023
In this article, I will share the theme that @Rochael.Ribeiro and I presented at the Global Summit 2023, in the Tech Exchange room.
In this opportunity, we talk about the following topics:
Open Exchange Tools for Fast APIs
Open API Specification
Traditional versus Fast Api development
Composite API (Interoperability)
Spec-First or Api-First Approach
Api Governance & Monitoring
Demo (video)
Open Exchange Tools for Fast APIs
As we are talking about fast, modern API development (REST/JSON), we will use two InterSystems Open Exchange tools:
The first is a framework for rapid development of APIs which we will detail in this article.
https://openexchange.intersystems.com/package/IRIS-apiPub
The second is to use Swagger as a user interface for the specification and documentation of the Rest APIs developed on IRIS platform, as well as their use/execution. The basis for its operation is the Open Api specification (OAS) standard, described below:
https://openexchange.intersystems.com/package/iris-web-swagger-ui
What is the Open API Specification (OAS)?
It is a standard used worldwide to define, document, and consume APIs. In most cases, APIs are designed even before implementation. I'll talk more about this in the next topics.
It is important because it defines and documents REST APIs for their use, both on the provider and the consumer side. But this standard also serves to speed up tests and API calls in market tools (REST API clients) such as Swagger, Postman, Insomnia, etc.
The traditional way to publish an API using IRIS
Imagine we have to build and publish a Rest API from an existing IRIS method (picture below).
In the traditional way:
1: We have to think about how consumers will call it. For example: which path and verb will be used, and what the response will look like, whether a JSON object or plain text.
2: Build a new method in a %CSP.REST class that will handle the http request to call it.
3: Handle the method's response to produce the intended HTTP response for the end user.
4: Think about how we're going to provide the success code and how we're going to handle exceptions.
5: Map the route for our new method.
6: Provide the API documentation to the end user. We will probably build the OAS content manually.
7: And if, for example, we have a request or response payload (object), the implementation time will increase, because it must also be documented in OAS.
How can we be faster?
By simply tagging the IRIS method with the [WebMethod] attribute. Whatever the method is, the framework will take care of publishing it, using the OAS 3.x standard.
Why is the OAS 3.x standard so important?
Because it also documents in detail all the properties of the input and output payloads.
In this way, any REST client tool on the market, like Insomnia, Postman, Swagger, etc., can instantly couple to the APIs and provide sample content to call them easily.
Using Swagger, we can already visualize our API (image above) and call it. This is also very useful for testing.
API customization
But what if I need to customize my API?
For example: instead of the name of the method, I want the path to be something else. And I want the input parameters to be in the path, not as a query param.
We define a specific notation on top of the method, where we can complement the meta-information that the method itself does not provide.
In this example we are defining another path for our API and complementing the information for the end user to have a more friendly experience.
Projection Map for Rest API
This framework supports numerous types of parameters.
In this map we can highlight the complex types (the objects). They will be automatically exposed as a JSON payload, and each property will be properly documented (OAS) for the end user.
Interoperability (Composite APIs)
By supporting complex types, you can also expose Interoperability Services.
It is a favorable scenario for building composite APIs, which use the orchestration of multiple external components (outbounds).
This means that objects or messages used as request or response will be automatically published and read by tools like swagger.
And it's an excellent way to test interoperability components, because usually a payload template is already loaded so that the user knows which properties the API uses.
First, the developer can focus on testing, and then shape the API through customization.
Spec-first or Api-first Approach
Another concept widely used today is defining the API even before its implementation.
With this framework it is possible to import an Open API spec. It creates the method structure (spec) automatically, leaving only the implementation to be written.
API Governance and Monitoring
For API governance, it is also recommended to use IAM (InterSystems API Manager) together with it.
In addition to having multiple plugins, IAM can quickly couple to APIs through the OAS standard.
apiPub offers additional tracing for APIs (see the demo video).
Demo
Download & Documentation
InterSystems Open Exchange: https://openexchange.intersystems.com/?search=apiPub
Complete documentation: https://github.com/devecchijr/apiPub
Very rich material on a new development method, bringing convenience and innovation. This will help a lot with the agility of API development.
Congratulations, Claudio, for sharing the knowledge.

Thank you, @Thiago.Simoes.
Announcement
Anastasia Dyubaylo · Jul 11, 2023
Hi Community,
Let's meet together at the online meetup with the winners of the InterSystems Grand Prix Contest 2023 – a great opportunity to have a discussion with the InterSystems Experts team as well as our contestants.
Winners' demo included!
Date & Time: Thursday, July 13, 11 am EDT | 5 pm CEST
Join us to learn more about winners' applications and to have a talk with our experts.
➡️ REGISTER TODAY
See you all at our virtual meetup! 👉 LIVE NOW
Article
Niyaz Khafizov · Jul 6, 2018
Hi all. Yesterday I tried to connect Apache Spark, Apache Zeppelin, and InterSystems IRIS. During the process I ran into trouble connecting it all together, and I did not find a useful guide. So, I decided to write my own.
Introduction
Let's look at what Apache Spark and Apache Zeppelin are, and find out how they work together. Apache Spark is an open-source cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance, so it is very useful when you need to work with Big Data. Apache Zeppelin is a notebook that provides a nice UI for working with analytics and machine learning. Together, it works like this: IRIS provides data, Spark reads the provided data, and in a notebook we work with the data.
Note: I have done the following on Windows 10.
Apache Zeppelin
Now we will install all the necessary programs. First of all, download Apache Zeppelin from the official site. I used zeppelin-0.8.0-bin-all.tgz. It includes Apache Spark, Scala, and Python. Unzip it to any folder. After that you can launch Zeppelin by calling \bin\zeppelin.cmd from the root of your Zeppelin folder. Wait until the "Done, zeppelin server started" string appears and open http://localhost:8080 in your browser. If everything is okay, you will see the "Welcome to Zeppelin!" message.
Note: I assume that InterSystems IRIS is already installed. If not, download and install it before the next step.
Apache Spark
So, we have the Zeppelin notebook open in the browser. In the upper-right corner click on anonymous, and then click on Interpreter. Scroll down and find spark.
Next to spark, find the edit button and click on it. Scroll down and add dependencies on intersystems-spark-1.0.0.jar and intersystems-jdbc-3.0.0.jar. I installed InterSystems IRIS in the C:\InterSystems\IRIS\ directory, so the artifacts I need to add are at:
My files are here:
And save it.
Check that it works
Let us check it. Create a new note, and in a paragraph paste the following code:
var dataFrame=spark.read.format("com.intersystems.spark").option("url", "IRIS://localhost:51773/NAMESPACE").option("user", "UserLogin").option("password", "UserPassword").option("dbtable", "Sample.Person").load()
// dbtable - name of your table
URL is the IRIS address. It is formed as follows: IRIS://ipAddress:superserverPort/namespace, where:
protocol IRIS is a JDBC connection over TCP/IP that offers Java shared memory connection;
ipAddress — The IP address of the InterSystems IRIS instance. If you are connecting locally, use 127.0.0.1 instead of localhost;
superserverPort — The superserver port number for the IRIS instance, which is not the same as the webserver port number. To find the superserver port number, in the Management Portal, go to System Administration > Configuration > System Configuration > Memory and Startup; namespace — An existing namespace in the InterSystems IRIS instance. In this demo, we connect to the USER namespace.
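If you prefer Python in Zeppelin, the same read can be done from a %pyspark paragraph. This is a minimal sketch under the assumption that the InterSystems Spark connector dependencies configured above are also available to the PySpark interpreter (this was not covered in the original setup):

%pyspark
df = (spark.read.format("com.intersystems.spark")
      .option("url", "IRIS://localhost:51773/NAMESPACE")
      .option("user", "UserLogin")
      .option("password", "UserPassword")
      .option("dbtable", "Sample.Person")
      .load())
df.show(5)  # print the first rows to confirm the connection works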
Run the paragraph. If everything is okay, you will see FINISHED.
My notebook:
Conclusion
In conclusion, we found out how Apache Spark, Apache Zeppelin, and InterSystems IRIS can work together. In my next articles, I will write about data analysis.
Links
The official site of Apache Spark
Apache Spark documentation
IRIS Protocol
Using the InterSystems Spark Connector
💡 This article is considered as InterSystems Data Platform Best Practice.
Article
sween · Jul 29, 2021
We are ridiculously good at mastering data. The data is clean, multi-sourced, related and we only publish it with resulting levels of decay that guarantee the data is current. We chose the HL7 Reference Information Model (RIM) to land the data, and enable exchange of the data through Fast Healthcare Interoperability Resources (FHIR®).
We are also a high performing, full stack team, and like to keep our operational resources on task, so managing the underlying infrastructure to host the FHIR® data repository for purposes of ingestion and consumption is not in the cards for us. For this, we chose the [FHIR® Accelerator Service](https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=FAS) to handle storage, credentials, back up, development, and FHIR® interoperability.
Our data is marketable, and well served as an API, so we will **monetize** it. This means we need to package our data/api up for appropriate sale — which includes: a developer portal, documentation, sample code, testing tools, and other resources to get developers up and running quickly against our data. We need to focus on making our API as user-friendly as possible, and give us some tooling to ward off abuse and protect our business against denial service attacks. For the customers using our data, we chose to use [Google Cloud's Apigee Edge](https://apigee.google.com/edge).

> ### With our team focused and our back office entirely powered as services, we are set to make **B I L L I O N S**, and this is an account as to how.
# Provisioning
High level tasks for provisioning in the [FHIR® Accelerator Service](https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=FAS) and [Google Cloud's Apigee Edge](https://apigee.google.com/edge).
## FHIR® Accelerator Service
Head over to the AWS Marketplace and subscribe to the InterSystems FHIR® Accelerator Service, or sign up for a trial account directly [here](https://portal.trial.isccloud.io/account/signup).
After your account has been created, create a FHIR® Accelerator deployment for use to store and sell your FHIR® data.

After a few minutes, the deployment will be ready for use and available to complete the following tasks:
1. Create an API Key in the Credentials section and record it.
2. Record the newly created FHIR® endpoint from the Overview section.


## Google Cloud Apigee Edge
Within your Google Cloud account, create a project and enable it for use with Apigee Edge. To understand a little bit of the magic that is going on with the following setup, we are enabling a Virtual Network to be created, a Load Balancer, SSL/DNS for our endpoint, and making some choices on whether or not it's going to be publicly accessible.
> Fair warning here, if you create this as an evaluation and start making M I L L I O N S, it cannot be converted to a paid plan later on to continue on to making B I L L I O N S.



## Build the Product
Now, let's get on to building the product for the two initial customers of our data, Axe Capital and Taylor Mason Capital.

### Implement Proxy
Our first piece of the puzzle here is the mechanics of our proxy from Apigee to the FHIR® Accelerator Service. At its core, we are implementing a basic reverse proxy that backs the Apigee Load Balancer with our FHIR® API. Remember that we created all of the Apigee infrastructure during the setup process when we enabled the GCP project for Apigee.

### Configure the Proxy
Configuring the proxy basically means defining a number of policies applied to the traffic/payload as it flows through (PreFlow/PostFlow), to shape the interaction and the safety of how the customers/applications behave against the API.
Below, we configure a series of policies that:
1. Add CORS Headers.
2. Remove the API Key from the query string.
3. Add the FHIR® Accelerator API key to the headers.
4. Impose a Quota/Limit.

A mix of XML directives and a user interface to configure the policy is available as below.

### Add a Couple of Developers, Axe and Taylor
We need to add some developers next, which is as simple as adding the users to any directory. This is required to enable the Applications that are created in the next step and supplied to our customers.

### Configure the Apps, one per customer
Applications is where we break apart our *product* and logically divide it up among our customers; here we will create one app per customer. An important note: in our case for this demonstration, this is where the apikey for that particular customer is assigned, after we assign the developer to the app.

### Create the Developer Portal
The Developer Portal is the "**clown suit**" and front door for our customers, where they can interact with what they are paying for. It comes packed with some powerful customization, a specific URL for the product it is attached to, and allows the import of a swagger/openapi spec so developers can interact with the API through a Swagger-based UI. Lucky for us, the Accelerator Service comes with a swagger definition, so we just have to know where to look for it and make some modifications so that the defs work against our authentication scheme and URL. We don't spend a lot of time here in the demonstration, but you should if you plan on setting yourself apart for the paying customers.

### Have Bobby Send a Request
Let's let Bobby Axelrod run up a tab by sending his first requests to our super awesome data wrapped up in FHIR®. For this, keep in mind that the key and the endpoint being used are assigned by Apigee Edge, but access to the FHIR® Accelerator Service is done through the single key we supplied in the API proxy.
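As a rough sketch of what such a request could look like from the customer's side (the host, base path, and the apikey query parameter name here are my assumptions; the real values are assigned by Apigee Edge as described above):

import requests

BASE = "https://<your-apigee-host>/<proxy-basepath>"  # assigned by Apigee Edge
APIKEY = "<key from Bobby's Apigee app>"

# ask the proxied FHIR® repository for Patient resources
resp = requests.get(f"{BASE}/Patient", params={"apikey": APIKEY})
print(resp.status_code)
print(resp.json())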


### Rate Limit Bobby with a Quota
Let's just say one of our customers has a credit problem, so we want to limit the use of our data on a rate basis. If you recall, we did specify a rate of 30 requests a minute when we set up the proxy, so let's test that below.

### Bill Axe Capital
I will get ahead of your expectations here so you won't be too disappointed by how rustic the billing demonstration is, but it does employ a technique to generate a report for invoicing purposes that removes things that may or may not be the customer's fault in the proxy integration. For instance, if you recall from the rate-limit demo above, we sent in 35 requests but limited things to 30, so a quick filter in the billing report will remove those and show we are competent enough to bill only for our customer's actual utilization.

To recap, monetizing our data included:
* Safety against abuse and DDOS protection.
* Developer Portal and customization for the customer.
* Documentation through Swagger UI.
* Control over the requests Pre/Post our API
... and a way to invoice for **B I L L I O N S**.
This is very cool. Well done.

💡 This article is considered as InterSystems Data Platform Best Practice.
Announcement
Thomas Dyar · Dec 14, 2021
Preview releases are now available for the 2021.2 version of InterSystems IRIS, IRIS for Health, and HealthShare Health Connect.
As this is a preview release, we are eager to learn from your experiences with this new release ahead of its General Availability release next month. Please share your feedback through the Developer Community so we can build a better product together.
InterSystems IRIS Data Platform 2021.2 makes it even easier to develop, deploy and manage augmented applications and business processes that bridge data and application silos. It has many new capabilities including:
Enhancements for application and interface developers, including:
Embedded Python
Interoperability Productions in Python
Updates to Visual Studio Code ObjectScript Extension Pack
New Business Services and Operations allow users to set and run SQL queries with minimal custom coding
Enhancements for Analytics and AI, including:
New SQL LOAD command efficiently loads CSV and JDBC source data into tables
Enhancements to Adaptive Analytics
Enhancements for Cloud and Operations tasks, including:
New Cloud Connectors make it simple to access and use cloud services within InterSystems IRIS applications
IKO enhancements improve manageability of Kubernetes resources
Enhancements for database and system administrators, including:
Online Shard Rebalancing automates distribution of data across nodes without interrupting operations
Adaptive SQL engine uses fast block sampling and automation to collect advanced table statistics and leverages runtime information for improved query planning
Storage needs for InterSystems IRIS are reduced with new stream and journal file compression settings
Support for TLS 1.3 and OpenSSL 1.1.1, using system-provided libraries
New ^TRACE utility reports detailed process statistics such as cache hits and reads
More details on all of these features can be found in the product documentation:
InterSystems IRIS 2021.2 documentation and release notes
InterSystems IRIS for Health 2021.2 documentation and release notes
HealthShare Health Connect 2021.2 documentation and release notes
InterSystems IRIS 2021.2 is a Continuous Delivery (CD) release, which now comes with classic installation packages for all supported platforms, as well as container images in OCI (Open Container Initiative) a.k.a. Docker container format. Container images are available for OCI compliant run-time engines for Linux x86-64 and Linux ARM64, as detailed in the Supported Platforms document.
Full installation packages for each product are available from the WRC's product download site. Using the "Custom" installation option enables users to pick the options they need, such as InterSystems Studio and IntegratedML, to right-size their installation footprint.
Installation packages and preview keys are available from the WRC's preview download site.
Container images for the Enterprise Edition, Community Edition and all corresponding components are available from the InterSystems Container Registry using the following commands:
docker pull containers.intersystems.com/intersystems/iris:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/iris-ml:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/irishealth:2021.2.0.617.0
docker pull containers.intersystems.com/intersystems/irishealth-ml:2021.2.0.617.0
For a full list of the available images, please refer to the ICR documentation.
Alternatively, tarball versions of all container images are available via the WRC's preview download site.
The build number for this preview release is 2021.2.0.617.0.

Interoperability productions with Python and Cloud connectors? YEEEESSSSSSS.
However, containers.intersystems.com is returning bad-credentials errors... or am I the sole brunt of cruelty here?
```
(base) sween @ dimsecloud-pop-os ~
└─ $ ▶ docker login -u="ron.sweeney@integrationrequired.com" containers.intersystems.com
Password:
Error response from daemon: Get https://containers.intersystems.com/v2/: unauthorized: BAD_CREDENTIAL
```
I was able to get in, for example:
$ docker-ls tags --registry https://containers.intersystems.com intersystems/irishealth ...
requesting list . done
repository: intersystems/irishealth
tags:
- 2019.1.1.615.1
- 2020.1.0.217.1
- 2020.1.1.408.0
- 2020.2.0.211.0
- 2020.3.0.221.0
- 2020.4.0.547.0
- 2021.1.0.215.0
- 2021.2.0.617.0

No problem:
download from wrc>previews some .tar.gz
docker load -i <downloaded> ...
off it goes with Docker run or Dockerfile + docker-compose
And here it is, containers.intersystems.com is gone:
$ docker pull containers.intersystems.com/intersystems/irishealth-community:2021.2.0.617.0
Error response from daemon: Get "https://containers.intersystems.com/v2/": Service Unavailable
Could you push those images to Docker Hub, as usual? It's more stable.

Hi Dmitry,
Thanks for the heads-up, we are working to bring containers.intersystems.com back online. The docker hub listings will be updated with the 2021.2 preview images in the next day or so, and we will update this announcement when they're available!
Kind regards,
Thomas Dyar

Good news -- containers.intersystems.com is back online. Please let us know if you encounter any issues!

Regards,
Thomas Dyar

And images with ZPM package manager 0.3.2 are available accordingly:
intersystemsdc/iris-community:2021.2.0.617.0-zpm
intersystemsdc/iris-ml-community:2021.2.0.617.0-zpm
intersystemsdc/iris-community:2021.1.0.215.3-zpm
intersystemsdc/irishealth-community:2021.1.0.215.3-zpm
intersystemsdc/irishealth-ml-community:2021.1.0.215.3-zpm
intersystemsdc/irishealth-community:2021.1.0.215.3-zpm
And to launch IRIS do:
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-ml-community:2021.2.0.617.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2021.2.0.617.0-zpm
And for terminal do:
docker exec -it my-iris iris session IRIS
and to start the control panel:
http://localhost:9092/csp/sys/UtilHome.csp
To stop and destroy container do:
docker stop my-iris
And the FROM clause in dockerfile can look like:
FROM intersystemsdc/iris-community:2021.2.0.617.0-zpm
Or to take the latest image:
FROM intersystemsdc/iris-community

Excellent! That's a comfort.

Available on Docker Hub too.

Will we have an arm64 version?

I was trying to install the preview version on Ubuntu 20.04.3 LTS ARM64 (in a VM on a Mac M1), but irisinstall gave the following error. Installing zlib1g-dev did not solve the problem. Could anyone suggest what I am missing?
-----
Your system type is 'Ubuntu LTS (ARM64)'.
zlib1g version 1 is required.
** Installation aborted **

Based on the message alone, you would need:
sudo apt install zlib1g
Instead of:
sudo apt install zlib1g-dev

Thanks. But it looks like zlib1g is already installed:
$ sudo apt install zlib1g
Reading package lists... Done
Building dependency tree
Reading state information... Done
zlib1g is already the newest version (1:1.2.11.dfsg-2ubuntu1.2).
0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

I know this is not the proper way to do it, but I worked around this issue by deleting the line for zlib1g in the file package/requirements_check/requirements.lnxubuntu2004arm64.isc.
Looks like the instance is working fine, so I suspect there is something wrong with requirement checking in installation.
Announcement
Jeff Fried · Jan 27, 2020
Preview releases are now available for the 2020.1 version of InterSystems IRIS and IRIS for Health!
Kits and Container images are available via the WRC's preview download site.
The build number for these releases is 2020.1.0.199.0. (Note: first release was build 197, updated to 199 on 2/12/20)
InterSystems IRIS Data Platform 2020.1 has many new capabilities including:
Kernel Performance enhancements, including reduced contention for blocks and cache lines
Universal Query Cache - every query (including embedded & class ones) now gets saved as a cached query
Universal Shard Queue Manager - for scale-out of query load in sharded configurations
Selective Cube Build - to quickly incorporate new dimensions or measures
Security improvements, including hashed password configuration
Improved TSQL support, including JDBC support
Dynamic Gateway performance enhancements
Spark connector update
MQTT support in ObjectScript
(NOTE: this preview build does not include TLS 1.3 and OpenLDAP updates, which are planned for General Availability)
InterSystems IRIS for Health 2020.1 includes all of the enhancements of InterSystems IRIS. In addition, this release includes:
In-place conversion to IRIS for Health
HL7 Productivity Toolkit including Migration Tooling and Cloverleaf conversion
X12 enhancements
FHIR R4 base standard support
As this is an EM (Extended Maintenance) release, customers may want to know the differences between 2020.1 and 2019.1. These are listed in the release notes:
InterSystems IRIS 2020.1 release notes
IRIS for Health 2020.1 release notes
Draft documentation can be found here:
InterSystems IRIS 2020.1 documentation
IRIS for Health 2020.1 documentation
The platforms on which InterSystems IRIS and IRIS for Health 2020.1 are supported for development and production are detailed in the Supported Platforms document.

Jeffrey, thank you for the info. Do you already know that the Supported Platforms document link is broken? (404)

Hi Jeff! What are the Docker image tags for Community Editions?

I've just uploaded the Community Editions to the Docker Store (2/13 - updated with new preview build):
docker pull store/intersystems/iris-community:2020.1.0.199.0
docker pull store/intersystems/irishealth-community:2020.1.0.199.0

Thanks, Steve! Will native install kits be available for the Community Editions as well?

Yes, full kit versions of the 2020.1 Community Edition Preview are available through the WRC download site as well.

I'm getting this error when I attempt to access the link ...
Jeffery,
If you don't use that link and first log into the WRC application: https://wrc.intersystems.com/wrc/enduserhome.csp
Can you then go to: https://wrc.intersystems.com/wrc/coDistribution2.csp
Then select Preview? Some customers have had problems with the distribution pages because their site restricts access to some JS code we get from a third party.

I get the same result using your suggested method, Brendan. I'm not technically a customer; I work for a Services Partner of ISC. I am a DC Moderator though (if that carries any weight), so it would be nice to keep abreast of the new stuff.

OK, I needed to do one more click: your org does not have a support contract, so you can't have access to these pages, sorry. Maybe Learning Services could help you out, but I can't grant you access to the kits on the WRC.

Hello,
I took this for a spin and noticed that the new Prometheus metrics are not available on it like they were in 2019.4 ? ( ie: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api ).
Am I missing something or is the metrics api still under consideration to make it into this build ?
The correct link is https://docs.intersystems.com/iris20201/csp/docbook/platforms/index.html. I fixed the typo in the post. Thanks for pointing that out!

Seems to be there for me...
Hello Jeffrey,
We're currently working with IRIS for Health 2020.1 build 197, and we were wondering what fixes or additions went into the latest build 199. InterSystems used to publish all fixes with each FT build version; is there such a list?
Thank you
Yuriy
The Preview has been updated with build 2020.1.0.199.0. This includes a variety of changes, primarily corrections for issues found under rare conditions in install, upgrade, and certain distributed configurations. None of these changes impacts any published API.
Thank you for working with the preview and for your feedback! Hi Yuriy -
Thanks for pointing this out. We did not prepare a list for this, but I did make a comment on this thread, including verifying that none of these changes impacts any published API. If there is a change resolving an issue you reported through the WRC, you'll see that this is resolved via the normal process. We will be publishing detailed changenotes with the GA release.
-Jeff
Article
sween · Mar 4, 2024
If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want access to the metrics of your deployments and send them to your own Observability platform, here is a quick and dirty way to get it done by sending the metrics to Google Cloud Platform Monitoring (formerly StackDriver).
The Cloud portal does contain a representation of some top-level metrics for at-a-glance monitoring, which is powered by a metrics endpoint that is exposed to you, but without some inspection you would not know it was there.
🚩 This approach is most likely taking advantage of a "to be named feature", so with that being said, it is not future-proof and definitely not supported by InterSystems.
So what if you wanted a more comprehensive set exported? This technical article/example shows a technique for scraping metrics and forwarding them to observability; it can be modified to suit your needs to scrape ANY metrics target and send it to ANY observability platform using the OpenTelemetry Collector.
The mechanics leading up to the above result can be accomplished in many ways, but here we are standing up a Kubernetes pod that runs a Python script in one container and the OTel Collector in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article, k8s is the actor pulling this off with Python.
Steps:
Prereqs
Python
Container
Kubernetes
Google Cloud Monitoring
Prerequisites:
An active subscription to IRIS® Cloud SQL
One Deployment, running, optionally with Integrated ML
Secrets to supply to your environment
Environment Variables
Obtain Secrets
I dropped this in a teaser as it is a bit involved and somewhat off target of the point, but these are the values you will need to generate the secrets.
ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
☝ These are your credentials for https://portal.live.isccloud.io
ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_API 'api'
☝ These you have to dig out of development tools for your browser.
`aud` = clientid
`userpoolid`= iss
`api` = request url
ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'
☝ This can be derived from the Cloud Service Portal
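If digging these values out of the browser's development tools gets tedious, one alternative (my own sketch, not part of the original walkthrough) is to copy the Cognito ID token from the portal session and decode its payload locally; the iss claim points at the user pool and aud carries the client id:

import base64
import json

def decode_jwt_payload(token: str) -> dict:
    # decode the (unverified) payload of a JWT to inspect its claims
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = decode_jwt_payload("<id token copied from the browser session>")
print(claims.get('iss'))  # ends with the Cognito user pool id
print(claims.get('aud'))  # the client id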
Python:
Here is the python hackery to pull the metrics from the Cloud Portal and export them locally as metrics for the otel collector to scrape:
iris_cloudsql_exporter.py
import time
import os
import requests
import json
from warrant import Cognito
from prometheus_client.core import GaugeMetricFamily, REGISTRY, CounterMetricFamily
from prometheus_client import start_http_server
from prometheus_client.parser import text_string_to_metric_families

class IRISCloudSQLExporter(object):
    def __init__(self):
        self.access_token = self.get_access_token()
        self.portal_api = os.environ['IRIS_CLOUDSQL_API']
        self.portal_deploymentid = os.environ['IRIS_CLOUDSQL_DEPLOYMENTID']

    def collect(self):
        # Requests fodder
        url = self.portal_api
        deploymentid = self.portal_deploymentid
        print(url)
        print(deploymentid)
        headers = {
            'Authorization': self.access_token,  # needs to be refresh_token, eventually
            'Content-Type': 'application/json'
        }
        metrics_response = requests.request("GET", url + '/metrics/' + deploymentid, headers=headers)
        metrics = metrics_response.content.decode("utf-8")
        for iris_metrics in text_string_to_metric_families(metrics):
            for sample in iris_metrics.samples:
                labels_string = "{1}".format(*sample).replace('\'', "\"")
                labels_dict = json.loads(labels_string)
                labels = list(labels_dict.keys())  # distinct label names for this sample
                if len(labels) > 0:
                    g = GaugeMetricFamily("{0}".format(*sample), 'Help text', labels=labels)
                    g.add_metric(list(labels_dict.values()), "{2}".format(*sample))
                else:
                    g = GaugeMetricFamily("{0}".format(*sample), 'Help text', labels=labels)
                    g.add_metric([""], "{2}".format(*sample))
                yield g

    def get_access_token(self):
        try:
            user_pool_id = os.environ['IRIS_CLOUDSQL_USERPOOLID']  # isc iss
            username = os.environ['IRIS_CLOUDSQL_USER']
            password = os.environ['IRIS_CLOUDSQL_PASS']
            clientid = os.environ['IRIS_CLOUDSQL_CLIENTID']  # isc aud
            print(user_pool_id)
            print(username)
            print(password)
            print(clientid)
            try:
                u = Cognito(
                    user_pool_id=user_pool_id,
                    client_id=clientid,
                    user_pool_region="us-east-2",  # needed by warrant, should be derived from poolid doh
                    username=username
                )
                u.authenticate(password=password)
            except Exception as p:
                print(p)
        except Exception as e:
            print(e)
        return u.id_token

if __name__ == '__main__':
    start_http_server(8000)
    REGISTRY.register(IRISCloudSQLExporter())
    while True:
        REGISTRY.collect()
        print("Polling IRIS CloudSQL API for metrics data....")
        # looped e loop
        time.sleep(120)
Docker:
Dockerfile
FROM python:3.8
ADD src /src
RUN pip install prometheus_client
RUN pip install requests
WORKDIR /src
ENV PYTHONPATH '/src/'
ENV PYTHONUNBUFFERED=1
ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
ENV IRIS_CLOUDSQL_API 'api'
ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'
RUN pip install -r requirements.txt
CMD ["python" , "/src/iris_cloudsql_exporter.py"]
docker build -t iris-cloudsql-exporter .
docker image tag iris-cloudsql-exporter sween/iris-cloudsql-exporter:latest
docker push sween/iris-cloudsql-exporter:latest
Deployment:
k8s; Create a namespace:
kubectl create ns iris
k8s; Add the secret:
kubectl create secret generic iris-cloudsql -n iris \
--from-literal=user=$IRIS_CLOUDSQL_USER \
--from-literal=pass=$IRIS_CLOUDSQL_PASS \
--from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
--from-literal=api=$IRIS_CLOUDSQL_API \
--from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
--from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID
otel, Create Config:
apiVersion: v1
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'IRIS CloudSQL'
              # Override the global default and scrape targets from this job every 5 seconds.
              scrape_interval: 30s
              scrape_timeout: 30s
              static_configs:
                - targets: ['192.168.1.96:5000']
              metrics_path: /
    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris
k8s; Load the otel config as a configmap:
kubectl -n iris create configmap otel-config --from-file config.yaml
k8s; deploy load balancer (definitely optional), MetalLB. I do this to scrape and inspect from outside of the cluster.
cat <<EOF | kubectl apply -n iris -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 8000
EOF
gcp; we need the keys to Google Cloud, and the service account needs to be scoped with:
roles/monitoring.metricWriter
kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json
k8s; the deployment/pod itself, two containers:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-cloudsql-exporter
  labels:
    app: iris-cloudsql-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris-cloudsql-exporter
  template:
    metadata:
      labels:
        app: iris-cloudsql-exporter
    spec:
      containers:
        - name: iris-cloudsql-exporter
          image: sween/iris-cloudsql-exporter:latest
          ports:
            - containerPort: 5000
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
            - name: IRIS_CLOUDSQL_USERPOOLID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: userpoolid
            - name: IRIS_CLOUDSQL_CLIENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: clientid
            - name: IRIS_CLOUDSQL_USER
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: user
            - name: IRIS_CLOUDSQL_PASS
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: pass
            - name: IRIS_CLOUDSQL_API
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: api
            - name: IRIS_CLOUDSQL_DEPLOYMENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: deploymentid
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.92.0
          args:
            - --config
            - /etc/otel/config.yaml
          volumeMounts:
            - mountPath: /etc/otel/
              name: otel-config
            - name: gmp-sa
              mountPath: /gmp
              readOnly: true
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
      volumes:
        - name: gmp-sa
          secret:
            secretName: gmp-test-sa
        - name: otel-config
          configMap:
            name: otel-config
kubectl -n iris apply -f deployment.yaml
Running
Assuming nothing is amiss, let's peruse the namespace and see how we are doing.
✔ 2 config maps, one for GCP, one for otel
✔ 1 load balancer
✔ 1 pod, 2 containers successful scrapes
Google Cloud Monitoring
Inspect observability to see if the metrics are arriving ok and be awesome in observability!
Announcement
AYUSH Shetty · May 18
I am writing to express my interest in IRIS Ensemble integration. I have 2 years of experience as an Ensemble/IRIS developer, working with Ensemble and IRIS for integration, server management, and application development. I am looking for more opportunities to work with IRIS, Caché, and ObjectScript.
Announcement
Celeste Canzano · May 12
Hello IRIS community,
InterSystems Certification is currently developing a certification exam for InterSystems IRIS SQL professionals, and if you match the exam candidate description given below, we would like you to beta test the exam! The exam will be available for beta testing starting May 19, 2025.
Please note: Only candidates with the pre-existing InterSystems IRIS SQL Specialist certification are eligible to take the beta. Interested in the beta but don’t have the SQL Specialist certification? Take the SQL Specialist exam!
Eligible candidates will receive an email from the certification team on May 19, 2025 with instructions on scheduling the exam.
Beta testing will be completed June 30, 2025.
What are my responsibilities as a beta tester?
You will schedule and take the exam by June 30th. The exam will be administered in an online proctored environment free of charge (the standard fee of $150 per exam is waived for all beta testers). The InterSystems Certification team will then perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and once the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification!
Note: Beta test scores are completely confidential.
How is this exam different from the InterSystems IRIS SQL Specialist exam?
This new exam - InterSystems IRIS SQL Professional - covers higher-level SQL topics and is recommended for candidates with 4 to 6 years of relevant experience, compared to the 1 to 2 years recommended for the SQL Specialist exam.
Interested in participating? Read the Exam Details below.
Exam Details
Exam title: InterSystems IRIS SQL Professional
Candidate description: A developer or solutions architect who
Designs IRIS SQL applications
Manages IRIS SQL operations
Uses IRIS SQL
Loads and efficiently queries datasets stored in IRIS SQL
Number of questions: 38
Time allotted to take exam: 2 hours
Recommended preparation: Review the content below before taking the exam.
Online Learning:
Using SQL in InterSystems IRIS (learning path, 3h 45m)
Recommended practical experience:
4 to 6 years of experience developing and managing IRIS SQL applications is recommended.
At least 2 years of experience working with ObjectScript and globals in InterSystems IRIS is recommended.
Exam practice questions
A set of practice questions to familiarize candidates with question formats and approaches will be provided here on May 14, 2025.
Exam format
The questions are presented in two formats: multiple choice and multiple response. Access to InterSystems IRIS Documentation will be available during the exam.
DISCLAIMER: Please note this exam has a 2-hour time limit. While InterSystems documentation will be available during the exam, candidates will not have time to search the documentation for every question. Thus, completing the recommended preparation before taking the exam, and searching the documentation only when absolutely necessary during the exam, are both strongly encouraged!
System requirements for beta testing
Working camera & microphone
Dual-core CPU
At least 2 GB available of RAM memory
At least 500 MB of available disk space
Minimum internet speed:
Download - 500kb/s
Upload - 500kb/s
Exam topics and content
The exam contains questions that cover the areas for the stated role as shown in the exam topics chart immediately below.
Topic
Subtopic
Knowledge, skills, and abilities
1. Designs IRIS SQL applications
1.1 Designs a SQL schema
Distinguishes use cases for row vs columnar table layout
Distinguishes use cases for different index types
1.2 Designs advanced schemas
Recalls anatomy of Globals (subscript and value)
Interprets relationship between table structure and Globals
Distinguishes the (Globals) level at which mirroring/journaling operates from the SQL layer
Distinguishes the differences between date/time data types
Interprets the overhead associated with stream data
Identifies use cases for text search
1.3 Writes business logic
Identifies use cases for UDFs, UDAFs, and SPs
1.4 Develops Object/Relational applications
Recalls SQL best practices when defining classes
Uses Object access to interact with individual rows
Identifies SQL limitations with class inheritance
Uses serial and object properties
Identifies use cases for collection properties
Distinguishes class relationships from Foreign Keys
1.5 Deploys SQL applications
Determines what needs to be part of a deployment
2. Uses IRIS SQL
2.1 Manages IRIS query processing
Identify benefits of the universal query cache
List considerations made by the optimizer
Differentiates client and server-side problems
Uses Statement Index to find statement metadata
Distinguishes between the use of parameters and constants in a query
Distinguishes between transaction and isolation levels
2.2 Interprets query plans
Identifies the use of indices in a query plan
Identifies vectorized (columnar) query plans
Uses hints to troubleshoot query planning
Identifies opportunities for indices, based on a query plan
2.3 Uses IRIS SQL in applications
Distinguishes use cases for Dynamic SQL and Embedded SQL
2.4 Uses IRIS-specific SQL capabilities
Uses arrow syntax for implicit joining
Determines use cases for explicit use of collation functions
3. Manages IRIS SQL operations
3.1 Manages SQL operations
Identifies use cases for purging queries and rebuilding indices
Recalls impact of purging queries and rebuilding indices
Identifies use cases for un/freezing query plans, including automation
Identifies use cases for (bitmap) index compaction
Uses the runtime stats in the Statement Index to find statements with optimization opportunities
3.2 Configures InterSystems SQL options
Recalls relevant system configuration options (e.g. lock threshold)
Differentiates scale-out options, ECP, and sharding
3.3 Manages SQL security
Recalls to apply SQL privilege checking when using Embedded SQL
3.4 Uses PTools for advanced performance analysis
Identifies use cases for using PTools
Interested in participating? Eligible candidates will receive an email from the certification team on May 19th with instructions on how to schedule and take the exam.

Hello Celeste! This is really interesting. How are the eligible candidates chosen? Is there a way to apply? Thank you.

Hi Pietro! Unlike prior certification exam betas, only folks who hold the InterSystems IRIS SQL Specialist certification are eligible. There is no application process; rather, the certification team will be reaching out directly to eligible individuals on May 19th. Anyone who holds an active SQL Specialist certification will receive an email next Monday with instructions on how to access and take the beta exam. The email will be sent to the address associated with your account on Credly, our digital badging platform.

If you do not yet have the SQL Specialist certification, I encourage you to consider taking the InterSystems IRIS SQL Specialist certification exam. Once you pass this exam and obtain the certification, you will receive an email from the certification team regarding the beta.

Please let me know if I can clarify anything!

Thank you for the clarifications, Celeste!
Question
Kurt Hofman · Dec 15, 2020
Hello,
I'm testing IRIS 2020.4 Preview with preview key.
I've access to the management portal but I can't connect with Studio.
This is my docker-command: docker run --name my-iris --detach --publish 9091:51773 --publish 9092:52773 --volume C:\Docker\iris_external:/external --volume C:\Docker\iris_durable:/durable --env ISC_DATA_DIRECTORY=/durable/irissys containers.intersystems.com/intersystems/iris:2020.4.0.524.0 --key /external/iris.key --password-file /external/password.txt
I notice that Caché Direct is disabled by the license.
Can someone help me out ? InterSystems have changed SuperServer port in a few latest builds, back to 1972. So, just replacing 51773 with 1972, should work.
You can use docker inspect
docker inspect containers.intersystems.com/intersystems/iris:2020.4.0.524.0
Will outputs something like this
.....
"ExposedPorts": {
"1972/tcp": {},
"2188/tcp": {},
"52773/tcp": {},
"53773/tcp": {},
"54773/tcp": {}
},
.....
"Labels": {
"com.intersystems.adhoc-info": "",
"com.intersystems.platform-version": "2020.4.0.524.0",
"com.intersystems.ports.default.arbiter": "2188",
"com.intersystems.ports.default.license-server": "4002",
"com.intersystems.ports.default.superserver": "1972",
"com.intersystems.ports.default.webserver": "52773",
"com.intersystems.ports.default.xdbc": "53773",
"com.intersystems.product-name": "IRIS",
"com.intersystems.product-platform": "dockerubuntux64",
"com.intersystems.product-timestamp": "Thu Oct 22 2020 13:02:16 EDT",
"com.intersystems.product-timestamp.iso8601": "2020-10-22T17:02:16Z",
"maintainer": "InterSystems Worldwide Response Center <support@intersystems.com>",
"org.opencontainers.image.created": "2020-10-22T19:32:32Z",
"org.opencontainers.image.documentation": "https://docs.intersystems.com/",
"org.opencontainers.image.title": "intersystems/iris",
"org.opencontainers.image.vendor": "InterSystems",
"org.opencontainers.image.version": "2020.4.0.524.0-0"
}
I've left only the interesting lines in your case. There you can find which ports are declared as exposed in the image, and the labels, which declare the available ports in the image.
To access the label directly:
$ docker inspect containers.intersystems.com/intersystems/iris:2020.4.0.524.0 \
--format '{{ index .Config.Labels "com.intersystems.ports.default.superserver" }}'
1972

Thanks, I replaced 51773 with 1972 and now it works!
Question
yeung elijah · Jan 28, 2021
Hi,
I'm a Java developer. Is there an integration package for Spring Boot (Maven or Gradle)?
This is also the question I want to submit. Check https://openexchange.intersystems.com/package/springboot-iris-crud