Article
Evgeniy Potapov · Mar 18, 2024
Pandas is not just a popular software library. It is a cornerstone in the Python data analysis landscape. Renowned for its simplicity and power, it offers a variety of data structures and functions that are instrumental in transforming the complexity of data preparation and analysis into a more manageable form. It is particularly relevant in such specialized environments as ObjectScript for Key Performance Indicators (KPIs) and reporting, especially within the framework of the InterSystems IRIS platform, a leading data management and analysis solution. In the realm of data handling and analysis, Pandas stands out for several reasons. In this article, we aim to explore those aspects in depth:
Key Benefits of Pandas for Data Analysis:
In this part, we will delve into the various advantages of using Pandas. These include intuitive syntax, efficient handling of large datasets, and the ability to work seamlessly with different data formats. The ease with which Pandas integrates into existing data analysis workflows is also a significant factor that enhances productivity and efficiency.
Solutions to Typical Data Analysis Tasks with Pandas:
Pandas is versatile enough to tackle routine data analysis tasks, ranging from simple data aggregation to complex transformations. We will explore how Pandas can be used to solve these typical challenges, demonstrating its capabilities in data cleaning, transformation, and exploratory data analysis. This section will provide practical insights into how Pandas simplifies these tasks.
Using Pandas Directly in ObjectScript KPIs in IRIS:
The integration of Pandas with ObjectScript for the development of KPIs in the IRIS platform is simply a game-changer. This part will cover how Pandas can be utilized directly within ObjectScript, enhancing the KPI development process. We will also explore practical examples of how Pandas can be employed to analyze and visualize data, thereby contributing to more robust and insightful KPIs.
Recommendations for Implementing Pandas in IRIS Analytic Processes:
Implementing a new tool in an existing analytics process can be challenging. For that reason, this section aims to provide best practices and recommendations for integrating Pandas into the IRIS analytics ecosystem as smoothly as possible. From setup and configuration to optimization and best practices, we will cover essential guidelines to ensure a successful integration of Pandas into your data analysis workflow.

Pandas is a powerful data analytics library in the Python programming language. Below, you can find a few benefits of Pandas for data analytics:
Ease of use: Pandas provides a simple and intuitive interface for working with data. It is built on top of the NumPy library and provides such high-level data structures as DataFrames, which makes it easy to work with tabular data.
Data Structures: The principal data structures in Pandas are Series and DataFrame. Series is a one-dimensional array with labels, whereas DataFrame is a two-dimensional table representing a set of Series. These data structures combined allow convenient storage and manipulation of data.
Handling missing data: Pandas provides convenient methods for detecting and handling missing data (NaN or None). It includes some methods for deleting, filling, or replacing missing values, simplifying your work with real data.
Data grouping and aggregation: With Pandas it is easy to group data by features and apply aggregation functions (sum, mean, median, etc.) to each data group.
Powerful indexing capabilities: Pandas provides flexible tools for indexing data. You can use labels, numeric indexes, or multiple levels of indexing. It allows you to filter, select, and manipulate data efficiently.
Reading and writing data: Pandas supports multiple data formats, including CSV, Excel, SQL, JSON, HTML, etc. It facilitates the process of reading and writing data from/to various sources.
Extensive visualization capabilities: Pandas is integrated with such visualization libraries as Matplotlib and Seaborn, making it simple to create graphs and visualize data, especially with the help of DeepSeeWeb through integration via embedded Python.
Efficient time management: Pandas provides multiple features for working with time series, including powerful tools for working with timestamps and periods.
Extensive data manipulation capabilities: The library provides various functions for filtering, sorting, and reshaping data, as well as joining and merging tables, which makes it a powerful tool for data manipulation.
Excellent performance: Pandas is purposefully optimized to handle large amounts of data. It provides high performance by using Cython and enhanced data structures.
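To make a couple of these benefits concrete (missing-data handling plus grouping and aggregation), here is a minimal, hypothetical sketch in the same embedded-Python style used later in this article; the method name and data are invented for illustration:
ClassMethod PandasBasics() [ Language = python ]
{
    import pandas as pd
    # A small DataFrame with a missing value (None becomes NaN):
    df = pd.DataFrame({"city": ["Boston", "Boston", "Riga"],
                       "visits": [10, None, 7]})
    # Fill the missing value, then group by city and sum in one chained call:
    totals = df.fillna({"visits": 0}).groupby("city")["visits"].sum()
    print(totals)
}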
Let's look at an example of Pandas' implementation in an ObjectScript environment. We will employ VSCode as our development environment. The choice of IDE in this case was determined by the availability of the InterSystems ObjectScript Extension Pack, which provides a debugger and editor for ObjectScript. First of all, let's create a KPI class:
Class BI.KPI.pandasKpi Extends %DeepSee.KPI
{
}
Then, we should make an XML document defining the type, name, and number of columns and filters of our KPI:
XData KPI [ XMLNamespace = "http://www.intersystems.com/deepsee/kpi" ]
{
<!-- 'manual' KPI type will tell DeepSee that data will be gathered from the class method defined by us-->
<kpi name="MembersPandasDemo" sourceType="manual">
<!-- we are going to need only one column for our KPI query -->
<property columnNo="1" name="Members" displayName="Community Members"/>
<!-- and lastly we should define a filter for our members -->
<filter name="InterSystemsMember"
displayName="InterSystems member"
sql="SELECT DISTINCT ISCMember from Community.Member"/>
</kpi>
}
The next step is to define the Python function, write the imports, and create the necessary variables:
ClassMethod MembersDF(sqlstring) As %Library.DynamicArray [ Language = python ]
{
# First of all, we import the most important library in our script: IRIS.
# IRIS library provides syntax for calling ObjectScript classes.
# It simplifies Python-ObjectScript integration.
# With the help of the library we can call any class and class method, and
# it returns whatever data type we like, and ObjectScript understands it.
import iris
# Then, of course, import the pandas itself.
import pandas as pd
# Create three empty lists:
Id_list = []
time_list = []
ics_member = []
Next step: define a query against the database:
# Define SQL query for fetching data.
# The query can be as simple as possible.
# All the work will be done by pandas:
query = """
SELECT
id as ID, CAST(TO_CHAR(Created, 'YYYYMM') as Int) as MonthYear, ISCMember as ISC
FROM Community.Member
order by Created DESC
"""
Then, we need to save the resulting data into an array group:
# Call the class specified for executing SQL statements.
# We use embedded Python library to call the class:
sql_class = iris.sql.prepare(query)
# We use it again to call dedicated class methods:
rs = sql_class.execute()
# Then we use pandas directly on the result set to make dataFrame:
data = rs.dataframe()
We can also pass an argument to filter our data frame.
# Filter example
# We take an argument sqlstring which, in this case, contains boolean data.
# With the handy .loc function, we filter the data:
if sqlstring is not False:
data = data.loc[data["ISC"] == int(sqlstring)]
Now, we should group the data and define x-axis for it:
# Group data by date displayed like MonthYear:
grouped_data = data.groupby(["MonthYear"]).count()
Unfortunately, we cannot take the date column directly from the grouped DataFrame, so instead we take the date column from the original DataFrame and process it.
# Filter out duplicate dates and append them to a list.
# After grouping by MonthYear, pandas automatically filters out duplicate dates.
# We should do the same to match our arrays:
sorted_filtered_dates = [item for item in set(data["MonthYear"])]
# Sort the dates in descending order:
date = sorted(sorted_filtered_dates, reverse=True)
# Convert the grouped Series to a list:
id = grouped_data["ID"].tolist()
# Reverse values according to the date array:
id.reverse()
# In order to return the appropriate object to ObjectScript so that it understands it,
# we call '%Library.DynamicArray' (it is the closest one to python and an easy-to-use type of array).
# Again, we use IRIS library inside python code:
OBJIDList = iris.cls('%Library.DynamicArray')._New()
OBJtimeList = iris.cls('%Library.DynamicArray')._New()
# Append all data with the DynamicArray method _Push():
for i in date:
OBJtimeList._Push(i)
for i in id:
OBJIDList._Push(i)
return OBJIDList, OBJtimeList
}
The next step is to define the KPI-specific method that lets DeepSee know what data to take:
// Define method. The method must always be %OnLoadKPI(). Otherwise, the system will not recognise it.
Method %OnLoadKPI() As %Status
{
//Define string for the filter. Set the default to zero
set sqlstring = 0
//Call %filterValues method to fetch any filter data from the widget.
if $IsObject(..%filterValues) {
if (..%filterValues.InterSystemsMember'="")
{
set sqlstring=..%filterValues.%data("InterSystemsMember")
}
}
//Call pandas function, pass filter value if any, and receive dynamic arrays with data.
set sqlValue = ..MembersDF(sqlstring)
//Assign each tuple to a variable.
set idList = sqlValue.GetAt(1)
set timeList = sqlValue.GetAt(2)
//Calculate size of x-axis. It will be rows for our widget:
set rowCount = timeList.%Size()
//Since we need only one column, we assign variable to 1:
set colCount = 1
set ..%seriesCount=rowCount
//Now, for each row, assign time value and ID value of our members:
for rows = 1:1:..%seriesCount
{
set ..%seriesNames(rows)=timeList.%Get(rows-1)
for col = 1:1:colCount
{
set ..%data(rows,"Members")=idList.%Get(rows-1)
}
}
quit $$$OK
}
At this point, compile the KPI and create a widget on a dashboard using the KPI data source.
That's it! We have successfully navigated through the process of integrating and utilizing Pandas in our ObjectScript applications on InterSystems IRIS. This journey has taken us from fetching and formatting data to filtering and displaying it, all within a single, streamlined function. This demonstration highlights the efficiency and power of Pandas in data analysis. Now, let's explore some practical recommendations for implementing Pandas within the IRIS environment and conclude with insights on its transformative impact.
Recommendations for Practical Application of Pandas in IRIS
Start with Prototyping:
Begin your journey with Pandas by using example datasets and utilities. This approach helps you understand the basics and nuances of Pandas in a controlled and familiar environment. Prototyping allows you to experiment with different Pandas functions and methods without the risks associated with live data.
Gradual Implementation:
Introduce Pandas incrementally into your existing data processes. Instead of a complete overhaul, identify the areas where Pandas can enhance or simplify data handling and analysis. These could be simple tasks like data cleaning and aggregation, or more complex analyses where Pandas' capabilities can be fully leveraged.
Optimize Pandas Use:
Prior to working with large datasets, it is crucial to optimize your Pandas code. Efficient code can significantly reduce processing time and resource consumption, which is especially important in large-scale data analysis. Techniques such as vectorized operations, using appropriate data types, and avoiding loops in data manipulation can significantly enhance performance.
Conclusion
The integration of Pandas into ObjectScript applications on the InterSystems IRIS platform marks a significant advancement in the field of data analysis. Pandas brings us an array of powerful tools for data processing, analysis, and visualization, which are now at the disposal of IRIS users. This integration not only accelerates and simplifies the development of KPIs and analytics but also paves the way for more sophisticated and advanced data analytical capabilities within the IRIS ecosystem. With Pandas, analysts and developers can explore new horizons in data analytics, leveraging its extensive functionalities to gain deeper insights from their data. The ability to process and analyze large datasets efficiently, coupled with the ease of creating compelling visualizations, empowers users to make more informed decisions and uncover trends and patterns that were previously difficult to detect. In summary, Pandas integration into the InterSystems IRIS environment is a transformative step, enhancing the capabilities of the platform and offering users an expanded toolkit for tackling the ever-growing challenges and complexities of modern data analysis.
What a great case of using Embedded Python in InterSystems IRIS BI! Wonderful, @Evgeniy.Potapov! Do you want to add an Open Exchange app to demo the case? That'd be fantastic!
Awesome!
Do you have a dashboard file for this example? If yes, please share it.
Sure!
You can use our Open Exchange application Demo-Pandas-Analytics. Install it by:
zpm "install demo-pandas-analytics"
This dashboard is really simple; you can view the source code here.
Announcement
Kristina Lauer · Apr 24, 2024
Are you ready to get InterSystems certified? At the Global Summit, you can take a certification exam for free! All seven exams will be offered, including the new InterSystems IRIS SQL Specialist exam (in beta).
Find the exam schedule and registration details—as well as ways to prepare—and reserve your spot!
Announcement
Anastasia Dyubaylo · Sep 2, 2024
Hi Developers,
We'd like to invite you to join our next contest dedicated to creating useful tools to make your fellow developers' lives easier:
🏆 InterSystems Developer Tools Contest 🏆
Submit an application that helps to develop faster, contributes more qualitative code, and helps in testing, deployment, support, or monitoring of your solution with InterSystems IRIS.
Duration: September 9 - 29, 2024
Prize pool: $14,000
The topic
💡 InterSystems IRIS developer tools 💡
In this contest, we expect applications that improve developer experience with IRIS, help to develop faster, contribute more qualitative code, help to test, deploy, support, or monitor your solution with InterSystems IRIS.
General Requirements:
An application or library must be fully functional. It should not be an import or a direct interface for an already existing library in another language (except for C++, where you really need to do a lot of work to create an interface for IRIS). It should not be a copy-paste of an existing application or library.
Accepted applications: new to Open Exchange apps or existing ones, but with a significant improvement. Our team will review all applications before approving them for the contest.
The application should work either on IRIS, IRIS for Health, or IRIS Cloud SQL. The first two can be downloaded as host (Mac, Windows) versions from the Evaluation site, or can be used in the form of containers pulled from InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
The application should be Open Source and published on GitHub.
The README file to the application should be in English, contain the installation steps, and either the video demo or/and a description of how the application works.
No more than 3 submissions from one developer are allowed.
NB. Our experts will have the final say in whether the application is approved for the contest or not based on the criteria of complexity and usefulness. Their decision is final and not subject to appeal.
Prizes
1. Experts Nomination - a specially selected jury will determine the winners:
🥇 1st place - $5,000
🥈 2nd place - $3,000
🥉 3rd place - $1,500
🏅 4th place - $750
🏅 5th place - $500
🌟 6-10th places - $100
2. Community winners - an application that will receive the most votes in total:
🥇 1st place - $1000
🥈 2nd place - $750
🥉 3rd place - $500
🏅 4th place - $300
🏅 5th place - $200
If several participants score the same amount of votes, they all are considered winners, and the money prize is shared among the winners.
Who can participate?
Any Developer Community member, except for InterSystems employees (ISC contractors allowed). Create an account!
Developers can team up to create a collaborative application. 2 to 5 developers are allowed in one team.
Do not forget to highlight your team members in the README of your application – DC user profiles.
Important Deadlines:
🛠 Application development and registration phase:
September 9, 2024 (00:00 EST): Contest begins.
September 22, 2024 (23:59 EST): Deadline for submissions.
✅ Voting period:
September 23, 2024 (00:00 EST): Voting begins.
September 29, 2024 (23:59 EST): Voting ends.
Note: Developers can improve their apps throughout the entire registration and voting period.
Helpful resources
✓ Example applications:
webterminal - an emulation for IRIS terminal as a web application
git-source-control - git tool to manage changes for shared dev environments and IRIS UI dev editors by @Timothy.Leavitt
iris-rad-studio - RAD for UI
cmPurgeBackup - backup tool
errors-global-analytics - errors visualization
objectscript-openapi-definition - open API generator
Test Coverage Tool - test coverage helper
iris-bi-utils - a toolset for IRIS BI
and many more.
✓ Templates we suggest to start from:
iris-dev-template
Interoperability-python
rest-api-contest-template
native-api-contest-template
iris-fhir-template
iris-fullstack-template
iris-interoperability-template
iris-analytics-template
✓ For beginners with IRIS and Python:
InterSystems Embedded Python in glance
InterSystems IRIS Interoperability with Embedded Python
Feedback : Using embedded python daily for more than 2 years
Embedded Python Template
✓ For beginners with IRIS and ObjectScript:
Build a Server-Side Application with InterSystems IRIS
Learning Path for beginners
✓ For beginners with ObjectScript Package Manager (ZPM):
How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS
Package First Development Approach with InterSystems IRIS and ZPM
✓ How to submit your app to the contest:
How to submit an application on Open Exchange
How to apply for the contest
Need Help?
Join the contest channel on InterSystems' Discord server or talk with us in the comments section of this post.
We can't wait to see your projects! Good luck 👍
By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.
Hey Devs!
One app has been added to the contest, check it out!
Code-Scanner by @Robert.Cemper1003
Upload your applications and join the contest!
Hi Developers!
The "Kick-off Webinar for InterSystems Developer Tools Contest 2024" recording is on InterSystems Developers YouTube channel! Enjoy!
⏯️Kick-off Webinar for InterSystems Developer Tools Contest 2024
A couple of days ago I entered a new version of DX Jetpack for VS Code which adds three completely new extensions.
Hey Devs!
There are only three days left before the registration phase ends! Upload your applications and join the contest!
Two more apps have been added to the contest, check them out!
DX Jetpack for VS Code by @John.Murray
ks-iris-lib by @Robert.Barbiaux
Developers!
One more application has been added to the contest!
iris-ccd-devtools by @Chi.Nguyen-Rettig
There are only two days left to join the contest!
Community!
Three more applications have been added to the contest!
db-management-tool by @Andrii.Mishchenko
sql-embeddings by @Henrique
pxw-lib-sql by @Paul.Waterman
Last day to join the contest, upload your apps!
iris-ccd-devtools by @Chi.Nguyen-Rettig
This tool is incredibly innovative! It's definitely going to save a lot of time and frustration for many! 😁
It would be great if the issues in building the Docker container were fixed, so we can try it and reduce the evaluators' frustration first. Actually, it is broken; issues have been pending for some days.
Hmmm I'm not having that problem
Announcement
Irène Mykhailova · Jan 14
Hello Community!
We are delighted to invite all our customers, partners and community members to participate in the InterSystems Benelux & France Summit 2025! The registration for the Summit 2025 is already open.
This event promises to be an interactive experience highlighting inspiring case studies, technological innovations and roadmaps for the coming year in the fields of healthcare and data platforms. Hands-on demonstrations will also allow you to explore the latest developments in a tangible way.
➡️ InterSystems Benelux & France Summit
🗓 Dates: February 11-12, 2025
📍 Place: Hilton Rotterdam | Weena 10 | 3012 CM Rotterdam | Netherlands
Over several sessions, we will explore topics such as:
Healthcare standards such as FHIR and OMOP
Analytics and AI
GenAI and Vector Search
Python
...and various other emerging technologies
We are confident that this event will be a remarkable start to the year. We would be delighted to have you there to enrich this event with your experience and expertise. Save the date and register today using our registration form.
We look forward to seeing you at the InterSystems Benelux & France Summit 2025!
I also think it's a great way to start the year 👏
Article
Ashok Kumar T · Feb 17
What is JWT?
JWT (JSON Web Token) is an open standard (RFC 7519) that offers a lightweight, compact, and self-contained method for securely transmitting information between two parties. It is commonly used in web applications for authentication, authorization, and information exchange.
A JWT is typically composed of three parts:
1. JOSE (JSON Object Signing and Encryption) Header
2. Payload
3. Signature
These parts are encoded in Base64Url format and concatenated with dots (.) separating them.
Structure of a JWT
Header
{ "alg": "HS256", "typ": "JWT"}
Payload
{"sub": "1234567890", "name": "John Doe", "iat": 1516239022}
Signature: The signature is used to verify that the sender of the JWT is who it claims to be and to ensure that the message has not been tampered with.
To create the signature:
1. Base64Url encode the header and the payload.
2. Apply the signing algorithm (e.g., HMAC SHA256 or RSA) with a secret key (for symmetric algorithms like HMAC) or a private key (for asymmetric algorithms like RSA).
3. Base64Url encode the result to obtain the signature.
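To illustrate those three steps, below is a minimal, hypothetical ObjectScript sketch of HS256 signing done by hand (the method names are invented; in practice, %Net.JSON.JWT does all of this for you):
ClassMethod SignHS256(header As %String, payload As %String, secret As %String) As %String
{
    // Step 1: Base64Url encode the header and the payload
    Set signingInput = ..Base64Url(header)_"."_..Base64Url(payload)
    // Step 2: apply HMAC SHA-256 with the shared secret
    Set rawSig = $SYSTEM.Encryption.HMACSHA(256, signingInput, secret)
    // Step 3: Base64Url encode the signature and assemble the JWT
    Quit signingInput_"."_..Base64Url(rawSig)
}

ClassMethod Base64Url(input As %String) As %String
{
    // Standard Base64 without line breaks, translated to the URL-safe alphabet, padding stripped
    Quit $TRANSLATE($SYSTEM.Encryption.Base64Encode(input, 1), "+/=", "-_")
}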
Sample JWT. View the content of the JWT
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
JWT Creation in IRIS
Note: Before 2024, the %OAuth2.JWT class was used for generating JWTs in IRIS. The %Net.JSON.JWT class is now the primary class for JWT creation, and I will use this class in the example code.
JWK overview
A JWK represents a cryptographic key, particularly for signing and verifying JWTs. JWKs allow you to represent public keys (for verification) and private keys (for signing) in a standardized format that can be easily exchanged between systems. A JWKS (JSON Web Key Set) holds multiple JWKs.
JWT workflow
1. Construct your header as a %DynamicObject and add custom headers if needed.
2. Construct the body/claims directly as a %DynamicObject
3. Call the Create method from the %Net.JSON.JWT class.
Set sc = ##Class(%Net.JSON.JWT).Create(header, , claims, jwks, , .JWT)
Create JWK
Set sc = ##Class(%Net.JSON.JWK).Create("HS256","1212ASD!@#!#@$@#@$$#SDFDGD#%+_)(*@$SFFS",.privateJWK,.publicJWK)
This will return the private key
{"kty":"oct","k":"MTIxMkFTRCFAIyEjQCRAI0AkJCNTREZER0QjJStfKSgqQCRTRkZT","alg":"HS256"}
Some important JWK Properties
"kty": "oct" - represents the symmetric algorithm"kty": "RSA" / "kty": "EC" - represents the Asymmetric algorithm
Once the JWK is created, it can be added to the JWKS.
Let's create JWKS in IRIS
Set sc = ##class(%Net.JSON.JWKS).PutJWK(jwk,.JWKS)
This method returns the JWKS
Generating the JWT in IRIS
You can create symmetric or asymmetric key JWTs in IRIS. The %Net.JSON.JWK class is used to generate the key material. Before calling the JWT Create method, ensure that you create and pass the JWKS, for both symmetric and asymmetric encryption.
Symmetric Encryption
Symmetric algorithms use a shared secret key: both the sender and receiver use the same key to sign and verify the JWT. These algorithms, like HMAC (HS256, HS384, HS512), generate a hash (signature) for the JWT payload. This approach is not recommended for high-security systems, since the same key must be shared for both signing and verification, posing potential security risks.
The Create method from the %Net.JSON.JWK class is used to generate the JWK. It takes two input parameters and returns two output parameters:
1. algorithm - The algorithm for which to create the JWK.
2. secret - The key used to sign and verify the JWT.
3. privateJWK - The private JSON Web Key that is created.
4. publicJWK - The public JSON Web Key that is created.
For symmetric key algorithms, you'll get privateJWK.
For asymmetric key algorithms, you'll get privateJWK and publicJWK.
SymmetricKeyJWT
ClassMethod SymmetricKeyJWT()
{
    Set secret = "1212ASD!@#!#@$@#@$$#SDFDGD#%+_)(*@$SFFS"
    Set algorithm = "HS256"
    Set header = {"alg": (algorithm), "typ": "JWT", "x-c": "te"}
    Set claims = {"sub": "1234567890", "name": "John Doe", "iat": 1516239022}
    #; create JWK
    Set sc = ##class(%Net.JSON.JWK).Create(algorithm, secret, .privateJWK)
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    ZWrite privateJWK
    #; Create JWKS
    Set sc = ##class(%Net.JSON.JWKS).PutJWK(privateJWK, .privateJWKS)
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    ZWrite privateJWKS
    Set sc = ##Class(%Net.JSON.JWT).Create(header, , claims, privateJWKS, , .pJWT)
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    Write pJWT
    Quit
}
Output
LEARNING>d ##class(Learning.JWT.NetJWT).SymmetricKeyJWT()
privateJWK={"kty":"oct","k":"MTIxMkFTRCFAIyEjQCRAI0AkJCNTREZER0QjJStfKSgqQCRTRkZT","alg":"HS256"} ; <DYNAMIC OBJECT>
privateJWKS="{""keys"":[{""kty"":""oct"",""k"":""MTIxMkFTRCFAIyEjQCRAI0AkJCNTREZER0QjJStfKSgqQCRTRkZT"",""alg"":""HS256""}]}"
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsIngtYyI6InRlIn0.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.PcCs_I8AVy5HsLu-s6kQYWaGvuwqwPAElIad11NpM_E
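To verify such a token, %Net.JSON.JWT also provides a Validate method. The sketch below shows assumed typical usage; for the symmetric case, the same JWKS serves for signing and verification. Check the class reference for the exact parameter list in your IRIS version:
ClassMethod ValidateJWT(pJWT As %String, privateJWKS As %String) As %Status
{
    // Assumed signature: JWT, signature JWKS, encryption JWKS, acceptUnsecured flag,
    // plus output header and claims objects
    Set sc = ##class(%Net.JSON.JWT).Validate(pJWT, privateJWKS, "", 0, .header, .claims)
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
        Quit sc
    }
    ZWrite header, claims
    Quit $$$OK
}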
Asymmetric Encryption
Asymmetric encryption refers to the use of a key pair: one key for signing the token (private key) and another key for verifying the token (public key). This is different from symmetric encryption, where a single shared key does both.
Private Key: This key is used for signing the JWT. It is kept secret and should never be exposed.
Public Key: This key is used to verify the authenticity of the JWT. It can be safely shared and distributed because it cannot be used to sign new tokens.
You can generate an asymmetrically signed JWT with a private key/certificate via %SYS.X509Credentials, so you have to store your certificate in this persistent class.
AsymmetricWithx509
ClassMethod AsymmetricWithx509()
{
    Set x509 = ##class(%SYS.X509Credentials).%OpenId("myprivateTest1")
    Set algorithm = "RS256"
    Set header = {"alg": (algorithm), "typ": "JWS", "x-c": "te"}
    Set claims = {"sub": "1234567890", "name": "John Doe", "iat": 1516239022}
    #; create JWK
    Set sc = ##class(%Net.JSON.JWK).CreateX509(algorithm, x509, .privateJWK, .publicJWK)
    ZWrite privateJWK, publicJWK
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    #; Create JWKS
    Set sc = ##class(%Net.JSON.JWKS).PutJWK(privateJWK, .privateJWKS)
    ZWrite privateJWKS
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    ; sigJWKS - private
    Set sc = ##Class(%Net.JSON.JWT).Create(header, , claims, privateJWKS, , .pJWT)
    If $$$ISERR(sc) {
        Write $SYSTEM.OBJ.DisplayError(sc)
    }
    Write pJWT
    Quit
}
JWT in Web applications.
Starting from the 2023 version, IRIS includes built-in JWT creation for web applications by default. Ensure that JWT Authentication is enabled when setting up your web application.
Here is a brief explanation of the configuration:
1. Enable the JWT Authentication in your web application.
2. If you haven't already, create a REST class.
3. The default endpoint resource "/login" is included. Make a REST API call using basic authentication with a payload like {"user": "_SYSTEM", "password": "SYS"}.
4. The response will be a JSON containing the "access_token", "refresh_token", and other relevant details.
5. Use the "access_token" for authorization.
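As a hedged illustration of step 2, a minimal REST dispatch class could look like the sketch below (the class name and route are invented); once JWT authentication is enabled on the web application, the /login endpoint is provided for you, and protected calls send "Authorization: Bearer <access_token>":
Class Demo.JWT.API Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
    <!-- A protected resource reachable only with a valid access token -->
    <Route Url="/ping" Method="GET" Call="Ping"/>
</Routes>
}

ClassMethod Ping() As %Status
{
    Do ##class(%REST.Impl).%SetContentType("application/json")
    Write {"status": "ok"}.%ToJSON()
    Quit $$$OK
}

}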
Article
Julio Esquerdo · Feb 18
REST API with Swagger in InterSystems IRIS
Hello
The HTTP protocol allows you to obtain resources, such as HTML documents. It is the basis of any data exchange on the Web and a client-server protocol, meaning that requests are initiated by the recipient, usually a Web browser.
REST APIs take advantage of this protocol to exchange messages between client and server. This makes REST APIs fast, lightweight, and flexible. REST APIs use the HTTP verbs GET, POST, PUT, DELETE, and others to indicate the actions they want to perform.
When we make a call to a REST API, what actually happens is an HTTP call. The API receives this call and, according to the requested verb and path, performs the desired action. In the case of the IRIS implementation, we can see this clearly in the URLMap definition area:
XData UrlMap
{
<Routes>
    <Route Url="/cliente" Method="POST" Call="Incluir" Cors="true"/>
    <Route Url="/cliente/:chave" Method="PUT" Call="Alterar" Cors="true"/>
    <Route Url="/cliente/:chave" Method="DELETE" Call="Deletar" Cors="true"/>
    <Route Url="/cliente/:chave" Method="GET" Call="Pesquisar" Cors="true"/>
    <Route Url="/cliente" Method="GET" Call="Listar" Cors="true"/>
    <Route Url="/openapi" Method="GET" Call="OpenAPI" Cors="true"/>
    <Route Url="/test" Method="GET" Call="Test" Cors="true"/>
</Routes>
}
Notice that we have the path (Url) and the verb (Method) defined for each call (Call). Thus, the code behind the API knows what it should do.
This structure in the REST API does more than route the actions that arrive at the API.
It is also the basis for creating the API Swagger file, as seen in the %REST.API class documentation, GetWebRESTApplication method: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25REST.API#GetWebRESTApplication
Let's see how we can generate this documentation then.
First let's publish our API. Let's use the same API as in article https://community.intersystems.com/post/using-rest-api-flask-and-iam-intersystems-iris-part-1-rest-api
Just follow the guidelines and use the code that the article provides to have our API published.
Once the API is published we will include a new route in URLMap and a new method in our code:
<Route Url="/openapi" Method="GET" Call="OpenAPI" Cors="true"/>
ClassMethod OpenAPI() As %Status
{
    Do ##class(%REST.Impl).%SetContentType("application/json")
    Set sc = ##class(%REST.API).GetWebRESTApplication("", "/rest/servico", .swagger)
    If $$$ISERR(sc) {
        Quit sc // If an error occurred, exit the method
    }
    Write swagger.%ToJSON()
    Quit $$$OK
}
Include the new route and the method associated with it in your API code. Let's now do a test. Let's open Postman and call the endpoint for generating Swagger, which is /openapi:
And below we have the complete definition of the Swagger generated by our call:
{
"info": {
"title": "",
"description": "",
"version": "",
"x-ISC_Namespace": "DEMO"
},
"basePath": "/rest/servico",
"paths": {
"/customer": {
"post": {
"parameters": [
{
"name": "payloadBody",
"in": "body",
"description": "Request body contents",
"required": false,
"schema": {
"type": "string"
}
}
],
"operationId": "Include",
"x-ISC_ServiceMethod": "Include",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
},
"get": {
"operationId": "List",
"x-ISC_ServiceMethod": "List",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
}
},
"/customer/{key}": {
"put": {
"parameters": [
{
"name": "key",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "payloadBody",
"in": "body",
"description": "Request body contents",
"required": false,
"schema": {
"type": "string"
}
}
],
"operationId": "Change",
"x-ISC_ServiceMethod": "Change",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
},
"delete": {
"parameters": [
{
"name": "key",
"in": "path",
"required": true,
"type": "string"
}
],
"operationId": "Delete",
"x-ISC_ServiceMethod": "Delete",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
},
"get": {
"parameters": [
{
"name": "key",
"in": "path",
"required": true,
"type": "string"
}
],
"operationId": "Search",
"x-ISC_ServiceMethod": "Search",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
}
},
"/openapi": {
"get": {
"operationId": "OpenAPI",
"x-ISC_ServiceMethod": "OpenAPI",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
}
},
"/test": {
"get": {
"operationId": "Test",
"x-ISC_ServiceMethod": "Test",
"x-ISC_CORS": true,
"responses": {
"default": {
"description": "(Unexpected Error)"
},
"200": {
"description": "(Expected Result)"
}
}
}
}
},
"Swagger": "2.0"
}
The following link gives us access to the IRIS documentation that points to a tool that receives the output of our API call and transforms it into a user-friendly interface for documentation and testing of the service: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GREST_discover_doc#GREST_gendoc
Let's open this tool and provide the path to retrieve the Swagger definition from our API: https://swagger.io/tools/swagger-ui/
Let's replace the demo call on the page (the petstore call) with our call: http://127.0.0.1/iris_iss/rest/servico/openapi and see the screen with the generated Swagger documentation:
At the top of the call we have the basic information of our API:
And we can also navigate through the calls and view important information, such as in the POST call to include a new customer:
But, as we saw a little above, the Swagger definition is actually a file in JSON format that is available to us in the form of a %DynamicObject (see the documentation at https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25Library.DynamicObject) after the ##class(%REST.API).GetWebRESTApplication() call that we perform in our OpenAPI method. In this way, based on the OpenAPI 2.0 definition, we can include or remove information from our Swagger. Let's, for example, enrich the basic information of the API. According to the definition of version 2.0 (https://swagger.io/specification/v2/), we may have the following information made available:
Let's then complete the information that we can make available. To do this, we'll change our OpenAPI method, including the basic information, accepted protocol, authentication, and access data (host and basePath):
ClassMethod OpenAPI() As %Status
{
    Do ##class(%REST.Impl).%SetContentType("application/json")
    Set sc = ##class(%REST.API).GetWebRESTApplication("", "/rest/servico", .swagger)
    If $$$ISERR(sc) {
        Quit sc // If an error occurred, exit the method
    }
    Set info = {
        "title": "Cliente",
        "description": "API de Manipulação de Cliente",
        "version": "1.00"
    }
    Do swagger.%Set("info", info)
    Do swagger.%Set("host", "127.0.0.1")
    Do swagger.%Set("basePath", "/iris_iss/rest/servico")
    Set schemes = []
    Do schemes.%Push("http")
    Do swagger.%Set("schemes", schemes)
    Set security = [{"basicAuth":[]}]
    Do swagger.%Set("security", security)
    Set securityDefinitions = {}
    Do securityDefinitions.%Set("basicAuth", {})
    Do securityDefinitions.basicAuth.%Set("type", "basic")
    Do swagger.%Set("securityDefinitions", securityDefinitions)
    Write swagger.%ToJSON()
    Quit $$$OK
}
Calling the Swagger documentation again on the preview page now we have the following output:
See that our documentation is much more complete, with more detailed information about the API, such as the authentication mechanism (Basic Auth), the accepted protocol (HTTP) and the version, description and URL definitions.
We can now enrich the definition of the calls by putting the expected structure in the payloads and data examples for the call. To do this, let's overlay the information in paths for "/cliente":
ClassMethod OpenAPI() As %Status
{
    Do ##class(%REST.Impl).%SetContentType("application/json")
    Set sc = ##class(%REST.API).GetWebRESTApplication("", "/rest/servico", .swagger)
    If $$$ISERR(sc) {
        Quit sc // If an error occurred, exit the method
    }
    Set info = {
        "title": "Cliente",
        "description": "API de Manipulação de Cliente",
        "version": "1.00"
    }
    Do swagger.%Set("info", info)
    Do swagger.%Set("host", "127.0.0.1")
    Do swagger.%Set("basePath", "/iris_iss/rest/servico")
    Set schemes = []
    Do schemes.%Push("http")
    Do swagger.%Set("schemes", schemes)
    Set security = [{"basicAuth":[]}]
    Do swagger.%Set("security", security)
    Set securityDefinitions = {}
    Do securityDefinitions.%Set("basicAuth", {})
    Do securityDefinitions.basicAuth.%Set("type", "basic")
    Do swagger.%Set("securityDefinitions", securityDefinitions)
    Set incluirPesquisar = {
        "post": {
            "summary": "Criar um novo cliente",
            "description": "Recebe os dados de um cliente e cadastra no sistema.",
            "parameters": [
                {
                    "name": "body",
                    "in": "body",
                    "required": true,
                    "schema": {
                        "type": "object",
                        "properties": {
                            "nome": { "type": "string", "description": "Nome do cliente" },
                            "idade": { "type": "integer", "description": "Idade do cliente" }
                        },
                        "required": ["nome", "idade"],
                        "example": { "nome": "João Silva", "idade": 30 }
                    }
                }
            ],
            "responses": {
                "default": { "description": "Falha na chamada da API" },
                "200": { "description": "Cliente criado com sucesso" },
                "406": { "description": "Falha na inclusão do cliente" }
            }
        },
        "get": {
            "summary": "Recupera todos os clientes",
            "description": "Retorna a lista de clientes com suas informações.",
            "responses": {
                "default": { "description": "Falha na chamada da API" },
                "200": {
                    "description": "Lista de clientes",
                    "schema": {
                        "type": "object",
                        "properties": {
                            "clientes": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "chave": { "type": "string", "description": "Chave única do cliente" },
                                        "nome": { "type": "string", "description": "Nome do cliente" },
                                        "idade": { "type": "integer", "description": "Idade do cliente" }
                                    },
                                    "required": ["chave", "nome", "idade"]
                                }
                            }
                        },
                        "example": {
                            "clientes": [
                                { "chave": "1", "nome": "Maria", "idade": 35 },
                                { "chave": "2", "nome": "Julio", "idade": 57 },
                                { "chave": "4", "nome": "Julia", "idade": 25 },
                                { "chave": "5", "nome": "Julia", "idade": 22 }
                            ]
                        }
                    }
                }
            }
        }
    }
    Do swagger.paths.%Set("/cliente", incluirPesquisar)
    Write swagger.%ToJSON()
    Quit $$$OK
}
Calling the documentation page again we can now see the POST methods to include a client and GET to retrieve a list:
Note that we now have an explanation of each of the POST and GET methods that we have overwritten.
Note also that we no longer have the "x-ISC_ServiceMethod" and "x-ISC_Namespace" tags, since we did not put them in our new definition.
Clicking to expand the POST method for example, we now visually have an example call (Example Value):
And the format of the call payload (Model):
In customer recovery, we have the definition of the returned Response with example:
And also with the expected model as an answer:
Also, we can test API calls from the documentation page itself. See the “Try it out” button available on the page. Click on it, then on Execute and see the result:
This code-first approach to a REST API opens up many possibilities for documentation, allowing us to control what will be disseminated, adding or subtracting the information we deem convenient in the published material.
The OpenAPI definition is very complete and detailed. There is a variety of information that we can include or take away, depending on what we need or want to make available in it.
See you next time!
Announcement
Anastasia Dyubaylo · Jan 24
Hi Community,
We have some exciting news! It's time for the next InterSystems writing competition:
✍️ InterSystems Technical Article Contest 2025 ✍️
Write an article on any topic related to the InterSystems products and services.
🎁 Prizes for everyone: A special prize for each author participating in the competition!
Prizes
1. Everyone is a winner in the Tech Article Contest! Any member who writes an article during the contest period will receive a special gift:
🎁 4-in-1 Charging Cable
2. Expert Awards – articles will be judged by InterSystems experts:
🥇 1st place: Nintendo Switch OLED / Hogwarts Icons - Collectors' Edition Lego / The X-Mansion Lego
🥈 2nd place: Nintendo Switch Lite / Lamborghini Countach 5000 Quattrovalvole Lego
🥉 3rd place: Amazon Kindle Paperwhite / Retro Radio Lego
Or, as an alternative, any winner can choose a prize from a lower tier than their own.
3. Developer Community Award – article with the most likes:
🏅 Amazon Kindle Paperwhite / Retro Radio Lego
Note:
The author can only be awarded once per category (in total, the author can win at most two prizes: one from the Experts and one from the Community).
In the event of a tie, the number of votes of the experts for the tied articles will be considered a tie-breaking criterion.
Who can participate?
Any Developer Community member, except for InterSystems employees (contractors are allowed). Create an account!
Contest period
📝 February 3rd to March 2nd: Publication of articles.
🗳️ March 3rd to March 9th: Voting time.
🏅 March 10th: Winners announcement.
Publish an article(s) throughout this period. DC members can vote for published articles with Likes – votes in the Community award.
Note: The sooner you publish the article(s), the more time you will have to collect both Expert & Community votes.
What are the requirements?
❗️ Any article written during the contest period and satisfying the requirements below will automatically* enter the competition:
The article must be about InterSystems products and services.
The article must be in English (incl. inserting code, screenshots, etc.).
The article must be 100% new.
The article cannot be a translation of an article already published in other communities.
The article should contain only correct and reliable information about InterSystems technology.
Article size: 400 words minimum (links and code are not counted towards the word limit).
Articles on the same topic but with dissimilar examples from different authors are allowed.
* Our experts will moderate articles. Only valid content will be eligible to enter the contest.
🎯 EXTRA BONUSES
Bonus | Nominal | Details
Topic bonus | 3 | Write an article on one of the proposed topics below.
Video bonus | 3 | Besides publishing the article, make an explanatory video. See: How to shoot a good video for the Article Contest bonus.
Discussion bonus | 1 | Article with the most useful discussion, as decided by InterSystems experts. Only 1 article will get this bonus.
Translation bonus | 2 | Publish a translation of your article to any of the regional communities. Learn more. Note: only one per article.
New participant bonus | 5 | If you haven't participated in the previous contests, your article(s) will get this bonus.
Application bonus | 5 | Upload an application from your article to Open Exchange.
If a contest article is split into multiple parts, bonuses apply only to the first part unless they are specific to a later part, and the same bonus cannot be used for multiple parts.
Proposed topics
Here's a list of proposed topics that will give your article a Topic bonus:
✔️ Using ODBC & JDBC
✔️ Using DB-API
✔️ Using Dynamic SQL & Embedded SQL
✔️ Generating OpenAPI Documentation
✔️ Authentication Related Endpoints (using SSO, OAuth and ZAuth)
✔️ Using isc.rest to develop API
✔️ Embedded Python in Interoperability (Operations, Services, Custom Functions)
✔️ IKO common deployments
✔️ Adapt existing C#, Java, and Python code to IRIS and IRIS interoperability using external language gateways
✔️ GenAI, Vector Search, RAG
✔️ FHIR, EHR, OMOP
✔️ Data Fabric, Data Lake, Data Warehouse, Data Mesh
✔️ Sharding, Mirroring
Need inspiration or an example? Check out the #Best Practices.
It's time to show off your writing skills!
🖋️ Write. Share. Shine.
Important note: Delivery and prizes vary by country and may not be possible for some of them. A list of countries with restrictions can be requested from @Liubka.Zelenskaia
🔥🔥🔥 🤜🤛
Hi Community! The first bunch of articles has been submitted to the contest:
SQL Host Variables missing? by @Robert.Cemper1003
Monitoring InterSystems IRIS with Prometheus and Grafana by @Stav
REST Service in IRIS Production: When You Crave More Control Over Raw Data by @Aleksandr.Kolesov
Generation of OpenAPI Specifications by @Alessandra.Carena
We are looking forward to even more exciting articles! 😎
Developers! The first week of the publication phase is almost over! And 3 more articles have been published to the contest, check them out:
Using DocDB in SQL, almost by @Dmitry.Maslennikov
IntegratedML Configuration and Application in InterSystems IRIS by @André.DienesFriedrich
IRIS %Status and Exceptions by @Ashok.Kumar
@Anastasia.Dyubaylo Hello, how are you? Is it possible to change some points of the article after publishing it? I received some very useful feedback from friends in the community.
Hi Andre! Of course! Please feel free to make any changes to your article. Thank you :)
Hi Community!
Check out this post to learn how to create a good video based on your article and earn bonus points in the contest:
>> How to shoot a good video for the Article Contest bonus
by @Sergio.Farago
Enjoy!! 😉
Developers! There are only two weeks left for the publication period, so write down your articles and join the contest! 11 more articles have been added, check them out:
IRIS %Status and Exceptions Part-2 by @Ashok.Kumar
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 1 - SQL Gateway by @Julio.Esquerdo
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 2 – Python and Vector Search by @Julio.Esquerdo
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 3 – REST and Interoperability by @Julio.Esquerdo
A look at Dynamic SQL and Embededd SQL by @Andre.LarsenBarbosa
Using REST API, Flask and IAM with InterSystems IRIS - Part 1 - REST API by @Julio.Esquerdo
Using REST API, Flask and IAM with InterSystems IRIS - Part 2 – Flask App by @Julio.Esquerdo
Using REST API, Flask and IAM with InterSystems IRIS - Part 3 – IAM by @Julio.Esquerdo
Bulk FHIR Step by Step by @Yuri.Gomes
HTTP and HTTPS with REST API by @Julio.Esquerdo
JWT Creation and Integration in InterSystems IRIS by @Ashok.Kumar
Hello Devs!
Registration for the Tech Article Contest is continuing! There are 7 more articles that have been added, check them out:
OMOP Odyssey - InterSystems OMOP, The Cloud Service (Troy) by @sween
OMOP Odyssey - Celebration (House of Hades) by @sween
OMOP Odyssey - FHIR® to OMOP ETL (Calypso’s Island) by @sween
OMOP Odyssey - No Code CDM Exploration with Databricks AI/BI Genie (Island of Aeolus) by @sween
REST API with Swagger in InterSystems IRIS by @Julio.Esquerdo
Securing HL7 Interfaces with SSL/TLS (X.509) Certificates by @Eric.Fortenberry
Using isc.rest to Develop API with InterSystems IRIS by @diba
So, write your article and join the contest! We are waiting for you!
Hi Devs!
Registration closes on March 2nd, so you still have time to publish your article.
We look forward to your cool articles! 😎
Hey Devs! Today is the last day for registration! Complete your articles, publish them, and join!
Meanwhile, more articles have been added. Check them out:
Modern and Easy-to-Use VSCode Plugin for InterSystems ObjectScript: Class Diagram Visualization with PlantUML by @Jinyao
Creating FHIR responses with IRIS Interoperability production by @Laura.BlázquezGarcía
Introducing UDP Adapter by @Corentin.Blondeau
The Case for IRIS and JavaScript by @Rob.Tweed
High-Performance Message Searching in Health Connect by @Timothy.Scott
Using Dynamic & Embedded SQL with InterSystems IRIS by @Muhammad.Waseem
Proposal for ObjectScript naming conventions and coding guidelines by @Robert.Barbiaux
Leveraging InterSystems IRIS for Health Data Analytics with Explainable AI and Vector Search by @rahulsinghal
IRIS Vector Search for Matching Companies and Climate Action by @Alice.Heiman
Multi-Layered Security Architecture for IRIS Deployments on AWS with InterSystems IAM by @Roy.Leonov
SQLAchemy-iris with the latest version Python driver by @Dmitry.Maslennikov
The submission period has ended.
🎉 38 articles submitted – our biggest Tech Article Contest yet! 🎉
This week is voting week. Show your support for the participants by casting your votes (likes) in the Community Award category.
>> Contest page <<
Article
Developer Community Admin · Mar 18
The InterSystems IRIS data platform underlies all InterSystems applications, as well as thousands of customer and partner applications across Healthcare, Financial Services, Supply Chain, and other ecosystems. It is a converged platform, providing transactional-analytical data management, integrated interoperability, and data integration, as well as integrated analytics and AI. It supports the InterSystems Smart Data Fabric approach to managing diverse and distributed data.
At the core of our architecture are facilities for high-performance, multi-model, multi-lingual data processing in our core data engine, also known as the Common Data Plane. Around that lives a remarkable facility for scaling out extremely high volumes of data and high transaction rates that can reach over a billion database operations per second.
Next are two major subsystems: one that focuses on analytics and artificial intelligence (AI) and another that focuses on interoperability and data integration. These subsystems follow our fundamental philosophy of running everything close to the data to provide high performance with a minimal footprint.
Finally, around the subsystems, we have built a smart data fabric that enables customers to solve complex problems in a single stack. The following sections explore these layers and how they interact to give a better sense of what makes InterSystems IRIS technology so special.
Famous for its performance, the core of InterSystems technology is a highly efficient mechanism for data storage, indexing, and access. Unlike other database providers, we do not provide a natively relational or document database. We use an underlying storage format called globals. They are modeled in a highly optimized, multi-dimensional array-style format that is built as a B+ tree that is automatically indexed with every operation. Built at a layer below data models — such as relational, object, or document — a single storage format is projected into different data formats and models. This is referred to as a Common Data Plane.
The underlying global format is highly efficient and translatable to many different data models:
Globals (denoted with an up-caret “^” prefix) can have many subscripts, each of which can be numeric, alphanumeric, or symbolic. Globals are powerful and represent data in a general way that simultaneously supports many data paradigms with a single copy of the data. Cases like associative and sparse arrays are easy to process in this approach. We also encode in the storage format itself, using encodings (denoted with a dollar sign “$” prefix) that provide a small footprint and low latency because of the disk and I/O optimizations. The format of these encodings is the same in memory, on disk, or on the wire. This minimizes the transformations involved in ingesting data and achieves the amazing speeds expected from an in-memory database, but with the persistence typical of a disk-based database.
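As a small, hypothetical illustration (the global name ^Member is invented), data in a global is addressed directly by subscripts, and every Set is indexed as it happens:
// Sparse, hierarchical storage: subscripts can be numeric or alphanumeric
Set ^Member(1, "name") = "Maria"
Set ^Member(1, "age") = 35
Set ^Member(2, "name") = "Julio"
// Direct keyed access, with no separate index-maintenance step
Write ^Member(1, "name")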
An example of how a single global can support multiple data models is illustrated by a case where you are using SQL or BI tools and want to access the data in a relational format, in tables with rows and columns. If you are doing object-oriented development, however, we automatically project those objects into globals and subsequently project that data into relational format. Similarly, we can project JSON or other document formats into a relational form.
This capability means that rather than having multiple data stores (one relational, another object, and another document) and stitching them together, we have one copy projected into all these different forms, without duplication, moving, or mapping. From this also comes a convenient combination of schema-on-write and schema-on-read. As with a data lakehouse, you can defer schema decisions in the way a data lake allows, inserting the data first and figuring out the best schema for that data based on its current use. This global structure works well for structured data, as well as for documents and semi-structured or unstructured data.
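As a hedged sketch of that projection (the class and properties are invented), a persistent class defined once is stored in globals and is immediately visible both as objects and as a relational table:
Class Demo.Person Extends %Persistent
{
Property Name As %String;
Property Age As %Integer;
}
After compiling, Set p = ##class(Demo.Person).%New(), Set p.Name = "Maria", Do p.%Save() persists an object that the query SELECT Name, Age FROM Demo.Person sees at once: one copy of the data serving both models.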
A few encodings, engineered very tightly, are used to store data and indices efficiently.
While lists are the default storage encoding, InterSystems IRIS may represent data and indices in one or more of these encodings based on the data characteristics and/or the developers’ specifications. Vectors store a large number of values of the same datatype efficiently and are used for columnar storage in analytics, for vector search, for time series, and for more specialized cases. Packed-value arrays (known as $pva) are ideal for document-oriented storage. Bitmaps are used for Boolean data and for highly efficient bitmap indices.
All these data structures are automatically indexed in a highly optimized update upon every operation. Many successful customers have used built-in indexing to carry out low-latency, fully transactional steps, like the “billion database operations per second” mentioned earlier. Such consistent indexing, performed almost instantly, gives us consistent, low-latency access to all data in any format. Multi-model facilities made possible by the underlying global format are virtually instantaneous because there is only one copy of the data to change, and thus no time or space needed for data replication. This also grants major advantages in ingestion speed, reliability, and scale-out.
The system can combine encodings. The multi-lingual capability that globals provide means that you can work in the programming language of your choice, with effortless access to all needed formats. Clearly true with relational access through standards like JDBC and ODBC, it is also true of automatic matching of objects in .NET or in Java to an underlying format. From a development perspective, you do not need to worry about the object relational mapping; you just work with an object, and we take care of the storage format.
Around the core data engine is layered a distributed cache coming with built-in consistency guarantees. This cache uses our Enterprise Cache Protocol, or ECP, and satisfies textbook guarantees for consistency under distributed data and failure. ECP builds in these consistency rules to maintain data integrity across a distributed system even in the presence of failures, encapsulating them directly.
In other words, the performance of the distributed data stays high, even at scale. You can spread these ECP nodes for horizontal scaling, managing higher throughput. You can also spread them for data distribution, meaning that you can have in-memory performance without having to live within the memory available for any node.
ECP works especially well in the cloud because of its scale-out. We’ve built that into our InterSystems Kubernetes Operator (IKO) to provide auto scaling, and we can transparently add and remove ECP nodes for the application. Scaling out like this is essentially linear, and you can independently scale out the ingestion versus the data processing versus the data storage and optimize for your workload. Because ECP is robust to changes in topology, a node can die without affecting transaction processing. You can add nodes on the fly, and they can pick up the load. That provides seamless elasticity, meaning you can size things dynamically and enjoy a net lower cost. ECP is transparent to the application; no changes are needed to scale out any application. Customers also have the flexibility to associate specific workloads with specific sets of nodes in an InterSystems IRIS cluster. For example, reporting or analytics workloads might be assigned to one pod, and transaction-heavy workloads to another.
The next layer of InterSystems IRIS architecture is a built-in interoperability subsystem. It integrates data across messages, devices, and different APIs. It also integrates bulk data, in either the ETL or the ELT (extract-transform-load or extract-load-transform) patterns. InterSystems IRIS Interoperability uses the Common Data Plane as a built-in repository for all elements of message handling and data integration. This benefits from the performance and reliability of the first two layers, as well as the multi-model capabilities. For example, bulk structured data tends to be relationally oriented, and many messaging protocols tend to be document oriented.
By default, interoperability is persistent, meaning that data, messages, and transformations are stored within the system for auditing, replay, and analytics. Unlike many other interoperability middleware offerings, delivery can be guaranteed, traced, and audited across the board. You can confirm that a message was delivered or see who sent what to whom - the type of information important for both analytics and forensics. The general paradigm for InterSystems IRIS Interoperability is object-oriented. This aids in the creation and maintenance of adapters: object inheritance minimizes the effort required to build and test any needed custom adapters. It also helps with the creation and maintenance of data transformations. As shown in Figure 8, use of a common object can dramatically reduce the number of transformations needed between different data formats or protocols. Rather than building and maintaining a data transformation for each pair of formats, a single transformation from each data format into a common object is a simpler approach that is easier to test and maintain.
Within the InterSystems IRIS Interoperability subsystem, there is a wide range of integration scenarios across messages, devices, and APIs.
This interoperability includes built-in full lifecycle API management, streaming facilities, IoT integration, compatibility with cloud services, and more. We also provide dynamic gateways in multiple languages, enabling high performance integration of existing applications into these data flows in the language of your choice.
InterSystems IRIS Interoperability sits alongside a set of built-in analytics and AI facilities.
Each of these capabilities runs “close to the data,” meaning that in general we bring processing to the data rather than, at considerable cost and delay, move data to the processing.
Several analytics facilities have been built into InterSystems IRIS. One is InterSystems IRIS BI, a MOLAP-type, cube-based architecture for business intelligence (BI), optimized for latency. Because this subsystem is built into InterSystems IRIS, cube updates can be triggered by SQL events, with only 10-20 milliseconds from data to dashboard. Having a single copy of data across transactions and analytics helps keep this latency low. And because ECP allows one set of nodes to run analytics in isolation from the transactional workload, analytics poses no risk to transactional responsiveness, while there is never a need for more than a single copy of the data.
Another facility is Adaptive Analytics, which, unlike InterSystems IRIS BI, does not use prebuilt cubes: it dynamically optimizes and builds virtual cubes as it goes. Adaptive Analytics is a ROLAP-type headless analytics facility that integrates seamlessly with all leading BI tools, such as Tableau, Power BI, Qlik, Excel, and others.
Alongside the analytics facilities are several ML and AI facilities.
IntegratedML allows you to build automated machine learning (AutoML)-style models using SQL. You simply write SQL commands to create, train, and validate the model and to predict with it, and the results can be used directly in SQL. Thus, developers familiar with SQL can use ML predictions in their applications.
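A hedged sketch of that flow, run here from embedded Python; the Demo.* tables and the model name are hypothetical, and IntegratedML must be enabled in your instance:

    import iris

    # Define, train, and validate a model with plain SQL
    iris.sql.exec("CREATE MODEL FlightDelay PREDICTING (Delayed) FROM Demo.FlightsTrain")
    iris.sql.exec("TRAIN MODEL FlightDelay")
    iris.sql.exec("VALIDATE MODEL FlightDelay FROM Demo.FlightsTest")

    # Use the prediction like any other SQL expression
    for row in iris.sql.exec("SELECT TOP 5 PREDICT(FlightDelay) AS Delayed FROM Demo.FlightsNew"):
        print(row[0])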
Python sits directly within the kernel of the data platform, so it runs directly against the data with maximum performance. You do not need to port from a development or lab environment where you build models into a production environment where you run those models. You can build and run in the same cluster and therefore have assurance that what you have built and what you run use the same data in the same format and are therefore consistent. Data science projects are simple and fast.
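As a small illustration of that in-kernel execution, embedded Python can call ObjectScript classes and run SQL in the same process as the data; the table name here is hypothetical:

    import iris  # the bridge into the hosting IRIS kernel

    # Call an ObjectScript class method directly
    print(iris.cls("%SYSTEM.Version").GetVersion())

    # Run SQL against local data without leaving the process
    for row in iris.sql.exec("SELECT COUNT(*) FROM Sample.Person"):
        print("rows:", row[0])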
InterSystems IRIS embedded vector search capabilities let you search unstructured and semi-structured data. Data is converted to vectors (or embeddings), then stored and indexed in InterSystems IRIS for semantic search, retrieval-augmented generation (RAG), text analysis, recommendation engines, and other use cases.
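A loosely sketched example of what this can look like in SQL, run here from embedded Python. The VECTOR type, TO_VECTOR, and VECTOR_COSINE below are assumed from recent IRIS releases, and the table and the tiny three-dimensional embeddings are purely illustrative; real embeddings would come from an embedding model.

    import iris

    iris.sql.exec(
        "CREATE TABLE Demo.Docs (Text VARCHAR(500), Embedding VECTOR(DOUBLE, 3))")
    iris.sql.exec(
        "INSERT INTO Demo.Docs (Text, Embedding) "
        "VALUES ('hello world', TO_VECTOR('0.12,0.98,0.33', DOUBLE))")

    # Rank stored rows by cosine similarity to a query embedding
    rs = iris.sql.exec(
        "SELECT TOP 3 Text FROM Demo.Docs "
        "ORDER BY VECTOR_COSINE(Embedding, TO_VECTOR('0.10,0.95,0.40', DOUBLE)) DESC")
    for row in rs:
        print(row[0])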
These layers - the core data engine, the ECP layer for scale-out, interoperability, and our analytics facilities - are part of our unique ability to power a Smart Data Fabric architecture. Data fabric is an architectural pattern that provides common governance over a wide variety of data and data sources. A common pattern for a data fabric is to bring in data from multiple sources; normalize, deduplicate, cross-correlate, and improve the data; and then make it available to a variety of different applications:
Within most data fabrics, there are multiple capabilities including ingestion, pipelining, metadata, and more. What makes the InterSystems approach smart is the inclusion of analytics and AI within the data fabric:
One of the key tenets of InterSystems technology is “connect or collect.” Some facilities within InterSystems IRIS, like foreign tables or federated tables, let you work or “connect” with data where it lies. Or you can choose to collect that data.
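As an example of the "connect" side, a foreign table can expose an external CSV file to SQL and query it in place. The DDL below is a sketch from memory (server, table, and file names are hypothetical), so check the IRIS SQL reference for the exact syntax supported by your version:

    import iris

    # "Connect": define an external source and a foreign table over it
    iris.sql.exec(
        "CREATE FOREIGN SERVER Demo.CSVServer FOREIGN DATA WRAPPER CSV HOST '/data/external/'")
    iris.sql.exec(
        "CREATE FOREIGN TABLE Demo.Visits (PatientId INTEGER, VisitDate DATE) "
        "SERVER Demo.CSVServer FILE 'visits.csv'")

    # Query the file where it lies; the same SELECT could feed an INSERT to "collect" it
    for row in iris.sql.exec("SELECT COUNT(*) FROM Demo.Visits"):
        print("external rows:", row[0])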
InterSystems IRIS is agnostic with respect to cloud provider and runs on premises, in the cloud of your choice, in heterogeneous and hybrid scenarios, or in multicloud environments. The fastest growing part of our business is our cloud services, which are available across multiple clouds. The flexibility to run wherever you want to deploy is key. That distinguishes InterSystems IRIS from, for example, the facilities provided by the cloud vendors themselves or many of the current options for data warehouses. You can run InterSystems IRIS and applications built with it wherever you want. Of course, InterSystems IRIS itself is available as a cloud managed service.
More articles on the subject:
Data models in InterSystems IRIS
Four Database APIs
ECP and Process Management API
How to develop in IRIS BI (also with change management)
Building Analytics Solution with IRIS
Using embedded python
Vector search and RAG (Retrieval Augmented Generation) models
Source: InterSystems IRIS Data Platform: Architecture Guide
Excellent overview!
Nice!
Announcement
Jeff Fried · Mar 10
Using OpenEHR with InterSystems IRIS
Occasionally, we get questions about using OpenEHR with InterSystems. Typically, these discussions focus on why and how an organization wants to implement OpenEHR in building applications. Here’s a brief guide:
InterSystems is focused on interoperability: we prioritize interoperability through standards such as HL7, IHE, DICOM, and ISO. In our experience, no single standard addresses all the needs of complex healthcare data. Thus, we recommend that any implementation of OpenEHR be evaluated in conjunction with these standards and an analysis of which of your scenarios are best addressed by each standard.
OpenEHR on InterSystems IRIS: for organizations building applications or data products using the OpenEHR model, InterSystems IRIS is an ideal platform. It offers multi-model capabilities, scalability, high performance, and reliability. InterSystems IRIS for Health adds the flexibility to use many other healthcare standards including HL7 FHIR in the same application.
Interoperability with OpenEHR: OpenEHR began before HL7 FHIR, but FHIR now covers many of the original OpenEHR use cases. FHIR has broad implementation, including use by major EHR systems. Therefore, the need to connect OpenEHR-based and non-OpenEHR-based systems is increasing; InterSystems technology excels at data mapping to and from OpenEHR.
In summary, if you plan to use OpenEHR, consider InterSystems IRIS for Health as your platform. Define your scenarios first to decide where OpenEHR fits best.
This is a helpful overview! It effectively highlights the strengths of InterSystems IRIS for organizations considering OpenEHR. One point worth expanding would be practical examples or case studies where InterSystems IRIS has successfully integrated OpenEHR with other healthcare standards like HL7 FHIR. Real-world scenarios could offer valuable insights for those evaluating the best approach for their healthcare data solutions.
The most popular OpenEHR GitHub repositories have 200-300 stars... No one uses it and no one needs it
Announcement
Anastasia Dyubaylo · Apr 7
Hi Community,
It's time to announce the winners of the AI Programming Contest: Vector Search, GenAI and AI Agents!
Thanks to all our amazing participants who submitted 15 applications 🔥
Now it's time to announce the winners!
Experts Nomination
🥇 1st place and $5,000 go to the bg-iris-agent app by @geotat, @Elena.Karpova, @Alena.Krasinskiene
🥈 2nd place and $2,500 go to the mcp-server-iris app by @Dmitry.Maslennikov
🥉 3rd place and $1,000 go to the langchain-iris-tool app by @Yuri.Gomes
🏅 4th place and $500 go to the Facilis app by @Henrique, @henry, @José.Pereira
🏅 5th place and $300 go to the toot app by @Alex.Woodhead
🌟 $100 go to the iris-AgenticAI app by @Muhammad.Waseem
🌟 $100 go to the iris-easybot app by @Eric.Fortenberry
🌟 $100 go to the oncorag app by Patrick Salome
🌟 $100 go to the AiAssistant app by @XININGMA
🌟 $100 go to the iris-data-analysis app by @lando.miller
Community Nomination
🥇 1st place and $1,000 go to the AiAssistant app by @XININGMA
🥈 2nd place and $600 go to the bg-iris-agent app by @geotat, @Elena.Karpova, @Alena.Krasinskiene
🥉 3rd place and $300 go to the iris-data-analysis app by @lando.miller
🏅 4th place and $200 go to the Facilis app by @Henrique, @henry, @José.Pereira
🏅 5th place and $100 go to the langchain-iris-tool app by @Yuri.Gomes
Our sincerest congratulations to all the participants and winners!
Join the fun next time ;) 
Congratulations!!! Thanks experts and community. I learned a lot about the InterSystems AI stack.
Congrats to all participants! 👏🏽
Special congrats to Brazilian DC members @Yuri Marx, @Henrique Dias, @Henry Pereira, @José Pereira. Well done! Good job! 🥳 👍🏽
Congrats to all the winners! Amazing stuff, so much to learn from you all!
Thanks!! Thanks @Danusa.Ferreira
Congratulations to the winners! But also congratulations to all the contestants; it was really difficult voting since there were so many great applications!!
Thanks
Congratulations everyone! Great result.
Congratulations to everyone!! You have created such great applications 😉
Congratulations winners!!
Announcement
Anastasia Dyubaylo · Apr 11
Hi Community,
We're happy to announce that registration for the event of the year — InterSystems Ready 2025 — is now open. This is the Global Summit we all know and love, but with a new name!
➡️ InterSystems Ready 2025
🗓 Dates: June 22-25, 2025
📍 Location: Signia Hilton Bonnet Creek, Orlando, FL, USA
InterSystems READY 2025 is a friendly and informative environment for the InterSystems community to meet, interact, and exchange knowledge.
The READY 2025 event includes:
Sessions: Three and a half days of sessions geared to the needs of software developers and managers. Sessions repeat, so you don’t have to miss out as you build your schedule.
Inspiring keynotes: Presentations that challenge your assumptions and highlight new possibilities.
What’s next: In the keynotes and breakout sessions you’ll learn what’s on the InterSystems roadmap, so you’ll be ready to go when new tech is released.
Networking: Meet InterSystems executives, members of our global product and innovation teams, and peers from around the world to discuss what matters most to you.
Workshops and personal training: Dive into exactly what you need with an InterSystems expert, including one-on-ones.
Startup program: Demonstrate your tech, connect with potential buyers, and learn how InterSystems can help you accelerate growth of your business.
Partner Pavilion: Looking for a consultant, a systems integrator, or tools to simplify your work? It’s all in the pavilion.
Fun: Demos and Drinks, Tech Exchange, and other venues.
Learn more about pricing on the official website, and don't forget that the super early bird discount ends on April 16th!
We look forward to seeing you at the InterSystems Ready 2025!
Article
Stav Bendarsky · Feb 3
Monitoring your IRIS deployment is crucial. With the deprecation of System Alert and Monitoring (SAM), a modern, scalable solution is necessary for real-time insights, early issue detection, and operational efficiency. This guide covers setting up Prometheus and Grafana in Kubernetes to monitor InterSystems IRIS effectively.
This guide assumes you already have an IRIS cluster deployed using the InterSystems Kubernetes Operator (IKO), which simplifies deployment, integration, and management.
Why Prometheus and Grafana?
Prometheus and Grafana are widely adopted tools for cloud-native monitoring and visualization. Here’s why they are a good fit:
Scalability: Prometheus handles large-scale data ingestion efficiently.
Alerting: Customizable alerts via Prometheus Alertmanager.
Visualization: Grafana offers rich, customizable dashboards for Kubernetes metrics.
Ease of Integration: Seamlessly integrates with Kubernetes workloads.
Prerequisites
Before starting, ensure you have the following:
Basic knowledge of Kubernetes and Linux
kubectl and helm installed.
Familiarity with Prometheus concepts (refer to the Prometheus documentation for more information).
A deployed IRIS instance using the InterSystems Kubernetes Operator (IKO); refer to this article for deployment steps.
Step 1: Enable Metrics in InterSystems IRIS
InterSystems IRIS exposes metrics via /api/monitor/ in the Prometheus format. Ensure this endpoint is enabled:
Open the Management Portal.
Go to System Administration > Security > Applications > Web Applications.
Ensure /api/monitor/ is listed and enabled so that it is accessible to Prometheus.
Verify its availability by accessing:
http://<IRIS_HOST>:<PORT>/api/monitor/metrics
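You can also script this check; in the snippet below, the host, port, and credentials are placeholders for your deployment:

    import requests

    resp = requests.get(
        "http://iris-host:52773/api/monitor/metrics",  # replace with your host and port
        auth=("_SYSTEM", "SYS"),  # only needed if the endpoint requires authentication
        timeout=5,
    )
    resp.raise_for_status()
    # The body should contain Prometheus-format lines prefixed with "iris_"
    print(resp.text[:500])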
Step 2: Deploy Prometheus Using Helm
Deploying Prometheus using Helm provides an easy-to-manage monitoring setup. We will use the kube-prometheus-stack chart that includes Prometheus, Alertmanager, and Grafana.
Prepare the configuration: Create a values.yaml file with the following settings:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'intersystems_iris_metrics'
        metrics_path: '/api/monitor/metrics'
        static_configs:
          - targets:
              - 'iris-app-compute-0.iris-svc.commerce.svc.cluster.local:80' # Replace with your IRIS service
      # To scrape custom metrics from the REST API created in IRIS
      - job_name: 'custom_iris_metrics'
        metrics_path: '/web/metrics'
        static_configs:
          - targets:
              - 'commerce-app-webgateway-0.iris-svc.commerce.svc.cluster.local:80'
        basic_auth:
          username: '_SYSTEM'
          password: 'SYS'
Explanation:
iris-app-compute-0.iris-svc.commerce.svc.cluster.local:80: The target should follow the convention <pod-name>.<service-name>.<namespace>.svc.cluster.local:80. Replace <pod-name> with your IRIS pod (choosing whether to scrape compute or data pods) and adjust the service name and namespace as needed.
basic_auth section: If authentication is required to access the IRIS metrics endpoint, provide the necessary credentials.
Add the Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Install Prometheus using Helm:
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring --create-namespace -f values.yaml
Verify the deployment:
kubectl get pods -n monitoring
Step 3: Custom Metrics with REST API
You can create a custom metrics REST service that serves your application metrics. In this guide, I provide an example of a simple REST class that extracts system metrics from IRIS itself, but you can build your own with your own custom metrics; just make sure they are in the Prometheus exposition format.
CustomMetrics.REST
Class CustomMetrics.REST Extends %CSP.REST
{

Parameter HandleCorsRequest = 1;

ClassMethod Metrics() As %Status
{
    Try {
        Do %response.SetHeader("Content-Type", "text/plain; version=0.0.4; charset=utf-8")
        // Switch to %SYS to read system statistics
        New $Namespace
        Set $Namespace = "%SYS"
        Set ref = ##class(SYS.Stats.Dashboard).Sample()
        Write "# HELP iris_license_high Peak number of licenses used", $CHAR(10)
        Write "# TYPE iris_license_high gauge", $CHAR(10)
        Write "iris_license_high ", ref.LicenseHigh, $CHAR(10)
        Write "# HELP iris_active_processes Number of active processes", $CHAR(10)
        Write "# TYPE iris_active_processes gauge", $CHAR(10)
        Write "iris_active_processes ", ref.Processes, $CHAR(10)
        Write "# HELP iris_application_errors Number of application errors", $CHAR(10)
        Write "# TYPE iris_application_errors counter", $CHAR(10)
        Write "iris_application_errors ", ref.ApplicationErrors, $CHAR(10)
        Return $$$OK
    } Catch ex {
        Do %response.SetHeader("Content-Type", "text/plain")
        Write "Internal Server Error", $CHAR(10)
        Do $System.Status.DisplayError(ex.AsStatus())
        Return $$$ERROR($$$GeneralError, "Internal Server Error")
    }
}

XData UrlMap
{
<Routes>
<Route Url="/metrics" Method="GET" Call="Metrics"/>
</Routes>
}

}
Deploy this as a REST service under a new web application in IRIS, and add its path to the Prometheus scrape configuration (the custom_iris_metrics job above scrapes it at /web/metrics).
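As an optional sanity check, you can fetch the endpoint and parse its output with the official Prometheus client library to confirm the exposition format is valid; the host, path, and credentials below are placeholders (pip install requests prometheus-client first):

    import requests
    from prometheus_client.parser import text_string_to_metric_families

    body = requests.get(
        "http://iris-host:52773/web/metrics",  # the custom metrics web application
        auth=("_SYSTEM", "SYS"),
        timeout=5,
    ).text

    # Parsing fails loudly if the output is not valid Prometheus exposition text
    for family in text_string_to_metric_families(body):
        for sample in family.samples:
            print(sample.name, sample.value)  # e.g. iris_license_high, iris_active_processes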
Step 4: Verify Prometheus Setup
Open the Prometheus UI (http://<PROMETHEUS_HOST>:9090).
Go to Status > Targets and confirm IRIS metrics are being scraped.
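This check can also be scripted against the Prometheus HTTP API; the host below is a placeholder:

    import requests

    r = requests.get(
        "http://prometheus-host:9090/api/v1/query",
        params={"query": 'up{job=~"intersystems_iris_metrics|custom_iris_metrics"}'},
        timeout=5,
    )
    # Each result's value is "1" when the target is being scraped successfully
    for result in r.json()["data"]["result"]:
        print(result["metric"]["job"], "up =", result["value"][1])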
Step 5: Access Grafana
With Prometheus scraping IRIS metrics, the next step is to visualize the data using Grafana.
1. Retrieve the Grafana service details:
kubectl get svc -n monitoring
If you’re using an ingress controller, you can access Grafana using the configured hostname (e.g., http://grafana.example.com). Otherwise, you can use the following options:
Port Forwarding: Use kubectl port-forward to access Grafana locally:
kubectl port-forward svc/monitoring-grafana -n monitoring 3000:80
Then, access Grafana at http://localhost:3000.
NodePort or ClusterIP: Refer to the NodePort or ClusterIP service details from the command output to connect directly.
Step 6: Log In to Grafana
Use the default credentials to log in:
Username: admin
Password: prom-operator (or the password set during installation).
Step 7: Import a Custom Dashboard
I’ve created a custom dashboard specifically tailored for InterSystems IRIS metrics, which you can use as a starting point for your monitoring needs. The JSON file for this dashboard is hosted on GitHub for easy access and import: Download the Custom Dashboard JSON
To import the dashboard:
Navigate to Dashboards > Import in Grafana.
Paste the URL of the JSON file into the Import via panel JSON field or upload the file directly.
Assign the dashboard to a folder and Prometheus data source when prompted.
Once imported, you can edit the panels to include additional metrics, customize the visualizations, or refine the layout for better insights into your IRIS environment.
Conclusion
By following this guide, we've successfully set up Prometheus to scrape InterSystems IRIS metrics and visualize them using Grafana. Additionally, you can explore other monitoring tools such as Loki to monitor logs efficiently, and configure alerts using Alertmanager or external services like PagerDuty and Slack. If you have any questions or feedback, feel free to reach out!
That's really helpful, I appreciate it! Thanks!
Better than a lot of the other alternatives.
Great guide! Clear, concise, and super helpful for setting up Prometheus and Grafana with IRIS. Well done!
Excellent step-by-step that so many of our customers have been asking about since the deprecation of SAM - thank you Stav!
Glad it was useful for you. Thank you, Ari! Glad that it will address the needs from SAM's deprecation. @Daniel.Kutac - you asked for that
Really helpful, thank you Stav! I appreciate the attention to not-so-obvious details, great work.
Excellent! Thank you @Evgeny.Shvarov for pointing me at this article. And thank you Stav for a great article!
Question
Irina Yaroshevskaya · Feb 8
Where is the InterSystems IRIS download for Windows (preferably free)?
You can download the latest Community Edition of InterSystems IRIS and IRIS for Health at https://evaluation.intersystems.com/
Is the following article helpful?
https://community.intersystems.com/post/how-download-and-install-intersystems-iris
Announcement
Liubov Zelenskaia · Feb 28
InterSystems’ team is heading to the EHH 2025 hackathon, taking place on March 7-9, 2025!
EHH is an international hackathon tackling some of the most pressing healthcare challenges today. It brings hackers, students, entrepreneurs, healthcare professionals, and industry experts together to create new ideas and technologies for diabetology, surgery, transplantology, and patient care and comfort. You can apply for free, as an individual or as a team of 3 members. Interested in becoming a mentor? Send us a direct message!
Announcement
Liubov Zelenskaia · Feb 28
InterSystems’ team is heading to the GrandHack 2025 hackathon, taking place on March 14-16, 2025!
MIT Hacking Medicine's GrandHack 2025 is an annual flagship event that brings together innovators to tackle pressing healthcare challenges. This year's hackathon features tracks in Assistive Technology, Heart Health, and Transformative Intelligence, aiming to develop solutions that enhance patient care and outcomes. Interested in becoming a mentor? Send us a direct message!