Announcement
Anastasia Dyubaylo · Apr 16
Hi Community!
We know that every developer has small side projects — apps where you experiment with new technologies, test ideas before implementing them in bigger solutions, or just build something fun for the sake of curiosity. But what if one of those projects could take you all the way to the InterSystems READY 2025?
We’re launching a unique opportunity: show us your passion, creativity, and love for IRIS, and win a free pass to the InterSystems READY + hotel accommodation!
The rules are simple: upload your fun IRIS-based side project to Open Exchange, and record a short inspirational video about why you should be the one to get the pass to THE event of the year and win!
Duration: April 21 - May 04, 2025
Prizes: hotel accommodation and free passes to the InterSystems READY 2025!
🛠 What do you need to do?
Upload a fun IRIS-based side project to Open Exchange (How to publish an application on Open Exchange). Be creative - it can be useful, quirky, fun, or just something you’ve always wanted to try.
Record a short inspirational video (up to 5 minutes):
Tell us how InterSystems technologies or the Developer Community impacted your project or career.
Explain why YOU should get a ticket to the InterSystems READY 2025.
Submit your video and link to your app via this form.
🧠 General Requirements for Applications
An application or library must be fully functional. It should not be an import or a direct interface for an already existing library in another language (except for C++). It should not be a copy-paste of an existing application or library.
Accepted applications: new to Open Exchange. Our team will review all applications before approving them.
The application should work on IRIS, IRIS for Health, or IRIS Cloud SQL. The first two can be downloaded as host (Mac, Windows) versions from the Evaluation site, or used as containers pulled from the InterSystems Container Registry or the Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
The application should be Open Source and published on GitHub or GitLab.
The README file for the application should be in English, contain the installation steps, and include a video demo and/or a description of how the application works.
An application can have only one author.
NB. Our experts will have the final say in whether the application is approved for this initiative based on the criteria of creativity and originality of approach. Their decision is final and not subject to appeal.
👨💻 Who Can Participate
Any Developer Community member, except InterSystems employees or contractors. Create an account!
🏆 Prizes
🥇 1st place: free pass to InterSystems READY 2025 + hotel accommodation
🥈 2nd & 3rd places: free pass to InterSystems READY 2025
❗️The prize is not exchangeable for cash or any other alternative.
✨ Bonus Points
You can boost your chances of winning by submitting additional activities:
Publish a tech article about your project
Create an extra video demo of your project
Share your inspirational video on your social media using hashtag #Ready2025 and mention @InterSystems and @InterSystemsDev
Get inspired. Build something fun. Share your story.
It’s your time to shine - CODE your way to InterSystems READY 2025!
Hey Developers,
There's still time to participate in this challenge and win a pass and hotel accommodation for InterSystems READY 2025! Don't miss out on your chance to be the winner!
I just submitted my new app jupyter-for-money for approval on Open Exchange.
Hi! Don't forget to fill in the form.
I have submitted the form with my video. Thank you for doing this contest!
Great challenge and I had fun creating my wp-iris-project.
Hi @ShawntelleCoker! Please fill out this form to participate in the challenge.
I submitted the form.
Question
Ali Chaib · Feb 7
I understand that InterSystems provides functions to facilitate transactions between FHIR and HL7 via the SDA segment. My question is:
Does this transformation only work when InterSystems receives FHIR requests and converts them into HL7, or does it also support responses?
Specifically, if our operation sends a GET request to a broker and receives a FHIR response, does InterSystems support transforming this response into an SDA segment automatically?
Or should we manually parse and modify the response to handle it according to our needs?
What would be the best approach in this scenario? Any insights or best practices would be greatly appreciated!
Thanks in advance.
Hello @Ali.Chaib2396,
SDA, or Summary Document Architecture, is an intermediary format. It's used to easily convert between multiple data formats, such as HL7 V2, C-CDA, C32, HL7 FHIR, and others.
I assume you are sending the GET request via an HTTP operation in a FHIR interoperability production, so you get back an HTTP response. There is no predefined transformation available for this; you need to convert the response programmatically. In the example below, I convert a FHIR OperationOutcome response directly into a FHIR R4 OperationOutcome object.
ClassMethod fhirresponse()
{
#; Assume this JSON is the FHIR response payload
Set fhirResponse = {"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"not-found","diagnostics":"<HSFHIRErr>ResourceNotFound","details":{"text":"No resource with type 'Task' and id '33'"}}]}
#dim opoutcome As HS.FHIRModel.R4.OperationOutcome = ##Class(HS.FHIRModel.R4.OperationOutcome).fromDao(fhirResponse)
Write opoutcome.issue
}
Thank you @Ashok.Kumar
Yes, I fully understand that SDA (Summary Document Architecture) is an intermediary format. And yes, I am using the HTTP operation in my interoperability production.
When you say "You need to programmatically convert the response", do you mean:
Convert it into an object, or
Convert it into SDA?
If the goal is to convert it into an object, does that mean I should manually read the JSON entities/nodes, extract the relevant information, and transform it directly into an HL7 message—bypassing SDA entirely?
If the goal is to convert it into SDA, does that mean I must first transform the FHIR response into a FHIR request? If so, how? Because FHIR responses often contain additional fields that do not appear in FHIR requests, such as:
"type": "searchset",
"total": 1,
"search": {
"mode": "match"
}
I’m asking these questions because, as you know, InterSystems provides built-in functionalities to automatically convert FHIR requests into SDA, such as:
Process: HS.FHIR.DTL.Util.HC.FHIR.SDA3.Process
Transform class: HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3
These components accept FHIR requests and successfully transform them into SDA.
However, when handling FHIR responses, I tried extracting the payload, creating a new message, and sending it to HS.FHIR.DTL.Util.HC.FHIR.SDA3.Process. Unfortunately, the transformation failed—possibly because the FHIR response contains additional fields that the built-in functions don’t recognize.
It's worth noting that when I send a FHIR request to the exact same process, it transforms correctly into SDA (without these extra fields).
Hello @Ali.Chaib2396,
FHIR to SDA
Yes, there are built-in transformation classes for FHIR ⇆ SDA: HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3 converts FHIR to SDA3, and HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR converts SDA3 back to FHIR. If your goal is to convert FHIR to SDA objects inside a production, you can use the business process (BP) class HS.FHIR.DTL.Util.HC.FHIR.SDA3.Process, but note that it will forward the request to a business operation again, based on the BP configuration. Otherwise, programmatically construct the input and pass the required values to the TransformStream method of the transformation class used inside that FHIR-to-SDA conversion business process.
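A minimal sketch of that programmatic route (this assumes the TransformStream arguments and the container property as I recall them from the class reference of HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3; please verify the exact signature against your IRIS for Health version):
ClassMethod FhirStreamToSDA3(fhirStream As %Stream.Object) As %Status
{
    // Transform a FHIR R4 JSON stream into an SDA3 container
    Set transformer = ##class(HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3).TransformStream(fhirStream, "JSON", "R4")
    // The resulting SDA3 container is exposed on the returned transformer object
    Set container = transformer.container
    Write container.%ClassName(1), !
    Quit $$$OK
}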
SDA3 to HL7 conversion
From the documentation: "There is no method for programmatically converting from SDA to HL7 v2."
So you need to read the JSON response, extract the required information, and transform it directly into an HL7 message.
There are multiple DTLs available for SDA3 to HL7 v2 in the package "HS.Gateway.SDA3.SDA3ToHL7". Check whether any of them help; a rough sketch of invoking one follows below.
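A minimal sketch of calling one of those generated DTLs programmatically (the DTL class name below is a placeholder; browse the HS.Gateway.SDA3.SDA3ToHL7 package for the one that matches your SDA3 source type):
ClassMethod ConvertSDA3ToHL7(sdaObject As %RegisteredObject) As %Status
{
    // "Patient" is a placeholder DTL name; every generated DTL class exposes a Transform() classmethod
    Set tSC = ##class(HS.Gateway.SDA3.SDA3ToHL7.Patient).Transform(sdaObject, .hl7Message)
    If $$$ISERR(tSC) Quit tSC
    // hl7Message should be an EnsLib.HL7.Message you can route to an HL7 business operation
    Quit $$$OK
}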
Thanks!
Announcement
Bob Kuszewski · Feb 20
Welcome to the Q1’2025 quarterly platforms update.
If you’re new to these updates, welcome! The big news this quarter is the upcoming Red Hat Enterprise Linux 10 release. Read on for more on that. This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn’t be considered a committed roadmap.
With that said, on to the update…
InterSystems IRIS Production Operating Systems and CPU Architectures
Red Hat Enterprise Linux
Upcoming Changes
We’re expecting Red Hat to release RHEL 10 in late May or early June and add support for it shortly thereafter in IRIS 2025.1
RHEL 9.5 has undergone minor OS certification without incident.
Further reading: RHEL Release Page
Ubuntu
Previous Updates
We’ve completed minor OS certification for Ubuntu 24.04.01 and 22.04.3 on IRIS 2024.1 without incident
Upcoming Changes
Ubuntu 24.04.2 has just been released and minor OS certification will begin shortly.
Further Reading: Ubuntu Releases Page
SUSE Linux
Recent Update
We’ve completed minor OS certification of SUSE Linux Enterprise Server 15 SP6 without incident.
We’re planning to change how we phrase SUSE Service Pack support to just say “Any Service Pack that SUSE publicly supports” rather than list specific SPs in the documentation. We made this change for RHEL and it’s working well there, so we’ll make the same change with SUSE.
Previous Updates
General Support from SUSE for Linux Enterprise Server 15 SP3 came to an end on 12/31/2022, but extended security support will continue until December, 2025.
Further Reading: SUSE lifecycle
Oracle Linux
Upcoming Changes
We’re expecting Oracle Linux 10 to be released around the same time as RHEL 10. Since we support Oracle Linux via the IRIS RHEL kit, we’re expecting Oracle Linux 10 support at the same time as RHEL 10 support is released.
Further Reading: Oracle Linux Support Policy
Microsoft Windows
Recent Changes
Windows Server 2025 was released in November and we’re actively adding support for the platform. We’re expecting our testing to finish shortly and if all goes well, it’ll be added as a supported platform.
Upcoming Changes
Microsoft has pushed back the anticipated release date for Windows 12 to the fall of 2025. We’ll start the process of supporting the new OS after it’s been released.
Further Reading: Microsoft Lifecycle
AIX
Previous Changes
IRIS 2024.3 and up will only support OpenSSL 3. NOTE: This means that 2024.2 is the last version of IRIS that has both OpenSSL 1 and OpenSSL 3 kits. In IRIS 2023.3, 2024.1, & 2024.2, we provided two separate IRIS kits – one that supports OpenSSL 1 and one that supports OpenSSL 3. Given the importance of OpenSSL 3 for overall system security, we’ve heard from many of you that you’ve already moved to OpenSSL 3.
Further Reading: AIX Lifecycle
Containers
Previous Updates
We changed the container base image from Ubuntu 22.04 to Ubuntu 24.04 with IRIS 2024.2
We’re considering changes to the default IRIS container to, by default, have internal traffic (ECP, Mirroring, etc) on a different port from potentially externally facing traffic (ODBC, JDBC, etc). If you have needs in this area, please reach out and let me know.
InterSystems IRIS Development Operating Systems and CPU Architectures
MacOS
Upcoming Changes
Apple has released MacOS 15 and we are planning support for it in IRIS 2025.1
InterSystems Components
Previous Updates
InterSystems API Manager 2.8.4.11 & 3.4.3.11 have been released. If you’re using IAM, please upgrade soon.
InterSystems Reports 24.1 has been released.
Caché & Ensemble Production Operating Systems and CPU Architectures
Previous Updates
A reminder that the final Caché & Ensemble maintenance releases are scheduled for Q1-2027, which is coming up sooner than you think. See Jeff’s excellent community article for more info.
InterSystems Supported Platforms Documentation
The InterSystems Supported Platforms documentation is the definitive source of information on supported technologies.
IRIS 2024.1 Supported Server Platforms
IRIS 2023.1 Supported Server Platforms
IRIS 2022.1 Supported Server Platforms
Caché & Ensemble 2018.1 Supported Server Platforms
… and that’s all folks. Again, if there’s something more that you’d like to know about, please let us know.
Announcement
Anastasia Dyubaylo · Feb 25
Hi Community,
Watch this short exercise in writing basic code snippets in InterSystems ObjectScript using Copilot in VSCode and the GPT-4.0 engine. This screencast covers "Hello, World," global manipulation, class creation, and building a simple REST API application.
>> Coding InterSystems ObjectScript with Copilot <<
🗣 Presenter: @Evgeny.Shvarov, Senior Manager of Developer and Startup Programs, InterSystems
📌 The related code can be found here: objectscript-copilot-demo.
Feel free to share your thoughts or questions in the comments to this post. Enjoy!
Article
Muhammad Waseem · Feb 28
Hi Community, In this article, we will explore the concepts of Dynamic SQL and Embedded SQL within the context of InterSystems IRIS, provide practical examples, and examine their differences to help you understand how to leverage them in your applications.
InterSystems SQL provides a full set of standard relational features, including the ability to define table schema, execute queries, and define and execute stored procedures. You can execute InterSystems SQL interactively from the Management Portal or programmatically using a SQL shell interface. Embedded SQL enables you to embed SQL statements in your ObjectScript code, while Dynamic SQL enables you to execute dynamic SQL statements from ObjectScript at runtime. While static SQL queries offer predictable performance, dynamic and embedded SQL offer flexibility and integration, respectively.
Dynamic SQL
Dynamic SQL refers to SQL statements that are constructed and executed at runtime, as opposed to static SQL, which is predefined and embedded directly in the application code. Dynamic SQL is particularly useful when the structure of a query is not known in advance or needs to be dynamically adjusted based on user input or application logic.
In InterSystems IRIS, Dynamic SQL is implemented through the %SQL.Statement class, which provides methods for preparing and executing SQL statements dynamically.
Key Benefits of Dynamic SQL
Flexibility: Dynamic SQL allows you to build queries programmatically, making it ideal for applications with complex or changing requirements.
Adaptability: You can modify queries based on runtime conditions, such as user input or application state.
Ad-Hoc Queries: If the application needs to generate custom queries based on user input, Dynamic SQL allows the construction of these queries at runtime.
Complex Joins and Conditions: In scenarios where the number of joins or conditions can change based on data, Dynamic SQL enables the construction of complex queries.
Practical Examples
1- Dynamic Table Creation: Building Database Schemas on the Fly
This example demonstrates how to dynamically create a table at runtime using InterSystems Dynamic SQL, enabling flexible and adaptive database schema management.
ClassMethod CreateDynamicTable(tableName As %String, columns As %String) As %Status
{
// Construct sql text
Set sql = "CREATE TABLE " _ tableName _ " (" _ columns _ ")"
//Create an instance of %SQL.Statement
Set statement = ##class(%SQL.Statement).%New()
//Prepare the query
Set status = statement.%Prepare(sql)
If $$$ISERR(status) {
Quit status
}
//Execute the query
Set result = statement.%Execute()
//Check for errors
If result.%SQLCODE = 0 {
Write "Table created successfully!", !
} Else {
Write "Error: ", result.%SQLCODE, " ", result.%SQLMSG, !
}
Quit $$$OK
}
Invoke Method
USER>do ##class(dc.DESql).CreateDynamicTable("Books","BookID NUMBER NOT NULL,Title VARCHAR(100),Author VARCHAR(300),PublicationYear NUMBER NULL, AvailableFlag BIT")
Output
2- Dynamic Table Search: Querying Data with User-Defined Filters
This example illustrates how to perform a dynamic table search based on user-defined criteria, enabling flexible and adaptable querying.
ClassMethod DynamicSearchPerson(name As %String = "", age As %Integer = "") As %Status
{
// Create an instance of %SQL.Statement
set stmt = ##class(%SQL.Statement).%New()
// Base query
set query = "SELECT ID, Name, Age, DOB FROM Sample.Person"
// Add conditions based on input parameters
if name '= "" {
set query = query _ " WHERE Name %STARTSWITH ?"
}
if (age '= "") && (name '= "") {
set query = query _ " AND Age = ?"
}
if (age '= "") && (name = "") {
set query = query _ " WHERE Age = ?"
}
// Prepare the query
set status = stmt.%Prepare(query)
if $$$ISERR(status) {
do $System.Status.DisplayError(status)
quit status
}
// Execute the query with parameters
if (age '= "") && (name '= "") {
set rset = stmt.%Execute(name, age)
}
if (age '= "") && (name = "") {
set rset = stmt.%Execute(age)
}
if (age = "") && (name '= "") {
set rset = stmt.%Execute(name)
}
// Display results
while rset.%Next() {
write "ID: ", rset.ID, " Name: ", rset.Name, " Age: ", rset.Age, !
}
quit $$$OK
}
Invoke Method
do ##class(dc.DESql).DynamicSearchPerson("Y",67)
Output
3- Dynamic Pivot Tables: Transforming Data for Analytical Insights
This example showcases how to dynamically generate a pivot table using InterSystems Dynamic SQL, transforming raw data into a structured summary.
ClassMethod GeneratePivotTable(tableName As %String, rowDimension As %String, columnDimension As %String, valueColumn As %String) As %Status
{
// Simplified example; real pivot tables can be complex
Set sql = "SELECT " _ rowDimension _ ", " _ columnDimension _ ", SUM(" _ valueColumn _ ") FROM " _ tableName _ " GROUP BY " _ rowDimension _ ", " _ columnDimension
//Create an instance of %SQL.Statement
Set statement = ##class(%SQL.Statement).%New()
// Prepare the query
Set status = statement.%Prepare(sql)
If $$$ISERR(status) {
Quit status
}
// Execute the query
Set result = statement.%Execute()
// Check for errors
If result.%SQLCODE = 0 {
While result.%Next() {
do result.%Display()
}
} Else {
Write "Error: ", result.%SQLCODE, " ", result.%SQLMSG, !
}
Quit $$$OK
}
Invoke Method
Do ##class(dc.DESql).GeneratePivotTable("Sales", "Region", "ProductCategory", "Revenue")
Output
4- Schema Exploration: Unlocking Database Metadata with Dynamic SQL
This example demonstrates how to explore and retrieve metadata about database schemas dynamically, providing insights into table structures and column definitions.
ClassMethod ExploreTableSchema(tableName As %String) As %Status
{
// Create a new SQL statement object
set stmt = ##class(%SQL.Statement).%New()
// Construct the query dynamically
set sql = "SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA||'.'||TABLE_NAME = ?"
// Prepare the query
set status = stmt.%Prepare(sql)
if $$$ISERR(status) {
do $System.Status.DisplayError(status)
quit status
}
// Execute the query
set result = stmt.%Execute(tableName)
// Display schema information
write !, "Schema for Table: ", tableName
write !, "-------------------------"
write !, "Column Name",?15, "Data Type", ?30, "Nullable ",?45,"Column#"
write !, "-------------------------"
while result.%Next() {
write !, result.%Get("COLUMN_NAME"),?15, result.%Get("DATA_TYPE"), ?30, result.%Get("IS_NULLABLE"), ?45,result.%Get("ORDINAL_POSITION")
}
quit $$$OK
}
Invoke Method
Do ##class(dc.DESql).ExploreTableSchema("Sample.Person")
Output
Embedded SQL
Embedded SQL is a method of including SQL statements directly within your programming language (in this case, ObjectScript or another InterSystems-compatible language). Embedded SQL is not compiled when the routine that contains it is compiled. Instead, compilation of Embedded SQL occurs upon the first execution of the SQL code (runtime). It is quite powerful when used in conjunction with the object access capability of InterSystems IRIS.
You can embed SQL statements within the ObjectScript code used by the InterSystems IRIS® data platform. These Embedded SQL statements are converted to optimized, executable code at runtime. Embedded SQL is particularly useful for performing database operations such as querying, inserting, updating, and deleting records.
There are two kinds of Embedded SQL:
A simple Embedded SQL query can only return values from a single row. Simple Embedded SQL can also be used for single-row insert, update, and delete, and for other SQL operations.
A cursor-based Embedded SQL query can iterate through a query result set, returning values from multiple rows. Cursor-based Embedded SQL can also be used for multiple-row update and delete SQL operations.
Key Benefits of Embedded SQL
Seamless Integration: Embedded SQL allows you to write SQL statements directly within ObjectScript code, eliminating the need for external calls or complex interfaces.
Performance: By embedding SQL within ObjectScript, you can optimize database interactions and reduce overhead.
Simplicity: Embedded SQL simplifies the process of working with databases, as it eliminates the need for separate SQL scripts or external tools.
Error Handling: Embedded SQL allows for better error handling since the SQL code is part of the application logic.
Practical Examples
1-Record Creation: Inserting Data with Embedded SQL
This example demonstrates how to insert a new record into a table using Embedded SQL, ensuring seamless data integration.
ClassMethod AddBook(bookID As %Integer, title As %String, author As %String, year As %Integer, available As %Boolean) As %Status
{
// Embedded SQL to insert a new book
&sql(
INSERT INTO SQLUser.Books (BookID, Title, Author, PublicationYear, AvailableFlag)
VALUES (:bookID, :title, :author, :year, :available)
)
// Check for errors
if SQLCODE '= 0 {
write "Error inserting book: ", %msg, !
quit $$$ERROR($$$GeneralError, "Insert failed")
}
write "Book added successfully!", !
quit $$$OK
}
Invoke Method
Do ##class(dc.DESql).AddBook(1,"To Kill a Mockingbird","Harper Lee", 1960,1)
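To round out the two kinds of Embedded SQL described earlier, here is a minimal sketch of a cursor-based query. It reuses the dc.DESql class and the SQLUser.Books table from the examples above; treat it as an illustrative sketch rather than part of the original walkthrough.
ClassMethod ListBooksByAuthor(author As %String) As %Status
{
    // Declare a cursor over the Books table, filtered by author
    &sql(DECLARE BookCursor CURSOR FOR
        SELECT BookID, Title, PublicationYear
        INTO :bookID, :title, :year
        FROM SQLUser.Books
        WHERE Author = :author)
    &sql(OPEN BookCursor)
    // Fetch rows until SQLCODE becomes 100 (no more data) or an error occurs
    For {
        &sql(FETCH BookCursor)
        Quit:SQLCODE'=0
        Write bookID, ": ", title, " (", year, ")", !
    }
    Set fetchCode = SQLCODE
    &sql(CLOSE BookCursor)
    If (fetchCode '= 0) && (fetchCode '= 100) {
        Quit $$$ERROR($$$GeneralError, "SQL error: "_fetchCode)
    }
    Quit $$$OK
}
Invoke Method
Do ##class(dc.DESql).ListBooksByAuthor("Harper Lee")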
Announcement
Anastasia Dyubaylo · Mar 3
Hey Community,
Here are the bonuses for participants' articles that take part in the InterSystems Technical Article Contest 2025:
Title
Topic
Video
Discussion
Translation
Newbie
App
Total points
SQL Host Variables missing ?
+
+
+
+
13
Monitoring InterSystems IRIS with Prometheus and Grafana
+
+
+
+
15
REST Service in IRIS Production: When You Crave More Control Over Raw Data
+
+
6
Generation of OpenAPI Specifications
+
+
+
10
Using DocDB in SQL, almost
0
IntegratedML Configuration and Application in InterSystems IRIS
+
+
+
10
IRIS %Status and Exceptions
+
5
IRIS %Status and Exceptions Part-2
+
5
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 1 - SQL Gateway
+
+
+
10
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 2 – Python and Vector Search
+
+
+
10
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris - Part 3 – REST and Interoperability
+
+
+
10
A look at Dynamic SQL and Embedded SQL
+
+
+
+
11
Using REST API, Flask and IAM with InterSystems IRIS - Part 1 - REST API
+
+
7
Using REST API, Flask and IAM with InterSystems IRIS - Part 2 – Flask App
+
+
7
Using REST API, Flask and IAM with InterSystems IRIS - Part 3 – IAM
+
+
7
Bulk FHIR Step by Step
+
3
HTTP and HTTPS with REST API
+
+
7
JWT Creation and Integration in InterSystems IRIS
+
+
8
OMOP Odyssey - InterSystems OMOP, The Cloud Service (Troy)
+
3
OMOP Odyssey - Celebration (House of Hades)
+
3
OMOP Odyssey - FHIR® to OMOP ETL (Calypso’s Island)
+
3
OMOP Odyssey - No Code CDM Exploration with Databricks AI/BI Genie (Island of Aeolus)
+
3
REST API with Swagger in InterSystems IRIS
+
+
7
Securing HL7 Interfaces with SSL/TLS (X.509) Certificates
+
5
Modern and Easy-to-Use VSCode Plugin for InterSystems ObjectScript: Class Diagram Visualization with PlantUML
+
+
10
Creating FHIR responses with IRIS Interoperability production
+
+
+
+
15
Introducing UDP Adapter
+
5
The Case for IRIS and JavaScript
0
High-Performance Message Searching in Health Connect
+
5
Using Dynamic & Embedded SQL with InterSystems IRIS
+
3
Proposal for ObjectScript naming conventions and coding guidelines
+
5
Leveraging InterSystems IRIS for Health Data Analytics with Explainable AI and Vector Search
+
+
+
5
15
IRIS Vector Search for Matching Companies and Climate Action
+
+
+
13
Multi-Layered Security Architecture for IRIS Deployments on AWS with InterSystems IAM
0
SQLAchemy-iris with the latest version Python driver
+
5
Parallel Query Processing - (System-wide and Query-based)
+
+
8
IRIS Vector Search for Climate Matching: Introducing BAS
+
+
+
11
QueryResponse Interface Design and Development Guide Based on REST API in HealthConnect
+
5
Bonuses are subject to change upon update.
Please claim your bonuses in the comments below!
For the article "Leveraging InterSystems IRIS for Health Data Analytics with Explainable AI and Vector Search" I have also submitted the following: 1. A translated article for the French community; 2. A solution on the Open Exchange platform. Could you please account for those bonuses too? Thanks!
For the article "Parallel Query Processing - (System-wide and Query-based)", can it be considered for a topic bonus due to its relation to SQL, such as embedded SQL, dynamic SQL, and SQL in query methods? Thanks
Hi Rahul,
I added a translation bonus.
To receive an Open Exchange bonus, you need to add a link to your application on Open Exchange in the corresponding field when editing your article:
Hi Parani,
Unfortunately, in your article, there's no mention at all of either embedded or dynamic SQL. So at this point your article can't get this bonus.
Hi Irène,
The article was updated later today with the following details, as it is a common feature in any SQL: "InterSystems IRIS supports parallel processing for embedded SQL, dynamic SQL, and SQL in query methods. The %PARALLEL keyword is used in the FROM clause of a query to suggest parallel processing. The SQL optimizer will determine if the query can benefit from parallel processing and apply it where appropriate."
Did you mean that examples are needed for each, or will the content as of yesterday be considered?
Hi Parani,
That's exactly the point - in your article, you're discussing a feature of SQL in general. If the %PARALLEL keyword has some influence on the execution of Dynamic or Embedded SQL and you highlight it in the article, you will get the bonus.
Hi Irène,
I've just translated my article "Monitoring InterSystems IRIS with Prometheus and Grafana" into French; it was previously available in Portuguese from another contributor. I was wondering if I could get the translation bonus for that. Also, I believe the article should qualify for the IKO common deployments bonus, since deploying Prometheus and Grafana as described wouldn't be possible without using IKO. It relies on IKO defaults, so it fits right into that category.
Would be great if you could take a look :) Thanks, Stav
Hi Stav,
All done! Good luck.
Hi Irene, thanks for the information. I have updated the Open Exchange application link as well. Could you please update the bonus for that too? Thanks again for the continued assistance!
Not surprised to get no bonuses for an article about an important language whose integration is primarily achieved and supported by third-party, open-source products. I'm not a fan of bonuses anyway: shouldn't articles stand on their own merit, particularly if they promote innovative Community development in areas that are otherwise lacking in support?
Hi Rahul,
All done. Good luck! Thank you so much :)
Hi Irène, yes, it was available. Would be great if you could take a look :)
Hi Parani,
All done. Good luck.
Hi @Rob.Tweed! Which bonus do you think we missed? It can happen. We'll fix it. I'm glad you contributed, and I'm even more glad you have happy users who benefit from the product you created.
I don't think you missed any; that's the issue. My point is that the only articles you incentivise with such bonuses are those that focus on the products provided by InterSystems, not those developed outside by the community, which are arguably just as important.
Hi Rob! Feel free to share any suggestions you have regarding the list of proposed topics. We may consider including them in upcoming contests. Thanks!
Announcement
Anastasia Dyubaylo · Apr 22
Hey Community,
Great news for all of you who missed the Super Early Bird discount for InterSystems READY 2025! You still have a chance to get an Early Bird discount up until the 26th of May, so don't miss your chance to participate in the event of the year.
➡️ InterSystems Ready 2025
🗓 Dates: June 22-25, 2025
📍 Location: Signia Hilton Bonnet Creek, Orlando, FL, USA
Also, there are several ways you can get your passes without paying a dime!
Check out our newest competition Code Your Way to InterSystems READY 2025. The rules are simple: upload your IRIS-based side project to Open Exchange, and record a short inspirational video about why you should be the one to get the pass to THE event of the year!
Redeem the reward in your Global Masters My Rewards section if you have enough points. It comprises a ticket and a three-night hotel accommodation (Sunday, Monday, Tuesday). Please note that flights/transportation costs are not included.
We hope to see you there! Awesome! I'll be there!
Announcement
Anastasia Dyubaylo · May 20
Hi Community,
We’ve got something exciting for you — it’s time for a new demo video contest, and this time, you’re in the judge’s seat!
📺 Demo Games for InterSystems Sales Engineers 📺
For this contest, InterSystems Sales Engineers from around the world submitted short demo videos showcasing unique use cases, smart solutions, and powerful capabilities of InterSystems technologies.
Now it’s your turn! We’re opening up voting to the entire Developer Community. Your insight and perspective as developers make you the perfect experts.
👉 How to participate
Watch the demos on the Demo Games Contest Page
Vote for your favourite entries (Developer Community login required)
🗓 Contest period
Voting is open from May 26 until September 14, 2025, and winners will be announced shortly after.
New videos will be added throughout the contest — so keep checking back. You might find a favourite right at the end!
✅ How to vote:
All active members who have made a valid contribution to the Developer Community or Open Exchange — such as asking or answering questions, writing articles, or publishing applications — are eligible to vote. This includes customers, partners, and employees who registered using their corporate email addresses.
To vote:
Log in (or create an account) on the Developer Community
Visit the Contest Page
Select your top 3 favourite videos and click the “Vote” button for each
🏆 Scoring system:
1st place vote = 9 points
2nd place = 6 points
3rd place = 3 points
🔁 Votes from the Official Community Moderators* count double, so your top picks really matter!
* Moderators are indicated by a green circle around their avatar.
Let the Demo Games begin – and may the best demo win!
Announcement
Bob Kuszewski · May 21
We have a big update this quarter.
RHEL 10 was released yesterday, read on for what that means for you
2025.3 will use OpenSSL 3 across all operating systems
SUSE 15 SP6 will be the minimum OS for orgs using SUSE
The minimum CPU standards are going up in 2025.3
Older Windows Server operating systems will no longer be supported in 2025.3
If you’re new to these updates, welcome! This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn’t be considered a committed roadmap.
InterSystems IRIS Production Operating Systems and CPU Architectures
Minimum Supported CPU Architecture
In 2024, InterSystems introduced a minimum supported CPU architecture for all Intel- & AMD-based servers that allows us to take advantage of new CPU instructions to create faster versions of InterSystems IRIS. InterSystems IRIS 2025.3 will update that list to require the x86-64-v3 microarchitecture level, which requires the AVX, AVX2, BMI, and BMI2 instructions.
For users with Intel-based systems, this means that Skylake and up will be required while Haswell/Broadwell will not be supported.
For users with AMD-based systems, this means that Excavator and up will be required while Piledriver & Steamroller will not be supported.
Are you wondering if your CPU will still be supported? We published a handy article on how to look up your CPU’s microarchitecture in 2023.
Red Hat Enterprise Linux
Upcoming Changes
RHEL 10 - Red Hat released RHEL 10 on May 20th. We’ve been testing on the latest beta of RHEL 10 on InterSystems IRIS 2025.1.
InterSystems IRIS 2025.1 support – We anticipate officially adding support for RHEL 10 in about a month. That’s assuming the GA version of RHEL 10 doesn’t introduce any significant problems, of course.
Moving forward – once we have support for RHEL 10 in InterSystems IRIS, we will stop supporting RHEL 8 in subsequent versions of InterSystems IRIS. This likely means that InterSystems IRIS 2025.2 will support only RHEL 9 & 10.
Previous Updates
RHEL 9.5 has undergone minor OS certification without incident.
Further reading: RHEL Release Page
Ubuntu
Current Update
Ubuntu 24.04.2 has just been released and minor OS certification has begun.
Further Reading: Ubuntu Releases Page
SUSE Linux
Upcoming Changes
InterSystems IRIS 2025.3+ will require SUSE Linux Enterprise Server 15 SP6 or greater – SLES 15 sp6 has given us the option to use OpenSSL 3 and, to provide you with the most secure platform possible, we’re going to change InterSystems IRIS to start taking advantage of it.
In preparation for moving to OpenSSL 3 in IRIS 2025.3, there will be no IRIS 2025.2 for SUSE.
Further Reading: SUSE lifecycle
Oracle Linux
Upcoming Changes
We’re expecting Oracle Linux 10 to be released around the same time as RHEL 10. Since we support Oracle Linux via the IRIS RHEL kit, we’re expecting Oracle Linux 10 support at the same time as RHEL 10 support is released.
Further Reading: Oracle Linux Support Policy
Microsoft Windows
Recent Changes
Windows Server 2025 is now supported in InterSystems IRIS 2025.1 and up.
Upcoming Changes
InterSystems IRIS 2025.3+ will no longer support Windows Server 2016 & 2019.
Microsoft has pushed back the anticipated release date for Windows 12 to the fall of 2025. We’ll start the process of supporting the new OS after it’s been released.
Further Reading: Microsoft Lifecycle
AIX
Upcoming Changes
IBM is rolling out new Power 11 hardware this summer. We anticipate running the new hardware through the paces over the course of the late summer and early fall. Look for a full update on our findings in the Q4 newsletter.
Previous Changes
IRIS 2024.3 and up only support OpenSSL 3. NOTE: This means that 2024.2 is the last version of IRIS that has both OpenSSL 1 and OpenSSL 3 kits. In IRIS 2023.3, 2024.1, & 2024.2, we provided two separate IRIS kits – one that supports OpenSSL 1 and one that supports OpenSSL 3. Given the importance of OpenSSL 3 for overall system security, we’ve heard from many of you that you’ve already moved to OpenSSL 3.
Further Reading: AIX Lifecycle
Containers
Previous Updates
We changed the container base image from Ubuntu 22.04 to Ubuntu 24.04 with IRIS 2024.2
We’re considering changes to the default IRIS container to, by default, have internal traffic (ECP, Mirroring, etc) on a different port from potentially externally facing traffic (ODBC, JDBC, etc). If you have needs in this area, please reach out and let me know.
InterSystems IRIS Development Operating Systems and CPU Architectures
MacOS
Recent Changes
IRIS 2025.1 adds support for MacOS 15 on both ARM- and Intel-based systems.
InterSystems Components
Upcoming Releases
InterSystems API Manager 3.10 will be released soon.
InterSystems Kubernetes Operator 3.8 will be released in the coming weeks as well.
Caché & Ensemble Production Operating Systems and CPU Architectures
Previous Updates
A reminder that the final Caché & Ensemble maintenance releases are scheduled for Q1-2027, which is coming up sooner than you think. See Jeff’s excellent community article for more info.
InterSystems Supported Platforms Documentation
The InterSystems Supported Platforms documentation is the definitive source of information on supported technologies.
IRIS 2025.1 Supported Server Platforms
IRIS 2024.1 Supported Server Platforms
IRIS 2023.1 Supported Server Platforms
Caché & Ensemble 2018.1 Supported Server Platforms
… and that’s all folks. Again, if there’s something more that you’d like to know about, please let us know.
Does supported IBM POWER hardware stay the same, POWER8 or later?
That's right. There's no planned change in supported POWER processors. IBM does a good job of phasing out older processors with new versions of the OS.
According to Wikipedia, the x86-64-v3 microarchitecture level is Intel Haswell and newer; Intel Skylake and newer is the x86-64-v4 microarchitecture level.
Announcement
Anastasia Dyubaylo · Dec 12, 2022
Hello and welcome to the Developer Ecosystem Fall News!
This fall we've had so much fun and activities online and offline in the InterSystems Developer Ecosystem.
In case you missed something, we've prepared for you a selection of the hottest news and topics to catch up on!
News
🫂 InterSystems Developer Ecosystem Team
📊 Online Analytics Dashboard for Community Members
🎃 Halloween season on Global Masters
🔗 Developer Connections on GM
💡 InterSystems Ideas News
🔥 Back to school on FHIR with DC FR
🎉 InterSystems a Leader in Latest Forrester Wave Report: Translytical Data Platforms Q4 2022
📝 Updated Learning Path "Getting Started with InterSystems ObjectScript"
✅ InterSystems IRIS System Administration Specialist Certification Exam is now LIVE
📦 ZPM is now InterSystems Package Manager (IPM)
Contests & Events
InterSystems Interoperability Contest: Building Sustainable Solutions
Contest Announcement
Kick-off Webinar
Technology Bonuses
Time to Vote
Technical Bonuses Results
Winners Announcement
Meetup with Winners
Your Feedback
InterSystems IRIS for Health Contest: FHIR for Women's Health
Contest Announcement
Kick-off Webinar
Technology Bonuses
Time to Vote
Technical Bonuses Results
Winners Announcement
Community Roundtables
1. VSCode vs Studio
2. Best Source Control
3. Developing with Python
InterSystems Idea-A-Thon
Contest Announcement
Winners Announcement
📄 [DC Contest] 1st Tech Article Contest on Chinese Developer Community
⏯ [Webinar] What’s New in InterSystems IRIS 2022.2
⏯ [Webinar] Building and Enabling Healthcare Applications with HL7 FHIR
⏯ [Webinar] Deployments in Kubernetes with High Availability
👥 [Conference] InterSystems Iberia Summit 2022
👥 [Conference] InterSystems UK & Ireland Summit 2022
👥 [Conference] InterSystems at Big Data Minds DACH 2022 in Berlin
👥 [Conference] InterSystems Partnertage DACH 2022
👥 [Conference] InterSystems at data2day
👥 [Conference] InterSystems at Global DevSlam in Dubai
👾 [Hackathon] InterSystems at HackMIT
👾 [Hackathon] InterSystems at CalHacks hackathon
👾 [Hackathon] InterSystems at TechCrunch Disrupt
👾 [Hackathon] InterSystems at the European Healthcare Hackathon in Prague
☕️ [Meetup] InterSystems Developer Meetup in San Francisco
☕️ [Meetup] The 1st Spanish Developer Community Meetup in Valencia
☕️ [Meetup] InterSystems <> Mirantis Developer Meetup on Kubernetes in Boston
👋 InterSystems FHIR Healthtech Incubator Caelestinus Final Demo Day
Latest Releases
⬇️ Developer Community Release, September 2022
⬇️ Open Exchange - ObjectScript Quality status
⬇️ New Multi-Channel View on Global Masters
⬇️ InterSystems IRIS, IRIS for Health, HealthShare Health Connect, & InterSystems IRIS Studio 2022.2
⬇️ InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2022.1.1
⬇️ InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2022.3 developer previews
Preview 1
Preview 2
⬇️ InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2022.2 developer previews
Preview 7
Preview 8
Preview 9
Preview 10
⬇️ InterSystems Package Manager
Release 0.5.0
Release 0.4.0
⬇️ InterSystems extension for Docker Desktop
⬇️ VS Code Server Manager 3.2.1
⬇️ SAM (System Alerting & Monitoring) 2.0
⬇️ InterSystems Container Registry web user interface
Best Practices & Key Questions
🔥 Best Practices of Autumn 2022
GitHub Codespaces with IRIS
Using Grafana directly from IRIS
Uploading and downloading files via http
Adding VSCode into your IRIS Container
Reading AWS S3 data on COVID as SQL table in IRIS
IRIS Embedded Python with Azure Service Bus (ASB) use case
BILLIONS - Monetizing the InterSystems FHIR® with Google Cloud's Apigee Edge
Let's fight against the machines!
Top 10 InterSystems IRIS Features
The way to launch Jupyter Notebook + Apache Spark + InterSystems IRIS
Apache Web Gateway with Docker
❓ Key Questions of Autumn 2022: September, October, November
People and Companies to Know About
👋 Muhammad Waseem - New Developer Community Moderator
👋 Tete Zhang - New Developer Community Moderator
👋 New Partner - PainChek® Ltd
🌟 Global Masters of Autumn 2022: September, October, November
Job Opportunities
💼 InterSystems HealthShare Architect (Remote Opportunity)
💼 InterSystems HealthShare Practice Lead (Remote Opportunity)
💼 REMOTE InterSystems Object Developer with Docker Experience
💼 Integration Developer OR Business Analyst with IRIS/Ensemble
💼 Looking for InterSystems Developer
So...
Here is our take on the most interesting and important things!
What were your highlights from this past season? Share them in the comments section and let's remember the fun we've had!
@Anastasia.Dyubaylo, I am always amazed at what you and your team accomplish ... great work! As usual, lots of things are going on in the Community!
Thanks, Ben! Glad to hear!
Your feedback is always greatly appreciated :) Even more ahead ;)
Article
Stefan Cronje · Jan 25, 2023
Hi folks,
I am announcing a new package I have loaded on the OEX, which I am also planning on entering into the contest this month.
In a nutshell, it offers you the following.
Base classes to use on Persistent (table) classes for InterSystems IRIS to keep record history
These classes enable the historizing of persistent class records into another persistent class when touched.
This provides for a full history of any record.
It allows for record rollback to a specific version.
It can automatically purge old history records.
Do you need it?
Have you ever had the scenario where a data fix has gone terribly wrong, and you need to roll back the update? Have you ever tried to find out what or who updated or inserted a specific record? Have you ever needed to find out what has happened to a record over its lifetime?
You can get all this now, by simply extending from two classes.
This article covers what the package offers; the package itself contains all the instructions needed, and there are only a few. A rough sketch of the idea appears below.
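Purely as an illustration of the idea (the base class names below are placeholders, not the package's real ones — see the README on Open Exchange for the actual classes and any additional required settings), enabling history for a table would look roughly like this:
/// "Current" record table; "MyHistoryPkg.CurrentRecord" is a hypothetical stand-in for the package's base class
Class MyApp.Customer Extends (%Persistent, MyHistoryPkg.CurrentRecord)
{
Property Name As %String;
Property Balance As %Numeric;
}
/// History table; "MyHistoryPkg.HistoryRecord" is a hypothetical stand-in for the package's history base class
Class MyApp.CustomerHistory Extends (%Persistent, MyHistoryPkg.HistoryRecord)
{
}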
The Basics
The table that contains the "current" record has two sets of fields:
Create
This contains the details of when the entry was created and is immutable.
Update
This contains the information of the last time the record was updated.
The table that contains the history records has three sets of fields:
Create
This is copied as is from the current record when inserted.
Update
This is copied as is from the current record when inserted.
Historize
This contains details on the historization entry on insertion, and is immutable.
Each of the sets above contains the following fields:
Name
Content
DateTimeStamp
The system's date and time
Job
The PID value ($JOB)
SystemUser
The system user performing the action. $USERNAME
BusinessHost
The interoperability business host that was involved, if available.
ClientIP
The client's IP address that instructed this, if available.
CSPSessionID
The CSP Session ID that was involved, if available.
Routine
The calling routine. This can really help pinpoint where in the code this originated.
The "current" record table has a Version property, and this is set when extending from the base class.The history racoed table has a Version property, which is the version that the "current" record was at that point.
What can it historize?
Serial Properties
"Normal" Properties
Class References
Arrays of Data types
Lists of Data types
Arrays of Objects
Lists of Objects
Relationship - where cardinality is "one"
Note that in this case, the history table's property must be an object reference and not a relationship.
What can't it historize?
Relationships where the cardinality is "many"
What about my other triggers?
The triggers activate in order slot number 10. This means you can add your triggers before or after the execution of these triggers.
Additional Functionality
Record historization can be switched off for a specific table by setting a global. This is to cater for bulk updates where the history is not required, for example, populating a new field during a deployment.
An auto purge setting can be set per table in a global. If you have a table that gets a lot of updates, and you only need the previous two records for example, you can set it to keep the last two records in the history and remove the older ones. This happens during the historization of the record.
Restoring Previous Versions
The base classes will generate methods in the "current" record class that can be used to restore a specific version, or the previous version, back into the current table. These can be invoked via SQL too, if you need to restore from a selection of rows.
Note that the restored record will bump the version number on the "current" table and will not have its old version number, which is probably a good thing.
Congrats on your first entry to our contests on Open Exchange! Good luck :)
Hi Stefan,
Your video is now on InterSystems Developers YouTube:
⏯ IRIS Table Audit Demo
Article
Rizmaan Marikar · Mar 20, 2023
Introduction
Data analytics is a crucial aspect of business decision-making in today's fast-paced world. Organizations rely heavily on data analysis to make informed decisions and stay ahead of the competition. In this article, we will explore how data analytics can be performed using Pandas and InterSystems Embedded Python. We will discuss the basics of Pandas, the benefits of using InterSystems Embedded Python, and how they can be used together to perform efficient data analytics.
What's Pandas for?
Pandas is a versatile tool that can be used for a wide range of tasks, to the point where it may be easier to list what it cannot do rather than what it can do.
Essentially, pandas serves as a home for your data. It allows you to clean, transform, and analyze your data to gain familiarity with it. For instance, if you have a dataset saved in a CSV file on your computer, pandas can extract the data into a table-like structure called a DataFrame. With this DataFrame, you can perform various tasks such as:
Calculating statistics and answering questions about the data such as finding the average, median, maximum, or minimum of each column, determining if there is correlation between columns, or exploring the distribution of data in a specific column.
Cleaning the data by removing missing values or filtering rows and columns based on certain criteria.
Visualizing the data with the help of Matplotlib, which enables you to plot bars, lines, histograms, bubbles, and more.
Storing the cleaned and transformed data back into a CSV, database, or another type of file.
Before delving into modeling or complex visualizations, it's essential to have a solid understanding of your dataset's nature, and pandas provides the best way to achieve this understanding.
Benefits of using InterSystems Embedded Python
InterSystems Embedded Python is a Python runtime environment that is embedded within the InterSystems data platform. It provides a secure and efficient way to execute Python code within the data platform, without having to leave the platform environment. This means that data analysts can perform data analytics tasks without having to switch between different environments, resulting in increased efficiency and productivity.
Combining Pandas and InterSystems Embedded Python
By combining Pandas and InterSystems Embedded Python, data analysts can perform data analytics tasks with ease. InterSystems Embedded Python provides a secure and efficient runtime environment for executing Python code, while Pandas provides a powerful set of data manipulation tools. Together, they offer a comprehensive data analytics solution for organizations.
Installing Pandas.
Install a Python Package
To use Pandas with InterSystems Embedded Python, you'll need to install it as a Python package. Here are the steps to install Pandas:
Open a command prompt as Administrator mode (on Windows).
Navigate to the <installdir>/bin directory in the command prompt.
Run the following command to install Pandas: irispip install --target <installdir>\mgr\python pandas This command installs Pandas into the <installdir>/mgr/python directory, which is recommended by InterSystems. Note that the exact command may differ depending on the package you're installing. Simply replace pandas with the name of the package you want to install.
That's it! Now you can use Pandas with InterSystems Embedded Python.
irispip install --target C:\InterSystems\IRIS\mgr\python pandas
Now that we have Pandas installed, we can start working with the employees dataset. Here are the steps to read the CSV file into a Pandas DataFrame and perform some data cleaning and analysis:
First, let's create a new instance of Python:
Set python = ##class(%SYS.Python).%New()
Import Python libraries; in this case, I will be importing pandas and builtins:
Set pd = python.Import("pandas")
#;To import the built-in functions that are part of the standard Python library
Set builtins = python.Import("builtins")
Importing data into the pandas library
There are several ways to read data into a Pandas DataFrame using InterSystems Embedded Python. Here are some common methods. I am using the following sample file as an example.
Read data from a CSV.
Use read_csv() with the path to the CSV file to read a comma-separated values file:
Set df = pd."read_csv"("C:\InterSystems\employees.csv")
Importing text files
Reading text files is similar to CSV files. The only nuance is that you need to specify a separator with the sep argument, as shown below. The separator argument refers to the symbol used to separate rows in a DataFrame. Comma (sep = ","), whitespace(sep = "\s"), tab (sep = "\t"), and colon(sep = ":") are the commonly used separators. Here \s represents a single white space character.
Set df = pd."read_csv"("employees.txt",{"sep":"\s"})
Importing Excel files
To import Excel files with a single sheet, the "read_excel()" function can be used with the file path as input. For example, the code df = pd.read_excel('employees.xlsx') reads an Excel file named "employees.xlsx" and stores its contents in a DataFrame called "df".
Other arguments can also be specified, such as the header argument to determine which row becomes the header of the DataFrame. By default, header is set to 0, which means the first row becomes the header or column names. If you want to specify column names, you can pass a list of names to the names argument. If the file contains a row index, you can use the index_col argument to specify it.
It's important to note that in a pandas DataFrame or Series, the index is an identifier that points to the location of a row or column. It labels the row or column of a DataFrame and allows you to access a specific row or column using its index. The row index can be a range of values, a time series, a unique identifier (e.g., employee ID), or other types of data. For columns, the index is usually a string denoting the column name.
Set df = pd."read_excel"("employees.xlsx")
Importing Excel files (multiple sheets)
Reading Excel files with multiple sheets is not that different. You just need to specify one additional argument, sheet_name, where you can either pass a string for the sheet name or an integer for the sheet position (note that Python uses 0-indexing, where the first sheet can be accessed with sheet_name = 0)
#; Extracting the second sheet since Python uses 0-indexing
Set df = pd."read_excel"("employee.xlsx", {"sheet_name":"1"})
Read data from a JSON.
Set df = pd."read_json"("employees.json")
Let's look at the data in the DataFrame.
How to view data using .head() and .tail()
For this, we can use the builtins library which we imported (ZW works too).
do builtins.print(df.head())
Let's list all the columns in the dataset:
Do builtins.print(df.columns)
Let's clean up the data.
Convert the "Start Date" column to a datetime object.
Set df."Start Date" = pd."to_datetime"(df."Start Date")
The updated dataset looks as follows.
Convert the 'Last Login Time' column to a datetime object
Set df."Last Login Time" = pd."to_datetime"(df."Last Login Time")
Fill in missing values in the 'Salary' column with the mean salary
Set meanSal = df."Salary".mean()
Set df."Salary" = df."Salary".fillna(meanSal)
Perform Some Analysis.
Calculate the average salary by gender.
Do builtins.print(df.groupby("Gender")."Salary".mean())
Calculate the average bonus percentage by team.
Do builtins.print(df.groupby("Team")."Bonus %".mean())
Calculate the number of employees hired each year.
Do builtins.print(df."Start Date".dt.year."value_counts"()."sort_index"())
Calculate the number of employees by seniority status.
Do builtins.print(df."Senior Management"."value_counts"())
Outputting data in pandas
Just as pandas can import data from various file types, it also allows you to export data into various formats. This happens especially when data is transformed using pandas and needs to be saved locally on your machine. Below is how to output pandas DataFrames into various formats.
Outputting a DataFrame into a CSV file
A pandas DataFrame (here we are using df) is saved as a CSV file using the ."to_csv"() method.
do df."to_csv"("C:\Intersystems\employees_out.csv")
Outputting a DataFrame into a JSON file
Export DataFrame object into a JSON file by calling the ."to_json"() method.
do df."to_json"("C:\Intersystems\employees_out.json")
Outputting a DataFrame into an Excel file
Call ."to_excel"() from the DataFrame object to save it as a “.xls” or “.xlsx” file.
do df."to_excel"("C:\Intersystems\employees_out.xlsx")
Let's create a basic bar chart that shows the number of employees hired each year.
For this, I am using matplotlib.pyplot.
//import matplotlib
Set plt = python.Import("matplotlib.pyplot")
//plot the yearly hire counts as a bar chart (returns a matplotlib Axes object)
set df2 = df."Start Date".dt.year."value_counts"()."sort_index"().plot.bar()
//export the output to a png
do plt.savefig("C:\Intersystems\barchart.png")
//cleanup
do plt.close()
That's it! With these simple steps, you should be able to read in a CSV file, clean the data, and perform some basic analysis using Pandas in InterSystems Embedded Python.
Video
You can access the video via the link below. The video serves as a comprehensive overview and elaboration of the above tutorial: https://youtu.be/hbRQszxDTWU
Conclusion
The tutorial provided only covers the basics of what pandas can do. With pandas, you can perform a wide range of data analysis, visualization, filtering, and aggregation tasks, making it an invaluable tool in any data workflow. Additionally, when combined with other data science packages, you can build interactive dashboards, develop machine learning models to make predictions, automate data workflows, and more. To further your understanding of pandas, explore the resources listed below and accelerate your learning journey.
Disclaimer
It is important to note that there are various ways of utilizing Pandas with InterSystems. The article provided is intended for educational purposes only, and it does not guarantee the most optimal approach. As the author, I am continuously learning and exploring the capabilities of Pandas, and therefore, there may be alternative methods or techniques that could produce better results. Therefore, readers should use their discretion and exercise caution when applying the information presented in the article to their respective projects. Great article, if your are looking for an approach without objectscript and making use of "irispyhton", check this code :
python code :
```python
import pandas as pd
from sqlalchemy import create_engine,types
engine = create_engine('iris+emb:///')
df = pd.read_csv("/irisdev/app/notebook/Notebooks/date_panda.csv")
# change type of FullDate to date
df['FullDate'] = pd.to_datetime(df['FullDate'])
df.head()
df.to_sql('DateFact', engine, schema="Demo", if_exists='replace', index=True,
          dtype={'DayName': types.VARCHAR(50), 'FullDate': types.DATE, 'MonthName': types.VARCHAR(50),
                 'MonthYear': types.INTEGER, 'Year': types.INTEGER})
```
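A quick way to confirm the write worked is to read the table back through the same engine. This is a minimal sketch, assuming the Demo.DateFact table was just created by to_sql() above:
```python
import pandas as pd
from sqlalchemy import create_engine

# Reuse the same embedded connection to read the table back
engine = create_engine('iris+emb:///')

check = pd.read_sql('SELECT * FROM Demo.DateFact', engine)
print(check.head())
```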
requirements.txt:
```
pandas
sqlalchemy==1.4.22
sqlalchemy-iris==0.5.0
irissqlcli
```
date_panda.csv
```
ID,DayName,FullDate,MonthName,MonthYear,Year
1,Monday,1900-01-01,January,190001,1900
2,Tuesday,1900-01-02,January,190001,1900
3,Wednesday,1900-01-03,January,190001,1900
4,Thursday,1900-01-04,January,190001,1900
5,Friday,1900-01-05,January,190001,1900
6,Saturday,1900-01-06,January,190001,1900
7,Sunday,1900-01-07,January,190001,1900
8,Monday,1900-01-08,January,190001,1900
9,Tuesday,1900-01-09,January,190001,1900
```
@Guillaume.Rongier7183 that’s awesome thank you for sharing, will check this out. Hi Rizman,
Your video is available on InterSystems Developers YouTube:
⏯Pandas with embedded python
Thank you! 💡 This article is considered an InterSystems Data Platform Best Practice. With the Python approach, I run into the following error: ERROR <Ens>ErrCanNotAcquireJobRootLock: Unable to obtain a lock for the global record
Below is the implemented code:
```python
import pandas as pd
from sqlalchemy import create_engine, types

class FileOperationEmbedded(BusinessOperation):
    tablename = None
    engine = None

    def on_init(self):
        if not hasattr(self, "dsnIris"):
            self.dnsIris = 'iris+emb:///'
        if not hasattr(self, "schema"):
            self.schema = 'Toto'
        self.engine = create_engine(self.dnsIris)
        return None

    def on_message(self, request: DFrameRequest):
        df = pd.DataFrame(request.dframe.col)
        for row in request.dframe.col:
            df = pd.DataFrame.from_dict(row, orient='index').T.reset_index(drop=True)
            try:
                df.to_sql(name=self.tablename, con=self.engine, if_exists='append', index=False, schema=self.schema,
                          dtype={'id': types.INTEGER, 'col_type': types.VARCHAR(50), 'col_center': types.VARCHAR(50),
                                 'col_name': types.VARCHAR(50), 'col_issue_name': types.VARCHAR(50),
                                 'col_model': types.VARCHAR(50), 'col_treatment': types.VARCHAR(50),
                                 'source': types.VARCHAR(50), 'filename': types.VARCHAR(100), 'created_at': types.TIMESTAMP})
            except Exception as e:
                self.log_info(f"Une erreur s'est produite : {e}")
        return None
```
Do you have any idea where this error might come from, and what steps I should follow to resolve it?
As it's a question written in French, I've moved it here.
I'll also try to answer it in English. Hi,
I can't reproduce your error. I'm missing some information.
What I have done so far is:
- added the missing imports
- added the missing class DFrameRequest
  - I suppose it is a dataclass with a field named dframe of type pd.DataFrame
  - I suppose it is a subclass of Message
- added a main function to run the code
- I'm not sure of the format of the dataframe and the data in it
```python
from dataclasses import dataclass

import pandas as pd
from grongier.pex import BusinessOperation, Message
from sqlalchemy import create_engine, types


@dataclass
class DFrameRequest(Message):
    dframe: pd.DataFrame


class FileOperationEmbedded(BusinessOperation):
    tablename = None
    engine = None

    def on_init(self):
        if not hasattr(self, "dsnIris"):
            self.dnsIris = 'iris+emb:///'
        if not hasattr(self, "schema"):
            self.schema = 'Toto'
        self.engine = create_engine(self.dnsIris)
        return None

    def on_message(self, request: DFrameRequest):
        df = pd.DataFrame(request.dframe.col)
        for row in request.dframe.col:
            df = pd.DataFrame.from_dict(row, orient='index').T.reset_index(drop=True)
            try:
                df.to_sql(name=self.tablename, con=self.engine, if_exists='append', index=False, schema=self.schema,
                          dtype={'id': types.INTEGER, 'col_type': types.VARCHAR(50), 'col_center': types.VARCHAR(50),
                                 'col_name': types.VARCHAR(50), 'col_issue_name': types.VARCHAR(50),
                                 'col_model': types.VARCHAR(50), 'col_treatment': types.VARCHAR(50),
                                 'source': types.VARCHAR(50), 'filename': types.VARCHAR(100), 'created_at': types.TIMESTAMP})
            except Exception as e:
                self.log_info(f"Une erreur s'est produite : {e}")
        return None


if __name__ == '__main__':
    # create a new instance of the business operation
    bo = FileOperationEmbedded()
    # initialize the business operation
    bo.on_init()
    # create a new message
    msg = DFrameRequest(pd.DataFrame())
    msg.dframe.col = [
        {'id': 1, 'col_type': 'type1', 'col_center': 'center1', 'col_name': 'name1', 'col_issue_name': 'issue1',
         'col_model': 'model1', 'col_treatment': 'treatment1', 'source': 'source1', 'filename': 'file1',
         'created_at': '2021-10-01 00:00:00'},
        {'id': 2, 'col_type': 'type2', 'col_center': 'center2', 'col_name': 'name2', 'col_issue_name': 'issue2',
         'col_model': 'model2', 'col_treatment': 'treatment2', 'source': 'source2', 'filename': 'file2',
         'created_at': '2021-10-02 00:00:00'}
    ]
    # send the message to the business operation
    bo.on_message(msg)
    print("Done")
```
Then, from your code I can see the following issues:
- you are using the same variable name for the dataframe and the list of rows
- the variable `self.tablename` is not initialized
- the name `FileOperationEmbedded` is maybe not the best name for your class, as it is not a file operation
- why are you using a for loop to iterate over the rows of the dataframe?
I have modified your code to fix these issues:
```python
from dataclasses import dataclass

import pandas as pd
from grongier.pex import BusinessOperation, Message
from sqlalchemy import create_engine, types


@dataclass
class DFrameRequest(Message):
    dframe: pd.DataFrame


class IrisSqlAlchmyEmbedded(BusinessOperation):
    tablename = None
    engine = None

    def on_init(self):
        if not hasattr(self, "dsnIris"):
            self.dnsIris = 'iris+emb:///'
        if not hasattr(self, "schema"):
            self.schema = 'Toto'
        if not hasattr(self, "tablename") or self.tablename is None:
            self.tablename = 'mytable'
        self.engine = create_engine(self.dnsIris)
        return None

    def on_message(self, request: DFrameRequest):
        try:
            request.dframe.to_sql(name=self.tablename, con=self.engine, if_exists='append', index=False, schema=self.schema,
                                  dtype={'id': types.INTEGER, 'col_type': types.VARCHAR(50), 'col_center': types.VARCHAR(50),
                                         'col_name': types.VARCHAR(50), 'col_issue_name': types.VARCHAR(50),
                                         'col_model': types.VARCHAR(50), 'col_treatment': types.VARCHAR(50),
                                         'source': types.VARCHAR(50), 'filename': types.VARCHAR(100), 'created_at': types.TIMESTAMP})
        except Exception as e:
            print(f"Une erreur s'est produite : {e}")
        return None


if __name__ == '__main__':
    # create a new instance of the business operation
    bo = IrisSqlAlchmyEmbedded()
    # initialize the business operation
    bo.on_init()
    # create a new message
    msg = DFrameRequest(pd.DataFrame([
        {'id': 1, 'col_type': 'type1', 'col_center': 'center1', 'col_name': 'name1', 'col_issue_name': 'issue1',
         'col_model': 'model1', 'col_treatment': 'treatment1', 'source': 'source1', 'filename': 'file1',
         'created_at': '2021-10-01 00:00:00'},
        {'id': 2, 'col_type': 'type2', 'col_center': 'center2', 'col_name': 'name2', 'col_issue_name': 'issue2',
         'col_model': 'model2', 'col_treatment': 'treatment2', 'source': 'source2', 'filename': 'file2',
         'created_at': '2021-10-02 00:00:00'}
    ]))
    # send the message to the business operation
    bo.on_message(msg)
    print("Done")
```
Article
Shanshan Yu · Apr 19, 2023
As living standards improve, people pay more and more attention to physical health, and the healthy development of children has become a growing concern for parents. A child's physical development is reflected in their height and weight, so predicting height and weight in a timely manner is of great significance: it lets parents follow the child's developmental state through scientific prediction and comparison.
The project uses InterSystems IRIS Cloud SQL to store a large volume of height- and weight-related data, and builds an AutoML model based on IntegratedML for predictive analysis. From the parents' heights entered, it can quickly predict the child's future height, and from the child's current height and weight it can judge whether the body mass index is within the normal range.
Key Applications: InterSystems IRIS Cloud SQL, IntegratedML
Function:
With this application, the height of a child in a normal developmental state can be quickly predicted. From the results, parents can judge whether the child's development is normal and whether clinical intervention is required, which helps them understand the child's future height; from the current weight status, they can determine whether the child's BMI is normal and understand the child's current state of health.
Application Scenario
1. Children's height prediction
2. Monitoring of child development
Application Principles
The client and server of the application were built with Vue and Java respectively, while the database uses InterSystems IRIS Cloud SQL, InterSystems' cloud database platform.
The main prediction function of the application uses IntegratedML in InterSystems Cloud SQL. It helped me quickly create and train data models and implement the prediction features.
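For readers who have not used IntegratedML before, the core workflow is just three SQL statements: create a model, train it, and query PREDICT(). The sketch below shows what that could look like when driven from Python via sqlalchemy-iris; the connection URL, table, and column names are purely illustrative placeholders, not the project's actual schema.
```python
from sqlalchemy import create_engine, text

# Placeholder URL: point this at your Cloud SQL deployment (or a local IRIS instance)
engine = create_engine("iris://SQLAdmin:password@cloud-sql-host:1972/USER")

with engine.begin() as conn:
    # 1. Define a model that predicts adult height from a (hypothetical) training table
    conn.execute(text("CREATE MODEL HeightModel PREDICTING (AdultHeight) FROM Demo.ChildGrowth"))
    # 2. Train the model on the rows currently in that table
    conn.execute(text("TRAIN MODEL HeightModel"))
    # 3. Query predictions for the same (or new) rows
    rows = conn.execute(text("SELECT PREDICT(HeightModel) AS PredictedHeight FROM Demo.ChildGrowth")).fetchall()
    print(rows[:5])
```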
Test Flow
① Select the module
② Fill in the relevant data. If there is adult sibling data, you can click add to fill in the information.
③ Click Submit and wait a moment for the prediction result to appear.
Article
Vadim Aniskin · Apr 28, 2023
Hey Community!
Here is a short article on how to create an idea on InterSystems Ideas.
0. Register on the Ideas Portal if you aren't a member yet, or log in. You can easily register using your InterSystems Developer Community ID.
1. Carefully read the Portal Guide page on the Ideas Portal, especially the "Idea promotion rules" section. All posted ideas are moderated according to these rules.
2. Click on the "Add a new idea" button
and you will see the form to add the idea.
3. First, provide a one-sentence summary of the idea, which is a required field. When you start typing, you will see a list of ideas with similar words in their names or tags. If a similar idea already exists, vote or comment on that idea instead. The optimal length of an idea summary is 4-12 words.
4. Next, describe the idea in the "Please add more details" field.
In addition to text, you can attach screenshots or other files and insert tables and links. There is a full-screen mode that helps you see the whole description of your idea without scrolling.
5. Then you need to fill in the required field "Category". The correct category will help to assign your idea to the appropriate expert in the InterSystems team.
If you first sorted ideas by category and then clicked the "Add a new idea" button, the idea's category will be filled in automatically.
6. Optionally, you can add tags to your idea, so other users can find it easily based on tags. The list of tags starts with tags that have an "InterSystems" title; all other tags are sorted in alphabetical order.
7. Click on "Add idea" to submit.
Hope this helps you share your ideas with others! If you have any questions, please send a direct message to @Vadim.Aniskin.
---------------------
* Please take into account that ideas and comments should be in English.
* Ideas Portal admins can ask questions using Developer Community direct messages to clarify the idea and its category. Please answer these questions to make your idea visible to all users.
* When you create an idea you automatically subscribe to e-mail notifications related to your idea, including:
changes in the status of your idea
comments on your idea posted by portal admins (you don't get notifications about comments from other users)
Announcement
Raj Singh · Nov 8, 2022
I'm pleased to announce a milestone in the lifecycle of the ObjectScript package manager, ZPM. The package manager has offered developers the ability to neatly package up ObjectScript code and deployment configuration settings and version information in a convenient way. Over the last few years, it has evolved greatly into an integral part of many development workflows.
It has proven so important that InterSystems has decided to use it for packaging our own components, and that has led us to a decision to move the GitHub repository from the community into our corporate one, and rename it InterSystems Package Manager (IPM). IPM will still be open source. Community members will be able to review the code and submit pull requests. But this change gives us the ability to ensure the security of the software in a way we could not with non-employees being able to make changes to the code base directly. And a heightened level of security and trust is key with software that can install code alongside your data.
So please celebrate the life of ZPM with me, welcome the birth of IPM, and give thanks to the contributors -- especially Nikolay Soloviev and @Dmitry.Maslennikov, who have once again shown amazing insight into developer needs, coupled with the skills and dedication to build great software!
---
https://github.com/intersystems/ipm This is super exciting and I look forward to seeing what's next! Exciting news! absolutely! Great work - long time coming and well worth it!!! Good to see this.
Minor nit: the Info panel at https://openexchange.intersystems.com/package/ObjectScript-Package-Manager still has some links pointing to the old intersystems-community repo. They forward correctly but deserve to be updated anyway. Will this change the commands from zpm to ipm? Nope, as that would require changes in the language itself, and I'm sure there is no reason for it. Thanks a lot to all the contributors and to the community that supported with feedback, pull-requests and which adopted broadly the tool!
Currently there are 250+ packages published on OEX, at least 300+ developers who install packages every month, and the amount of installed packages is above 2,000 every month.
Thank you! My review on OEX now also shows whether the package supports IPM. All 19 reviews have been updated. Thank you so much for your efforts on this, @Robert.Cemper1003!