Announcement
Bob Kuszewski · Nov 22, 2022

Announcing the InterSystems Container Registry web user interface

InterSystems is pleased to announce the release of the InterSystems Container Registry web user interface. This tool is designed to make it easier to discover, access, and use the many container images hosted on ICR. The InterSystems Container Registry UI is available at: https://containers.intersystems.com/contents

Community Edition Containers
When you visit the ICR user interface, you’ll have access to InterSystems' publicly available containers, including the IRIS Community Edition containers. Use the left-hand navigation to select the product family that interests you, then the container, and finally the specific version.

Enterprise Edition Containers
You can see the private containers, including the IRIS Enterprise Edition, by clicking the Login button. Once logged in, the left-hand navigation will include all the containers you have access to. Enjoy!

That's great; it can be very useful just to check whether a version is still available, or to see the one that just came out. A great resource for our community - thanks! This is nice!! Thank you @Robert.Kuszewski!! SUPER!!!
Article
Anastasia Dyubaylo · Mar 30, 2023

How to create / announce an Event on InterSystems Developer Community

Hi Community, Some of you would like to share an event (online or offline) with others on our Community, so here is a how-to on creating an Event to invite your fellow members. The main challenge when creating an event is to fill in all the necessary pieces of information in the right places. So let's look at what needs to be done. First of all, you need to choose the tag Events. After you've done that, you'll see that a new form appears at the bottom of the page. When filling in the form, please follow the instructions provided. Click Publish as usual, and your event will become visible on the main page of the Community. And that's it. If you have any comments or questions, please don't hesitate to ask them in the Comments section.
Announcement
Bob Kuszewski · Jun 2, 2023

InterSystems Supported Platforms Update Q2-2023

We often get questions about recent and upcoming changes to the list of platforms and frameworks that are supported by the InterSystems IRIS data platform. This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn’t be considered a committed roadmap. With that said, on to the update…

IRIS Production Operating Systems and CPU Architectures

Red Hat Enterprise Linux

Recent Changes
- RHEL 9.2 & RHEL 8.8 were released in May 2023. Red Hat is planning to support these releases for 4 years. InterSystems is planning to do additional testing of IRIS on RHEL 9.2 through a new process we’re calling “minor OS version certification” that is intended to provide additional assurance that a minor OS update didn’t break anything obvious.
- With the release of RHEL 9.2, Red Hat has ended support for RHEL 9.1. This is consistent with the “odd/even” support cycle that Red Hat has been using since RHEL 8.0.
- RHEL 8.4 extended maintenance ends 5/31/2023, which means that IRIS will stop supporting this minor version at that time as well.

Upcoming Changes
- RHEL 9.3 is planned for later in the year. This will be a short-term-support release from Red Hat, so InterSystems won’t be recommending it for production deployments.

Previous Updates
- IRIS 2022.1.2 adds support for RHEL 9.0. RHEL 9.0 is a major OS release that updates the Linux kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.
- IRIS 2022.2.0 removes support for RHEL 7.x. RHEL 7.9 is still supported in earlier versions of IRIS.

Further reading: RHEL Release Page

Ubuntu

Recent Changes
- Ubuntu 22.04.2 LTS was released February 22, 2023. InterSystems is currently performing additional testing of IRIS on 22.04.2 LTS through a new process we’re calling “minor OS version certification”. So far, so good.

Upcoming Changes
- The next major update of Ubuntu is scheduled for April 2024.

Previous Updates
- IRIS 2022.1.1 adds support for Ubuntu 22.04.
22.04 is a major OS release that updates the Linux kernel to 5.15, OpenSSL to 3.0.2, and Python to 3.10.6.
- IRIS 2022.2.0 removes support for Ubuntu 18.04. Ubuntu 18.04 is still supported in earlier versions of IRIS.
- IRIS 2022.1.1 & up containers are based on Ubuntu 22.04.

Further Reading: Ubuntu Releases Page

SUSE Linux

Upcoming Changes
- SUSE Linux Enterprise Server 15 SP5 is currently in public beta testing. We expect SUSE to release 15 SP5 in late Q2 or early Q3, with support added to IRIS after that. SP5 will include Linux kernel 5.14.21, OpenSSL 3.0.8, and Python 3.11.
- General support from SUSE for Linux Enterprise Server 15 SP3 came to an end on 12/31/2022, but extended security support will continue until December 2025.

Previous Updates
- IRIS 2022.3.0 adds support for SUSE Linux Enterprise Server 15 SP4. 15 SP4 is a major OS release that updates the Linux kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.

Further Reading: SUSE lifecycle

Oracle Linux

Upcoming Changes
- Based on their history, Oracle Linux 9 will include RHEL 9.2 sometime in the second half of 2023.

Previous Updates
- IRIS 2022.3.0 adds support for Oracle Linux 9. Oracle Linux 9 is a major OS release that tracks RHEL 9, so it, too, updates the Linux kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.

Further Reading: Oracle Linux Support Policy

Microsoft Windows

Upcoming Changes
- Windows Server 2012 will reach its end of extended support in October 2023. If you’re still running on the platform, now is the time to plan your migration. IRIS 2023.2 will not be available for Windows Server 2012.

Previous Updates
- We haven’t made any changes to the list of supported Windows versions since Windows Server 2022 was added in IRIS 2022.1.

Further Reading: Microsoft Lifecycle

AIX

Upcoming Changes
- InterSystems is working closely with IBM to add support for OpenSSL 3.0. This will not be included in IRIS 2023.2.0, as IBM will need to target the feature in a further TL release.
The good news is that IBM is looking to release OpenSSL 3.0 for both AIX 7.2 & 7.3, and the timing looks like it should align with IRIS 2023.3.

Previous Updates
- We haven’t made any changes to the list of supported AIX versions since AIX 7.3 was added and 7.1 removed in IRIS 2022.1.

Further Reading: AIX Lifecycle

Containers

Upcoming Changes
- IRIS containers will only be tagged with the year and release, such as “2023.2”, instead of the full build numbers we’ve been using in the past. This way, your application can, by default, pick up the latest maintenance build of your release.
- We are also adding “latest-em” and “latest-cd” tags for the most recent extended-maintenance and continuous-distribution IRIS releases. These will be good for demos, examples, and development environments.
- We will also start to tag the preview containers with “-preview” so that it’s clear which container is the most recent GA release.
These changes will all be effective with the 2023.2 GA release. We’ll be posting more about this in June.

Previous Updates
- We are now publishing multi-architecture manifests for IRIS containers. This means that pulling the IRIS container tagged 2022.3.0.606.0 will download the right container for your machine’s CPU architecture (Intel/AMD or ARM).

IRIS Development Operating Systems and CPU Architectures

MacOS

Recent Changes
- We’ve added support for MacOS 13 in IRIS 2023.1.

Upcoming Changes
- MacOS 14 will be announced soon, and we expect it will be GA later in the year.

CentOS

We are considering removing support for CentOS/CentOS Stream. See the reasoning below.
- Red Hat has been running a developer program for a few years now, which gives developers access to free licenses for non-production environments. Developers currently using CentOS are encouraged to switch to RHEL via this program.
- CentOS Stream is now “upstream” of RHEL, meaning that it has bugs & features not yet included in RHEL.
It also updates daily, which can cause problems for developers building on the platform (to say nothing of our own testing staff).
- We haven’t made any changes to the list of supported CentOS versions since we added support for CentOS 8-Stream and removed support for CentOS 7.9 in IRIS 2022.1.

InterSystems Components

InterSystems API Manager (IAM)
- IAM 3.2 was released this quarter, and it included a change to the container’s base image from Alpine to Amazon Linux.

Caché & Ensemble Production Operating Systems and CPU Architectures

Previous Updates
- Caché 2018.1.7 adds support for Windows 11.

InterSystems Supported Platforms Documentation

The InterSystems Supported Platforms documentation is the source for the definitive list of supported technologies.
- IRIS 2020.1 Supported Server Platforms
- IRIS 2021.1 Supported Server Platforms
- IRIS 2022.1 Supported Server Platforms
- IRIS 2023.1 Supported Server Platforms
- Caché & Ensemble 2018.1.7 Supported Server Platforms

… and that’s all, folks. Again, if there’s something more that you’d like to know about, please let us know.

AIX 7.3 TL1 and later has OpenSSL 3.x. From the linked IBM page: AIX 7.3 TL1 was released six months ago.
$ oslevel -s
7300-01-02-2320
$ openssl version
OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

Unfortunately, there are significant bugs in the OpenSSL 3 available for AIX 7.3 that make running IRIS (or any other heavy user of OpenSSL) impossible. We have a build in-house from IBM that has fixed the problems; it should be available to the public later in the year.
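As a concrete illustration of the release-level tag scheme described in the Containers section above, a compose file could pin a year-and-release tag rather than a full build number. This is a hypothetical sketch: the image path and port mapping below are assumptions for illustration, not taken from the announcement.

```yaml
# Hypothetical compose service pinned to a release-level tag ("2023.2"),
# so it picks up the latest maintenance build of that release by default.
# Image path and port mapping are assumed, not confirmed by this post.
services:
  iris:
    image: containers.intersystems.com/intersystems/iris-community:2023.2
    ports:
      - "52773:52773"   # web server / Management Portal port
```

Swapping the tag for "latest-cd" or "latest-em" would track the most recent continuous-distribution or extended-maintenance release instead, which suits demos and development environments.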
Announcement
Anthony Jackson · Jun 3

InterSystems HealthShare Interoperability Developer/Lead

Summary:

Duties and Responsibilities:
- Design and implement healthcare data integration solutions using the InterSystems HealthShare platform, ensuring data interoperability across various healthcare systems.
- Develop and maintain data mappings and transformations to ensure accurate data exchange between systems, leveraging IRIS APIs, HL7, FHIR, and other healthcare data standards.
- Build and maintain interfaces to connect health information systems, including clinical applications, EHRs, and other healthcare data sources.
- Work closely with cross-functional teams, including developers, business analysts, and healthcare professionals, to gather requirements, design solutions, and implement integrations.
- Create and maintain comprehensive documentation for integration solutions, including interface specifications, data mappings, and test plans.
- Identify opportunities to improve integration processes and solutions, contributing to the overall efficiency and effectiveness of healthcare data exchange.

Education and Skills:
- Experience with interoperability tools such as NextGen Mirth, Availity, Infor, IRIS, Cloverleaf, Rhapsody, Ensemble, etc.
- Experience and expertise with InterSystems IRIS and InterSystems HealthShare.
- Familiarity/experience working with medical claims data.
- Experience with Amazon Web Services (AWS).
- Experience developing Continuous Integration / Continuous Delivery (CI/CD).
- Knowledge and understanding of agile methodologies.

Please send your profile to: Antony.Jackson@infinite.com antonyjacks@gmail.com
Announcement
Anastasia Dyubaylo · Jun 10

[Video] Foreign Tables in InterSystems IRIS 2025.1

Hi Community, Our Project Managers created a series of videos to highlight some of the interesting features of the new 2025.1 release. Please welcome the first one of them: ⏯ Foreign Tables in InterSystems IRIS 2025.1 The InterSystems IRIS 2025.1 release introduces powerful new features for data management, including full support for foreign tables in production. It adds fine-grained privileges for managing access to foreign servers and querying them, plus a new command for sending direct SQL queries to external relational databases. New table conversion options also allow switching between row and column layouts for better analytics performance. 🗣 Presenter: @Benjamin.DeBoe, Manager, Analytics Product Management, InterSystems Want to stay up to date? Watch the video and subscribe for more!
Question
Laura Blázquez García · Jun 9

Protocol Error between the Web Gateway and InterSystems IRIS

I have created a new Docker stack with webgateway and IRIS for Health 2025.1. I have mapped the ports of webgateway like this:
8743:443
8780:80
I can access the IRIS portal through 8743 without problems. I have also created a FHIR repository, and I'm able to access it through port 8743. I have a web application, on another server with another domain, that connects to this FHIR repository. I have configured the allowed origin in the FHIR endpoint to the domain of this application. But when I try to connect from this application to the FHIR repository, I get this error in the Web Gateway: Protocol Error between the Web Gateway and InterSystems IRIS. This is a second instance that I'm configuring; with the first one I didn't see this error. Could this be because the first instance runs on port 8443? Or maybe it is because of the 2025.1 version? I don't know what to do...

I can’t find the information you’re looking for. If you rephrase your question, I might be able to help. You can also explore the following resources:
- **"Extend Your Web Server Configuration with the Web Gateway"** [[1]](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCGI_webserver)
- **"Web Gateway Registry in InterSystems IRIS"** [[2]](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCGI_registry)
- **"The Web Gateway: Serve InterSystems Web Applications and REST APIs to a Web Client"** [[3]](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCGI_intro)

I found this article: https://community.intersystems.com/post/no-access-control-allow-origin-header-issue-cors-issue
This is what's happening to me from my web application (excluding the webgateway error, whose description I think is simply incorrect). Checking the CSPSystem user, I've assigned it the role of the database where the FHIR repository is located, and it still doesn't work. However, if I assign it the %All role, it works. So I'm missing some permission for this user; I can't figure out which one...

How did you set up your FHIR Server? When installed using the IRIS portal, the FHIR web application is automatically created and configured with the following application roles:
%DB_HSCUSTOM
%DB_HSLIB
%DB_HSSYS
%DB_<YourNamespace>X0001R
%DB_<YourNamespace>X0001V
%HS_DB_<YourNamespace>
%HS_ImpersonateUser
Announcement
Anastasia Dyubaylo · Jul 14

[Video] InterSystems IRIS: Easy to Consume and Easy to Use

Hi Community, Enjoy the new video on InterSystems Developers YouTube: ⏯ InterSystems IRIS: Easy to Consume and Easy to Use @ Global Summit 2024 Watch this video to learn the company’s vision for IRIS Data Platform and its evolving role in supporting next-generation data architectures. It explores how IRIS is expanding to meet the demands of AI, data fabrics, and real-time analytics, emphasizing scalability, trust, and agility. 🗣 Presenter: Scott Gnau, Vice President, Data Platforms, InterSystems Stay ahead of the curve! Watch the video and subscribe to keep learning! 👍
Announcement
Dwain Bowline · Jul 14

MUMPS DTM to InterSystems IRIS Database Conversion Expert

Seeking an experienced troubleshooter to assist with the conversion of a MUMPS DTM database to InterSystems IRIS. The ideal candidate will have a strong background in database conversion processes within IRIS and expert knowledge of MUMPS DTM routines and globals. Your primary responsibility will be to troubleshoot and resolve issues that arise with the IRIS database and the refactoring of MUMPS routines. If you have experience converting MUMPS DTM and are InterSystems certified with a problem-solving mindset, we would like to collaborate with you. Contract position for US-based residents. Send an email of interest to Medica@outlook.com.
Announcement
Evgeny Shvarov · Jul 14

Technology Bonuses for the InterSystems Developer Tools Contest 2025

Here are the technology bonuses for the InterSystems Developer Tools Contest 2025 that will give you extra points in the voting:
- IRIS Vector Search usage - 3
- Embedded Python usage - 3
- InterSystems Interoperability - 3
- InterSystems IRIS BI - 3
- VSCode Plugin - 3
- FHIR Tools - 3
- Docker container usage - 2
- ZPM Package Deployment - 2
- Implement InterSystems Community Idea - 4
- Find a bug in Embedded Python - 2
- Article on Developer Community - 2
- The second article on Developer Community - 1
- Video on YouTube - 3
- First Time Contribution - 3
See the details below.

IRIS Vector Search - 3 points
Starting from the 2024.1 release, InterSystems IRIS contains a new technology, vector search, that allows you to build vectors over InterSystems IRIS data and perform searches over already-indexed vectors. Use it in your solution and collect 3 bonus points. Here is the demo project that leverages it.

Embedded Python - 3 points
Use Embedded Python in your application and collect 3 extra points. You'll need at least InterSystems IRIS 2021.2 for it.

InterSystems Interoperability - 3 points
Make a tool to enhance the developer experience or to maintain, monitor, or use the InterSystems Interoperability engine. Interoperability tool example. Interoperability adapter example. Basic Interoperability template. Python Interoperability template.

InterSystems IRIS BI - 3 points
Develop a tool that improves the developer experience or uses the InterSystems IRIS BI feature of the IRIS Platform. Examples: DeepSeeWeb, IRIS BI Utils, AnalyzeThis. IRIS BI Analytics template.

VSCode Plugin - 3 points
Develop a plugin for the Visual Studio Code editor that will help developers work with InterSystems IRIS. Examples: VSCode ObjectScript, CommentToObjectScript, OEX-VSCode-snippets-Example, irislab, vscode-snippets-template and more.
FHIR Tools - 3 points
Develop a tool that helps to develop and maintain FHIR applications in InterSystems IRIS, or that helps with FHIR enablement tools such as FHIR SQL Builder, FHIR Object Model, and InterSystems FHIR Server. Here is a basic InterSystems FHIR template and examples of FHIR-related tools.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a Docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect the bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your full-stack application so it can be deployed with the command zpm "install your-multi-model-solution" on IRIS with the ZPM client installed. ZPM client. Documentation.

Implement Community Opportunity Idea - 4 points
Implement any developer-tools-related idea from the InterSystems Community Ideas portal which has the "Community Opportunity" status. Only ideas created before the contest was announced will be accepted. This will give you 4 additional bonus points.

Find a bug in Embedded Python - 2 points
We want broader adoption of InterSystems Embedded Python, so we encourage you to report the bugs you face during the development of your Python application with IRIS so that we can fix them. Please submit the bug here in the form of an issue, including how to reproduce it. You can collect 2 bonus points for the first reproducible bug.

Article on Developer Community - 2 points
Write a brand-new article on the Developer Community that describes the features of your project and how to work with it. The minimum article length is 1000 characters. Collect 2 points for the article. *This bonus is subject to the discretion of the experts, whose decision is final.

The second article on Developer Community - 1 point
You can collect one more bonus point for a second article or a translation of the first article.
The second article should go into detail about a feature of your project. The third and further articles will not bring more points, but the attention will be all yours. *This bonus is subject to the discretion of the experts, whose decision is final.

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points. Examples.

First-Time Contribution - 3 points
Collect 3 bonus points if you are participating in InterSystems Open Exchange contests for the first time!

The list of bonuses is subject to change. Stay tuned! Good luck in the competition!

The Java language is very important for IRIS; many native features of IRIS use Java libraries, like the XML engine, but only Embedded Python has a bonus (+3) and Java doesn't have any! I suggest creating a bonus for Java as well. And I'm using NodeJS, and the official driver has so many issues, quite critical ones, that it is barely usable, but only bugs in Embedded Python are worth extra points. That does not look fair. NodeJS support requires a lot of attention too, not only Python. I agree with you
Announcement
Carmen Logue · Jul 22

InterSystems Reports Version 25.1 Release Announcement

InterSystems Reports version 25.1 is now available from the InterSystems Software Distribution site in the Components section. The software is labeled InterSystems Reports Designer and InterSystems Reports Server and is available for Mac OSX, Windows, and Linux operating systems. This new release brings enhancements and fixes from our partner, insightsoftware. InterSystems Reports 25.1 is powered by Logi Report version 25.1, which includes:
- support for dynamic subject construction in scheduled email distribution of reports
- the ability to easily switch between metric and imperial units of measure
- several improvements to ease of alignment when designing pixel-perfect page reports
For more information about these features and others, see the release notes available from insightsoftware. Note that the InterSystems Reports 25.1 installation requires JDK version 11 or 17. If you are using JDK 8, please upgrade prior to the InterSystems Reports installation. For more information about InterSystems Reports, see the InterSystems documentation and learning services content.
Announcement
Anastasia Dyubaylo · Jul 28

Time to vote in the InterSystems Developer Tools Contest 2025

Hi Community, It's voting time! Cast your votes for the best applications in our InterSystems Developer Tools Contest: 🔥 VOTE FOR THE BEST APPS 🔥 How to vote? Details below.

Experts nomination: An experienced jury from InterSystems will choose the best apps to nominate for the prizes in the Experts Nomination.

Community nomination: All active members of the Developer Community with a “trusted” status in their profile are eligible to vote in the Community nomination. To check your status, please click on your profile picture at the top right, and you will see it under your picture. To become a trusted member, you need to participate in the Community at least once.

Blind vote! The number of votes for each app will be hidden from everyone. We will publish the leaderboard in the comments section of this post daily. Experts may vote at any time, so it is possible that the places will change dramatically at the last moment. The same applies to bonus points. The order of projects on the contest page is determined by the order in which applications were submitted to the competition, with the earliest submissions appearing higher on the list.

P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:
- Sign in to Open Exchange - DC credentials will work.
- Make any valid contribution to the Developer Community - answer or ask questions, write an article, contribute applications on Open Exchange - and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application! Support the application you like!

Note: Contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so be sure to subscribe to application releases! So!
After the first day of the voting, we have:

Community Nomination, Top 5:
- InterSystems Testing Manager for VS Code by @John Murray
- typeorm-iris by @Dmitry Maslenniko
- toolqa by @André.DienesFriedrich, @Andre.LarsenBarbosa
- Global-Inspector by @Robert.Cemper1003
- dc-artisan by @Henrique, @José.Pereira, @henry
➡️ Voting is here.

Expert Nomination, Top 5:
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- IPM Explorer for VSCode by @John.McBrideDev
- PyObjectscript Gen by @Antoine.Dh
- dc-artisan by @Henrique, @José.Pereira, @henry
➡️ Voting is here.

Experts, we are waiting for your votes! 🔥 Participants, improve & promote your solutions!

Voting for the InterSystems Developer Tools Contest 2025 goes ahead! And here are the results at the moment:

Community Nomination, Top 5:
- InterSystems Testing Manager for VS Code by @John Murray
- toolqa by @André.DienesFriedrich, @Andre Larsen Barbosa
- typeorm-iris by @Dmitry Maslenniko
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- IPM Explorer for VSCode by @John McBride
➡️ Voting is here.

Expert Nomination, Top 5:
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- IPM Explorer for VSCode by @John McBride
- PyObjectscript Gen by @Antoine.Dh
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
➡️ Voting is here.

Hi Devs! At the moment, we can see the following results of the voting:

Community Nomination, Top 5:
- InterSystems Testing Manager for VS Code by @John Murray
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- toolqa by @André.DienesFriedrich, @Andre Larsen Barbosa
- IPM Explorer for VSCode by @John McBride
- typeorm-iris by @Dmitry Maslenniko
➡️ Voting is here.

Expert Nomination, Top 5:
- IPM Explorer for VSCode by @John McBride
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- iris4word by @Yuri.Gomes
➡️ Voting is here.
Developers, only a few days remain until the end of the voting period! Don't forget to check the Technology Bonuses; you can gain some extra points from them! Developers! Today is the last day of voting! The winners will be announced tomorrow! Community and experts, vote for the applications! And devs, check the technology bonuses to earn more points! Good luck, participants!
Announcement
Kate Lau · Aug 12

[Demo Video] Data Transformation Adventures with InterSystems IRIS

#InterSystems Demo Games entry

⏯️ Data Transformation Adventures with InterSystems IRIS

Navigating interoperability and healthcare can be an exciting adventure. In this demo, we will show you how InterSystems IRIS can be the perfect tool to get data, wherever it may lie and in whatever format, into a format of your choice, including FHIR! Once that is done, analytics is a piece of cake with the FHIR SQL Builder and DeepSeeWeb. Let the questing begin!

Presenters:
🗣 @Kate.Lau, Sales Engineer, InterSystems
🗣 @Merlin, Sales Engineer, InterSystems
🗣 @Martyn.Lee, Sales Engineer, InterSystems
🗣 @Bryan.Hoon, Sales Engineer, InterSystems

🗳 If you like this video, don't forget to vote for it in the Demo Games!
Announcement
Anastasia Dyubaylo · Jul 11

[Video] Discovering InterSystems Products - A High Level Overview

Hi Community, Enjoy the new video on InterSystems Developers YouTube: ⏯ Discovering InterSystems Products - A High Level Overview @ Global Summit 2024 Watch this video for a high-level overview of the full InterSystems product portfolio, especially if you're looking to understand how the various technologies connect and support real-world use cases. 🗣 Presenter: @joe.gallant, Senior Technical Trainer, InterSystems Tune in to the latest innovations and subscribe to catch what’s coming!
Article
Hannah Kimura · Jun 23

Transforming with Confidence: A GenAI Assistant for InterSystems OMOP

## INTRO

Barricade is a tool developed by ICCA Ops to streamline and scale support for FHIR-to-OMOP transformations for InterSystems OMOP. Our clients will be using InterSystems OMOP to transform FHIR data into the OMOP structure. As a managed service, our job is to troubleshoot any issues that come up in the transformation process. Barricade is the ideal tool to aid us in this process for a variety of reasons. First, effective support demands knowledge across FHIR standards, the OHDSI OMOP model, and InterSystems-specific operational workflows, all highly specialized areas. Barricade helps bridge knowledge gaps by leveraging large language models to provide expertise regarding FHIR and OHDSI. In addition, even when detailed explanations are provided to resolve specific transformation issues, that knowledge is often buried in emails or chats and lost for future use. Barricade can capture, reuse, and scale that knowledge across similar cases. Lastly, we often don’t have access to the source data, which means we must diagnose issues without seeing the actual payload, relying solely on error messages, data structure, and transformation context. This is exactly where Barricade excels: by drawing on prior patterns, domain knowledge, and contextual reasoning to infer the root cause and recommend solutions without direct access to the underlying data.

## IMPLEMENTATION OVERVIEW

Barricade is built off of Vanna.AI. So, while I use Barricade a lot in this article to refer to the AI agent we created, the underlying model is really a Vanna model. At its core, Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. We customized our Vanna model to not only generate SQL queries, but also to answer conceptual questions. Setting up Vanna is extremely easy and quick.
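Before getting into the setup, the retrieval-augmentation idea that Vanna builds on can be illustrated with a few lines of dependency-free Python. This is a toy sketch, not Vanna's actual implementation: the document list and the bag-of-words scoring below are invented stand-ins for a real vector store and embedding model.

```python
from collections import Counter

# Toy document store standing in for the vector database (illustrative only).
docs = [
    "The PERSON table is the OMOP table that stores demographic data.",
    "A FHIR Patient resource maps to the OMOP PERSON table.",
    "Run logs record the status of each FHIR-to-OMOP transformation run.",
]

def score(question: str, doc: str) -> int:
    """Bag-of-words overlap: a crude stand-in for embedding similarity."""
    q = Counter(question.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(question: str) -> str:
    """Return the stored document most similar to the question."""
    return max(docs, key=lambda d: score(question, d))

def build_prompt(question: str) -> str:
    """Augment the LLM prompt with retrieved context before generation."""
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}"

# The prompt now carries the relevant mapping document alongside the question.
prompt = build_prompt("Which OMOP table does a FHIR Patient resource map to?")
```

In Barricade, the retrieval step is backed by ChromaDB embeddings rather than word overlap, but the flow is the same: retrieve relevant context first, then let the LLM generate an answer grounded in it.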
For our setup, we used ChatGPT as the LLM, ChromaDB as our vector database, and Postgres as the database that stores the data we want to query (our run data -- data related to the transformation from FHIR to OMOP). You can choose from many different options for your LLM, vector database, and SQL database. Valid options are detailed here: [Quickstart with your own data](https://vanna.ai/docs/postgres-openai-standard-chromadb/).

## HOW THIS IS DIFFERENT THAN CHATGPT

ChatGPT is a standalone large language model (LLM) that generates responses based solely on patterns learned during its training. It does not access external data at inference time. Barricade, on the other hand, is built on Vanna.ai, a Retrieval-Augmented Generation (RAG) system. RAG enhances LLMs by layering dynamic retrieval over the model, which allows it to query external sources for real-time, relevant information and incorporate those results into its output. By integrating our Postgres database directly with Vanna.ai, Barricade can access:

- The current database schema.
- Transformation run logs.
- Internal documentation.
- Proprietary transformation rules.

This live access is critical when debugging production data issues, because the model isn't just guessing: it’s seeing and reasoning with real data. In short, Barricade merges the language fluency of ChatGPT with the real-time accuracy of direct data access, resulting in more reliable, context-aware insights.

## HOW BARRICADE WAS CREATED

### STEP 1: CREATE VIRTUAL ENVIRONMENT

This step creates all the files that make up Vanna.

```shell
virtualenv --python="/usr/bin/python3.10" barricade
source barricade/bin/activate
pip install ipykernel
python -m ipykernel install --user --name=barricade
jupyter notebook --notebook-dir=.
```

For barricade, we will be editing many of these files to customize our experience.
Notable files include:

- `barricade/lib/python3.13/site-packages/vanna/base/base.py`
- `barricade/lib/python3.13/site-packages/vanna/openai/openai_chat.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`

### STEP 2: INITIALIZE BARRICADE

This step imports the required packages and initializes the model. The minimum imports needed are:

```python
from vanna.chromadb import ChromaDB_VectorStore
from vanna.openai import OpenAI_Chat
```

NOTE: Each time you create a new model, make sure to remove all remnants of old training data or vector DBs. To use the code below, also import `os` and `shutil`:

```python
import os
import shutil

if os.path.exists("chroma.sqlite3"):
    os.remove("chroma.sqlite3")
    print("Unloading Vectors...")
else:
    print("The file does not exist")

base_path = os.getcwd()
for root, dirs, files in os.walk(base_path, topdown=False):
    if "header.bin" in files:
        print(f"Removing directory: {root}")
        shutil.rmtree(root)
```

Now it's time to initialize our model:

```python
class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        # initialize the vector store (this calls VannaBase.__init__)
        ChromaDB_VectorStore.__init__(self, config=config)
        # initialize the chat client (this also calls VannaBase.__init__,
        # but more importantly sets self.client)
        OpenAI_Chat.__init__(self, config=config)
        self._model = "barricade"

vn = MyVanna(config={'api_key': 'YOUR_OPENAI_API_KEY', 'model': 'gpt-4.1-mini'})
```

### STEP 3: TRAINING THE MODEL

This is where the customization begins. We begin by connecting to the Postgres tables that hold our run data. Fill in the arguments with the host, dbname, username, password, and port for your Postgres database:

```python
vn.connect_to_postgres(host='POSTGRES DATABASE ENDPOINT',
                       dbname='POSTGRES DB NAME',
                       user='',
                       password='',
                       port='')
```

From there, we trained the model on the information schemas.
```python
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
```

After that, we went even further and decided to send Barricade to college. We loaded Chapters 4-7 of The Book of OHDSI to give Barricade a good understanding of OMOP core principles. We loaded FHIR documentation, specifically explanations of the various FHIR resource types, which describe how healthcare data is structured, used, and interrelated, so Barricade understood FHIR resources. We loaded FHIR-to-OMOP mappings, so Barricade understood which FHIR resource maps to which OMOP table(s). And finally, we loaded specialized knowledge regarding edge cases that need to be understood for FHIR-to-OMOP transformations. Here is a brief overview of what that training looked like:

**FHIR resource example**:

```python
import requests
from bs4 import BeautifulSoup

def load_fhirknowledge(vn):
    url = "https://build.fhir.org/datatypes.html#Address"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**MAPPING example:**

```python
def load_mappingknowledge(vn):
    # Transform
    url = "https://docs.intersystems.com/services/csp/docbook/DocBook.UI.Page.cls?KEY=RDP_transform"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**The Book of OHDSI example**:

```python
def load_omopknowledge(vn):
    # Chapter 4
    url = "https://ohdsi.github.io/TheBookOfOhdsi/CommonDataModel.html"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**Specialized knowledge example**:

```python
def load_opsknowledge(vn):
    vn.train(documentation="Our business is to provide tools for generating evidence for the transformation Runs")
```

Finally, we train Barricade to understand the SQL queries that are
commonly used. This will help the system understand the context of the questions that are being asked:

```python
cdmtables = ["conversion_warnings", "conversion_issues", "ingestion_report",
             "care_site", "cdm_source", "cohort", "cohort_definition", "concept",
             "concept_ancestor", "concept_class", "concept_relationship",
             "concept_synonym", "condition_era", "condition_occurrence", "cost",
             "death", "device_exposure", "domain", "dose_era", "drug_era",
             "drug_exposure", "drug_strength", "episode", "episode_event",
             "fact_relationship", "location", "measurement", "metadata", "note",
             "note_nlp", "observation", "observation_period", "payer_plan_period",
             "person", "procedure_occurrence", "provider", "relationship",
             "source_to_concept_map", "specimen", "visit_detail",
             "visit_occurrence", "vocabulary"]

for table in cdmtables:
    vn.train(sql="SELECT * FROM omopcdm54." + table)
```

### STEP 4: CUSTOMIZATIONS (OPTIONAL)

Barricade is quite different from the Vanna base model, and these customizations are a big reason for that.

**UI CUSTOMIZATIONS**

To customize anything related to the UI (what you see when you spin up the Flask app and Barricade comes to life), edit `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py` and `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`. For Barricade, we customized the suggested questions and the logos.

To edit the suggested questions, modify the `generate_questions` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We added this:

```python
if hasattr(self.vn, "_model") and self.vn._model == "barricade":
    return jsonify(
        {
            "type": "question_list",
            "questions": [
                "What are the transformation warnings?",
                "What does a fhir immunization resource map to in omop?",
                "Can you retrieve the observation period of the person with PERSON_ID 61?",
                "What are the mappings for the ATC B03AC?",
                "What is the Common Data Model, and why do we need it for observational healthcare data?"
            ],
            "header": "Here are some questions you can ask:",
        }
    )
```

Note that these suggested questions will only come up if you set `self._model = "barricade"` in STEP 2.

To change the logo, you must edit `assets.py`. This is tricky, because `assets.py` contains the *compiled* JS and CSS code. To find the logo you want to replace, go to the browser that has the Flask app running and inspect the element. Find the SVG block that corresponds to the logo, and replace that block in `assets.py` with the SVG block of the new image you want.

We also customized the graph response. For relevant questions, a graph is generated using Plotly. The default prompts were generating graphs that were almost nonsensical. To fix this, we edited the `generate_plotly_figure` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`, specifically changing the prompt:

```python
def generate_plotly_figure(user: any, id: str, df, question, sql):
    table_preview = df.head(10).to_markdown()  # or .to_string(), or .to_json()
    prompt = (
        f"Here is a preview of the result table (first 10 rows):\n{table_preview}\n\n"
        "Based on this data, generate Plotly Python code that produces an appropriate chart."
    )
    try:
        question = f"{question}. When generating the chart, use these special instructions: {prompt}"
        code = vn.generate_plotly_code(
            question=question,
            sql=sql,
            df_metadata=f"Running df.dtypes gives:\n {df.dtypes}",
        )
        self.cache.set(id=id, field="plotly_code", value=code)
        fig = vn.get_plotly_figure(plotly_code=code, df=df, dark_mode=False)
        fig_json = fig.to_json()
        self.cache.set(id=id, field="fig_json", value=fig_json)
        return jsonify(
            {
                "type": "plotly_figure",
                "id": id,
                "fig": fig_json,
            }
        )
    except Exception as e:
        import traceback
        traceback.print_exc()
        return jsonify({"type": "error", "error": str(e)})
```

The last UI customization: we chose to provide our own `index.html`. You specify the path to your `index.html` file when you initialize the Flask app (otherwise it will use the default `index.html`).
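The prompt-construction trick above (feeding the LLM a small markdown preview of the result table) can be sketched standalone. Everything here is illustrative: `markdown_preview` is a hypothetical plain-Python stand-in for pandas' `df.head(10).to_markdown()`, and the rows are made-up run data.

```python
# Illustrative sketch only: markdown_preview is a hypothetical stand-in for
# pandas' df.head(10).to_markdown(); the rows are invented example run data.
def markdown_preview(rows: list[dict], limit: int = 10) -> str:
    """Format the first `limit` rows as a markdown table."""
    head = rows[:limit]
    cols = list(head[0].keys())
    lines = [
        "| " + " | ".join(cols) + " |",
        "| " + " | ".join("---" for _ in cols) + " |",
    ]
    for r in head:
        lines.append("| " + " | ".join(str(r[c]) for c in cols) + " |")
    return "\n".join(lines)

rows = [{"warning_type": "missing_code", "count": 12},
        {"warning_type": "bad_date", "count": 3}]

prompt = (
    f"Here is a preview of the result table (first 10 rows):\n"
    f"{markdown_preview(rows)}\n\n"
    "Based on this data, generate Plotly Python code that produces an appropriate chart."
)
print(prompt)
```

Keeping the preview to a handful of rows keeps the prompt small while still giving the model real column names and value shapes to base a chart on.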
**PROMPTING/MODEL/LLM CUSTOMIZATIONS**

We made *many* modifications in `barricade/lib/python3.13/site-packages/vanna/base/base.py`. Notable modifications include setting the temperature for the LLM dynamically (depending on the task), creating different prompts based on whether the question was conceptual or required SQL generation, and adding a check for hallucinations in the SQL generated by the LLM.

Temperature controls the randomness of the text generated by an LLM during inference. A lower temperature makes the highest-probability tokens more likely to be selected; a higher temperature increases the model's likelihood of selecting less probable tokens. Hallucinations are when the LLM makes up something that doesn't exist; in SQL generation, a hallucination may look like the LLM querying a column that doesn't exist, which renders the query unusable and throws an error. For tasks such as generating SQL, we therefore want a lower temperature.

We edited the `generate_sql` function to change the temperature dynamically, on a scale of 0 to 1: 0.6 for questions deemed conceptual, and 0.2 for questions that require generating SQL. For tasks such as generating graphs and summaries, the default temperature is 0.5; we chose 0.5 for those tasks because they require more creativity.

We added two helper functions: `decide_temperature` and `is_conceptual`. The keyword that marks a question as conceptual is the user saying "barricade" in their question.
```python
def is_conceptual(self, question: str):
    q = question.lower()
    return (
        "barricade" in q
        and any(kw in q for kw in ["how", "why", "what", "explain", "cause",
                                   "fix", "should", "could"])
    )

def decide_temperature(self, question: str) -> float:
    if "barricade" in question.lower():
        return 0.6  # Conceptual reasoning
    return 0.2  # Precise SQL generation
```

If a question is conceptual, SQL is not generated and the LLM response is returned directly. We specified this in the prompt for the LLM; the prompt differs depending on whether the question requires SQL generation or is conceptual. We do this in `get_sql_prompt`, which is called in `generate_sql`:

```python
def get_sql_prompt(self, initial_prompt: str, question: str,
                   question_sql_list: list, ddl_list: list, doc_list: list,
                   conceptual: bool = False, **kwargs):
    if initial_prompt is None:
        if not conceptual:
            initial_prompt = (
                f"You are a {self.dialect} expert. "
                "Please help to generate a SQL query to answer the question. "
                "Your response should ONLY be based on the given context and "
                "follow the response guidelines and format instructions. "
            )
        else:
            initial_prompt = (
                "Your name is barricade. If someone says the word barricade, "
                "it is not a part of the question, they are just saying your "
                "name. You are an expert in FHIR to OMOP transformations. "
                "Do not generate SQL. Your role is to conceptually explain "
                "the issue, root cause, or potential resolution. If the user "
                "mentions a specific table or field, provide interpretive "
                "guidance, not queries. Focus on summarizing, explaining, and "
                "advising based on known documentation and transformation patterns."
            )

    initial_prompt = self.add_ddl_to_prompt(
        initial_prompt, ddl_list, max_tokens=self.max_tokens
    )

    if self.static_documentation != "":
        doc_list.append(self.static_documentation)

    initial_prompt = self.add_documentation_to_prompt(
        initial_prompt, doc_list, max_tokens=self.max_tokens
    )

    if not conceptual:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. If the provided context is sufficient, please generate a valid SQL query without any explanations for the question. \n"
            "2. If the provided context is almost sufficient but requires knowledge of a specific string in a particular column, please generate an intermediate SQL query to find the distinct strings in that column. Prepend the query with a comment saying intermediate_sql \n"
            "3. If the provided context is insufficient, please explain why it can't be generated. \n"
            "4. Please use the most relevant table(s). \n"
            "5. If the question has been asked and answered before, please repeat the answer exactly as it was given before. \n"
            f"6. Ensure that the output SQL is {self.dialect}-compliant and executable, and free of syntax errors. \n"
        )
    else:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. Do not generate SQL under any circumstances. \n"
            "2. Provide conceptual explanations, interpretations, or guidance based on FHIR-to-OMOP transformation logic. \n"
            "3. If the user refers to warnings or issues, explain possible causes and common resolutions. \n"
            "4. If the user references a table or field, provide high-level understanding of its role in the transformation process. \n"
            "5. Be concise but clear. Do not make assumptions about schema unless confirmed in context. \n"
            "6. If the question cannot be answered due to lack of context, state that clearly and suggest what additional information would help. \n"
        )

    message_log = [self.system_message(initial_prompt)]
    for example in question_sql_list:
        if example is None:
            print("example is None")
        elif "question" in example and "sql" in example:
            message_log.append(self.user_message(example["question"]))
            message_log.append(self.assistant_message(example["sql"]))

    message_log.append(self.user_message(question))
    return message_log

def generate_sql(self, question: str, allow_llm_to_see_data=True, **kwargs) -> str:
    temperature = self.decide_temperature(question)
    conceptual = self.is_conceptual(question)
    question = re.sub(r"\bbarricade\b", "", question, flags=re.IGNORECASE).strip()

    if self.config is not None:
        initial_prompt = self.config.get("initial_prompt", None)
    else:
        initial_prompt = None

    question_sql_list = self.get_similar_question_sql(question, **kwargs)
    ddl_list = self.get_related_ddl(question, **kwargs)
    doc_list = self.get_related_documentation(question, **kwargs)

    prompt = self.get_sql_prompt(
        initial_prompt=initial_prompt,
        question=question,
        question_sql_list=question_sql_list,
        ddl_list=ddl_list,
        doc_list=doc_list,
        conceptual=conceptual,
        **kwargs,
    )
    self.log(title="SQL Prompt", message=prompt)
    llm_response = self.submit_prompt(prompt, temperature, **kwargs)
    self.log(title="LLM Response", message=llm_response)

    if 'intermediate_sql' in llm_response:
        if not allow_llm_to_see_data:
            return (
                "The LLM is not allowed to see the data in your database. "
                "Your question requires database introspection to generate the "
                "necessary SQL. Please set allow_llm_to_see_data=True to enable this."
            )
```
```python
        if allow_llm_to_see_data:
            intermediate_sql = self.extract_sql(llm_response, conceptual)
            try:
                self.log(title="Running Intermediate SQL", message=intermediate_sql)
                df = self.run_sql(intermediate_sql, conceptual)
                prompt = self.get_sql_prompt(
                    initial_prompt=initial_prompt,
                    question=question,
                    question_sql_list=question_sql_list,
                    ddl_list=ddl_list,
                    doc_list=doc_list + [
                        f"The following is a pandas DataFrame with the results of the intermediate SQL query {intermediate_sql}: \n"
                        + df.to_markdown()
                    ],
                    **kwargs,
                )
                self.log(title="Final SQL Prompt", message=prompt)
                llm_response = self.submit_prompt(prompt, **kwargs)
                self.log(title="LLM Response", message=llm_response)
            except Exception as e:
                return f"Error running intermediate SQL: {e}"

    return self.extract_sql(llm_response, conceptual)
```

Even with a low temperature, the LLM would still sometimes hallucinate. To further prevent hallucinations, we added a check before returning the SQL. We created a helper function, `clean_sql_by_schema`, which takes the generated SQL, finds any columns that do not exist, removes the offending SQL, and returns a cleaned version with no hallucinations. For cases where the SQL is something like `SELECT cw.id FROM omop.conversion_issues cw`, it uses `extract_alias_mapping` to map `cw` to the `conversion_issues` table. Here are those functions for reference:

```python
def extract_alias_mapping(self, sql: str) -> dict[str, str]:
    """
    Parse the FROM and JOIN clauses to build alias -> table_name.
    """
    alias_map = {}
    # pattern matches FROM schema.table alias or FROM table alias
    for keyword in ("FROM", "JOIN"):
        for tbl, alias in re.findall(
            rf'{keyword}\s+([\w\.]+)\s+(\w+)',
            sql,
            flags=re.IGNORECASE
        ):
            # strip schema if present:
            table_name = tbl.split('.')[-1]
            alias_map[alias] = table_name
    return alias_map

def clean_sql_by_schema(self, sql: str,
                        schema_dict: dict[str, list[str]]) -> str:
    """
    Returns a new SQL where each SELECT-line is kept only if its
    alias.column is in the allowed columns for that table.
    schema_dict: { 'conversion_warnings': [...], 'conversion_issues': [...] }
    """
    alias_to_table = self.extract_alias_mapping(sql)
    lines = sql.splitlines()
    cleaned = []
    in_select = False
    for line in lines:
        stripped = line.strip()
        # detect start/end of the SELECT clause
        if stripped.upper().startswith("SELECT"):
            in_select = True
            cleaned.append(line)
            continue
        if in_select and re.match(r'FROM\b', stripped, re.IGNORECASE):
            in_select = False
            cleaned.append(line)
            continue
        if in_select:
            # try to find alias.column in this line
            m = re.search(r'(\w+)\.(\w+)', stripped)
            if m:
                alias, col = m.group(1), m.group(2)
                table = alias_to_table.get(alias)
                if table and col in schema_dict.get(table, []):
                    cleaned.append(line)
                else:
                    # drop this line entirely
                    continue
            else:
                # no alias.column here (maybe a comma, empty line, etc)
                cleaned.append(line)
        else:
            cleaned.append(line)

    # re-join and clean up any dangling commas before FROM
    out = "\n".join(cleaned)
    out = re.sub(r",\s*\n\s*FROM", "\nFROM", out, flags=re.IGNORECASE)
    self.log("RESULTING SQL:" + out)
    return out
```

### STEP 5: Initialize the Flask App

Now we are ready to bring Barricade to life! The code below will spin up a Flask app, which lets you communicate with the AI agent. As you can see, we specified our own `index_html_path`, `subtitle`, `title`, and more. This is all optional.
These arguments are defined here: [web customizations](https://vanna.ai/docs/web-app/)

```python
from vanna.flask import VannaFlaskApp

app = VannaFlaskApp(
    vn,
    chart=True,
    sql=False,
    allow_llm_to_see_data=True,
    ask_results_correct=False,
    title="InterSystems Barricade",
    subtitle="Your AI-powered Transformation Cop for InterSystems OMOP",
    index_html_path=current_dir + "/static/index.html",
)
```

Once your app is running, you will see Barricade:

## RESULTS/DEMO

Barricade can help you gain a deeper understanding of OMOP and FHIR, and it can also help you debug transformation issues that you run into when transforming your FHIR data to OMOP. To showcase Barricade's ability, I will walk through a real-life example. A few months ago, we got an iService ticket with the following description:

To test Barricade, I copied this description into Barricade, and here is the response:

First, Barricade gave me a table documenting the issues:

Then, Barricade gave me a graph to visualize the issues:

And, most importantly, Barricade gave me a description of the exact issue that was causing problems AND told me how to fix it:

### READY 2025 demo Infographic

Here is a link to download the handout from our READY 2025 demo: Download the handout.
Article
Nikolay Solovyev · Jul 29

Dynamic Templated Emails in InterSystems IRIS with templated_email

Sending emails is a common requirement in integration scenarios, whether for client reminders, automatic reports, or transaction confirmations. Static messages quickly become hard to maintain and personalize. This is where the templated_email module comes in, combining InterSystems IRIS Interoperability with the power of Jinja2 templates.

## Why Jinja2 for Emails

Jinja2 is a popular templating engine from the Python ecosystem that enables fully dynamic content generation. It supports:

- Variables — inject dynamic data from integration messages or external sources
- Conditions (if/else) — change the content based on runtime data
- Loops (for) — generate tables, lists of items, or repeatable sections
- Filters and macros — format dates, numbers, and reuse template blocks

Example of a simple email body template:

```
Hello {{ user.name }}!
{% if orders %}
You have {{ orders|length }} new orders:
{% for o in orders %}
- Order #{{ o.id }}: {{ o.amount }} USD
{% endfor %}
{% else %}
You have no new orders today.
{% endif %}
```

With this approach, your email content becomes dynamic, reusable, and easier to maintain.

## Email Styling and Rendering Considerations

Different email clients (Gmail, Outlook, Apple Mail, and others) may render the same HTML in slightly different ways. To achieve a consistent appearance across platforms, it is recommended to:

- Use inline CSS instead of relying on external stylesheets
- Prefer table-based layouts for the overall structure, as modern CSS features such as flexbox or grid may not be fully supported
- Avoid scripts and complex CSS rules, as many email clients block or ignore them
- Test emails in multiple clients and on mobile devices

## Capabilities of the templated_email Module

The templated_email module is designed to simplify creating and sending dynamic, professional emails in InterSystems IRIS.
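As a quick aside, the order template shown earlier can be rendered with the jinja2 package in standalone Python (outside IRIS); the user and order values below are hypothetical:

```python
# Standalone Jinja2 render of the order template shown earlier.
# The user and order data are hypothetical example values.
from jinja2 import Template

tpl = Template(
    "Hello {{ user.name }}!\n"
    "{% if orders %}"
    "You have {{ orders|length }} new orders:\n"
    "{% for o in orders %}- Order #{{ o.id }}: {{ o.amount }} USD\n{% endfor %}"
    "{% else %}"
    "You have no new orders today."
    "{% endif %}"
)

body = tpl.render(
    user={"name": "Alice"},
    orders=[{"id": 1, "amount": 250}, {"id": 2, "amount": 99}],
)
print(body)
```

The same data with an empty `orders` list would render the `{% else %}` branch instead, which is exactly what makes one template serve every recipient.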
Key capabilities of the module include:

- Render Jinja2 templates to generate dynamic email content
- Flexible usage — can be used both from Interoperability productions and directly from ObjectScript code
- Built-in Business Operation — a ready-to-use operation for production scenarios with SMTP integration
- Automatic inline styling — converts CSS styles into inline attributes for better email client compatibility
- Markdown support — allows creating clean, maintainable templates that are rendered to HTML before sending

These features make it easy to produce dynamic, well-formatted emails that display consistently across clients.

## Module templated_email

The module provides the following key components:

- TemplatedEmail.BusinessOperation — a Business Operation to send templated emails via SMTP
- TemplatedEmail.EmailRequest — an Interoperability message for triggering email sends, with fields for templates, data, and recipients
- TemplatedEmail.MailMessage — an extension of %Net.MailMessage that adds:
  - applyBodyTemplate() — renders and sets the email body using a Jinja2 + Markdown template
  - applySubjectTemplate() — renders and sets the email subject using a Jinja2 template

This allows you to replace %Net.MailMessage with TemplatedEmail.MailMessage in ObjectScript and gain dynamic templating:

```objectscript
set msg = ##class(TemplatedEmail.MailMessage).%New()
do msg.applySubjectTemplate(data,"New order from {{ customer.name }}")
do msg.applyBodyTemplate(data,,"order_template.tpl")
```

More details and usage examples can be found in the GitHub repository.

## Templates Are Not Just for Emails

The template rendering methods can also be used for generating HTML reports, making it easy to reuse the same templating approach outside of email.

With templated_email, you can create dynamic, professional, and maintainable emails for InterSystems IRIS — while leveraging the full flexibility of Jinja2 templates.