The main goal of this article is to demonstrate the use of InterSystems IRIS for Health for REST FHIR interoperability between multiple applications. In this use case, an initiating application makes a REST call to IRIS for Health (which acts purely as a passthrough for REST calls) to retrieve FHIR data from an Oracle Health R4 FHIR repository. Ideally, this simplifies the syntax for calling the Oracle Health APIs.
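To illustrate the calling pattern, here is a minimal sketch of what a client request might look like, assuming the passthrough exposes a standard FHIR R4 resource path; the host, port, route, and resource id below are illustrative assumptions, not the article's actual configuration:

```python
# Hedged sketch: the client calls IRIS for Health, which forwards the request
# to the Oracle Health R4 FHIR repository. Host, port, route, and resource id
# are assumptions for illustration only.
import requests

resp = requests.get(
    "http://iris-host:52773/fhir/r4/Patient/12345",  # assumed passthrough endpoint
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
patient = resp.json()
print(patient.get("name"))
```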
Hello,
I'm trying to add another segment to an HL7 MDM message, specifically OBXgrp(1).OBX. The addition itself works: when I look at the message in the trace, the segment with its content can be seen. However, it is not recognized as an OBX segment.
Build Map Status = 'ERROR <EnsEDI>ErrMapRequired: Missing required OBXgrp(1) element at segment 6'
'ERROR <EnsEDI>ErrMapSegUnrecog: Unrecognized Segment 6:'' found after segment 5 (TXA)'
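The error text (segment 6 with an empty name) suggests the appended segment never got its segment ID, so the map cannot match it to OBXgrp(1).OBX. As a point of comparison, here is a minimal embedded Python sketch that writes the new OBX through its full virtual-document path, which also fills in the segment ID; the DocType, field numbers, and values are illustrative assumptions, and in a DTL the same idea is expressed as set actions on target.{OBXgrp(1).OBX:...}:

```python
# Hedged sketch: populate the appended OBX via its typed path so the MDM
# DocType map recognizes it. "target" is an EnsLib.HL7.Message proxy; the
# values are placeholders.
target.SetValueAt("1",             "OBXgrp(1).OBX:1")     # OBX-1 Set ID
target.SetValueAt("TX",            "OBXgrp(1).OBX:2")     # OBX-2 Value Type
target.SetValueAt("Document text", "OBXgrp(1).OBX:5(1)")  # OBX-5 Observation Value
```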
Hey Community,
More than 1000 applications are already available for everyone to download on the InterSystems Open Exchange.
And now it's time to announce the best developers and the most downloaded apps of 2024!
Let's take a closer look at our heroes and their apps:
Hello, I'm trying to implement this service and to convert between HL7v2 and FHIR in both directions.
https://github.com/grongierisc/iris-healthtoolkit-service/
I downloaded this service and tried to run docker-compose, but I get some errors.
Iris Healthtoolkit Service
Easy to use HL7v2 to FHIR, CDA to FHIR, FHIR to HL7v2 as a Service.
The aim of this project is to provide a REST API that can easily convert various health formats. Post the desired format in the REST body and get the answer in the new format.
:fire: Official Version : https://aws.amazon.com/marketplace/pp/prodview-q7ryewpz75cq2 :fire:
:tv: Video : https://youtu.be/lr2B7zSFkds :tv:
Install
Clone this repository
git clone https://github.com/grongierisc/iris-healthtoolkit-service.git
Docker
docker-compose up --build -d
Usage
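As a rough illustration of the "post one format, get another back" idea, here is a hedged sketch of a client call; the route, port, and sample message are assumptions for illustration, so check the repository's README or OpenAPI spec for the actual endpoints:

```python
# Hedged sketch: posting an HL7v2 message and reading back the converted FHIR
# resource. Route, port, and the sample message are illustrative assumptions.
import requests

hl7 = (
    "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202501010000||ADT^A01|1|P|2.5\r"
    "PID|1||12345||DOE^JOHN\r"
)
resp = requests.post(
    "http://localhost:8080/api/hl7v2/to/fhir",  # assumed route and port
    data=hl7.encode("utf-8"),
    headers={"Content-Type": "text/plain"},
)
print(resp.status_code)
print(resp.json())
```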
You need to install the application first. If it is not installed, please refer to the previous article.
Application demonstration
After successfully running the IRIS image vector search application, some data needs to be stored to support image retrieval, since the library is not initialized with any images.
Image storage
Firstly, drag and drop the image or click the upload icon, select the image, and click the upload button to upload and vectorize it. This process may be a bit slow.
This process uses embedded Python to call the CLIP model and vectorize the image into 512-dimensional vector data.
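For context, here is a hedged sketch of what that vectorization step typically looks like; the exact checkpoint and storage code used by the application may differ, so treat the model name and file path as assumptions:

```python
# Hedged sketch: embedding an image into a 512-dimensional vector with CLIP
# (ViT-B/32). The checkpoint name and image path are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("uploaded_image.jpg")
inputs = processor(images=image, return_tensors="pt")
features = model.get_image_features(**inputs)   # tensor of shape (1, 512)
vector = features[0].detach().tolist()          # 512 floats, ready to store
print(len(vector))
```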
Is it possible to get the length of queue for a production using Python code?
I'm using embedded Python at the moment.
I'd like to use the Python external language server later, but the Python external server will not start in my environment.
If it is possible to query the production queue length programmatically, please advise how.
It would also be nice to show the number of messages processed per second, if IRIS keeps track of this.
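A hedged sketch of one way to read a queue depth from embedded Python, assuming the Ens.Queue class method GetCount() and an example host name; per-second throughput would come from the interoperability Activity Volume statistics rather than this call:

```python
# Hedged sketch (embedded Python inside IRIS): Ens.Queue.GetCount() returns
# the number of messages currently waiting on a named queue. The host name is
# an example; use your own business service/process/operation name.
import iris

depth = iris.cls("Ens.Queue").GetCount("MyBusinessProcess")
print("Queue depth for MyBusinessProcess:", depth)
```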
Hey Community,
Enjoy the new video on InterSystems Developers YouTube:
I am looking for a link to documentation that shows what responses are available to commands like:
^%GI, ^%GO, ^%RO, ^%RIMF
I tried searching docs.intersystems.com: "https://docs.intersystems.com/iris20243/csp/docbook/Doc.Results.cls?doc…"
So far the only suggestion has been to go to the code and figure it out.
While working on getting JSON support for some Python libraries, I discovered some capabilities IRIS provides (a short example follows the list below):
- JSON_OBJECT - A conversion function that returns data as a JSON object.
- JSON_ARRAY - A conversion function that returns data as a JSON array.
- IS JSON - Determines if a data value is in JSON format.
- JSON_TABLE - A function that maps JSON into a table that can be used in an SQL query.
- JSONPath support - JSONPath is a query language for selecting values in JSON.
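Here is that example: a hedged sketch exercising JSON_OBJECT and JSON_ARRAY from embedded Python via iris.sql; the table and column names are illustrative assumptions, so substitute your own schema:

```python
# Hedged sketch: calling the IRIS JSON SQL functions from embedded Python.
# Sample.Person, Name, and Age are illustrative; substitute your own table.
import iris

rs = iris.sql.exec(
    "SELECT JSON_OBJECT('name':Name, 'age':Age) AS AsObject, "
    "       JSON_ARRAY(Name, Age) AS AsArray "
    "FROM Sample.Person WHERE Age > 60"
)
for as_object, as_array in rs:
    print(as_object, as_array)
```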
✓ 7,069 downloads in 2024
✓ 1,029 applications all time
✓ 38,243 downloads all time
✓ 2,981 developers joined
We are using a DTL transformation to take HL7 and transform it into custom XML, but the nodes in the resulting XML are appearing out of sequence and therefore failing validation against the schema.
The XSD schema for the XML looks fine when imported into Ensemble; the root node in the XSD looks like this: [screenshot]
And it shows in Ensemble like this: [screenshot]
The transformation looks like this, and we can see the text from the trace elements at lines 5, 12 and 19 appear in the correct order in the event log:
However, the resulting XML has the <allergies> nodes before the <patientNotes> nodes:
We are using a DTL transformation to take HL7 and transform it into custom XML (the XML is a virtual document, held in an EnsLib.EDI.XML.Document object). The schema specifying the format of the XML says one element should occur no more than 24 times (maxOccurs="24" in the XSD schema). However, the transformation that produces one such element always produces 24 elements, all but the last one blank, when tested stand-alone. And when the sub-transform producing one element is incorporated into the full transformation to produce the whole XML object, it produces the wrong output. Is this a bug in the
Hi,
We are currently switching from Studio to VSCode and central GIT with Serverside Development and have a few start problems.
I have set up a system with Git.
The baseline commit including remote push works.
But now I have a problem in VSCode. When I create and compile a new class, it saves correctly. But if I then make further changes to the class, it saves the class and reloads it with the previous version, so the changes I made are gone again.
I have deactivated CompileOnSave without success.
The log shows the following
Hi Community!
We hope you know that when you have an interesting idea about InterSystems products or services, you should publish it on the Ideas Portal. And those Developer Community members who implement the proposed ideas are added to the "Hall of Fame". Want to get accepted to the InterSystems Ideas Hall of Fame? Read on to learn how you can get on the list.
Hi All,
I don't know if I am doing something wrong or maybe a setting is off.
But while in Cache Studio, under the Edit menu, when I use "Find in Files . . ." nothing is returned, regardless of what I am trying to find.
Weird.
Hello,
We are very keen to gather per-namespace and per-business-component utilization of an InterSystems Caché server.
For example, I have a PRD server whose CPU utilization is at maximum all the time, and I want to know which namespace and which business host (Service/Operation/Process) is using how much CPU and memory.
** I can get the CPU and memory utilization per cache.exe process and PID, but I am not able to get the namespace and ConfigurationName to which that particular PID belongs.
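A hedged sketch of one piece of this: %SYS.ProcessQuery exposes one row per Caché/IRIS process, including its OS PID and current namespace, so joining it to the OS-level CPU/memory figures you already collect gives per-namespace attribution (privileges permitting). Mapping a PID to a specific production host name would additionally need interoperability job metadata, which is not covered here. The query is shown from embedded Python, but the same SQL works from any client:

```python
# Hedged sketch: list server processes with their OS PID and namespace via
# %SYS.ProcessQuery, then join the PIDs to CPU/memory data collected at the
# OS level. Requires sufficient privileges to read this system table.
import iris

rs = iris.sql.exec(
    "SELECT Pid, NameSpace, Routine, UserName FROM %SYS.ProcessQuery"
)
for pid, namespace, routine, user in rs:
    print(pid, namespace, routine, user)
```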
Thank you,
Our target system requires a blank ROL segment whenever the ROL segment does not exist in an A08 from our source system. I'm not sure if this can be done with the DTL GUI tools or if some SQL or other code is required (coding is not in my wheelhouse). Here is what I've tried, but it does not yield the "ROL|1|" I'd like to create. Any help would be greatly appreciated.
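For what it's worth, here is a hedged sketch of the underlying logic, shown in Python purely for illustration; in a DTL this would be an if action testing whether the source has a ROL segment, with a set action writing ROL-1 on the target. The segment path assumes ROL sits at the top level of your ADT DocType and may need adjusting:

```python
# Hedged sketch: if the inbound A08 has no ROL segment, write a minimal ROL
# into the outbound message. Paths depend on the DocType in use.
if source.GetValueAt("ROL(1)") == "":
    # ROL-1 only; set further fields if the receiver requires trailing delimiters
    target.SetValueAt("1", "ROL(1):1")
```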
Here is the input message that I'd like to add the ROL segment to:
I am a volunteer at a nonprofit that is attempting to connect to a product that uses InterSystems Cache (Clinisys LIMS). I have not used this database before, but I have used many others (MS-SQL, Oracle, etc.). I am hoping IRIS is compatible enough with Cache for this project.
I downloaded IRIS. I learned about Terminal [IRIS]. Using it I was able to create a new table, insert data, and retrieve it. I also used the Management Portal to query the new table. It also returned the data.
HealthShare Patient Index – Virtual February 19-21, 2025 9:00am-5:00pm US-Eastern Time (EST)
Hey Community,
Enjoy the new video on InterSystems Developers YouTube:
I'm looking for a new position. Part time/full time/temp. I'm flexible.
I have 20 years healthcare IT background - most recently with a startup where I built/supported interfaces in Mirth.
Even a temp project that needs someone to test or help with workflow. I have a background in end-to-end integration between systems (Saas/APIs, etc)
We have a vendor that can only send us a uuencoded PDF. Is there a way to decode it in Ensemble?
Thanks
Scott Roth
The Ohio State University Wexner Medical Center
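One hedged possibility, if the decoding step can run in Python (embedded Python on IRIS, or an external script called from the production): decode the payload with Python's standard library. This is only a sketch under that assumption; file names are illustrative, and older Caché-based Ensemble would need an ObjectScript or call-out approach instead:

```python
# Hedged sketch: decoding a uuencoded payload with Python's standard library.
# binascii.a2b_uu() decodes one uuencoded line at a time; the begin/end lines
# only carry the suggested file name and mode. File names are illustrative.
import binascii

pdf_bytes = bytearray()
with open("vendor_payload.uu") as f:
    for line in f:
        stripped = line.strip()
        if not stripped or stripped.startswith("begin ") or stripped in ("end", "`"):
            continue
        pdf_bytes.extend(binascii.a2b_uu(line))

with open("decoded.pdf", "wb") as out:
    out.write(bytes(pdf_bytes))
```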
Hey Community,
Enjoy the new video on InterSystems Developers YouTube:
⏯ Creating an InterSystems IRIS Cross Functional App in 150 Lines of Code @ Global Summit 2024
I am currently experiencing frustration trying to authenticate an Active Directory account through JDBC as the hospital system moves from on-prem SQL Server to Azure SQL Server with Microsoft Entra authentication.
Microsoft cannot give me a straight answer about what is required, from a JDBC standpoint, to authenticate from a Linux environment.
When you deploy code from a repo, class (file) deletion might not be reflected by your CICD system.
Here's a simple one-liner to automatically delete all classes in a specified package that have not been imported. It can be easily adjusted for a variety of adjunct tasks:
set packages = "USER.*,MyCustomPackage.*"
set dir = "C:\InterSystems\src\"
set sc = $SYSTEM.OBJ.LoadDir(dir,"ck", .err, 1, .loaded)
set sc = $SYSTEM.OBJ.Delete(packages _ ",'" _ $LTS($LI($LFS(loaded_",",".cls,"), 1, *-1), ",'"), "/generated=0", .err2)
The first command compiles the classes and also returns a list of the loaded classes. The second command deletes all classes from the specified packages, except for the classes loaded just before that. Any generated classes are also skipped, since they won't be in the loaded list.
We connect to MS SQL Databases using the Microsoft JDBC Driver 12.2 using the following URL
jdbc:sqlserver://<server>:<port>;database=<database name>;trustServerCertificate=true;integratedSecurity=true;authenticationScheme=NTLM;domain=osumc;authentication=NotSpecified
They want to migrate the databases to the Azure cloud, and in doing so we need the authentication to change to go through Microsoft Entra. I was given the following URL
I'm not sure how many of you connect to MS SQL to execute queries, stored procedures, etc., but our health system has many different MS SQL-based databases that we use within the Interoperability environment for various reasons.
With the push to move from on-prem to the cloud, we ran into some difficulties with our SQL Gateway connections and with knowing how to configure them to use Microsoft Entra for Active Directory authentication.


