I have been trying to track down a memory usage issue we are seeing in our TEST environment.
We have had several BPs for years now that take an HL7 message, parse it apart, and make calls to a custom EnsLib.SQL.OutboundAdapter to execute Insert/Select/Update/Delete stored procedures against a MS SQL database over a JDBC connection. We are using Microsoft's JDBC 12.2 driver for this.
What we are seeing is that IRIS.WorkQueue globals are being defined for these calls, but the IRIS.WorkQueue entries are not being cleaned up afterwards and are taking up large amounts of memory.
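For context, a minimal sketch of the kind of business operation described above; the message class, procedure name and parameter I/O string are placeholders, and the ExecuteProcedure signature should be checked against the EnsLib.SQL.OutboundAdapter documentation for your version:

Class Demo.SQLProcOperation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.SQL.OutboundAdapter";

Property Adapter As EnsLib.SQL.OutboundAdapter;

Method UpdatePatient(pRequest As Demo.UpdatePatientRequest, Output pResponse As Ens.Response) As %Status
{
    Set pResponse = ##class(Ens.Response).%New()
    // call a stored procedure over the JDBC connection; "i" marks the single parameter as input-only
    Quit ..Adapter.ExecuteProcedure(.tSnapshots,.tOutParms,"{call dbo.UpdatePatient(?)}","i",pRequest.MRN)
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Demo.UpdatePatientRequest">
    <Method>UpdatePatient</Method>
  </MapItem>
</MapItems>
}

}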
I’m working on an InterSystems IRIS production that needs to call an external API using OAuth client credentials (client_id and client_secret). For security reasons, I must pass these credentials via environment variables in my Docker container.
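For illustration, a minimal sketch of reading such variables with $SYSTEM.Util.GetEnviron inside a production component; the class name and the OAUTH_CLIENT_ID / OAUTH_CLIENT_SECRET variable names are placeholders:

Class Demo.OAuthEnvOperation Extends Ens.BusinessOperation
{

Property ClientId As %String;

Property ClientSecret As %String;

Method OnInit() As %Status
{
    // read credentials injected at container start, e.g. docker run -e OAUTH_CLIENT_ID=... -e OAUTH_CLIENT_SECRET=...
    Set ..ClientId = $SYSTEM.Util.GetEnviron("OAUTH_CLIENT_ID")
    Set ..ClientSecret = $SYSTEM.Util.GetEnviron("OAUTH_CLIENT_SECRET")
    If (..ClientId = "") || (..ClientSecret = "") {
        Quit $$$ERROR($$$GeneralError,"OAuth client credentials not found in the environment")
    }
    Quit $$$OK
}

}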
Sometimes we need more than one IRIS container running at the same time. Since containers always show the same instance name in the Management Portal, it is hard to distinguish the Management Portals of the different instances. Looking for a way to make this easier, I thought I could change the instance name shown in the Management Portal. I tried "iris rename" in different ways, but could only change the configuration name shown by "iris list", not the name in the Management Portal.
The 2024.1.4 and 2023.1.6 maintenance releases of InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect are now Generally Available (GA).
First, let me wish the developer community a Happy New Year! We hope to bring you many good things this year, and today I'd like to introduce the latest version of the InterSystems Language Server extension for VS Code. Most Language Server improvements are experienced via the ObjectScript extension UI, so you may not be aware of the many improvements in areas like IntelliSense and hover information that were released throughout 2024.
To transfer data between production components I actively use messages of the Ens.StreamContainer class and its descendants. In many cases the message content is not visualised (the 'Body' tab contains a table with a list of selected message properties, but the 'Contents' tab is empty). Response messages are never visualised, and request messages are visualised roughly half of the time. What do I need to do to ensure that messages are always visualised?
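A typical way to build and send such a message, as a minimal sketch run inside a business service or process; the stream class, target name and content are placeholders:

// build a character stream and wrap it in an Ens.StreamContainer request
Set stream = ##class(%Stream.GlobalCharacter).%New()
Do stream.Write("<payload>example content</payload>")
Set request = ##class(Ens.StreamContainer).%New(stream)
// send it on to the next production component
Set sc = ..SendRequestAsync("Target.Component", request)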
Is anyone using DICOM Interoperability in IRIS for Health configured in Mirror?
I'm asking because I'm not sure how to handle where the DICOM messages are stored.
For some reason DICOM uses the filesystem to store raw messages; the directory used can be configured in the StorageLocation production setting. Obviously this is a big issue if/when a mirror failover occurs.
Unfortunately in IRIS it's not possible to change the DICOM storage from file stream to global stream.
We are pretty new to Ensemble, and we are considering using a default setup for our production. Let me explain the situation. We have one sender that sends HL7 ADT messages to our system, and 60+ other systems that need to receive those ADT messages. We were thinking of a few ways to set this up.
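The most obvious setup is one HL7 service feeding one router, with one operation per receiving system. A heavily trimmed sketch of such a production definition, where all item names, addresses and the rule class are placeholders:

Class Demo.ADT.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.ADT.Production">
  <Item Name="ADT_In" ClassName="EnsLib.HL7.Service.TCPService">
    <Setting Target="Host" Name="TargetConfigNames">ADT_Router</Setting>
  </Item>
  <Item Name="ADT_Router" ClassName="EnsLib.HL7.MsgRouter.RoutingEngine">
    <Setting Target="Host" Name="BusinessRuleName">Demo.ADT.RoutingRule</Setting>
  </Item>
  <Item Name="To_SystemA" ClassName="EnsLib.HL7.Operation.TCPOperation">
    <Setting Target="Adapter" Name="IPAddress">systema.example.org</Setting>
    <Setting Target="Adapter" Name="Port">5000</Setting>
  </Item>
  <!-- ...one EnsLib.HL7.Operation.TCPOperation item per remaining receiving system... -->
</Production>
}

}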
We're continuing to improve and teach our Developer Community AI, and in this iteration, we've added descriptions of Open Exchange Applications to the knowledge base!
This new feature allows DC AI to search app descriptions when answering your questions, so you will get better answers to questions that require programming knowledge, not just general information.
To add this knowledge to your search, just tick the checkbox Open Exchange Applications:
February 19, 2025 – Alert: SQL Queries Returning Wrong Results
InterSystems has corrected two issues that can cause a small number of SQL queries to return incorrect results. In addition, InterSystems has corrected an inconsistency in date/time datatype handling that may lead to different, unexpected – yet correct – results for existing applications that rely on the earlier, inconsistent behavior.
DP-436825: SQL Queries with Lateral Join May Return Wrong Results
I finally figured out how to get a JWT token using set x = ##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("medbank","openid fhirUser",.prop,.err).
I also found the iris-fhir-client app on Open Exchange. I registered the Epic sandbox server, but I cannot list resources. I suspect I need to integrate authorization/authentication. How do I do this with irisfhirclient.py?
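For reference, the equivalent call from ObjectScript would look something like the sketch below; the server name, SSL configuration name and resource path are placeholders for the Epic sandbox values:

// get (or refresh) the access token for the registered "medbank" client
Set sc = ##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("medbank","openid fhirUser",.prop,.err)

// build the FHIR request and let IRIS add the Authorization: Bearer header from the cached token
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "fhir.epic.example.com"
Set req.Https = 1, req.SSLConfiguration = "EpicSSL"
Set sc = ##class(%SYS.OAuth2.AccessToken).AddAccessToken(req,"header","EpicSSL","medbank")

// list a few Patient resources
Set sc = req.Get("/api/FHIR/R4/Patient?family=Smith")
Write req.HttpResponse.Data.Read(32000)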
In this post, we'll discuss our project that leverages Pulumi and Docker Compose to automate the deployment of InterSystems WSGI applications on AWS. The focus is on simplicity and efficiency, using pre-built infrastructure templates for provisioning and scaling AWS resources.
I am working on implementing OAuth 2.0 authentication in InterSystems IRIS and need to correctly define a CSRF token that will be validated by OAuth.Response. However, I am having trouble finding a clear method to configure the CSRF token correctly.
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems IRIS
Part 3 – REST and Interoperability
Now that we have finished configuring the SQL Gateway, accessed the data in the external database via Python, and set up our vectorized base, we can perform some queries. In this part of the article we will use an application developed with CSP, HTML and JavaScript that accesses an integration in IRIS, which performs the similarity search over the data, sends it to the LLM and finally returns the generated SQL. The CSP page calls a REST API in IRIS that receives the data to be used in the query and invokes the integration. For more information about REST in IRIS, see the documentation available at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
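As a rough illustration of this flow (the dispatch class, business service name and JSON shape below are simplified placeholders, not the project's actual code), the REST endpoint can hand the request to the production via Ens.Director:

Class Demo.VectorSearch.REST Extends %CSP.REST
{

Parameter CONTENTTYPE = "application/json";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/query" Method="POST" Call="RunQuery"/>
</Routes>
}

ClassMethod RunQuery() As %Status
{
    Set sc = $$$OK
    Try {
        // read the JSON body posted by the CSP/JavaScript page
        Set body = {}.%FromJSON(%request.Content)
        // hand the question to a business service defined in the production
        Set sc = ##class(Ens.Director).CreateBusinessService("Demo.VectorSearch.Service",.service)
        Quit:$$$ISERR(sc)
        Set sc = service.ProcessInput(body.question,.generatedSQL)
        Quit:$$$ISERR(sc)
        // assuming the service returns the generated SQL as a plain string in its output argument
        Write {"sql":(generatedSQL)}.%ToJSON()
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Quit sc
}

}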
As I have been working with IRIS (née Caché) for some time now, I decided to take a look at how I manage things like functions and scripts that may not belong to a single namespace or environment, but should still have a home.
I have a business process that adds data to a global variable on receipt of an HL7 message, and a scheduled task that executes a class method defined within the same business process that removes data from the same global variable. With this in mind it makes sense to consider concurrency and therefore make use of the LOCK command.
My first question is whether this is actually necessary?
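If it is, a minimal sketch of the idea is an incremental lock with a timeout around each access to the shared global; the global name and the 5-second timeout are arbitrary here:

ClassMethod AddToQueue(messageData As %String) As %Status
{
    // incremental lock with a 5-second timeout; $Test reports whether the lock was granted
    Lock +^MyAppQueue("pending"):5
    If '$Test Quit $$$ERROR($$$GeneralError,"Timed out waiting for lock on ^MyAppQueue")
    Set ^MyAppQueue("pending",$Increment(^MyAppQueue("pending"))) = messageData
    // release exactly the lock reference that was taken
    Lock -^MyAppQueue("pending")
    Quit $$$OK
}

The scheduled task's removal method would take the same lock reference before killing nodes.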
https://www.youtube.com/embed/Z-x6Rk7Bi3I
I created and then compiled a routine using the %Routine class:
Set routine = ##class(%Routine).%New(fileName_"."_extension)
Do routine.Write(parsedRule)
//Set status = routine.Save()
Set status = routine.SaveStream(,.refresh)
If $$$ISERR(status) {
    Throw ##class(%Exception.StatusException).CreateFromStatus(status)
}
Date & Time: Monday, May 12 – 9 am EDT | 3 pm CEST
https://www.youtube.com/embed/KjN_657T9ZY?si=rfFrn6ywMrAciuXI