We have great news for those of you who are interested in what's happening at InterSystems Ready 2025 but couldn't attend in person. All the keynotes are being streamed! Moreover, if a keynote falls at an inopportune time, you can watch the recording afterwards.
Keynotes from Day 1 are already ready 😉
https://www.youtube.com/embed/mbqKoXBB114
We have an OAuth server configured as an identity provider, and we have an external application (from another provider) that connects correctly with OAuth.
Due to the needs of the project, what we want to do is the following:
https://www.youtube.com/embed/A4qAbMMQMaA
We're having an issue with some messages being sent to a downstream system from HealthShare. Part of the free text in the NTE segment displays with special characters on the downstream system's front end, even though those characters are not present in the HL7 message we send them. We are guessing that this is a result of some of the text being copied and pasted from something like MS Word. The '?' on their system is just representing an invalid character.
Is anyone aware of a way to prevent this from occurring? We are sending the messages using the 'cp1252' encoding.
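One common workaround, sketched below with a hypothetical helper method, is to transliterate the Word-style "smart" punctuation to plain ASCII (and drop anything else outside the printable ASCII range) before the NTE text is written to the outbound message:

    /// Map common MS Word "smart" punctuation to plain ASCII before the NTE text
    /// is written to the outbound message; anything else outside printable ASCII
    /// is dropped rather than being passed through as an invalid character.
    ClassMethod CleanNoteText(pText As %String) As %String
    {
        // Curly quotes, en/em dashes and ellipsis as stored internally (Unicode)
        Set tFrom = $CHAR(8216,8217,8220,8221,8211,8212,8230)
        Set tTo   = "''""""--."
        Set tText = $TRANSLATE(pText, tFrom, tTo)
        Set tOut = ""
        For i=1:1:$LENGTH(tText) {
            Set tChar = $EXTRACT(tText, i)
            If ($ASCII(tChar) >= 32) && ($ASCII(tChar) <= 126) { Set tOut = tOut_tChar }
        }
        Quit tOut
    }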
I want to create a scheduler to monitor the status of a list of backend jobs (say the limit is 10). There is going to be a job queue, and I need to pick the next job from the queue when one of the currently processing jobs finishes. What is the best way to implement this?
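A minimal sketch using the built-in work queue manager, assuming each job can be expressed as a class method call and that Initialize() on your version accepts a worker-count argument (check the class reference); "MyApp.Job" and "Run" are placeholder names:

    ClassMethod RunJobQueue(ByRef pJobs) As %Status
    {
        Set tSC = $$$OK
        // Third argument caps the number of concurrent worker jobs at 10
        Set tQueue = $SYSTEM.WorkMgr.Initialize("", .tSC, 10)
        Quit:$$$ISERR(tSC) tSC
        // Queue everything up front; workers pick up the next job as they free up
        Set tKey = ""
        For {
            Set tKey = $ORDER(pJobs(tKey))  Quit:tKey=""
            Set tSC = tQueue.Queue("##class(MyApp.Job).Run", pJobs(tKey))
            Quit:$$$ISERR(tSC)
        }
        Quit:$$$ISERR(tSC) tSC
        // Block until all queued jobs have finished (errors roll up into the status)
        Quit tQueue.WaitForComplete()
    }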
https://www.youtube.com/embed/qMJcALKsNVE
While working with a GET request, I encountered a situation where the FHIR server returns a responseStatusHTTP of "HTTP/1.1 200 200" instead of "HTTP/1.1 200 OK" (as highlighted in the attached screenshot).
Although the response code seems valid, these bundles have a total value of 0.
Could anyone clarify what "200 200" signifies in this context? Is there an issue with my setup, or does this indicate a specific condition related to the empty bundle search?
This web interface is designed to facilitate the management of Data Lookup Tables via a user-friendly web page. It is particularly useful when your lookup table values are large, dynamic, and frequently changing. By granting end-users controlled access to this web interface (read, write, and delete permissions limited to this page), they can efficiently manage lookup table data according to their needs.
The data managed through this interface can be seamlessly utilized in HealthConnect rules or data transformations, eliminating the need for constant manual monitoring and management of the lookup tables and thereby saving significant time.
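For reference, the standard lookup tables are stored in Ens.Util.LookupTable (SQL table Ens_Util.LookupTable), so the supporting class behind a page like this can manage entries with plain SQL. A rough, illustrative sketch of an upsert (method and parameter names are not the article's actual class):

    ClassMethod UpsertEntry(pTable As %String, pKey As %String, pValue As %String) As %Status
    {
        // Standard lookup table entries live in Ens_Util.LookupTable
        &sql(UPDATE Ens_Util.LookupTable
             SET DataValue = :pValue
             WHERE TableName = :pTable AND KeyName = :pKey)
        If SQLCODE = 100 {
            // No existing row for this key: insert it instead
            &sql(INSERT INTO Ens_Util.LookupTable (TableName, KeyName, DataValue)
                 VALUES (:pTable, :pKey, :pValue))
        }
        Quit $SELECT(SQLCODE<0:$$$ERROR($$$GeneralError,"SQLCODE "_SQLCODE_" "_$GET(%msg)), 1:$$$OK)
    }

Entries maintained this way are what the Lookup() function in routing rules and DTLs reads at runtime.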
Note: If the standard Data Lookup Table does not meet your mapping requirements, you can create a custom table and adapt this web interface along with its supporting class with minimal modifications. Sample class code is available upon request.
The ^%GCMP utility can be used to compare the contents of two globals.
For example, to compare ^test in the USER namespace with ^test in the SAMPLES namespace, it would look like this. In the example below, 700 identical globals are created in the two namespaces, and the contents of one of them are changed to serve as the detection target.
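The post's own setup code is not reproduced in this excerpt; a minimal setup along those lines might look like the following (illustrative only):

    // Build matching ^test data in USER and SAMPLES, then change one node in
    // SAMPLES so the compare has something to find.
    Set $NAMESPACE = "USER"
    For i=1:1:700 { Set ^test(i) = "value "_i }
    Merge ^|"SAMPLES"|test = ^test           // copy the global into SAMPLES
    Set ^|"SAMPLES"|test(350) = "changed"    // introduce the difference to detect
    // Run the interactive compare utility and answer its prompts:
    Do ^%GCMP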
I'm running IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2024.3 (Build 217U), and I have an HTTPS web request that connects to an external server to fetch data (https://myserver.com/api/gap/...), but I'm getting this error: "fetch failed unable to verify the first certificate".
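That error usually points at an incomplete certificate chain on the client side (the intermediate CA is not trusted). Assuming the request is issued from ObjectScript with %Net.HttpRequest, a hedged sketch would bind the request to an SSL/TLS configuration whose trusted CA file includes the intermediate certificate ("MyServerSSL" is a placeholder name, defined under System Administration > Security > SSL/TLS Configurations):

    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "myserver.com"
    Set req.Https = 1
    Set req.SSLConfiguration = "MyServerSSL"
    Set sc = req.Get("/api/gap/...")         // path elided as in the original post
    If $SYSTEM.Status.IsError(sc) {
        Do $SYSTEM.Status.DisplayError(sc)
    } Else {
        Write req.HttpResponse.Data.Read(32000)
    }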
I was wondering what the best practice is for using macros in Embedded Python, e.g. iris.execute('$$$MACRO()') or something else. Does anyone have insight into this?
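For context, $$$ macros are expanded by the ObjectScript preprocessor at compile time, so they are not available to strings evaluated at runtime. One pattern (a sketch only, with placeholder names) is to wrap the macro in a small ObjectScript classmethod and call that wrapper from Python:

    Class MyApp.MacroBridge Extends %RegisteredObject
    {

    ClassMethod IsError(pStatus As %Status) As %Boolean
    {
        // $$$ISERR is resolved by the preprocessor when this class compiles
        Quit $$$ISERR(pStatus)
    }

    }

From Embedded Python the wrapper is then reachable as iris.cls('MyApp.MacroBridge').IsError(status) rather than trying to expand the macro text at runtime.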
When handling a %CSP.REST API response for a custom endpoint, how can I capture or access the response content before it is written to the output buffer and sent through the Web Gateway to the UI?
https://www.youtube.com/embed/3PBqQwOn7rs
I was wondering if it was possible to use something like EnsLib.SQL.InboundAdapter with tables in IRIS.
This adapter monitors for records inserted into a table in an external database, so it requires a DSN to connect to that database.
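Whether the adapter watches a local IRIS table or a remote one comes down to the DSN configured on the business service, so one approach is to point a JDBC or ODBC connection back at the local instance. A sketch of the service side (class and column names are placeholders; the Query and DSN are configured as adapter settings in the production):

    Class Demo.LocalTableWatcher Extends Ens.BusinessService
    {

    Parameter ADAPTER = "EnsLib.SQL.InboundAdapter";

    Method OnProcessInput(pRow As EnsLib.SQL.Snapshot, Output pOutput As %RegisteredObject) As %Status
    {
        // Each new row returned by the adapter's Query setting arrives here
        $$$TRACE("New row with ID "_pRow.Get("ID"))
        Quit $$$OK
    }

    }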
My goal is to make a call to an external API that takes a long time. It can take nearly an hour (or more) to complete its processing, and I don't want to block the main process.
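A hedged sketch of one way to do this from a custom business process: send the request asynchronously so the caller returns immediately and the reply is handled in OnResponse ("To.External.API" and the class names are placeholders):

    Class Demo.LongCallProcess Extends Ens.BusinessProcess
    {

    Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
    {
        // Async send: returns immediately even if the operation takes an hour
        Quit ..SendRequestAsync("To.External.API", pRequest, 1, "slow-call")
    }

    Method OnResponse(request As Ens.Request, ByRef response As Ens.Response, callrequest As Ens.Request, callresponse As Ens.Response, pCompletionKey As %String) As %Status
    {
        $$$LOGINFO("Long-running call finished for completion key "_pCompletionKey)
        Quit $$$OK
    }

    }

In BPL the equivalent is a <call> with async='1'; either way, the long wait is absorbed by the target business operation's job rather than the calling process.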
I was wondering if someone could help me. In the past I have been able to call external Stored Procedures through a SQL Outbound Connection and have them return an EnsLib.SQL.Snapshot to use within a BPL to extract data.
But this time instead of using a SQL Outbound BO to make the Stored Procedure call, I decided to create a Linked Stored Procedure through the %JDBC_Server to point to the Stored Procedure out on MS SQL.
However, I am struggling to get the code just right to return the Column value from the Linked Stored Procedure.
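Not knowing the exact shape of the linked procedure, here is a hedged sketch of calling it with a dynamic statement and reading a column from whatever result set comes back (the procedure, parameter, and column names are placeholders for whatever the link wizard generated under the %JDBC_Server connection):

    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("CALL MyLink.MyLinkedProc(?)")
    If $SYSTEM.Status.IsError(sc) { Do $SYSTEM.Status.DisplayError(sc)  Quit }
    Set rs = stmt.%Execute(12345)            // example input parameter
    If rs.%SQLCODE < 0 { Write "SQLCODE ",rs.%SQLCODE,": ",rs.%Message,!  Quit }
    // Depending on how the procedure is projected, rows may be on rs directly
    // or on the first result set returned by %NextResult()
    Set rows = rs.%NextResult()
    If '$ISOBJECT(rows) { Set rows = rs }
    While rows.%Next() {
        Write rows.%Get("MyColumn"), !
    }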
I have two instances of IRIS, one Production and one Staging (both managed by Docker), and I want to set up a full daily restore of the Staging server from a full backup of the Production server. I know how to do this manually using the DBREST utility, and I also know how to copy the database by taking a full copy of the durable directory (however, this option requires a full stop of the Production database). What is the best way to automatically restore all databases from a full backup using scripting?
We are using a DTL transformation to take HL7 and transform into custom XML (XML is a virtual document, held in an EnsLib.EDI.XML.Document object). The schema specifying the format of the XML says one element should occur no more than 24 times (maxOccurs="24" in the XSD schema). However, the transformation to produce one such element always produces 24 elements, all but the last one blank, when tested stand-alone.
How do I know that my service is running in a production? I have this receiving service, and the log (as below) says lost TCP, then closing TCP connection, then the last entry says ConfigItem starting job, and in the Jobs tab it says Listening.
$System.Util.GetEnviron("USERPROFILE") returns "C:\WINDOWS\system32\config\systemprofile". I don't know what that is, but that folder doesn't even exist. The correct value which I need is "C:\Users\robert.steed", as seen via the Windows command line "set" command.
The 2025.1.2 and 2024.1.5 maintenance releases of InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect are now Generally Available (GA). These releases include the fixes for a number of recently issued alerts and advisories, including the following:
Can I please check if anyone has built a simple web interface for maintaining a custom SQL lookup class?
We have a simple persistent class in HealthShare which is used for storing pathology test codes. Test codes in this lookup class are used for message filtering and for applying additional logic when processing pathology results/orders.
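For readers unfamiliar with the pattern, a custom lookup class of this kind is typically just a small persistent class; the sketch below is illustrative only, since the post does not show the real class or its properties:

    Class Demo.PathologyTestCode Extends %Persistent
    {

    /// Pathology test code as received in the order/result message
    Property TestCode As %String [ Required ];

    /// Human-readable description shown to the end user
    Property Description As %String(MAXLEN = 200);

    /// Flag used by routing rules to include or exclude the code
    Property Active As %Boolean [ InitialExpression = 1 ];

    Index TestCodeIdx On TestCode [ Unique ];

    }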
https://www.youtube.com/embed/Yqhq2JEWeCo
I’m excited to join the InterSystems Developer Community, a place where IRIS, Caché, Ensemble, HealthShare, and all things InterSystems come alive through shared knowledge and collaboration.