Background:

This guideline provides an overview of how to design and implement a REST API for querying patient demographic data from an Electronic Patient Record (EPR) system using HealthConnect. The process involves sending a query request with the patient's identification number, retrieving the response from the EPR system, extracting the required patient demographic data from the HL7 message, and returning it as a JSON response to the supplier. The high-level process diagram is shown below (Screenshot 1).
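
To make the extraction step concrete, here is a minimal Python sketch of pulling the demographics out of the HL7 response and serializing them as JSON. The message fragment and the PID field positions are illustrative assumptions rather than part of the guideline; real messages vary by site.

    import json

    def extract_demographics(hl7_message: str) -> str:
        """Pull basic demographics out of the PID segment of an HL7 v2 response.

        Assumes the conventional positions: PID-3 patient ID, PID-5 name,
        PID-7 date of birth, PID-8 sex. Adjust for your site's message spec.
        """
        # HL7 v2 separates segments with carriage returns and fields with '|'
        pid = next(s for s in hl7_message.split("\r") if s.startswith("PID"))
        fields = pid.split("|")
        family, _, given = fields[5].partition("^")  # name components use '^'
        demographics = {
            "patientId": fields[3].split("^")[0],
            "familyName": family,
            "givenName": given.split("^")[0],
            "dateOfBirth": fields[7],
            "sex": fields[8],
        }
        return json.dumps(demographics)

    # Illustrative response fragment from the EPR
    msg = ("MSH|^~\\&|EPR|HOSP|HC|HUB|202501011200||RSP^K22|1|P|2.5\r"
           "PID|1||1234567890^^^NHS||DOE^JOHN^A||19800101|M")
    print(extract_demographics(msg))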

Article · Apr 1, 2025 · 1m read
How to get server/instance info

Hi all,

While developing an API to identify which IRIS instance is connected, I've found some methods that return information about the server that can help you.

Get the server name: $SYSTEM.INetInfo.LocalHostName()

Get the server IP: $SYSTEM.INetInfo.HostNameToAddr($SYSTEM.INetInfo.LocalHostName())

Get the instance name: $PIECE($SYSTEM,":",2)
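
If you prefer embedded Python, the same information is reachable through iris.cls. A quick sketch (the %SYS.System call for the instance name is my own substitution for the $PIECE approach above):

    import iris  # available inside embedded Python on IRIS

    inet = iris.cls("%SYSTEM.INetInfo")

    host_name = inet.LocalHostName()                      # server name
    host_ip = inet.HostNameToAddr(host_name)              # server IP
    instance = iris.cls("%SYS.System").GetInstanceName()  # instance name

    print(host_name, host_ip, instance)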


FHIR Server

A FHIR Server is a software application that implements the FHIR (Fast Healthcare Interoperability Resources) standard, enabling healthcare systems to store, access, exchange, and manage healthcare data in a standardized manner.

InterSystems IRIS can store and retrieve FHIR resources in the following ways:

  • Resource Repository – the native IRIS FHIR server stores FHIR bundles and resources directly in a built-in FHIR repository.
  • FHIR Facade – the FHIR facade layer is a software architecture pattern used to expose a FHIR-compliant API on top of an existing, often non-FHIR, system such as an electronic health record (EHR), a legacy database, or an HL7 v2 message store, without requiring the migration of all data into a FHIR-native system.

What is FHIR?

Fast Healthcare Interoperability Resources (FHIR) is a standardized framework created by HL7 International to facilitate the exchange of healthcare data in a flexible, developer-friendly, and modern way. It leverages contemporary web technologies to ensure seamless integration and communication across various healthcare systems.
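
Since a FHIR server is, at bottom, a REST API over standardized resources, a short client sketch shows the idea. The base URL and credentials below are placeholders; substitute whatever endpoint your IRIS FHIR server actually exposes.

    import requests

    # Placeholder base URL: adjust to your server's FHIR endpoint
    FHIR_BASE = "http://localhost:52773/csp/healthshare/demo/fhir/r4"

    # Search for Patient resources by family name; FHIR returns a Bundle
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"family": "Smith"},
        headers={"Accept": "application/fhir+json"},
        auth=("_SYSTEM", "SYS"),  # replace with real credentials
    )
    resp.raise_for_status()

    bundle = resp.json()
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        print(patient["id"], patient.get("name", [{}])[0].get("family"))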


Hi Community,

In the first part of this series, we examined the fundamentals of Interoperability on Python (IoP), specifically how it enables us to construct such interoperability elements as business services, processes, and operations using pure Python.

Now, we are ready to take things a step further. Real-world integration scenarios extend beyond simple message handoffs. They involve scheduled polling, custom message structures, decision logic, filtering, and configuration handling. In this article, we will delve into these more advanced IoP capabilities and demonstrate how to create and run a more complex interoperability flow using only Python.

To make it practical, we will build a comprehensive example: The Reddit Post Analyzer Production. The concept is straightforward: continuously retrieving the latest submissions from a chosen subreddit, filtering them based on popularity, adding extra tags to them, and sending them off for storage or further analysis.

The ultimate goal here is a reliable, self-running data ingestion pipeline. All major parts (the Business Service, Business Process, and Business Operation) are implemented in Python, showcasing how to use IoP as a Python-first integration methodology.
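
As a preview of where we are heading, below is a condensed sketch of those three parts in IoP. The Reddit API call is stubbed out with hard-coded data, and the class, message, and target names are mine for illustration, not the finished production's.

    from dataclasses import dataclass
    from iop import BusinessService, BusinessOperation, Message


    @dataclass
    class RedditPost(Message):
        # Custom message body passed between the components
        title: str = ""
        score: int = 0
        tags: str = ""


    class RedditFeedService(BusinessService):
        @staticmethod
        def get_adapter_type():
            # Default inbound adapter: CallInterval drives periodic polling
            return "Ens.InboundAdapter"

        def on_process_input(self, message_input):
            # Stub: a real service would fetch the latest subreddit posts here
            for title, score in [("Hello IRIS", 120), ("Low effort", 3)]:
                if score >= 10:  # popularity filter
                    post = RedditPost(title=title, score=score, tags="hot")
                    self.send_request_async("RedditStoreOperation", post)


    class RedditStoreOperation(BusinessOperation):
        def on_message(self, request):
            # Stub: persist or forward the post for further analysis
            self.log_info(f"Storing '{request.title}' (score={request.score})")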


Modern data architectures utilize real-time data capture, transformation, movement, and loading solutions to build data lakes, analytical warehouses, and big data repositories. These solutions enable the analysis of data from various sources without impacting the operations that produce them. Achieving this requires a continuous, scalable, elastic, and robust data flow, and the most prevalent technique for building one is CDC (Change Data Capture). CDC monitors a data source for newly produced or changed data sets, captures them automatically, and delivers them to one or more recipients, including analytical data repositories. The major benefit is the elimination of the D+1 (next-day) delay in analysis: data is detected at the source as soon as it is produced and then replicated to the destination.

This article will demonstrate the two most common kinds of endpoints in CDC scenarios, on both the source and the destination side. For the data source (origin), we will explore CDC on SQL databases and CSV files. For the data destination, we will use a columnar database (a typical high-performance analytical database scenario) and a Kafka topic (a standard approach for streaming data to the cloud and/or to multiple real-time data consumers).
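
Before getting into the interoperability production itself, it helps to see how small the core CDC loop really is. Here is a polling-based sketch in plain Python, with sqlite3 standing in for the source database and a stub standing in for the Kafka producer or columnar-database loader:

    import json
    import sqlite3

    def deliver(record: dict) -> None:
        # In a real pipeline this would be, e.g., a Kafka producer send
        print("captured:", json.dumps(record))

    def poll_changes(conn: sqlite3.Connection, last_id: int) -> int:
        # Capture only rows produced since the last checkpoint
        cur = conn.execute(
            "SELECT id, name, amount FROM orders WHERE id > ? ORDER BY id",
            (last_id,),
        )
        for row_id, name, amount in cur:
            deliver({"id": row_id, "name": name, "amount": amount})
            last_id = row_id
        return last_id

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
    conn.execute("INSERT INTO orders (name, amount) VALUES ('widget', 9.5)")

    checkpoint = poll_changes(conn, 0)           # captures the first row
    conn.execute("INSERT INTO orders (name, amount) VALUES ('gadget', 3.25)")
    checkpoint = poll_changes(conn, checkpoint)  # captures only the new row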

Overview

This article will provide a sample for the following interoperability scenario:


When I started my journey with InterSystems IRIS, especially in Interoperability, one of the first and most common questions I had was: how can I run something on an interval or schedule? In this topic, I want to share two simple classes that address this issue. I'm surprised that similar classes aren't already included somewhere in EnsLib. Or maybe I didn't search well enough? Anyway, this topic is not meant to be a complex piece of work, just a couple of snippets for beginners.
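
As a plain-Python illustration of the pattern itself, here is what running something on an interval boils down to, analogous to what Ens.InboundAdapter's CallInterval setting provides; the names here are illustrative only:

    import sched
    import time

    scheduler = sched.scheduler(time.monotonic, time.sleep)
    CALL_INTERVAL = 5  # seconds, analogous to the adapter's CallInterval

    def on_task(run_count: int = 0) -> None:
        print(f"tick {run_count}: do the scheduled work here")
        if run_count < 2:  # stop after a few runs for the demo
            scheduler.enter(CALL_INTERVAL, 1, on_task, (run_count + 1,))

    scheduler.enter(0, 1, on_task)  # first run immediately
    scheduler.run()                 # blocks until the queue is empty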


How to set up RAG for OpenAI agents using IRIS Vector DB in Python

In this article, I’ll walk you through an example of using InterSystems IRIS Vector DB to store embeddings and integrate them with an OpenAI agent.
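
As a taste of the retrieval half of RAG, here is a minimal sketch of indexing and searching embeddings with IRIS vector SQL through the Python DB-API driver. The connection details, table name, and embedding model are placeholders; adjust them to your setup.

    import iris  # intersystems-irispython DB-API driver
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def embed(text: str) -> str:
        # Return the embedding as the bracketed string TO_VECTOR accepts
        vec = client.embeddings.create(
            model="text-embedding-3-small", input=text
        ).data[0].embedding
        return str(vec)

    # Placeholder connection details: adjust to your instance
    conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
    cur = conn.cursor()

    # Run once: a table with a 1536-dimensional vector column
    cur.execute("CREATE TABLE rag_docs (text VARCHAR(4000), "
                "embedding VECTOR(DOUBLE, 1536))")

    # Index a document
    doc = "IRIS supports vector search natively."
    cur.execute(
        "INSERT INTO rag_docs (text, embedding) VALUES (?, TO_VECTOR(?, double))",
        [doc, embed(doc)],
    )
    conn.commit()

    # Retrieve the documents most similar to the user's question
    cur.execute(
        "SELECT TOP 3 text FROM rag_docs "
        "ORDER BY VECTOR_DOT_PRODUCT(embedding, TO_VECTOR(?, double)) DESC",
        [embed("Does IRIS do vector search?")],
    )
    context = [row[0] for row in cur.fetchall()]
    print(context)  # feed this as context into the OpenAI agent's prompt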
