Announcement
Anastasia Dyubaylo · Jul 8
Hi Developers,
We are happy to announce the new InterSystems online programming contest dedicated to creating useful tools to make your fellow developers' lives easier:
🏆 InterSystems Developer Tools Contest 🏆
Duration: July 14 - August 3, 2025
Prize pool: $12,000
The topic
Develop any application that improves the developer experience with IRIS: one that helps you develop faster, write better-quality code, or test, deploy, support, or monitor your solution with InterSystems IRIS.
General Requirements:
An application or library must be fully functional. It should not be an import of, or a direct interface to, an already existing library in another language (except for C++, where significant work is required to create an interface for IRIS). It should not be a copy-paste of an existing application or library.
Accepted applications: apps that are new to Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
The application should work on either IRIS Community Edition or IRIS for Health Community Edition. Both can be downloaded as host (Mac, Windows) versions from the Evaluation site, or used as containers pulled from the InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest .
The application should be Open Source and published on GitHub or GitLab.
The README file for the application should be in English, contain the installation steps, and include a video demo and/or a description of how the application works.
Only 3 submissions from one developer are allowed.
NB. Our experts will have the final say in whether the application is approved for the contest or not based on the criteria of complexity and usefulness. Their decision is final and not subject to appeal.
Prizes
1. Experts Nomination - a specially selected jury will determine winners:
🥇 1st place - $5,000
🥈 2nd place - $2,500
🥉 3rd place - $1,000
🏅 4th place - $500
🏅 5th place - $300
🌟 6-10th places - $100
2. Community winners - applications that will receive the most votes in total:
🥇 1st place - $1,000
🥈 2nd place - $600
🥉 3rd place - $300
🏅 4th place - $200
🏅 5th place - $100
❗ If several participants score the same number of votes, they are all considered winners, and the prize money is shared among them.
❗ Cash prizes are awarded only to those who can verify their identity. If there are any doubts, the organizers will reach out and request additional information about the participant(s).
Who can participate?
Any Developer Community member, except for InterSystems employees (ISC contractors allowed). Create an account!
Developers can team up to create a collaborative application. 2 to 5 developers are allowed in one team.
Do not forget to highlight your team members in the README of your application by linking their DC user profiles.
Important Deadlines:
🛠 Application development and registration phase:
July 14, 2025 (00:00 EST): Contest begins.
July 27, 2025 (23:59 EST): Deadline for submissions.
✅ Voting period:
July 28, 2025 (00:00 EST): Voting begins.
August 3, 2025 (23:59 EST): Voting ends.
Note: Developers can improve their apps throughout the entire registration and voting period.
Helpful Resources:
✓ Example applications:
webterminal - an emulation of the IRIS terminal as a web application
git-source-control - git tool to manage changes for shared dev environments and IRIS UI dev editors by @Timothy Leavitt
iris-rad-studio - RAD for UI
cmPurgeBackup - backup tool
errors-global-analytics - errors visualization
objectscript-openapi-definition - open API generator
Test Coverage Tool - test coverage helper
iris-bi-utils - a toolset for IRIS BI
and many more.
✓ Templates we suggest to start from:
iris-dev-template
Interoperability-python
rest-api-contest-template
native-api-contest-template
iris-fhir-template
iris-fullstack-template
iris-interoperability-template
iris-analytics-template
✓ For beginners with IRIS:
Build a Server-Side Application with InterSystems IRIS
Learning Path for beginners
✓ For beginners with ObjectScript Package Manager (IPM):
How to Build, Test and Publish IPM Package with REST Application for InterSystems IRIS
Package First Development Approach with InterSystems IRIS and IPM
✓ How to submit your app to the contest:
How to publish an application on Open Exchange
How to submit an application for the contest
Need Help?
Join the contest channel on InterSystems' Discord server or talk with us in the comment to this post.
We're waiting for YOUR project – join our coding marathon to win!
By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.
Can InterSystems interns participate?
Hi Liam! Unfortunately, interns are considered employees, so they’re not eligible to participate.
Dang - and we don't even get dental 😢
Hi Devs!
You can already enjoy the recording of the "Kick-off webinar for InterSystems Developer Tools Contest 2025" on the InterSystems YouTube channel!🤓
⏯️ The InterSystems Developer Tools Contest 2025 Kick-Off Webinar
Hey Developers!
The first application has already been added to the contest! Check it out:
Interoperability REST API Template by @Andrew.Sklyarov
Hi Devs!
The first week of the registration period has passed, and the second week has started! We look forward to receiving your applications.
For now, one more participant has joined the contest:
iristest-html by @Ashok.Kumar
Developers!
The registration phase is almost over! Only 2 days left till the voting period! Upload your applications and join the contest!
Six new apps have been added already, check them out:
Global-Inspector by @Robert.Cemper1003
InterSystems Testing Manager for VS Code by @John.Murray
IPM Explorer for VSCode by @John.McBrideDev
typeorm-iris by @Dmitry.Maslennikov
addsearchtable by @XININGMA
iris-message-search by @sara.aplin
Hey Developers!
Today is the last day to register for the contest! 7 more participants have joined! Check the apps that have been uploaded to the contest:
iris4word by @Yuri.Gomes
PyObjectscript Gen by @Antoine.Dh
IrisTest by @Ashok.Kumar
dc-artisan by @Henrique
templated_email by @Nikolay.Soloviev & @Sam.Sennin
wsgi-to-zpm by @Eric.Fortenberry
And don't miss the upcoming kick-off webinar for the InterSystems Developer Tools Contest on Monday, July 14, at 11 am EDT | 5 pm CEST. Register here!
Question
Martin Zukal · Jul 30
Hello everyone,
I would like to ask whether it is possible to run InterSystems API manager (IAM) on OpenShift. Is there some documentation describing how to do it? I was searching the forum as well as the internet and I have not found much unfortunately.
Any hints would be highly appreciated.
Best regards
Martin Zukal
Announcement
Anastasia Dyubaylo · Dec 8, 2023
Hey Developers,
Watch this video to learn how InterSystems TrakCare connects care teams, breaks down data barriers and improves safety, efficiency, and the patient experience.
⏯ What is InterSystems TrakCare
Stay informed by subscribing to our YouTube channel - InterSystems Developers! 🎥
Very nice video! But please there is also the Italian Edition and team :-)
Article
Ashok Kumar T · Sep 8
FHIR Server
A FHIR Server is a software application that implements the FHIR (Fast Healthcare Interoperability Resources) standard, enabling healthcare systems to store, access, exchange, and manage healthcare data in a standardized manner.
InterSystems IRIS supports two approaches for storing and retrieving FHIR resources:
Resource Repository – the IRIS native FHIR server can store FHIR bundles/resources directly in the FHIR repository.
FHIR Facade – the FHIR facade layer is a software architecture pattern used to expose a FHIR-compliant API on top of an existing (often non-FHIR) system, such as an electronic health record (EHR), a legacy database, or an HL7 v2 message store, without requiring the migration of all data into a FHIR-native system.
What is FHIR?
Fast Healthcare Interoperability Resources (FHIR) is a standardized framework created by HL7 International to facilitate the exchange of healthcare data in a flexible, developer-friendly, and modern way. It leverages contemporary web technologies to ensure seamless integration and communication across various healthcare systems.
Key FHIR Technologies
RESTful APIs: For resource interactions.
JSON and XML: For data representation
OAuth2: For secure authorization and authentication.
FHIR is structured around modular components called resources, each representing a specific healthcare concept, including the following:
Patient: Demographics and identifiers.
Observation: Clinical measurements (e.g., vitals, labs).
Encounter: Patient-provider interactions.
Medication, AllergyIntolerance, Condition, etc.
FHIR Facade:
A FHIR facade is an architectural layer that exposes a FHIR-compliant API on top of an existing non-FHIR system (e.g., a legacy EHR, HL7 v2 store, or custom database), without requiring you to store data directly as FHIR.
It enables on-demand transformation of legacy data into FHIR resource format (JSON or XML), facilitating interoperability while preserving your existing backend infrastructure.
The FHIR facade receives and sends FHIR resources, relying on the prebuilt FHIR server architecture without persisting them in the resource repository. This gives you granular control over your own logic.
IRIS FHIR Facade Architecture
HS.FHIRServer.RestHandler: this class receives and processes all incoming FHIR requests from client systems, then dispatches them to the HS.FHIRServer.Service class for further handling.
HS.FHIRServer.Service: this core singleton class is responsible for handling FHIR resources and bundles. It determines the type of incoming request and routes it to one of the following handlers:
FHIR Interactions: handled by HS.FHIRServer.Storage.JsonAdvSQL.Interactions, the Interactions class that serves as the primary interaction handler.
Bulk FHIR Bundle Transactions: Managed by HS.FHIRServer.DefaultBundleProcessor.
FHIR Operations: Processed by HS.FHIRServer.API.OperationHandler.
FHIR Facade Implementation
Prerequisites for Creating a FHIR Server and Implementing a FHIR Facade in InterSystems IRIS
Before creating a FHIR server and implementing the FHIR facade in InterSystems IRIS, ensure the following configurations are properly set up:
Configure and enable the FHIR Foundation in the required namespace.
Customize the FHIR implementation classes, including those below
RepoManager
InteractionStrategy
Interactions
FHIR Foundation configuration
Step 1: Activate the FHIR Foundation
First, you should activate the FHIR Foundation. To do that, take the following steps:
Switch to the HSLIB namespace in the System Management Portal.
Navigate to the Health menu.
There, you will find a list of available foundation namespaces.
For this demo, we will be using the LEARNING namespace (database) as an example.
At the top of the screen, click "Installer Wizard" to view the status of the foundation namespace. Ensure that your namespace is marked as "Activated". If not, click the "Activate" button to enable it.
Manually Configuring the Foundation Namespace
In case your namespace does not appear on the Installer Wizard page, you can manually configure it by following the steps below:
Click Configure Foundation.
Enter the required details.
Activate the namespace (e.g., the one you have just configured).
Note: Before adding the foundation namespace, ensure the following prerequisites are met:
All HealthShare (HS) packages, routines, and globals are correctly mapped to your target namespace.
All required roles are created and properly assigned.
Otherwise, you may encounter unexpected errors during configuration.
Once configured, you can see the namespace in the Foundation list.
Programmatic Foundation Configuration
Run the install class method below:
Do ##class(HS.Util.Installer.Foundation).Install("FoundationNamespace")
Well done! The foundation namespace has been successfully configured and activated! With this setup in place, you are now ready to proceed.
Since all configuration-related activities are automatically recorded in log files, you can refer to them for troubleshooting if you encounter any issues during the configuration process. (Check out the Analyzing HS Log Files for Configuration section for details.)
The next step is to customize the FHIR implementation classes to meet your specific integration and processing requirements.
Customizing FHIR Implementation Classes
The IRIS FHIR server architecture allows flexible customization of its FHIR implementation classes, regardless of whether you are extending the Resource Repository or writing a custom backend. It can be done in two ways:
Facade Approach (Recommended): You can customize the prebuilt FHIR classes to interact directly with your FHIR resources. This method aligns with the Facade design pattern by abstracting underlying systems through FHIR interfaces.
Interoperability Routing Approach: You may configure the FHIR server to forward incoming requests to IRIS Interoperability at the initial stage of request processing. This allows you to leverage interoperability components for custom logic and routing. With this approach, you can configure the Interoperability Service class directly within the FHIR server setup under “Service Config Name.” The FHIR server will then redirect the request to your Interoperability production environment.
Note: For this Interoperability routing method, only HS.FHIRServer.Interop.Service or its subclasses are supported. IRIS does not permit the use of other general business service classes.
Generally, a FHIR server resource repository without an interoperability production can achieve significantly faster performance.
At this point, let’s proceed to creating custom classes.
Customize the Prebuilt Classes
To begin customizing the FHIR Server, create subclasses of the classes listed below. You can modify their behavior by adjusting parameters, altering logic, or rewriting functionality as required to meet your specific needs.
Note: starting from IRIS version 2024.1, the interaction logic has been updated. You can find the details of the new implementation below. For backward compatibility, both the legacy and updated classes are still available. However, if you work with the latest version, it is recommended that you use the updated implementation.
Starting from version 2024.1:
HS.FHIRServer.Storage.JsonAdvSQL.Interactions
HS.FHIRServer.Storage.JsonAdvSQL.InteractionsStrategy
HS.FHIRServer.Storage.JsonAdvSQL.RepoManager
For versions prior to 2024.1:
HS.FHIRServer.Storage.Json.Interactions
HS.FHIRServer.Storage.Json.InteractionsStrategy
HS.FHIRServer.Storage.Json.RepoManager
If you are writing entirely custom backend logic to handle your FHIR server architecture instead of employing the Resource Repository, you should subclass the architecture superclasses:
HS.FHIRServer.API.Interactions
HS.FHIRServer.API.InteractionsStrategy
HS.FHIRServer.API.RepoManager
Before customizing the classes, let's take a moment to understand the concept of FHIR interactions first.
What Are FHIR Interactions?
FHIR interactions are standard operations defined by the FHIR (Fast Healthcare Interoperability Resources) specification that a client can perform on FHIR resources via a RESTful API.
These interactions define how clients and servers communicate, including retrieving, creating, updating, or deleting healthcare data represented as FHIR resources.
Common FHIR Interactions
Below is a list of the main types of FHIR interactions (a small client-side sketch follows the list):
Read: Retrieves a resource by its ID (e.g., GET /Patient/123).
Vread (Versioned Read): Retrieves a specific version of a resource (e.g., GET /Patient/123/_history/2).
Update: Replaces an existing resource (e.g., PUT /Patient/123).
Patch: Partially updates a resource (e.g., PATCH /Patient/123).
Delete: Removes a resource (e.g., DELETE /Patient/123).
Create: Adds a new resource (e.g., POST /Patient).
Search: Queries resources based on parameters (e.g., GET /Patient?name=Ashok).
History: Retrieves the change history for a resource or resource type (e.g., for a resource: GET /Patient/123/_history; for all resources: GET /_history).
Capabilities: Returns a CapabilityStatement showing what the FHIR server supports (e.g., GET /metadata).
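To make this concrete, here is a minimal client-side sketch of a read interaction in ObjectScript using %Net.HttpRequest. The host, port, and endpoint path (/fhirapp/r4) are assumptions borrowed from the examples later in this article; adjust them to your own FHIR server:
/// A minimal sketch of a FHIR read interaction (GET [base]/Patient/[id]).
/// The server address, port, and endpoint path are illustrative assumptions.
ClassMethod ReadPatient(pId As %String = "123") As %DynamicObject
{
    Set request = ##class(%Net.HttpRequest).%New()
    Set request.Server = "127.0.0.1"
    Set request.Port = 52782
    Do request.SetHeader("Accept", "application/fhir+json")
    // Perform the read interaction
    Set sc = request.Get("/fhirapp/r4/Patient/"_pId)
    If $$$ISERR(sc) { Return "" }
    // Parse the JSON response body into a %DynamicObject
    Return ##class(%DynamicObject).%FromJSON(request.HttpResponse.Data)
}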
Let’s continue customizing our classes.
These three major classes form a hierarchical chain, where they are linked to one another, and together they establish the foundational infrastructure for your FHIR server implementation.
Interaction Class
HS.FHIRServer.Storage.JsonAdvSQL.Interactions is the core class that serves as the backbone for handling FHIR interactions through the InteractionsStrategy class. It facilitates communication between the Service class and the resource repository.
This class provides API methods to interact with the FHIR repository at the resource level:
Add(): Creates and stores a new resource in the FHIR repository.
Delete(): Removes an existing resource from the repository.
Read(): Retrieves a specific resource from the repository.
LoadMetadata(): Loads metadata used to define the Capability Statement. (Check out the Modify Capability Statement section for configuration details.)
This is not a full list of methods.
The methods in the Interactions class are invoked by the Service class (HS.FHIRServer.Service.cls) during FHIR request processing, but they can also be called directly from a server-side ObjectScript application. For instance, instead of sending a POST request to the Service, a server-side application can call the Add() method of the Interactions class directly, as in the sketch below.
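Here is a minimal sketch of that pattern. The endpoint path "/fhirapp/r4" and the sample Patient content are assumptions; depending on your configuration, additional setup may be required before calling the interaction methods directly:
/// A minimal sketch of calling the Interactions class directly from server-side ObjectScript.
/// The endpoint path is an assumption; replace it with your configured FHIR endpoint.
ClassMethod AddPatientDirectly() As %String
{
    Set strategy = ##class(HS.FHIRServer.API.InteractionsStrategy).GetStrategyForEndpoint("/fhirapp/r4")
    Set interactions = strategy.NewInteractionsInstance()
    // Build a minimal Patient resource as a %DynamicObject
    Set patient = {"resourceType":"Patient","name":[{"family":"Kumar","given":["Ashok"]}]}
    // Equivalent to handling a POST /Patient request, bypassing the REST layer
    Return interactions.Add(patient)
}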
Interaction Strategy Class
The HS.FHIRServer.Storage.JsonAdvSQL.InteractionsStrategy class defines the overall strategy and backend logic for the FHIR server. It serves as the storage strategy, determining how FHIR resources are stored and retrieved.
Each InteractionsStrategy is associated with a subclass of HS.FHIRServer.API.RepoManager, which handles the services that work with this strategy.
Additionally, you must configure the following two parameters in the InteractionsStrategy class (a sketch follows the list):
1. StrategyKey: Assigns a unique key used by both InteractionsStrategy and RepoManager (e.g., Parameter StrategyKey As %String = "MyFacade").
2. InteractionsClass: Specifies the custom interactions class to use (e.g., Parameter InteractionsClass As %String = "FHIRFacade.Storage.JsonAdvSQL.Interactions").
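A minimal sketch of such a custom strategy class (the package and class names are illustrative, matching the parameter examples above) could look like this:
/// A minimal custom InteractionsStrategy sketch; the class and package names are illustrative.
Class FHIRFacade.Storage.JsonAdvSQL.InteractionsStrategy Extends HS.FHIRServer.Storage.JsonAdvSQL.InteractionsStrategy
{

/// Unique key shared with the matching RepoManager subclass
Parameter StrategyKey As %String = "MyFacade";

/// The custom Interactions class this strategy dispatches to
Parameter InteractionsClass As %String = "FHIRFacade.Storage.JsonAdvSQL.Interactions";

}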
Repo Manager Class
The HS.FHIRServer.Storage.JsonAdvSQL.RepoManager class implements the Resource Repository, the default storage strategy for a FHIR server. It allows you to install a fully functional FHIR server without any additional development and automatically stores the FHIR data received by the server. This class can create new repositories, configure FHIR databases, formulate a strategy, etc.
In addition, you must configure one parameter in the RepoManager class (see the sketch below):
StrategyKey: Assigns a unique key used by both InteractionsStrategy and RepoManager.
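A matching sketch of a custom RepoManager (again, the class name is illustrative) could look like this:
/// A minimal custom RepoManager sketch; the class name is illustrative.
Class FHIRFacade.Storage.JsonAdvSQL.RepoManager Extends HS.FHIRServer.Storage.JsonAdvSQL.RepoManager
{

/// Must match the StrategyKey declared in the custom InteractionsStrategy
Parameter StrategyKey As %String = "MyFacade";

}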
Customizing the Interactions Class
To begin the process, let's look at the Interactions class first.
1. Create a custom class: extend the HS.FHIRServer.Storage.JsonAdvSQL.Search class to define your custom behavior.
2. Override the Add() method for specific resource types: instead of storing the resource in the default repository, redirect handling to your custom implementation (e.g., when the resourceType is Patient). To achieve this, override the Add() method in your custom class.
3. Set the required metadata on the resource object: as part of this customization, ensure that the following metadata fields are set on pResourceObj. Failing to do so may result in unexpected errors. The required fields include:
Set pResourceObj.meta = {}
Set pResourceObj.meta.versionId = 1
Set pResourceObj.meta.lastUpdated = $ZDT($H, 3, 7)
This ensures proper resource versioning and timestamping, which is required for the FHIR server to function correctly.
Include HS.FHIRServer

/// Custom Interactions class for the facade. Only the Patient resource is handled here;
/// all other resource types are rejected with a FHIR-compliant error.
Class MyFacade.HS.FHIRServer.Storage.JsonAdvSQL.Interactions Extends HS.FHIRServer.Storage.JsonAdvSQL.Search
{

Method Add(pResourceObj As %DynamicObject, pResourceIdToAssign As %String = "", pHttpMethod = "POST") As %String
{
    If pResourceObj.resourceType = "Patient" {
        // Delegate storage to the custom facade class instead of the resource repository
        Set sc = ##class(MyFacade.FHIRFacade.Patient.Utils).Create(pResourceObj)
        If $$$ISERR(sc) { Throw ##class(%Exception.StatusException).CreateFromStatus(sc) }
        // Return the id assigned by the custom storage logic
        Return pResourceObj.id
    }
    Else {
        $$$ThrowFHIR($$$HSFHIRErrResourceNotSupported, pResourceObj.resourceType)
    }
}

}
/// My custom class to handle the FHIR resource. Here I can modify and store data in my own classes/globals.
Class MyFacade.FHIRFacade.Patient.Utils Extends %RegisteredObject
{

ClassMethod Create(pResourceObj As %DynamicObject) As %Status
{
    // Store logic goes here.
    // The values below must be set as mandatory key-value pairs on pResourceObj
    // so the FHIR server can version and timestamp the resource correctly.
    Set pResourceObj.id = 11
    Set pResourceObj.meta = {}
    Set pResourceObj.meta.versionId = 1
    Set pResourceObj.meta.lastUpdated = $ZDT($H, 3, 7)
    Return $$$OK
}

}
Note: In IRIS, once the FHIR server is configured and enabled, it intentionally caches instances of your InteractionsStrategy and Interactions classes to help improve overall performance.
Before proceeding with the next customization step, it is important to understand what a CapabilityStatement is and how to define one for your FHIR server.
Capability Statement
A FHIR server's Capability Statement is client-facing metadata that describes the functionality the FHIR server supports. It contains details that FHIR clients can retrieve about the server's behavior, operations it supports, and how it processes FHIR requests.
Customizing the FHIR Server CapabilityStatement
As you customize your FHIR server (FHIR Facade), you should also update the Capability Statement so that FHIR clients have an accurate description of its capabilities. You have two options for how to do it.
Typically, you can access the CapabilityStatement by invoking the REST API call to your server with the metadata resource (your_FHIRServer_URL/metadata).
E.g., GET http://127.0.0.1:52782/fhirapp/r4/metadata
InterSystems IRIS provides two ways to customize the CapabilityStatement of the FHIR server architecture:
1. Overriding Methods in a Custom InteractionStrategy Class (Recommended)
This is the preferred and most flexible approach since it gives you greater control over how the CapabilityStatement is generated.
The CapabilityStatement is automatically regenerated when certain FHIR server behaviors change and is cached to improve performance. Therefore, the statement is usually generated during FHIR server activation, deactivation, and similar events.
IRIS offers several methods to update different parts of the CapabilityStatement, where the key method is the following:
GetCapabilityTemplate()
This method is used to customize basic details (e.g., name, version, publisher, description, status, etc). You can also override the GetCapabilityTemplate() method in your custom InteractionStrategy class to return a modified structure that suits your implementation.
LoadMetadata()
This method is a part of the Interactions class and can be invoked by the InteractionStrategy class to generate the CapabilityStatement.
You can configure your custom CapabilityStatement either directly within this method or by delegating the logic to another class and calling it from within LoadMetadata().
Customization Example
I customized the LoadMetadata() method as follows:
I created a custom class (FHIRFacade.Utils) to define and hold the CapabilityStatement.
I validated the statement and saved it to the ..metadata and ..metadataTime properties.
Method LoadMetadata() As %DynamicObject
{
    #; Get your capability statement by calling your class method
    Set metadata = ##class(FHIRFacade.Utils).FHIRFacadeCapabilityStatement()
    Set GLOBAL = ..strategy.GetGlobalRoot()
    If metadata = "" {
        $$$ThrowFHIR($$$HSFHIRErrMetadataNotConfigured, $$$OutcomeIs(503, "fatal", "transient"))
    }
    If @GLOBAL@("capability", "time") '= ..metadataTime {
        Set ..metadata = ##class(%DynamicObject).%FromJSON(metadata)
        Set ..metadataTime = $Get(@GLOBAL@("capability", "time"), $ZDT($H, 3, 7))
    }
    Return ..metadata
}
Updating the CapabilityStatement for the FHIR Server
As I previously mentioned, the FHIR server caches the CapabilityStatement to improve performance. This means that any changes you make to the configuration or implementation will not take effect until the CapabilityStatement is explicitly updated.
To update the CapabilityStatement, take the steps below:
Open the IRIS terminal and execute the following command:
DO ##class(HS.FHIRServer.ConsoleSetup).Setup()
From the menu, select Option 7: Update the CapabilityStatement Resource.
Choose the FHIR endpoint you wish to update.
Confirm the update when prompted.
Alternatively, you can employ the class method mentioned below to update the CapabilityStatement:
ClassMethod UpdateCapabilityStatement(pEndPointKey As %String = "/csp/healthshare/learning/fhirmyfac/r4")
{
    #dim strategy as HS.FHIRServer.API.InteractionsStrategy = ##class(HS.FHIRServer.API.InteractionsStrategy).GetStrategyForEndpoint(pEndPointKey)
    If '$IsObject(strategy) {
        $$$ThrowFHIR($$$GeneralError, "Unable to create Storage Strategy Class")
    }
    Set interactions = strategy.NewInteractionsInstance()
    Do interactions.SetMetadata(strategy.GetMetadataResource())
}
Manually Editing the CapabilityStatement
You can manually retrieve, modify, and upload the CapabilityStatement for a FHIR server endpoint. This approach enables complete control over the statement’s structure and content.
Steps to retrieve and export the CapabilityStatement:
1. Get the interaction strategy for your FHIR endpoint.
2. Create a new interactions instance.
3. Load the existing CapabilityStatement.
4. Export it to a local JSON file:
ClassMethod UpdateCapabilityStatementViaFile()
{
    Set strategy = ##class(HS.FHIRServer.API.InteractionsStrategy).GetStrategyForEndpoint("/fhirapp/r4")
    Set interactions = strategy.NewInteractionsInstance()
    Set capabilityStatement = interactions.LoadMetadata()
    // Write the CapabilityStatement JSON to a local file for editing
    Set file = ##class(%Stream.FileCharacter).%New()
    Do file.LinkToFile("c:\localdata\MyCapabilityStatement.json")
    Do capabilityStatement.%ToJSON(file)
    Do file.%Save()
}
5. Edit the exported JSON file according to your configuration. Then run the code below to load the updated CapabilityStatement into the FHIR server:
ClassMethod LoadCapabilityStatementToFHIR()
{
    Set strategy = ##class(HS.FHIRServer.API.InteractionsStrategy).GetStrategyForEndpoint("/fhirapp/r4")
    Set interactions = strategy.NewInteractionsInstance()
    // Read the edited JSON file and apply it as the endpoint's CapabilityStatement
    Set file = ##class(%Stream.FileCharacter).%New()
    Do file.LinkToFile("c:\localdata\MyCapabilityStatement.json")
    Set newCapabilityStatement = ##class(%DynamicObject).%FromJSON(file)
    Do interactions.SetMetadata(newCapabilityStatement)
}
Configuring the FHIR Server
Configure a FHIR Server Endpoint
The FHIR configuration interface and setup process have undergone significant changes in version 2025.1, which serves as the reference version for this article.
In 2025.1: to add a FHIR endpoint, navigate to IRIS Management Portal → Health > FHIR Server Management > FHIR Configuration.
URL: /csp/healthshare/learning/fhirfac/r4
Namespace: select your namespace
FHIR Version: select FHIR version
In versions before 2025.1: the navigation path was IRIS Management Portal → Health > FHIR Configuration. Example endpoint: /csp/healthshare/learning/fhirfac/r4
Advanced Configuration
The Advanced Configuration section allows you to define a custom interaction strategy class after providing the required details.
Additional Configurations
You can configure the following settings if needed. Otherwise, the system will proceed with the default values:
Finally, click Create to start the configuration process. Please note that it may take some time since the system will perform the tasks below:
FHIR Server Database Setup: this setup creates two databases to store the resources:
Resource Database: Stores the current FHIR resources.
History Database: Maintains historical versions of the resources.
Global Mappings and Additional Configuration: the system automatically establishes global mappings and applies additional configuration settings required for the FHIR server.
The system is now ready to receive FHIR requests and send responses accordingly.
Analyzing HS Log Files for Configuration
Foundation Namespace Activation Log Files
During the activation of the Foundation namespace via the Installer Wizard, InterSystems IRIS automatically generates a set of log files. These files capture detailed information about the actions performed during the setup of the foundational components required for the FHIR server.
These logs are invaluable for understanding and troubleshooting the configuration process since they typically include the following:
High-level installation events related to the FHIR server configuration.
Namespace-specific configuration details, showing what was set up or modified in each targeted namespace.
The log file names follow a pattern: they begin with HS.Util.Installer, followed by the namespace name, a hyphen (-), an integer, and the .log extension. For example: HS.Util.Installer.HSSYS-0.log.
Note: The integer count increments each time the foundation is activated.
To programmatically retrieve these directories, take the steps below (see the sketch after this list):
Use $system.Util.InstallDirectory() to get the IRIS installation directory.
Employ $system.Util.ManagerDirectory() to obtain the full path to the /mgr/ directory.
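For example, a minimal sketch, assuming (as described above) that the installer logs live under the manager directory:
/// A minimal sketch for locating the installer log files.
/// The example log path follows the naming convention described above and is an assumption.
ClassMethod ShowInstallerLogLocation(pNamespace As %String = "HSSYS")
{
    // Installation and manager directories of the current IRIS instance
    Write "Install dir: ", $SYSTEM.Util.InstallDirectory(), !
    Write "Manager dir: ", $SYSTEM.Util.ManagerDirectory(), !
    // Example log file path for the first activation of the given namespace
    Write "Example log: ", $SYSTEM.Util.ManagerDirectory()_"HS.Util.Installer."_pNamespace_"-0.log", !
}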
Common Log Files:
HS.Util.Installer.1.log
HS.Util.Installer.HSSYS-0.log
HS.Util.Installer.HSCUSTOM-0.log
HS.Util.Installer.HSLIB-0.log
HS.Util.Installer.HSSYSLOCALTEMP-0.log
If you encounter any issues during installation or configuration, review these log files for detailed diagnostic information.
Note: Always review these log files to ensure that no errors are present after every IRIS upgrade.
Additionally, the IRISFHIRServerLogs application on Open Exchange provides a web interface for viewing these logs.
Debugging the FHIR Server
InterSystems IRIS provides a debug mode for the FHIR server to facilitate effortless debugging.
Understanding Class Caching in InterSystems IRIS FHIR Server
In InterSystems IRIS, once the FHIR server is configured and enabled, it intentionally caches instances of your InteractionsStrategy and Interactions classes to improve overall performance.
However, due to this caching mechanism, any changes made to your subclasses will not be reflected once the classes have been cached. In other words, updates to your custom logic will not take effect unless the FHIR server creates new instances of those classes.
To ensure your modifications are reflected during development or testing, you can enable debug mode, which forces the server to produce a new instance for each request.
Enabling Debug Mode via IRIS Shell
Take the steps below to enable debug mode using the IRIS terminal:
Open the IRIS Terminal and run the following command:
Do ##class(HS.FHIRServer.ConsoleSetup).Setup()
From the menu, select "Configure a FHIR Server Endpoint".
When prompted, choose the endpoint you wish to configure. In the Edit FHIR Service Configuration section, locate the Debug Mode setting, then change its value from 0 to 7 to enable full debug mode.
This setting also enables the following:
Allow Unauthenticated Access
New Service Instance per request
Include Tracebacks in responses
Save the configuration.
Capability Statement Updates
The Do ##class(HS.FHIRServer.ConsoleSetup).Setup() command is a versatile tool and is not limited to debugging purposes only. It can also be used for the following:
Configuring, updating, and decommissioning FHIR endpoints.
Updating the Capability Statement
Useful Macros, Classes, and Globals
Macros
$$$ThrowFHIR: Used to throw FHIR-compliant exceptions within the server logic.
$$$HS*: A set of macros that define standard FHIR-relevant error messages.
Classes
HS.FHIRServer.Installer: Provides class methods used for FHIR server installation.
HS.FHIRServer.Tools.CapabilityTemplate: Offers templates and helper methods for constructing and customizing the FHIR CapabilityStatement.
HS.FHIRServer.API.OperationHandler: Manages custom FHIR operations on the server.
GetMetadataResource(): Modifies or extends the CapabilityStatement returned by the FHIR server if you override this method in your custom class.
HS.Util.Installer.Foundation
HS_HC_Util_Installer.Log: internally stores details from the HS.Util.Installer.*.log files.
Globals
^HS.FHIRServer.*: Stores FHIR server configuration data and resource instances.
^FSLOG: Stores FHIR server log entries for auditing and debugging. The FSLog Open Exchange application can provide a web-based view of these logs.
^%ISCLOG: Enables HTTP request logging at various levels. Below you can see an example of how to activate detailed logging:
Enable logging:
Set ^%ISCLOG = 5
Set ^%ISCLOG("Category","HSFHIR") = 5
Set ^%ISCLOG("Category","HSFHIRServer") = 5
Disable logging:
set ^%ISCLOG=1
^FSLogChannel: Enables logging in the foundation namespace. This global specifies the types of logging information that should be captured. Once ^FSLogChannel is set, the system begins storing logs in the ^FSLOG global. To disable logging, delete the global: Kill ^FSLogChannel.
Available channelType values include "Msg", "SQL", "_include", and "all".
Example: Set ^FSLogChannel("all") = 1 enables logging for all channel types.
Common Errors and Solutions
"Class Does Not Exist" Error on Foundation:
Even with all configurations done correctly, you may sometimes encounter this error. When it occurs, remember to verify the following:
Ensure that all %HS_DB_* roles are created and properly mapped to the foundation namespace.
Check the HS.Util.Installer.* log files for both the HSSYS namespace and your target namespace.
This article provides an overview of the basic FHIR facade configuration.
Article
Peter Steiwer · Dec 21, 2018
Easily transform a CSV file into a personalized preview of DeepSee - InterSystems BI
AnalyzeThis can be found on InterSystems Open Exchange. Use the Download link to navigate to GitHub and begin installing the project. Follow the “Installation” section of the GitHub README.
After installation, navigate to the User Portal from the Management Portal:
Once here, a new link will have been added. Click on the link and then select "New" to begin:
Use “Browse” to locate a csv file to import:
Click on “Next” to see a preview of your data and select “Measure” or “Dimension”:
As defined, “Measures” are the values you would like to aggregate. “Dimensions” are the values you would like to aggregate on.
You can also select to hide properties that are not good Dimensions or Measures. For example, “ProviderId” is unique, so this will not be a good value to group on. We also do not want any sort of sum or aggregate of this number, so we will not include it here:
All properties default to a Dimension. We will find some properties we want to change to be Measures:
Here we can see that “TotalDischarges”, “AverageCoveredCharges”, “AverageTotalPayments”, and “AverageMedicarePayments” would be good values to aggregate. We will make them Measures:
We also know these are dollar amounts, so we will change their type from Integer to Currency:
Now that we are happy with our data, we can click “Import”. This will start processing the data as seen here:
Once this stage is complete (speed depends on the amount of data being processed), we will see some new buttons on the dialog box:
Here we will click on “Sample Dashboard” to view the generated sample based on our data:
We can now have our data in a Cube so that we can start exploring the analytics capabilities in just a few minutes.
If any bugs are experienced during this process, please feel free to email me at psteiwer@intersystems.com, or file a bug report on GitHub. For general questions, please comment on this article so others can benefit from the information as well.
Head over to InterSystems Open Exchange and download AnalyzeThis today!
We had to choose one out of these 4 book covers for AnalyzeThis, we chose #1, do you agree with us?
Yes, the 1st one is a full match) @Peter.Steiwer, do you plan also to share the short screencast on how to use the tool in the best way? We could deploy it on Community Channel
I do not currently have a screencast, but on the Community Channel there is the Flash Talk from Global Summit 2018 that can be viewed for now
Hi @Peter.Steiwer! Tried to play with the demo and got the following:
Installed on IRIS Docker Community version: IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2019.2 (Build 107U) Wed Jun 5 2019 17:26:23 EDT
Hello @Evgeny.Shvarov,
Just to close the loop on this issue, @Peter.Steiwer fixed it back on July 24, 2019. Please feel free to download the latest version here.
Thanks,
Asaf
Hi, @Asaf, @Peter.Steiwer!
Yes, tried this on mac recently with 2019.3 on IRIS docker CE. This bug is solved - great!
But on the final step, it says "It's working"... - and it's working for a long time. Never saw the result, yet)
Hi @Evgeny.Shvarov
Please feel free to create an Issue on GitHub. Please include the CSV file you are trying to use as well
Peter
Done: https://github.com/psteiwer/AnalyzeThis/issues/35
The ObjectScript Package Manager has the updated version 1.1.2 as well, so AnalyzeThis can be installed as:
USER:zpm>install analyzethis
Hi @Evgeny.Shvarov
This is fixed in v1.1.3
Article
Murray Oldfield · Nov 29, 2016
This post provides guidelines for configuration, system sizing and capacity planning when deploying Caché 2015 and later on a VMware ESXi 5.5 and later environment.
I jump right in with recommendations, assuming you already have an understanding of the VMware vSphere virtualization platform. The recommendations in this guide are not specific to any particular hardware or site-specific implementation, and are not intended as a fully comprehensive guide to planning and configuring a vSphere deployment -- rather, this is a checklist of best practice configuration choices you can make. I expect that the recommendations will be evaluated for a specific site by your expert VMware implementation team.
[A list of other posts in the InterSystems Data Platforms and performance series is here.](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)
_Note:_ This post was updated on 3 Jan 2017 to highlight that VM memory reservations must be set for production database instances to guarantee memory is available for Caché and there will be no swapping or ballooning which will negatively impact database performance. See the section below *Memory* for more details.
### References
The information here is based on experience and reviewing publicly available VMware knowledge base articles and VMware documents for example [Performance Best Practices for VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere-esxi-vcenter-server-67-performance-best-practices.pdf) and mapping to requirements of Caché database deployments.
## Are InterSystems' products supported on ESXi?
It is InterSystems policy and procedure to verify and release InterSystems’ products against processor types and operating systems including when operating systems are virtualised. For specifics see [InterSystems support policy](http://www.intersystems.com/services-support/product-support/virtualization/) and [Release Information](http://www.intersystems.com/services-support/product-support/latest-platform/).
>For example: Caché 2016.1 running on Red Hat 7.2 operating system on ESXi on x86 hosts is supported.
Note: If you do not write your own applications you must also check your application vendors support policy.
### Supported Hardware
VMware virtualization works well for Caché when used with current server and storage components. Caché using VMware virtualization has been deployed successfully at customer sites and has been proven in benchmarks for performance and scalability. There is no significant performance impact using VMware virtualization on properly configured storage, network and servers with later model Intel Xeon processors, specifically: Intel Xeon 5500, 5600, 7500, E7-series and E5-series (including the latest E5 v4).
Generally Caché and applications are installed and configured on the guest operating system in the same way as for the same operating system on bare-metal installations.
It is the customer's responsibility to check the [VMware compatibility guide](http://www.vmware.com/resources/compatibility/search.php) for the specific servers and storage being used.
# Virtualised architecture
I see VMware commonly used in two standard configurations with Caché applications:
- Where primary production database operating system instances are on a ‘bare-metal’ cluster, and VMware is only used for additional production and non-production instances such as web servers, printing, test, training and so on.
- Where ALL operating system instances, including primary production instances are virtualized.
This post can be used as a guide for either scenario, however the focus is on the second scenario where all operating system instances including production are virtualised. The following diagram shows a typical physical server set up for that configuration.
_Figure 1. Simple virtualised Caché architecture_
Figure 1 shows a common deployment with a minimum of three physical host servers to provide N+1 capacity and availability with host servers in a VMware HA cluster. Additional physical servers may be added to the cluster to scale resources. Additional physical servers may also be required for backup/restore media management and disaster recovery.
For recommendations specific to _VMware vSAN_, VMware's Hyper-Converged Infrastructure solution, see the following post: [Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity). Most of the recommendations in this post can be applied to vSAN -- with the exception of some of the obvious differences in the Storage section below.
# VMWare versions
The following table shows key recommendations for Caché 2015 and later:
vSphere is a suite of products including vCenter Server that allows centralised system management of hosts and virtual machines via the vSphere client.
>This post assumes that vSphere will be used, not the "free" ESXi Hypervisor only version.
VMware has several licensing models; ultimately choice of version is based on what best suits your current and future infrastructure planning.
I generally recommend the "Enterprise" edition for its added features such as Distributed Resource Scheduler (DRS) for more efficient hardware utilization and Storage APIs for storage array integration (snapshot backups). The VMware web site shows edition comparisons.
There are also Advanced Kits that allow bundling of vCenter Server and CPU licenses for vSphere. Kits have limitations for upgrades so are usually only recommended for smaller sites that do not expect growth.
# ESXi Host BIOS settings
The ESXi host is the physical server. Before configuring BIOS you should:
- Check with the hardware vendor that the server is running the latest BIOS
- Check whether there are any server/CPU model specific BIOS settings for VMware.
Default settings for server BIOS may not be optimal for VMware. The following settings can be used to optimize the physical host servers to get best performance. Not all settings in the following table are available on all vendors’ servers.
# Memory
The following key rules should be considered for memory allocation:
When running multiple Caché instances or other applications on a single physical host, VMware has several technologies for efficient memory management, such as transparent page sharing (TPS), ballooning, swap, and memory compression. For example, when multiple OS instances are running on the same host, TPS allows overcommitment of memory without performance degradation by eliminating redundant copies of pages in memory, allowing virtual machines to run with less memory than on a physical machine.
>Note: VMware Tools must be installed in the operating system to take advantage of these and many other features of VMware.
Although these features exist to allow for overcommitting memory, the recommendation is to always start by sizing vRAM of all VMs to fit within the physical memory available. It is especially important in production environments to carefully consider the impact of overcommitting memory and to overcommit only after collecting data to determine the amount of overcommitment possible. To determine the effectiveness of memory sharing and the degree of acceptable overcommitment for a given Caché instance, run the workload and use the VMware commands `resxtop` or `esxtop` to observe the actual savings.
A good reference is to go back and look at the [fourth post in this series on memory](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory) when planning your Caché instance memory requirements. Especially the section "VMware Virtualisation considerations" where I point out:
>Set VMware memory reservation on production systems.
You must avoid any swapping of shared memory, so set your production database VMs' memory reservation to at least the size of Caché shared memory plus memory for Caché processes and operating system and kernel services. If in doubt, **reserve the full production database VM's memory (100% reservation)** to guarantee memory is available for your Caché instance, so there will be no swapping or ballooning to negatively impact database performance.
Notes: Large memory reservations will impact vMotion operations, so it is important to take this into consideration when designing the vMotion/management network. A virtual machine can only be live migrated, or started on another host with VMware HA, if the target host has free physical memory greater than or equal to the size of the reservation. This is especially important for production Caché VMs. For example, pay particular attention to HA Admission Control policies.
>Ensure capacity planning allows for distribution of VMs in event of HA failover.
For non-production environments (test, train, etc) more aggressive memory overcommitment is possible, however do not over commit Caché shared memory, instead limit shared memory in the Caché instance by having less global buffers.
Current Intel processor architecture has a NUMA topology. Processors have their own local memory and can access memory on other processors in the same host. Not surprisingly accessing local memory has lower latency than remote. For a discussion of CPU check out the [third post in this series](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu) including a discussion about NUMA in the _comments section_.
As noted in the BIOS section above, a strategy for optimal performance is to ideally size VMs only up to the maximum number of cores and memory on a single processor. For example, if your capacity planning shows your biggest production Caché database VM will be 14 vCPUs and 112 GB memory, then consider whether a cluster of servers with 2x E5-2680 v4 (14-core processors) and 256 GB memory is a good fit.
>**Ideally** size VMs to keep memory local to a NUMA node. But don't get too hung up on this.
If you need a "Monster VM" bigger than a NUMA node, that is OK; VMware will manage NUMA for optimal performance. It is also important to right-size your VMs and not allocate more resources than are needed (see below).
## CPU
The following key rules should be considered for virtual CPU allocation:
Production Caché systems should be sized based on benchmarks and measurements at live customer sites. For production systems, use a strategy of initially sizing the system with the same number of vCPUs as bare-metal CPU cores, then, as per best practice, monitoring to see if virtual CPUs (vCPUs) can be reduced.
### Hyperthreading and capacity planning
A good starting point for sizing __production database__ VMs based on your rules for physical servers is to calculate physical server CPU requirements for the target processor with hyper-threading enabled, then simply make the translation:
>One physical CPU (includes hyperthreading) = One vCPU (includes hyperthreading).
A common misconception is that hyper-threading somehow doubles vCPU capacity. This is NOT true for physical servers or for logical vCPUs. Hyperthreading on a bare-metal server may give a 30% uplift in performance over the same server without hyperthreading, but this can also be variable depending on the application.
For initial sizing, assume that the vCPU has full core dedication. For example, if you have a 32-core (2x 16-core) E5-2683 V4 server, size for a total of up to 32 vCPUs, knowing there may be available headroom. This configuration assumes hyper-threading is enabled at the host level. VMware will manage the scheduling between all the applications and VMs on the host. Once you have spent time monitoring the application, operating system, and VMware performance during peak processing times, you can decide if higher consolidation is possible.
### Licensing
In vSphere you can configure a VM with a certain number of sockets or cores. For example, if you have a dual-processor VM (2 vCPUs), it can be configured as two CPU sockets, or as a single socket with two CPU cores. From an execution standpoint it does not make much of a difference because the hypervisor will ultimately decide whether the VM executes on one or two physical sockets. However, specifying that the dual-CPU VM really has two cores instead of two sockets could make a difference for software licenses. Note: Caché license counts the cores (not threads).
# Storage
>This section applies to the more traditional storage model using a shared storage array. For _vSAN_ recommendations also see the following post: [Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity)
The following key rules should be considered for storage:
## Size storage for performance
Storage bottlenecks are one of the most common problems affecting Caché system performance, and the same is true for VMware vSphere configurations. The most common problem is sizing storage simply for GB capacity, rather than allocating a high enough number of spindles to support expected IOPS. Storage problems can be even more severe in VMware because more hosts can be accessing the same storage over the same physical connections.
## VMware Storage overview
VMware storage virtualization can be categorized into three layers, for example:
- The storage array is the bottom layer, consisting of physical disks presented as logical disks (storage array volumes or LUNs) to the layer above.
- The next layer is the virtual environment occupied by vSphere. Storage array LUNs are presented to ESXi hosts as datastores and are formatted as VMFS volumes.
- Virtual machines are made up of files in the datastore, including virtual disks that are presented to the guest operating system as disks that can be partitioned and used in file systems.
VMware offers two choices for managing disk access in a virtual machine—VMware Virtual Machine File System (VMFS) and raw device mapping (RDM), both offer similar performance. For simple management VMware generally recommends VMFS, but there may be situations where RDMs are required. As a general recommendation – unless there is a particular reason to use RDM choose VMFS, _new development by VMware is directed to VMFS and not RDM._
### Virtual Machine File System (VMFS)
VMFS is a file system developed by VMware that is dedicated and optimized for clustered virtual environments (allows read/write access from several hosts) and the storage of large files. The structure of VMFS makes it possible to store VM files in a single folder, simplifying VM administration. VMFS also enables VMware infrastructure services such as vMotion, DRS and VMware HA.
Operating systems, applications, and data are stored in virtual disk files (.vmdk files). vmdk files are stored in the datastore. A single VM can be made up of multiple vmdk files spread over several datastores. As the production VM in the diagram below shows, a VM can include storage spread over several datastores. For production systems, best performance is achieved with one vmdk file per LUN; for non-production systems (test, training, etc.) multiple VMs' vmdk files can share a datastore and a LUN.
While vSphere 5.5 has a maximum VMFS volume size of 64TB and a maximum VMDK size of 62TB, when deploying Caché you typically use multiple VMFS volumes mapped to LUNs on separate disk groups to separate IO patterns and improve performance, for example separating random from sequential IO disk groups, or separating production IO from the IO of other environments.
The following diagram shows an overview of an example VMware VMFS storage used with Caché:
_Figure 2. Example Caché storage on VMFS_
### RDM
RDM allows management and access of raw SCSI disks or LUNs as VMFS files. An RDM is a special file on a VMFS volume that acts as a proxy for a raw device. VMFS is recommended for most virtual disk storage, but raw disks might be desirable in some cases. RDM is only available for Fibre Channel or iSCSI storage.
### VMware vStorage APIs for Array Integration (VAAI)
For the best storage performance, customers should consider using VAAI-capable storage hardware. VAAI can improve performance in several areas, including virtual machine provisioning and the use of thin-provisioned virtual disks. VAAI may be available as a firmware update from the array vendor for older arrays.
### Virtual Disk Types
ESXi supports multiple virtual disk types:
**Thick Provisioned** – where space is allocated at creation. There are further types:
- Eager Zeroed – writes 0’s to the entire drive. This increases the time it takes to create the disk, but results in the best performance, even on the first write to each block.
- Lazy Zeroed – writes 0’s as each block is first written to. Lazy zero results in a shorter creation time, but reduced performance the first time a block is written to. Subsequent writes, however, have the same performance as on eager-zeroed thick disks.
**Thin Provisioned** – where space is allocated and zeroed upon write. There is a higher I/O cost (similar to that of lazy-zeroed thick disks) during the first write to an unwritten file block, but on subsequent writes thin-provisioned disks have the same performance as eager-zeroed thick disks.
_In all disk types VAAI can improve performance by offloading operations to the storage array._ Some arrays also support thin provisioning at the array level; do not thin provision ESXi disks on thin-provisioned array storage, as there can be conflicts in provisioning and management.
### Other Notes
As noted above for best practice use the same strategies as bare-metal configurations; production storage may be separated at the array level into several disk groups:
- Random access for Caché production databases
- Sequential access for backups and journals, but also a place for other non-production storage such as test, train, and so on
Remember that a datastore is an abstraction of the storage tier and, therefore, it is a logical representation not a physical representation of the storage. Creating a dedicated datastore to isolate a particular I/O workload (whether journal or database files), without isolating the physical storage layer as well, does not have the desired effect on performance.
Although performance is key, the choice of shared storage depends more on existing or planned infrastructure at the site than on the impact of VMware. As with bare-metal implementations, FC SAN is the best performing and is recommended. For FC, 8Gbps adapters are the recommended minimum. iSCSI storage is only supported if appropriate network infrastructure is in place, including a minimum of 10Gb Ethernet, and jumbo frames (MTU 9000) must be supported on all components in the network between server and storage, with separation from other traffic.
Use multiple VMware Paravirtual SCSI (PVSCSI) controllers for the database virtual machines or virtual machines with high I/O load. PVSCSI can provide some significant benefits by increasing overall storage throughput while reducing CPU utilization. The use of multiple PVSCSI controllers allows the execution of several parallel I/O operations inside the guest operating system. It is also recommended to separate journal I/O traffic from the database I/O traffic through separate virtual SCSI controllers. As a best practice, you can use one controller for the operating system and swap, another controller for journals, and one or more additional controllers for database data files (depending on the number and size of the database data files).
Aligning file system partitions is a well-known storage best practice for database workloads. Partition alignment on both physical machines and VMware VMFS partitions prevents I/O performance degradation caused by I/O crossing track boundaries. VMware test results show that aligning VMFS partitions to 64KB track boundaries results in reduced latency and increased throughput. VMFS partitions created using vCenter are aligned on 64KB boundaries as recommended by storage and operating system vendors.
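Alignment is easy to verify with a little arithmetic: a partition's starting byte offset should be an exact multiple of 64KB. The sketch below assumes 512-byte sectors (some devices use 4KB sectors) and is purely illustrative.

```python
def is_aligned(start_sector: int, sector_size: int = 512, boundary: int = 64 * 1024) -> bool:
    """Return True if the partition's starting byte offset falls on the 64KB boundary."""
    return (start_sector * sector_size) % boundary == 0

# A partition starting at sector 2048 (the common modern default) begins at byte
# 1,048,576, which is a multiple of 64KB, so it is aligned.
print(is_aligned(2048))  # True
print(is_aligned(63))    # False -- the old MS-DOS default of sector 63 is misaligned
```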
# Networking
The following key rules should be considered for networking:
As noted above, VMXNET adapters have better capabilities than the default E1000 adapter. VMXNET3 supports 10Gb and uses less CPU, whereas the E1000 is only 1Gb. If there are only 1-gigabit network connections between hosts, there is not a lot of difference for client-to-VM communication. However, VMXNET3 allows 10Gb between VMs on the same host, which does make a difference, especially in multi-tier deployments or where there are high network I/O requirements between instances. This capability should also be taken into consideration when planning DRS affinity and anti-affinity rules to keep VMs on the same or separate virtual switches.
The E1000 uses universal drivers that are available in Windows and Linux. Once VMware Tools is installed on the guest operating system, VMXNET virtual adapters can be installed.
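Inside a Linux guest you can confirm which virtual NIC the VM actually received, along with its negotiated speed and MTU, by reading sysfs, as in the sketch below. The interface name eth0 is an assumption; adjust it for your guest (for example ens192).

```python
import os

IFACE = "eth0"  # assumed interface name; adjust for your guest (e.g. ens192)
base = f"/sys/class/net/{IFACE}"

def read_sysfs(path):
    """Read a single sysfs value, returning 'unknown' if it is not present."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

# The driver symlink points at e.g. .../drivers/vmxnet3 or .../drivers/e1000.
driver_link = os.path.join(base, "device", "driver")
driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "unknown"
speed_mbps = read_sysfs(os.path.join(base, "speed"))  # VMXNET3 typically reports 10000
mtu = read_sysfs(os.path.join(base, "mtu"))

print(f"{IFACE}: driver={driver}, speed={speed_mbps} Mb/s, mtu={mtu}")
```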
The following diagram shows a typical small server configuration with four physical NIC ports: two ports are configured within VMware for infrastructure traffic (dvSwitch0 for management and vMotion), and two ports for application use by VMs. NIC teaming and load balancing are used for best throughput and HA.
_Figure 3. A typical small server configuration with four physical NIC ports._
# Guest Operating Systems
The following are recommended:
>It is very important to install VMware Tools in all VM guest operating systems and to keep the tools current.
VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system and improves management of the virtual machine. Without VMware Tools installed in the guest operating system, you lose important functionality and guest performance suffers.
It's vital that the time is set correctly on all ESXi hosts, because it ultimately affects the guest VMs. The default setting for VMs is not to sync the guest time with the host, but under certain conditions the guests still sync their time with the host, and if the host time is wrong this has been known to cause major issues. VMware recommends using NTP instead of VMware Tools periodic time synchronization. NTP is an industry standard and ensures accurate timekeeping in your guest. It may be necessary to open the firewall (UDP 123) to allow NTP traffic.
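A quick way to check guest timekeeping is to compare the guest clock against an NTP server. The sketch below assumes the third-party ntplib package and outbound UDP 123 access; pool.ntp.org is just an example, substitute your site's NTP servers.

```python
import ntplib  # third-party NTP client (pip install ntplib); assumed available

NTP_SERVER = "pool.ntp.org"  # example only -- use your site's NTP servers

client = ntplib.NTPClient()
response = client.request(NTP_SERVER, version=3)  # requires outbound UDP 123

# response.offset is the estimated difference between the local clock and NTP time, in seconds.
print(f"Clock offset vs {NTP_SERVER}: {response.offset:+.3f} seconds")
if abs(response.offset) > 0.5:
    print("Warning: clock drift exceeds 0.5 seconds; check the guest's NTP configuration.")
```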
# DNS Configuration
If your DNS server is hosted on virtualized infrastructure and becomes unavailable, vCenter cannot resolve host names, making the virtual environment unmanageable; the virtual machines themselves, however, keep operating without problems.
# High Availability
High availability is provided by features such as VMware vMotion, VMware Distributed Resource Scheduler (DRS) and VMware High Availability (HA). Caché Database mirroring can also be used to increase uptime.
It is important that Caché production systems are designed with n+1 physical hosts: there must be enough resources (e.g. CPU and memory) for all the VMs to run on the remaining hosts in the event of a single host failure. If VMware cannot allocate enough CPU and memory resources on the remaining servers after a host failure, VMware HA will not restart the VMs there.
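The n+1 rule is simple arithmetic that is worth checking explicitly when sizing a cluster. The sketch below uses made-up host and VM figures and deliberately ignores VMware HA admission-control details; it only illustrates the capacity check described above.

```python
# Made-up example: 3 identical hosts, each with 32 physical cores and 512 GB RAM.
HOSTS = 3
HOST_CPU_CORES = 32
HOST_MEMORY_GB = 512

# vCPU and memory required by the production VMs (illustrative figures).
vms = [
    {"name": "cache-prod-db",   "vcpu": 16, "memory_gb": 256},
    {"name": "cache-prod-app1", "vcpu": 8,  "memory_gb": 64},
    {"name": "cache-prod-app2", "vcpu": 8,  "memory_gb": 64},
]

required_vcpu = sum(vm["vcpu"] for vm in vms)
required_mem = sum(vm["memory_gb"] for vm in vms)

# Capacity left after losing one host: this is the "n+1" check.
surviving_cpu = (HOSTS - 1) * HOST_CPU_CORES
surviving_mem = (HOSTS - 1) * HOST_MEMORY_GB

print(f"Required: {required_vcpu} vCPU / {required_mem} GB")
print(f"Available after one host failure: {surviving_cpu} cores / {surviving_mem} GB")
if required_vcpu <= surviving_cpu and required_mem <= surviving_mem:
    print("n+1 check passed: all VMs can restart on the remaining hosts.")
else:
    print("n+1 check FAILED: add capacity or reduce VM requirements.")
```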
## vMotion
vMotion can be used with Caché. vMotion allows migration of a functioning VM from one ESXi host server to another in a fully transparent manner. The OS and applications such as Caché running in the VM have no service interruption.
When migrating using vMotion, only the status and memory of the VM—with its configuration—moves. The virtual disk does not need to move; it stays in the same shared-storage location. Once the VM has migrated, it is operating on the new physical host.
vMotion can function only with a shared storage architecture (such as a shared SAS array, FC SAN, or iSCSI). As Caché is usually configured to use a large amount of shared memory, it is important to have adequate network capacity available to vMotion; a 1Gb network may be OK, but higher bandwidth may be required, or multi-NIC vMotion can be configured.
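To put the bandwidth comment in perspective, here is a rough, back-of-the-envelope estimate of how long a single copy of a VM's memory takes over the vMotion network. Real vMotion times also depend on how quickly pages are re-dirtied and on protocol overhead, so treat this purely as an illustration; the 90% efficiency figure is an assumption.

```python
def vmotion_copy_estimate(memory_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Rough time (seconds) for one pass copying a VM's memory over the vMotion network.

    Ignores page re-dirtying and protocol overhead; 'efficiency' is an assumed
    fraction of the nominal link rate actually achieved.
    """
    memory_gigabits = memory_gb * 8
    return memory_gigabits / (link_gbps * efficiency)

for link_gbps in (1, 10):
    seconds = vmotion_copy_estimate(memory_gb=128, link_gbps=link_gbps)
    print(f"128 GB of VM memory over {link_gbps} Gb/s: roughly {seconds:.0f} seconds per pass")
```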
## DRS
Distributed Resource Scheduler (DRS) is a method of automating the use of vMotion in a production environment by sharing the workload among different host servers in a cluster.
DRS also provides the ability to implement QoS for VM instances, protecting resources for production VMs by stopping non-production VMs from overusing resources. DRS collects information about the use of the cluster's host servers and optimizes resources by distributing the VMs' workload among the cluster's different servers. This migration can be performed automatically or manually.
## Caché Database Mirror
For mission critical tier-1 Caché database application instances requiring the highest availability consider also using [InterSystems synchronous database mirroring.](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_bp_vm) Additional advantages of also using mirroring include:
- Separate copies of up-to-date data.
- Failover in seconds (faster than restarting a VM, then the operating system, and then recovering Caché).
- Failover in case of application/Caché failure (not detected by VMware).
# vCenter Appliance
The vCenter Server Appliance is a preconfigured Linux-based virtual machine optimized for running vCenter Server and associated services. I have been recommending that sites with small clusters use the VMware vCenter Server Appliance as an alternative to installing vCenter Server on a Windows VM. In vSphere 6.5 the appliance is recommended for all deployments.
# Summary
This post is a rundown of key best practices you should consider when deploying Caché on VMware. Most of these best practices are not unique to Caché but can be applied to other tier-1 business critical deployments on VMware.
If you have any questions please let me know via the comments below.
Good day Murray,
Is there anything to look out for when VMs hosting Ensemble databases on VMware are part of VMware Site Recovery Manager making use of vSphere Replication? Can it alone be safely used to boot up the VM on the other site?
Regards,
Anzelem.
Announcement
Emily Geary · Feb 7, 2024
The InterSystems Certification Team is building an InterSystems TrakCare Reports certification exam and is looking for Subject Matter Experts (SMEs) from our community to help write and review questions. You, as a valued InterSystems community member, know the challenges of working with our technology and what it takes to be successful at your job. A work assignment will typically involve writing 15 assigned questions and reviewing 15 questions assigned directly to you.
Proposed Project Work Dates: The work assignments will be assigned by the Certification Team through September 15, 2024.
Here are the details:
**Contact InterSystems Certification** – Write to certification@intersystems.com to express your interest in the Certification Subject Matter Expert Program. Tell us that you are interested in being an InterSystems TrakCare Reports SME (an individual with at least one year of experience with InterSystems TrakCare Reports tasks).
**Complete project profile (external participants)** – If you are an external volunteer looking to participate, a team member will send you a profile form to determine if your areas of expertise align with open projects.
**Accept** – If you are selected for an exam development opportunity, a team member will email you a Non-Disclosure Agreement requiring your signature.
**Train** – After receiving your signed document, and before beginning to write questions, you will be asked to watch a short training video on question-item writing.
**Participate** – Once onboarded, the Certification Team will send you information regarding your first assignment. This will include:
- an invitation to join Certiverse, our new test delivery platform, as an item writer and reviewer
- an item writing assignment, which usually consists of the submission of 15 scenario-based questions
- an alpha testing assignment, which usually consists of reviewing 15 items written by your peers

You will typically be given one month to complete the assignment.
Subject Matter Experts are eligible for a SME badge based on successful completion of their exam development participation. SMEs are also awarded the InterSystems TrakCare Reports certification if they write questions for all KSA Groups and their questions are accepted.
Interested in participating? Email certification@intersystems.com now!
The exam covers the following KSA Groups, KSAs, and Target Items:
1. Creates InterSystems Reports using Logi Designer within TrakCare
   1. Describes what the specification is saying
      1. Recalls what data sources and procedures are, and how to access the sources of data
      2. Identifies what parameters are used from the specification
      3. Distinguishes between different page report component types (e.g. cross tabs, banded objects, normal tables)
   2. Identifies the components of InterSystems Reports
      1. Distinguishes between catalogues and reports
      2. Recalls the features of a catalogue
      3. Catalogues connections and terms
      4. Accesses the catalogue manager in the designer
      5. Identifies which data source types are used in reporting
      6. Identifies the data source connection and how to modify it
      7. Identifies what is required to use a JDBC connection
      8. Recalls what a stored procedure is
      9. Recalls when and why to update a stored procedure
      10. Distinguishes between different data sources and their use cases
      11. Recalls the importance of binding parameters
      12. Manages catalogues using reference entities
      13. Recalls how to change the SQL type of a database field (e.g. dates)
      14. Identifies how to reuse sub-reports
      15. Recalls the different use cases for sub-reports
      16. Describes how to use parameters within a sub-report
      17. Recalls how to configure the parameters that the sub-report requires
      18. Recalls how to link a field on a row to filter sub-reports
      19. Recalls the potential impact of updating stored procedures on the settings
   3. Uses Logi Designer to design and present data
      1. Distinguishes between the different formats of reports
      2. Determines when and how to use different kinds of page report component types
      3. Recalls the meaning of each band and where they appear (e.g. page header vs banded page header)
      4. Recalls how to add groups and work with single vs multiple groups
      5. Differentiates between the types of summaries
      6. Uses tools to manage, organize and group data and pages, including effectively using page breaks
      7. Identifies when to use formulas
      8. Uses formulas to format data and tables
      9. Determines how to best work with images, including using dynamic images
      10. Uses sub-reports effectively
      11. Inserts standard page headers and footers into a report
      12. Recalls how to embed fonts into a report
      13. Applies correct formatting, localization, and languages
2. Integrates InterSystems reporting within TrakCare
   1. Understands TrakCare report architecture
      1. Applies correct formatting, localization, and languages
      2. Recalls how many user-inputted parameters can be used in TrakCare
      3. Recalls how to set up a menu for a report and how to add a menu to a header
      4. Recalls what a security group is and adds menus to security group access
      5. Configures the TrakCare layout webcommon.report
      6. Differentiates between different types of layout fields
3. Supports InterSystems Reports
   1. Verifies printing setup
      1. Debugs using the menu or preview button
      2. Tests the report by making sure it runs as expected
      3. Demonstrates how to run reports with different combinations of parameters
      4. Tests report performance with a big data set
      5. Identifies error types
   2. Uses print history
      1. Identifies use cases for the print history feature
      2. Recalls the steps to retry printing after a failed print
      3. Uses print to history to verify parameters are correctly passed to the parameters in the stored procedure
      4. Recalls how to identify whether a report was successfully previewed or if it encountered errors
Announcement
Benjamin De Boe · Jan 11, 2024
InterSystems is pleased to announce the General Availability of InterSystems IRIS Cloud SQL and InterSystems IRIS Cloud IntegratedML, two foundational services for developing cloud-native solutions powered by the proven, enterprise-class performance and reliability of InterSystems IRIS technology.
InterSystems IRIS Cloud SQL is a fully managed cloud service that brings the power of InterSystems IRIS relational database capabilities used by thousands of enterprise customers to a broad audience of application developers and data professionals. InterSystems IRIS Cloud IntegratedML is an option to this database-as-a-service that offers easy access to powerful Automated Machine Learning capabilities in a SQL-native form, through a set of simple SQL commands that can easily be embedded in application code to augment them with ML models that run close to the data.
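To illustrate the kind of SQL described above, here is a minimal, hedged sketch of training and applying an IntegratedML model from Python. The table, columns, model name, and connection details are placeholders, and the connect() call assumes the InterSystems Python DB-API driver (intersystems-irispython); for a Cloud SQL deployment, take the actual host, port, user, and TLS settings from the deployment's connection details page.

```python
import iris  # InterSystems Python DB-API driver (intersystems-irispython); assumed available

# Placeholder connection details -- replace with the values shown for your
# Cloud SQL deployment (Cloud SQL typically also requires TLS configuration).
conn = iris.connect(hostname="<deployment-host>", port=1972,
                    namespace="USER", username="SQLAdmin", password="<password>")
cur = conn.cursor()

# IntegratedML: create, train, and apply a model with plain SQL.
# "Hospital.Patients" and "ReadmissionRisk" are hypothetical names.
cur.execute("CREATE MODEL RiskModel PREDICTING (ReadmissionRisk) FROM Hospital.Patients")
cur.execute("TRAIN MODEL RiskModel")
cur.execute("SELECT TOP 5 ID, PREDICT(RiskModel) AS PredictedRisk FROM Hospital.Patients")
for row in cur.fetchall():
    print(row)
conn.close()
```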
The new offering is available from the AWS marketplace, and customers can subscribe to them using their AWS account. To encourage experimentation and developer creativity, new subscribers automatically receive a $300 credit for running deployments of any size. Beyond this free credit, customers will be billed through their AWS account.
InterSystems IRIS Cloud SQL, and the Cloud IntegratedML add-on, are foundational services in the InterSystems Cloud portfolio, and complement a number of successful earlier software-as-a-service offerings for the healthcare and financial services markets. They are building blocks for a composability approach to implement solutions that are easy to provision, scale, and operate in today’s fast-moving technology landscape.
Please check out the full press release and additional resources on the Developer Hub and get started today!
Thanks Benjamin. A journey has started!
And for those wanting to see what it is like working with those services, head over to our Learning Services video on InterSystems IRIS Cloud SQL and InterSystems Cloud IntegratedML.
@Benjamin.DeBoe - congratulations to the entire team for this VERY exciting milestone!!!!
Announcement
Benjamin De Boe · Jun 30, 2023
InterSystems IRIS Cloud SQL is a fully managed cloud service that brings the power of InterSystems IRIS relational database capabilities used by thousands of enterprise customers to a broad audience of application developers and data professionals. InterSystems IRIS Cloud IntegratedML is an option to this database-as-a-service that offers easy access to powerful Automated Machine Learning capabilities in a SQL-native form, through a set of simple SQL commands that can easily be embedded in application code to augment them with ML models that run close to the data.
Today, we announce the Developer Access Program for these two offerings. Application developers can now self-register for the service, create deployments and start building composable applications and smart data services, with all provisioning, configuration and administration taken care of by the service.
Developers can take advantage of a free trial that covers a small deployment for a limited time entirely free of charge in order to get started quickly and experience the performance of InterSystems IRIS Cloud technology. Alternatively, customers will be able to subscribe through the AWS marketplace to deploy the full set of instance sizes, which will get invoiced to their AWS account. To complement and support this core relational database service, the InterSystems Cloud team will continue to rapidly roll out additional features and capabilities throughout the Developer Access Program, and is eager to take your feedback to further enhance the user experience.
InterSystems IRIS Cloud SQL, and the Cloud IntegratedML add-on, are foundational services in the InterSystems Cloud portfolio, and complement a number of successful earlier software-as-a-service and platform-as-a-service offerings for the healthcare market. They are building blocks for a composability approach to implement solutions that are easy to provision, scale, and operate in today’s fast-moving technology landscape.
Register for the Developer Access Program today, and start building your next masterpiece!
During the Developer Access Program, deployments can only be created in the AWS us-east-1 region. These terms and conditions apply.
Announcement
Emily Geary · Aug 7, 2024
The InterSystems Certification Team is building an InterSystems TrakCare Integration certification exam and is looking for Subject Matter Experts (SMEs) from our community to help write and review questions. You, as a valued InterSystems community member, know the challenges of working with our technology and what it takes to be successful at your job. A work assignment will typically involve writing 15 assigned questions and reviewing 15 questions assigned directly to you.
Proposed Project Work Dates: The work assignments will be assigned by the Certification Team through September 15, 2024.
Here are the details:
**Contact InterSystems Certification** – Write to certification@intersystems.com to express your interest in the Certification Subject Matter Expert Program. Tell us that you are interested in being an InterSystems TrakCare Integration SME (an individual with at least one year of experience with InterSystems TrakCare Integration tasks).
**Complete project profile (external participants)** – If you are an external volunteer looking to participate, a team member will send you a profile form to determine if your areas of expertise align with open projects.
**Accept** – If you are selected for an exam development opportunity, a team member will email you a Non-Disclosure Agreement requiring your signature.
**Train** – After receiving your signed document, and before beginning to write questions, you will be asked to watch a short training video on question-item writing.
**Participate** – Once onboarded, the Certification Team will send you information regarding your first assignment. This will include:
- an invitation to join Certiverse, our new test delivery platform, as an item writer and reviewer
- an item writing assignment, which usually consists of the submission of 15 scenario-based questions
- an alpha testing assignment, which usually consists of reviewing 15 items written by your peers

You will typically be given one month to complete the assignment.
Subject Matter Experts are eligible for a SME badge based on successful completion of their exam development participation. SMEs are also awarded the InterSystems TrakCare Integration certification if they write questions for all KSA Groups and their questions are accepted.
Interested in participating? Email certification@intersystems.com now!
The exam covers the following KSA Groups, KSAs, and Target Items:
1. Demonstrates mastery of foundational concepts required for TrakCare integrations
   1. Accesses TrakCare system and locates test patients and data
      - Describes components on the EPR
      - Explains how Edition provides region-specific functionality
      - Uses TrakCare to perform patient and episode lookup, and to perform other basic operations
   2. Examines and configures basic TrakCare settings
      - Describes important items in the Configuration Manager (e.g. Site Code, System Paths, etc.)
      - Identifies steps to configure integration-related code tables
      - Toggles TrakCare features
   3. Examines and interprets TrakCare data model and data definitions, and performs basic SQL queries
      - Examines and interprets data definitions and relationships between tables using the TrakCare Data Dictionary UI
      - Examines and interprets data definitions, relationships, and global storage with the InterSystems class reference function
      - Performs basic SQL queries on patient, episode, and order items
   4. Examines and interprets integration security configuration settings
      - Describes the TrakCare application security model
      - Examines security settings using the Management Portal in InterSystems IRIS
   5. Interprets HL7 V2 requirements
      - Describes common HL7 V2 message types (e.g., ADT, SIU, REF, and ORM/ORU)
      - Correlates TrakCare triggering events with HL7 V2 message types
      - Describes mapping between HL7 V2 data and the TrakCare data model
   6. Interprets HL7 FHIR requirements
      - Names common HL7 FHIR resources
      - Correlates HL7 FHIR resources with the TrakCare data model
   7. Identifies elements in an SDA
      - Identifies important elements within the SDA structure
      - Examines the SDA definition by using the InterSystems class reference guide
      - Correlates SDA elements with HL7 V2 message segments and fields
2. Uses the HealthCare Messaging Framework (HMF) for TrakCare integrations
   1. Demonstrates mastery of foundational HMF concepts
      - Uses the HMF (TrakCare/SDA3) Clinical Summary
      - Uses the HMF External Search and Merge workflow
      - Uses the HMF FHIR Manager
   2. Designs and configures HMF productions
      - Identifies steps to define and configure HMF according to the integration solution design/architecture
      - Examines existing HMF configuration settings
      - Lists roles of each generated production (System, Router, Gateway)
      - Prepares HMF productions and interfaces according to the integration solution design
   3. Develops and customizes HMF productions
      - Formulates development and customization plans
      - Develops custom inbound/outbound methods
      - Enhances generated DTL code
      - Creates extensions
      - Enables/disables Event Triggers/productions using Integration Manager
      - Configures Outbound Query interfaces and makes process calls to search patient information from an external system
      - Creates additional SDA elements (extensions) at container and patient levels
   4. Deploys HMF solutions
      - Prepares HMF adapter-based productions and interfaces
      - Contrasts local versus remote deployments
   5. Manages and monitors HMF productions
      - Examines integration events and SDA content using the Integration History UI
      - Views configuration settings (e.g., ports, URLs, file paths, etc.) for business services and business processes generated in each production (System, Router, and Gateway)
      - Identifies production status using the Production Manager and InterSystems IRIS Message Viewer
      - Examines message workflow using Visual Trace in the InterSystems IRIS Management Portal for each production
   6. Troubleshoots HMF productions
      - Examines and interprets low-level HMF trace details in the ^zTRAK("HMF") global
      - Examines HMF production settings
      - Formulates resolution strategies
3. Configures TrakCare for external access and access to 3rd party applications
   1. Configures inbound data access
      - Uses external SQL tools (e.g., WinSQL) to perform SQL queries
   2. Configures outbound data access
      - Creates and runs SQL queries against external tables or views
   3. Uses SOAP web services
      - Writes InterSystems ObjectScript code to invoke external SOAP web services
      - Uses the SOAP Wizard to create proxy clients
      - Uses industry-standard tools such as SoapUI, Postman, and cURL to make SOAP calls to TrakCare
   4. Links to External Viewer
      - Configures External Viewer for TrakCare to launch into an external PACS
      - Associates External Viewer to the receiving location
   5. Launches into 3rd party application with context
      - Names TrakCare in-process variables
      - Creates and configures custom charts
   6. Uses TrakCare REST APIs
      - Retrieves TrakCare API Swagger documentation
      - Configures TrakCare REST APIs using the API Manager
      - Audits REST API history
Announcement
Cindy Olsen · May 5, 2023
Effective May 16, documentation for versions of InterSystems Caché® and InterSystems Ensemble® prior to 2017.1 will only be available in PDF format on the InterSystems documentation website. Local instances of these versions will continue to present content dynamically.
Announcement
Anastasia Dyubaylo · Jun 17, 2020
Hey Developers,
We're pleased to invite you to join the next InterSystems IRIS 2020.1 Tech Talk: Using InterSystems Managed FHIR Service in the AWS Cloud on June 30 at 10:00 AM EDT!
In this InterSystems IRIS 2020.1 Tech Talk, we’ll focus on using InterSystems Managed FHIR Service in the AWS Cloud. We’ll start with an overview of FHIR, which stands for Fast Healthcare Interoperability Resources, and is a next generation standards framework for working with healthcare data.
You'll learn how to:
provision the InterSystems IRIS FHIR server in the cloud;
integrate your own data with the FHIR server;
use SMART on FHIR applications and enterprise identity, such as Active Directory, with the FHIR server.
We will discuss an API-first development approach using the InterSystems IRIS FHIR server. Plus, we’ll cover the scalability, availability, security, regulatory, and compliance requirements that using InterSystems FHIR as a managed service in the AWS Cloud can help you address.
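For a sense of what working against the FHIR server looks like, here is a minimal, hedged sketch that queries the standard FHIR REST API from Python. The base URL and bearer token are placeholders; the real endpoint and SMART on FHIR / OAuth2 settings come from your managed FHIR service deployment.

```python
import requests  # third-party HTTP client library

# Placeholder values -- use the endpoint and access token issued for your deployment.
FHIR_BASE = "https://<your-fhir-endpoint>/fhir/r4"
ACCESS_TOKEN = "<oauth2-access-token>"

# Standard FHIR search: fetch up to 10 Patient resources with a given family name.
response = requests.get(
    f"{FHIR_BASE}/Patient",
    params={"family": "Smith", "_count": 10},
    headers={
        "Accept": "application/fhir+json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    timeout=30,
)
response.raise_for_status()

bundle = response.json()  # a FHIR Bundle resource
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["id"], patient.get("name", [{}])[0].get("family"))
```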
Speakers:
🗣 @Patrick.Jamieson3621, Product Manager - Health Informatics Platform, InterSystems
🗣 @Anton.Umnikov, Senior Cloud Solution Architect, InterSystems
Date: Tuesday, June 30, 2020
Time: 10:00 AM EDT
➡️ JOIN THE TECH TALK!
Announcement
Daniel Palevski · Nov 27, 2024
InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2024.3
The 2024.3 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health, and HealthShare® Health Connect is now Generally Available (GA).
Release Highlights
In this release, you can expect a host of exciting updates, including:
Much faster extension of database and WIJ files
Ability to resend messages from Visual Trace
Enhanced Rule Editor capabilities
Vector search enhancements
and more.
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available through these links below:
InterSystems IRIS 2024.3 documentation, release notes, and the Upgrade Checklist.
InterSystems IRIS for Health 2024.3 documentation, release notes, and the Upgrade Checklist.
Health Connect 2024.3 documentation, release notes, and the Upgrade Checklist.
In addition, check out the upgrade information for this release.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
How to get the software?
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic installation packages
Installation packages are available from the WRC's Continuous Delivery Releases page for InterSystems IRIS, InterSystems IRIS for Health, and Health Connect. Kits can also be found on the Evaluation Services website.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website.
The build number for this Continuous Delivery release is: 2024.3.0.217.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2024.3" and "latest-cd".
Announcement
Fabiano Sanches · Mar 19, 2024
The 2024.1 release of InterSystems IRIS® for Health™ and HealthShare® Health Connect is now Generally Available (GA).
❗This announcement does not apply to InterSystems IRIS®
Release Highlights
In this release, you can expect a host of exciting updates, including:
Support for SMART on FHIR 2.0.0
FHIR R4 object model generation
Improved performance of FHIR queries
Removal of the Private Web Server (PWS)
and more.
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available through these links below:
InterSystems IRIS for Health 2024.1 documentation, release notes and the Upgrade Checklist.
HealthShare Health Connect 2024.1 documentation, release notes and the Upgrade Checklist.
In addition, check out this link for upgrade information related to this release.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
How to get the software?
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic installation packages
Installation packages are available from the WRC's Extended Maintenance Releases page for InterSystems IRIS for Health, and from the HealthShare Full Kits page for HealthShare Health Connect. Kits can also be found on the Evaluation Services website. InterSystems IRIS Studio is still available in the release, and you can get it from the WRC's Components distribution page.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website (use the flag "Show Preview Software" to get access to the 2024.1).
The build number for this release is 2024.1.0.263.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2024.1" and "latest-em".
I am not seeing HealthShare Health Connect 2024.1 listed under the HealthShare Full Kits. Am I missing something?
@Scott.Roth - there was an issue found with the HSHC 2024.1 kit, so it was pulled down and a corrected kit should be available in the near future. So sorry for the inconvenience.
What was the correction? We have b263 and wonder if we should 'upgrade' already?
@Ian.Minshall, @Scott.Roth Please read this post w.r.t. HSHC 2024.1. A new kit is available now.
https://community.intersystems.com/post/apr-8-2024-%E2%80%93-alert-upgrades-fail-healthshare%C2%AE-health-connect-instances-not-licensed-hl7%C2%AE-fhir%C2%AE
Thanks, I did receive an email, downloaded the new kit, and upgraded our DEV environment yesterday to start evaluating.
Great!!
Announcement
Daniel Palevski · Mar 26
InterSystems Announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2025.1
The 2025.1 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect is now Generally Available (GA). This is an Extended Maintenance (EM) release.
Release Highlights
In this exciting release, users can expect several new features and enhancements, including:
Advanced Vector Search Capabilities
A new disk-based Approximate Nearest Neighbor (ANN) index significantly accelerates vector search queries, yielding sub-second responses across millions of vectors. Access the following exercise to learn more: Vectorizing and Searching Text with InterSystems SQL.
Enhanced Business Intelligence
Automatic dependency analysis in IRIS BI Cube building and synchronization, ensuring consistency and integrity across complex cube dependencies.
Improved SQL and Data Management
Introduction of standard SQL pagination syntax (LIMIT... OFFSET..., OFFSET... FETCH...); a brief example appears after this list.
New LOAD SQL command for simplified bulk import of DDL statements.
Enhanced ALTER TABLE commands to convert between row and columnar layouts seamlessly.
Optimized Database Operations
Reduced journal record sizes for increased efficiency.
Faster database compaction, particularly for databases with lots of big string content.
Increased automation when adding new databases to a mirror.
New command-line utility for ECP management tasks.
Strengthened Security Compliance
Support for cryptographic libraries compliant with FIPS 140-3 standards.
Modernized Interoperability UI
Opt-in to a revamped Production Configuration and DTL Editor experience, featuring source control integration, VS Code compatibility, enhanced filtering, split-panel views, and more. Please see this Developer Community article for more information about how to opt-in and provide feedback.
Expanded Healthcare Capabilities
Efficient bulk FHIR ingestion and scheduling, including integrity checks and resource management.
Enhanced FHIR Bulk Access and improved FHIR Search Operations.
New Developer Experience Features
Embedded Python support within the DTL Editor, allowing Python-skilled developers to leverage the InterSystems platform more effectively. Watch the following video to learn more - Using Embedded Python in the BPL and DTL Editors.
Enhanced Observability with OpenTelemetry
Introduction of tracing capabilities in IRIS for detailed observability into web requests and application performance.
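As a small illustration of the standard pagination syntax mentioned above, the sketch below shows the two equivalent query forms. The table and column names (Hospital.Patient, Name, DateOfBirth) are hypothetical; check the InterSystems IRIS 2025.1 SQL reference for the exact supported variants.

```python
# Two equivalent ways to fetch "rows 21-30" using the standard pagination
# syntax introduced in 2025.1. Table and column names are hypothetical.
limit_offset_query = """
    SELECT Name, DateOfBirth
    FROM Hospital.Patient
    ORDER BY Name
    LIMIT 10 OFFSET 20
"""

offset_fetch_query = """
    SELECT Name, DateOfBirth
    FROM Hospital.Patient
    ORDER BY Name
    OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY
"""
```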
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available through these links below:
InterSystems IRIS 2025.1 documentation and release notes.
InterSystems IRIS for Health 2025.1 documentation and release notes.
Health Connect 2025.1 documentation and release notes.
In addition, check out the upgrade impact checklist for an easily navigable overview of all changes you need to be aware of when upgrading to this release.
In particular, please note that InterSystems IRIS 2025.1 introduces a new journal file format version, which is incompatible with earlier releases and therefore imposes certain limitations on mixed-version mirror setups. See the corresponding documentation for more details.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
Download the Software
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic Installation Packages
Installation packages are available from the WRC's InterSystems IRIS page for InterSystems IRIS and InterSystems IRIS for Health, and WRC’s HealthShare page for Health Connect. Kits can also be found in the Evaluation Services website.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
The build number for this Extended Maintenance release is 2025.1.0.223.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2025.1" and "latest-em".