Article
Yuri Marx · Nov 1, 2021

Top 10 InterSystems IRIS Features

InterSystems IRIS is a great data platform, and it meets the features currently required by the market. In this article, you will see the top 10. Note: this list was updated because many features have been added to IRIS in the last 3 years (thanks @Kristina.Lauer).

1. **Democratized analytics**
   - Why: InterSystems IRIS Adaptive Analytics delivers virtual cubes with centralized business semantics, abstracted from technical details and modeling, to allow business users to easily and quickly create their analyses in Excel or their preferred analytics product (Power BI, Tableau, etc.), with no consumption restrictions per user. InterSystems Reports is a low-code report designer for delivering operational data reports embedded in any application or in a web report portal.
   - Learn more: Overview of Adaptive Analytics, Adaptive Analytics Essentials, Introduction to InterSystems Reports, Delivering Data Visually with InterSystems Reports

2. **API Manager**
   - Why: Digital assets are consumed through REST APIs. Reuse, security, consumption, the asset catalog, the developer ecosystem, and other aspects must be governed from a central point, and API Manager is the right tool for this. That is why every company has, or wants to have, an API manager.
   - Learn more: Hands-On with API Manager for Devs

3. **Scalable databases**
   - Why (sharding): In 2020, the amount of data created and replicated reached a new high of 64.2 zettabytes, and over the five years to 2025, global data creation is projected to grow to more than 180 zettabytes (source: https://www.statista.com/statistics/871513/worldwide-data-created/). In this scenario, it is critical for the business to be able to process data in a distributed way (in shards, like Hadoop or MongoDB) to increase and maintain performance. Also worth noting: IRIS is 3 times faster than Caché, and faster than AWS databases on the AWS cloud.
   - Why (columnar storage): It stores repeating data in columns instead of rows, allowing you to achieve up to 10x higher performance, especially in aggregated (analytical) data scenarios.
   - Learn more: Planning and Deploying a Sharded Cluster, Scaling for Data Volume with Sharding, Increasing Analytical Query Speed Using Columnar Storage, Using Columnar Storage

4. **Python support**
   - Why: Python is the most popular language for AI, and AI is at the center of business strategy because it allows you to gain new insights, increase productivity, and reduce costs.
   - Learn more: Writing Python Applications with InterSystems, Leveraging Embedded Python in Interoperability Productions

5. **Native APIs (Java, .NET, Node.js, Python) and PEX**
   - Why: The US has nearly 1 million open IT jobs (source: https://www.cnbc.com/2019/11/06/how-switching-careers-to-tech-could-solve-the-us-talent-shortage.html). It is very hard to find an ObjectScript developer, so it is important to be able to use IRIS features, like interoperability, with the development team's official programming language (Python, Java, .NET, etc.).
   - Learn more: Creating Interoperability Productions Using PEX, InterSystems IRIS for Coders, Node.js QuickStart, Using the Native API for Python

6. **Interoperability, FHIR, and IoT**
   - Why: Businesses are constantly connecting and exchanging data, and departments also need to work connected to deliver business processes with more strategic value and lower cost. The best technology for this is interoperability tooling, especially ESB, integration adapters, business process automation engines (BPL), data transformation tools (DTL), and the adoption of market interoperability standards such as FHIR and MQTT/IoT. InterSystems Interoperability supports all of this (for FHIR, use IRIS for Health).
   - Learn more: Receiving and Routing Data in a Production, Building Basic FHIR Integrations with InterSystems, Monitoring Remotely with MQTT, Building Business Integrations with InterSystems IRIS

7. **Cloud, Docker & microservices**
   - Why: Everyone now wants a cloud microservices architecture. They want to break monoliths into projects that are smaller, less complex, less coupled, more scalable, reusable, and independent. IRIS lets you deploy data, application, and analytics microservices thanks to its support for shards, Docker, Kubernetes, distributed computing, and DevOps tools, and its low CPU/memory consumption (IRIS even supports ARM processors!). But microservices require microservice API management, using API Manager, to stay aligned with the business.
   - Learn more: Deploying InterSystems IRIS in Containers and the Cloud, Deploying and Testing InterSystems Products Using CI/CD Pipelines

8. **Vector search and generative AI**
   - Why: Vectors are mathematical representations of data and textual semantics (NLP), and they are the raw material generative AI applications use to understand questions and tasks and return correct answers. Vector repositories and searches can store vectors (AI processing) so that for each new task or question they can retrieve what has already been produced (AI memory, or a knowledge base), making everything faster and cheaper.
   - Learn more: Developing Generative AI Applications, Using Vector Search

9. **VS Code support**
   - Why: VS Code is the most popular IDE, and InterSystems IRIS has a good set of tools for it.
   - Learn more: Developing on an InterSystems Server Using VS Code

10. **Data science**
    - Why: The ability to apply data science to data, integrations, and transaction requests and responses, using Python, R, and IntegratedML (AutoML), enables AI at the moment the business requires it. InterSystems IRIS delivers AI with Python, R, and IntegratedML (AutoML).
    - Learn more: Hands-On with IntegratedML, Developing in Python or R within InterSystems IRIS, Predicting Outcomes with IntegratedML in InterSystems IRIS
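To make item 8 slightly more concrete, here is a minimal, hedged sketch of a vector similarity query run from embedded Python. The `Demo.KnowledgeBase` table, its `embedding` VECTOR column, and the helper function are hypothetical, and the exact `TO_VECTOR`/`VECTOR_COSINE` syntax should be checked against your IRIS version:

```python
# Hypothetical sketch only: table, column, and function names are invented.
import iris  # available in an embedded Python (irispython) session

def top_matches(question_embedding, k=3):
    """Return the k stored chunks most similar to the query embedding."""
    vec = ",".join(str(x) for x in question_embedding)
    rs = iris.sql.exec(
        f"SELECT TOP {int(k)} docId, chunk "
        "FROM Demo.KnowledgeBase "
        "ORDER BY VECTOR_COSINE(embedding, TO_VECTOR(?, DOUBLE)) DESC",
        vec)
    return [(row[0], row[1]) for row in rs]
```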
Article
Kristina Lauer · Jul 29, 2024

Onboarding with InterSystems IRIS: A Comprehensive Guide

Updated 2/27/25

Hi Community,

You can unlock the full potential of InterSystems IRIS—and help your team onboard—with the full range of InterSystems learning resources offered online and in person, for every role in your organization. Developers, system administrators, data analysts, and integrators can quickly get up to speed.

**Onboarding Resources for Every Role**

Developers:
- Online Learning Program: Getting Started with InterSystems IRIS for Coders (21h)
- Classroom Training: Developing with InterSystems Objects and SQL (5 days)

System Administrators:
- Learning Path: InterSystems IRIS Management Basics (10h)
- Classroom Training: Managing InterSystems Servers (5 days)

Data Analysts:
- Video: Introduction to Analytics with InterSystems (6m)
- Learning Paths for every tool: Analyzing Data with InterSystems IRIS BI, Delivering Data Visually with InterSystems Reports (1h 15m), Build Data Models Using Adaptive Analytics (2h 15m)
- Classroom Training: Using InterSystems Embedded Analytics (5 days)

Integrators:
- Learning Program: Getting Started with InterSystems IRIS for Health for Integrators (14h)
- Classroom Training: Developing System Integrations and Building and Managing HL7 Integrations (5 days each)

Implementers:
- Learning Path: Deploying InterSystems IRIS in Containers and the Cloud (3h)
- Learning Program: Getting Started with InterSystems IRIS for Implementers (26h)

Project Managers:
- Watch product overview videos.
- Read success stories to get inspired—see how others are using InterSystems products!

**Other Resources from Learning Services**

💻 Online Learning: Register for free at learning.intersystems.com to access self-paced courses, videos, and exercises. You can also complete task-based learning paths or role-based programs to advance your career.

👩‍🏫 Classroom Training: Check the schedule of live, in-person or virtual classroom training, or request a private course for your team. Find details at classroom.intersystems.com.

📘 InterSystems IRIS documentation: Comprehensive reference materials, guides, and how-to articles. Explore the documentation.

📧 Support: For technical support, email support@intersystems.com.

**Certification Opportunities**

Once you and your team members have gained enough training and experience, get certified according to your role!

**Learn from the Community**

💬 Engage in learning on the Developer Community: Chat with other developers, post questions, read articles, and stay updated with the latest announcements. See this post for tips on how to learn on the Developer Community.

With these learning resources, your team will be well equipped to maximize the capabilities of InterSystems IRIS, driving your organization's growth and success. For additional assistance, post questions here or ask your dedicated Sales Engineer.
Announcement
Anastasia Dyubaylo · Apr 11

Registration is open for the InterSystems READY 2025!

Hi Community,

We're happy to announce that registration for the event of the year — InterSystems Ready 2025 — is now open. This is the Global Summit we all know and love, but with a new name!

➡️ InterSystems Ready 2025

🗓 Dates: June 22-25, 2025

📍 Location: Signia Hilton Bonnet Creek, Orlando, FL, USA

InterSystems READY 2025 is a friendly and informative environment for the InterSystems community to meet, interact, and exchange knowledge. The READY 2025 event includes:

- Sessions: 3 and a half days of sessions geared to the needs of software developers and managers. Sessions repeat, so you don't have to miss out as you build your schedule.
- Inspiring keynotes: Presentations that challenge your assumptions and highlight new possibilities.
- What's next: In the keynotes and breakout sessions, you'll learn what's on the InterSystems roadmap, so you'll be ready to go when new tech is released.
- Networking: Meet InterSystems executives, members of our global product and innovation teams, and peers from around the world to discuss what matters most to you.
- Workshops and personal training: Dive into exactly what you need with an InterSystems expert, including one-on-ones.
- Startup program: Demonstrate your tech, connect with potential buyers, and learn how InterSystems can help you accelerate the growth of your business.
- Partner Pavilion: Looking for a consultant, a systems integrator, or tools to simplify your work? It's all in the pavilion.
- Fun: Demos and Drinks, Tech Exchange, and other venues.

Learn more about the prices on the official website, and don't forget that the super early bird discount lapses on April 16th! We look forward to seeing you at InterSystems Ready 2025!
Article
Guillaume Rongier · Feb 7, 2022

InterSystems IRIS Interoperability with Embedded Python

# 1. interoperability-embedded-python

This proof of concept aims to show how the **iris interoperability framework** can be used with **embedded python**.

## 1.1. Table of Contents

- [1. interoperability-embedded-python](#1-interoperability-embedded-python)
  - [1.1. Table of Contents](#11-table-of-contents)
  - [1.2. Example](#12-example)
  - [1.3. Register a component](#13-register-a-component)
- [2. Demo](#2-demo)
- [3. Prerequisites](#3-prerequisites)
- [4. Installation](#4-installation)
  - [4.1. With Docker](#41-with-docker)
  - [4.2. Without Docker](#42-without-docker)
  - [4.3. With ZPM](#43-with-zpm)
  - [4.4. With PyPI](#44-with-pypi)
    - [4.4.1. Known issues](#441-known-issues)
- [5. How to Run the Sample](#5-how-to-run-the-sample)
  - [5.1. Docker containers](#51-docker-containers)
  - [5.2. Management Portal and VSCode](#52-management-portal-and-vscode)
  - [5.3. Open the production](#53-open-the-production)
- [6. What's inside the repository](#6-whats-inside-the-repository)
  - [6.1. Dockerfile](#61-dockerfile)
  - [6.2. .vscode/settings.json](#62-vscodesettingsjson)
  - [6.3. .vscode/launch.json](#63-vscodelaunchjson)
  - [6.4. .vscode/extensions.json](#64-vscodeextensionsjson)
  - [6.5. src folder](#65-src-folder)
- [7. How it works](#7-how-it-works)
  - [7.1. The `__init__.py` file](#71-the-__init__py-file)
  - [7.2. The `common` class](#72-the-common-class)
  - [7.3. The `business_host` class](#73-the-business_host-class)
  - [7.4. The `inbound_adapter` class](#74-the-inbound_adapter-class)
  - [7.5. The `outbound_adapter` class](#75-the-outbound_adapter-class)
  - [7.6. The `business_service` class](#76-the-business_service-class)
  - [7.7. The `business_process` class](#77-the-business_process-class)
  - [7.8. The `business_operation` class](#78-the-business_operation-class)
    - [7.8.1. The dispatch system](#781-the-dispatch-system)
    - [7.8.2. The methods](#782-the-methods)
  - [7.9. The `director` class](#79-the-director-class)
  - [7.10. The `objects`](#710-the-objects)
  - [7.11. The `messages`](#711-the-messages)
  - [7.12. How to register a component](#712-how-to-register-a-component)
    - [7.12.1. register_component](#7121-register_component)
    - [7.12.2. register_file](#7122-register_file)
    - [7.12.3. register_folder](#7123-register_folder)
    - [7.12.4. migrate](#7124-migrate)
      - [7.12.4.1. setting.py file](#71241-settingpy-file)
        - [7.12.4.1.1. CLASSES section](#712411-classes-section)
        - [7.12.4.1.2. Productions section](#712412-productions-section)
  - [7.13. Direct use of Grongier.PEX](#713-direct-use-of-grongierpex)
- [8. Command line](#8-command-line)
  - [8.1. help](#81-help)
  - [8.2. default](#82-default)
  - [8.3. lists](#83-lists)
  - [8.4. start](#84-start)
  - [8.5. kill](#85-kill)
  - [8.6. stop](#86-stop)
  - [8.7. restart](#87-restart)
  - [8.8. migrate](#88-migrate)
  - [8.9. export](#89-export)
  - [8.10. status](#810-status)
  - [8.11. version](#811-version)
  - [8.12. log](#812-log)
- [9. Credits](#9-credits)
## 1.2. Example

```python
from dataclasses import dataclass

from grongier.pex import BusinessOperation, Message

# Message classes are defined first so the type annotation below resolves.
@dataclass
class MyRequest(Message):
    request_string: str = None

@dataclass
class MyResponse(Message):
    my_string: str = None

class MyBusinessOperation(BusinessOperation):

    def on_init(self):
        # This method is called when the component is becoming active in the production
        self.log_info("[Python] ...MyBusinessOperation:on_init() is called")
        return

    def on_teardown(self):
        # This method is called when the component is becoming inactive in the production
        self.log_info("[Python] ...MyBusinessOperation:on_teardown() is called")
        return

    def on_message(self, message_input: MyRequest):
        # Called from a service/process/operation; the message is of type MyRequest
        # with property request_string
        self.log_info("[Python] ...MyBusinessOperation:on_message() is called with message:"
                      + message_input.request_string)
        response = MyResponse("...MyBusinessOperation:on_message() echos")
        return response
```

## 1.3. Register a component

Thanks to the method grongier.pex.Utils.register_component():

Start an embedded python shell:

```sh
/usr/irissys/bin/irispython
```

Then use this class method to add a python class to the component list for interoperability (placeholder argument names below are inferred from the example that follows):

```python
from grongier.pex import Utils
Utils.register_component(<module_name>, <class_name>, <path>, <overwrite>, <iris_classname>)
```

e.g.:

```python
from grongier.pex import Utils
Utils.register_component("MyCombinedBusinessOperation","MyCombinedBusinessOperation","/irisdev/app/src/python/demo/",1,"PEX.MyCombinedBusinessOperation")
```

This is a hack; do not use it in production.

# 2. Demo

The demo can be found inside `src/python/demo/reddit/` and is composed of:

- An `adapter.py` file that holds a `RedditInboundAdapter` that will, given a service, fetch recent Reddit posts.
- A `bs.py` file that holds three `services` that do the same thing: they call our `Process` and send it Reddit posts. One works on its own, one uses the `RedditInboundAdapter` we talked about earlier, and the last one uses a Reddit inbound adapter coded in ObjectScript.
- A `bp.py` file that holds a `FilterPostRoutingRule` process that will analyze our Reddit posts and send them to our `operations` if they contain certain words.
- A `bo.py` file that holds:
  - Two **email operations** that will send a mail to a certain company depending on the words analyzed before; one works on its own and the other one works with an OutboundAdapter.
  - Two **file operations** that will write to a text file depending on the words analyzed before; one works on its own and the other one works with an OutboundAdapter.

New json trace for python native messages:

# 3. Prerequisites

Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker desktop](https://www.docker.com/products/docker-desktop) installed.

# 4. Installation

## 4.1. With Docker

Clone/git pull the repo into any local directory:

```sh
git clone https://github.com/grongierisc/interoperability-embedded-python
```

Open the terminal in this directory and run:

```sh
docker-compose build
```

Run the IRIS container with your project:

```sh
docker-compose up -d
```

## 4.2. Without Docker

Install the *grongier_pex-1.2.4-py3-none-any.whl* on your local IRIS instance:

```sh
/usr/irissys/bin/irispython -m pip install grongier_pex-1.2.4-py3-none-any.whl
```

Then load the ObjectScript classes:

```ObjectScript
do $System.OBJ.LoadDir("/opt/irisapp/src","cubk","*.cls",1)
```

## 4.3. With ZPM

```objectscript
zpm "install pex-embbeded-python"
```
## 4.4. With PyPI

```sh
pip3 install iris_pex_embedded_python
```

Import the ObjectScript classes, open an embedded python shell, and run:

```python
from grongier.pex import Utils
Utils.setup()
```

### 4.4.1. Known issues

If the module is not updated, make sure to remove the old version:

```sh
pip3 uninstall iris_pex_embedded_python
```

or manually remove the `grongier` folder in `/lib/python/`, or force the installation with pip:

```sh
pip3 install --upgrade iris_pex_embedded_python --target /lib/python/
```

# 5. How to Run the Sample

## 5.1. Docker containers

In order to have access to the InterSystems images, we need to go to the following url: http://container.intersystems.com. After connecting with our InterSystems credentials, we will get our password to connect to the registry. In the Docker VSCode addon, in the image tab, by pressing "connect registry" and entering the same url as before (http://container.intersystems.com) as a generic registry, we will be asked to give our credentials. The login is the usual one, but the password is the one we got from the website.

From there, we should be able to build and compose our containers (with the `docker-compose.yml` and `Dockerfile` files given).

## 5.2. Management Portal and VSCode

This repository is ready for [VS Code](https://code.visualstudio.com/). Open the locally cloned `interoperability-embedded-python` folder in VS Code. If prompted (bottom right corner), install the recommended extensions.

**IMPORTANT**: When prompted, reopen the folder inside the container so you will be able to use the python components within it. The first time you do this it may take several minutes while the container is readied.

By opening the folder remotely you enable VS Code and any terminals you open within it to use the python components within the container. Configure these to use `/usr/irissys/bin/irispython`.

## 5.3. Open the production

To open the production you can go to [production](http://localhost:52773/csp/irisapp/EnsPortal.ProductionConfig.zen?PRODUCTION=PEX.Production). You can also click on the `127.0.0.1:52773[IRISAPP]` button at the bottom and select `Open Management Portal`, then click the [Interoperability] and [Configure] menus, then click [Productions] and [Go].

The production already has some code samples. Here we can see the production and our pure python services and operations:

New json trace for python native messages:

# 6. What's inside the repository

## 6.1. Dockerfile

A dockerfile which installs some python dependencies (pip, venv) and sudo in the container for convenience. It then creates the dev directory and copies this git repository into it. It starts IRIS and activates **%Service_CallIn** for **Python Shell**. Use the related docker-compose.yml to easily set up additional parameters like port numbers and where you map keys and host folders. This dockerfile ends with the installation of the requirements for python modules. Use the .env/ file to adjust the dockerfile being used in docker-compose.

## 6.2. .vscode/settings.json

Settings file to let you immediately code in VSCode with the [VSCode ObjectScript plugin](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript).

## 6.3. .vscode/launch.json

Config file if you want to debug with VSCode ObjectScript.

[Read about all the files in this article](https://community.intersystems.com/post/dockerfile-and-friends-or-how-run-and-collaborate-objectscript-projects-intersystems-iris)
## 6.4. .vscode/extensions.json

Recommendation file to add extensions if you want to run with VSCode in the container. [More information here](https://code.visualstudio.com/docs/remote/containers)

![Architecture](https://code.visualstudio.com/assets/docs/remote/containers/architecture-containers.png)

This is very useful for working with embedded python.

## 6.5. src folder

```
src
├── Grongier
│   └── PEX // ObjectScript classes that wrap python code
│       ├── BusinessOperation.cls
│       ├── BusinessProcess.cls
│       ├── BusinessService.cls
│       ├── Common.cls
│       ├── Director.cls
│       ├── InboundAdapter.cls
│       ├── Message.cls
│       ├── OutboundAdapter.cls
│       ├── Python.cls
│       ├── Test.cls
│       └── _utils.cls
├── PEX // Some example of wrapped classes
│   └── Production.cls
└── python
    ├── demo // Actual python code to run this demo
    │   └── reddit
    │       ├── adapter.py
    │       ├── bo.py
    │       ├── bp.py
    │       ├── bs.py
    │       ├── message.py
    │       └── obj.py
    ├── dist // Wheel used to implement python interoperability components
    │   └── grongier_pex-1.2.4-py3-none-any.whl
    ├── grongier
    │   └── pex // Helper classes to implement interoperability components
    │       ├── _business_host.py
    │       ├── _business_operation.py
    │       ├── _business_process.py
    │       ├── _business_service.py
    │       ├── _common.py
    │       ├── _director.py
    │       ├── _inbound_adapter.py
    │       ├── _message.py
    │       ├── _outbound_adapter.py
    │       ├── __init__.py
    │       └── _utils.py
    └── setup.py // setup to build the wheel
```

# 7. How it works

## 7.1. The `__init__.py` file

This file will allow us to create the classes to import in the code. It gathers the classes from the multiple files seen earlier and makes them callable. That way, when you wish to create a business operation, for example, you can just do:

```python
from grongier.pex import BusinessOperation
```

## 7.2. The `common` class

The common class shouldn't be called by the user; it defines almost all the other classes. This class defines:

`on_init`: The on_init() method is called when the component is started. Use the on_init() method to initialize any structures needed by the component.

`on_tear_down`: Called before the component is terminated. Use it to free any structures.

`on_connected`: The on_connected() method is called when the component is connected or reconnected after being disconnected. Use the on_connected() method to initialize any structures needed by the component.

`log_info`: Write a log entry of type "info". Log entries can be viewed in the management portal.

`log_alert`: Write a log entry of type "alert". Log entries can be viewed in the management portal.

`log_warning`: Write a log entry of type "warning". Log entries can be viewed in the management portal.

`log_error`: Write a log entry of type "error". Log entries can be viewed in the management portal.
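Pulling these hooks together, a minimal sketch (not part of the repository; the component and its counter are invented for illustration) of a component exercising `on_init`, `on_tear_down`, and the `log_*` helpers could look like this:

```python
from grongier.pex import BusinessOperation

class HeartbeatLogger(BusinessOperation):
    """Hypothetical component that only exercises the common-class hooks."""

    def on_init(self):
        # Called when the component starts: initialize any structures here.
        self.counter = 0
        self.log_info("HeartbeatLogger started")

    def on_tear_down(self):
        # Called before the component is terminated: free structures here.
        self.log_warning(f"HeartbeatLogger stopping after {self.counter} messages")

    def on_message(self, request):
        self.counter += 1
        self.log_info(f"message #{self.counter} received")
        return None
```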
## 7.3. The `business_host` class

The business host class shouldn't be called by the user; it is the base class for all the business classes. This class defines:

`send_request_sync`: Send the specified message to the target business process or business operation synchronously.

**Parameters**:
- **target**: a string that specifies the name of the business process or operation to receive the request. The target is the name of the component as specified in the Item Name property in the production definition, not the class name of the component.
- **request**: specifies the message to send to the target. The request is either an instance of a class that is a subclass of the Message class or of the IRISObject class. If the target is a built-in ObjectScript component, you should use the IRISObject class. The IRISObject class enables the PEX framework to convert the message to a class supported by the target.
- **timeout**: an optional integer that specifies the number of seconds to wait before treating the send request as a failure. The default value is -1, which means wait forever.
- **description**: an optional string parameter that sets a description property in the message header. The default is None.

**Returns**: the response object from the target.

**Raises**: TypeError if the request is not of type Message or IRISObject.

`send_request_async`: Send the specified message to the target business process or business operation asynchronously.

**Parameters**:
- **target**: a string that specifies the name of the business process or operation to receive the request. The target is the name of the component as specified in the Item Name property in the production definition, not the class name of the component.
- **request**: specifies the message to send to the target. The request is an instance of IRISObject or of a subclass of Message. If the target is a built-in ObjectScript component, you should use the IRISObject class. The IRISObject class enables the PEX framework to convert the message to a class supported by the target.
- **description**: an optional string parameter that sets a description property in the message header. The default is None.

**Raises**: TypeError if the request is not of type Message or IRISObject.

`get_adapter_type`: Name of the registered adapter.
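As a quick illustration of the two send methods just described, a hypothetical business process might forward a message both ways (the target names are invented for this sketch, and the keyword arguments follow the parameter names documented above):

```python
from grongier.pex import BusinessProcess

class ForwardingProcess(BusinessProcess):
    """Hypothetical process showing send_request_sync vs send_request_async."""

    def on_request(self, request):
        # Synchronous: blocks until the target answers (or the timeout expires).
        # The target is the Item Name in the production, not the class name.
        response = self.send_request_sync("Python.FileOperation", request, timeout=10)

        # Asynchronous: returns immediately; any response is handled later.
        self.send_request_async("Python.EmailOperation", request,
                                description="forwarded copy")
        return response
```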
""" def on_init(self): if not hasattr(self,'feed'): self.feed = "/new/" if self.limit is None: raise TypeError('no Limit field') self.last_post_name = "" return 1 def on_task(self): self.log_info(f"LIMIT:{self.limit}") if self.feed == "" : return 1 tSC = 1 # HTTP Request try: server = "https://www.reddit.com" request_string = self.feed+".json?before="+self.last_post_name+"&limit="+self.limit self.log_info(server+request_string) response = requests.get(server+request_string) response.raise_for_status() data = response.json() updateLast = 0 for key, value in enumerate(data['data']['children']): if value['data']['selftext']=="": continue post = iris.cls('dc.Reddit.Post')._New() post._JSONImport(json.dumps(value['data'])) post.OriginalJSON = json.dumps(value) if not updateLast: self.LastPostName = value['data']['name'] updateLast = 1 response = self.BusinessHost.ProcessInput(post) except requests.exceptions.HTTPError as err: if err.response.status_code == 429: self.log_warning(err.__str__()) else: raise err except Exception as err: self.log_error(err.__str__()) raise err return tSC ``` ## 7.5. The `outbound_adapter` class Outbound Adapter in Python are subclass from grongier.pex.OutboundAdapter in Python, that inherit from all the functions of the [common class](#72-the-common-class). This class is responsible for sending the data to the external system. The Outbound Adapter gives the Operation the possibility to have a heartbeat notion. To activate this option, the CallInterval parameter of the adapter must be strictly greater than 0. Example of an outbound adapter ( situated in the src/python/demo/reddit/adapter.py file ): ```python class TestHeartBeat(OutboundAdapter): def on_keepalive(self): self.log_info('beep') def on_task(self): self.log_info('on_task') ``` ## 7.6. The `business_service` class This class is responsible for receiving the data from external system and sending it to business processes or business operations in the production. The business service can use an adapter to access the external system, which is specified overriding the get_adapter_type method. There are three ways of implementing a business service: - Polling business service with an adapter - The production framework at regular intervals calls the adapter’s OnTask() method, which sends the incoming data to the the business service ProcessInput() method, which, in turn calls the OnProcessInput method with your code. - Polling business service that uses the default adapter - In this case, the framework calls the default adapter's OnTask method with no data. The OnProcessInput() method then performs the role of the adapter and is responsible for accessing the external system and receiving the data. - Nonpolling business service - The production framework does not initiate the business service. Instead custom code in either a long-running process or one that is started at regular intervals initiates the business service by calling the Director.CreateBusinessService() method. Business service in Python are subclass from grongier.pex.BusinessService in Python, that inherit from all the functions of the [business host](#73-the-business_host-class). This class defines: `on_process_input`: Receives the message from the inbond adapter via the PRocessInput method and is responsible for forwarding it to target business processes or operations. 
Business services in Python are subclasses of grongier.pex.BusinessService, which inherits all the functions of the [business host](#73-the-business_host-class). This class defines:

`on_process_input`: Receives the message from the inbound adapter via the ProcessInput method and is responsible for forwarding it to target business processes or operations. If the business service does not specify an adapter, then the default adapter calls this method with no message, and the business service is responsible for receiving the data from the external system and validating it.

**Parameters**:
- **message_input**: an instance of IRISObject or a subclass of Message containing the data that the inbound adapter passes in. The message can have any structure agreed upon by the inbound adapter and the business service.

Example of a business service (situated in the src/python/demo/reddit/bs.py file):

```python
from grongier.pex import BusinessService
import iris

from message import PostMessage
from obj import PostClass

class RedditServiceWithPexAdapter(BusinessService):
    """
    This service uses our python Python.RedditInboundAdapter to receive posts
    from reddit and calls the FilterPostRoutingRule process.
    """
    def get_adapter_type():
        """
        Name of the registered Adapter
        """
        return "Python.RedditInboundAdapter"

    def on_process_input(self, message_input):
        msg = iris.cls("dc.Demo.PostMessage")._New()
        msg.Post = message_input
        return self.send_request_sync(self.target, msg)

    def on_init(self):
        if not hasattr(self, 'target'):
            self.target = "Python.FilterPostRoutingRule"
        return
```

## 7.7. The `business_process` class

Typically contains most of the logic in a production. A business process can receive messages from a business service, another business process, or a business operation. It can modify the message, convert it to a different format, or route it based on the message contents. The business process can route a message to a business operation or another business process. Business processes in Python are subclasses of grongier.pex.BusinessProcess, which inherits all the functions of the [business host](#73-the-business_host-class). This class defines:

`on_request`: Handles requests sent to the business process. A production calls this method whenever an initial request for a specific business process arrives on the appropriate queue and is assigned a job in which to execute.

**Parameters**:
- **request**: An instance of IRISObject or a subclass of Message that contains the request message sent to the business process.

**Returns**: An instance of IRISObject or a subclass of Message that contains the response message that this business process can return to the production component that sent the initial message.

`on_response`: Handles responses sent to the business process in response to messages that it sent to the target. A production calls this method whenever a response for a specific business process arrives on the appropriate queue and is assigned a job in which to execute. Typically this is a response to an asynchronous request made by the business process where the responseRequired parameter has a true value.

**Parameters**:
- **request**: An instance of IRISObject or a subclass of Message that contains the initial request message sent to the business process.
- **response**: An instance of IRISObject or a subclass of Message that contains the response message that this business process can return to the production component that sent the initial message.
- **callRequest**: An instance of IRISObject or a subclass of Message that contains the request that the business process sent to its target.
- **callResponse**: An instance of IRISObject or a subclass of Message that contains the incoming response.
- **completionKey**: A string that contains the completionKey specified in the completionKey parameter of the outgoing SendAsync() method.
**Returns**: An instance of IRISObject or a subclass of Message that contains the response message that this business process can return to the production component that sent the initial message.

`on_complete`: Called after the business process has received and handled all responses to requests it has sent to targets.

**Parameters**:
- **request**: An instance of IRISObject or a subclass of Message that contains the initial request message sent to the business process.
- **response**: An instance of IRISObject or a subclass of Message that contains the response message that this business process can return to the production component that sent the initial message.

**Returns**: An instance of IRISObject or a subclass of Message that contains the response message that this business process can return to the production component that sent the initial message.

Example of a business process (situated in the src/python/demo/reddit/bp.py file):

```python
from grongier.pex import BusinessProcess

from message import PostMessage
from obj import PostClass

class FilterPostRoutingRule(BusinessProcess):
    """
    This process receives a PostMessage containing a reddit post.
    It then determines whether the post is about a dog, a cat, or neither,
    and fills in the right information inside the PostMessage before
    sending it to the FileOperation operation.
    """
    def on_init(self):
        if not hasattr(self, 'target'):
            self.target = "Python.FileOperation"
        return

    def on_request(self, request):
        if 'dog'.upper() in request.post.selftext.upper():
            request.to_email_address = 'dog@company.com'
            request.found = 'Dog'
        if 'cat'.upper() in request.post.selftext.upper():
            request.to_email_address = 'cat@company.com'
            request.found = 'Cat'
        if request.found is not None:
            return self.send_request_sync(self.target, request)
        else:
            return
```

## 7.8. The `business_operation` class

This class is responsible for sending the data to an external system or a local system such as an iris database. The business operation can optionally use an adapter to handle the outgoing message, which is specified by overriding the get_adapter_type method. If the business operation has an adapter, it uses the adapter to send the message to the external system. The adapter can be a PEX adapter, an ObjectScript adapter, or a [python adapter](#75-the-outbound_adapter-class). Business operations in Python are subclasses of grongier.pex.BusinessOperation, which inherits all the functions of the [business host](#73-the-business_host-class).

### 7.8.1. The dispatch system

In a business operation it is possible to create any number of functions [similar to the on_message method](#782-the-methods) that take as argument a [typed request](#711-the-messages), like this: `my_special_message_method(self, request: MySpecialMessage)`. The dispatch system will automatically analyze any request arriving at the operation and dispatch the requests depending on their type. If the type of the request is not recognized or is not specified in any **on_message-like function**, the dispatch system will send it to the `on_message` function.

### 7.8.2. The methods

This class defines:

`on_message`: Called when the business operation receives a message from another production component [that cannot be dispatched to another function](#781-the-dispatch-system). Typically, the operation will either send the message to the external system or forward it to a business process or another business operation.
If the operation has an adapter, it uses the Adapter.invoke() method to call the method on the adapter that sends the message to the external system. If the operation is forwarding the message to another production component, it uses the SendRequestAsync() or the SendRequestSync() method.

**Parameters**:
- **request**: An instance of either a subclass of Message or of IRISObject containing the incoming message for the business operation.

**Returns**: The response object.

Example of a business operation (situated in the src/python/demo/reddit/bo.py file):

```python
from grongier.pex import BusinessOperation

from message import MyRequest, MyMessage
import iris
import os
import datetime
import smtplib
from email.mime.text import MIMEText

class EmailOperation(BusinessOperation):
    """
    This operation receives a PostMessage and sends an email with all the
    important information to the concerned company (dog or cat company).
    """
    def my_message(self, request: MyMessage):
        sender = 'admin@example.com'
        receivers = 'toto@example.com'
        port = 1025
        msg = MIMEText(request.toto)
        msg['Subject'] = 'MyMessage'
        msg['From'] = sender
        msg['To'] = receivers
        with smtplib.SMTP('localhost', port) as server:
            server.sendmail(sender, receivers, msg.as_string())
        print("Successfully sent email")

    def on_message(self, request):
        sender = 'admin@example.com'
        receivers = [request.to_email_address]
        port = 1025
        msg = MIMEText('This is test mail')
        msg['Subject'] = request.found + " found"
        msg['From'] = 'admin@example.com'
        msg['To'] = request.to_email_address
        with smtplib.SMTP('localhost', port) as server:
            # server.login('username', 'password')
            server.sendmail(sender, receivers, msg.as_string())
        print("Successfully sent email")
```

If this operation is called with a MyMessage message, the my_message function will be called thanks to the dispatcher; otherwise, the on_message function will be called.

## 7.9. The `director` class

The Director class is used for nonpolling business services, that is, business services which are not automatically called by the production framework (through the inbound adapter) at the call interval. Instead, these business services are created by a custom application by calling the Director.create_business_service() method. This class defines:

`create_business_service`: The create_business_service() method initiates the specified business service.

**Parameters**:
- **connection**: an IRISConnection object that specifies the connection to an IRIS instance for Java.
- **target**: a string that specifies the name of the business service in the production definition.

**Returns**: an object that contains an instance of IRISBusinessService.

`start_production`: The start_production() method starts the production.

**Parameters**:
- **production_name**: a string that specifies the name of the production to start.

`stop_production`: The stop_production() method stops the production.

**Parameters**:
- **production_name**: a string that specifies the name of the production to stop.

`restart_production`: The restart_production() method restarts the production.

**Parameters**:
- **production_name**: a string that specifies the name of the production to restart.

`list_productions`: The list_productions() method returns a dictionary of the names of the productions that are currently running.
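A hypothetical driver script tying these director methods together might look like the sketch below. The production and service names are examples, and the exact signatures (in particular how the created service is invoked) can vary between versions of the framework, so check the repository before relying on it:

```python
# Hedged sketch: drive a nonpolling business service from a script.
# "PEX.Production" and "Python.RedditService" are example names only.
from grongier.pex import Director

# Make sure a production is running before initiating the service.
Director.start_production("PEX.Production")
print(Director.list_productions())

# Initiate the nonpolling business service by its name in the
# production definition, then hand it work directly.
service = Director.create_business_service("Python.RedditService")
service.on_process_input(None)  # assumption: invoke the service entry point

Director.stop_production("PEX.Production")
```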
## 7.10. The `objects`

We will use `dataclass` to hold information in our [messages](#711-the-messages) in an `obj.py` file.

Example of an object (situated in the src/python/demo/reddit/obj.py file):

```python
from dataclasses import dataclass

@dataclass
class PostClass:
    title: str
    selftext: str
    author: str
    url: str
    created_utc: float = None
    original_json: str = None
```

## 7.11. The `messages`

The messages will contain one or more [objects](#710-the-objects), located in the `obj.py` file. Messages, requests, and responses all inherit from the `grongier.pex.Message` class. These messages will allow us to transfer information between any business service/process/operation.

Example of a message (situated in the src/python/demo/reddit/message.py file):

```python
from grongier.pex import Message
from dataclasses import dataclass
from obj import PostClass

@dataclass
class PostMessage(Message):
    post: PostClass = None
    to_email_address: str = None
    found: str = None
```

WIP: It is to be noted that you need to use types when you define an object or a message.

## 7.12. How to register a component

You can register a component to iris in many ways:

* Only one component with `register_component`
* All the components in a file with `register_file`
* All the components in a folder with `register_folder`

### 7.12.1. register_component

Start an embedded python shell:

```sh
/usr/irissys/bin/irispython
```

Then use this class method to add a new py file to the component list for interoperability (placeholder argument names are inferred from the example that follows):

```python
from grongier.pex import Utils
Utils.register_component(<module_name>, <class_name>, <path>, <overwrite>, <iris_classname>)
```

e.g.:

```python
from grongier.pex import Utils
Utils.register_component("MyCombinedBusinessOperation","MyCombinedBusinessOperation","/irisdev/app/src/python/demo/",1,"PEX.MyCombinedBusinessOperation")
```

### 7.12.2. register_file

Start an embedded python shell:

```sh
/usr/irissys/bin/irispython
```

Then use this class method to add a new py file to the component list for interoperability (placeholder argument names are inferred from the example that follows):

```python
from grongier.pex import Utils
Utils.register_file(<file>, <overwrite>, <package_name>)
```

e.g.:

```python
from grongier.pex import Utils
Utils.register_file("/irisdev/app/src/python/demo/bo.py",1,"PEX")
```

### 7.12.3. register_folder

Start an embedded python shell:

```sh
/usr/irissys/bin/irispython
```

Then use this class method to add a new py file to the component list for interoperability (placeholder argument names are inferred from the example that follows):

```python
from grongier.pex import Utils
Utils.register_folder(<path>, <overwrite>, <package_name>)
```

e.g.:

```python
from grongier.pex import Utils
Utils.register_folder("/irisdev/app/src/python/demo/",1,"PEX")
```

### 7.12.4. migrate

Start an embedded python shell:

```sh
/usr/irissys/bin/irispython
```

Then use this static method to migrate the settings file to the iris framework.

```python
from grongier.pex import Utils
Utils.migrate()
```

#### 7.12.4.1. setting.py file

This file is used to store the settings of the interoperability components. It has two sections:

* `CLASSES`: This section is used to store the classes of the interoperability components.
* `PRODUCTIONS`: This section is used to store the productions of the interoperability components.
e.g.:

```python
import bp
from bo import *
from bs import *

CLASSES = {
    'Python.RedditService': RedditService,
    'Python.FilterPostRoutingRule': bp.FilterPostRoutingRule,
    'Python.FileOperation': FileOperation,
    'Python.FileOperationWithIrisAdapter': FileOperationWithIrisAdapter,
}

PRODUCTIONS = [
    {
        'dc.Python.Production': {
            "@Name": "dc.Demo.Production",
            "@TestingEnabled": "true",
            "@LogGeneralTraceEvents": "false",
            "Description": "",
            "ActorPoolSize": "2",
            "Item": [
                {
                    "@Name": "Python.FileOperation",
                    "@Category": "",
                    "@ClassName": "Python.FileOperation",
                    "@PoolSize": "1",
                    "@Enabled": "true",
                    "@Foreground": "false",
                    "@Comment": "",
                    "@LogTraceEvents": "true",
                    "@Schedule": "",
                    "Setting": {
                        "@Target": "Host",
                        "@Name": "%settings",
                        "#text": "path=/tmp"
                    }
                },
                {
                    "@Name": "Python.RedditService",
                    "@Category": "",
                    "@ClassName": "Python.RedditService",
                    "@PoolSize": "1",
                    "@Enabled": "true",
                    "@Foreground": "false",
                    "@Comment": "",
                    "@LogTraceEvents": "false",
                    "@Schedule": "",
                    "Setting": [
                        {
                            "@Target": "Host",
                            "@Name": "%settings",
                            "#text": "limit=10\nother
```
Article
sween · Oct 20, 2023

GitOps with the InterSystems Kubernetes Operator

![image](/sites/default/files/inline/images/argo-iko.png)

This article will cover turning over control of provisioning the InterSystems Kubernetes Operator, and starting your journey managing your own "Cloud" of InterSystems solutions through GitOps practices. This deployment pattern is also the fulfillment path for the [PID^TOO||](https://www.pidtoo.com) FHIR Breathing Identity Resolution Engine.

### GitOps

I encourage you to do your own research or ask your favorite LLM about GitOps, but I can paraphrase it here for you as we understand it. GitOps is an alternative deployment paradigm, where the Kubernetes cluster itself is "pulling" updates from manifests that reside in source control to manage the state of your solutions, making "Git" an integral part of the name.

### Prerequisites

* Provision a Kubernetes cluster; this has been tested on EKS, GKE, and MicroK8s clusters.
* Provision a GitLab, GitHub, or other Git repo that is accessible by your Kubernetes cluster.

### Argo CD

The star of our show here is [Argo CD](https://argoproj.github.io/cd/), which provides a declarative approach to continuous delivery with a ridiculously well done UI. Getting the chart going on your cluster is a snap with just a couple of strokes on your cluster.

```sh
kubectl create namespace argocd
kubectl apply -n argocd -f \
  https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Let's get logged into the UI for Argo CD on your Kubernetes cluster. To do this, you need to grab the secret that was created for the UI and set up a port forward to make it accessible on your system.

**Grab Secret**

Decrypt it and put it on your clipboard.

![image](/sites/default/files/inline/images/secret.png)

**Port Forward**

Redirect port 4000 (or whatever) to your local host.

![image](/sites/default/files/inline/images/pf.png)

**UI**

Navigate to https://0.0.0.0:4000, supply the secret to the login screen, and log in.

![image](/sites/default/files/inline/images/signup.png)

### InterSystems Kubernetes Operator (IKO)

Instructions for obtaining the IKO Helm chart are in the [documentation](https://docs.intersystems.com/components/csp/docbook/DocBook.UI.Page.cls?KEY=AIKO) itself; once you get it, check it in to your git repo in a feature branch. I would provide a sample repo for this, but unfortunately I can't do it without violating redistribution terms, as it does not appear the chart is available in a public repository. Create yourself a feature branch in your git repository and unpack the IKO Helm chart into a single directory. As below, this is `iko/iris_operator_amd-3.5.48.100` off the root of the repo, on a `feature/iko` branch as an example:

```
├── iko
│   ├── AIKO.pdf
│   └── iris_operator_amd-3.5.48.100
│       ├── chart
│       │   └── iris-operator
│       │       ├── Chart.yaml
│       │       ├── templates
│       │       │   ├── apiregistration.yaml
│       │       │   ├── appcatalog-user-roles.yaml
│       │       │   ├── cleaner.yaml
│       │       │   ├── cluster-role-binding.yaml
│       │       │   ├── cluster-role.yaml
│       │       │   ├── deployment.yaml
│       │       │   ├── _helpers.tpl
│       │       │   ├── mutating-webhook.yaml
│       │       │   ├── service-account.yaml
│       │       │   ├── service.yaml
│       │       │   ├── user-roles.yaml
│       │       │   └── validating-webhook.yaml
│       │       └── values.yaml
```

**IKO Setup**

Create the `isc` namespace, and add a secret for `containers.intersystems.com` into it.
```sh
kubectl create ns isc
kubectl create secret docker-registry \
  pidtoo-pull-secret --namespace isc \
  --docker-server=https://containers.intersystems.com \
  --docker-username='ron@pidtoo.com' \
  --docker-password='12345'
```

This should conclude the setup for IKO and enable delegating it entirely through GitOps to Argo CD.

### Connect Git to Argo CD

This is a simple step in the UI for Argo CD to connect the repo. This step ONLY "connects" the repo; further configuration will be in the repo itself.

![image](/sites/default/files/inline/images/gitconnect.png)

### Declare Branch to Argo CD

Configure Kubernetes to poll the branch through Argo CD `values.yml` in the Argo CD chart. Most of these locations in the git repo are up to you, but the opinionated way to declare things in your repo can be an "App of Apps" paradigm. Consider creating the folder structure below, with the files that need to be created annotated as a table of contents:

```
├── argocd
│   ├── app-of-apps
│   │   ├── charts
│   │   │   └── iris-cluster-collection
│   │   │       ├── Chart.yaml                          ## Chart
│   │   │       ├── templates
│   │   │       │   ├── iris-operator-application.yaml  ## IKO As Application
│   │   │       │   └── values.yaml                     ## Application Chart Values
│   │   └── cluster-seeds
│   │       ├── seed.yaml                               ## Cluster Seed
```

**Chart**

```yaml
apiVersion: v1
description: 'pidtoo IRIS cluster'
name: iris-cluster-collection
version: 1.0.0
appVersion: 3.5.48.100
maintainers:
  - name: intersystems
    email: support@intersystems.com
```

**IKO As Application**

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: iko
  namespace: argocd
spec:
  destination:
    namespace: isc
    server: https://kubernetes.default.svc
  project: default
  source:
    path: iko/iris_operator_amd-3.5.48.100/chart/iris-operator
    repoURL: {{ .Values.repoURL }}
    targetRevision: {{ .Values.targetRevision }}
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```

**IKO Application Chart Values**

```yaml
targetRevision: main
repoURL: https://github.com/pidtoo/gitops_iko.git
```

**Cluster Seed**

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-iko-seed
  namespace: argocd
  labels:
    isAppOfApps: 'true'
spec:
  destination:
    namespace: isc
    server: https://kubernetes.default.svc
  project: default
  source:
    path: argocd/app-of-apps/charts/iris-cluster-collection
    repoURL: https://github.com/pidtoo/gitops_iko.git
    targetRevision: main
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```

### Seed the Cluster!

This is the final step of interacting with your Argo CD/IKO cluster applications; **the rest is up to Git**!

```sh
kubectl apply -n argocd -f argocd/app-of-apps/cluster-seeds/seed.yaml
```

### Merge to Main

OK, this is where we see how we did in the UI. You should immediately start seeing applications coming to life in Argo CD.

**The apps view:**

![image](/sites/default/files/inline/images/goeswell.png)

**InterSystems Kubernetes Operator View**

![image](/sites/default/files/inline/images/ikolights.png)

![image](/sites/default/files/inline/images/nodeview.png)

> Welcome to GitOps with the InterSystems Kubernetes Operator!

[Git Demos are the Best! - Live October 19, 2023](https://youtu.be/IKoadH_oOPU?feature=shared)

Ron Sweeney, Principal Architect, Integration Required, LLC (PID^TOO)
Dan McCracken, COO, Devsoperative, INC
Article
sween · Jul 23, 2024

Databricks Station - InterSystems Cloud SQL

A quick start to InterSystems Cloud SQL data in Databricks. Getting up and running in Databricks against an InterSystems Cloud SQL deployment consists of four parts:

- Obtaining the X.509 certificate and JDBC driver for InterSystems IRIS
- Adding an init script and external library to your Databricks compute cluster
- Getting data
- Putting data

**Download X.509 Certificate/JDBC Driver from Cloud SQL**

Navigate to the overview page of your deployment; if you do not have external connections enabled, do so, and download your certificate and the JDBC driver from the overview page. I have used intersystems-jdbc-3.8.4.jar and intersystems-jdbc-3.7.1.jar with success in Databricks from Driver Distribution.

**Init Script for your Databricks Cluster**

The easiest way to import one or more custom CA certificates to your Databricks cluster is to create an init script that adds the entire CA certificate chain to both the Linux SSL and Java default cert stores, and sets the REQUESTS_CA_BUNDLE property. Paste the contents of your downloaded X.509 certificate in the top block of the following script:

import_cloudsql_certificate.sh

```sh
#!/bin/bash

cat << 'EOF' > /usr/local/share/ca-certificates/cloudsql.crt
-----BEGIN CERTIFICATE-----
<PASTE>
-----END CERTIFICATE-----
EOF

update-ca-certificates

PEM_FILE="/etc/ssl/certs/cloudsql.pem"
PASSWORD="changeit"
JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
KEYSTORE="$JAVA_HOME/lib/security/cacerts"
CERTS=$(grep 'END CERTIFICATE' $PEM_FILE | wc -l)

# To process multiple certs with keytool, you need to extract
# each one from the PEM file and import it into the Java KeyStore.
for N in $(seq 0 $(($CERTS - 1))); do
  ALIAS="$(basename $PEM_FILE)-$N"
  echo "Adding to keystore with alias:$ALIAS"
  cat $PEM_FILE |
    awk "n==$N { print }; /END CERTIFICATE/ { n++ }" |
    keytool -noprompt -import -trustcacerts \
      -alias $ALIAS -keystore $KEYSTORE -storepass $PASSWORD
done

echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
echo "export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
```

Now that you have the init script, upload it to a Volume in Unity Catalog. Once the script is on a volume, you can add the init script to the cluster from the volume in the Advanced Properties of your cluster. Secondly, add the InterSystems JDBC driver/library to the cluster... ...and either start or restart your compute.

**Databricks Station - Inbound InterSystems IRIS Cloud SQL**

Create a Python notebook in your workspace, attach it to your cluster, and test dragging data inbound to Databricks. Under the hood, Databricks is going to be using pySpark, if that is not immediately obvious. The following Spark dataframe construction is all you should need; you can grab your connection info from the overview page as before.

```python
df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:IRIS://k8s-05868f04-a4909631-ac5e3e28ef-6d9f5cd5b3f7f100.elb.us-east-1.amazonaws.com:443/USER")
    .option("driver", "com.intersystems.jdbc.IRISDriver")
    .option("dbtable", "(SELECT name,category,review_point FROM SQLUser.scotch_reviews) AS temp_table;")
    .option("user", "SQLAdmin")
    .option("password", "REDACTED")
    .option("connection security level", "10")
    .option("sslConnection", "true")
    .load())

df.show()
```

Illustrating the dataframe output from data in Cloud SQL... boom!

**Databricks Station - Outbound InterSystems IRIS Cloud SQL**

Let's now take what we read from IRIS and write it back with Databricks.
If you recall, we read only 3 fields into our dataframe, so let's write that back immediately and specify an "overwrite" mode.

```python
df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:IRIS://k8s-05868f04-a4909631-ac5e3e28ef-6d9f5cd5b3f7f100.elb.us-east-1.amazonaws.com:443/USER")
    .option("driver", "com.intersystems.jdbc.IRISDriver")
    .option("dbtable", "(SELECT TOP 3 name,category,review_point FROM SQLUser.scotch_reviews) AS temp_table;")
    .option("user", "SQLAdmin")
    .option("password", "REDACTED")
    .option("connection security level", "10")
    .option("sslConnection", "true")
    .load())

df.show()

mode = "overwrite"
properties = {
    "user": "SQLAdmin",
    "password": "REDACTED",
    "driver": "com.intersystems.jdbc.IRISDriver",
    "sslConnection": "true",
    "connection security level": "10",
}

df.write.jdbc(
    url="jdbc:IRIS://k8s-05868f04-a4909631-ac5e3e28ef-6d9f5cd5b3f7f100.elb.us-east-1.amazonaws.com:443/USER",
    table="databricks_scotch_reviews",
    mode=mode,
    properties=properties)
```

**Executing the Notebook**

Illustrating the data in InterSystems Cloud SQL!

**Things to Consider**

- By default, PySpark writes data using multiple concurrent tasks, which can result in partial writes if one of the tasks fails. To ensure that the write operation is atomic and consistent, you can configure PySpark to write data using a single task (i.e., set the number of partitions to 1) or use an IRIS-specific feature like transactions.
- Additionally, you can use PySpark's DataFrame API to perform filtering and aggregation operations before reading the data from the database, which can reduce the amount of data that needs to be transferred over the network.

Hello, I have 2 questions if you could help:

1) Do we need ";" at the end, or is it not required? `.option("dbtable", "(SELECT TOP 3 name,category,review_point FROM SQLUser.scotch_reviews) AS temp_table;")`

2) This JDBC works fine until I add one specific column in my query; when I add that column, I get the following error: `[%msg: < Input (;) encountered after end of query`

Kindly help.

No, I would leave out the semicolon at the end of that query. It's typically used as a statement separator, but it's not really part of the query syntax itself. IRIS (as of 2023.2) will tolerate it at the end of a statement, but it doesn't seem that Spark really does anything with it, as it wraps what you send to dbtable with further queries, causing the error you saw. You may also want to apply `.option("pushDownLimit", false)`.
Announcement
Derek Robinson · Jun 12

[Video] Connecting to InterSystems Cloud Services

Hi, Community! ⛅ Need to connect your application to InterSystems Cloud Services? Get a high-level overview of the process: Connecting to InterSystems Cloud Services. In this video, you will learn: How to connect with Python, Java, C++, or .NET. Key components for a connection and basic setup steps. The importance of TLS encryption.
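As a rough companion to the video, here is a minimal Python sketch using the Native API from the intersystems-irispython package (the video also covers Java, C++, and .NET). The host, port, namespace, and credentials below are placeholders, and for a Cloud Services deployment you would additionally configure TLS with the certificate downloaded from the deployment's overview page.

# Minimal sketch: connect to InterSystems IRIS from Python and touch a global.
# pip install intersystems-irispython
# All connection details are placeholders, not a real deployment.
import iris

conn = iris.connect("your-deployment.example.com", 1972, "USER", "SQLAdmin", "REDACTED")
native = iris.createIRIS(conn)   # Native API entry point over the connection

native.set("connected!", "DemoGlobal", "status")  # write ^DemoGlobal("status")
print(native.get("DemoGlobal", "status"))         # read it back: "connected!"

conn.close()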
Article
Kurro Lopez · Jun 25

InterSystems for dummies – Machine learning II

Previously, we trained our model using machine learning. However, the sample data we utilized was generated directly from insert statements. Today, we will learn how to load this data straight from a file.

Load Data

Before loading the data from your file, check which headers the fields have (see the small Python sketch at the end of this article for one way to inspect a CSV header). In this case, the file is called "Sleep_health_and_lifestyle_dataset.csv" and is located in the data/csv folder. This file contains 374 records plus a header (375 lines). The header includes the following names and positions:

Person ID
Gender
Age
Occupation
Sleep Duration
Quality of Sleep
Physical Activity Level
Stress Level
BMI Category
Systolic
Diastolic
Heart Rate
Daily Steps
Sleep Disorder

It is essential to know the names of the column headers. The class St.MLL.insomnia02 has different column names; therefore, when loading the data, we need to map each header name in the file to the corresponding column in the table.

LOAD DATA FROM FILE '/opt/irisbuild/data/csv/Sleep_health_and_lifestyle_dataset.csv'
INTO St_MLL.insomnia02 (Gender,Age,Occupation,SleepDuration,QualitySleep,PhysicalActivityLevel, StressLevel,BMICategory,Systolic,Diastolic,HeartRate,DailySteps,SleepDisorder)
VALUES ("Gender","Age","Occupation","Sleep Duration","Quality of Sleep","Physical Activity Level", "Stress Level","BMI Category","Systolic","Diastolic","Heart Rate","Daily Steps","Sleep Disorder")
USING {"from":{"file":{"header":true}}}

All the information makes sense, but… what is the last instruction?

{
  "from": {
    "file": {
      "header": true
    }
  }
}

This instruction tells the LOAD DATA command how to interpret the file (whether or not it has a header, whether the column separator is a different character, etc.). You can find more information about the JSON options by checking out the following links: LOAD DATA (SQL), LOAD DATA jsonOptions. Since the columns of the file do not match those in the table, it is necessary to indicate that the file has a header line, because by default this value is "false".

Now, we will train our model once more. With much more data in hand, it should be more accurate this time.

TRAIN MODEL insomnia01AllModel FROM St_MLL.insomnia02
TRAIN MODEL insomnia01SleepModel FROM St_MLL.insomnia02
TRAIN MODEL insomnia01BMIModel FROM St_MLL.insomnia02

Populate the St_MLL.insomniaValidate02 table with 50% of the St_MLL.insomnia02 rows:

INSERT INTO St_MLL.insomniaValidate02( Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic) SELECT TOP 187 Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic FROM St_MLL.insomnia02

Then populate the St_MLL.insomniaTest02 table, which we will use to test the models:

INSERT INTO St_MLL.insomniaTest02( Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic) SELECT TOP 50 Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic FROM St_MLL.insomnia02

Proceeding with our previous profile (a 29-year-old female nurse), we can check what prediction our test table will make.

Note: The following queries will be focused exclusively on this type of person.
SELECT *, PREDICT(insomnia01AllModel) FROM St_MLL.insomnia02 WHERE age = 29 AND Gender = 'Female' AND Occupation = 'Nurse'

SURPRISE!!! The result is identical to the one with less data. We thought that training our model with more data would improve the outcome, but we were wrong. For a change, I executed the probability query instead, and I got a pretty interesting result:

SELECT Gender, Age, SleepDuration, QualitySleep, SleepDisorder, PREDICT(insomnia01SleepModel) As SleepDisorderPrediction, PROBABILITY(insomnia01SleepModel FOR 'Insomnia') as ProbabilityInsomnia, PROBABILITY(insomnia01SleepModel FOR 'Sleep Apnea') as ProbabilityApnea FROM St_MLL.insomniaTest02 WHERE age = 29 AND Gender = 'Female' AND Occupation = 'Nurse'

According to the data (sex, age, sleep quality, and sleep duration), the probability of having insomnia is only 46.02%, whereas the chance of having sleep apnea is 51.46%. Our previous training provided us with the following percentages: insomnia - 34.63%, and sleep apnea - 64.18%. What does this mean? The more data we have, the more accurate the results we obtain.

Time Is Money

Now, let's try another type of training, using time series. Following the same steps we took to build the insomnia table, I created a class called WeatherBase:

Class St.MLL.WeatherBase Extends %Persistent
{
/// Date and time of the weather observation in New York City
Property DatetimeNYC As %DateTime;

/// Measured temperature in degrees
Property Temperature As %Numeric(SCALE = 2);

/// Apparent ("feels like") temperature in degrees
Property ApparentTemperature As %Numeric(SCALE = 2);

/// Relative humidity (0 to 1)
Property Humidity As %Numeric(SCALE = 2);

/// Wind speed in appropriate units (e.g., km/h)
Property WindSpeed As %Numeric(SCALE = 2);

/// Wind direction in degrees
Property WindBearing As %Numeric(SCALE = 2);

/// Visibility distance in kilometers
Property Visibility As %Numeric(SCALE = 2);

/// Cloud cover fraction (0 to 1)
Property LoudCover As %Numeric(SCALE = 2);

/// Atmospheric pressure in appropriate units (e.g., hPa)
Property Pressure As %Numeric(SCALE = 2);
}

Then, I built two classes extending WeatherBase (Weather and WeatherTest). This allowed me to have the same columns in both tables. There is a file named "NYC_WeatherHistory.csv" in the csv folder. It contains the temperature, humidity, wind speed, and pressure for New York City in 2015. That is a wealth of data! We will load this file into our table using what we just learned about loading data from a file.

LOAD DATA FROM FILE '/opt/irisbuild/data/csv/NYC_WeatherHistory.csv'
INTO St_MLL.Weather (DatetimeNYC,Temperature,ApparentTemperature,Humidity,WindSpeed,WindBearing,Visibility,LoudCover,Pressure)
VALUES ("DatetimeNYC","Temperature","ApparentTemperature","Humidity","WindSpeed","WindBearing","Visibility","LoudCover","Pressure")
USING {"from":{"file":{"header":true}}}

📣NOTE: The names of the columns and the fields in the table are the same; therefore, we can use the following statement instead.

LOAD DATA FROM FILE '/opt/irisbuild/data/csv/NYC_WeatherHistory.csv'
INTO St_MLL.Weather
USING {"from":{"file":{"header":true}}}

Now we will create our model, but we will do it in a particular way.

CREATE TIME SERIES MODEL WeatherForecast PREDICTING (Temperature, Humidity, WindSpeed, Pressure) BY (DatetimeNYC) FROM St_MLL.Weather USING {"Forward":3}

If we wish to create a prediction series, we should take into account the recommendations below:

The date field must be a datetime.
Try to sort the data chronologically.
📣NOTE: This advice comes from Luis Angel Perez, thanks to his great experience in Machine Learning.

The last clause, USING {"Forward":3}, sets the timesteps for the time series. The USING clause accepts several parameters:

forward specifies the number of timesteps in the future that you would like to forecast, as a positive integer. Forecasted rows will appear after the latest time or date in the original dataset. You may specify both this and the backward setting simultaneously. Example: USING {"Forward":3}

backward defines the number of timesteps in the past that you would like to predict, as a positive integer. Forecasted rows will appear before the earliest time or date in the original dataset. Remember that you can indicate both this and the forward setting at the same time. The AutoML provider ignores this parameter. Example: USING {"backward":5}

frequency determines both the size and unit of the predicted timesteps, as a positive integer followed by a letter that denotes the unit of time. If this value is not specified, the most common timestep in the data is used. Example: USING {"Frequency":"d"}. This parameter is case-insensitive. The letter abbreviations for units of time are outlined in the following table:

| Abbreviation | Unit of Time |
| --- | --- |
| y | year |
| m | month |
| w | week |
| d | day |
| h | hour |
| t | minute |
| s | second |

Now… training. You already know the command for that:

TRAIN MODEL WeatherForecast

Be patient! This training took 1391 seconds, which is approximately 23 minutes!!!!

Now, populate the St_MLL.WeatherTest table with the Populate command:

Do ##class(St.MLL.WeatherTest).Populate()

It includes the first 5 days of January 2025. When completed, select the prediction using the model and the test table.

📣Remember: It is crucial to have at least three values to be able to make a forecast.

SELECT WITH PREDICTIONS (WeatherForecast) * FROM St_MLL.WeatherTest

Well, it is showing us the forecast for the next 3 hours on January 2, 2025. This happens because we defined our model to forecast 3 records ahead. However, our data model has data for every hour of every day (00:00, 01:00, 02:00, etc.). If we want to see the daily outlook, we should create another model trained by day. Let's create the following model to see the 5-day forecast.

CREATE TIME SERIES MODEL WeatherForecastDaily PREDICTING (Temperature, Humidity, WindSpeed, Pressure) BY (DatetimeNYC) FROM St_MLL.Weather USING {"Forward":5, "Frequency":"d"}

Now, repeat the same steps… training and displaying the forecast:

TRAIN MODEL WeatherForecastDaily

SELECT WITH PREDICTIONS (WeatherForecastDaily) * FROM St_MLL.WeatherTest

Wait! This time, it throws the following error:

[SQLCODE: <-400>:<Fatal error occurred>][%msg: <PREDICT execution error: ERROR #5002: ObjectScript error: <PYTHON EXCEPTION> *<class 'ValueError'>: forecast_length is too large for training data. What this means is you don't have enough history to support cross validation with your forecast_length. Various solutions include bringing in more data, alter min_allowed_train_percent to something smaller, and also setting a shorter forecast_length to class init for cross validation which you can then override with a longer value in .predict() This error is also often caused by errors in inputing of or preshaping the data. Check model.df_wide_numeric to make sure data was imported correctly. >]

What happened? As the error says, it is due to a lack of data to make the prediction.
You might think that the Weather table needs more data and more training, but it already has 8760 records… so what is wrong? If we want to forecast the weather a large number of days ahead, we need a lot of data in the model, and training on all of that data requires extensive time and a very powerful PC. Therefore, since this is a basic tutorial, we will build a model for 3 days only.

Don't forget to remove the WeatherForecastDaily model before following the instructions.

DROP MODEL WeatherForecastDaily

I am not going to include all the images of those changes, but I will give you the instructions on what to do:

CREATE TIME SERIES MODEL WeatherForecastDaily PREDICTING (Temperature, Humidity, WindSpeed, Pressure) BY (DatetimeNYC) FROM St_MLL.Weather USING {"Forward":3, "Frequency":"d"}

TRAIN MODEL WeatherForecastDaily

SELECT WITH PREDICTIONS (WeatherForecastDaily) * FROM St_MLL.WeatherTest

Important Note

The Docker container containers.intersystems.com/intersystems/iris-community-ml:latest-em is no longer available, so you have to use the iris-community container. This container is not initialized with the AutoML configuration, so the following statement needs to be executed first:

pip install --index-url https://registry.intersystems.com/pypi/simple --no-cache-dir --target /usr/irissys/mgr/python intersystems-iris-automl

If you are using a Dockerfile to deploy your Docker image, remember to add the commands below to the deployment instructions:

ARG IMAGE=containers.intersystems.com/intersystems/iris-community:latest-em
FROM $IMAGE

USER root

WORKDIR /opt/irisbuild

RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisbuild

RUN pip install --index-url https://registry.intersystems.com/pypi/simple --no-cache-dir --target /usr/irissys/mgr/python intersystems-iris-automl

For more information, please visit the website below:
https://docs.intersystems.com/iris20251/csp/docbook/DocBook.UI.Page.cls?KEY=GIML_Configuration_Providers#GIML_Configuration_Providers_AutoML_Install
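Finally, as promised earlier: a small standard-library Python sketch for checking a CSV file's header and record count before running LOAD DATA. The path matches the dataset used in this article; this is just one convenient way to inspect the file.

# Print the header row and record count of the CSV before LOAD DATA,
# so its column names can be mapped onto the table's fields.
import csv

path = "/opt/irisbuild/data/csv/Sleep_health_and_lifestyle_dataset.csv"
with open(path, newline="") as f:
    reader = csv.reader(f)
    header = next(reader)             # first line: the column names
    records = sum(1 for _ in reader)  # remaining lines: the data records

print(f"{len(header)} columns: {header}")
print(f"{records} data records")      # expected: 374, per this article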
Article
Irène Mykhailova · Jun 27

Second half of the InterSystems Ready 2025

Hi Community, While writing an article yesterday, I realized I was so busy with people who came to the Developer Community table at the Tech Exchange that I forgot to take photos for you. Luckily, I realized the error of my ways and corrected my behavior accordingly 😉 So, let's look at what happened on Tuesday at InterSystems Ready 2025! It began with a speech by Scott Gnau about the approach and architecture of the InterSystems Data Platform and how it differs from all other DBMSs: Afterwards, @Tom.Woodfin and Peter Lesperance dove into the details of using the newest IRIS features in Epic: Then, @Gokhan.Uluderya talked about data in AI and how important it is to have good data to be able to apply GenAI or Vector Search to it: @Jeffrey.Fried picked up this topic and went into more detail about the InterSystems GenAI strategy: Daniel Franko summed up the tools that are available to developers of IRIS for Health: After lunch, most of the participants went on to the sessions or the Tech Exchange. For example, @Raj.Singh5479 dropped by our table, and we talked about the current Ideas Contest. @Henrique, @henry, @Dean.Andrews2971 and @Guilherme.Silva came up to us as well: @Lorenzo.Scalese, @Dean.Andrews2971, @DKG @Sergei.Shutov3787, @Anastasia.Dyubaylo, @Vishal.Pallerla @Iryna.Mykhailova, @Anastasia.Dyubaylo, @Robert.Kuszewski @Henrique, @Benjamin.DeBoe, @Anastasia.Dyubaylo, @Enrico.Parisi, @henry, @Iryna.Mykhailova, @José.Pereira The "Musketeers" (@henry, @Henrique, @José.Pereira) with @Anastasia.Dyubaylo @Dean.Andrews2971, Mariam Makhmutova, @Anastasia.Dyubaylo, @DKG @Muhammad.Waseem, @Guillaume.Rongier7183, @Anastasia.Dyubaylo, @Oliver.Wilms @DKG, @Anastasia.Dyubaylo, @Benjamin.Spead, @tomd This year, to make it more interesting, we prepared a special challenge for our developer guests - a quiz from Global Masters! So, here are @Derek.Robinson, @Myles.Collins and @Patrick.Sulin7198 trying to get all 5 answers correct: In the next article, you will learn who beat the challenge! While there was all this excitement at the Developer Community table, there were presentations on the big screen in the Tech Exchange, for example from @Brett.Saviano about VS Code: And presentations at the smaller tables, for example, from @Guillaume.Rongier7183: Outside the Tech Exchange, the startups were giving their presentations. For example, SerenityGPT, which created our wonderful DC AI Bot and DC AI Chat: And in the evening, we went to the Universal CityWalk and were treated to a concert by Integrity Check, which was a blast! After the concert, we had the pleasure of the company of the guitar player, aka @Randy.Pallotta. Afterwards, I went roaming and met up with @Dean.Andrews2971, @Adeline.Icard, @Anastasia.Dyubaylo, and @Guillaume.Rongier7183: We finished the day with a rousing game of table shuffleboard - ladies (@Adeline.Icard, @Anastasia.Dyubaylo, and me) vs gentlemen (@Guillaume.Rongier7183, @Jeffrey.Fried, @Eduard.Lebedyuk). Guess in the comments who won 😁 All in all, we had a wonderful time at Universal, which was a great end to a great day.
Article
Irène Mykhailova · Jun 25

First half of the InterSystems Ready 2025

Hi Community! I'm super excited to be your on-the-ground reporter for the biggest developer event of the year - InterSystems Ready 2025! As you may know from previous years, our global summits are always exciting, exhilarating, and packed with valuable knowledge, innovative ideas, and exciting news from InterSystems. This year is no different. But let's not get ahead of ourselves and start from the beginning. Pre-summit day was, as usual, filled with fun and educational experiences. Those who enjoy playing golf (I among them) got up at the crack of dawn to tee off before the sun got too high. Here's our dream team in action: @sween, @Mark.Bolinsky, @Anzelem.Sanyatwe, @Iryna.Mykhailova If you're interested, here are the results (but to save you the suspense, we didn't win 😭): The other group of sports enthusiasts went to play football (AKA soccer). And those who are differently inclined attended the workshops planned for Sunday: AI-enabling your applications with InterSystems IRIS Discovering InterSystems products: a high-level overview Get ready to build with FHIR in InterSystems: visualizing data as FHIR resources From FHIR to insights: analytics with FHIRPath, SQL Builder, and Pandas Ready Startup Forum: insights, innovations & investment with InterSystems Yet another exciting yearly pre-summit event was the Women's meet-up and reception. Unfortunately, after playing 18 hot and humid holes, I didn't have enough time to make myself presentable before it began. Anyway, everyone was ready to begin InterSystems Ready 2025 with a bang and turned up at the Welcome Reception on time! Let me share a secret - it's always a highlight of the event to meet friends and colleagues after a long pause. @Iryna.Mykhailova, @Johan.Jacob7942, @Lorenzo.Scalese, @Adeline.Icard, @Guillaume.Rongier7183 And on Monday, the main event began with the keynote presentation from Terry Ragon, CEO & Founder of InterSystems. He offered a warm welcome, highlighting InterSystems' dedication to creating technology that truly matters during a time of fast change. He discussed the great promise of AI and data platforms to enhance healthcare and emphasized the importance of making a tangible difference, rather than merely following trends. Later on, there was a panel discussion on the future of healthcare, moderated by Jennifer Eaton, between @Donald.Woodlock, Scott Gnau, and Tim Ferris. Right before lunch was the best presentation of the day! And it was the best because it mentioned the Developer Community. To share the excitement of it with you, here's a short clip from it: And to make your day, here are a couple of photos of one of the presenters, @Randy.Pallotta. The AI did a good job, or did it 😁 Anyway, after lunch, our Developer Community booth at the Tech Exchange was ready to roll. All our cool prizes and games were out, ready to amaze and entertain our guests! And they soon came. At the same time, in the hallway outside the Tech Exchange, the startups were doing their presentations. Here's a photo from the SerenityGPT presentation about their software, which utilizes IRIS Vector Search to maximize the potential of clinical data. And all the while, there were interesting presentations and use cases of InterSystems technology from InterSystems colleagues and guests: Moreover, there's a big screen for presentations in the Tech Exchange, so don't miss it! This very long and exciting day ended on a really high note - the Ready Games at Demos and Drinks!
There were many great demos from which the guests had to choose the winners: two runners-up in each category and two winners, for Most Innovative and Most Likely to Use. By the way, the winners of the Most Likely to Use category are from Lead North, who brought the coolest stickers ever with them: So, if you're at Ready 2025 and haven't yet picked up a cute sticker, don't miss your chance to get one (or more) and to talk to @Andre and his colleagues! Swing by the Partner Pavilion (which starts outside the Tech Exchange), and you will definitely find something you like. That's it for the first day and a half of Ready 2025. Look out for a new recap of the rest of it tomorrow. And let me tell you, it is unforgettable!
Article
Irène Mykhailova · Jun 28

The last day of the InterSystems Ready 2025

Hey Community! Here's the recap of the final half-day of InterSystems Ready 2025! It was the last chance to see everyone and say farewell until next time. It was a warm and energetic closing, with great conversations, smiles, and unforgettable memories! The final Ready 2025 moment with our amazing team! And, of course, let's say a huge THANK YOU to the godmother of Ready 2025, @Maureen.Flaherty! You and your team are the best! Here we are together with @Enrico.Parisi. @Patrick.Sulin7198 dropped by the Developer Community table: And @Yuri.Gomes. Caught @Scott.Roth outside the Tech Exchange. And @Sergei.Shutov3787. My golf buddy @Anzelem.Sanyatwe also came to spin the wheel of fortune. And Luc Chatty dropped by. We went to visit the source of the great ribbons. Here are @Iryna.Mykhailova, @Macey.Minor3011, @Andre, @Anastasia.Dyubaylo It was also time for the winners of the AI Programming Contest to present their AI agentic applications! @Sergei.Shutov3787 talked about AI Agents as First-Class Citizens in InterSystems IRIS: @Eric.Fortenberry presented "A Minimalist View of AI: Exploring Embeddings and Vector Search with EasyBot": @Yuri.Gomes spoke about Natural Language Control of IRIS: @Muhammad.Waseem talked about the next generation of autonomous AI agentic applications: @henry, @Henrique, and @José.Pereira were hidden by all the people who came to listen to the "Command the Crew - create an AI crew to automate your work" presentation: @Victor.Naroditskiy explained how the Developer Community AI works: Also, at the other tables, people gave other presentations. For example, @Guillaume.Rongier7183 talked about Python: Let's leave the Tech Exchange and see what was going on at the DC sessions. @Benjamin.Spead, @Hannah.Sullivan, @Victor.Naroditskiy, and @Dean.Andrews2971 talked about using SerenityGPT to build GenAI middleware: And, of course, the main session of Ready 2025 - InterSystems Developer Ecosystem: new resources and tools you need to know. @Dean.Andrews2971 and @Anastasia.Dyubaylo gave an overview of all the updates to the DC Ecosystem: Afterwards, @David.Reche checked how attentively everyone was listening by leading a Kahoot! game. Please welcome the winners: @Vishal.Pallerla, @Rochael.Ribeiro and @Jason.Morgan. Congratulations! We hope you enjoy your prizes! @Juliana.MatsuzakiModesto, @DKG, @Rochael.Ribeiro, @Katia.Neves, @Anastasia.Dyubaylo, @Dean.Andrews2971, @Enrico.Parisi, @Vishal.Pallerla, @Eduard.Lebedyuk On this happy note, I promised last time to tell you who was the only verified person to answer all the quiz questions correctly at the DC table. And it was @Asaf.Sinay! Congratulations! @Olga.Zavrazhnova2637 and the whole Global Masters team are happy that so many people came and tried their hand at it. If you're interested in taking the quiz, here's the link. And if you want to answer more quiz questions, you can find them on Global Masters! Talking about Global Masters and quizzes, you can't skip the most popular reward 😁 No summit goes by without someone showing me their Developer Community socks 🤣 As you can see, the Brazilian DC team is very happy: @Rochael.Ribeiro, @Juliana.MatsuzakiModesto. @Danusa.Ferreira and @Heloisa.Paiva, we really missed you - with you, the Portuguese Developer Community team would've been complete! This was almost the end of Ready 2025, and it's the end of my story. The rumor is that the next summit will take place in April in Washington, D.C. Put it in your calendar so you don't double-book - you know you want to! See you next year!
Article
Ashok Kumar T · Jun 30

FHIR Interoperability in InterSystems IRIS for Health

Overview

Fast Healthcare Interoperability Resources (FHIR) is a standardized framework developed by HL7 International to facilitate the exchange of healthcare data in a flexible, developer-friendly, and modern way. It leverages contemporary web technologies to ensure seamless integration and communication across healthcare systems.

Key FHIR Technologies

RESTful APIs for resource interaction
JSON and XML for data representation
OAuth2 for secure authorization and authentication

FHIR is structured around modular components called resources, each representing a specific healthcare concept, including the following:

Patient – Demographics and identifiers
Observation – Clinical measurements (e.g., vitals, labs)
Encounter – Patient-provider interactions
Medication, AllergyIntolerance, Condition, etc.

Resources are individually defined and can reference other resources to form a comprehensive data model.

InterSystems IRIS for Health: FHIR Support

InterSystems IRIS for Health is a unified data platform designed specifically for healthcare. It includes native HL7 FHIR support and provides built-in tools and services enabling the storage, retrieval, transformation, and exchange of FHIR resources. IRIS enhances system interoperability with three major FHIR-handling components:

1. FHIR Repository Server

IRIS enables rapid deployment of FHIR-compliant servers, with support for the following:

The complete FHIR paradigm
Implementation of FHIR RESTful APIs, including search and query parameters
Importing and utilizing FHIR packages and structure definitions
Working with FHIR profiles
Native CRUD operations on FHIR resources
Retrieval of FHIR data in JSON or XML formats
Support for multiple FHIR versions
FHIR SQL Builder and bulk FHIR handling capabilities

2. FHIR Facade Layer

The FHIR facade layer is a software architecture pattern used to expose a FHIR-compliant API on top of an existing (often non-FHIR) system, such as an electronic health record (EHR), legacy database, or HL7 v2 message store, without migrating all the data into a FHIR-native system. This implementation specifically centers around the FHIR Interoperability Adapter.

3. FHIR Interoperability Adapter

InterSystems IRIS for Health offers high flexibility and fine-grained control for transforming healthcare message standards such as HL7 V2.x and C-CDA into FHIR, and vice versa (see the message conversion diagram). However, not all FHIR implementations require a dedicated FHIR repository server. To support such scenarios, IRIS for Health includes an interoperability adapter toolkit that enables detailed message conversion without the need for a FHIR server. This adapter can handle a variety of external requests (e.g., REST or SOAP APIs) from external systems, transform them into FHIR format, and route them to downstream systems, without necessarily persisting the data to a database. Alternatively, if needed, the adapter can transform and store the data in the database. It effectively provides an external interface layer that allows a non-FHIR database to behave as if it were a FHIR server, enabling seamless interoperability.

Message conversion

SDA: Summary Document Architecture

The Summary Document Architecture (SDA) is InterSystems' intermediary XML-based format used to represent patient data internally within IRIS and HealthShare products.
This powerful native data structure enables you to access discrete data and easily convert between multiple data formats, including HL7 V2, CCDA, C32, HL7 FHIR, and others.

SDA Structure

The SDA is primarily divided into two main components:

Container – The top-level structure containing one or more sections
Sections – Representations of specific healthcare elements (e.g., Patient, Encounter, AllergyIntolerance)

Container

The container is the top level of the SDA standard, and it includes multiple sections (e.g., Patient, Encounter, AllergyIntolerance, and others). Let's explore the internal structure of the SDA and its components.

Class definition of the Container: The HS.SDA3.Container class serves as the primary definition for representing an SDA document. Various sections, such as Patient and Encounter, are defined as objects and included as properties within this class.

Sections

A section is a discrete piece of a container element, represented as an IRIS class definition with the relevant data elements of the container:

Patient – HS.SDA3.Patient
Encounter – HS.SDA3.Encounter
Allergy – HS.SDA3.Allergy

SDA Container Structure

The XML structure below represents an entire SDA container.

<Container>
  <Patient/>
  <Encounters/>
  <Encounters/>
  <AdvanceDirectives/>
</Container>

SDA Data Types

The FHIR data type formats differ from the standard IRIS data types, so SDA defines custom data type classes that handle the properties in sections more effectively than standard properties such as %String, %Integer, or %Stream (although standard properties are also used in SDA sections). These data type classes are defined inside the HS.SDA3* package:

HS.SDA3.Name
HS.SDA3.CodeTableDetail.Allergy
HS.SDA3.PatientNumber
HS.SDA3.TimeStamp

SDA Extension

In most cases, the SDA has sufficient properties to manage and generate all the data coming through the system to develop a resource. However, if you need to accommodate additional data as part of your implementation, IRIS provides a straightforward way to extend the SDA via extension classes. For example, the HS.Local.SDA3.AllergyExtension class is the extension class for HS.SDA3.Allergy. You can add the necessary data elements to this extension class, simplifying access and manipulation throughout your implementation.

The next step is to create a container object.

Create a container object

ClassMethod CreateSDAContainer()
{
    set SDAContainer = ##class(HS.SDA3.Container).%New()
    #; create patient object
    set patientSDA = ##class(HS.SDA3.Patient).%New()
    set patientSDA.Name.FamilyName = "stood"
    set patientSDA.Name.GivenName = "test"
    set patientSDA.Gender.Code="male"
    set patientSDA.Gender.Description="birth gender"
    #; create Encounter 1
    set encounterSDA = ##class(HS.SDA3.Encounter).%New()
    set encounterSDA.AccountNumber = 12109979
    set encounterSDA.ActionCode ="E"
    set encounterSDA.AdmitReason.Code ="Health Concern"
    set encounterSDA.AdmitReason.Description = "general health concern"
    #; create Encounter 2
    set encounterSDA1 = ##class(HS.SDA3.Encounter).%New()
    set encounterSDA1.AccountNumber = 95856584
    set encounterSDA1.ActionCode ="D"
    set encounterSDA1.AdmitReason.Code ="regular checkup"
    set encounterSDA1.AdmitReason.Description = "general health checkup"
    #; set the patientSDA into the container.
    set SDAContainer.Patient = patientSDA
    #; set multiple encounters into the container SDA
    do SDAContainer.Encounters.Insert(encounterSDA)
    do SDAContainer.Encounters.Insert(encounterSDA1)
    #; convert the SDA object into an XML string.
    do SDAContainer.XMLExportToString(.containerString)
    write containerString
}

SDA – XML Document Output

<Container>
<Patient>
<Name>
<FamilyName>stood</FamilyName>
<GivenName>test</GivenName>
</Name>
<Gender>
<Code>male</Code>
<Description>birth gender</Description>
</Gender>
</Patient>
<Encounters>
<Encounter>
<AccountNumber>12109979</AccountNumber>
<AdmitReason>
<Code>Health Concern</Code>
<Description>general health concern</Description>
</AdmitReason>
<ActionCode>E</ActionCode>
</Encounter>
<Encounter>
<AccountNumber>95856584</AccountNumber>
<AdmitReason>
<Code>regular checkup</Code>
<Description>general health checkup</Description>
</AdmitReason>
<ActionCode>D</ActionCode>
</Encounter>
</Encounters>
<UpdateECRDemographics>true</UpdateECRDemographics>
</Container>

In the previous section, we discussed the SDA and its components, and we learned how to generate the SDA via Caché ObjectScript. Next, we will generate a FHIR resource or Bundle using an Interoperability production (formerly known as Ensemble). Let's briefly review interoperability productions before creating a FHIR resource.

Interoperability Production with the FHIR Adapter

An interoperability production is an integration framework for connecting systems and developing applications for interoperability with ease. It is typically divided into three major components:

Business service – Connects to the external system and receives requests from it.
Business process – Receives a request from the other business hosts, processes it based on your defined business logic, and converts the relevant data. Multiple components can be used to convert the data: BPL – Business Process Language; DTL – Data Transformation Language; BR – Business Rules; Record Mapping.
Business operation – Connects with the external system and sends the response to it.

Let's begin the process of constructing a FHIR message.

Create FHIR Resource

There are two types of systems: FHIR servers and non-FHIR servers. In our case, we aim to make a non-FHIR InterSystems IRIS database appear as a FHIR-compliant system by generating FHIR resources using the FHIR interoperability adapters. In this section, we will demonstrate how to generate FHIR resources from custom data stored in the IRIS database with the help of the InterSystems IRIS for Health interoperability toolkit with FHIR adapters. As a part of this implementation, we will create the following types of FHIR resources:

Standard FHIR resource – Utilizes the built-in FHIR classes with minimal or no modifications.
Custom FHIR resource – Involves adding extensions to the SDA model and creating a custom data transformation (DTL) for the FHIR resource.

Each implementation will be initiated through dedicated business hosts.

Business Service

The RESTful business host is responsible for receiving requests from external systems. You may configure the appropriate adapter based on your specific integration requirements (e.g., HTTP, SOAP, or other supported protocols). Upon receiving a request from the external system, the workflow generates a corresponding FHIR resource using data persisted in the custom or legacy database.
FHIR Business Process

The FHIR message generation process involves two primary steps:

Transform custom/proprietary data into SDA (HL7 version 2.x to SDA, CCDA to SDA, etc.). Add data elements to the SDA and, if required, create a custom DTL. These steps are optional and depend on specific implementation needs, e.g., custom FHIR resource generation.
Then, convert the generated SDA into a FHIR resource with the help of the IRIS built-in process.

The Summary Document Architecture (SDA) format serves as an intermediary, enabling flexible data transformation. Once the data is available in SDA format, it can be easily mapped to FHIR or other healthcare data standards.

Converting Custom/Proprietary Data to SDA Format

In this approach, begin by creating a persistent or interoperability request class to facilitate the transformation into SDA. It involves defining a custom patient class that maps data from your legacy or custom database structure into SDA-compliant objects. Utilizing a custom patient class provides significant flexibility:

It simplifies object handling and manipulation.
It enables clean mapping in Data Transformation Language (DTL).
It allows effortless reuse of the object in other transformation or business logic layers.

The request class for the external layer that initiates the conversion to SDA:

Class Samples.FHIRAdapt.CustomStorage.Patient Extends (Ens.Request, %JSON.Adaptor)
{
Property Name As %String;
Property BirthDate As %String;
Property Citizenship As %String;
Property Religion As %String;
Property PrimaryLanguage As %String;
Property Married As %String;
Property MRN As %String;
}

This request class serves as the external interface layer, initiating the conversion process from your database format into SDA. Once the SDA object is created, it can be seamlessly transformed into the desired FHIR resource via standard or custom DTL mappings:

Add the Samples.FHIRAdapt.CustomStorage.Patient class (use your own class definition) as the source class for the transformation.
Identify and select the appropriate SDA target class for mapping. In this case, HS.SDA3.Patient is a suitable class for transforming custom data into the SDA format.

Sample DTL conversion (here the source Name has the form "Family,Given Middle", so FamilyName is the first comma piece, while GivenName and MiddleName are words of the second piece):

Class Samples.FHIRAdapt.DTL.CustomDataToPatientSDA Extends Ens.DataTransformDTL [ DependsOn = (Samples.FHIRAdapt.CustomStorage.Patient, HS.SDA3.Patient) ]
{
Parameter IGNOREMISSINGSOURCE = 1;
Parameter REPORTERRORS = 1;
Parameter TREATEMPTYREPEATINGFIELDASNULL = 0;

XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
{
<transform sourceClass='Samples.FHIRAdapt.CustomStorage.Patient' targetClass='HS.SDA3.Patient' create='new' language='objectscript' >
<assign value='$Piece($Piece(source.Name,",",2)," ")' property='target.Name.GivenName' action='set' />
<assign value='$Piece(source.Name,",")' property='target.Name.FamilyName' action='set' />
<assign value='$Piece($Piece(source.Name,",",2)," ",2)' property='target.Name.MiddleName' action='set' />
<assign value='source.Citizenship' property='target.Citizenship' action='set' />
<assign value='"fullname"' property='target.Name.Type' action='set' />
<assign value='$Select(source.Married=1:"married",1:"single")' property='target.MaritalStatus.Code' action='set' />
</transform>
}
}

At this stage, the data has been successfully transformed into an SDA document and is ready for conversion into a FHIR resource. Before generating the FHIR resource, additional supporting FHIR resources should be created as a part of this response. Besides, the custom fields need to be included in the FHIR output.
To support these custom elements, the corresponding properties must be incorporated into the SDA structure. This can be accomplished with the help of SDA extensions, which enable the inclusion of the custom data elements required for accurate and complete FHIR resource generation.

SDA Extension

FHIR follows the 80/20 rule: the core FHIR specification covers approximately 80% of common healthcare use cases, while the remaining 20% are addressed through custom constraints and extensions. To illustrate this, we will create an AllergyIntolerance resource with custom extensions. There are two key steps for the proper implementation of extension data elements in InterSystems IRIS:

A class of the form HS.Local.SDA3.<Section>Extension is used to add extra data elements to the corresponding SDA section. For example, the class HS.Local.SDA3.AllergyExtension extends HS.SDA3.Allergy by defining the required custom properties.
Since the pre-built DTL mappings do not include these custom extensions, you must create a custom DTL to handle the transformation accordingly.

Allergy Extension Class

To build the required fields in the HS.Local.SDA3.AllergyExtension class for creating the required allergy resource, use the following code:

Class HS.Local.SDA3.AllergyExtension Extends HS.SDA3.DataType
{
Parameter STREAMLETCLASS = "HS.SDA3.Streamlet.Allergy";

/// Mapped this property due to not being available in the SDA to FHIR conversion
Property Criticality As %String;

/// Mapped this property due to not being available in the SDA to FHIR conversion
Property Type As %String(MAXLEN = "");

Storage Default
{
<Data name="AllergyExtensionState">
<Subscript>"AllergyExtension"</Subscript>
<Value name="1">
<Value>Criticality</Value>
</Value>
<Value name="2">
<Value>Type</Value>
</Value>
</Data>
<State>AllergyExtensionState</State>
<Type>%Storage.Serial</Type>
}
}

Creating the extension is only half of the job, because the standard DTL has no mapping for the extension fields. Now, we have to construct a custom DTL to transform the FHIR response properly.

Custom DTL Creation

Before customizing DTL classes, you need to define a dedicated package for all of your custom DTL implementations. For this, InterSystems recommends using the package called HS.Local.FHIR.DTL. To build a custom DTL for Allergy, start with the existing data transformation class HS.FHIR.DTL.SDA3.vR4.Allergy.AllergyIntolerance, which handles the conversion from SDA to FHIR resources. First, make a copy of this class into your custom package as HS.Local.FHIR.DTL.SDA3.vR4.Allergy.AllergyIntolerance. Then, extend it by mapping your custom extensions into the FHIR resource generation process. The sample class below demonstrates how to map the Allergy extension fields while inheriting all other mappings from the base class HS.FHIR.DTL.SDA3.vR4.Allergy.AllergyIntolerance.
Sample custom DTL mapping is illustrated below:

/// Transforms SDA3 HS.SDA3.Allergy to vR4 AllergyIntolerance
Class HS.Local.FHIR.DTL.SDA3.vR4.Allergy.AllergyIntolerance Extends Ens.DataTransformDTL [ DependsOn = (HS.SDA3.Allergy, HS.FHIR.DTL.vR4.Model.Resource.AllergyIntolerance), ProcedureBlock ]
{
XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
{
<transform sourceClass='HS.SDA3.Allergy' targetClass='HS.FHIR.DTL.vR4.Model.Resource.AllergyIntolerance' create='existing' language='objectscript' >
<assign value='source.Extension.Criticality' property='target.criticality' action='set' />
<assign value='source.Extension.Type' property='target.type' action='set' >
<annotation>11/07/2023; ak; Added this set to populate type in AllergyIntolerance resource</annotation>
</assign>
</transform>
}
}

Once you have created your custom DTL package (in case it does not already exist), you must register it so that it is used for future FHIR data transformations.

set status = ##class(HS.FHIR.DTL.Util.API.ExecDefinition).SetCustomDTLPackage("HS.Local.FHIR.DTL")

Furthermore, you can obtain the currently registered custom DTL package (if already defined) by calling this class method:

Write ##class(HS.FHIR.DTL.Util.API.ExecDefinition).GetCustomDTLPackage()

Stream Container Class for the Request Message

The setup of the SDA and its optional SDA extension, along with the optional creation of a custom DTL, is now complete. However, the SDA object must now be converted into a standardized Ens.StreamContainer, the message format used in the SDA-to-FHIR conversion business process. Here are the simple steps to convert the SDA object to an Ens.StreamContainer.

ClassMethod CreateEnsStreamContainer()
{
    set ensStreamCntr=""
    try {
        #; refer to the CreateSDAContainer() method above
        #dim SDAContainer As HS.SDA3.Container = ..CreateSDAContainer()
        do SDAContainer.XMLExportToStream(.stream)
        #; Ens.StreamContainer is the expected message format for the SDA-to-FHIR business process
        Set ensStreamCntr = ##class(Ens.StreamContainer).%New(stream)
    } catch ex {
        Write ex.DisplayString()
        set ensStreamCntr=""
    }
    return ensStreamCntr
}

The first phase, SDA creation, is concluded. The second phase, generating the FHIR resource, is already handled by InterSystems IRIS. The following section demonstrates how to convert an SDA document into a FHIR resource.

SDA to FHIR Transformation

Configure Interoperability Business Hosts for FHIR Creation

The business logic for FHIR generation is finalized. Now, let's configure the Interoperability production setup:

Set up your inbound service to receive requests from the external system.
Set up the business process - it is the crucial step in creating the FHIR resource.

Business Process Implementation

This business process focuses on the SDA-to-FHIR transformation. InterSystems IRIS includes a comprehensive built-in business process, HS.FHIR.DTL.Util.HC.SDA3.FHIR.Process, that facilitates the transformation of the SDA into a FHIR message. By sending the generated SDA document to this business process, you receive a FHIR resource as a JSON response. The process supports two types of FHIR responses based on the SDA input:

Bundle – when an entire SDA container object is sent as an Ens.StreamContainer, the process returns a FHIR bundle with all resources.
Resource – when an individual SDA section (e.g., Patient, Encounter, Allergy) is sent as an Ens.StreamContainer, it returns the corresponding single FHIR resource as a bundle.
Business Operation

The FHIR Bundle is now ready to be returned to the requester or sent to an external system.

Production settings:

Business Service Class

The business service class handles incoming requests from the external system to generate the FHIR resource. Upon receiving a request, it creates the SDA using the existing logic. The SDA is then converted into a stream object. This stream is wrapped in the format expected by the standard business process. Finally, the processed input is sent to the business process.

Class Samples.Interop.BS.GenerateFHIRService Extends Ens.BusinessService
{
Parameter ADAPTER = "Ens.InboundAdapter";

Property TargetConfigName As Ens.DataType.ConfigName [ InitialExpression = "HS.FHIR.DTL.Util.HC.SDA3.FHIR.Process" ];

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    #; create your SDA container object and export to stream
    do ..CreateSDAContainer().XMLExportToStream(.sdaStream)
    #; convert to the standard Ens.StreamContainer message format
    set ensStreamCtnr = ##class(Ens.StreamContainer).%New(sdaStream)
    #; send to the Business process
    do ..SendRequestSync(..TargetConfigName,ensStreamCtnr,.pOutput)
    Quit $$$OK
}

ClassMethod CreateSDAContainer() As HS.SDA3.Container
{
    set SDAContainer = ##class(HS.SDA3.Container).%New()
    #; create patient object
    set patientSDA = ##class(HS.SDA3.Patient).%New()
    set patientSDA.Name.FamilyName = "stood"
    set patientSDA.Name.GivenName = "test"
    set patientSDA.Gender.Code="male"
    set patientSDA.Gender.Description="birth gender"
    #; create Encounter 1
    set encounterSDA = ##class(HS.SDA3.Encounter).%New()
    set encounterSDA.AccountNumber = 12109979
    set encounterSDA.ActionCode ="E"
    set encounterSDA.AdmitReason.Code ="Health Concern"
    set encounterSDA.AdmitReason.Description = "general health concern"
    #; set the patientSDA into the container.
    set SDAContainer.Patient = patientSDA
    #; set encounters into the container SDA
    do SDAContainer.Encounters.Insert(encounterSDA)
    return SDAContainer
}
}

Converting SDA to FHIR Using ObjectScript

In the previous example, the FHIR resource was generated from SDA with the help of the Interoperability framework. In this section, we will build a FHIR bundle directly using ObjectScript.

Creating a FHIR Bundle from an SDA Container

The CreateSDAContainer method returns an object of type HS.SDA3.Container (we referred to it above). This SDA container must be converted to a stream before being passed to the TransformStream method. The TransformStream method then processes the stream and returns a FHIR bundle as a %DynamicObject in tTransformObj.bundle.

ClassMethod CreateBundle(fhirVersion As %String = "R4") As %DynamicObject
{
    try {
        Set SDAContainer = ..CreateSDAContainer()
        Do SDAContainer.XMLExportToStream(.stream)
        #; Should pass a stream, not a container object
        Set tTransformObj = ##class(HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR).TransformStream(stream, "HS.SDA3.Container", fhirVersion)
        return tTransformObj.bundle
    } catch ex {
        write ex.DisplayString()
    }
    return ""
}

Creating a FHIR Bundle Using an SDA Section

In this approach, the patientSDA is declared directly within ObjectScript. This SDA object is then passed to the TransformObject method, which processes it and returns a FHIR bundle as a %DynamicObject.
ClassMethod CreatePatientResourceDirectSet()
{
    set patientBundle = ""
    try {
        #; convert your custom dataset into SDA with your DTL
        set patientSDA = ##class(HS.SDA3.Patient).%New()
        set patientSDA.Name.FamilyName = "stood"
        set patientSDA.Name.GivenName = "test"
        set patientSDA.Gender.Code="male"
        set patientSDA.Gender.Description="birth gender"
        #dim tTransformObj As HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR = ##class(HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR).TransformObject(patientSDA,"R4")
        set patientBundle = tTransformObj.bundle
    } catch ex {
        write ex.DisplayString()
    }
    return patientBundle
}

Creating an Allergy Resource with a Custom FHIR DTL and Allergy Extension

Populate all required fields, including the custom extension fields, directly within the SDA object. You should pass the FHIR version as the second parameter of the TransformObject method ("R4" stands for FHIR Release 4). Pass the completed SDA object to the FHIR transformation class to generate the AllergyIntolerance FHIR bundle.

Note: The custom extension for the allergy resource has already been defined, and the custom DTL mapping has been registered.

ClassMethod CreateAllergyWithDTL()
{
    #; The "HS.Local.FHIR.DTL.SDA3.vR4.Allergy.AllergyIntolerance" DTL is already registered for extension mapping
    #; fetch the data from the table/global and set it into the allergySDA directly.
    set allergySDA = ##class(HS.SDA3.Allergy).%New()
    set allergySDA.Extension.Criticality = "critical"
    set allergySDA.Extension.Type = "t1"
    set allergySDA.Comments = "testing allergies"
    set allergySDA.AllergyCategory.Code="food"
    set allergySDA.AllergyCategory.Description="sea food"
    #; Set the required and additional properties in the SDA, depending on your requirements.
    #; create a FHIR resource from the allergySDA with extension fields using the custom "HS.Local.FHIR.*" DTL
    #dim tTransformObj As HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR = ##class(HS.FHIR.DTL.Util.API.Transform.SDA3ToFHIR).TransformObject(allergySDA,"R4")
    Set allergyBundle = tTransformObj.bundle
}

FHIR to SDA Conversion

Previously, custom data, HL7 v2.x, or CCDA messages were converted into FHIR. The next implementation involves converting a FHIR Bundle or resource into the SDA format, which can then be stored in the database or transformed into CCDA or HL7 v2.x formats. A JSON- or XML-formatted FHIR resource is received from an external system. Upon receipt, the resource must be converted into the internal data structure and stored in the IRIS database.

Business Service

Requests can be received via HTTP/REST or any other inbound adapter, based on the requirements.

Business Process - FHIR to SDA Transformation

Once InterSystems IRIS receives the FHIR request message, an extensive built-in business process (HS.FHIR.DTL.Util.HC.FHIR.SDA3.Process) is available. This business process takes a FHIR resource or Bundle as input. The FHIR input can only be of the configured FHIR version. The process transforms the FHIR data into SDA3, forwards the SDA3 stream to a specified business host, receives the response from that business host, and returns a FHIR response. Please note that you cannot send the received request to this business process directly. The request input type should be one of the following:

HS.FHIRServer.Interop.Request – for Interoperability productions.
HS.Message.FHIR.Request – for the FHIR repository server.

This means you must convert the request to one of the above formats before sending it.
Creating Interop.Request

ClassMethod CreateReqObjForFHIRToSDA(pFHIRResource As %DynamicObject) As HS.FHIRServer.Interop.Request
{
    #; sample message
    set pFHIRResource = {"resourceType":"Patient","name":[{"use":"official","family":"ashok te","given":["Sidharth"]}],"gender":"male","birthDate":"1997-09-08","telecom":[{"system":"phone","value":"1234566890","use":"mobile"},{"system":"email","value":"tornado1212@gmail.com"}],"address":[{"line":["Some street"],"city":"Manipal1","state":"Karnataka1","postalCode":"1234561"}]}
    set stream = ##class(%Stream.GlobalCharacter).%New()
    do stream.Write(pFHIRResource.%ToJSON())
    #; create a QuickStream
    set inputQuickStream = ##class(HS.SDA3.QuickStream).%New()
    set inputQuickStreamId = inputQuickStream.%Id()
    $$$ThrowOnError( inputQuickStream.CopyFrom(stream) )
    #dim ensRequest as HS.FHIRServer.Interop.Request = ##class(HS.FHIRServer.Interop.Request).%New()
    set ensRequest.QuickStreamId = inputQuickStreamId
    return ensRequest
}

Once the HS.FHIRServer.Interop.Request message is created, send it to the business process to convert the FHIR resource to an SDA bundle.

Production settings:

Business Service Class

This class receives the stream of a FHIR resource via an HTTP request, converts the stream input into the standard format expected by the process (HS.FHIRServer.Interop.Request), and finally calls the FHIR adapter process class to generate the SDA.

Class Samples.Interop.BS.FHIRReceiver Extends Ens.BusinessService
{
Parameter ADAPTER = "EnsLib.HTTP.InboundAdapter";

Property TargetConfigName As Ens.DataType.ConfigName [ InitialExpression = "HS.FHIR.DTL.Util.HC.FHIR.SDA3.Process" ];

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %Stream.Object) As %Status
{
    set inputQuickStream = ##class(HS.SDA3.QuickStream).%New()
    set inputQuickStreamId = inputQuickStream.%Id()
    $$$ThrowOnError( inputQuickStream.CopyFrom(pInput) )
    #dim ensRequest as HS.FHIRServer.Interop.Request = ##class(HS.FHIRServer.Interop.Request).%New()
    set ensRequest.QuickStreamId = inputQuickStreamId
    Do ..SendRequestSync(..TargetConfigName, ensRequest, .pOutput)
    Quit $$$OK
}
}

Creating SDA from the FHIR Resource Using ObjectScript

In the previous example, the SDA document was generated from FHIR with the help of the Interoperability framework. In this section, we will generate an SDA from FHIR directly using ObjectScript. Once you have received the FHIR resource/Bundle as a request in IRIS, convert the FHIR JSON to an SDA container:

Convert the InterSystems %DynamicObject (JSON) into a %Stream object.
Execute the TransformStream method of the HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3 class, which returns the SDA container object as a response.
/// Simple, straightforward FHIR JSON resource to SDA conversion
ClassMethod CreateSDAFromFHIRJSON()
{
    try {
        ; the resource has to be sent as a stream, not a %DynamicObject
        set patientStream = ##Class(%Stream.GlobalCharacter).%New()
        do patientStream.Write({"resourceType":"Patient","name":[{"use":"official","family":"ashok te","given":["Sidharth"]}],"gender":"male","birthDate":"1997-09-08","telecom":[{"system":"phone","value":"1234566890","use":"mobile"},{"system":"email","value":"tornado1212@gmail.com"}],"address":[{"line":["Some street"],"city":"Manipal1","state":"Karnataka1","postalCode":"1234561"}]}.%ToJSON())
        #dim SDAObj As HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3 = ##class(HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3).TransformStream(patientStream,"R4","JSON")
        set SDAContainer = SDAObj.container
        ; XML-based SDA output
        write SDAContainer.XMLExport()
    } catch ex {
        write ex.DisplayString()
    }
}

FHIR XML to SDA Container

Convert the XML into a %Stream object. Execute the TransformStream method of the HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3 class, which returns the SDA container object as a response.

/// Simple, straightforward FHIR XML resource to SDA conversion
ClassMethod CreateSDAFromFHIRXML()
{
    try {
        set patientXML = "<Patient xmlns=""http://hl7.org/fhir""><id value=""example""/><text><status value=""generated""/><div xmlns=""http://www.w3.org/1999/xhtml""><p>John Doe</p></div></text><identifier><use value=""usual""/><type><coding><system value=""http://terminology.hl7.org/CodeSystem/v2-0203""/><code value=""MR""/></coding></type><system value=""http://hospital.smarthealth.org""/><value value=""123456""/></identifier><name><use value=""official""/><family value=""Doe""/><given value=""John""/></name><gender value=""male""/><birthDate value=""1980-01-01""/></Patient>"
        set patientStream = ##Class(%Stream.GlobalCharacter).%New()
        do patientStream.Write(patientXML)
        #dim SDAObj As HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3 = ##class(HS.FHIR.DTL.Util.API.Transform.FHIRToSDA3).TransformStream(patientStream,"R4","XML")
        set SDAContainer = SDAObj.container
        ; XML-based SDA output
        write SDAContainer.XMLExport()
    } catch ex {
        write ex.DisplayString()
    }
}

By following the steps detailed above, you can seamlessly transform data to or from a FHIR resource. The built-in FHIR repository and FHIR facade options are also valuable tools for exposing a FHIR-compliant system and for handling and storing FHIR resources efficiently.
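Because FHIR's RESTful API (outlined at the start of this article) is language-agnostic, any HTTP client can exercise a server like the ones described above. Below is a minimal, hedged Python sketch of a read and a search against a hypothetical FHIR R4 endpoint; the base URL and resource id are placeholders, and authentication and error handling are omitted.

# Sketch: basic FHIR REST interactions with the `requests` library.
# The endpoint below is a placeholder for any FHIR R4 server,
# such as one exposed by InterSystems IRIS for Health.
import requests

base = "https://fhir.example.org/fhir/r4"   # placeholder base URL
headers = {"Accept": "application/fhir+json"}

# Read a single resource: GET {base}/Patient/{id}
patient = requests.get(f"{base}/Patient/1", headers=headers).json()
print(patient.get("resourceType"), patient.get("id"))

# Search: GET {base}/Patient?family=Doe returns a Bundle of matches
bundle = requests.get(f"{base}/Patient", params={"family": "Doe"}, headers=headers).json()
print(bundle.get("resourceType"), bundle.get("total"))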
Announcement
Anastasia Dyubaylo · Jun 5

The 4th InterSystems Ideas Contest

Hello Community, We're thrilled to invite all our Developer Community members (both InterSystems employees and not) to participate in our next contest! 💡 The 4th InterSystems Ideas Contest 💡 We're looking for your innovative ideas to enhance InterSystems IRIS and related products and services. We encourage suggestions based on real-life use cases, highlighting the tangible benefits your idea will bring to other users and how it will enhance developers' experiences with InterSystems technology. 📅 Duration: June 9 - July 20, 2025 🏆 Prizes for the best ideas and a random draw! 🎁 Gifts for everyone: A special gift will be given to each author whose idea is accepted in the contest. >> SUBMIT AN IDEA << Accepted ideas should: be created during the Ideas Contest period by a user registered on the InterSystems Ideas portal (you can log in via InterSystems SSO); not be part of other already existing ideas - only new ideas are allowed; not describe the existing functionality of InterSystems IRIS and related Products or Services; be posted in English; be written by a person, not generated by AI; be accepted as meaningful by InterSystems experts; ❗adhere to the structure below: 1️⃣ Description of the idea 2️⃣ Who is the target audience? 3️⃣ What problem does it solve? 4️⃣ How does this impact the efficiency, stability, reliability, etc., of the product? 5️⃣ Provide a specific use case or scenario that illustrates how this idea could be used in practice. All ideas are subject to moderation. We may ask you to clarify the submitted idea. Ideas that meet the requirements will receive a special "Ideas Contest" status. Who can participate? We invite EVERYONE to join our new Ideas Contest. Both InterSystems employees and non-employees are welcome to participate and submit their ideas. Prizes 1. Participation gift - authors of all accepted ideas will get: 🎁 Aluminum Media Stand 2. Expert award - InterSystems experts will select the best ideas. Winners will get: 🥇 1st place - Stilosa Barista Espresso Machine & Cappuccino Maker 🥈 2nd place - Osmo Mobile 7 🥉 3rd place - Smart Mini Projector XGODY Gimbal 3 3. Random award - an idea chosen at random will get: 🏅 Smart Mini Projector XGODY Gimbal 3 Note: InterSystems employees are eligible to receive only the participation gift. Expert and Random awards can only be won by Developer Community members who are not InterSystems employees. Important dates: ⚠️ Idea Submission: June 9 - July 13 ✅ Voting for ideas: July 14 - July 20 🎉 Winners announcement: July 21 Good luck! 🍀 Note: All prizes are subject to availability and shipping options. Some items may not be available for international shipping to specific countries; in this case, an equivalent alternative will be provided. We will let you know if a prize is not available and offer a possible replacement. Prizes cannot be delivered to residents of Crimea, Russia, Belarus, Iran, North Korea, Syria, or other US-embargoed countries. Can interns join, and do we count as employees who can get the participation gift? 😊 Interns are absolutely welcome to join — we'd love to have you involved! 😊 And they are considered employees for this contest. While the Expert and Random awards are reserved for non-employees, taking part is still a great opportunity. As an intern, you have a unique, hands-on perspective that can contribute to a deeper understanding of what the product truly needs. Thanks for the clarification. I'll be thinking of ideas! As I'm diving into Angular right now, mine is to add a projection to a TypeScript interface.
Is there a specific tag we're supposed to use for the contest this year?

Hey Developers! The fantastic prizes for the 4th InterSystems Ideas Contest have been chosen, and here they are:

Expert Award 🏆 InterSystems experts will select the best ideas, with amazing prizes awaiting the winners:
🥇 1st place: Stilosa Barista Espresso Machine & Cappuccino Maker
🥈 2nd place: Osmo Mobile 7
🥉 3rd place: Smart Mini Projector XGODY Gimbal 3

Random Award One lucky idea chosen at random will win: 🏅 Smart Mini Projector XGODY Gimbal 3

Reminder: To qualify for the contest, ensure your idea submission follows this required structure:

1. Description of the idea
2. Who is the target audience?
3. What problem does it solve?
4. How does this impact the efficiency, stability, reliability, etc., of the product?
5. Provide a specific use case or scenario that illustrates how this idea could be used in practice.

Good luck, everyone! 🚀✨

There is no need to choose a specific tag. All ideas that pass our experts' moderation will be added to the Contest.

Hi Community! We have an update on the dates of the contest - it's extended until July 20. During InterSystems Ready 2025, we received numerous requests to do this, as many of you were focused on your presentations or other commitments related to the event and didn't have the opportunity to submit your ideas. You asked, we listened!

Don't forget, for an idea to take part in the contest, it has to follow the structure:

1. Description of the idea
2. Who is the target audience?
3. What problem does it solve?
4. How does this impact the efficiency, stability, reliability, etc., of the product?
5. Provide a specific use case or scenario that illustrates how this idea could be used in practice.

At this point, we have a lot of interesting ideas, but they don't adhere to the terms, so they aren't considered for the contest. @Mark.OReilly, @Andre.LarsenBarbosa, @Abdul.Manan, @Marykutty.George1462, @Jeffrey.Drumm, @Robert.Barbiaux, @Sylvain.Guilbaud, @Ashok.Kumar.
Article
Murray Oldfield · Nov 29, 2016

InterSystems Data Platforms and performance – Part 9 InterSystems IRIS VMware Best Practice Guide

This post provides guidelines for configuration, system sizing and capacity planning when deploying Caché 2015 and later on a VMware ESXi 5.5 and later environment. I jump right in with recommendations, assuming you already have an understanding of the VMware vSphere virtualization platform. The recommendations in this guide are not specific to any particular hardware or site-specific implementation, and are not intended as a fully comprehensive guide to planning and configuring a vSphere deployment -- rather, this is a checklist of best practice configuration choices you can make. I expect that the recommendations will be evaluated for a specific site by your expert VMware implementation team.

[A list of other posts in the InterSystems Data Platforms and performance series is here.](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)

_Note:_ This post was updated on 3 Jan 2017 to highlight that VM memory reservations must be set for production database instances to guarantee memory is available for Caché, so that there will be no swapping or ballooning, which would negatively impact database performance. See the section *Memory* below for more details.

### References

The information here is based on experience and on reviewing publicly available VMware knowledge base articles and VMware documents, for example [Performance Best Practices for VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere-esxi-vcenter-server-67-performance-best-practices.pdf), and mapping them to the requirements of Caché database deployments.

## Are InterSystems' products supported on ESXi?

It is InterSystems policy and procedure to verify and release InterSystems' products against processor types and operating systems, including when operating systems are virtualised. For specifics see the [InterSystems support policy](http://www.intersystems.com/services-support/product-support/virtualization/) and [Release Information](http://www.intersystems.com/services-support/product-support/latest-platform/).

>For example: Caché 2016.1 running on the Red Hat 7.2 operating system on ESXi on x86 hosts is supported.

Note: If you do not write your own applications you must also check your application vendor's support policy.

### Supported Hardware

VMware virtualization works well for Caché when used with current server and storage components. Caché using VMware virtualization has been deployed successfully at customer sites and has been proven in benchmarks for performance and scalability. There is no significant performance impact using VMware virtualization on properly configured storage, network and servers with later model Intel Xeon processors, specifically: Intel Xeon 5500, 5600, 7500, E7-series and E5-series (including the latest E5 v4). Generally Caché and applications are installed and configured on the guest operating system in the same way as for the same operating system on bare-metal installations.

It is the customer's responsibility to check the [VMware compatibility guide](http://www.vmware.com/resources/compatibility/search.php) for the specific servers and storage being used.

# Virtualised architecture

I see VMware commonly used in two standard configurations with Caché applications:

- Where primary production database operating system instances are on a 'bare-metal' cluster, and VMware is only used for additional production and non-production instances such as web servers, printing, test, training and so on.
- Where ALL operating system instances, including primary production instances, are virtualized.

This post can be used as a guide for either scenario; however, the focus is on the second scenario, where all operating system instances, including production, are virtualised. The following diagram shows a typical physical server set up for that configuration.

_Figure 1. Simple virtualised Caché architecture_

Figure 1 shows a common deployment with a minimum of three physical host servers to provide N+1 capacity and availability, with host servers in a VMware HA cluster. Additional physical servers may be added to the cluster to scale resources. Additional physical servers may also be required for backup/restore media management and disaster recovery.

For recommendations specific to _VMware vSAN_, VMware's Hyper-Converged Infrastructure solution, see the following post: [Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity). Most of the recommendations in this post can be applied to vSAN -- with the exception of some of the obvious differences in the Storage section below.

# VMware versions

The following table shows key recommendations for Caché 2015 and later.

vSphere is a suite of products, including vCenter Server, that allows centralised system management of hosts and virtual machines via the vSphere client.

>This post assumes that vSphere will be used, not the "free" ESXi Hypervisor only version.

VMware has several licensing models; ultimately the choice of version is based on what best suits your current and future infrastructure planning. I generally recommend the "Enterprise" edition for its added features such as Dynamic Resource Scheduling (DRS) for more efficient hardware utilization and Storage APIs for storage array integration (snapshot backups). The VMware web site shows edition comparisons. There are also Advanced Kits that allow bundling of vCenter Server and CPU licenses for vSphere. Kits have limitations for upgrades, so they are usually only recommended for smaller sites that do not expect growth.

# ESXi Host BIOS settings

The ESXi host is the physical server. Before configuring BIOS you should:

- Check with the hardware vendor that the server is running the latest BIOS
- Check whether there are any server/CPU model specific BIOS settings for VMware.

Default settings for server BIOS may not be optimal for VMware. The following settings can be used to optimize the physical host servers to get best performance. Not all settings in the following table are available on all vendors' servers.

# Memory

The following key rules should be considered for memory allocation.

When running multiple Caché instances or other applications on a single physical host, VMware has several technologies for efficient memory management, such as transparent page sharing (TPS), ballooning, swap, and memory compression. For example, when multiple OS instances are running on the same host, TPS allows overcommitment of memory without performance degradation by eliminating redundant copies of pages in memory, which allows virtual machines to run with less memory than on a physical machine.

>Note: VMware Tools must be installed in the operating system to take advantage of these and many other features of VMware.
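If you want a quick programmatic view of whether ballooning or swapping is happening on any VM (before diving into `esxtop`), the per-VM quick statistics are exposed via vCenter's API. Here is a minimal sketch using VMware's open source pyVmomi Python SDK -- the vCenter hostname and credentials are placeholders:

```python
# Sketch: report memory sharing/ballooning/swapping per VM via pyVmomi.
# Hostname and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        qs = vm.summary.quickStats  # values reported in MB
        print(f"{vm.name}: shared={qs.sharedMemory}MB "
              f"ballooned={qs.balloonedMemory}MB swapped={qs.swappedMemory}MB")
    view.Destroy()
finally:
    Disconnect(si)
```

Non-zero ballooned or swapped values on a production Caché VM are a red flag for memory pressure; see the reservation advice below.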
Although these features exist to allow for overcommitting memory, the recommendation is to always start by sizing the vRAM of all VMs to fit within the physical memory available. It is especially important in production environments to carefully consider the impact of overcommitting memory, and to overcommit only after collecting data to determine the amount of overcommitment possible. To determine the effectiveness of memory sharing and the degree of acceptable overcommitment for a given Caché instance, run the workload and use the VMware commands `resxtop` or `esxtop` to observe the actual savings.

A good reference is to go back and look at the [fourth post in this series on memory](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory) when planning your Caché instance memory requirements, especially the section "VMware Virtualisation considerations" where I point out:

>Set VMware memory reservation on production systems. You *must* avoid any swapping for shared memory, so set your production database VMs' memory reservation to at least the size of Caché shared memory plus memory for Caché processes and operating system and kernel services. If in doubt, **reserve the full production database VM's memory (100% reservation)** to guarantee memory is available for your Caché instance so there will be no swapping or ballooning, which would negatively impact database performance.

Notes: Large memory reservations will impact vMotion operations, so it is important to take this into consideration when designing the vMotion/management network. A virtual machine can only be live migrated, or started on another host with VMware HA, if the target host has free physical memory greater than or equal to the size of the reservation. This is especially important for production Caché VMs. For example, pay particular attention to HA Admission Control policies.

>Ensure capacity planning allows for distribution of VMs in the event of HA failover.

For non-production environments (test, train, etc.) more aggressive memory overcommitment is possible; however, do not overcommit Caché shared memory -- instead, limit shared memory in the Caché instance by configuring fewer global buffers.

Current Intel processor architecture has a NUMA topology. Processors have their own local memory and can access memory on other processors in the same host. Not surprisingly, accessing local memory has lower latency than accessing remote memory. For a discussion of CPU, check out the [third post in this series](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu), including a discussion about NUMA in the _comments section_. As noted in the BIOS section above, a strategy for optimal performance is to ideally size VMs only up to the maximum number of cores and memory on a single processor. For example, if your capacity planning shows your biggest production Caché database VM will be 14 vCPUs and 112 GB memory, then consider whether a cluster of servers with 2x E5-2680 v4 (14-core processor) and 256 GB memory is a good fit.

>**Ideally** size VMs to keep memory local to a NUMA node. But don't get too hung up on this. If you need a "Monster VM" bigger than a NUMA node that is OK; VMware will manage NUMA for optimal performance.

It is also important to right-size your VMs and not allocate more resources than are needed (see below).
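Coming back to the reservation advice above: if you manage more than a handful of VMs it is easy to miss one, so it can be worth scripting the change. A minimal sketch, again with pyVmomi, assuming the connection `si` from the earlier example; the VM name "PROD-DB-1" is a placeholder:

```python
# Sketch: set a 100% memory reservation on a production database VM.
# Assumes an existing pyVmomi connection `si`; "PROD-DB-1" is a placeholder name.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "PROD-DB-1")
view.Destroy()

spec = vim.vm.ConfigSpec()
# Reserve all configured vRAM (in MB) so Caché shared memory
# can never be ballooned or swapped by the hypervisor.
spec.memoryAllocation = vim.ResourceAllocationInfo(
    reservation=vm.config.hardware.memoryMB)
task = vm.ReconfigVM_Task(spec=spec)
```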
## CPU

The following key rules should be considered for virtual CPU allocation.

Production Caché systems should be sized based on benchmarks and measurements at live customer sites. For production systems, use a strategy of initially sizing the system with the same number of vCPUs as bare-metal CPU cores, then monitor as per best practice to see if the virtual CPUs (vCPUs) can be reduced.

### Hyperthreading and capacity planning

A good starting point for sizing __production database__ VMs based on your rules for physical servers is to calculate the physical server CPU requirements for the target processor with hyper-threading enabled, then simply make the translation:

>One physical CPU (includes hyperthreading) = One vCPU (includes hyperthreading).

A common misconception is that hyper-threading somehow doubles vCPU capacity. This is NOT true for physical servers or for logical vCPUs. Hyperthreading on a bare-metal server may give a 30% uplift in performance over the same server without hyperthreading, but this can also vary depending on the application.

For initial sizing, assume that the vCPU has full core dedication. For example, if you have a 32-core (2x 16-core) E5-2683 V4 server, size for a total of up to 32 vCPU capacity, knowing there may be available headroom. This configuration assumes hyper-threading is enabled at the host level. VMware will manage the scheduling between all the applications and VMs on the host. Once you have spent time monitoring the application, operating system and VMware performance during peak processing times, you can decide if higher consolidation is possible.

### Licensing

In vSphere you can configure a VM with a certain number of sockets or cores. For example, if you have a dual-processor VM (2 vCPUs), it can be configured as two CPU sockets, or as a single socket with two CPU cores. From an execution standpoint it does not make much of a difference because the hypervisor will ultimately decide whether the VM executes on one or two physical sockets. However, specifying that the dual-CPU VM really has two cores instead of two sockets could make a difference for software licenses. Note: Caché licensing counts the cores (not threads).
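The socket/core topology is just another property of the VM's configuration. As an illustration, with pyVmomi (continuing the connection and `vm` lookup pattern from the earlier examples; the values are illustrative only) you could present 2 vCPUs as a single socket with two cores:

```python
# Sketch: present 2 vCPUs as 1 socket x 2 cores, which can matter for
# per-socket software licensing. Values are illustrative; assumes `vm`
# was located as in the earlier examples. The VM typically needs to be
# powered off before changing CPU topology.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(
    numCPUs=2,           # total vCPUs
    numCoresPerSocket=2  # 2 cores per socket => a single virtual socket
)
task = vm.ReconfigVM_Task(spec=spec)
```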
# Storage

>This section applies to the more traditional storage model using a shared storage array. For _vSAN_ recommendations also see the following post: [Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity)

The following key rules should be considered for storage.

## Size storage for performance

Storage bottlenecks are one of the most common problems affecting Caché system performance, and the same is true for VMware vSphere configurations. The most common problem is sizing storage simply for GB capacity, rather than allocating a high enough number of spindles to support expected IOPS. Storage problems can be even more severe in VMware because more hosts can be accessing the same storage over the same physical connections.

## VMware Storage overview

VMware storage virtualization can be categorized into three layers, for example:

- The storage array is the bottom layer, consisting of physical disks presented as logical disks (storage array volumes or LUNs) to the layer above.
- The next layer is the virtual environment occupied by vSphere. Storage array LUNs are presented to ESXi hosts as datastores and are formatted as VMFS volumes.
- Virtual machines are made up of files in the datastore and include virtual disks that are presented to the guest operating system as disks that can be partitioned and used in file systems.

VMware offers two choices for managing disk access in a virtual machine -- VMware Virtual Machine File System (VMFS) and raw device mapping (RDM); both offer similar performance. For simple management VMware generally recommends VMFS, but there may be situations where RDMs are required. As a general recommendation, unless there is a particular reason to use RDM, choose VMFS; _new development by VMware is directed to VMFS and not RDM._

### Virtual Machine File System (VMFS)

VMFS is a file system developed by VMware that is dedicated and optimized for clustered virtual environments (it allows read/write access from several hosts) and the storage of large files. The structure of VMFS makes it possible to store VM files in a single folder, simplifying VM administration. VMFS also enables VMware infrastructure services such as vMotion, DRS and VMware HA.

Operating systems, applications, and data are stored in virtual disk files (.vmdk files). vmdk files are stored in the datastore. A single VM can be made up of multiple vmdk files spread over several datastores. As the production VM in the diagram below shows, a VM can include storage spread over several datastores. For production systems, best performance is achieved with one vmdk file per LUN; for non-production systems (test, training, etc.) multiple VMs' vmdk files can share a datastore and a LUN.

While vSphere 5.5 has a maximum VMFS volume size of 64TB and a maximum VMDK size of 62TB, when deploying Caché typically multiple VMFS volumes mapped to LUNs on separate disk groups are used to separate IO patterns and improve performance -- for example, random or sequential IO disk groups, or separating production IO from the IO of other environments. The following diagram shows an overview of an example VMware VMFS storage layout used with Caché:

_Figure 2. Example Caché storage on VMFS_

### RDM

RDM allows management and access of raw SCSI disks or LUNs as VMFS files. An RDM is a special file on a VMFS volume that acts as a proxy for a raw device. VMFS is recommended for most virtual disk storage, but raw disks might be desirable in some cases. RDM is only available for Fibre Channel or iSCSI storage.

### VMware vStorage APIs for Array Integration (VAAI)

For the best storage performance, customers should consider using VAAI-capable storage hardware. VAAI can improve performance in several areas, including virtual machine provisioning and the use of thin-provisioned virtual disks. VAAI may be available as a firmware update from the array vendor for older arrays.

### Virtual Disk Types

ESXi supports multiple virtual disk types:

**Thick Provisioned** – where space is allocated at creation. There are further types:

- Eager Zeroed – writes 0's to the entire drive. This increases the time it takes to create the disk, but results in the best performance, even on the first write to each block.
- Lazy Zeroed – writes 0's as each block is first written to. Lazy zero results in a shorter creation time, but reduced performance the first time a block is written to. Subsequent writes, however, have the same performance as on eager-zeroed thick disks.

**Thin Provisioned** – where space is allocated and zeroed upon write.
There is a higher I/O cost (similar to that of lazy-zeroed thick disks) during the first write to an unwritten file block, but on subsequent writes thin-provisioned disks have the same performance as eager-zeroed thick disks.

_In all disk types VAAI can improve performance by offloading operations to the storage array._ Some arrays also support thin provisioning at the array level; do not thin-provision ESXi disks on thin-provisioned array storage, as there can be conflicts in provisioning and management.

### Other Notes

As noted above, as best practice use the same strategies as bare-metal configurations; production storage may be separated at the array level into several disk groups:

- Random access for Caché production databases
- Sequential access for backups and journals, but also a place for other non-production storage such as test, train, and so on

Remember that a datastore is an abstraction of the storage tier and, therefore, it is a logical representation, not a physical representation, of the storage. Creating a dedicated datastore to isolate a particular I/O workload (whether journal or database files), without isolating the physical storage layer as well, does not have the desired effect on performance.

Although performance is key, the choice of shared storage depends more on the existing or planned infrastructure at the site than on the impact of VMware. As with bare-metal implementations, FC SAN is the best performing and is recommended. For FC, 8Gbps adapters are the recommended minimum. iSCSI storage is only supported if appropriate network infrastructure is in place, including: minimum 10Gb Ethernet, and jumbo frames (MTU 9000) must be supported on all components in the network between server and storage, with separation from other traffic.

Use multiple VMware Paravirtual SCSI (PVSCSI) controllers for the database virtual machines or virtual machines with high I/O load. PVSCSI can provide significant benefits by increasing overall storage throughput while reducing CPU utilization. The use of multiple PVSCSI controllers allows the execution of several parallel I/O operations inside the guest operating system. It is also recommended to separate journal I/O traffic from the database I/O traffic through separate virtual SCSI controllers. As a best practice, you can use one controller for the operating system and swap, another controller for journals, and one or more additional controllers for database data files (depending on the number and size of the database data files), as in the sketch below.
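To make that concrete, here is a minimal pyVmomi sketch that adds a second PVSCSI controller (for journal disks, say) to an existing VM; the bus number and temporary device key are illustrative, and the `vm` lookup follows the earlier examples:

```python
# Sketch: add a dedicated PVSCSI controller (e.g. for journal disks) to a VM.
# Assumes `vm` located as in earlier examples; bus number and key are illustrative.
from pyVmomi import vim

ctrl = vim.vm.device.ParaVirtualSCSIController()
ctrl.busNumber = 1        # controller 0 already carries the OS/swap disks
ctrl.key = -101           # temporary negative key; vSphere assigns the real one
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

dev_spec = vim.vm.device.VirtualDeviceSpec()
dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
dev_spec.device = ctrl

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```

New journal vmdk files can then be attached to this controller so their I/O queue is independent of the database disks.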
Aligning file system partitions is a well-known storage best practice for database workloads. Partition alignment on both physical machines and VMware VMFS partitions prevents I/O performance degradation caused by I/O crossing track boundaries. VMware test results show that aligning VMFS partitions to 64KB track boundaries results in reduced latency and increased throughput. VMFS partitions created using vCenter are aligned on 64KB boundaries, as recommended by storage and operating system vendors.

# Networking

The following key rules should be considered for networking.

As noted above, VMXNET adapters have better capabilities than the default E1000 adapter. VMXNET3 allows 10Gb and uses less CPU, whereas E1000 is only 1Gb. If there are only 1Gb network connections between hosts, there is not a lot of difference for client-to-VM communication. However, VMXNET3 allows 10Gb between VMs on the same host, which does make a difference, especially in multi-tier deployments or where there are high network IO requirements between instances. This feature should also be taken into consideration when planning affinity and anti-affinity DRS rules to keep VMs on the same or separate virtual switches. The E1000 uses universal drivers that can be used in Windows or Linux. Once VMware Tools is installed on the guest operating system, VMXNET virtual adapters can be installed.

The following diagram shows a typical small server configuration with four physical NIC ports; two ports have been configured within VMware for infrastructure traffic (dvSwitch0 for Management and vMotion), and two ports for application use by VMs. NIC teaming and load balancing are used for best throughput and HA.

_Figure 3. A typical small server configuration with four physical NIC ports._

# Guest Operating Systems

The following are recommended:

>It is very important to load VMware Tools into all VM operating systems and keep the tools current.

VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system and improves management of the virtual machine. Without VMware Tools installed in your guest operating system, the guest lacks important functionality.

It is vital that the time is set correctly on all ESXi hosts -- it ends up affecting the guest VMs. The default setting for the VMs is not to sync the guest time with the host, but at certain times the guests still do sync their time with the host, and if the time is out this has been known to cause major issues. VMware recommends using NTP instead of VMware Tools periodic time synchronization. NTP is an industry standard and ensures accurate timekeeping in your guest. It may be necessary to open the firewall (UDP 123) to allow NTP traffic.

# DNS Configuration

If your DNS server is hosted on virtualized infrastructure and becomes unavailable, it prevents vCenter from resolving host names, making the virtual environment unmanageable -- however, the virtual machines themselves keep operating without problem.

# High Availability

High availability is provided by features such as VMware vMotion, VMware Distributed Resource Scheduler (DRS) and VMware High Availability (HA). Caché database mirroring can also be used to increase uptime.

It is important that Caché production systems are designed with n+1 physical hosts. There must be enough resources (e.g. CPU and memory) for all the VMs to run on the remaining hosts in the event of a single host failure. If VMware cannot allocate enough CPU and memory resources on the remaining servers after a server failure, VMware HA will not restart the VMs on them.

## vMotion

vMotion can be used with Caché. vMotion allows migration of a functioning VM from one ESXi host server to another in a fully transparent manner. The OS and applications such as Caché running in the VM have no service interruption. When migrating using vMotion, only the status and memory of the VM -- with its configuration -- move. The virtual disk does not need to move; it stays in the same shared-storage location. Once the VM has migrated, it is operating on the new physical host. vMotion can function only with a shared storage architecture (such as a shared SAS array, FC SAN or iSCSI).
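Because a VM with a large memory reservation can only be vMotioned to, or restarted by HA on, a host with at least that much free physical memory (see the Memory section above), it is worth sanity-checking cluster headroom from time to time. A rough pyVmomi sketch, with the same connection assumptions as the earlier examples and an illustrative reservation size:

```python
# Sketch: rough check of whether each host could absorb a production VM's
# full memory reservation (relevant to vMotion and HA restart placement).
# Assumes `si` connected as in earlier examples; the reservation size is illustrative.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

RESERVATION_MB = 112 * 1024  # e.g. a 112 GB fully reserved production VM

for host in view.view:
    total_mb = host.summary.hardware.memorySize // (1024 * 1024)  # bytes -> MB
    used_mb = host.summary.quickStats.overallMemoryUsage          # already MB
    free_mb = total_mb - used_mb
    verdict = "OK" if free_mb >= RESERVATION_MB else "CANNOT take the VM"
    print(f"{host.name}: free ~{free_mb}MB -> {verdict}")
view.Destroy()
```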
As Caché is usually configured to use a large amount of shared memory, it is important to have adequate network capacity available to vMotion; a 1Gb network may be OK, however higher bandwidth may be required, or multi-NIC vMotion can be configured.

## DRS

Distributed Resource Scheduler (DRS) is a method of automating the use of vMotion in a production environment by sharing the workload among different host servers in a cluster. DRS also provides the ability to implement QoS for a VM instance, protecting resources for production VMs by stopping non-production VMs from overusing resources. DRS collects information about the use of the cluster's host servers and optimizes resources by distributing the VMs' workload among the cluster's different servers. This migration can be performed automatically or manually.

## Caché Database Mirror

For mission critical tier-1 Caché database application instances requiring the highest availability, consider also using [InterSystems synchronous database mirroring.](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_bp_vm) Additional advantages of also using mirroring include:

- Separate copies of up-to-date data.
- Failover in seconds (faster than restarting a VM, then the operating system, then recovering Caché).
- Failover in case of application/Caché failure (not detected by VMware).

# vCenter Appliance

The vCenter Server Appliance is a preconfigured Linux-based virtual machine optimized for running vCenter Server and associated services. I have been recommending that sites with small clusters use the VMware vCenter Server Appliance as an alternative to installing vCenter Server on a Windows VM. In vSphere 6.5 the appliance is recommended for all deployments.

# Summary

This post is a rundown of key best practices you should consider when deploying Caché on VMware. Most of these best practices are not unique to Caché but can be applied to other tier-1 business critical deployments on VMware. If you have any questions please let me know via the comments below.

Good Day Murray; Anything to look out for with VMs hosting Ensemble databases on VMware being part of VMware Site Recovery Manager making use of vSphere Replication? Can it alone be safely used to boot up the VM on the other site? Regards; Anzelem.