Every thriving community relies on members whose quiet dedication and deep expertise keep it growing stronger year after year. In the InterSystems Developer Community, one such pillar is @Megumi Kakechi, a long-time engineer and support specialist whose 17 years with InterSystems and 9 years on the Developer Community reflect a true passion for helping others learn, solve problems, and innovate.

👏 Let’s take a closer look at Megumi's remarkable journey and her impact on the InterSystems ecosystem.


Hello!!!

Data migration often sounds like a simple "move data from A to B" task until you actually do it. In reality, it is a complex process that blends planning, validation, testing, and technical precision.

Over several projects where I handled data migration into an HIS running on IRIS (TrakCare), I realized that success comes from a mix of discipline and automation.

Here are a few points I want to highlight.

1. Start with a Defined Data Format.

Before you even open your first file, make sure everyone, especially the data providers, clearly understands the exact data format you expect. Defining templates early avoids unnecessary back-and-forth and rework later.

While Excel or CSV formats are common, I personally feel using a tab-delimited text file (.txt) for data upload is best. It's lightweight, consistent, and avoids issues with commas inside text fields.

PatID   DOB Gender  AdmDate
10001   2000-01-02  M   2025-10-01
10002   1998-01-05  F   2025-10-05
10005   1980-08-23  M   2025-10-15

Make sure the date formats in the file are correct and consistent throughout, because these files are usually converted from Excel, and a basic Excel user can easily get the date formats wrong. Incorrect date formats will cause you grief when converting to $HOROLOG.
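As a small sketch of that conversion (assuming ISO dates like the sample above), $ZDATEH with format code 3 turns an ODBC-format date into a $HOROLOG value and throws an error on an invalid one, which you can trap:

Set dob = "2000-01-02"
Try {
    // 3 = ODBC/ISO date format (YYYY-MM-DD)
    Set horolog = $ZDateH(dob, 3)
    Write "Horolog value: ", horolog, !
} Catch ex {
    Write "Invalid date in file: ", dob, !
}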

Tips on Handling Large Data

Hello community,

I wanted to share my experience working on large-data projects. Over the years, I have had the opportunity to handle massive patient data, payor data, and transactional logs in the hospital industry. I have had to build huge reports with advanced logic that fetched data across multiple tables whose indexes did not help me write efficient code.

Here is what I have learned about managing large data efficiently.

Choosing the right data access method.

As we in the community are all aware, IRIS provides multiple ways to access data. Choosing the right method depends on the requirement.

  • Direct Global Access: Fastest for bulk read/write operations. For example, if I have to traverse indexes and fetch patient data, I can loop through the globals to process millions of records. This saves a lot of time.
Set ToDate=+$H
Set FromDate=+$H-1
For  Set FromDate=$Order(^PatientD("Date",FromDate)) Quit:(FromDate="")!(FromDate>ToDate)  Do
. Set PatId="" For  Set PatId=$Order(^PatientD("Date",FromDate,PatId)) Quit:PatId=""  Do
. . Write $Get(^PatientD("Date",FromDate,PatId)),!
  • Using SQL: Useful for reporting or analytical requirements, though slower for huge data sets.
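For the SQL route, here is a minimal dynamic-SQL sketch; the table and column names mirror the sample data earlier and are assumptions:

Set stmt = ##class(%SQL.Statement).%New()
Set sc = stmt.%Prepare("SELECT PatID, DOB, Gender FROM PatientD WHERE AdmDate = ?")
If $System.Status.IsOK(sc) {
    Set rs = stmt.%Execute("2025-10-01")
    // %Next() returns 0 when the result set is exhausted
    While rs.%Next() { Write rs.%Get("PatID"), " ", rs.%Get("DOB"), ! }
}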


Over time, while working with Interoperability on the IRIS data platform, I developed rules for organizing a project's code into packages and classes: what is usually called a naming convention. In this topic, I want to organize and share these rules. I hope they can be helpful to somebody.


Introduction

Businesses often use in-memory databases or key-value stores (caching layers) when applications require extremely high performance. However, in-memory databases carry a high total cost of ownership and hard scalability limits, causing reliability problems and restart delays when memory limits are exceeded. In-memory key-value stores share these limitations and introduce architectural complexity and network latency as well.

This article explains why the InterSystems IRIS™ data platform is a superior alternative to in-memory databases and key-value stores for high-performance SQL and NoSQL applications.

Taking Performance and Efficiency to the Next Level

InterSystems IRIS is the only persistent database that can match or beat the performance of in-memory databases and caching layers for concurrent data ingestion and analytics processing. It can process incoming transactions, persist the data to disk, and index it for analytics in under one microsecond on commercially available hardware without introducing network latency.

The superior ingest performance of InterSystems IRIS results in part from its multi-dimensional data engine, which allows efficient and compact storage in a rich data structure. Because it uses an efficient multi-dimensional data model with sparse storage techniques instead of two-dimensional tables, random data access and updates are accomplished with very high performance, fewer resources, and less disk capacity. It also provides in-memory, in-process APIs in addition to traditional TCP/IP access APIs to optimize ingest performance.
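As a rough illustration of what multi-dimensional sparse storage means in practice (the global name and subscripts here are hypothetical), an IRIS global stores only the nodes you actually set, keyed by arbitrary subscripts:

// Only the nodes that are set consume storage; there are no empty rows or columns
Set ^Ticker("AAPL", 20251001, "price") = 262.24
Set ^Ticker("AAPL", 20251001, "volume") = 1200
Set ^Ticker("MSFT", 20251001, "price") = 517.03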


Window functions in InterSystems IRIS let you perform powerful analytics — like running totals, rankings, and moving averages — directly in SQL.
They operate over a "window" of rows related to the current row, without collapsing results like GROUP BY.
This means you can write cleaner, faster, and more maintainable queries — no loops, no joins, no temp tables.

In this article, let's explore the mechanics of window functions by addressing some common data analysis tasks.
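As a minimal sketch of the idea (the Orders table and its columns are hypothetical), a running total per customer can be computed with a window function through dynamic SQL:

Set sql = "SELECT CustomerID, OrderDate, Amount, SUM(Amount) OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS RunningTotal FROM Orders"
Set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
// Each row keeps its identity; RunningTotal accumulates within its partition
While rs.%Next() { Write rs.%Get("CustomerID"), " ", rs.%Get("RunningTotal"), ! }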


Overview

This web interface is designed to facilitate the management of Data Lookup Tables via a user-friendly web page. It is particularly useful when your lookup table values are large, dynamic, and frequently changing. By granting end-users controlled access to this web interface (read, write, and delete permissions limited to this page), they can efficiently manage lookup table data according to their needs.

The data managed through this interface can be seamlessly utilized in HealthConnect rules or data transformations, eliminating the need for constant manual monitoring and management of the lookup tables and thereby saving significant time.
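Under the hood (a simplified sketch; the table name here is hypothetical), Data Lookup Table entries live in the ^Ens.LookupTable global, which is what a page like this reads and writes, and what rules and DTLs consult via the Lookup() function:

Set ^Ens.LookupTable("GenderMap", "M") = "Male"         // create or update an entry
Set value = $Get(^Ens.LookupTable("GenderMap", "M"))    // read it back
Kill ^Ens.LookupTable("GenderMap", "M")                 // delete the entry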

Note:
If the standard Data Lookup Table does not meet your mapping requirements, you can create a custom table and adapt this web interface along with its supporting class with minimal modifications. Sample class code is available upon request.

IRIS Home Assistant Add-On (HAOS)

InterSystems IRIS Community Edition HAOS Add-On

Run InterSystems IRIS inside Home Assistant, as an add-on. Before you dismiss this article as just a gimmick, I'd like you to step back and look at how easy it is to launch IRIS-based applications on this platform. If you look at Open Exchange, you will see dozens upon dozens of applications worthy of launching that are basically hung out to dry as gitware, launchable only if you want to get into a laptop battle with containerd or Docker. With a simple git repo and a specification, you can now build your app on IRIS and make it launchable through a marketplace with limited hassle for your end users. Run it alongside Ollama and the LLM/LAM implementations, expose anything in IRIS as a sensor, or expose an endpoint in your IRIS app to interact with anything you've connected to HAOS. Wanna restart an IRIS production with a flick of a physical switch or an AI assistant? You can do it with this add-on, or your own, right alongside the home automation hackers.


IKO Helm Status: WFH

Here is an option for your headspace if you are designing a multi-cluster architecture and the Operator is an FTE to the design. You can run the Operator from a central Kubernetes cluster (A) and point it at another Kubernetes cluster (B), so that when you apply an IrisCluster to B, the Operator works remotely from A and plans the cluster accordingly on B. This design keeps some resource heat off the actual workload cluster, spares us some ServiceAccounts/RBAC, and gives us only one Operator deployment to worry about, so we can concentrate on the IRIS workloads.


Power your IrisCluster serviceTemplate with kube-vip

If you're running IRIS in a mirrored IrisCluster for HA in Kubernetes, the question of providing a Mirror VIP (Virtual IP) becomes relevant. Virtual IP offers a way for downstream systems to interact with IRIS using one IP address. Even when a failover happens, downstream systems can reconnect to the same IP address and continue working.

The lead-in above was stolen (gaffled, jacked, pilfered) from techniques shared with the community for VIPs across public clouds with IRIS by @Eduard Lebedyuk ...

Articles: ☁ vip-aws | vip-gcp | vip-azure

This version strives to solve the same challenges for IRIS on Kubernetes deployed via MAAS or on-prem, and possibly, though yet to be realized, using cloud mechanics with managed Kubernetes services.

Reviews on Open Exchange - #59

If one of your packages on OEX receives a review, OEX notifies you only about YOUR own package.
The rating reflects the reviewer's experience with the status found at the time of review.
It is a snapshot and might have changed since.
Reviews by other members of the community are marked by * in the last column.


When I started my journey with InterSystems IRIS, especially with Interoperability, one of the first and most common questions I had was: how can I run something on an interval or schedule? In this topic, I want to share two simple classes that address this issue. I'm surprised that similar classes aren't already somewhere in EnsLib. Or maybe I didn't search well? Anyway, this topic is not meant to be complex work, just a couple of snippets for beginners.
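For context, the standard building block for this pattern (not the author's classes, which are in the full article) is a business service driven by Ens.InboundAdapter, whose CallInterval setting fires OnProcessInput every N seconds. A minimal sketch:

Class Demo.Scheduler.Service Extends Ens.BusinessService
{

/// Ens.InboundAdapter calls OnProcessInput once per CallInterval (seconds)
Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Periodic work goes here; this just writes a trace entry each tick
    $$$TRACE("Tick at "_$ZDateTime($Horolog,3))
    Quit $$$OK
}

}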


Mirror Your Database Across the Galaxy with Seeding

Hello cpf fans! For this distraction I used the "seed" capability in IRIS to provision an entire IrisCluster mirror, 4 maps wide with compute, starting from an IRIS.DAT in a galaxy far, far away. This is pretty powerful if you have had a great deal of success with a solution running on a monolithic implementation and want it to scale to the outer rim with Kubernetes and the InterSystems Kubernetes Operator. Even though my midichlorian count is admittedly low, I have seen some hardcore Caché hackers shovel around DATs, compact and shrink them, and update their ZROUTINES, so this same approach could also help shrink and secure your containerized workload. If you squint and feel all living things around you, you can see a glimpse of in-place (logical) mirroring in the future as a function of the Operator, and a migration path to a fully operational mirrored Death Star as the workload matures.



"Haul" a Portable Registry for Airgapped IrisClusters

Rancher Government Hauler streamlines deploying and maintaining InterSystems container workloads in air-gapped environments by simplifying how you package and move the required assets. It treats container images, Helm charts, and other files as content and collections, letting you fetch, store, and distribute them declaratively or via CLI, without changing your existing workflows. Meaning your charts and whatnot can have conditionals on your pull locations in Helm values, etc.

If you have been tracking how HealthShare is being deployed via IPM packages, you can certainly appreciate the adoption of OCI-compliant storage for the packages themselves using ORAS, which is core to the Hauler solution.


Target Practice for IrisClusters with KWOK

KWOK (Kubernetes WithOut Kubelet) is a lightweight tool that simulates nodes and pods, without running real workloads, so you can quickly test and scale IrisCluster behavior, scheduling, and zone assignment. For those of you wondering what value this has without the IRIS workload, you will quickly realize it when you play with your desk toys awaiting nodes and pods to come up, or when you get the bill for provisioning expensive disk behind the PVCs for no other reason than to validate your topology.

Here we will use it to simulate an IrisCluster and target a topology across 4 zones, implementing high-availability mirroring across zones, disaster recovery to an alternate zone, and horizontal ephemeral compute (ECP) in a zone of its own. All of this is done locally, suitable for repeatable testing, and a valuable validation checkmark on the road to production.

Reviews on Open Exchange - #57

If one of your packages on OEX receives a review, OEX notifies you only about YOUR own package.
The rating reflects the reviewer's experience with the status found at the time of review.
It is a snapshot and might have changed since.
Reviews by other members of the community are marked by * in the last column.


InterSystems FAQ rubric

When exporting with the Export() method of the %Library.Global class, if the export format (fourth argument, OutputFormat) is set to 7, "Block format/Caché block format (%GOF)", mapped globals cannot be exported; only globals in the default global database of the namespace are exported. To export mapped globals in %GOF format, specify the directory of the database to which they are mapped in the first parameter of %Library.Global.Export().

An example of execution is shown below.
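A sketch along those lines (the directory, global, and file names are placeholders): passing an implied namespace of the form "^^<database directory>" as the first argument exports globals from that database:

Set sc = ##class(%Library.Global).Export("^^c:\intersystems\iris\mgr\mappeddb\", "MyGlobal.gbl", "c:\temp\MyGlobal.gof", 7)
// Check the returned %Status and display any error
If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) }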

Reviews on Open Exchange - #58

If one of your packages on OEX receives a review, OEX notifies you only about YOUR own package.
The rating reflects the reviewer's experience with the status found at the time of review.
It is a snapshot and might have changed since.
Reviews by other members of the community are marked by * in the last column.
