Clear filter
Question
Rovmen paul · Jul 9
Hi everyone, I’ve set up a scheduled task in InterSystems IRIS to run every night at 2:00 AM, but it doesn’t seem to trigger consistently. Some nights it works as expected, but other times it just skips without any error in the logs.
The task is marked as enabled, and there are no conflicts in the Task Manager. I’ve double-checked the system time and time zone settings; all looks fine.
Could it be a cache issue or something to do with system load?
Thanks!
Screenshots would certainly help, and an explanation of what the task actually does would help as well (is it a backup, does it export data to a different system, wash the dishes maybe? :-) ). Is it possible that the task may at times take longer than 24 hours to execute? I have seen instances where a process 'hangs', so TaskMan thinks it's still running and doesn't fire up a new instance of that task.
Hope this helps!
Article
Luis Angel Pérez Ramos · Jul 9
You've probably encountered the terms Data Lake, Data Warehouse, and Data Fabric everywhere over the last 10-15 years. Everything can be solved with one of these three things, or a combination of them (here and here are a couple of articles from our official website in case you have any doubts about what each of these terms means). If we had to summarize the purpose of all these terms visually, we could say that they all try to solve situations like this:
Our organizations are like that room, a multitude of drawers filled with data everywhere, in which we are unable to find anything we need, and we are completely unaware of what we have.
Well, InterSystems could not be left behind: taking advantage of the capabilities of InterSystems IRIS, we have created a Data Fabric solution called InterSystems Data Fabric Studio (are we or are we not original?).
Data Fabric
First of all, let's take a closer look at the features that characterize a Data Fabric, and what better way to do so than by asking our beloved ChatGPT directly:
A Data Fabric is a modern architecture that seeks to simplify and optimize data access, management, and use across multiple environments, facilitating a unified and consistent view of data. Its most distinctive features include:
Unified and transparent access
Seamless integration of structured, semi-structured, and unstructured data.
Seamless access regardless of physical or technological location.
Centralized metadata management
Advanced data catalogs that provide information on origin, quality, and use.
Automatic data search and discovery capabilities.
Virtualization and data abstraction
Eliminating the need to constantly move or replicate data.
Dynamic creation of virtual views that enable real-time distributed queries.
Integrated governance and security
Consistent application of security, privacy, and compliance policies across all environments.
Integrated protection of sensitive data through encryption, masking, and granular controls.
AI-powered automation
Automating data discovery, preparation, integration, and optimization using artificial intelligence.
Automatic application of advanced techniques to improve quality and performance.
Advanced analytical capabilities
Integrated support for predictive analytics, machine learning, and real-time data processing.
InterSystems Data Fabric Studio
InterSystems Data Fabric Studio, or IDFS from now on, is a cloud-based SaaS solution (for now) whose objective is to provide the functionality expected of a Data Fabric.
Those of you with more experience developing with InterSystems IRIS will have clearly seen that many of the Data Fabric features are easily implemented on IRIS, which is exactly what we thought at InterSystems. Why not leverage our technology by providing our customers with a solution?
Modern and user-friendly interface.
This is a true first for InterSystems: a simple, modern, and functional web interface based on the latest versions of technologies like Angular.
With transparent access to your source data.
The first step to efficiently exploiting your data begins with connecting to it. Different data sources require different types of connections, such as JDBC, REST APIs, or CSV files.
IDFS provides connectors for a wide variety of data sources, including connections to different databases via JDBC using pre-installed connection libraries.
Analyze your data sources and define your own catalog.
Every Data Fabric must allow users to analyze the information available in their data sources by displaying all associated metadata that allows them to decide whether or not it is relevant for further exploitation.
With IDFS, once you've defined the connections to your different databases, you can begin the discovery and cataloging tasks using features such as importing schemas defined in the database.
In the following image, you can see an example of this discovery phase. From an established connection to an Oracle database, we can access all the schemas present in it, as well as all the tables defined within each schema.
This functionality is not limited to the rigid structures defined by external databases; IDFS, using SQL queries between multiple tables in the data source, allows you to generate catalogs with only the information that is most relevant to the user.
Below you can see an example of a query against multiple tables in the same database and a visualization of the retrieved data.
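Since the screenshots are not reproduced here, a catalog query of this kind might look like the following sketch. This is a hypothetical example: the schema, table, and column names are invented for illustration. Note that IDFS stores only this query definition as metadata, not the rows it returns:
-- Hypothetical multi-table query used to define a catalog entry
SELECT p.PATIENT_ID, p.FULL_NAME, a.ADMISSION_DATE, a.WARD
FROM HOSPITAL.PATIENTS p
JOIN HOSPITAL.ADMISSIONS a ON a.PATIENT_ID = p.PATIENT_ID
WHERE a.ADMISSION_DATE >= DATE '2024-01-01'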
Once our catalog is defined, IDFS will be responsible for storing the configuration metadata. There is no need to import the actual data at any time, thus providing virtualization of the data.
Consult and manage your data catalog.
The data set present in any organization can be considerable, so managing the catalogs we create from it must be agile and simple.
IDFS allows us to consult our entire data catalog at any time, allowing us to recognize at a glance what data we have access to.
As you can see, with the functionalities already explained, we perfectly cover the first two points that ChatGPT indicated as necessary for a Data Fabric tool. Let's now see how IDFS covers the remaining points.
One of the advantages of IDFS is that, since it is built on InterSystems IRIS, it leverages its vector search capabilities, which allow semantic searches across the data catalog, allowing you to retrieve all catalogs related to a given search.
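Under the hood this relies on the vector SQL features of IRIS. As a rough, hypothetical sketch (the table, column, and parameter names are invented, since the actual IDFS schema is not public), a semantic catalog search could be expressed with the standard IRIS vector functions:
-- Hypothetical semantic search over catalog embeddings
SELECT TOP 5 CatalogName
FROM IDFS_Demo.Catalog
ORDER BY VECTOR_COSINE(Embedding, TO_VECTOR(:queryEmbedding, DOUBLE)) DESC
Here :queryEmbedding would hold the embedding of the user's search text, so catalogs are ranked by semantic similarity rather than keyword match.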
Prepare your data for later use.
It's pointless to identify and catalog our data if we can't make it available to third parties in the way they need it. This step is key, as providing data in the required formats will facilitate its use, simplifying the analysis and development processes of new solutions.
IDFS makes this process easier by creating "Recipes," a name that fits perfectly since what we're going to do is "cook" our data.
As with any good recipe, our ingredients (the data) will go through several steps that will allow us to finally prepare the dish to our liking.
Prepare your data (Staging)
The first step in any recipe is to gather all the necessary ingredients. This is the preparation or staging step. This step allows you to choose from your entire catalog the one that contains the required information.
Transform your data (Transformation)
Any Data Fabric worth its salt must be able to transform data sources and must have the capacity to do so quickly and effectively.
IDFS allows data to be conditioned through the necessary transformations so that the client can understand the data.
These transformations can be of various types: string replacement, rounding of values, SQL expressions that transform data, etc. All of these data transformations will be persisted directly to the IRIS database without affecting the data source at any time.
After this step, our data would be adapted to the requirements of the client system that will use it.
Data Validation
In a Data Fabric, it's not enough to simply transform the data; it's necessary to ensure that the data being provided to third parties is accurate.
IDFS has a data validation step that allows us to filter the data we provide to our clients. Data that doesn't meet validation will generate warnings or alerts to be managed by the responsible person.
An important point of this validation phase in IDFS is that it can also be applied to the fields we transformed in the previous step.
Data Reconciliation
It is very common to need to validate our data with an external source to ensure that the data in our Data Fabric is consistent with the information available in other tables in our data source.
IDFS has a reconciliation process that allows us to compare our validated data with this external data source, thereby ensuring its validity.
Data Promotion
Every Data Fabric must be able to forward all the information that has passed through it to third-party systems. To do this, it must have processes that export this transformed and validated data.
IDFS allows you to promote data that has gone through all the previous steps to a previously defined data source. This promotion is done through a simple process in which we define the following:
The data source to which we will send the information.
The target schema (related to a table in the data source).
The mapping between our transformed and validated data and the destination table.
Once the previous configuration is complete, our recipe is ready to go into action whenever we want. To do so, we only need to take one last step: schedule the execution of our recipe.
Business scheduler
Before continuing, let's quickly review what we've done so far:
Define our data sources.
Import the relevant catalogs.
Create a recipe to cook our data.
Configure the import, transformation, validation, and promotion of our data to an external database.
As you can see, all that's left is to define when we want our recipe to run. Let's get to it!
In a very simple way, we can indicate when we want the steps defined in our recipe to be executed, either on a scheduled basis, at the end of a previous execution, manually, etc.
These execution scheduling capabilities will allow us to seamlessly chain recipe executions, thereby streamlining their execution and giving us more detailed control over what's happening with our data.
Each execution of our recipes will leave a record that we will be able to consult later to know the status of said execution:
The executions will generate a series of reports that are easily searchable and downloadable. Each report will show the results of each of the steps defined in our recipe:
Conclusions
We've reached the end of the article. I hope it helped you better understand the concept of Data Fabric and that you found our new InterSystems Data Fabric Studio solution interesting.
Thank you for your time!
Question
steven Henry · Jul 10
Hello my friends,
I have a problem with ObjectScript: why does the value of the address become like this?
Everything works fine except the Address.
This is my code. Do I need something to turn this into a real address? Should I add something to my code?
set paper=obj.PAADMPAPMIDR.PAPMIPAPERDR
if '$isobject(paper) continue
set Address=paper.PAPERStName
thank you for your help
Best Regards,
Steven Henry
Hi Steven,
This property is showing as a Collection property, so it needs to be unpacked to be read as text.
The documentation on processing List properties is available at: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_propcoll#GOBJ_propcoll_list_access
I just put GetAt(1) and it works.
This is the correction I've made:
set Address=paper.PAPERStName.GetAt(1)
From a performance aspect I would not use objects to retrieve the data, but use SQL.
SQL will take care of the conversion for you.
e.g.
select PAADM_PAPMI_DR->PAPMI_PAPER_DR->PAPER_StName
from SQLUSer.PA_Adm
where
PAADM_Hospital_DR = 2
and
PAADM_AdmDate>='19/03/2025'
and
PAADM_AdmDate<='19/03/2025'
Article
Developer Community Admin · Jul 16
Faced with the enormous and ever-growing amounts of data being generated in the world today, software architects need to pay special attention to the scalability of their solutions. They must also design systems that can, when needed, handle many thousands of concurrent users. It’s not easy, but designing for massive scalability is an absolute necessity.
Software architects have several options for designing scalable systems. They can scale vertically by using bigger machines with dozens of cores. They can use data distribution (replication) techniques to scale horizontally for increasing numbers of users. And they can scale data volume horizontally through the use of a data partitioning strategy. In practice, software architects will employ several of these techniques, trading off hardware costs, code complexity, and ease of deployment to suit their particular needs.
This article will discuss how InterSystems IRIS Data Platform supports vertical scalability and horizontal scalability of both user and data volumes. It will outline several options for distributing and partitioning data and/or user volume, giving scenarios in which each option would be particularly useful. Finally, it will talk about how InterSystems IRIS helps simplify the configuration and provisioning of distributed systems.
Vertical Scaling
Perhaps the simplest way to scale is to do so “vertically” – deploy on a bigger machine with more CPUs and memory. InterSystems IRIS supports parallel processing of SQL and includes technology for optimizing the use of CPUs in multi-core machines.
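As a small illustration of the parallel-processing side, IRIS SQL accepts the %PARALLEL keyword in the FROM clause, which lets the optimizer split a query's work across several processes on a multi-core machine. A minimal sketch follows; the table name is invented for illustration:
-- Ask the optimizer to run this aggregate in parallel across CPU cores
SELECT AVG(Amount) FROM %PARALLEL Demo.Orders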
However, there are practical limits to what can be achieved through vertical scaling alone. For one thing, even the largest available machine may not be able to handle the enormous data volumes and workloads required by modern applications. Also, “big iron” can be prohibitively expensive. Many organizations find it more cost-effective to buy, say, four 16-core servers than one 64-core machine.
Capacity planning for single-server architectures can be difficult, especially for solutions that are likely to have widely varying workloads. Having the ability to handle peak loads may result in wasteful underutilization during off hours. On the other hand, having too few cores may cause performance to slow to a crawl during periods of high usage. In addition, increasing the capacity of a single server architecture implies buying an entire new machine. Adding capacity “on the fly” is impossible.
In short, although it is important for software to leverage the full potential of the hardware on which it is deployed, vertical scalability alone is not enough to meet all but the most static workloads.
Horizontal Scaling
For all of the reasons given above, most organizations seeking massive scalability will deploy on networked systems, scaling workloads and/or data volumes “horizontally” by distributing the work across multiple servers. Typically, each server in the network will be an affordable machine, but larger servers may be used if needed to take advantage of vertical scalability as well.
Software architects will recognize that no two workloads are the same. Some modern applications may be accessed by hundreds of thousands of users concurrently, racking up very high numbers of small transactions per second. Others may only have a handful of users, but query petabytes worth of data. Both are very demanding workloads, but they require different approaches to scalability. We will start by considering each scenario separately.
Horizontal Scaling of User Volume
To support a huge number of concurrent users (or transactions), InterSystems IRIS relies on unique data caching technology called Enterprise Cache Protocol (ECP).
Within a network of servers, one will be configured as the data server where data is persisted. The others will be configured as application servers. Each application server runs an instance of InterSystems IRIS and presents data to the application as though it were a local database. Data is not persisted on the application servers. They are there to provide cache and CPU processing power.
User sessions are distributed among the application servers, typically through a load balancer, and queries are satisfied from the local application server cache, if possible. Application servers will only retrieve data from the data server if necessary. InterSystems IRIS automatically synchronizes data between all cluster participants.
With the compute work taken care of by the application servers, the data server can be dedicated mostly to persisting transaction outcomes. Application servers can easily be added to, or removed from, the cluster as workloads vary. For example, in a retail use case, you may want to add a few application servers to deal with the exceptional load of Black Friday and switch them off again after the holiday season has finished.
Application servers are most useful for applications where large numbers of transactions must be performed, but each transaction only affects a relatively small portion of the entire data set. Deployments that use application servers with ECP have been shown to support many thousands of concurrent users in a variety of industries.
Horizontal Scaling of Data Volume
When queries – usually analytic queries – must access a large amount of data, the “working dataset” that needs to be cached in order to support the query workload efficiently may exceed the memory capacity of a single machine. InterSystems IRIS provides a capability called sharding, which physically partitions large database tables across multiple server instances. Applications still access a single logical table on an instance designated as the shard master. The shard master decomposes incoming queries and sends them to the shard servers, each of which holds a distinct portion of the table data and associated indices. The shard servers process the shard-local queries in parallel and send their results back to the shard master for aggregation.
Data is partitioned among shard servers according to a shard key, which can be automatically managed by the system, or defined by the software architect based on selected table columns. Through the careful selection of shard keys, tables that are often joined can be co-sharded, so rows from those tables that would typically be joined together are stored on the same shard server, enabling the join to happen entirely local to each shard server, and thus maximizing parallelization and performance.
As data volumes grow, additional shards can easily be added. Sharding is completely transparent to the application and to users.
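To make this concrete, here is a minimal sketch of how sharded tables can be declared in IRIS SQL, assuming a sharded cluster is already configured. The table and column names are invented; the point is that giving both tables the same shard key co-shards them, so rows that join together land on the same shard server:
CREATE TABLE Demo.Orders (
    OrderId BIGINT,
    CustomerId BIGINT,
    Amount NUMERIC(12,2),
    SHARD KEY (CustomerId)
)
CREATE TABLE Demo.OrderLines (
    OrderLineId BIGINT,
    CustomerId BIGINT,
    Sku VARCHAR(50),
    SHARD KEY (CustomerId)
)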
Not all tables need to be sharded. For example, in analytics applications, fact tables (e.g. orders in a retail scenario) are usually very large, and will be sharded. The much smaller dimension tables (e.g. product, point of sale, etc.) will not be. Non-sharded tables are persisted on the shard master. If a query requires joins between sharded and non-sharded tables, or if data from two different shards must be joined, InterSystems IRIS uses a highly efficient ECP-based mechanism to correctly and efficiently satisfy the request. In these cases, InterSystems IRIS will only share between shards the rows that are needed, rather than broadcasting entire tables over the network, as many other technologies would. InterSystems IRIS transparently improves the efficiency and performance of big data query workloads through sharding, without limiting the types of queries that can be satisfied.
The InterSystems IRIS architecture enables complex multi-table joins when querying distributed, partitioned data sets without requiring co-sharding, without replicating data, and without requiring entire tables to be broadcast across networks.
Scaling Both User and Data Volumes
Many modern solutions must simultaneously support both a high transaction rate (user volume) and analytics on large volumes of data. One example: a private wealth management application that provides dashboards summarizing clients’ portfolios and risk, in real time based on current market data.
InterSystems IRIS enables such Hybrid Transactional and Analytical Processing (HTAP) applications by allowing application servers and sharding to be used in combination. Application servers can be added to the architecture pictured in Figure #2 to distribute the workload on the shard master. Workloads and data volumes can be scaled independently of each other, depending on the needs of the application.
When applications require the ultimate in scalability (for example, if a predictive model must score every record in a large table while new records are being ingested and queried at the same time) each individual data shard can act as the data server in an ECP model. We refer to the application servers that share the workloads on data shards as “query shards.” This, combined with the transparent mechanisms for ensuring high availability of an InterSystems IRIS cluster, provides solution architects with everything they need to satisfy their solution’s unique scalability and reliability requirements.
The comparative performance and efficiency of InterSystems IRIS’ approach to sharding has been demonstrated and documented in a benchmark test validated by a major technology analyst firm. In tests of an actual mission critical financial services use case, InterSystems IRIS was shown to be faster than several highly specialized databases, while requiring less hardware and accessing more data.
Flexible Deployment
InterSystems IRIS gives software developers a great deal of flexibility when it comes to designing a highly efficient, scalable solution. But scalability may come at the cost of increased complexity, as additional servers, taking on a variety of roles, are added to the architecture. InterSystems IRIS simplifies the provisioning and deployment of servers (whether physical or virtual) in a distributed architecture.
InterSystems enables simple scripts to be used for configuring InterSystems IRIS containers as a data server, shard server, shard master, application server, etc. Containers can be deployed easily in public or private clouds. They are also easily decommissioned, so scalable architectures can be designed to grow or contract with fluctuating needs.
Conclusion
Massive scalability is a requirement for modern applications, particularly Hybrid Transactional and Analytical Processing applications that must handle very high workloads and data volumes simultaneously. InterSystems IRIS Data Platform gives software architects options for cost-effectively scaling their applications. It supports vertical scaling, application servers for horizontally scaling user volume, and a highly efficient approach to sharding for horizontally scaling data volume that eliminates the need for network broadcasts. All these technologies can be used independently or in combination to tailor a scalable architecture to an application’s specific requirements.
More articles on the subject:
Horizontal Scalability with InterSystems IRIS
Running InterSystems IRIS in a FaaS mode with Kubeless
Scaling Cloud Hosts and Reconfiguring InterSystems IRIS
Highly available IRIS deployment on Kubernetes without mirroring
Source: Massive Scalability with InterSystems IRIS Data Platform
Announcement
Anastasia Dyubaylo · Jul 21
Hi Community!
Our 💡 InterSystems Ideas Contest 💡 has come to an end. 26 new ideas that followed the required structure were accepted into the contest!
They all focus on improving InterSystems IRIS and related products, highlighting tangible benefits for developers once the ideas are implemented.
And now let's announce the winners...
Expert Awards
🥇 1st place goes to the idea Extending an open source LLM to support efficient code generation in InterSystems technology by @Yuri.Gomes. The winner will receive 🎁 Stilosa Barista Espresso Machine & Cappuccino Maker.
🥈 2nd place goes to the idea Streaming JSON Parsing Support by @Ashok.Kumar. The winner will receive 🎁 Osmo Mobile 7.
🥉 3rd place goes to the idea Auto-Scaling for Embedded Python Workloads in IRIS by @diba. The winner will receive 🎁 Smart Mini Projector XGODY Gimbal 3.
Random Award
Using the Wheel of Names, we've chosen one random lucky winner:
🏆 Random award goes to the idea Do not include table statistics when exporting Production for deployment by @Enrico.Parisi. The winner will receive 🎁 Smart Mini Projector XGODY Gimbal 3.
Here's the recording of the draw.
🔥 Moreover, all participants will get a special gift - an aluminum media stand.
Let's have a look at the participants and their brilliant ideas:
Author · Idea
@Yuri.Gomes · Extending an open source LLM to support efficient code generation in InterSystems technology
@David.Hockenbroch · Add Typescript Interface Projection
@Enrico.Parisi · Make DICOM interoperability adapter usable in Mirror configuration/environment
@Marykutty.George1462 · Ability to abort a specific message from message viewer or visual trace page
@Enrico.Parisi · Do not include table statistics when exporting Production for deployment
@Ashok.Kumar · Recursive search in Abstract Set Query
@Ashok.Kumar · TTL (Time To Live) Parameter in %Persistent Class
@Ashok.Kumar · Programmatic Conversion from SDA to HL7 v2
@Ashok.Kumar · Streaming JSON Parsing Support
@Ashok.Kumar · Differentiating System-Defined vs. User-Defined Web Applications in IRIS
@Ashok.Kumar · Need for Application-Specific HTTP Tracing in Web Gateway
@Ashok.Kumar · Add Validation for Dispatch Class in Web Application Settings
@Ashok.Kumar · Encoding in SQL Functions
@Ashok.Kumar · Compression in SQL Functions
@Alexey.Maslov · Universal Global Exchange Utility
@Ashok.Kumar · Automatically Expose Interactive API Documentation
@Vishal.Pallerla · Dark Mode for Management Portal
@Ashok.Kumar · IRIS Native JSON Schema Validator
@Ashok.Kumar · Enable Schema Validation for REST APIs Using Swagger Definitions
@diba · Auto-Scaling for Embedded Python Workloads in IRIS
@Dmitry.Maslennikov · Integrate InterSystems IRIS with SQLancer for Automated SQL Testing and Validation
@Dmitry.Maslennikov · Bring IRIS to the JavaScript ORM World
@Ashok.Kumar · HTML Report for UnitTest Results
@Andre.LarsenBarbosa · AI Suggestions for Deprecated Items
@Mark.OReilly · Add a field onto OAuth Client to allow alerting on expiry dates
@Mark.OReilly · Expose "Reply To" as default on EnsLib.EMail.AlertOperation
OUR CONGRATULATIONS TO ALL WINNERS AND PARTICIPANTS!
Thank you for your attention to the Ideas Contest and the effort you devote to the official InterSystems feedback portal 💥
Important note: The prizes are in production. We will contact all the participants when they are ready to ship. If you have any questions, please contact @Liubka.Zelenskaia.
Congratulations to the winners! The ideas were really well presented in this contest!
Grateful for the recognition. Kudos to all winners and participants👏! Many thanks!!
Great initiative, InterSystems and DC always working to get our feedback and ideas.
Congratulations to the winners!
Congratulations to the winners!!
Congrats to the winners!
Congratulations to the winners and participants.
Congratulations to all the winners!
Announcement
Bob Kuszewski · Aug 15
Welcome to the 2025 third quarter update.
Last quarter, we had a few important announcements that are worth reiterating this quarter.
RHEL 10 support was added to IRIS 2025.1
2025.3 will use OpenSSL 3 across all operating systems
SUSE 15 sp6 will be the minimum OS for orgs using SUSE
The minimum CPU standards are going up in 2025.3
Older Windows Server operating systems will no longer be supported in 2025.3
If you’re new to these updates, welcome! This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn’t be considered a committed roadmap.
InterSystems IRIS Production Operating Systems and CPU Architectures
Minimum Supported CPU Architecture
In 2024, InterSystems introduced a minimum supported CPU architecture for all Intel- & AMD-based servers that allows us to take advantage of new CPU instructions to create faster versions of IRIS. IRIS 2025.3 will update that list to require the x86-64-v3 microarchitecture level, which requires the AVX, AVX2, BMI, and BMI2 instructions.
For users with Intel-based systems, this means that Haswell and up will be required.
For users with AMD-based systems, this means that Excavator and up will be required while Piledriver & Steamroller will not be supported.
Are you wondering if your CPU will still be supported? We published a handy article on how to look up your CPU’s microarchitecture in 2023.
Red Hat Enterprise Linux
Upcoming Changes
RHEL 10 - Red Hat released RHEL 10 on May 20th. We released a version of IRIS 2025.1.0 that supports RHEL 10 on June 20th.
IRIS 2025.2 and up will support RHEL 9 & 10, which means that we stopped supporting RHEL 8.
Further reading: RHEL Release Page
Ubuntu
Current Update
Ubuntu 24.04.2 has just been released and minor OS certification has completed successfully.
Further Reading: Ubuntu Releases Page
SUSE Linux
Upcoming Changes
IRIS 2025.3+ will require SUSE Linux Enterprise Server 15 SP6 or greater – SLES 15 sp6 has given us the option to use OpenSSL 3 and, to provide you with the most secure platform possible, we’re going to change IRIS to start taking advantage of it.
In preparation for moving to OpenSSL 3 in IRIS 2025.3, there was no IRIS 2025.2 for SUSE.
Further Reading: SUSE lifecycle
Oracle Linux
Upcoming Changes
We’ve started testing Oracle Linux 10. If history is to be our guide, it should work just fine with any version of IRIS that supports RHEL 10.
Further Reading: Oracle Linux Support Policy
Microsoft Windows
Previous Updates
Windows Server 2025 is now supported in IRIS 2025.1 and up.
Upcoming Changes
IRIS 2025.3+ will no longer support Windows Server 2016 & 2019.
Microsoft has pushed back the anticipated release date for Windows 12 yet again. At this point, it’s best to stop speculating on when it’ll arrive. Whenever it does arrive, we’ll start the process of supporting the new OS then.
Further Reading: Microsoft Lifecycle
AIX
Upcoming Changes
IBM released new Power 11 hardware in July. We anticipate running the new hardware through the paces over the course of the late summer and early fall. Look for a full update on our findings in the Q4’25 or Q1’26 newsletter.
Further Reading: AIX Lifecycle
Containers
Previous Updates
We changed the container base image from Ubuntu 22.04 to Ubuntu 24.04 with IRIS 2024.2
We’re considering changes to the default IRIS container to, by default, have internal traffic (ECP, Mirroring, etc) on a different port from potentially externally facing traffic (ODBC, JDBC, etc). If you have needs in this area, please reach out and let me know.
InterSystems IRIS Development Operating Systems and CPU Architectures
MacOS
Recent Changes
IRIS 2025.1 adds support for MacOS 15 on both ARM- and Intel-based systems.
InterSystems Components
Upcoming Releases
InterSystems API Manager 3.10 has been released. Users of earlier versions of the API manager will need an updated IRIS license key to use version 3.10.
InterSystems Kubernetes Operator 3.8 has been released.
Caché & Ensemble Production Operating Systems and CPU Architectures
Previous Updates
A reminder that the final Caché & Ensemble maintenance releases are scheduled for Q1-2027, which is coming up sooner than you think. See Jeff’s excellent community article for more info.
InterSystems Supported Platforms Documentation
The InterSystems Supported Platforms documentation is the definitive source of information on supported technologies.
IRIS 2025.1 Supported Server Platforms
IRIS 2024.1 Supported Server Platforms
IRIS 2023.1 Supported Server Platforms
Caché & Ensemble 2018.1 Supported Server Platforms
… and that’s all folks. Again, if there’s something more that you’d like to know about, please let us know.
Based on InterSystems IRIS Minimum Supported CPU Models, this CPU change just affects AMD.
Article
Yuri Marx · Aug 8
This article outlines the process of utilizing the renowned Jaeger solution for tracing InterSystems IRIS applications. Jaeger is an open-source product for tracking and identifying issues, especially in distributed and microservices environments. This tracing backend that emerged at Uber in 2015 was inspired by Google's Dapper and Twitter's OpenZipkin. It later joined the Cloud Native Computing Foundation (CNCF) as an incubating project in 2017, achieving graduated status in 2019. This guide will demonstrate how to operate the containerized Jaeger solution integrated with IRIS.
Jaeger Features
Monitor transaction flows executed by one or more applications and components (services) in conventional or distributed environments:
Pinpoint performance bottlenecks in business flows, including distributed ones:
Analyze and optimize dependencies between services, components, classes, and methods:
Identify performance metrics to discover opportunities for improvement:
Jaeger Components
From Jaeger documentation: https://www.jaegertracing.io/docs/1.23/architecture/
Client: Solutions, applications, or technologies (such as IRIS) that send monitoring data to Jaeger.
Agent: A network daemon that listens for spans sent over UDP, batches them, and forwards them to the collector. It is designed for deployment on all hosts as an infrastructure component, abstracting the routing and discovery of collectors from the client.
Collector: Receives traces from Jaeger agents and runs them through a processing pipeline, which currently validates traces, indexes them, performs any necessary transformations, and finally stores them.
Ingester (optional): If a message queue like Apache Kafka buffers data between the collector and storage, the Jaeger ingester reads from Kafka and writes to the storage.
Query: A service that retrieves traces from storage and hosts a UI to display them.
UI: The Jaeger Web Interface for analyzing and tracing transaction flows.
Overview of Open Telemetry (OTel)
Since OpenTelemetry is the technology exploited by IRIS to send tracing data to Jaeger, it is important to understand how it works. OpenTelemetry (aka OTel) is a vendor-neutral, open-source observability framework for instrumenting, generating, collecting, and exporting telemetry data such as traces, metrics, and logs. For this article, we will focus on the traces feature. The fundamental unit of data in OpenTelemetry is the "signal." The purpose of OpenTelemetry is to collect, process, and export these signals, which are system outputs describing the underlying activity of the operating system and applications running on a platform. A signal can be something you want to measure at a specific point in time (e.g., temperature, memory usage), or an event that traverses components of your distributed system that you wish to trace. You can group different signals to observe the internal workings of the same piece of technology from various angles (source: https://opentelemetry.io/docs/concepts/signals/). This article will demonstrate how to emit signals associated with traces (the path of a request through your application) from IRIS to OTel collectors. OpenTelemetry is an excellent choice for monitoring and tracing your IRIS environment and source code because it is supported by more than 40 observability vendors. It is also integrated by many libraries, services, and applications, and adopted by numerous end users (source: https://opentelemetry.io/docs/).
Microservices developed in Java, .NET, Python, NodeJS, InterSystems ObjectScript, and dozens of other languages can send telemetry data to an OTel collector with the help of the remote endpoints and ports (in our example, we will use an HTTP port).
Infrastructure components can also send data, particularly data about performance, resource usage (processor, memory, etc.), and other relevant information for monitoring these components. For InterSystems IRIS, data is collected by Prometheus from the /monitor endpoint and can be transferred to an OTel Collector.
APIs and database tools can also send telemetry data. Some database products are capable of doing it automatically (instrumentalization).
The OTel Collector receives the OTel data and stores it in a compatible database and/or forwards it to monitoring tools (e.g., Jaeger).
How InterSystems IRIS Sends Monitoring Data to Jaeger
Starting with version 2025.1, InterSystems launched support for OpenTelemetry (OTel) in its monitoring API. This new functionality includes the emission of telemetry data for tracing, logging, and environment metrics. This article discusses sending tracing data to Jaeger via OTel. You can find more details at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AOTEL&ADJUST=1. While some programming languages and technologies support automatic OTel data transmission (automatic instrumentation), for IRIS, you need to write specific code instructions to enable this functionality. Download the source code of the sample application from https://openexchange.intersystems.com/package/iris-telemetry-sample, open your IDE, and follow the steps below:
1. Navigate to the dc.Sample.REST.TelemetryUtil class. The SetTracerProvider method initializes a tracer provider, allowing you to set the name and version of the monitored service:
/// Set tracer provider
ClassMethod SetTracerProvider(ServiceName As %String, Version As %String) As %Status
{
Set sc = $$$OK
set attributes("service.name") = ServiceName
set attributes("service.version") = Version
Set tracerProv = ##class(%Trace.TracerProvider).%New(.attributes)
Set sc = ##class(%Trace.Provider).SetTracerProvider(tracerProv)
Quit sc
}
2. The next step is to retrieve the created tracer provider instance:
/// Get tracer provider
ClassMethod GetTracerProvider() As %Trace.Provider
{
Return ##class(%Trace.Provider).GetTracerProvider()
}
3. This sample will monitor the PersonREST API on the /persons/all endpoint. Go to the dc.Sample.PersonREST class (GetAllPersons class method):
/// Retrieve all the records of dc.Sample.Person
ClassMethod GetAllPersons() As %Status
{
#dim tSC As %Status = $$$OK
do ##class(dc.Sample.REST.TelemetryUtil).SetTracerProvider("Get.All.Persons", "1.0")
set tracerProv = ##class(dc.Sample.REST.TelemetryUtil).GetTracerProvider()
set tracer = tracerProv.GetTracer("Get.All.Persons", "1.0")
4. The tracer provider has just created a monitoring service called Get.All.Persons (with version 1.0), and obtained the tracer instance with GetTracer.
5. The sample should create a root Span as follows:
set rootAttr("GetAllPersons") = 1
set rootSpan = tracer.StartSpan("GetAllPersons", , "Server", .rootAttr)
set rootScope = tracer.SetActiveSpan(rootSpan)
6. Spans are pieces of the tracing flow. Each span must be mapped to a piece of source code you want to analyze.
7. SetActiveSpan is mandatory to set the current span that is being monitored.
8. Now, the sample creates some child spans mapped to important pieces of the flow:
set childSpan1 = tracer.StartSpan("Query.All.Persons")
set child1Scope = tracer.SetActiveSpan(childSpan1)
Try {
Set rset = ##class(dc.Sample.Person).ExtentFunc()
do childSpan1.SetStatus("Ok")
} Catch Ex {
do childSpan1.SetStatus("Error")
}
do childSpan1.End()
kill childSpan1
9. This first child span monitors the query for all persons in the database. To create a child span, the sample exploits StartSpan with a suggested title (Query.All.Persons). The active span must then be set to the current child span, which the sample achieves using SetActiveSpan with the childSpan1 reference.
10. The sample executes the business source code (Set rset = ##class(dc.Sample.Person).ExtentFunc()). If the operation is successful, it sets the status to "Ok"; otherwise, it sets it to "Error."
11. The sample ends the monitoring of this piece of code using the End method and killing the childSpan1 reference.
12. You can repeat this procedure for all other code segments you wish to scrutinize:
To monitor retrieving a person by ID (get person details):
set childSpan2 = tracer.StartSpan("Get.PersonByID")
set child2Scope = tracer.SetActiveSpan(childSpan2)
Set person = ##class(dc.Sample.Person).%OpenId(rset.ID)
To observe the Age calculation (class dc.Sample.Person, method CalculateAge):
set tracerProv = ##class(dc.Sample.REST.TelemetryUtil).GetTracerProvider()
set tracer = tracerProv.GetTracer("Get.All.Persons", "1.0")
set childSpan1 = tracer.StartSpan("CalculateAge")
set child1Scope = tracer.SetActiveSpan(childSpan1)
To survey the Zodiac Sign definition (class dc.Sample.Person, method CalculateZodiacSign):
set tracerProv = ##class(dc.Sample.REST.TelemetryUtil).GetTracerProvider()
set tracer = tracerProv.GetTracer("Get.All.Persons", "1.0")
set childSpan1 = tracer.StartSpan("GetZodiacSign")
set child1Scope = tracer.SetActiveSpan(childSpan1)
Configure IRIS and Jaeger Containers
1. Create the OTel collector container (in docker-compose.yml):
# --- 2. OpenTelemetry Collector ---
otel-collector:
  image: otel/opentelemetry-collector-contrib:latest
  command: ["--config=/etc/otel-collector-config.yml"]
  volumes:
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
  ports:
    - "4317:4317" # OTLP gRPC
    - "4318:4318" # OTLP HTTP
    - "9464:9464" # Metrics
  depends_on:
    - iris
    - jaeger
2. Adjust the OTel collector configuration file:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  otlp:
    endpoint: jaeger:4317 # the 'jaeger' service name from docker-compose, for the Collector gRPC
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:9464"
  debug: {}
processors:
  batch: # processor that groups traces into batches
    send_batch_size: 100
    timeout: 10s
connectors:
  spanmetrics: # the SpanMetrics connector
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch] # traces are processed to generate metrics
      exporters: [otlp, spanmetrics]
    metrics:
      receivers: [otlp, spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp]
      exporters: [debug]
3. The OTel collector will receive monitoring data at the following address:
  http:
    endpoint: "0.0.0.0:4318"
4. The OTel collector will send Exporters to Jaeger as shown below:
exporters:
  otlp:
    endpoint: jaeger:4317 # the 'jaeger' service name from docker-compose, for the Collector gRPC
    tls:
      insecure: true
5. The OTel service will create a pipeline to receive and send monitoring data:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
6. In docker-compose.yml, the sample will build an IRIS container, setting the OTEL_EXPORTER_OTLP_ENDPOINT with the OTel collector's address:
iris:
  build:
    context: .
    dockerfile: Dockerfile
  restart: always
  ports:
    - 51773:1972
    - 52773:52773
    - 53773
  volumes:
    - ./:/home/irisowner/dev
  environment:
    - ISC_DATA_DIRECTORY=/home/irisowner/dev/durable
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
7. Finally, a Jaeger container is created to receive data from the OTel collector and provide a UI for users to monitor and trace the IRIS OTel data:
jaeger:
  image: jaegertracing/all-in-one:latest
  ports:
    - "16686:16686" # Jaeger UI
    - "14269:14269" # Jaeger Metrics
    - "14250:14250" # Jaeger Collector gRPC
  environment:
    - COLLECTOR_OTLP_ENABLED=true
There are additional port options for Jaeger that allow you to work with multiple types of collectors. Below you can see the breakdown of the possible exposed ports:
6831/udp: Accepts jaeger.thrift spans (Thrift compact)
6832/udp: Accepts jaeger.thrift spans (Thrift binary)
5778: Jaeger configuration
16686: Jaeger UI
4317: OpenTelemetry Protocol (OTLP) gRPC receiver
4318: OpenTelemetry Protocol (OTLP) HTTP receiver
14250: Accepts model.proto spans over gRPC
14268: Accepts jaeger.thrift spans directly over HTTP
14269: Jaeger health check
9411: Zipkin compatibility
Running the Sample
1. Copy the source code of the sample:
$ git clone https://github.com/yurimarx/iris-telemetry-sample.git
2. Open the terminal in this directory and run the code below:
$ docker-compose up -d --build
3. Create some fake testing data. To do that, open the IRIS terminal or the web terminal at http://localhost:52773/terminal/ and call the following:
USER>do ##class(dc.Sample.Person).AddTestData(10)
4. Open http://localhost:52773/swagger-ui/index.html and execute the endpoint /persons/all.
Analyzing and Tracing the IRIS Code on Jaeger
1. Go to Jaeger at http://localhost:16686/search.
2. Select Get.All.Persons in the "Service" field, and click "Find Traces."
3. Click on the found trace to see its details.
4. Observe the timeline of the selected trace.
5. On the top right, select "Trace Graph":
6. Analyze the dependencies:
7. This dependency analysis is crucial for identifying problems in remote systems/services within your environment, especially if different distributed services use the same service name.
8. Now, select the “Flamegraph”:
9. Observe all components of the monitored transaction flow in a graphical table:
10. With all these Jaeger resources, you can effectively resolve performance problems and identify the source of errors.
Learn More
To delve deeper into distributed tracing, consider the following external resources:
Mastering Distributed Tracing (2019) by Yuri Shkuro: A book by Jaeger's creator explaining the history and architectural choices behind Jaeger, with in-depth coverage of Jaeger's design and operations, as well as distributed tracing in general.
Take Jaeger for a HotROD ride: A step-by-step tutorial demonstrating how to use Jaeger to solve application performance problems.
Introducing Jaeger: An (old) webinar introducing Jaeger and its capabilities.
Detailed tutorial about Jaeger: https://betterstack.com/community/guides/observability/jaeger-guide/
Evolving Distributed Tracing at Uber.
Emit Telemetry Data to an OpenTelemetry-Compatible Monitoring Tool by InterSystems documentation.
Modern Observability with InterSystems IRIS & OpenTelemetry: A video demonstrating how to work with OpenTelemetry and IRIS: https://www.youtube.com/watch?v=NxA4nBe31nA
OpenTelemetry on GitHub: A collection of APIs, SDKs, and tools for instrumenting, generating, collecting, and exporting telemetry data (metrics, logs, and traces) to help analyze software performance and behavior: https://github.com/open-telemetry.
@Yuri.Gomes - thank you for the time you spent pulling together this article ... well done :)
Thanks
Announcement
Liubov Zelenskaia · Aug 6
The InterSystems team is heading to MIT Hacking Medicine in Brazil for the first time, taking place September 5–7, 2025!
Hosted by Einstein Hospital’s innovation hub Eretz.bio, this marks the first-ever edition of the world-renowned MIT healthcare hackathon in Brazil — bringing together experts and students from health, tech, business, and design to develop real-world solutions with lasting impact.
Register to participate here: 🔗 www.eretz.bio/mit-einstei
📍 Location: Albert Einstein Teaching and Research Center – Morumbi, São Paulo, Brazil
📅 Dates: September 5, 6, and 7, 2025
🎯 Free participation – applications open until August 17
Announcement
Michael Braam · Aug 29
#InterSystems Demo Games entry
⏯️ Copilot for InterSystems Embedded BI
The Copilot enables you to leverage InterSystems BI without deep knowledge of InterSystems BI. You can create new cubes, modify existing cubes, or leverage existing cubes to plot charts and pivots just by speaking to the Copilot.
Presenters:🗣 @Michael.Braam, Sales Engineer Manager, InterSystems🗣 @Andreas.Schuetz, Sales Engineer, InterSystems🗣 @Shubham.Sumalya, Sales Engineer, InterSystems
👉 Like this demo? Support the team by voting for it in the Demo Games!
Announcement
Anastasia Dyubaylo · Mar 10
Hi Community!
It's time to celebrate our 25 fellow members who took part in the latest InterSystems Technical Article Contest and wrote
🌟 38 AMAZING ARTICLES 🌟
The competition was filled with outstanding articles, each showcasing innovation and expertise. With so many high-quality submissions, selecting the best was no easy task for the judges.
Let's meet the winners and look at their articles:
⭐️ Expert Awards – winners selected by InterSystems experts:
🥇 1st place: Creating FHIR responses with IRIS Interoperability production by @Laura.BlázquezGarcía
🥈 2nd place: Monitoring InterSystems IRIS with Prometheus and Grafana by @Stav
🥉 3rd place: SQLAchemy-iris with the latest version Python driver by @Dmitry.Maslennikov
⭐️ Community Award – winner selected by Community members:
🏆 Generation of OpenAPI Specifications by @Alessandra.Carena
And...
⭐️ We'd like to highlight the author who submitted 8 articles for the contest: @Julio.Esquerdo
Let's congratulate all our heroes who took part in the Tech Article contest #6:
@Robert.Cemper1003
@Stav
@Aleksandr.Kolesov
@Alessandra.Carena
@Dmitry.Maslennikov
@André.DienesFriedrich
@Ashok.Kumar
@Julio.Esquerdo
@Andre.LarsenBarbosa
@Yuri.Marx
@sween
@Eric.Fortenberry
@Jinyao
@Laura.BlázquezGarcía
@Corentin.Blondeau
@Rob.Tweed
@Timothy.Scott
@Muhammad.Waseem
@Robert.Barbiaux
@rahulsinghal
@Alice.Heiman
@Roy.Leonov
@Parani.K
@Suze.vanAdrichem
@Sanjib.Pandey9191
THANK YOU ALL! You have made an incredible contribution to our Dev Community!
The prizes are in production now. We will contact all the participants when they are ready to ship.
Congratulations to all the winners and participants!
Honored to receive second place and grateful to be part of such a talented community. Congratulations to all winners and participants!
Congratulations to all the winners 🎉
Congratulations to the participants and winners. Special BIG THANKS to the organizers and administrators of this contest. 💐🏵🌷🌻🌹 I'm really proud to see how this community has grown and raised in quality.
Congratulations to all the winners and participants!
Congratulations everyone, and thank you so much for making such an excellent community of developers possible! 💐
Congratulations to all the winners and participants. Your articles were truly inspiring and showcased exceptional creativity and insight.
Congratulations to all the winners and participants. All the articles are excellent!!!
Congratulations to everyone! Great articles all around!
Congratulations to all the participants!
Thanks a lot for all your support! It's been a pleasure to participate 😊 And congratulations to everyone! I think it was a tough competition, all articles were so great!
Good turnout here, congrats all.
Congratulations to all the participants!
Congratulations to the winners... the best community ever !!!
Amazing contributions! Thanks to all the participants!
Kudos to all participants and winners 🎉!
Congratulations all!
Congratulations to all the winners 🎉
Congratulations all winners and participants!!!!
Congratulations to everyone 👏
Congratulations all!
Announcement
Anastasia Dyubaylo · Aug 4
Hi Community,
It's time to announce the winners of the InterSystems Developer Tools Contest!
Thanks to all our amazing participants who submitted 17 applications 🔥🔥
Now it's time to announce the winners!
Experts Nomination
🥇 1st place and $5,000 go to the InterSystems Testing Manager for VS Code app by @John.Murray
🥈 2nd place and $2,500 go to the typeorm-iris app by @Dmitry.Maslennikov
🥉 3rd place and $1,000 go to the IPM Explorer for VSCode app by @John.McBrideDev
🏅 4th place and $500 go to the dc-artisan app by @José.Pereira, @henry, @Henrique
🏅 5th place and $300 go to the iris4word app by @Yuri.Gomes
🌟 $100 go to the Interoperability REST API Template app by @Andrew.Sklyarov
🌟 $100 go to the toolqa app by @André.DienesFriedrich, @Andre.LarsenBarbosa
🌟 $100 go to the iris-message-search app by @sara.aplin
🌟 $100 go to the wsgi-to-zpm app by @Eric.Fortenberry
🌟 $100 go to the templated_email app by @Nikolay.Soloviev, @Sam.Sennin
Community Nomination
🥇 1st place and $1,000 go to the InterSystems Testing Manager for VS Code app by @John.Murray
🥈 2nd place and $600 go to the iris-message-search app by @sara.aplin
🥉 3rd and 4th place and $250 each go to the dc-artisan app by @José.Pereira, @henry, @Henrique and addsearchtable app by @XININGMA
🏅 5th place and $100 go to the toolqa app by @André.DienesFriedrich, @Andre.LarsenBarbosa
Our sincerest congratulations to all the winners!
Join the fun next time ;)
Congratulations to all!!
Kudos to all the winners! Congrats!
Congratulations to all the winners!
Congratulations !!! Thanks all.
I feel honoured to have been awarded a double first! I couldn't have done it without @Timothy.Leavitt 's excellent Test Coverage Tool package. Many thanks Tim.
Congratulations everyone !!
Congratulations! Well done everyone!
Congrats to all!
Congratulations to all !!! And specially, very well done! @José Pereira, @Henry Pereira, @Henrique Dias 🍾🎉
That's it Danusa, congratulations to the Brazilians, the 3 musketeers, who are always making excellent applications, congratulations everyone
Announcement
Bob Kuszewski · Nov 9, 2023
When AMD published the x86-64 standard in 1999, little did they know they were inventing what would become the de-facto architecture for server CPUs. But the CPUs of today aren’t the same as the ones produced 20 years back – they have extensions for everything from Advanced Vector Extensions (AVX) to Hardware-Assisted Virtualization (VT-d).
InterSystems would like to take better advantage of these new extensions in upcoming versions of InterSystems IRIS. While our compilers are smart enough to create optimized code for many situations, some optimizations can only be turned on by explicitly cutting off support for processors that do not have that instruction set. Additionally, we are finding it increasingly difficult to maintain older CPU models to test on.
Starting with IRIS 2024.1, InterSystems is planning to start requiring a minimum CPU microarchitecture for all Intel- & AMD-based servers. This applies across all x86-64 Operating Systems, including Windows, Red Hat, Ubuntu, SUSE, Oracle Linux, and MacOS. The following table lists the CPU microarchitectures that will be supported.
Manufacturer
Supported Microarchitecture
Intel
Haswell (Broadwell), Skylake (Kaby Lake, Amber Lake, Whiskey Lake, Skylake-X, Coffee Lake, Cascade Lake, Comet Lake, Cooper Lake), Palm Cove (Cannon Lake), Sunny Cove (Ice Lake, Lakefield, Ice Lake-SP), Cypress Cove (Rocket Lake), Willow Cove (Tiger Lake), Golden Cove (Alder Lake, Sapphire Rapids), Raptor Cove (Raptor Lake), and newer
AMD
Bulldozer (Piledriver, Steamroller, Excavator), Zen (Zen+, Zen 2, Zen 3, Zen 4, Zen 5), and newer
If you bought a new machine in the last 6 years or so, it’s likely that your processor is included in the above. Please take a few minutes to verify that your servers will meet these new criteria and let me know if this policy will have material impact on your business. The article linked below shows how you can do this.
Support for other CPU architectures remains unchanged.
IBM POWER 8 and higher CPUs for AIX-based workloads.
ARM64v8 and higher are supported for Red Hat Enterprise Linux and Ubuntu workloads.
Customers running in AWS, Azure, or GCP can rest assured that current models available are supported.
In the future, we plan to review this list on an annual basis.
Does this mean IRIS will refuse to install/start if it detects an unsupported processor?
Apple's M* processors specifically were not mentioned, but I suppose they're covered as part of ARM64v8 support?
Good point !
None of my machines fit. Ivy Bridge is not on your list.Would this mean IRIS 2023.* is the last version I can use without major investments in hardware? Your instincts are correct - IRIS will not install on unsupported processors and Apple silicon are ARM processors. Ivy Bridge is the generation just before the cutoff, so you may need to upgrade your computer. We'll have more detailed information on the precise extensions require for 2024.1 when that's closer to release. So, this means that customers not migrated yet from Caché to IRIS, now need to add additional upgrades for servers. That would not help them. Or they will just stick to the 2023.* versions. If a server installation running Caché wants to migrate to IRIS most likely need to upgrade/move/migrate the Operating System as well.
What IRIS officially supported Server Operating System supports older processor CPU architecture/model?
Please note that mine is a genuine question, I don't want to open a debate.
Enrico
I hope this will not affect community licensed versions too! What happens when the license expires? The next version just might lock out. Not amused.
Thank you for the detailed update on the minimum supported CPU models for InterSystems IRIS. I appreciate the clarity provided regarding the supported Intel and AMD microarchitectures.
I have a question regarding the compatibility of InterSystems IRIS with the 'Common KVM processor' model. Could you please confirm if this processor is included in the list of supported microarchitectures?
"Common KVM Processor" is a generic CPU name that some virtualization systems provide to client VMs. You can typically override this in your hypervisor to tell the client system the actual CPU type. Unfortunately, the term "Common KVM Processor" doesn't tell us anything about your CPU's capabilities.
If you use VMware 5.5, you won't be able to install IRIS even if the physical host's processor has the AVX/BMI features, because VMware hides those processor features from the guest.
It is true that VMware 5.5 is very old software; its release date is 2013/09/22.
Article
André Dienes Friedrich · Feb 6
Overview
InterSystems IRIS's IntegratedML tool lets you build, train, and manage machine learning models directly in the database using SQL. In this article, we will go over IntegratedML configuration and, through SQL examples, show how it is used in practical situations.
IntegratedML Configuration
An ML configuration ("ML Configuration") defines the machine learning provider that will perform the training, along with other required information. IntegratedML ships with a default configuration called %AutoML, which is active as soon as InterSystems IRIS is installed.
Creating ML Configuration
To create a new ML configuration, we can use the System Management Portal or SQL commands.
Creating ML Configuration via SQL:
CREATE ML CONFIGURATION MeuMLConfig PROVIDER AutoML USING {"verbosity": 1};
To set this configuration as default:
SET ML CONFIGURATION MeuMLConfig;
To view training runs and their settings:
SELECT * FROM INFORMATION_SCHEMA.ML_TRAINING_RUNS;
IntegratedML Application
Creating a predictive model to estimate the amount of energy generated by a consumer unit:
CREATE MODEL PredicaoEnergia PREDICTING (quantidade_generada) FROM UnidadeConsumidora;
Training the model:
TRAIN MODEL PredicaoEnergia;
Making predictions:
SELECT quantidade_generada, PREDICT(PredicaoEnergia) AS predicao FROM UnidadeConsumidora WHERE id = 1001;
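Before relying on the predictions, you may also want to check the model's quality. IntegratedML provides a validation step for this; a minimal sketch using the same table:

VALIDATE MODEL PredicaoEnergia FROM UnidadeConsumidora;
SELECT * FROM INFORMATION_SCHEMA.ML_VALIDATION_METRICS;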
Implementation: Machine Learning in Solar Energy
1. Data Integration with IRIS
We extracted essential data from multiple tables to build the dataset:
SELECT PSID, CHNNLID, TYPENAME, DEVICESN, DEVICETYPE, FACTORYNAME, STATUS FROM datafabric_solar_bd.EQUIPAMENTS;
2. Predictive Maintenance Model Training
Using Python Embedded in IRIS to train a predictive maintenance model:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import iris  # Embedded Python bridge available inside IRIS processes

# Load data (iris.sql.exec runs SQL inside the current IRIS process and
# returns an iterable result set)
rs = iris.sql.exec("SELECT PSID, DEVSTATUS, ALARMCOUNT FROM datafabric_solar_bd.USINAS")
data = pd.DataFrame([row for row in rs], columns=['PSID', 'DEVSTATUS', 'ALARMCOUNT'])

# Train the model
model = RandomForestClassifier()
model.fit(data[['DEVSTATUS', 'ALARMCOUNT']], data['PSID'])
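Once trained, the classifier can score new readings. A brief usage sketch continuing the block above (the feature values here are made up for illustration):

# Score a hypothetical new reading (DEVSTATUS=2, ALARMCOUNT=5)
sample = pd.DataFrame([[2, 5]], columns=['DEVSTATUS', 'ALARMCOUNT'])
print(model.predict(sample))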
3. Forecasting Energy Production
Using time series analysis to forecast daily energy production:
import pandas as pd
from prophet import Prophet  # the package formerly published as fbprophet
import iris

# Prepare the dataset in the two-column format Prophet expects (ds, y)
rs = iris.sql.exec("SELECT STARTTIMESTAMP, PRODDAYPLANT FROM datafabric_solar_bd.POINTMINUTEDATA")
df = pd.DataFrame([row for row in rs], columns=['ds', 'y'])

# Train the forecasting model and project 30 days ahead
model = Prophet()
model.fit(df)
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
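The forecast DataFrame uses Prophet's standard columns, so the 30-day projection can be inspected directly:

# yhat is the point forecast; yhat_lower/yhat_upper bound the uncertainty interval
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(30))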
4. Identifying Areas of High Solar Irradiance
Analyzing geospatial data makes it possible to identify the areas with the greatest potential for solar energy generation, optimizing resource allocation.
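The article does not show this step, but a minimal sketch of the idea might look like the following, assuming a hypothetical datafabric_solar_bd.SITEREADINGS table holding per-site coordinates and measured irradiance:

import pandas as pd
import iris

# Hypothetical table (assumed for illustration): per-site coordinates
# and measured irradiance (W/m2)
rs = iris.sql.exec(
    "SELECT SITEID, LATITUDE, LONGITUDE, IRRADIANCE "
    "FROM datafabric_solar_bd.SITEREADINGS"
)
df = pd.DataFrame([row for row in rs],
                  columns=['SITEID', 'LATITUDE', 'LONGITUDE', 'IRRADIANCE'])

# Rank sites by average irradiance to surface the strongest candidates
top = (df.groupby(['SITEID', 'LATITUDE', 'LONGITUDE'])['IRRADIANCE']
         .mean()
         .sort_values(ascending=False)
         .head(10))
print(top)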
Conclusion
IntegratedML makes it easier to implement machine learning in InterSystems IRIS by allowing models to be trained and applied directly using SQL. Furthermore, applying machine learning techniques to predictive maintenance and energy generation forecasting can help solar plants operate more efficiently.
Great article
Announcement
Vadim Aniskin · Aug 28, 2023
Hi Community!
Our 2nd InterSystems Idea-A-Thon has come to an end and resulted in 29 brilliant ideas dedicated to the topic of the contest:
💡 Run solutions fast, safe, and green with InterSystems IRIS 💡
Thank you all for your ideas, comments, and votes!
And now it's time to announce the winners...
Experts Nomination
🥇 1st place goes to the idea Light version InterSystems IRIS by @Andre.LarsenBarbosa. The winner will receive 🎁 Apple Watch SE / Fairphone Fairbuds XL headphones.
🥈 2nd place goes to the idea Support for Liquibase by @Yuri.Gomes. The winner will receive 🎁 Speaker bundle JBL Pulse 5 / Apple AirPods Pro 2nd Generation / LEGO Star Wars R2-D2.
🥉 3rd place goes to the idea InterSystems IRIS for Energy management by @Heloisa.Paiva. The winner will receive 🎁 LEGO Porsche 911 / Beeline Bike GPS Computer - Velo 2.
Community Nomination
🌟 Goes to the idea Light version InterSystems IRIS by @Andre.LarsenBarbosa. The winner will receive 🎁 LEGO Porsche 911 / Beeline Bike GPS Computer - Velo 2.
All the winners will get a special Global Masters badge called "Idea-A-Thon Winner".
🔥 We'd like to highlight all the participants and their bright ideas:
Light version InterSystems IRIS by @Andre.LarsenBarbosa
Make IRIS a part of Spring Cloud by @wang.zhe
InterSystems IRIS "green time" profile by @Pietro.Montorfano
Surface test coverage information in VS Code by @John.Murray
Include bar code/QR code recognition as a standard function by @David.Hockenbroch
Energy Consumption Estimate Report Per Production / Business Component by @Rob.Ellis7733
Make interoperability's visual trace to easy to check, when process was running parallel. by @Ohata.Yuji
IRIS Community with no connections limit by @Dmitry.Maslennikov
Support for Liquibase by @Yuri.Gomes
Binary index implementation by @Akio.Hashimoto1419
Full studio migration to VSCode by @Akio.Hashimoto1419
Database Cache (Global Buffers) needs to be optimized by @wang.zhe
InterSystems IRIS for Energy management by @Heloisa.Paiva
InterSystems IRIS for Carbon tracking by @Heloisa.Paiva
Advanced real-time data processing optimization for InterSystems IRIS by @Yone.Moreno
Testing framework for InterSystems IRIS microservices by @diba
Testing dashboard for InterSystems IRIS by @diba
Develop a testing framework for InterSystems IRIS blockchain applications by @diba
Add support for AI-powered test automation by @diba
InterSystems IRIS on Cloud. by @Ohata.Yuji
Download the HL7v2 Browser Extension for Interface Analysts by @Rob.Ellis7733
Add "Create New Router" Option to Business Operation Wizard by @VICTORIA.CASTILLO
Through queuing theory and fuzzy logic: Optimization of patient queue management in hospitals using InterSystems IRIS by @Yone.Moreno
AI-Driven GreenIRIS: Optimization and Sustainability in InterSystems IRIS Solutions through Artificial Intelligence by @Yone.Moreno
Programmatic reports by @Yuri.Gomes
Create real world application by using IRIS and Python Streamlit web framework by @Muhammad.Waseem
AntiMatter Code accelerator by @Alex.Woodhead
Optional "Green Mode" Configuration Setting for IRIS by @Nelson.Tarr
Why use storage for an index that has never been used? by @Nelson.Tarr
All the participants of the 2nd Idea-A-Thon will get our special gift - a Wireless Charging Mouse Pad.
And more...
👏 Special thanks to @Dmitry.Maslennikov, who created the liquibase-iris app to realize the Support for Liquibase idea submitted by @Yuri.Gomes for the Idea-A-Thon.
And we would like to recognize Dmitry's efforts with one of the top prizes as well!
OUR CONGRATULATIONS TO ALL WINNERS AND PARTICIPANTS!
Thank you for your significant contributions to the official InterSystems feedback portal 💥
Important note: The prizes are in production now. We will contact all the participants when they are ready to ship.
Thank you, thank you, and thank you! It is an honor to have participated again. Thank you to everyone who voted and believed in my idea. Thank you to the entire InterSystems team. Thank you to all participants for making this event even more fantastic.
Congrats to the winners!!
Congratulations to all the winners and organizers 👏
Thanks for the nomination, and thanks to @Dmitry Maslennikov and all the Ideas Portal staff.
Thank you very much! I hope my idea can help contribute somehow to healthier energy consumption soon!
Congratulations to all of the winners!
Congratulations to all!!
Congratulations, well done everyone!
Congratulations, kudos everyone!
Congratulations guys!
Congratulations
Congratulations!! 🎉
Announcement
Anastasia Dyubaylo · Sep 25, 2023
Hey Community,
It's time to announce the winners of the InterSystems Python Programming Contest 2023!
Thank you to all our amazing participants who submitted 15 applications 🔥
Experts Nomination
🥇 1st and 🥈 2nd places and $4,000 each are shared between two applications that got the same number of expert votes: iris-vector by @Dmitry.Maslennikov and iris-GenLab by @Muhammad.Waseem
🥉 3rd place and $1,500 go to the iris-recorder-helper app by @Alexey.Nechaev
🏅 4th place and $750 go to the iris-python-machinelearn app by @Dienes
🏅 5th place and $500 go to the Face Login app by @Yuri.Gomes
🌟 $100 go to the native-api-command-line-py-client app by @Robert.Cemper1003
🌟 $100 go to the IRIS-Cloudproof-Encryption app by @Li.XU5494
🌟 $100 go to the BardPythonSample app by @xuanyou.du
🌟 $100 go to the iris-python-lookup-table-utils app by Johannes Heinzl
🌟 $100 go to the apptools-django app by @MikhailenkoSergey
Community Nomination
🥇 1st place and $1,000 go to the iris-vector app by @Dmitry.Maslennikov
🥈 2nd place and $750 go to the iris-python-machinelearn app by @Dienes
🥉 3rd place and $500 go to the iris-GenLab app by @Muhammad.Waseem
🏅 4th place and $300 go to the BardPythonSample app by @xuanyou.du
🏅 5th place and $200 go to the native-api-py-demo app by @shan.yue
Our sincerest congratulations to all the participants and winners!
Join the fun next time ;)
Thanks all, 15 applications of real quality 😀
Thanks to all participants for their involvement and ideas. Congratulations to all participants!!
Thanks all for your efforts to provide fantastic apps and wonderful articles.
Congratulations!
Congratulations to all the winners and organizers 👏 It was a great competition and again a lot to learn. 2nd time sharing the place, this time with @Dmitry.Maslennikov. Heeeeeyyyy guys 👋
Congratulations everyone!!
Congratulations to the winners! Well done!
Kudos to all the winners 👏
Video highlighting the winners!
Wonderful photograph !!
NB to all future participants: having a beard has never been a criterion facilitating access to the first prize 🏆