Search

Clear filter
Announcement
Anastasia Dyubaylo · Apr 22

Early bird discount for the InterSystems READY 2025

Hey Community, Great news for all those of you who have missed the Super Early Bird discount for InterSystems READY 2025! You still have a chance to get an Early Bird discount up until the 26th of May, so don't miss your chance to participate in the event of the year. ➡️ InterSystems Ready 2025 🗓 Dates: June 22-25, 2025 📍 Location: Signia Hilton Bonnet Creek, Orlando, FL, USA Also, there are several ways you can get your passes without paying a dime! Check out our newest competition, Code Your Way to InterSystems READY 2025. The rules are simple: upload your IRIS-based side project to Open Exchange and record a short inspirational video about why you should be the one to get the pass to THE event of the year! Alternatively, redeem the reward in your Global Masters My Rewards section if you have enough points. It comprises a ticket and a three-night hotel accommodation (Sunday, Monday, Tuesday). Please note that flights/transportation costs are not included. We hope to see you there! Awesome! I'll be there!
Announcement
Anastasia Dyubaylo · Mar 10

Winners of the InterSystems Technical Article Contest 2025

Hi Community! It's time to celebrate our 25 fellow members who took part in the latest InterSystems Technical Article Contest and wrote 🌟 38 AMAZING ARTICLES 🌟 The competition was filled with outstanding articles, each showcasing innovation and expertise. With so many high-quality submissions, selecting the best was no easy task for the judges. Let's meet the winners and look at their articles: ⭐️ Expert Awards – winners selected by InterSystems experts: 🥇 1st place: Creating FHIR responses with IRIS Interoperability production by @Laura.BlázquezGarcía 🥈 2nd place: Monitoring InterSystems IRIS with Prometheus and Grafana by @Stav 🥉 3rd place: SQLAlchemy-iris with the latest version Python driver by @Dmitry.Maslennikov ⭐️ Community Award – winner selected by Community members: 🏆 Generation of OpenAPI Specifications by @Alessandra.Carena And... ⭐️ We'd like to highlight the author who submitted 8 articles for the contest: @Julio.Esquerdo Let's congratulate all our heroes who took part in the Tech Article contest #6: @Robert.Cemper1003 @Stav @Aleksandr.Kolesov @Alessandra.Carena @Dmitry.Maslennikov @André.DienesFriedrich @Ashok.Kumar @Julio.Esquerdo @Andre.LarsenBarbosa @Yuri.Marx @sween @Eric.Fortenberry @Jinyao @Laura.BlázquezGarcía @Corentin.Blondeau @Rob.Tweed @Timothy.Scott @Muhammad.Waseem @Robert.Barbiaux @rahulsinghal @Alice.Heiman @Roy.Leonov @Parani.K @Suze.vanAdrichem @Sanjib.Pandey9191 THANK YOU ALL! You have made an incredible contribution to our Dev Community! The prizes are in production now. We will contact all the participants when they are ready to ship. Congratulations to all the winners and participants! Honored to receive second place and grateful to be part of such a talented community. Congratulations to all winners and participants! Congratulations to all the winners 🎉 Congratulations to the participants and winners. Special BIG THANKS to the organizers and administrators of this contest. 💐🏵🌷🌻🌹 I'm really proud to see how this community has grown and risen in quality. Congratulations to all the winners and participants! Congratulations everyone, and thank you so much for making such an excellent community of developers possible! 💐 Congratulations to all the winners and participants. Your articles were truly inspiring and showcased exceptional creativity and insight. Congratulations to all the winners and participants. All the articles are excellent!!! Congratulations to everyone! Great articles all around! Congratulations to all the participants! Thanks a lot for all your support! It's been a pleasure to participate 😊 And congratulations to everyone! I think it was a tough competition, all articles were so great! Good turnout here, congrats all. Congratulations to all the participants! Congratulations to the winners... the best community ever !!! Amazing contributions! Thanks to all the participants! Kudos to all participants and winners 🎉! Congratulations all! Congratulations to all the winners 🎉 Congratulations all winners and participants!!!! Congratulations to everyone 👏 Congratulations all!
Announcement
Bob Kuszewski · May 21

InterSystems Platforms Update Q2-2025

We have a big update this quarter:
- RHEL 10 was released yesterday; read on for what that means for you.
- 2025.3 will use OpenSSL 3 across all operating systems.
- SUSE 15 SP6 will be the minimum OS for orgs using SUSE.
- The minimum CPU standards are going up in 2025.3.
- Older Windows Server operating systems will no longer be supported in 2025.3.

If you're new to these updates, welcome! This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn't be considered a committed roadmap.

InterSystems IRIS Production Operating Systems and CPU Architectures

Minimum Supported CPU Architecture
In 2024, InterSystems introduced a minimum supported CPU architecture for all Intel- & AMD-based servers that allows us to take advantage of new CPU instructions to create faster versions of InterSystems IRIS. InterSystems IRIS 2025.3 will update that list to require the x86-64-v3 microarchitecture level, which requires the AVX, AVX2, BMI, and BMI2 instructions. For users with Intel-based systems, this means that Skylake and up will be required while Haswell/Broadwell will not be supported. For users with AMD-based systems, this means that Excavator and up will be required while Piledriver & Steamroller will not be supported. Are you wondering if your CPU will still be supported? We published a handy article on how to look up your CPU's microarchitecture in 2023.

Red Hat Enterprise Linux
Upcoming Changes: RHEL 10 - Red Hat released RHEL 10 on May 20th. We've been testing InterSystems IRIS 2025.1 on the latest beta of RHEL 10. InterSystems IRIS 2025.1 support – we anticipate officially adding support for RHEL 10 in about a month. That's assuming the GA version of RHEL 10 doesn't introduce any significant problems, of course. Moving forward – once we have support for RHEL 10 in InterSystems IRIS, we will stop supporting RHEL 8 in subsequent versions of InterSystems IRIS. This likely means that InterSystems IRIS 2025.2 will just support RHEL 9 & 10.
Previous Updates: RHEL 9.5 has undergone minor OS certification without incident.
Further reading: RHEL Release Page

Ubuntu
Current Update: Ubuntu 24.04.2 has just been released and minor OS certification has begun.
Further Reading: Ubuntu Releases Page

SUSE Linux
Upcoming Changes: InterSystems IRIS 2025.3+ will require SUSE Linux Enterprise Server 15 SP6 or greater – SLES 15 SP6 has given us the option to use OpenSSL 3 and, to provide you with the most secure platform possible, we're going to change InterSystems IRIS to start taking advantage of it. In preparation for moving to OpenSSL 3 in IRIS 2025.3, there will be no IRIS 2025.2 for SUSE.
Further Reading: SUSE lifecycle

Oracle Linux
Upcoming Changes: We're expecting Oracle Linux 10 to be released around the same time as RHEL 10. Since we support Oracle Linux via the IRIS RHEL kit, we're expecting Oracle Linux 10 support at the same time as RHEL 10 support is released.
Further Reading: Oracle Linux Support Policy

Microsoft Windows
Recent Changes: Windows Server 2025 is now supported in InterSystems IRIS 2025.1 and up.
Upcoming Changes: InterSystems IRIS 2025.3+ will no longer support Windows Server 2016 & 2019. Microsoft has pushed back the anticipated release date for Windows 12 to the fall of 2025. We'll start the process of supporting the new OS after it's been released.
Further Reading: Microsoft Lifecycle

AIX
Upcoming Changes: IBM is rolling out new Power 11 hardware this summer.
We anticipate running the new hardware through its paces over the course of the late summer and early fall. Look for a full update on our findings in the Q4 newsletter.
Previous Changes: IRIS 2024.3 and up only support OpenSSL 3. NOTE: This means that 2024.2 is the last version of IRIS that has both OpenSSL 1 and OpenSSL 3 kits. In IRIS 2023.3, 2024.1, & 2024.2, we provided two separate IRIS kits – one that supports OpenSSL 1 and one that supports OpenSSL 3. Given the importance of OpenSSL 3 for overall system security, we've heard from many of you that you've already moved to OpenSSL 3.
Further Reading: AIX Lifecycle

Containers
Previous Updates: We changed the container base image from Ubuntu 22.04 to Ubuntu 24.04 with IRIS 2024.2. We're considering changes to the default IRIS container to, by default, have internal traffic (ECP, Mirroring, etc.) on a different port from potentially externally facing traffic (ODBC, JDBC, etc.). If you have needs in this area, please reach out and let me know.

InterSystems IRIS Development Operating Systems and CPU Architectures

MacOS
Recent Changes: IRIS 2025.1 adds support for MacOS 15 on both ARM- and Intel-based systems.

InterSystems Components
Upcoming Releases: InterSystems API Manager 3.10 will be released soon. InterSystems Kubernetes Operator 3.8 will be released in the coming weeks as well.

Caché & Ensemble Production Operating Systems and CPU Architectures
Previous Updates: A reminder that the final Caché & Ensemble maintenance releases are scheduled for Q1-2027, which is coming up sooner than you think. See Jeff's excellent community article for more info.

InterSystems Supported Platforms Documentation
The InterSystems Supported Platforms documentation is the definitive source of information on supported technologies.
IRIS 2025.1 Supported Server Platforms
IRIS 2024.1 Supported Server Platforms
IRIS 2023.1 Supported Server Platforms
Caché & Ensemble 2018.1 Supported Server Platforms
… and that's all folks. Again, if there's something more that you'd like to know about, please let us know.

Do the supported IBM POWER processors continue to be POWER8 or later? My mistake - please ignore this. That's right. There's no planned change in supported POWER processors. IBM does a good job of phasing out older processors with new versions of the OS.
Question
Ashok Kumar T · Jun 10

InterSystems Package Manager (ZPM) Installation errors

Hello Community, I encountered the following errors while installing the ZPM module on version 2025.1. The ZPM install command failed on the Community Edition of IRIS for Health. The installer reported the error below for the first bundled Python wheel (the status text is shown with the raw $System.Status control characters removed; the full trace pointed at OnPhase+28^%IPM.ResourceProcessor.PythonWheel.1, %Initialize+8^%IPM.Lifecycle.Base.1, ExecutePhases+163^%IPM.Storage.Module.1, LoadNewModule+118^%IPM.Utils.Module.1, and setup+63^IPM.Installer.1):

Skipping installation of python wheel 'attrs-25.1.0-py3-none-any.whl' due to error: 'Could not find a suitable pip caller. Consider setting UseStandalonePip and PipCaller'. You may need to install this wheel manually or from PyPI to use certain features of IPM.

The identical "Could not find a suitable pip caller" error was then repeated for each of the other wheels shipped with IPM: certifi-2025.1.31, charset_normalizer-2.1.1, idna-3.10, jsonschema-4.23.0, jsonschema_specifications-2024.10.1, oras-0.1.30, referencing-0.36.2, requests-2.32.3, typing_extensions-4.12.2, and urllib3-2.3.0.

In addition, the ZPM installation failed with a 'not found in any repository' error for the applications I attempted to install on the 2024 and 2025 versions:

USER>zpm "install System-Task-REST"
ERROR! 'System-Task-REST' not found in any repository.
USER>zpm
=============================================================================
|| Welcome to the Package Manager Shell (ZPM). Version: 0.10.2 ||
|| Enter q/quit to exit the shell. Enter ?/help to view available commands ||
|| No registry configured ||
|| System Mode: <unset> ||
|| Mirror Status: NOTINIT ||
=============================================================================
IRIS for Windows (x86-64) 2025.1 (Build 223U) Tue Mar 11 2025 18:18:59 EDT [Health:8.2.2]
zpm:USER>install "System-Task-REST"
ERROR! 'System-Task-REST' not found in any repository.
zpm:USER>install swagger-ui
ERROR! 'swagger-ui' not found in any repository.
zpm:USER>install "swagger-ui"
ERROR! 'swagger-ui' not found in any repository.
zpm:USER>

@Ashok.Kumar - could you please raise it here too? As for community images - the latest releases do not have the community registry enabled by default. Use this command to make them work:
USER>zpm repo -r -n registry -url https://pm.community.intersystems.com/ -user "" -pass ""
I've raised the issue. Thanks!
The issue is caused by the PythonRuntimeLibrary not being configured. Starting from version 2024.2, Python is no longer bundled with IRIS due to the introduction of the Flexible Python Runtime feature. As a result, the Python directory is missing, preventing the installation of the Wheel file. Check this post about this configuration.
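For reference, the runtime location is controlled by the PythonRuntimeLibrary setting introduced with the Flexible Python Runtime feature. Below is a minimal sketch of setting it programmatically, assuming your 2024.2+ instance exposes PythonRuntimeLibrary and PythonRuntimeLibraryVersion in the [config] section of the CPF; the library path is a placeholder for your local Python installation, so adjust it and verify the property names against your version's documentation. Run it in the %SYS namespace, then retry the wheel installation:

; Point IRIS at an external Python runtime (placeholder path), then retry zpm.
set props("PythonRuntimeLibrary") = "/usr/lib/x86_64-linux-gnu/libpython3.12.so"
set props("PythonRuntimeLibraryVersion") = "3.12"
set sc = ##class(Config.config).Modify(.props)
if 'sc { do $system.Status.DisplayError(sc) }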
Announcement
Anastasia Dyubaylo · May 20

Vote for the Best Demo! The InterSystems Demo Games Are On!

Hi Community, We’ve got something exciting for you — it’s time for a new demo video contest, and this time, you’re in the judge’s seat! 📺 Demo Games for InterSystems Sales Engineers 📺 For this contest, InterSystems Sales Engineers from around the world submitted short demo videos showcasing unique use cases, smart solutions, and powerful capabilities of InterSystems technologies. Now it’s your turn! We’re opening up voting to the entire Developer Community. Your insight and perspective as developers make you the perfect experts. 👉 How to participate Watch the demos on the Demo Games Contest Page Vote for your favourite entries (Developer Community login required) 🗓 Contest period Voting is open from May 26 until September 14, 2025, and winners will be announced shortly after. New videos will be added throughout the contest — so keep checking back. You might find a favourite right at the end! ✅ How to vote: All active members who have made a valid contribution to the Developer Community or Open Exchange — such as asking or answering questions, writing articles, or publishing applications — are eligible to vote. This includes customers, partners, and employees who registered using their corporate email addresses. To vote: Log in (or create an account) on the Developer Community Visit the Contest Page Select your top 3 favourite videos and click the “Vote” button for each 🏆 Scoring system: 1st place vote = 9 points 2nd place = 6 points 3rd place = 3 points 🔁 Votes from the Official Community Moderators* count double, so your top picks really matter! * Moderators are indicated by a green circle around their avatar. Let the Demo Games begin – and may the best demo win!
Article
Luis Angel Pérez Ramos · Jul 9

InterSystems Data Fabric Studio at your service!

You've probably encountered the terms Data Lake, Data Warehouse, and Data Fabric everywhere over the last 10-15 years. Everything can be solved with one of these three things, or a combination of them (here and here are a couple of articles from our official website in case you have any doubts about what each of these terms means). If we had to summarize the purpose of all these terms visually, we could say that they all try to solve the same situation: a room full of drawers stuffed with data everywhere, in which we are unable to find anything we need, and we are completely unaware of what we have. Our organizations are like that room. Well, at InterSystems we didn't want to be left behind, so taking advantage of the capabilities of InterSystems IRIS we have created a Data Fabric solution called InterSystems Data Fabric Studio (are we or are we not original?).

Data Fabric
First of all, let's take a closer look at the features that characterize a Data Fabric, and what better way to do so than by asking our beloved ChatGPT directly: A Data Fabric is a modern architecture that seeks to simplify and optimize data access, management, and use across multiple environments, facilitating a unified and consistent view of data. Its most distinctive features include:
Unified and transparent access: seamless integration of structured, semi-structured, and unstructured data; seamless access regardless of physical or technological location.
Centralized metadata management: advanced data catalogs that provide information on origin, quality, and use; automatic data search and discovery capabilities.
Virtualization and data abstraction: eliminating the need to constantly move or replicate data; dynamic creation of virtual views that enable real-time distributed queries.
Integrated governance and security: consistent application of security, privacy, and compliance policies across all environments; integrated protection of sensitive data through encryption, masking, and granular controls.
AI-powered automation: automating data discovery, preparation, integration, and optimization using artificial intelligence; automatic application of advanced techniques to improve quality and performance.
Advanced analytical capabilities: integrated support for predictive analytics, machine learning, and real-time data processing.

InterSystems Data Fabric Studio
InterSystems Data Fabric Studio, or IDFS from now on, is a cloud-based SaaS solution (for now) whose objective is to meet the functionalities demanded of a Data Fabric. Those of you with more experience developing with InterSystems IRIS will have clearly seen that many of the Data Fabric features are easily implemented on IRIS, which is exactly what we thought at InterSystems. Why not leverage our technology by providing our customers with a solution?

Modern and user-friendly interface. This is a true first for InterSystems: a simple, modern, and functional web interface based on the latest versions of technologies like Angular.

With transparent access to your source data. The first step to efficiently exploiting your data begins with connecting to it. Different data sources require different types of connections, such as JDBC, REST APIs, or CSV files. IDFS provides connectors for a wide variety of data sources, including connections to different databases via JDBC using pre-installed connection libraries.

Analyze your data sources and define your own catalog.
Every Data Fabric must allow users to analyze the information available in their data sources by displaying all associated metadata that allows them to decide whether or not it is relevant for further exploitation. With IDFS, once you've defined the connections to your different databases, you can begin the discovery and cataloging tasks using features such as importing schemas defined in the database. In the following image, you can see an example of this discovery phase. From an established connection to an Oracle database, we can access all the schemas present in it, as well as all the tables defined within each schema. This functionality is not limited to the rigid structures defined by external databases; IDFS, using SQL queries between multiple tables in the data source, allows you to generate catalogs with only the information that is most relevant to the user. Below you can see an example of a query against multiple tables in the same database and a visualization of the retrieved data. Once our catalog is defined, IDFS will be responsible for storing the configuration metadata. There is no need to import the actual data at any time, thus providing virtualization of the data. Consult and manage your data catalog. The data set present in any organization can be considerable, so managing the catalogs we create based on it is essential to be agile and simple. IDFS allows us to consult our entire data catalog at any time, allowing us to recognize at a glance what data we have access to. As you can see, with the functionalities already explained, we perfectly cover the first two points that ChatGPT indicated as necessary for a Data Fabric tool. Let's now see how IDFS covers the remaining points. One of the advantages of IDFS is that, since it is built on InterSystems IRIS, it leverages its vector search capabilities, which allow semantic searches across the data catalog, allowing you to retrieve all catalogs related to a given search. Prepare your data for later use. It's pointless to identify and catalog our data if we can't make it available to third parties in the way they need it. This step is key, as providing data in the required formats will facilitate its use, simplifying the analysis and development processes of new solutions. IDFS makes this process easier by creating "Recipes," a name that fits perfectly since what we're going to do is "cook" our data. As with any good recipe, our ingredients (the data) will go through several steps that will allow us to finally prepare the dish to our liking. Prepare your data (Staging) The first step in any recipe is to gather all the necessary ingredients. This is the preparation or staging step. This step allows you to choose from your entire catalog the one that contains the required information. Transform your data (Transformation) Any Data Fabric worth its salt must be able to transform data sources and must have the capacity to do so quickly and effectively. IDFS allows data to be conditioned through the necessary transformations so that the client can understand the data. These transformations can be of various types: string replacement, rounding of values, SQL expressions that transform data, etc. All of these data transformations will be persisted directly to the IRIS database without affecting the data source at any time. After this step, our data would be adapted to the requirements of the client system that will use it. 
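The article mentions SQL expressions as one of the transformation types. As a rough, generic illustration of what such an expression looks like in plain InterSystems IRIS SQL (this is not the IDFS interface itself, and the table and column names are hypothetical), a string cleanup plus a rounding step could be run like this:

; Generic IRIS SQL sketch: normalize a text column and round a numeric one (hypothetical names).
set sql = "SELECT REPLACE(CustomerName,'_',' ') AS CleanName, ROUND(OrderTotal,2) AS Total FROM Staging_Orders"
set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
while rs.%Next() { write rs.CleanName, "  ", rs.Total, ! }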
Data Validation In a Data Fabric, it's not enough to simply transform the data; it's necessary to ensure that the data being provided to third parties is accurate. IDFS has a data validation step that allows us to filter the data we provide to our clients. Data that doesn't meet validation will generate warnings or alerts to be managed by the responsible person. An important point of this validation phase in IDFS is that it can also be applied to the fields we transformed in the previous step. Data Reconciliation It is very common to need to validate our data with an external source to ensure that the data in our Data Fabric is consistent with the information available in other tables in our data source. IDFS has a reconciliation process that allows us to compare our validated data with this external data source, thereby ensuring its validity. Data Promotion Every Data Fabric must be able to forward all the information that has passed through it to third-party systems. To do this, it must have processes that export this transformed and validated data. IDFS allows you to promote data that has gone through all the previous steps to a previously defined data source. This promotion is done through a simple process in which we define the following: The data source to which we will send the information. The target schema (related to a table in the data source). The mapping between our transformed and validated data and the destination table. Once the previous configuration is complete, our recipe is ready to go into action whenever we want. To do so, we only need to take one last step: schedule the execution of our recipe. Business scheduler Let's do a quick review before continuing what we've done: Define our data sources. Import the relevant catalogs. Create a recipe to cook our data. Configure the import, transformation, validation, and promotion of our data to an external database. As you can see, all that's left is to define when we want our recipe to run. Let's get to it! In a very simple way, we can indicate when we want the steps defined in our recipe to be executed, either on a scheduled basis, at the end of a previous execution, manually, etc. These execution scheduling capabilities will allow us to seamlessly chain recipe executions, thereby streamlining their execution and giving us more detailed control over what's happening with our data. Each execution of our recipes will leave a record that we will be able to consult later to know the status of said execution: The executions will generate a series of reports that are easily searchable and downloadable. Each report will show the results of each of the steps defined in our recipe: Conclusions We've reached the end of the article. I hope it helped you better understand the concept of Data Fabric and that you found our new InterSystems Data Fabric Studio solution interesting. Thank you for your time!
Question
steven Henry · Jul 10

How to change the Address in InterSystems Trakcare for Objectscript

Hello my friends, I have a problem with ObjectScript: why does the value of the address come out like this? Everything works fine except the Address. This is my code; do I need to do something to turn this into the real address? Should I add something to my code?
set paper=obj.PAADMPAPMIDR.PAPMIPAPERDR
if '$isobject(paper) continue
set Address=paper.PAPERStName
Thank you for your help. Best Regards, Steven Henry
Hi Steven, this property is showing as a collection property, so it needs to be unpacked to be read as text. The documentation on processing list properties is available at: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_propcoll#GOBJ_propcoll_list_access
I just put GetAt(1) and it works. This is the correction that I made:
set Address=paper.PAPERStName.GetAt(1)
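If the street-name collection can hold more than one line, a small loop preserves the extra entries instead of keeping only the first. A minimal sketch along the same lines (the property name comes from the question above; the comma-joining is purely illustrative):

set Address = ""
for i=1:1:paper.PAPERStName.Count() {
    ; append each non-empty address line, separated by commas
    set line = paper.PAPERStName.GetAt(i)
    if line'="" { set Address = Address_$select(Address="":"",1:", ")_line }
}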
Article
Developer Community Admin · Jul 16

Massive Scalability with InterSystems IRIS Data Platform

Faced with the enormous and ever-growing amounts of data being generated in the world today, software architects need to pay special attention to the scalability of their solutions. They must also design systems that can, when needed, handle many thousands of concurrent users. It's not easy, but designing for massive scalability is an absolute necessity. Software architects have several options for designing scalable systems. They can scale vertically by using bigger machines with dozens of cores. They can use data distribution (replication) techniques to scale horizontally for increasing numbers of users. And they can scale data volume horizontally through the use of a data partitioning strategy. In practice, software architects will employ several of these techniques, trading off hardware costs, code complexity, and ease of deployment to suit their particular needs. This article will discuss how InterSystems IRIS Data Platform supports vertical scalability and horizontal scalability of both user and data volumes. It will outline several options for distributing and partitioning data and/or user volume, giving scenarios in which each option would be particularly useful. Finally, it will talk about how InterSystems IRIS helps simplify the configuration and provisioning of distributed systems.

Vertical Scaling
Perhaps the simplest way to scale is to do so "vertically" – deploy on a bigger machine with more CPUs and memory. InterSystems IRIS supports parallel processing of SQL and includes technology for optimizing the use of CPUs in multi-core machines. However, there are practical limits to what can be achieved through vertical scaling alone. For one thing, even the largest available machine may not be able to handle the enormous data volumes and workloads required by modern applications. Also, "big iron" can be prohibitively expensive. Many organizations find it more cost-effective to buy, say, four 16-core servers than one 64-core machine. Capacity planning for single-server architectures can be difficult, especially for solutions that are likely to have widely varying workloads. Having the ability to handle peak loads may result in wasteful underutilization during off hours. On the other hand, having too few cores may cause performance to slow to a crawl during periods of high usage. In addition, increasing the capacity of a single-server architecture implies buying an entire new machine. Adding capacity "on the fly" is impossible. In short, although it is important for software to leverage the full potential of the hardware on which it is deployed, vertical scalability alone is not enough to meet all but the most static workloads.

Horizontal Scaling
For all of the reasons given above, most organizations seeking massive scalability will deploy on networked systems, scaling workloads and/or data volumes "horizontally" by distributing the work across multiple servers. Typically, each server in the network will be an affordable machine, but larger servers may be used if needed to take advantage of vertical scalability as well. Software architects will recognize that no two workloads are the same. Some modern applications may be accessed by hundreds of thousands of users concurrently, racking up very high numbers of small transactions per second. Others may only have a handful of users, but query petabytes worth of data. Both are very demanding workloads, but they require different approaches to scalability. We will start by considering each scenario separately.
Horizontal Scaling of User Volume
To support a huge number of concurrent users (or transactions), InterSystems IRIS relies on a unique data caching technology called Enterprise Cache Protocol (ECP). Within a network of servers, one will be configured as the data server where data is persisted. The others will be configured as application servers. Each application server runs an instance of InterSystems IRIS and presents data to the application as though it were a local database. Data is not persisted on the application servers. They are there to provide cache and CPU processing power. User sessions are distributed among the application servers, typically through a load balancer, and queries are satisfied from the local application server cache, if possible. Application servers will only retrieve data from the data server if necessary. InterSystems IRIS automatically synchronizes data between all cluster participants. With the compute work taken care of by the application servers, the data server can be dedicated mostly to persisting transaction outcomes. Application servers can easily be added to, or removed from, the cluster as workloads vary. For example, in a retail use case, you may want to add a few application servers to deal with the exceptional load of Black Friday and switch them off again after the holiday season has finished. Application servers are most useful for applications where large numbers of transactions must be performed, but each transaction only affects a relatively small portion of the entire data set. Deployments that use application servers with ECP have been shown to support many thousands of concurrent users in a variety of industries.

Horizontal Scaling of Data Volume
When queries – usually analytic queries – must access a large amount of data, the "working dataset" that needs to be cached in order to support the query workload efficiently may exceed the memory capacity of a single machine. InterSystems IRIS provides a capability called sharding, which physically partitions large database tables across multiple server instances. Applications still access a single logical table on an instance designated as the shard master. The shard master decomposes incoming queries and sends them to the shard servers, each of which holds a distinct portion of the table data and associated indices. The shard servers process the shard-local queries in parallel and send their results back to the shard master for aggregation. Data is partitioned among shard servers according to a shard key, which can be automatically managed by the system, or defined by the software architect based on selected table columns. Through the careful selection of shard keys, tables that are often joined can be co-sharded, so rows from those tables that would typically be joined together are stored on the same shard server, enabling the join to happen entirely locally on each shard server, and thus maximizing parallelization and performance. As data volumes grow, additional shards can easily be added. Sharding is completely transparent to the application and to users. Not all tables need to be sharded. For example, in analytics applications, fact tables (e.g. orders in a retail scenario) are usually very large, and will be sharded. The much smaller dimension tables (e.g. product, point of sale, etc.) will not be. Non-sharded tables are persisted on the shard master.
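For readers who want a feel for what configuring a basic sharded cluster looks like in practice, the %SYSTEM.Sharding API can be scripted roughly as follows. This is a minimal sketch only: the namespace names, host names, and superserver port are placeholders, and the exact arguments can vary by version, so check the Scalability Guide for your release.

; Run on each shard data server instance to enable it for sharding:
do $SYSTEM.Sharding.EnableSharding()
; Run on the shard master, once per shard server, to assign it to the master namespace:
do $SYSTEM.Sharding.AssignShard("MASTER","shard1.example.com",1972,"SHARD1")
do $SYSTEM.Sharding.AssignShard("MASTER","shard2.example.com",1972,"SHARD1")
; Confirm the shards are reachable and correctly configured:
do $SYSTEM.Sharding.VerifyShards("MASTER")

Sharded tables themselves are then created with ordinary SQL DDL plus a shard key declaration, while non-sharded tables use plain CREATE TABLE statements.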
If a query requires joins between sharded and non-sharded tables, or if data from two different shards must be joined, InterSystems IRIS uses a highly efficient ECP-based mechanism to correctly and efficiently satisfy the request. In these cases, InterSystems IRIS will only share between shards the rows that are needed, rather than broadcasting entire tables over the network, as many other technologies would. InterSystems IRIS transparently improves the efficiency and performance of big data query workloads through sharding, without limiting the types of queries that can be satisfied. The InterSystems IRIS architecture enables complex multi-table joins when querying distributed, partitioned data sets without requiring co-sharding, without replicating data, and without requiring entire tables to be broadcast across networks.

Scaling Both User and Data Volumes
Many modern solutions must simultaneously support both a high transaction rate (user volume) and analytics on large volumes of data. One example: a private wealth management application that provides dashboards summarizing clients' portfolios and risk, in real time based on current market data. InterSystems IRIS enables such Hybrid Transactional and Analytical Processing (HTAP) applications by allowing application servers and sharding to be used in combination. Application servers can be added to the sharded architecture described above to distribute the workload on the shard master. Workloads and data volumes can be scaled independently of each other, depending on the needs of the application. When applications require the ultimate in scalability (for example, if a predictive model must score every record in a large table while new records are being ingested and queried at the same time), each individual data shard can act as the data server in an ECP model. We refer to the application servers that share the workloads on data shards as "query shards." This, combined with the transparent mechanisms for ensuring high availability of an InterSystems IRIS cluster, provides solution architects with everything they need to satisfy their solution's unique scalability and reliability requirements. The comparative performance and efficiency of InterSystems IRIS' approach to sharding has been demonstrated and documented in a benchmark test validated by a major technology analyst firm. In tests of an actual mission-critical financial services use case, InterSystems IRIS was shown to be faster than several highly specialized databases, while requiring less hardware and accessing more data.

Flexible Deployment
InterSystems IRIS gives software developers a great deal of flexibility when it comes to designing a highly efficient, scalable solution. But scalability may come at the cost of increased complexity, as additional servers, taking on a variety of roles, are added to the architecture. InterSystems IRIS simplifies the provisioning and deployment of servers (whether physical or virtual) in a distributed architecture. InterSystems enables simple scripts to be used for configuring InterSystems IRIS containers as a data server, shard server, shard master, application server, etc. Containers can be deployed easily in public or private clouds. They are also easily decommissioned, so scalable architectures can be designed to grow or contract with fluctuating needs.
Conclusion Massive scalability is a requirement for modern applications, particularly Hybrid Transactional and Analytical Processing applications that must handle very high workloads and data volumes simultaneously. InterSystems IRIS Data Platform gives software architects options for cost-effectively scaling their applications. It supports vertical scaling, application servers for horizontally scaling user volume, and a highly efficient approach to sharding for horizontally scaling data volume that eliminates the need for network broadcasts. All these technologies can be used independently or in combination to tailor a scalable architecture to an application’s specific requirements. More articles on the subject: Horizontal Scalability with InterSystems IRIS Running InterSystems IRIS in a FaaS mode with Kubeless Scaling Cloud Hosts and Reconfiguring InterSystems IRIS Highly available IRIS deployment on Kubernetes without mirroring Source: Massive Scalability with InterSystems IRIS Data Platform
Announcement
Anastasia Dyubaylo · Jul 18

[Video] From Idea to Impact: The InterSystems Approach

Hi Community, Enjoy the new video on InterSystems Developers YouTube: ⏯ From Idea to Impact: The InterSystems Approach @ Global Summit 2024 Tune in for the discussion of the structured approach to accelerating innovation within the company, emphasizing culture, process, and execution using InterSystems technologies. Innovation tournaments, speed brainstorming, innovation loops, and test beds are used to drive idea development from inception to impact. Presenters:🗣 @Jeffrey.Fried, Director, Platform Strategy, InterSystems🗣 @Alki.Iliopoulou, Innovation Ecosystem Leader, InterSystems Stay ahead of the curve! Watch the video and subscribe to keep learning!
Announcement
Anastasia Dyubaylo · Jul 21

Winners of the 4th InterSystems Ideas Contest

Hi Community! Our 💡 InterSystems Ideas Contest 💡 has come to an end. 26 new ideas that followed the required structure were accepted into the contest! They all focus on improving InterSystems IRIS and related products, highlighting tangible benefits for developers once the ideas are implemented. And now let's announce the winners... Expert Awards 🥇 1st place goes to the idea Extending an open source LLM to support efficient code generation in InterSystems technology by @Yuri.Gomes The winner will receive 🎁 Stilosa Barista Espresso Machine & Cappuccino Maker. 🥈 2nd place goes to the idea Streaming JSON Parsing Support by @Ashok.Kumar The winner will receive 🎁 Osmo Mobile 7. 🥉 3rd place goes to the idea Auto-Scaling for Embedded Python Workloads in IRIS by @diba The winner will receive 🎁 Smart Mini Projector XGODY Gimbal 3. Random Award Using the Wheel of Names, we've chosen one random lucky winner: 🏆 Random award goes to the idea Do not include table statistics when exporting Production for deployment by @Enrico.Parisi The winner will receive 🎁 Smart Mini Projector XGODY Gimbal 3. Here's the recording of the draw. 🔥 Moreover, all participants will get a special gift - an aluminum media stand. Let's have a look at the participants and their brilliant ideas:
@Yuri.Gomes: Extending an open source LLM to support efficient code generation in InterSystems technology
@David.Hockenbroch: Add TypeScript Interface Projection
@Enrico.Parisi: Make DICOM interoperability adapter usable in Mirror configuration/environment
@Marykutty.George1462: Ability to abort a specific message from message viewer or visual trace page
@Enrico.Parisi: Do not include table statistics when exporting Production for deployment
@Ashok.Kumar: Recursive search in Abstract Set Query
@Ashok.Kumar: TTL (Time To Live) Parameter in %Persistent Class
@Ashok.Kumar: Programmatic Conversion from SDA to HL7 v2
@Ashok.Kumar: Streaming JSON Parsing Support
@Ashok.Kumar: Differentiating System-Defined vs. User-Defined Web Applications in IRIS
@Ashok.Kumar: Need for Application-Specific HTTP Tracing in Web Gateway
@Ashok.Kumar: Add Validation for Dispatch Class in Web Application Settings
@Ashok.Kumar: Encoding in SQL functions
@Ashok.Kumar: Compression in SQL Functions
@Alexey.Maslov: Universal Global Exchange Utility
@Ashok.Kumar: Automatically Expose Interactive API Documentation
@Vishal.Pallerla: Dark Mode for Management Portal
@Ashok.Kumar: IRIS Native JSON Schema Validator
@Ashok.Kumar: Enable Schema Validation for REST APIs Using Swagger Definitions
@diba: Auto-Scaling for Embedded Python Workloads in IRIS
@Dmitry.Maslennikov: Integrate InterSystems IRIS with SQLancer for Automated SQL Testing and Validation
@Dmitry.Maslennikov: Bring IRIS to the JavaScript ORM World
@Ashok.Kumar: HTML Report for UnitTest Results
@Andre.LarsenBarbosa: AI Suggestions for Deprecated Items
@Mark.OReilly: Add a field onto OAuth Client to allow alerting expiry dates alert
@Mark.OReilly: Expose "Reply To" as default on EnsLib.EMail.AlertOperation
OUR CONGRATULATIONS TO ALL WINNERS AND PARTICIPANTS! Thank you for your attention to the Ideas Contest and the effort you devote to the official InterSystems feedback portal 💥 Important note: The prizes are in production. We will contact all the participants when they are ready to ship. If you have any questions, please contact @Liubka.Zelenskaia.
Announcement
Kevin Xu · Jul 14

Advisory for IRISSECURITY in InterSystems IRIS 2025.2

InterSystems IRIS 2025.2 introduces the IRISSECURITY database, the new home for security data. Unlike IRISSYS, the previous home for security data, IRISSECURITY can be encrypted, which secures your sensitive data at rest. In a future version, IRISSECURITY will be mirrorable. This version also introduces the %SecurityAdministrator role for general security administration tasks. The changes described here affect both continuous delivery (CD) and extended maintenance (EM) release tracks. That is, starting with versions 2025.2 (CD, releasing on 23 July, 2025) and 2026.1 (EM), InterSystems IRIS will include the IRISSECURITY database, and all security data is automatically moved from IRISSYS to IRISSECURITY when you upgrade. While InterSystems IRIS 2025.2 is expected to release on 23 July 2025, we are holding off on the public release of InterSystems IRIS for Health and HealthShare Health Connect 2025.2 while we complete work on a remediation plan for a known mirroring issue that impacts OAuth configuration data. Before You Upgrade IRISSECURITY makes several potentially breaking changes to how users interact with security data: Users can no longer directly access security globals and must instead use the APIs provided by the various security classes. OAuth2 globals can no longer be mapped to a different database. Users can no longer arbitrarily query security tables, even when SQL security is disabled. System databases now use predefined resources that cannot be changed. On Unix, if you created and assigned a new resource to a system database in a previous version, it will be replaced by the predefined resource when you upgrade (though if any roles reference the non-default resource, they must be changed manually to use the default resource to keep database access). On Windows, you must change the resource back to the default. If you attempt to upgrade on Windows while databases have non-default resources, the upgrade will halt (the instance is not modified) and display an error message "Database must have a resource label of..." The following sections go into detail about these changes and what you should do instead if you depended on the original behavior, but in general, before you upgrade, you should verify and test that your applications and macros: Use the provided security APIs to administer security (as opposed to direct global access). Have the necessary permissions (%DB_IRISSYS:R and Admin_Secure:U) for using those APIs. Global Access Previously, when security globals were stored in the IRISSYS database, users could access security data with the following privileges: %DB_IRISSYS:R: Read security globals both directly and through security APIs. %DB_IRISSYS:RW: Read and write security globals. %DB_IRISSYS:RW and Admin_Secure:U: Administer security through security APIs. In InterSystems IRIS 2025.2: Users can no longer access security globals directly. Both %DB_IRISSYS:R and %Admin_Secure:U are the minimum privileges needed to both access security data (through the provided security APIs) and administer security through the various security classes. For general security administration, you can use the new %SecurityAdministrator role. Read-only access to security data (previously available through %DB_IRISSYS:R) has been removed. 
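If you script your upgrades or provisioning, the new %SecurityAdministrator role can be granted ahead of time through the Security.Users API. A minimal sketch, run in the %SYS namespace (the username is a placeholder; verify the role name and API behavior against your target version):

; Add %SecurityAdministrator to an existing administrative account.
set username = "secadmin"
do ##class(Security.Users).Get(username,.props)
set roles = $get(props("Roles"))
set props("Roles") = $select(roles="":"%SecurityAdministrator",1:roles_",%SecurityAdministrator")
set sc = ##class(Security.Users).Modify(username,.props)
if 'sc { do $system.Status.DisplayError(sc) }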
Global Locations
In InterSystems IRIS 2025.2, the following security globals have been moved from IRISSYS to the ^SECURITY global located in IRISSECURITY: ^SYS("SECURITY"), ^OAuth2.*, ^PKI.*, and ^SYS.TokenAuthD. The following table lists the most notable globals that have been moved, their security classes, old locations, and new locations:
Security Class | Old Location (IRISSYS) | New Location (IRISSECURITY)
N/A | ^SYS("Security","Version") | ^SECURITY("Version")
Security.Applications | ^SYS("Security","ApplicationsD") | ^SECURITY("ApplicationsD")
Security.DocDBs | ^SYS("Security","DocDBsD") | ^SECURITY("DocDBsD")
Security.Events | ^SYS("Security","EventsD") | ^SECURITY("EventsD")
Security.LDAPConfigs | ^SYS("Security","LDAPConfigsD") | ^SECURITY("LDAPConfigsD")
Security.KMIPServers | ^SYS("Security","KMIPServerD") | ^SECURITY("KMIPServerD")
Security.Resources | ^SYS("Security","ResourcesD") | ^SECURITY("ResourcesD")
Security.Roles | ^SYS("Security","RolesD") | ^SECURITY("RolesD")
Security.Services | ^SYS("Security","ServicesD") | ^SECURITY("ServicesD")
Security.SSLConfigs | ^SYS("Security","SSLConfigsD") | ^SECURITY("SSLConfigsD")
Security.System | ^SYS("Security","SystemD") | ^SECURITY("SystemD")
Security.Users | ^SYS("Security","UsersD") | ^SECURITY("UsersD")
%SYS.PhoneProviders | ^SYS("Security","PhoneProvidersD") | ^SECURITY("PhoneProvidersD")
%SYS.X509Credentials | ^SYS("Security","X509CredentialsD") | ^SECURITY("X509CredentialsD")
%SYS.OpenAIM.IdentityServices | ^SYS("Security","OpenAIMIdentityServersD") | ^SECURITY("OpenAIMIdentityServersD")
OAuth2.AccessToken | ^OAuth2.AccessTokenD | ^SECURITY("OAuth2.AccessToken")
OAuth2.Client | ^OAuth2.ClientD | ^SECURITY("OAuth2.Client")
OAuth2.ServerDefinition | ^OAuth2.ServerDefinitionD | ^SECURITY("OAuth2.ServerDefinitionD")
OAuth2.Client.MetaData | ^OAuth2.Client.MetaDataD | ^SECURITY("OAuth2.Client.MetaDataD")
OAuth2.Server.AccessToken | ^OAuth2.Server.AccessTokenD | ^SECURITY("OAuth2.Server.AccessTokenD")
OAuth2.Server.Client | ^OAuth2.Server.ClientD | ^SECURITY("OAuth2.Server.ClientD")
OAuth2.Server.Configuration | ^OAuth2.Server.ConfigurationD | ^SECURITY("OAuth2.Server.ConfigurationD")
OAuth2.Server.JWTid | ^OAuth2.Server.JWTidD | ^SECURITY("OAuth2.Server.JWTidD")
OAuth2.Server.Metadata | ^OAuth2.Server.MetadataD | ^SECURITY("OAuth2.Server.MetadataD")
PKI.CAClient | ^PKI.CAClientD | ^SECURITY("PKI.CAClient")
PKI.CAServer | ^PKI.CAServerD | ^SECURITY("PKI.CAServer")
PKI.Certificate | ^PKI.CertificateD | ^SECURITY("PKI.Certificate")
%SYS.TokenAuth | ^SYS.TokenAuthD | ^SECURITY("TokenAuthD")

OAuth2 Global Mapping
Previously, you could map OAuth2 globals to a different database, which allowed OAuth2 configurations to be mirrored. In InterSystems IRIS 2025.2, OAuth2 globals can no longer be mapped, and IRISSECURITY cannot be mirrored. If you depended on this behavior for mirroring, you can use any of the following workarounds: Manually make changes to both the primary and failover. Export the settings from the primary and then import them to the failover (requires %ALL). To export OAuth2 configuration data:
set items = $name(^|"^^:ds:IRISSECURITY"|SECURITY("OAuth2"))_".gbl"
set filename = "/home/oauth2data.gbl"
do $SYSTEM.OBJ.Export(items,filename)
To import OAuth2 configuration data:
do $SYSTEM.OBJ.Import(filename)

SQL Security
Previously, SQL security was controlled by the CPF parameter DBMSSecurity. When DBMSSecurity was disabled, users with SQL privileges could arbitrarily query all tables in the database. In InterSystems IRIS 2025.2: The DBMSSecurity CPF parameter has been replaced with the system-wide SQL security property.
You can set this in several ways: Management Portal: System Administration > Security > System Security > System-wide Security Parameters > Enable SQL security SetOption: ##class(%SYSTEM.SQL.Util).SetOption("SQLSecurity", "1") Security.System.Modify: ##Class(Security.System).Modify(,.properties), where properties is properties("SQLSecurity")=1 Security tables can now only be queried through the Detail and List APIs, which require both %DB_IRISSYS:R and %Admin_Secure:U even when SQL security is disabled. For example, to get a list of roles, you can no longer directly query the Security.Roles table. Instead, you should use the Security.Roles_List() query: SELECT Name, Description FROM Security.Roles_List() Encrypting IRISSECURITY To encrypt IRISSECURITY, use the following procedure: Create a new encryption key. Go to System Administration > Encryption > Create New Encryption Key File and specify the following: Key File – The name of the encryption key. Administrator Name – The name of the administrator. Password – The password for the key file. Activate the encryption key. Go to System Administration > Encryption > Database Encryption and select Activate Key, specifying the Key File, Administrator Name, and Password from step 1. Go to System Administration > Encryption > Database Encryption and select Configure Startup Settings. From the Key Activation at Startup dropdown menu, select a key activation method. InterSystems highly recommends Interactive key activation. From the Encrypt IRISSECURITY Database dropdown, select Yes. Restart your system to encrypt IRISSECURITY. Percent-class Access Rules In previous versions of InterSystems IRIS, the procedure for managing a web application’s access to additional percent classes involved writing to security globals. You can accomplish this in InterSystems IRIS 2025.2 through the Management Portal or the ^SECURITY routine. Management Portal To create a percent-class access rule with the Management Portal: Go to System Administration > Security > Web Applications. Select your web application. In the Percent Class Access tab, set the following options: Type: Controls whether the rule applies to the application’s access to just the specified percent class (AllowClass) or all classes that contain the specified prefix (AllowPrefix). Class name: The percent class or prefix to give the application access to. Allow access: Whether to give the application access to the specified percent class or package. Add this same access to ALL applications: Whether to apply the rule for all applications. ^SECURITY To create a class access rule with the ^SECURITY routine: From the %SYS namespace, run the ^SECURITY routine: DO ^SECURITY Choose options 5, 1, 8, and 1 to enter the class access rule prompt. Follow the prompts, specifying the following: Application? – The name of the application. Allow type? – Whether the rule applies to the application's ability to access a particular class (AllowClass) or all classes that contain the specified prefix (AllowPrefix). Class or package name? – The class or prefix to give the application access to. Allow access? – Whether to give the application access to the specified class or package. IRISSECURITY cannot be mirrored, yet*. 😎 an active project now
Announcement
Anastasia Dyubaylo · May 21, 2022

[Video] Drinking Our Own Champagne: InterSystems AppServices' Move from Zen Reports to InterSystems Reports

Hey Community,

Check out the latest video on the InterSystems Developers YouTube channel:

⏯ Drinking Our Own Champagne: InterSystems AppServices' Move from Zen Reports to InterSystems Reports

The InterSystems internal applications team, which manages the Change Control Record System for both internal and customer use, moved all reporting capabilities from Zen Reports to InterSystems Reports last year. Join us for a look at the before and after, hear about lessons learned, and learn about the move's unexpected benefits.

🗣 Presenter: @Jean.Millette, Developer AppServices, InterSystems

Support this video with your 👍
Announcement
Bob Kuszewski · Jun 2, 2022

InterSystems announces General Availability of InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2022.1

InterSystems is pleased to announce that the 2022.1 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and HealthShare Health Connect are now Generally Available (GA). 2022.1 is an extended maintenance release, which means that maintenance builds will be available for two years, followed by an additional two years of security-specific builds.

Release Highlights

Platform Updates

InterSystems IRIS Data Platform 2022.1 expands support to include the following new & updated operating systems for production workloads:

- Windows Server 2022
- Windows 11
- AIX 7.3
- Oracle Linux 8

We're also happy to announce that both Apple's M1 & Intel chipsets are supported with MacOS 12 (Monterey) for development environments.

Better Development

- Embedded Python – use Python & ObjectScript inside IRIS
- Interoperability Adapters for Kafka, AWS S3, AWS SNS, & CloudWatch
- Redesigned user experience for Production Extensions (PEX)

Speed, Scale, & Security

- Online Shard Rebalancing
- Adaptive SQL
- Journal & Stream Compression
- TLS 1.3
- OAuth 2 support for email

Analytics & AI

- SQL Loader
- InterSystems Reports deployment improvements

More details on all of these features can be found in the product documentation:

- InterSystems IRIS 2022.1 documentation and release notes
- InterSystems IRIS for Health 2022.1 documentation and release notes
- HealthShare Health Connect 2022.1 documentation and release notes

How to get the software

The software is available as both classic installation packages and container images. For the complete list of available installers and container images, please refer to the Supported Platforms document.

Full installation packages for each product are available from the WRC's Software Distribution page. Using the Custom installation option, you can pick the options you need, such as InterSystems Studio and IntegratedML, to right-size your installation footprint.

Container images for the Enterprise Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry using the following commands:

docker pull containers.intersystems.com/intersystems/iris:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/irishealth:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-arm64:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/irishealth-arm64:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-ml:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-ml-arm64:2022.1.0.209.0

For a full list of the available images, please refer to the ICR documentation.

Container images for the Community Edition can also be pulled from the InterSystems Container Registry using the following commands:

docker pull containers.intersystems.com/intersystems/iris-community:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/irishealth-community:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-community-arm64:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/irishealth-community-arm64:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-ml-community:2022.1.0.209.0
docker pull containers.intersystems.com/intersystems/iris-ml-community-arm64:2022.1.0.209.0

InterSystems IRIS Studio 2022.1 is a standalone IDE for use with Microsoft Windows and can be downloaded via the WRC's components download page. It works with InterSystems IRIS and IRIS for Health version 2022.1 and below.
InterSystems also supports the VSCode-ObjectScript plugin for developing applications for InterSystems IRIS with Visual Studio Code, which is available for Microsoft Windows, Linux and MacOS. Our corresponding listings on the main cloud marketplaces will be updated in the next few days. The build number for this release is 2022.1.0.209.0.

Is this issue first added with 2022.1, solved in this latest build?

Unfortunately, we do not have a workaround for Docker's breaking change included in 2022.1. The instructions in my Docker post should still be followed.

What about previous images in containers.intersystems.com? It seems they are gone? What about InterSystems images on DockerHub? Will they be published there too? Previously we had irishealth-ml images too, e.g. containers.intersystems.com/intersystems/irishealth-ml:2021.2.0.651.0 What about this bunch of images?

And images with ZPM package manager 0.3.2 are available accordingly:

intersystemsdc/iris-community:2022.1.0.209.0-zpm
intersystemsdc/irishealth-community:2022.1.0.209.0-zpm
intersystemsdc/irishealth-ml-community:2022.1.0.209.0-zpm
intersystemsdc/irishealth-community:2022.1.0.209.0-zpm
intersystemsdc/iris-community:2021.2.0.651.0-zpm
intersystemsdc/iris-ml-community:2021.2.0.651.0-zpm
intersystemsdc/irishealth-community:2021.2.0.651.0-zpm
intersystemsdc/irishealth-ml-community:2021.2.0.651.0-zpm

And to launch IRIS do:

docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2022.1.0.209.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2022.1.0.209.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2022.1.0.209.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2022.1.0.209.0-zpm

And for terminal do:

docker exec -it my-iris iris session IRIS

and to start the control panel:

http://localhost:9092/csp/sys/UtilHome.csp

To stop and destroy the container do:

docker stop my-iris

And the FROM clause in a dockerfile can look like this:

FROM intersystemsdc/iris-community:2022.1.0.209.0-zpm

Or to take the latest image:

FROM intersystemsdc/iris-community

So if you are on the latest image already, just rebuild the repository to get the latest IRIS production image with ZPM onboard. Also, to stay up-to-date with InterSystems containers repository images, I recommend taking a look at this remarkable free extension to Docker Desktop that @Dmitry.Maslennikov contributed to the community:

As is our long-standing practice, preview releases are taken down once a GA release is available. We currently have the following versions available on ICR: 2019.1.3.722.0, 2020.1.3.521.0, 2021.1.2.338.0, 2021.2.0.651.0, 2022.1.0.209.0

Thanks, Bob!

Community images are published to DockerHub, as is the longstanding practice. Docker is currently semi-retiring their old "store" portion of DockerHub, leaving a confusing interface to find our containers. We expect that Docker will clear this up in the next few months. In the meanwhile, you can easily find InterSystems community containers via the InterSystems Organization on DockerHub: https://hub.docker.com/u/intersystems

containers.intersystems.com/intersystems/irishealth-ml:2022.1.0.209.0 is available on ICR, if that's what you're asking. We didn't list every single container in the announcement because, well, there are a lot of them

Got it!
It was listed in the previous GA announcement; that's why I'm asking.

This is very helpful, thanks!
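Since Embedded Python is one of the 2022.1 highlights listed in the announcement above, here is a minimal, hedged sketch of what calling into Python from an ObjectScript session on 2022.1 can look like; the choice of Python's math module is arbitrary and only for illustration:

// Minimal sketch: call a Python library from ObjectScript via Embedded Python
set pymath = ##class(%SYS.Python).Import("math")
write pymath.sqrt(2), !        // approximately 1.4142
write pymath.factorial(5), !   // 120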
Announcement
Benjamin De Boe · Jun 14, 2021

InterSystems IRIS, InterSystems IRIS for Health & HealthShare Health Connect 2021.1 are now Generally Available

InterSystems is very pleased to announce the 2021.1 release of InterSystems IRIS Data Platform, InterSystems IRIS for Health and HealthShare Health Connect, which are now Generally Available to our customers and partners. The enhancements in this release offer developers more freedom to build fast and robust applications in their language of choice, both server-side and client-side. This release also enables users to consume large amounts of information more effectively through new and faster analytics capabilities.

We expect many customers and partners to upgrade their Caché and Ensemble deployments to this InterSystems IRIS release and have made every effort to make that a smooth and worthwhile transition. Most applications will see immediate performance benefits just from running on InterSystems IRIS, even before they explore the many powerful capabilities IRIS brings to the table.

We kindly invite you to join our webinar presenting the highlights of the new release, at 11AM EDT on June 17. The webinar will be recorded and made available for replay afterwards. Or listen to our Data Points podcast episode on what's new in 2021.1.

Release Highlights

With InterSystems IRIS 2021.1, customers can deploy InterSystems IRIS Adaptive Analytics, an add-on product that extends InterSystems IRIS to provide business users with superior ease of use and self-service analytics capabilities to visualize, analyze, and interrogate vast amounts of data to get the information they need to make timely and accurate business decisions, without being experts in data design or data management. Adaptive Analytics transparently accelerates analytic query workloads that run against InterSystems IRIS by autonomously building and maintaining interim data structures in the background.

Other spotlight features new in this release include:

- Improved manageability for our External Language Servers, which now also cover R and Python. This gateway technology enables robust and scalable leveraging of server-side code in your language of choice.
- The InterSystems Kubernetes Operator (IKO) offers declarative configuration and automation for your environments, and now also supports deploying InterSystems System Alerting & Monitoring (SAM).
- InterSystems API Manager v2.3, including an improved user experience, Kafka support and hybrid mode.
- Mainstream availability of IntegratedML, enabling SQL developers to build and deploy Machine Learning models directly in a purely SQL-based environment.
- Support for stream fields on sharded tables, giving you full SQL schema flexibility on InterSystems IRIS' horizontally scalable architecture.
- An iris-lockeddown container image, implementing many security best practices such as disabling web access to the management portal and appropriate OS-level permissions.
- Support for Proof Key for Code Exchange (PKCE) for OAuth 2.0.

InterSystems IRIS for Health 2021.1 includes all of the enhancements of InterSystems IRIS. In addition, this release further extends the platform's comprehensive support for the FHIR® standard through APIs for parsing & evaluating FHIRPath expressions against FHIR data. This is in addition to the significant FHIR-related capabilities released since 2020.1, including support for FHIR Profiles, FHIR R4 Transforms and the FHIR client API.

This release also includes HealthShare Health Connect, our InterSystems IRIS for Health based integration engine that delivers high-volume transaction support, process management, and monitoring to support mission critical applications.
For a detailed overview of how its feature set compares to InterSystems IRIS for Health, please see here. More details on all of these capabilities can be found in the product documentation, which has recently been made even easier to navigate through a convenient Table of Contents sidebar. InterSystems IRIS 2021.1 documentation and release notes InterSystems IRIS for Health 2021.1 documentation and release notes HealthShare Health Connect 2021.1 documentation and release notes If you are upgrading from an earlier version and use TLS 1.3, please see these upgrade considerations. How to get the software InterSystems IRIS 2021.1 is an Extended Maintenance (EM) release and comes with classic installation packages for all supported platforms, as well as container images in OCI (Open Container Initiative) a.k.a. Docker format. Full installation packages for each product are available from the WRC's product download site. Using the "Custom" installation option enables users to pick the options they need, such as InterSystems Studio and IntegratedML, to right-size their installation footprint. Container images for the Enterprise Editions of InterSystems IRIS and InterSystems IRIS for Health and all corresponding components are available from the InterSystems Container Registry using the following commands: docker pull containers.intersystems.com/intersystems/iris:2021.1.0.215.0 docker pull containers.intersystems.com/intersystems/iris-ml:2021.1.0.215.0 docker pull containers.intersystems.com/intersystems/irishealth:2021.1.0.215.0 docker pull containers.intersystems.com/intersystems/irishealth-ml:2021.1.0.215.0 For a full list of the available images, please refer to the ICR documentation. Container images for the Community Edition can also be pulled from the Docker store using the following commands: docker pull store/intersystems/iris-community:2021.1.0.215.0 docker pull store/intersystems/iris-ml-community:2021.1.0.215.0 docker pull store/intersystems/irishealth-community:2021.1.0.215.0 docker pull store/intersystems/irishealth-ml-community:2021.1.0.215.0 Alternatively, tarball versions of all container images are available via the WRC's product download site. InterSystems IRIS Studio 2021.1 is a standalone IDE for use with Microsoft Windows and can be downloaded via the WRC's product download site. It works with InterSystems IRIS and InterSystems IRIS for Health version 2021.1 and below. InterSystems also supports the VSCode ObjectScript plugin for developing applications for InterSystems IRIS with Visual Studio Code, which is available for Microsoft Windows, Linux and MacOS. Other standalone InterSystems IRIS 2021.1 components, such as the ODBC driver and web gateway, are available from the same page. Sharing your experiences We only get to announce one EM release per year, so we are excited to see this version now hitting the GA milestone and eager to hear your experiences with the new software. Please don't hesitate to get in touch through your account team or here on the Developer Community with any comments on the technology or the use cases you're addressing with it. For selected new capabilities and products, we've set up Early Access Programs to allow our users to evaluate software before it gets released. Through these focused initiatives, we can learn from our target audience and ensure the new product fulfills their needs when it gets released. Please reach out through your account team or watch the Developer Community if you're interested in participating in any of these. 
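As a small illustration of the IntegratedML highlight called out above, here is a hedged sketch of the SQL-only workflow driven from ObjectScript with dynamic SQL; the table Patients and the column Diagnosis are hypothetical names used only for this example:

// Hedged IntegratedML sketch: create, train, and use a model with plain SQL
set stmt = ##class(%SQL.Statement).%New()
do stmt.%Prepare("CREATE MODEL RiskModel PREDICTING (Diagnosis) FROM Patients")
do stmt.%Execute()
do stmt.%Prepare("TRAIN MODEL RiskModel")
do stmt.%Execute()
do stmt.%Prepare("SELECT TOP 5 PREDICT(RiskModel) AS Predicted, Diagnosis FROM Patients")
set rs = stmt.%Execute()
while rs.%Next() { write rs.%Get("Predicted"), " / ", rs.%Get("Diagnosis"), ! }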
Thanks for the new release, @Benjamin.DeBoe! Is there a publicly available list of the images on the InterSystems registry? Are there any ARM images available?

Yes to both questions: For a full list of the available images, please refer to the ICR documentation. Our doc team is updating that page today, but generally, anything for which there was a 2021.1.0.205.0 tag should also have a 2021.1.0.215.0 tag. There were a few open questions on the new permutations that we were looking through this morning.

Improved manageability for our External Language Servers, which now also cover R and Python.

Are there any docs for that, especially R and Python?

I think you're looking for this. The doc is not separate for each language, but the quick reference in the last section points you at the respective class references.

Thank you Benjamin! But I see it does not mention R at all, and there's no corresponding R method in $system.external:

set javaGate = $system.external.getJavaGateway()
set netGate = $system.external.getDotNetGateway()
set pyGate = $system.external.getPythonGateway()

Can't wait for Python. I checked out R on Wikipedia; it looks interesting for graph plotting and statistical analysis. The Wikipedia page also mentioned APL, which is defined as:

APL (named after the book A Programming Language) is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, functional programming, and computer math packages. It has also inspired several other programming languages.

I learned this language while I was studying to become an actuary. During the university holidays I worked at Legal & General, the insurance company that was sponsoring my university tuition, and during one of these work experience holidays, instead of putting me in the room where the actuaries worked, they put me in a room with an IBM computer with a special keyboard and told me that this computer was specifically designed to run APL programs. Here are some examples of the language:

Simple statistics

Suppose that X is an array of numbers. Then (+/X)÷⍴X gives its average. Reading right-to-left, ⍴X gives the number of elements in X, and since ÷ is a dyadic operator, the term to its left is required as well. It is in parentheses since otherwise X would be taken (so that the summation would be of X÷⍴X, of each element of X divided by the number of elements in X), and +/X adds all the elements of X. Building on this, ((+/((X - (+/X)÷⍴X)*2))÷⍴X)*0.5 calculates the standard deviation. Further, since assignment is an operator, it can appear within an expression, so SD←((+/((X - AV←(T←+/X)÷⍴X)*2))÷⍴X)*0.5

Prime numbers

The following expression finds all prime numbers from 1 to R. In both time and space, the calculation complexity is O(R²) (in Big O notation).

(~R∊R∘.×R)/R←1↓ιR

Executed from right to left, this means:

Iota ι creates a vector containing integers from 1 to R (if R = 6 at the start of the program, ιR is 1 2 3 4 5 6)
Drop the first element of this vector (↓ function), i.e., 1.
So 1↓ιR is 2 3 4 5 6
Set R to the new vector (←, assignment primitive), i.e., 2 3 4 5 6
The / replicate operator is dyadic (binary) and the interpreter first evaluates its left argument (fully in parentheses):
Generate the outer product of R multiplied by R, i.e., a matrix that is the multiplication table of R by R (∘.× operator), i.e.,

4 6 8 10 12
6 9 12 15 18
8 12 16 20 24
10 15 20 25 30
12 18 24 30 36

Build a vector the same length as R with 1 in each place where the corresponding number in R is in the outer product matrix (∈, set inclusion or element of or Epsilon operator), i.e., 0 0 1 0 1
Logically negate (not) the values in the vector (change zeros to ones and ones to zeros) (∼, logical not or Tilde operator), i.e., 1 1 0 1 0
Select the items in R for which the corresponding element is 1 (/ replicate operator), i.e., 2 3 5

And I wrote a number of programs which had the IT department asking "How on earth did you do that and what does it mean?", to which I replied "I asked the computer to do this for me and it said 'Yes Nigel'", and the programs worked. It was then that I knew I was destined to become a developer. When, a few years later in London, I was introduced to multidimensional structures, I already knew what they were because of my exposure to APL. Dear InterSystems, how about introducing APL into our library of analytical programming languages? Nigel

I believe the R one goes through the Java gateway, but not sure why it's not described (yet) in the docs. Pinging @Robert.Kuszewski for more detail

Hi Nigel, glad you like the Python Gateway work. Perhaps you're also interested in participating in the Embedded Python EAP? As for APL, we're typically looking for some critical mass of customer demand before embarking on such a development project, and we only recall it being mentioned once before. But thanks for bringing it up, as each critical mass starts somewhere :-). This said, we've had some really great work pioneered in the community here. The Python Gateway project is one example that originated here, and its popularity (thanks @Eduard.Lebedyuk and @Sergey.Lukyanchikov !) inspired the one now released as part of the core InterSystems IRIS platform. Similar concepts, such as the Julia Gateway, are still "incubating" as community-driven projects. Maybe APL could fit that same path?
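For readers who don't read APL, here is a rough ObjectScript sketch of the same outer-product idea used by the prime-number expression discussed above; it is only an illustration of the logic (build all products of 2..R, then keep the numbers that never appear as a product), not a suggestion of how one would normally find primes:

// Sketch: primes up to R via the "outer product" idea from the APL example
set R = 6
kill products
for j=2:1:R { for k=2:1:R { set products(j*k)="" } }   // multiplication table of 2..R
set primes = ""
for n=2:1:R { if '$data(products(n)) { set primes = primes_" "_n } }
write "Primes up to ",R,":",primes,!                    // prints: Primes up to 6: 2 3 5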
And we updated the images with ZPM 0.2.14 too:

intersystemsdc/iris-community:2021.1.0.215.0-zpm
intersystemsdc/iris-ml-community:2021.1.0.215.0-zpm
intersystemsdc/iris-community:2020.4.0.547.0-zpm
intersystemsdc/irishealth-community:2021.1.0.215.0-zpm
intersystemsdc/irishealth-ml-community:2021.1.0.215.0-zpm
intersystemsdc/irishealth-community:2020.4.0.547.0-zpm

And to launch IRIS do:

docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2021.1.0.215.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2021.1.0.215.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2020.4.0.547.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2021.1.0.215.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-ml-community:2021.1.0.215.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2020.4.0.547.0-zpm

And for terminal do:

docker exec -it my-iris iris session IRIS

and to start the control panel:

http://localhost:9092/csp/sys/UtilHome.csp

To stop and destroy the container do:

docker stop my-iris

Hi Ben

Yeah, I added the comments about APL because I just happened to notice it on Wikipedia while looking at R, and it reminded me of my actuarial days. It was a curious language, being almost entirely symbolic in nature. The standard "Hello World" program that every language tutorial teaches was something more like … I learned it in 1981, and I see that there have been releases of APL2, so I guess that somewhere someone is using it (probably in mathematical modelling, which is what many actuaries do instead of designing insurance policies and staring at life expectancy tables, which was what actuaries did in those days), and I think that a language like Julia (which I have also downloaded and played with a few months ago) is much more suited to pure math.

Can I ask a question? I notice that Python is an interpreted language like ObjectScript. Was that part of the decision to include it in InterSystems IRIS (along with it being one of the most popular languages on the Top 10 language list), based on the fact that it could ultimately be compiled down to .obj code?

You're spot on. The similarity between Python and ObjectScript plus its popularity (Python's, that is ;-) ) are exactly what drove us to build Embedded Python. We're not compiling it to .obj code though, but running it "as Python" in the kernel. @Robert.Kuszewski and @David.McCaldon are much better at explaining that nuance (and actually do in the intro webinar to our early access program for this upcoming feature).

Thank you Benjamin! Is the Community Edition download available to run directly on Windows, other than the Docker image?

Yes. If you navigate to the WRC software distribution site or even just https://download.intersystems.com/, you can select Community Edition kits for a variety of platforms, including Windows.
You are right Benjamin, the R gateway goes through the Java gateway with two helper classes:

- com.intersystems.rgateway.Helper
- org.rosuda.REngine.Rserve

An example can be found here: https://github.com/grongierisc/iris-r-gateway-template

If I may, I prefer Eduard's approach for the R gateway: https://github.com/intersystems-community/RGateway which bypasses the Java gateway and uses a direct socket connection to the R server.

@Eduard.Lebedyuk: you are right, there is no documentation at this time for the R Gateway.

Thank you for the info Guillaume! I would like to clarify that I'm not an author of Community R Gateway (although I do publish it). Shiao-Bin Soong is the author of both Community R Gateway and R Gateway.

Hi Ben, I certainly would like to join the Python EAP. I was actually sent an invite but it slipped through the gap, and I had communicated with Eduard to issue another invite. I have already installed the InterSystems IRIS 2021.1 PYTHON kit and I just need a license. At Anastasia's recommendation, I sent a request to Bob to get a license. I don't know if it's too late to join the current EAP, as I gather there will be another one, but if I can join the current one then that would be great.

When I run the following:

set srv = $system.external.getServers()
write srv.%ToJSON()

["%DotNet Server","%IntegratedML Server","%JDBC Server","%Java Server","%Python Server","%R Server","%XSLT Server"]

R does appear, Nigel.

I'm running:

store/intersystems/iris-ml-community:2021.1.0.215.0

And $zv is:

IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2021.1 (Build 215U) Wed Jun 9 2021 12:37:06 EDT

Are $zv for ml and non-ml builds the same? How can I distinguish if my app is running on an ml or non-ml build?

Use docker-ls:

PS D:\Cache\distr> .\docker-ls tags --registry https://containers.intersystems.com intersystems/iris-community
requesting list . done
repository: intersystems/iris-community
tags:
- 2020.1.1.408.0
- 2020.3.0.221.0
- 2020.4.0.547.0
- 2021.1.0.215.0

Any new speed testing done or planned on InterSystems IRIS 2021? @Amir.Samary ?

eh, it's the sixth item in the list?

yes, they are the same. Like Studio and ODBC, it's an install-time option to right-size your footprint (and therefore highly relevant for container images). I'm not sure if there's a handy utility method to check if it's been installed or not, but @Thomas.Dyar would know.

What happened to intersystems/arbiter? I can't find it in the containers.intersystems.com registry:

>docker-ls.exe repositories --registry https://containers.intersystems.com
requesting list . done
repositories:
- intersystems/iris-community
- intersystems/iris-community-arm64
- intersystems/iris-ml-community
- intersystems/irishealth-aa-community
- intersystems/irishealth-community
- intersystems/irishealth-community-arm64
- intersystems/irishealth-ml-community
- intersystems/sam

However, a direct pull succeeds:

docker pull containers.intersystems.com/intersystems/arbiter:2021.1.0.215.0
2021.1.0.215.0: Pulling from intersystems/arbiter
f22ccc0b8772: Already exists
3cf8fb62ba5f: Already exists
e80c964ece6a: Already exists
cc40d98799c0: Pull complete
4179ff34652c: Pull complete
70ed38c703cc: Pull complete
ab1c2108b984: Pull complete
758289e88757: Pull complete
Digest: sha256:51c31749251bea1ab8019a669873fd33efa6020898dd4b1749a247c264448592
Status: Downloaded newer image for containers.intersystems.com/intersystems/arbiter:2021.1.0.215.0
containers.intersystems.com/intersystems/arbiter:2021.1.0.215.0

@Luca.Ravazzolo?
It's still there, docker-ls just needs the auth. docker-ls repositories --registry https://containers.intersystems.com --user "********" --password "****************" requesting list . done repositories: - intersystems/arbiter - intersystems/arbiter-arm64 ....... I'm authorized in containers.intersystems.com in docker. Is that not enough? Interesting. I see both of them { "RepositoryName": "intersystems/arbiter", "Tags": [ "2019.1.1.615.1", "2020.1.0.215.0", "2020.1.1.408.0", "2020.2.0.211.0", "2020.3.0.221.0", "2020.4.0.547.0", "2021.1.0.215.0" ] }, { "RepositoryName": "intersystems/arbiter-arm64", "Tags": [ "2020.4.0.547.0" ] }, -- Command used docker run --rm carinadigital/docker-ls \ docker-ls \ -u luxabc \ -p abcdefghijklmnopqrstuvxyz0987654321 \ --registry https://containers.intersystems.com \ repositories \ --level 2 \ --json -- Here are the updated images with ZPM and license prolonged: intersystemsdc/iris-community:2021.1.0.215.3-zpm intersystemsdc/iris-ml-community:2021.1.0.215.3-zpm intersystemsdc/irishealth-community:2021.1.0.215.3-zpm intersystemsdc/irishealth-ml-community:2021.1.0.215.3-zpm And to launch IRIS do: docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2021.1.0.215.3-zpm docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2021.1.0.215.3-zpm docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2021.1.0.215.3-zpm docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-ml-community:2021.1.0.215.3-zpm And for terminal do: docker exec -it my-iris iris session IRIS and to start the control panel: http://localhost:9092/csp/sys/UtilHome.csp To stop and destroy container do: docker stop my-iris
Announcement
Fabiano Sanches · Jan 18, 2023

InterSystems announces availability of InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2022.1.2

InterSystems is pleased to announce that the extended maintenance releases of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2022.1.2 are now available. These releases provide a few selected features and bug fixes for the 2022.1.0 and 2022.1.1 releases. You can find additional information about what has changed on these pages:

InterSystems IRIS
InterSystems IRIS for Health
HealthShare Health Connect

Please share your feedback through the Developer Community so we can build a better product together.

How to get the software

The software is available as both classic installation packages and container images. For the complete list of available installers and container images, please refer to the Supported Platforms webpage.

Full installation packages for each product are available from the WRC's Software Distribution page. Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website.

Container images for the Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry.

The build number for all kits & containers in this release is 2022.1.2.574.0.