Article
Mark Bolinsky · Mar 3, 2020

InterSystems IRIS and Intel Optane DC Persistent Memory

InterSystems and Intel recently conducted a series of benchmarks combining InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors, also known as "Cascade Lake", and Intel® Optane™ DC Persistent Memory (DCPMM). The goal of these benchmarks is to demonstrate the performance and scalability capabilities of InterSystems IRIS with Intel's latest server technologies across various workloads and server configurations. Along with the benchmark results, three different use cases of Intel DCPMM with InterSystems IRIS are provided in this report.

Overview

Two separate types of workloads are used to demonstrate performance and scaling: a read-intensive workload and a write-intensive workload. These are demonstrated separately to show the impact of Intel DCPMM on two distinct use cases: increasing database cache efficiency in a read-intensive workload, and increasing write throughput for transaction journals in a write-intensive workload. In both of these use-case scenarios, InterSystems IRIS achieved significant gains in throughput, scalability, and performance.

The read-intensive workload leveraged a 4-socket server and massive long-running analytical queries across a dataset of approximately 1.2TB of total data. With DCPMM in "Memory Mode", benchmark comparisons yielded a significant reduction in elapsed runtime: approximately six times faster than a previous-generation Intel E7v4 series processor with less memory. When comparing like-for-like memory sizes between the E7v4 and the latest server with DCPMM, there was a 20% improvement. This was due to both the increased InterSystems IRIS database cache capacity afforded by DCPMM and the latest Intel processor architecture.

The write-intensive workload leverages a 2-socket server and the InterSystems HL7 messaging benchmark, which consists of numerous inbound interfaces; each inbound message undergoes several transformations and produces four outbound messages. One of the critical components in sustaining high throughput is the message durability guarantee of IRIS for Health, and transaction journal write performance is crucial to that operation. With DCPMM in App Direct mode presenting a DAX-enabled XFS file system for transaction journals, this benchmark demonstrated a 60% increase in message throughput.

To summarize the test results and configurations: DCPMM offers significant throughput gains when used in the proper InterSystems IRIS setting and workload. The high-level benefits are increased database cache efficiency and reduced disk IO block reads in read-intensive workloads, and increased journal write throughput in write-intensive workloads. In addition, Cascade Lake based servers with DCPMM provide an excellent upgrade path for those looking to refresh older hardware and improve performance and scaling. InterSystems technology architects are available to help with those discussions and provide advice on suggested configurations for your existing workloads.

READ-INTENSIVE WORKLOAD BENCHMARK

For the read-intensive workload, we used an analytical query benchmark comparing an E7v4 (Broadwell) with 512GiB and 2TiB database cache sizes against the latest 2nd Generation Intel® Xeon® Scalable Processors (Cascade Lake) with 1TB and 2TB database cache sizes using Intel® Optane™ DC Persistent Memory (DCPMM). We ran several workloads with varying global buffer sizes to show the impact and performance gain of larger caching.
For each configuration iteration we ran a COLD and a WARM run. COLD is where the database cache was not pre-populated with any data. WARM is where the database cache had already been active and populated with data (or at least as much as it could hold) to reduce physical reads from disk.

Hardware Configuration

We compared an older 4-socket E7v4 (aka Broadwell) host to a 4-socket Cascade Lake server with DCPMM. This comparison was chosen because it demonstrates the performance gains available to existing customers looking for a hardware refresh along with using InterSystems IRIS. In all tests, the same version of InterSystems IRIS was used so that software optimizations between versions were not a factor. All servers used the same storage on the same storage array so that disk performance was not a factor in the comparison. The working set is a 1.2TB database. The hardware configurations are shown in Figure-1 with the comparison between each of the 4-socket configurations:

Figure-1: Hardware configurations

| | Server #1 Configuration | Server #2 Configuration |
| --- | --- | --- |
| Processors | 4 x E7-8890 v4 @ 2.5GHz | 4 x Platinum 8280L @ 2.6GHz |
| Memory | 2TiB DRAM | 3TiB DCPMM + 768GiB DRAM |
| Storage | 16Gbps FC all-flash SAN @ 2TiB | 16Gbps FC all-flash SAN @ 2TiB |
| DCPMM | n/a | Memory Mode only |

Benchmark Results and Conclusions

There is a significant reduction in elapsed runtime (approximately 6x) when comparing the 512GiB buffer pool to either the 1TiB or 2TiB DCPMM buffer pool sizes. In addition, comparing the 2TiB E7v4 DRAM and 2TiB Cascade Lake DCPMM configurations showed a ~20% improvement as well. This 20% gain is believed to be mostly attributable to the new processor architecture and the higher core count, given that the buffer pool sizes are the same. However, this is still significant in that the 4-socket Cascade Lake server tested had only 24 x 128GiB DCPMM modules installed, and can scale to 12TiB of DCPMM, which is about 4x the memory the E7v4 can support in the same 4-socket server footprint. The graphs in Figure-2 depict the comparison results. In both graphs, the y-axis is elapsed time (a lower number is better) comparing the results from the various configurations.

Figure-2: Elapsed time comparison of various configurations

WRITE-INTENSIVE WORKLOAD BENCHMARK

The workload in this benchmark was our HL7v2 messaging workload using all T4-type workloads. The T4 workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, four segments of the inbound message were modified in each transformation (1-to-4 with four transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database. Each system was configured with 128 inbound Business Services and 4,800 messages sent to each inbound interface, for a total of 614,400 inbound messages and 2,457,600 outbound messages.

The measurement of throughput in this benchmark workload is "messages per second". We also recorded journal writes during the benchmark runs, because transaction journal throughput and latency are critical components in sustaining high throughput. This directly influences the message durability guarantees of IRIS for Health, and transaction journal write performance is crucial to that operation. When journal throughput suffers, application processes block on journal buffer availability.
Hardware Configuration

For the write-intensive workload, we decided to use a 2-socket server. This is a smaller configuration than the previous 4-socket one, in that it only had 192GiB of DRAM and 1.5TiB of DCPMM. We compared the Cascade Lake with DCPMM to a previous 1st Generation Intel® Xeon® Scalable Processors (Skylake) server. Both servers have locally attached 750GiB Intel® Optane™ SSD DC P4800X drives. The hardware configurations are shown in Figure-3 with the comparison between each of the 2-socket configurations:

Figure-3: Write-intensive workload hardware configurations

| | Server #1 Configuration | Server #2 Configuration |
| --- | --- | --- |
| Processors | 2 x Gold 6152 @ 2.1GHz | 2 x Gold 6252 @ 2.1GHz |
| Memory | 192GiB DRAM | 1.5TiB DCPMM + 192GiB DRAM |
| Storage | 2 x 750GiB P4800X Optane SSDs | 2 x 750GiB P4800X Optane SSDs |
| DCPMM | n/a | Memory Mode & App Direct modes |

Benchmark Results and Conclusions

Test-1: This test ran the T4 workload described above on the Skylake server detailed as Server #1 Configuration in Figure-3. The Skylake server provided a sustained throughput of ~3,355 inbound messages per second with a journal file write rate of 2,010 journal writes/second.

Test-2: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, specifically with DCPMM in Memory Mode. It demonstrated a significant improvement in sustained throughput: ~4,684 inbound messages per second with a journal file write rate of 2,400 journal writes/second. This is a 39% increase compared to Test-1.

Test-3: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, this time with DCPMM in App Direct Mode but without actually configuring DCPMM to do anything. The purpose was to gauge the performance and throughput of Cascade Lake with DRAM only compared to Cascade Lake with DCPMM + DRAM. The results were not surprising in that there was a gain in throughput without DCPMM being used, albeit a relatively small one: a sustained throughput of ~4,845 inbound messages per second with a journal file write rate of 2,540 journal writes/second. This is expected behavior because DCPMM has a higher latency than DRAM, and with a massive influx of updates there is a performance penalty. Put another way, there is a <5% reduction in write ingestion throughput when using DCPMM in Memory Mode on the exact same server. Additionally, Cascade Lake with DRAM only provided a 44% increase compared to the Skylake server in Test-1.

Test-4: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, this time using DCPMM in App Direct Mode, mounted as DAX XFS for the journal file system. This yielded even higher throughput: 5,399 inbound messages per second with a journal file write rate of 2,630 journal writes/second. This demonstrated that App Direct mode is the better use of DCPMM for this type of workload. Compared to the initial Skylake configuration in Test-1, this is a 60% increase in throughput.

InterSystems IRIS Recommended Intel DCPMM Use Cases

There are several use cases and configurations for which InterSystems IRIS will benefit from using Intel® Optane™ DC Persistent Memory.
Memory Mode

This is ideal for massive database caches for either a single InterSystems IRIS deployment or a large InterSystems IRIS sharded cluster where you want to have much more (or all!) of your database cached in memory. You will want to adhere to a maximum 8:1 ratio of DCPMM to DRAM; this is important so that the "hot memory" stays in DRAM, which acts as an L4-like cache layer. This is especially important for some shared internal IRIS memory structures such as seize resources and other memory cache lines.

App Direct Mode (DAX XFS) – Journal Disk Device

This is ideal for using DCPMM as a disk device for transaction journal files. DCPMM appears to Linux as a mounted XFS file system. The benefit of using DAX XFS is that it avoids PCIe bus overhead and allows direct memory access from the file system. As demonstrated in the HL7v2 benchmark results, the write latency benefits significantly increased HL7 messaging throughput. Additionally, the storage is persistent and durable across reboots and power cycles, just like a traditional disk device. (A configuration sketch for this use case follows the Conclusion below.)

App Direct Mode (DAX XFS) – Journal + Write Image Journal (WIJ) Disk Device

This use case extends App Direct mode to both the transaction journals and the write image journal (WIJ). Both of these files are write-intensive and will certainly benefit from ultra-low latency and persistence.

Dual Mode: Memory + App Direct Modes

When using DCPMM in dual mode, the benefits of DCPMM are extended to allow for both massive database caches and ultra-low latency for the transaction journal and/or write image journal devices. In this use case, DCPMM appears to the operating system both as a mounted XFS file system and as RAM. This is achieved by allocating a percentage of DCPMM as DAX XFS and the remainder in memory mode. As mentioned previously, the installed DRAM operates as an L4-like cache to the processors.

"Quasi" Dual Mode

To extend the use-case models on a bit of a slant, there is a "quasi" dual mode for concurrent transactional and analytic workloads (also known as HTAP workloads), where there is a high rate of inbound transactions/updates for OLTP alongside analytical or massive querying needs. In this model, each InterSystems IRIS node type within an InterSystems IRIS sharded cluster operates with a different DCPMM mode: InterSystems IRIS compute nodes, which handle the massive querying/analytics workload, run with DCPMM in Memory Mode so that they benefit from a massive database cache in the global buffers, while the data nodes run either in dual mode or App Direct (DAX XFS) for the transactional workloads.

Conclusion

There are numerous infrastructure options available for InterSystems IRIS. The application, workload profile, and business needs drive the infrastructure requirements, and those technology and infrastructure choices influence the success, adoption, and importance of your applications to your business. InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ DC Persistent Memory provides groundbreaking levels of scaling and throughput for the InterSystems IRIS based applications that matter to your business.
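As a concrete footnote to the App Direct journal use case above: once a persistent-memory region has been formatted as XFS and mounted with the dax option, pointing the IRIS journals at it is an ordinary journal-settings change. The sketch below uses the standard Config.Journal API from the %SYS namespace; the /mnt/pmem0 mount point and directory names are assumptions for illustration, not part of the benchmark configuration, so verify paths and property names on your own system.

```objectscript
// Minimal sketch: relocate journal directories onto a DAX-mounted XFS
// file system. Assumes /mnt/pmem0/journal and /mnt/pmem0/journal_alt
// exist (hypothetical paths) and are writable by the IRIS instance.
ZN "%SYS"
Set sc = ##class(Config.Journal).Get(.props)
If 'sc { Do $SYSTEM.Status.DisplayError(sc) Quit }
Set props("CurrentDirectory") = "/mnt/pmem0/journal/"
Set props("AlternateDirectory") = "/mnt/pmem0/journal_alt/"
Set sc = ##class(Config.Journal).Modify(.props)
If 'sc { Do $SYSTEM.Status.DisplayError(sc) }
```

The same approach extends to the WIJ use case, which has its own directory setting in the system configuration.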
Benefits of InterSystems IRIS and Intel DCPMM capable servers include:

- Increased memory capacity, so that multi-terabyte databases can completely reside in the InterSystems IRIS or InterSystems IRIS for Health database cache with DCPMM in Memory Mode. In comparison to reading from storage (disks), this can increase query response performance by up to six times with no code changes, thanks to InterSystems IRIS's proven memory caching capabilities, which take advantage of system memory as it increases in size.
- Improved performance of high-rate data interoperability applications based on InterSystems IRIS and InterSystems IRIS for Health, such as HL7 transformations: as much as 60% higher throughput using the same processors and only changing the transaction journal disk from the fastest available NVMe drives to DCPMM in App Direct mode as a DAX XFS file system. Exploiting both memory-speed data transfers and data persistence is a significant benefit to InterSystems IRIS and InterSystems IRIS for Health.
- Augmented compute resources where needed for a given workload, whether read-intensive, write-intensive, or both, without over-allocating entire servers just for the sake of one resource component, with DCPMM in mixed mode.

InterSystems Technology Architects are available to discuss hardware architectures ideal for your InterSystems IRIS based application.

Great article, Mark! I have a few notes and questions:

1. Here's a brief comparison of different storage categories: Intel® Optane™ DC Persistent Memory has a read throughput of 6.8 GB/s and a write throughput of 1.85 GB/s (source). Intel® Optane™ SSD has a read throughput of 2.5 GB/s and a write throughput of 2.2 GB/s (source). Modern DDR4 RAM has a read throughput of ~25 GB/s. While I certainly see the appeal of DC Persistent Memory if we need more memory than RAM can provide, is it useful on a smaller scale? Say I have a few hundred gigabytes of indices I need to keep in the global buffer and be able to read-access fast. Would plain DDR4 RAM be better? Costs seem comparable, and a read throughput of 25 GB/s seems considerably better.
2. What RAM was used in the Server #1 configuration?
3. Why are there different CPUs between servers?
4. The workload link does not work.

The 6252 supports DCPMM, while the 6152 does not. The 6252 can be used for both DCPMM and DRAM configurations.

Hi Eduard, thanks for your questions.

1. On a small scale I would stay with traditional DRAM. DCPMM becomes beneficial at >1TB of capacity.
2. That was DDR4 DRAM in both the read-intensive and write-intensive Server #1 configurations. In the read-intensive server configuration it was specifically DDR-2400, and in the write-intensive server configuration it was DDR-2600.
3. There are different CPUs in the read-intensive workload configurations because this testing is meant to demonstrate upgrade paths from older servers to new technologies and the scalability increases offered in that scenario. The write-intensive workload only used a different server in the first test, to compare the previous generation to the current generation with DCPMM. The three following results demonstrated the differences in performance within the same server, just with different DCPMM configurations.
4. Thanks. I will see what happened to the link and correct it.

Correct. The Gold 6252 series (aka "Cascade Lake") supports both DCPMM and DRAM. However, keep in mind that when using DCPMM you need to have DRAM and should adhere to at least an 8:1 ratio of DCPMM:DRAM.
Article
Renan Lourenco · Mar 9, 2020

InterSystems IRIS for Health ENSDEMO (supports arm64)

# InterSystems IRIS for Health ENSDEMO

Yet another basic setup of ENSDEMO content into InterSystems IRIS for Health.

**Make sure you have Docker up and running before starting.**

## Setup

Clone the repository to your desired directory:

```bash
git clone https://github.com/OneLastTry/irishealth-ensdemo.git
```

Once the repository is cloned, execute:

**Always make sure you are inside the main directory to execute docker-compose commands.**

```bash
docker-compose build
```

## Run your Container

After building the image you can simply execute the command below and you'll be up and running 🚀:

*-d will run the container detached from your command-line session*

```bash
docker-compose up -d
```

You can now access the Management Portal through http://localhost:9092/csp/sys/%25CSP.Portal.Home.zen

- **Username:** SuperUser
- **Password:** SYS
- **SuperServer port:** 9091
- **Web port:** 9092
- **Namespace:** ENSDEMO

![ensdemo](https://openexchange.intersystems.com/mp/img/packages/468/screenshots/zhnwycjrflt4q7gttwsidcntxk.png)

To start a terminal session execute:

```bash
docker exec -it ensdemo iris session iris
```

To start a bash session execute:

```bash
docker exec -it ensdemo /bin/bash
```

Using the [InterSystems ObjectScript](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript) Visual Studio Code extension, you can access the code straight from _vscode_

![vscode](https://openexchange.intersystems.com/mp/img/packages/468/screenshots/bgirfnblz2zym4zi2q92lnxkmji.png)

## Stop your Container

```bash
docker-compose stop
```

## Support to ZPM

```bash
zpm "install irishealth-ensdemo"
```

Nice, Renan! And ZPM it, please, too!

Interesting. Is it available for InterSystems IRIS?

Will do soon!

Haven't tested, but I would guess yes; I will run some tests changing the version in the Dockerfile and post the outcome here.

Hi, here is a similar article about ENSDEMO for IRIS and IRIS for Health: https://community.intersystems.com/post/install-ensdemo-iris

Works for IRIS4Health

Also available as a ZPM module now:

```
USER>zpm "install irishealth-ensdemo"
[irishealth-ensdemo] Reload START (/usr/irissys/mgr/.modules/USER/irishealth-ensdemo/1.0.0/)
[irishealth-ensdemo] Reload SUCCESS
[irishealth-ensdemo] Module object refreshed.
[irishealth-ensdemo] Validate START
[irishealth-ensdemo] Validate SUCCESS
[irishealth-ensdemo] Compile START
[irishealth-ensdemo] Compile SUCCESS
[irishealth-ensdemo] Activate START
[irishealth-ensdemo] Configure START
[irishealth-ensdemo] Configure SUCCESS
[irishealth-ensdemo] MakeDeployed START
[irishealth-ensdemo] MakeDeployed SUCCESS
[irishealth-ensdemo] Activate SUCCESS

USER>
```

Here is the set of productions available:

Is there any documentation on what the ens-demo module can do?

Unfortunately not as much as I'd like there to be. Even when ENSDEMO was part of Ensemble, information was a bit scattered all over. If you access the Ensemble documentation and search for "Demo." you can see some of the references I mentioned. (Since IRIS does not have ENSDEMO by default, the documentation has also been removed.)

Thanks, @Renan.Lourenco ! Perhaps we could wrap this part of the documentation as a module too. Could be a nice extension to the app.

I like your idea @Evgeny.Shvarov !! How do you envision that? A simple index with easy access, like:

- DICOM: Link1, Link2
- HL7: Link1, Link2

Or something more elaborate? Also, would that be a separate module altogether or part of the existing one?

I see that the documentation pages are IRIS CSP classes. So I guess it could work if installed in IRIS.
I guess there is also a set of static files (FILECOPY could help). IMHO, the reasonable approach is to have a separate repo, ensdemo-doc, and a separate module then, which would be a dependent module of irishealth-ensdemo. That way people could contribute to the documentation independently and update it independently too.

I had my bit of fun with documentation before; it is not as straightforward as it appears to be. That's why I thought of having a separate index. I guess you know more about it.

I'd also ping @Dmitry.Maslennikov, as he tried to make a ZPM package for the whole documentation.
Article
Peter Steiwer · Mar 6, 2020

InterSystems IRIS Business Intelligence: Building vs Synchronizing

InterSystems IRIS Business Intelligence allows you to keep your cubes up to date in multiple ways. This article covers building vs synchronizing. There are also ways to keep cubes up to date manually, but these are very special cases; cubes are almost always kept current by building or synchronizing.

What is Building?

The build starts by removing all data in the cube. This ensures that the build starts from a clean state. The build then goes through all records specified by the source class. It may take all records from the source class, or a restricted set of them. As the build goes through the specified records, the data required by the cube is inserted into the cube. Finally, once all of the data has been inserted, the indices are built. During this process, the cube is not available to be queried.

The build can be executed single-threaded or multi-threaded, and can be initiated from both the UI and the Terminal. The UI is multi-threaded by default. Running a build from the Terminal also defaults to multi-threaded unless a parameter is passed in. In most cases multi-threaded builds are possible; there are specific cases where a multi-threaded build is not possible and it must be done single-threaded.

What is Synchronizing?

If a cube's source class is DSTIME-enabled (see documentation), it can be synchronized. DSTIME allows modifications to the source class to be tracked. When synchronization is called, only the records that have been modified are inserted, updated, or deleted as needed within the cube. While a synchronize is running, the cube remains available to be queried.

A synchronize can only be initiated from the Terminal. It can be scheduled in the Cube Manager through the UI, but it can't be directly executed from the UI. By default, synchronize executes single-threaded, but there is a parameter to initiate it multi-threaded. It is always a good idea to build your cube initially; it can then be kept up to date with synchronize if desired. (A minimal sketch of both calls appears at the end of this article.)

Recap of differences

| | Build | Synchronize |
| --- | --- | --- |
| Which records are modified? | All | Only records that have changed |
| Available in UI? | Yes | No |
| Multi-threaded | Yes, by default | Yes, not the default |
| Cube available for query | No (*1) | Yes |
| Requires source class modification | No | Yes, DSTIME must be enabled |

Build Updates

(*1) Starting with InterSystems IRIS 2020.1, Selective Build is an available option when building your cube. This allows the cube to be available for querying while being built selectively. For additional information see Getting Started with Selective Build.

Synchronize Updates

Starting with InterSystems IRIS 2021.2, DSTIME has a new "CONDITIONAL" option. This allows implementations to conditionally enable DSTIME for specific sites/installations.

💡 This article is considered as InterSystems Data Platform Best Practice.
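For reference, here is a minimal sketch of invoking each operation from a Terminal session via the %DeepSee.Utils API (the cube name "Patients" is a hypothetical example; optional arguments on both methods control async and verbose behavior):

```objectscript
// Full build: clears the cube contents, repopulates from the source
// class, then rebuilds the indices. The cube is unavailable meanwhile.
Do ##class(%DeepSee.Utils).%BuildCube("Patients")

// Incremental synchronize: applies only the changes tracked via DSTIME
// on the source class. The cube remains queryable while this runs.
Do ##class(%DeepSee.Utils).%SynchronizeCube("Patients")
```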
Announcement
Evgeny Shvarov · Mar 23, 2020

How to Win in InterSystems IRIS Online Programming Contest

Hi, participants of the InterSystems IRIS Online Programming Contest! This is an announcement for the current and all future participants of online contests. To win the contest you need to gather the maximum number of votes from InterSystems Developer Community members. Below are a few ideas on how to achieve that.

Winner Criteria

First of all, you need to build and submit an application which matches the terms and the winner criteria: Idea and value - the app makes the world a better place, or at least makes the life of a developer better; Functionality and usability - how well and how much the application/library does; The beauty of code - has readable, high-quality ObjectScript code. But even if you know your application is great, you need other developers to be sure of it too. Here are a few ways you can make that happen:

0. Bugs and Documentation

Use the voting week to clean up the code, fix bugs, and write accurate documentation.

1. Article on DC

Write an article on the Developer Community that describes how your app works and why it is the best application in the contest. It works even better if you link the article to the application and vice versa. Example of an article that is connected to and describes the app; the app, in turn, has a linked article on DC (the Discuss button).

2. Video on YouTube

Record a screencast where you show and pitch how your application works and solves problems. E.g. you can record the video with QuickTime or other screen-recording apps and send it to @Anastasia.Dyubaylo - we'll then publish it on the InterSystems Developers YouTube channel.

3. Social Media

We'll publish announcements of your video and article (or articles) on DC social media channels: Twitter, Facebook, and LinkedIn. And we encourage you to advertise your OEX application, article, and video on your own social networks too.

These recipes will help make your application more visible and noticed, and thus increase your chances to win! Good luck and happy coding!

Also, we will make posts about your applications in DC social media channels: Twitter, Facebook, DC Telegram, and LinkedIn. We will do it in the order you submitted the apps: earlier submitted, earlier posted on social media. And we will spread it through the 5 working days of the week.

Another thing you may want to add to your OEX and GitHub README.md is the Online Contest GitHub shield! Here is how it looks:

Here is the code you can install into your GitHub README.md:

[![Gitter](https://img.shields.io/static/v1?label=Vote%20for%20my%20App&message=InterSystems%20IRIS%20Contest&labelColor=%23333695&color=%2300b2a9)](https://openexchange.intersystems.com/contest/current)

Learn more about GitHub shields.

Hey Developers! Our contestant @Maks.Atygaev recorded a promo video specially for the IRIS Programming Contest! Please welcome: ⏯ Declarative ObjectScript Promo. Big applause! Great video content! 👏🏼 P.s. This is a prime example of how you can increase your chances of winning a contest. Let the world know about your cool apps. Don't slow down and good luck!

And another way to win: have clear instructions. Often fantastic applications with bad instructions lose to poor applications with perfect instructions. Please make sure that the instructions you have in your README.md really work. It is always helpful to go through your instruction steps yourself before releasing the application. Or ask a colleague to do it. Good luck!
Announcement
Anastasia Dyubaylo · Mar 24, 2020

New Video: InterSystems IRIS and Node.js Overview

Hi Community! Enjoy watching the new video on InterSystems Developers YouTube: ⏯ InterSystems IRIS and Node.js Overview. InterSystems IRIS™ supports a Native API for Node.js that provides direct access to InterSystems IRIS data structures from your Node.js application. Visit the Node.js QuickStart on the InterSystems learning site for more. Stay tuned with InterSystems Developers! 👍🏼

These APIs appear to be synchronous, and therefore will not be usable in a standard production Node.js environment where all concurrent users coexist in the same physical process. This is precisely the reason why QEWD was created, i.e. to allow the safe use of synchronous APIs. But then again, if you use QEWD, you won't need or use the APIs described here.
Announcement
Anastasia Dyubaylo · Jan 20, 2020

The Best InterSystems Open Exchange Developers and Applications in 2019

Hi Developers, 2019 was a really great year, with almost 100 applications uploaded to the InterSystems Open Exchange! To thank our best contributors we have special annual achievement badges in the Global Masters Advocacy Hub. This year we introduced 2 new badges for contribution to the InterSystems Open Exchange:

✅ InterSystems Application of the Year 2019
✅ InterSystems Developer of the Year 2019

We're glad to present the most downloaded applications on InterSystems Data Platforms!

Nomination: InterSystems Application of the Year
Given to developers whose application gathered the maximum number of downloads on InterSystems Open Exchange in the year 2019 (1st / 2nd / 3rd / 4th-10th places).

- 🥇 Gold, 1st place: VSCode-ObjectScript by @Maslennikov.Dmitry
- 🥈 Silver, 2nd place: PythonGateway by @Eduard.Lebedyuk
- 🥉 Bronze, 3rd place: iris-history-monitor by @Henrique
- 4th-10th places: WebTerminal by @Nikita.Savchenko7047; Design Pattern in Caché Object Script by @Tiago.Ribeiro; Caché Monitor by @Andreas.Schneider; AnalyzeThis by @Peter.Steiwer; A more useFull Object Dump by @Robert.Cemper1003; Light weight EXCEL download by @Robert.Cemper1003; ObjectScript Class Explorer by @Nikita.Savchenko7047

Nomination: InterSystems Developer of the Year
Given to developers who uploaded the largest number of applications to InterSystems Open Exchange in the year 2019 (1st / 2nd / 3rd / 4th-10th places).

- 🥇 Gold, 1st place: @Robert.Cemper1003
- 🥈 Silver, 2nd place: @Evgeny.Shvarov, @Eduard.Lebedyuk
- 🥉 Bronze, 3rd place: @Maslennikov.Dmitry, @David.Crawford, @Otto.Karlinger
- 4th-10th places: @Peter.Steiwer, @Amir.Samary, @Guillaume.Rongier7183, @Rubens.Silva9155

Congratulations! You are doing such valuable and important work for the whole community! Thank you all for being part of the InterSystems Community. Share your experience, ask, learn and develop, and be successful with InterSystems!

➡️ See also the Best Articles and the Best Questions on InterSystems Data Platform and the Best Contributors in 2019.
Announcement
Anastasia Dyubaylo · Dec 18, 2019

New Video: InterSystems IRIS Roadmap - Analytics and AI

Hi Community, The new video from Global Summit 2019 is already on InterSystems Developers YouTube: ⏯ InterSystems IRIS Roadmap: Analytics and AI This video outlines what's new and what's next for Business Intelligence (BI), Artificial Intelligence (AI), and analytics within InterSystems IRIS. We will present the use cases that we are working to solve, what has been delivered to address those use cases, as well as what we are working on next. Takeaway: You will gain knowledge of current and future business intelligence and analytics capabilities within InterSystems IRIS. Presenters: 🗣 @Benjamin.DeBoe, Product Manager, InterSystems 🗣 @tomd, Product Specialist - Machine Learning, InterSystems 🗣 @Carmen.Logue, Product Manager - Analytics and AI, InterSystems Additional materials to this video you can find in this InterSystems Online Learning Course. Enjoy watching this video! 👍🏼
Article
Timothy Leavitt · Mar 24, 2020

Unit Tests and Test Coverage in the InterSystems Package Manager

This article describes processes for running unit tests via the InterSystems Package Manager (aka IPM - see https://openexchange.intersystems.com/package/InterSystems-Package-Manager-1), including test coverage measurement (via https://openexchange.intersystems.com/package/Test-Coverage-Tool).

Unit testing in ObjectScript

There's already great documentation about writing unit tests in ObjectScript, so I won't repeat any of that. You can find the Unit Test tutorial here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TUNT_preface

It's best practice to include your unit tests somewhere separate in your source tree, whether it's just "/tests" or something fancier. Within InterSystems, we end up using /internal/testing/unit_tests/ as our de facto standard, which makes sense because tests are internal/non-distributed and there are types of tests other than unit tests, but this might be a bit complex for simple open source projects. You may see this structure in some of our GitHub repos. From a workflow perspective, this is super easy in VSCode - you just create the directory and put the classes there. With older server-centric approaches to source control (those used in Studio) you'll need to map this package appropriately, and the approach for that varies by source control extension.

From a unit test class naming perspective, my personal preference (and the best practice for my group) is: UnitTest.<package/class being tested>[.<method/feature being tested>]. For example, if the unit tests are for method Foo in class MyApplication.SomeClass, the unit test class would be named UnitTest.MyApplication.SomeClass.Foo; if the tests are for the class as a whole, it'd just be UnitTest.MyApplication.SomeClass.

Unit tests in IPM

Making the InterSystems Package Manager aware of your unit tests is easy! Just add a line to module.xml like the following (taken from https://github.com/timleavitt/ObjectScript-Math/blob/master/module.xml - a fork of @Peter.Steiwer 's excellent math package from the Open Exchange, which I'm using as a simple motivating example):

```xml
<Module>
  ...
  <UnitTest Name="tests" Package="UnitTest.Math" Phase="test"/>
</Module>
```

What this all means:

- The unit tests are in the "tests" directory underneath the module's root.
- The unit tests are in the "UnitTest.Math" package. This makes sense, because the classes being tested are in the "Math" package.
- The unit tests run in the "test" phase in the package lifecycle. (There's also a "verify" phase in which they could run, but that's a story for another day.)

Running Unit Tests

With unit tests defined as explained above, the package manager provides some really helpful tools for running them. You can still set ^UnitTestRoot, etc. as you usually would with %UnitTest.Manager, but you'll probably find the following options much easier - especially if you're working on several projects in the same environment. You can try out all of these by cloning the objectscript-math repo listed above and then loading it with zpm "load /path/to/cloned/repo/", or on your own package by replacing "objectscript-math" with your package name (and test names).
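(A quick aside before the run commands: if you're new to %UnitTest, a minimal test class following the naming convention above might look like the sketch below. Everything here is hypothetical - the class under test and its method don't exist in objectscript-math - it just shows the shape of a test case.)

```objectscript
/// Tests for method Foo in the (hypothetical) class MyApplication.SomeClass
Class UnitTest.MyApplication.SomeClass.Foo Extends %UnitTest.TestCase
{

/// Methods named Test* are discovered and run by the test manager
Method TestFooAddsTwoNumbers()
{
    Do $$$AssertEquals(##class(MyApplication.SomeClass).Foo(2, 3), 5, "Foo(2,3) = 5")
}

Method TestFooHandlesZero()
{
    Do $$$AssertEquals(##class(MyApplication.SomeClass).Foo(0, 0), 0, "Foo(0,0) = 0")
}

}
```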
- To reload the module and then run all the unit tests: `zpm "objectscript-math test"`
- To just run the unit tests (without reloading): `zpm "objectscript-math test -only"`
- To just run the unit tests (without reloading) and provide verbose output: `zpm "objectscript-math test -only -verbose"`
- To just run a particular test suite (meaning a directory of tests - in this case, all the tests in UnitTest/Math/Utils) without reloading, and provide verbose output: `zpm "objectscript-math test -only -verbose -DUnitTest.Suite=UnitTest.Math.Utils"`
- To just run a particular test case (in this case, UnitTest.Math.Utils.TestValidateRange) without reloading, and provide verbose output: `zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange"`
- Or, if you're just working out the kinks in a single test method: `zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange -DUnitTest.Method=TestpValueNull"`

Test coverage measurement via IPM

So you have some unit tests - but are they any good? Measuring test coverage won't fully answer that question, but it at least helps. I presented on this at Global Summit back in 2018 - see https://youtu.be/nUSeGHwN5pc .

The first thing you'll need to do is install the test coverage package: `zpm "install testcoverage"`

Note that this doesn't require IPM to install/run; you can find more information on the Open Exchange: https://openexchange.intersystems.com/package/Test-Coverage-Tool

That said, you can get the most out of the test coverage tool if you're also using IPM.

Before running tests, you need to specify which classes/routines you expect your tests to cover. This is important because, in very large codebases (for example, HealthShare), measuring and collecting test coverage for all of the files in the project may require more memory than your system has. (Specifically, gmheap for the line-by-line monitor, if you're curious.) The list of files goes in a file named coverage.list within your unit test root; different subdirectories (suites) of unit tests can have their own copy of this to override which classes/routines will be tracked while the test suite is running. For a simple example with objectscript-math, see: https://github.com/timleavitt/ObjectScript-Math/blob/master/tests/UnitTest/coverage.list ; the user guide for the test coverage tool goes into further details.

To run the unit tests with test coverage measurement enabled, there's just one more argument to add to the command, specifying that TestCoverage.Manager should be used instead of %UnitTest.Manager to run the tests: `zpm "objectscript-math test -only -DUnitTest.ManagerClass=TestCoverage.Manager"`

The output (even in non-verbose mode) will include a URL where you can view which lines of your classes/routines were covered by unit tests, as well as some aggregate statistics.

Next Steps

What about automating all of this in CI? What about reporting unit test results and coverage scores/diffs? You can do that too! For a simple example using Docker, Travis CI and codecov.io, see https://github.com/timleavitt/ObjectScript-Math ; I'm planning to write this up in a future article that looks at a few different approaches.

Excellent article Tim! Great description of how people can move the ball forward with the maturity of their development processes :)

Hello @Timothy.Leavitt Thank you for this great article!
I tried to add the "UnitTest" tag to my module.xml, but something went wrong during the publish process.

`<UnitTest Name="tests" Package="UnitTest.Isc.JSONFiltering.Services" Phase="test"/>`

The tests directory contains a directory tree UnitTest/Isc/JSONFiltering/Services/ with a %UnitTest.TestCase subclass.

```
Exported 'tests' to /tmp/dirLNgC2s/json-filter-1.2.0/tests/.tests
ERROR #5018: Routine 'tests' does not exist
[json-filter] Package FAILURE - ERROR #5018: Routine 'tests' does not exist
ERROR #5018: Routine 'tests' does not exist
```

I also tried with the objectscript-math project. This is the output of objectscript-math publish -v:

```
Exported 'src/cls/UnitTests' to /tmp/dir7J1Fhz/objectscript-math-0.0.4/src/cls/unittests/.src/cls/unittests
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
[objectscript-math] Package FAILURE - ERROR #5018: Routine 'src/cls/UnitTests' does not exist
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
```

Did I miss something, or is it a package manager issue? Thank you.

Perhaps try Name="/tests" with a leading slash?

Yes, that's it! We can see a dot. It works fine. Thank you for your help.

@Timothy.Leavitt Do you all still use your Test Coverage Tool at InterSystems? I haven't seen any recent updates to it on the repo, so I'm wondering if you consider it still useful and it's just in a steady-state, stable place, or are there different tactics for test coverage metrics since you published?

@Michael.Davidovich yes we do! It's useful and just in a steady state (although I have a PR in process around some of the recent confusing behavior that's been reported in the community).

Thanks, @Timothy.Leavitt! For others working through this too, I wanted to sum up some points I discussed with Tim over PM:

- Tim reiterated the usefulness of the Test Coverage tool and the Cobertura output for finding starting places based on complexity and what are the right blocks to test.
- When it comes to testing persistent data classes, it is indeed tricky but valuable (e.g. data validation steps). Using transactions (TSTART and TROLLBACK) is a good approach for this.

I also discussed the video from some years ago on the mocking framework. It's an awesome approach, but for me, it depends on retooling classes to fit the framework. I'm not in a place where I want to or can rewrite classes for the sake of testing; however, this might be a good approach for others. There may be other open source frameworks for mocking available later. Hope this helps and encourages more conversation! In a perfect world we'd start with our tests and code from there, but well, the world isn't perfect!

great summary ... thank you!

@Timothy.Leavitt and others: I know this isn't Jenkins support, but I seem to be having trouble allowing the account running Jenkins to get into IRIS. Just trying to get this to work locally at the moment. I'm running on Windows through an organizational account, so I created a new local account on the computer, jenkinsUser, which I understand is the 'user' that logs in and runs everything on Jenkins. When I launch IRIS in the build script using

`C:\MyPath\bin\irisdb -s C:\MyPath\mgr -U MYNAMESPACE 0<inFile`

I can see in the console that it's trying to log in. I turned on O/S authentication for the system and allowed the %System.Login function to use Kerberos. I can launch Terminal from my tray and I'm logged in without a user/password prompt. I am guessing that IRIS doesn't know about my jenkinsUser local account, so it won't allow that user to use O/S authentication?
I'm trying to piece this together in my head. How can I allow this computer user trying to run Jenkins access to IRIS without authentication?

Hope this helps others who are trying to set this up. Not sure if this is right, but I created a new IRIS user, created delegated access to %Service_Console, and included this in my ZAUTHENTICATE routine. Seems to have worked. Now . . . on to the next problem:

```
DO ##CLASS(UnitTest.Manager).OutputResultsXml("junit.xml")
^
<CLASS DOES NOT EXIST> *UnitTest.Manager
```

Please try %UnitTest.Manager

I had to go back . . . that was a custom class and method written for the Widgets Direct demo app. Trial and error, folks!

@Timothy.Leavitt your presentation mentioned a custom version of the Cobertura plugin for the scatter plot . . . is that still necessary, or does the current version support that? Not sure if I see any mention of the custom plugin on the GitHub page. Otherwise, I seem to be missing something key: I don't have build logic in my script. I suppose I just thought that step was for automation purposes, so that the latest code would be compiled on whatever server. I don't have anything like that yet and thought I could just run the test coverage utility, but it's coming up with nothing. I'll keep playing tomorrow, but I'd appreciate anyone's thoughts on this, especially if you've set it up before!

For those following along, I got this to work finally by creating the "coverage.list" file in the unit test root. I tried setting the parameter node "CoverageClasses", but that didn't work (maybe I used $LB wrong). Still not sure how to get the scatter plot for complexity, as @Timothy.Leavitt mentioned in the presentation that the Cobertura plugin was customized. Any thoughts on that are appreciated!

I think this is it: GitHub - timleavitt/covcomplplot-plugin: Jenkins covcomplplot plugin. It's written by Tim, it's in the plugin library, and it looks like what was in the presentation; however, I have some more digging to do come Monday.

@Michael.Davidovich I was out Friday, so still catching up on all this - glad you were able to figure out coverage.list. That's generally a better way to go for automation than setting a list of classes. re: the plugin, yes, that's it! There's a GitHub issue that's probably the same here: https://github.com/timleavitt/covcomplplot-plugin/issues/1 - it's back on my radar given what you're seeing.

So I originally installed the scatter plot plugin from the library, not the one from your repo. I uninstalled that and I'm trying to install the one you modified. I'm having a little trouble because it seems I have to download your source, make sure I have a JDK and Maven installed, and package the code into a .hpi file? Does this sound right? I'm getting some issues with the POM file while running 'mvn package'. Is it possible to provide the packaged file for those of us not Java-savvy?

For other n00bs like me . . . in GitHub you click the Releases link on the code page and you can find the packaged code.

Edit: I created a separate thread about this so it gets more visibility. The thread can be found here: https://community.intersystems.com/post/test-coverage-coverage-report-not-generating-when-running-unit-tests-zpm ...

Hello, @Timothy.Leavitt, thanks for the great article! I am facing a slight problem and was wondering if you, or someone else, might have some insight into the matter. I am running my unit tests in the following way with ZPM, as instructed. They work well and test reports are generated correctly.
Test coverage is also measured correctly according to the logs. However, even though I instructed ZPM to generate Cobertura-style coverage reports, it is not generating one. When I run the GenerateReport() method manually, the report is generated correctly. I am wondering what I am doing wrong. I used the test flags from the ObjectScript-Math repository, but they seem not to work. Here is the ZPM command I use to run the unit tests:

```
zpm "common-unit-tests test -only -verbose -DUnitTest.ManagerClass=TestCoverage.Manager -DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator -DUnitTest.UserParam.CoverageReportFile=/opt/iris/test/CoverageReports/coverage.xml -DUnitTest.Suite=Test.UnitTests.Fw -DUnitTest.JUnitOutput=/opt/iris/test/TestReports/junit.xml -DUnitTest.FailuresAreFatal=1":1
```

The test suite runs okay, but coverage reports do not generate. However, when I run these commands stated in the TestCoverage documentation, the reports are generated:

```
Set reportFile = "/opt/iris/test/CoverageReports/coverage.xml"
Do ##class(TestCoverage.Report.Cobertura.ReportGenerator).GenerateReport(<index>, reportFile)
```

Here is a short snippet from the logs where you can see that test coverage analysis is run:

```
Collecting coverage data for Test: .036437 seconds
  Test passed

Mapping to class/routine coverage: .041223 seconds
Aggregating coverage data: .019707 seconds
Code coverage: 41.92%

Use the following URL to view the result:
http://192.168.208.2:52773/csp/sys/%25UnitTest.Portal.Indices.cls?Index=19&$NAMESPACE=COMMON
Use the following URL to view test coverage data:
http://IRIS-LOCALDEV:52773/csp/common/TestCoverage.UI.AggregateResultViewer.cls?Index=17
All PASSED

[COMMON|common-unit-tests] Test SUCCESS
```

What am I doing wrong? Thank you, and have a good day! Kari Vatjus-Anttila

%UnitTest mavens may be interested in this announcement: https://community.intersystems.com/post/intersystems-testing-manager-new-vs-code-extension-unittest-framework

Hello @Timothy.Leavitt Is there a way to ensure that code sending messages through a BusinessService or BusinessProcess can be fully tracked? The current issue is that when methods contain "SendRequestSync" or "SendRequestAsync", the code at the receiving end cannot be tracked and included in the test coverage report. Thank you.

Here we are using the mocking framework that we developed (GitHub - GendronAC/InterSystems-UnitTest-Mocking: This project contains a mocking framework to use with InterSystems' products written in ObjectScript). Have a look at the https://github.com/GendronAC/InterSystems-UnitTest-Mocking/blob/master/Src/MockDemo/CCustomPassthroughOperation.cls class. Instead of calling ..SendRequestAsync we do ..ensHost.SendRequestAsync(...). Doing so enables us to create expectations (..Expect(..ensHost.SendRequestAsync(...)).
Here's a code sample:

```objectscript
Class Sample.Src.CExampleService Extends Ens.BusinessService
{

/// The type of adapter used to communicate with external systems
Parameter ADAPTER = "Ens.InboundAdapter";

Property TargetConfigName As %String(MAXLEN = 1000);

Parameter SETTINGS = "TargetConfigName:Basic:selector?multiSelect=0&context={Ens.ContextSearch/ProductionItems?targets=1&productionName=@productionId}";

// -- Injected dependencies for unit tests
Property ensService As Ens.BusinessService [ Private ];

/// initialize Business Host object
Method %OnNew(pConfigName As %String, ensService As Ens.BusinessService = {$This}) As %Status
{
    set ..ensService = ensService
    return ##super(pConfigName)
}

/// Override this method to process incoming data. Do not call SendRequestSync/Async()
/// from outside this method (e.g. in a SOAP Service or a CSP page).
Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject, ByRef pHint As %String) As %Status
{
    set output = ##class(Ens.StringContainer).%New("Blabla")
    return ..ensService.SendRequestAsync(..TargetConfigName, output)
}

}
```

```objectscript
Import Sample.Src

Class Sample.Test.CTestExampleService Extends Tests.Fw.CUnitTestBase
{

Property exampleService As CExampleService [ Private ];

Property ensService As Ens.BusinessService [ Private ];

ClassMethod RunTests()
{
    do ##super()
}

Method OnBeforeOneTest(testName As %String) As %Status
{
    set ..ensService = ..CreateMock()
    set ..exampleService = ##class(CExampleService).%New("Unit test", ..ensService)
    set ..exampleService.TargetConfigName = "Some test target"
    return ##super(testName)
}

// -- OnProcessInput tests --
Method TestOnProcessInput()
{
    do ..Expect(..ensService.SendRequestAsync("Some test target",
        ..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))).AndReturn($$$OK)
    do ..ReplayAllMocks()

    do $$$AssertStatusOK(..exampleService.OnProcessInput())

    do ..VerifyAllMocks()
}

Method TestOnProcessInputFailure()
{
    do ..Expect(..ensService.SendRequestAsync("Some test target",
        ..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))).AndReturn($$$ERROR($$$GeneralError, "Some error"))
    do ..ReplayAllMocks()

    do $$$AssertStatusNotOK(..exampleService.OnProcessInput())

    do ..VerifyAllMocks()
}

}
```

The answer about mocking is great. At the TestCoverage level, by default the tool tracks coverage for the current process only. This prevents noise / pollution of stats from other concurrent use of the system. You can override this (see the readme at https://github.com/intersystems/TestCoverage - set tPidList to an empty string), but there are sometimes issues with the line-by-line monitor if you do; #14 has a bit more info on this.

Note - question also posted/answered at https://github.com/intersystems/TestCoverage/issues/33
Announcement
Anastasia Dyubaylo · Apr 6, 2020

Webinar: What's New in InterSystems IRIS 2020.1

The latest InterSystems IRIS release (v2020.1) makes it even easier for you to build high-performance, machine learning-enabled applications to streamline your digital transformation initiatives. Join this webinar to learn about what's new in InterSystems IRIS 2020.1, including:

- Machine learning and analytics
- Integration and healthcare interoperability enhancements
- Ease of use for developers
- Even higher performance
- And more...

Speakers:
🗣 @Jeffrey.Fried, Director, Product Management - Data Platforms, InterSystems
🗣 @Joseph.Lichtenberg, Director, Product Marketing, InterSystems IRIS

Date: Tuesday, April 7, 2020
Time: 10:00 a.m. - 11:00 a.m. EDT

JOIN THE WEBINAR!

Is a recording of this going to be available?

Yes it is.

I missed it and entered via registration. JOIN THE WEBINAR!

Hi Developers! ➡️ Please find the webinar recording here. Enjoy!
Announcement
Anastasia Dyubaylo · Apr 10, 2020

New Video: What is IntegratedML in InterSystems IRIS?

Hi Community! Enjoy watching the new video on InterSystems Developers YouTube and learn about the IntegratedML feature: ⏯ What is IntegratedML in InterSystems IRIS?

This video provides an overview of IntegratedML - the feature of InterSystems IRIS Data Platform that allows developers to implement machine learning directly from the existing SQL environment. Ready to try InterSystems IRIS? Take our data platform for a spin with the IDE trial experience: Start Coding for Free. Stay tuned! 👍🏼

If you would like to explore a wider range of topics related to this video, including videos and infographics, please check out the IntegratedML Resource Guide. Enjoy!
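For a flavor of what "machine learning directly from SQL" means in practice, here is a hedged sketch of the core IntegratedML statements, executed via dynamic SQL from ObjectScript. The table, column, and model names are hypothetical; consult the IntegratedML documentation for the full syntax and options.

```objectscript
// Hypothetical example: train a model to predict a column of Hospital.Encounters.
// CREATE MODEL / TRAIN MODEL / PREDICT are IntegratedML SQL statements.
Set rs = ##class(%SQL.Statement).%ExecDirect(,
    "CREATE MODEL ReadmitModel PREDICTING (WillReadmit) FROM Hospital.Encounters")
If rs.%SQLCODE < 0 { Write rs.%Message,! }

Set rs = ##class(%SQL.Statement).%ExecDirect(, "TRAIN MODEL ReadmitModel")

// Once trained, the model is applied with PREDICT() in ordinary queries
Set rs = ##class(%SQL.Statement).%ExecDirect(,
    "SELECT TOP 5 ID, PREDICT(ReadmitModel) AS Risk FROM Hospital.Encounters")
While rs.%Next() { Write rs.%Get("ID"), ": ", rs.%Get("Risk"), ! }
```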
Question
Mohamed Hassan Anver · Apr 8, 2020

Using Entity Framework with InterSystems IRIS Data Platform

Hi There, I have Microsoft Visual Studio Community 2019 installed and tried to set up Entity Framework as per the Using Entity Framework with InterSystems IRIS Data Platform (https://learning.intersystems.com/course/view.php?id=1046) tutorial, but I can't see the ISC data source in MS Visual Studio's Data Source section. Does this mean that MS VS Community 2019 is not supported with Entity Framework? Hassan

Hello @MohamedHassan.Anver, I think that the tutorial is for EF 6, which is designed for the .NET Framework. MS is no longer promoting EF 6; right now, MS's goal is EF Core (check this: https://docs.microsoft.com/es-es/ef/efcore-and-ef6/ ), and that is the right EF to go with, in my opinion. However, IRIS does not support EF Core: https://community.intersystems.com/post/how-can-i-use-iris-net-core-entity-framework :-( Any thoughts @Robert.Kuszewski ?

Thank you @David.Reche for the reply. I wish IRIS would release support for EF Core in the near future. For now we will develop our app based on IRIS and EF.
Announcement
Anastasia Dyubaylo · Oct 6, 2023

[Webinar] GitOps using the InterSystems Kubernetes Operator

Hi Community, We're super excited to invite you to the webinar on How GitOps can use the InterSystems Kubernetes Operator, prepared as a part of the Community webinars program. Join this webinar to learn how the FHIR Breathing Identity and Entity Resolution Engine for Healthcare (better known as PID^TOO||) was created.

⏱ Date & Time: Thursday, October 19, 12:00 PM EDT | 6:00 PM CEST
👨‍🏫 Speakers: @Ron Sweeney, Principal Architect at Integration Required; Dan McCracken, COO at DevsOperative

This webinar is a must for those of you tasked with running mission-critical systems in the cloud. Tune in for GitOps, a new era of running InterSystems workloads in the cloud!

>> REGISTER HERE <<

Hey Community, a reminder about the upcoming webinar on GitOps using the InterSystems Kubernetes Operator! >> You can still register here. Discover how the cloud and InterSystems IRIS can streamline your deployments and boost productivity ✌️

🚨 Last call to register! 🚨 Let's meet tomorrow at the online webinar on How GitOps can use the InterSystems Kubernetes Operator! You'll get a technical deep dive into the inner workings of the FHIR Breathing Identity and Entity Resolution Engine for Healthcare. ⏱ TOMORROW at 12:00 PM EDT | 6:00 PM CEST ➡️ REGISTER HERE. Don't miss this opportunity to learn more about PID^TOO||!

Hey everyone, the webinar will start in 20 minutes! Please join us here. Or enjoy watching the live stream on YouTube.

Hi All, the recording of the "[Webinar] GitOps using the InterSystems Kubernetes Operator" is on InterSystems Developers YouTube! 🔥
Announcement
Olga Zavrazhnova · Nov 16, 2023

InterSystems Developer Community Roundtable - November 30 2023

Hi Developers, our next online Developer Roundtable will take place on November 30 at 10 am ET | 4 pm CET.

📍 Tech talks:
1. Foreign Tables - by @Benjamin.DeBoe, Manager, Analytics Product Management, InterSystems
2. Building "data products" with dbt and InterSystems IRIS - by @tomd, Product Manager, Machine Learning, InterSystems

We will have time for Q&A and open discussion. ▶ Update: watch the recording of the roundtable below.

Not a Global Masters member yet? Log in using your InterSystems SSO credentials to join the program.

Hi Community, please don't forget to register - we will send you a calendar hold and a reminder with a direct link to join the roundtable :) Looking forward to seeing you tomorrow!

Hi All, the roundtable has started - join us here. This is the final roundtable in 2023; looking forward to seeing you! :)

I tried to join 10 minutes before the start (20 minutes ago) but it's no longer possible: "Ooops! Sorry friend, looks like this challenge is no longer available." Enrico

Hi Enrico, correct, the challenge for registration has already expired; please use this direct link to join the roundtable.

The recording of the roundtable is now available to watch here: https://youtu.be/RxLj4d8GvkQ
Announcement
Anastasia Dyubaylo · Feb 5, 2024

Winners of InterSystems FHIR and Digital Health Interoperability Contest

Hi Community, It's time to announce the winners of the InterSystems FHIR and Digital Health Interoperability Contest! Thanks to all our amazing participants who submitted 12 applications 🔥

Experts Nomination

🥇 1st place and $5,000 go to the iris-fhirfy app by @José.Pereira, @henry, @Henrique.GonçalvesDias
🥈 2nd place and $3,000 go to the iris-fhir-lab app by @Muhammad.Waseem
🥉 3rd place and $1,500 go to the ai-query app by @Flavio.Naves, Denis Kiselev, Maria Ogienko, Anastasia Samoilova, Kseniya Hoar
🏅 4th place and $750 go to the Health Harbour app by @Maria.Gladkova, @KATSIARYNA.Shaustruk, @Maria.Nesterenko, @Alena.Krasinskiene
🏅 5th and 6th places and $300 each go to the FHIR-OCR-AI app by @xuanyou.du and the iris-hl7 app by @Oliver.Wilms
🌟 $100 go to the Fhir-HepatitisC-Predict app by @shan.yue
🌟 $100 go to the fhirmessageverification app by @珊珊.喻
🌟 $100 go to the Clinical Mindmap Viewer app by @Yuri.Gomes
🌟 $100 go to the Patient-PSI-Data app by @Chang.Dao

Community Nomination

🥇 1st place and $1,000 go to the iris-fhirfy app by @José.Pereira, @henry, @Henrique
🥈 2nd place and $750 go to the Fhir-HepatitisC-Predict app by @shan.yue
🥉 3rd place and $500 go to the FHIR-OCR-AI app by @xuanyou.du
🏅 4th place and $300 go to the iris-fhir-lab app by @Muhammad.Waseem
🏅 5th place and $200 go to the ai-query app by @Flavio.Naves, Denis Kiselev, Maria Ogienko, Anastasia Samoilova, Kseniya Hoar

Our sincerest congratulations to all the participants and winners! Join the fun next time ;)

Congrats @José Roberto Pereira, @Henry Pereira, @Henrique Dias, @Muhammad Waseem, @Flavio Naves, Denis Kiselev, Maria Ogienko, Anastasia Samoilova, Kseniya Hoar and all the participants in this FHIR contest!!

Congratulations to all the winners and organizers 👏 Once again it was a great competition and again a lot to learn. Thanks @Sylvain.Guilbaud

Congratulations to all participants!!! ![happy](https://i.giphy.com/XR9Dp54ZC4dji.gif)

thanks

Congratulations to all the participants and winners!

I'd like to thank the organizers for this contest and congratulate everyone who entered 🎉 🎉 🎉

Thank you @Sylvain.Guilbaud Really appreciate it

thanks

Thanks @José Pereira, @Henry Pereira, @Henrique Dias for your effort. Thanks for sharing your knowledge.

Thank you! ![thanks](https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExZDJsMGpkZW9sZHRlNjNsazF2MWlzeWR0bzZ6bXNhdDZ1aGloZjl6dSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/oxdg03Fr4E09wrjykN/giphy.gif)
Article
Hiroshi Sato · Feb 8, 2024

Points to note when uninstalling InterSystems IRIS on Linux

InterSystems FAQ rubric

On Linux, use the following steps to delete an instance of InterSystems IRIS (hereinafter referred to as IRIS).

(1) Stop the IRIS instance you want to uninstall using iris stop:

```bash
iris stop <instance name>
```

(2) Delete the instance information using the following command:

```bash
iris delete <instance name>
```

(3) Delete the IRIS installation directory using the rm -r command:

```bash
rm -r <install directory>
```

In addition to the installation directory, IRIS also uses (a) and (b) below:

(a) the /usr/local/etc/irissys directory
(b) /usr/bin/iris, /usr/bin/irisdb, /usr/bin/irissession

If you want to completely remove all IRIS instances from your machine, remove all of (a) and (b) in addition to the uninstallation steps above. However, these are shared by all instances, so do not remove them from Unix/Linux unless you are uninstalling all IRIS instances.

Note: the directory in (a) contains three files: the executable files (iris, irissession) and the instance information (iris.reg). The three files in (b) are symbolic links.
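Putting the steps together, a minimal sketch of a removal script might look like this. The instance name and install directory are placeholders for your environment, and the final section is commented out because it removes files shared by every instance on the machine:

```bash
#!/bin/bash
# Hedged sketch: remove a single IRIS instance on Linux.
INSTANCE="IRIS"              # placeholder instance name
INSTALL_DIR="/opt/iris"      # placeholder install directory

iris stop "$INSTANCE"        # (1) stop the instance
iris delete "$INSTANCE"      # (2) remove the instance registration
rm -r "$INSTALL_DIR"         # (3) remove the installation directory

# Only when uninstalling ALL IRIS instances from this machine:
# rm -r /usr/local/etc/irissys
# rm /usr/bin/iris /usr/bin/irisdb /usr/bin/irissession
```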