Hey Developers,
Play the new video on InterSystems Developers YouTube:
⏯ Chatbots are all the RAGe: Generative AI with InterSystems IRIS @ Global Summit 2024
InterSystems IRIS is a Complete Data Platform
InterSystems IRIS gives you everything you need to capture, share, understand, and act upon your organization’s most valuable asset – your data.
As a complete platform, InterSystems IRIS eliminates the need to integrate multiple development technologies. Applications require less code, fewer system resources, and less maintenance.
Hello.
I have a business process with an "ASTM" class.
Inside this process I have a business rule, and inside this rule I want to get a specific field from the message.
I will use the same logic as for HL7 messages, following the "path" to the segment and the field, and use it inside the routing rule as I did for HL7 messages.
I have tried many prefixes, such as "ASTM", "X12", "ESI.X12", and so on, but I only get errors like << PROPERTY DOES NOT EXIST >>.
Can someone tell me what I'm doing wrong?
Thanks.
Thomas.
IRIS Health Monitor is part of System Monitor (see here).
The intention is to further process the captured sensor readings in order to identify the "health" of a system, by checking the reading values against pre-defined Base, Min, and Max absolute values and alerting accordingly. Alternatively, instead of absolute values, you can create Charts (which can differ for different periods of the day) that contain learned minimum and maximum values, produced after the system has spent time (at least 24 hours) analysing sensor readings.
The sensor readings included in Health Monitor are defined here.
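Conceptually, the Base/Min/Max checking and the learned Charts described above can be sketched like this in Python (all names here are illustrative only, not part of the System Monitor API):

```python
# Illustrative sketch of Health Monitor-style alerting.
# check_reading and learn_chart are invented names, not the real API.

def check_reading(value, min_val, max_val):
    """Return an alert label when a sensor reading leaves its allowed band."""
    if value < min_val:
        return "LOW"
    if value > max_val:
        return "HIGH"
    return "OK"

def learn_chart(readings):
    """Derive a 'chart' (learned min/max) from a period of observed readings."""
    return min(readings), max(readings)

readings = [12.1, 13.4, 11.8, 14.0, 12.9]
lo, hi = learn_chart(readings)
print(check_reading(13.0, lo, hi))  # within the learned band
print(check_reading(15.2, lo, hi))  # above the learned max
```

The real Health Monitor additionally distinguishes periods of the day, so each period gets its own learned chart.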
Journaling is a critical IRIS feature and part of what makes IRIS a reliable database. While journaling is fundamental to IRIS, there are nuances, so I wrote this article to summarize (more briefly than our documentation, which has all the details) what you need to know. I realize the irony of calling a 27-minute read brief.
Every modification to a journaled database (sets and kills) is recorded with its timestamp in a journal file. This runs in parallel with writes to the databases and the write image journal (WIJ) for redundancy.
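The idea can be illustrated with a minimal redo-log sketch in Python (purely conceptual; this is not the IRIS journal file format or API, just the principle that timestamped set/kill records can be replayed to rebuild state):

```python
# Conceptual illustration of a redo log (NOT the actual IRIS journal format):
# every set/kill is appended with a timestamp so it can be replayed after a crash.
import time

journal = []   # stands in for the journal file
database = {}  # stands in for the database

def journaled_set(key, value):
    journal.append((time.time(), "SET", key, value))
    database[key] = value

def journaled_kill(key):
    journal.append((time.time(), "KILL", key, None))
    database.pop(key, None)

def replay(entries):
    """Rebuild state by replaying journal entries, as crash recovery would."""
    db = {}
    for _ts, op, key, value in entries:
        if op == "SET":
            db[key] = value
        else:
            db.pop(key, None)
    return db

journaled_set("^demo(1)", "a")
journaled_set("^demo(2)", "b")
journaled_kill("^demo(1)")
print(replay(journal) == database)  # True: replay reproduces the database
```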
Hi Developers!
Recently we launched InterSystems Package Manager - ZPM. One of the intentions of ZPM is to let you package your solution and submit it to the ZPM registry, making its deployment as simple as an "install your-package" command.
To do that, you need to add a module.xml file to your repository which describes what your InterSystems IRIS package consists of.
This article describes the different parts of module.xml and will help you craft your own.
I will start with the samples-objectscript package, which installs the Sample ObjectScript application into IRIS and can be installed with:
zpm: USER>install samples-objectscript
It is probably the simplest package ever and here is the module.xml which describes the package:
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
<Document name="samples-objectscript.ZPM">
<Module>
<Name>samples-objectscript</Name>
<Version>1.0.0</Version>
<Packaging>module</Packaging>
<SourcesRoot>src</SourcesRoot>
<Resource Name="ObjectScript.PKG"/>
</Module>
</Document>
</Export>

Hello Community,
In this article, I will outline and illustrate the process of implementing ObjectScript within embedded Python. This discussion will also reference other articles related to embedded Python, as well as address questions that have been beneficial to my learning journey.
As you may know, the integration of Python features within IRIS has been possible for quite some time. This article will focus on how to seamlessly incorporate ObjectScript with embedded Python.
Essentially, embedded Python serves as an extension that allows Python code to be written and executed directly within IRIS.
While testing the added Multi-Namespace feature, I hit a challenge
that required intervention: this simple request produced 1000 lines of output.
USER>do ^rcc.find
----------------
enter search string [$ZU] <blank> to exit:
Verbose? (0,1) [0]:
Force UpperCase? (1,0) [1]:
enter code type (CLS,MAC,INT,INC,ALL) [ALL]:
select namespace (ALL,%SYS,DOCBOOK,ENSDEMO,ENSEMBLE,SAMPLES,USER) [USER]: all
Hello Community,
The Certification Team of InterSystems Learning Services is excited to announce the release of our new InterSystems IRIS SQL Specialist exam. It is now available for purchase and scheduling in the InterSystems exam catalog. Potential candidates can review the exam topics and practice questions to orient themselves to the exam's question approaches and content. Candidates who successfully pass the exam will receive a digital certification badge that can be shared on social media accounts like LinkedIn.
Hi there,
I'm discovering IRIS and I need to do a POC of the solution, with one constraint: containerization.
I'm used to deploying my apps in a Swarm cluster, and all my bind volumes are written to a GlusterFS volume.
The problem: when I start my stack, the first log line is:
[WARN] ISC_DATA_DIRECTORY is located on a mount of type 'fuse.glusterfs' which is not supported, consider a named volume for '/iris_conf'
And of course the deployment fails.
Any idea? How can I provide my data on all my cluster nodes? I read this article: https://community.intersystems.
Hello everyone! Here in our company we are trying to develop a tool to help us with our "Code Review", which today is done entirely by another developer.
So I need to develop a tool that reads a class/routine (already done) and identifies whether the current line contains an abbreviated command, which is against our coding policy, for example:
s variable = "test"
d ..SomeMethod()
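A minimal sketch of such a check in Python might look like this (the abbreviation list and regular expression here are illustrative and deliberately partial, not a complete ObjectScript parser):

```python
import re

# Partial, illustrative map of abbreviated ObjectScript commands (not exhaustive).
ABBREVIATIONS = {"s": "set", "d": "do", "w": "write", "k": "kill", "q": "quit", "i": "if"}

# Flag a single-letter command token at the start of a line (allowing for
# leading whitespace and dot-syntax), followed by whitespace.
pattern = re.compile(r"^[\s.]*([a-z])\s", re.IGNORECASE)

def find_abbreviation(line):
    """Return (abbreviation, full command) if the line starts with one, else None."""
    m = pattern.match(line)
    if m and m.group(1).lower() in ABBREVIATIONS:
        return m.group(1), ABBREVIATIONS[m.group(1).lower()]
    return None

print(find_abbreviation('s variable = "test"'))    # ('s', 'set')
print(find_abbreviation('d ..SomeMethod()'))       # ('d', 'do')
print(find_abbreviation('set variable = "test"'))  # None
```

A real checker would also need to handle commands after other commands on the same line, postconditionals, and strings/comments, so treat this only as a starting point.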
So if you are following from the previous post, or dropping in now, let's segue into the world of eBPF applications and take a look at Parca, which builds on our brief investigation of performance bottlenecks using eBPF, but puts a killer app on top of your cluster to monitor all your IRIS workloads, continually, cluster wide!
Continuous Profiling with Parca, IRIS Workloads Cluster Wide
Hi there, I'm wondering if anyone has run into an issue with <FILEFULL> when building an image from the ISC image? Specifically, in our build we pre-load our codebase into the image to make deployments faster, set up source control, etc. When loading our libraries, however, we hit a <FILEFULL>. The resource limits on Docker are pretty beefy, and when observing resources at both the machine and container level we don't hit any issues. Oddly, this only happens when using the ARM64 version. When using the AMD64 version of 2024.
Hi Community,
Play the new video on InterSystems Developers YouTube:
⏯ InterSystems IRIS Vector Search and the Python Ecosystem @ Global Summit 2024
Hi Community!
We are pleased to invite you to a new webinar in Spanish: ‘Facial recognition applied to application login using JavaScript+IRIS’, on Thursday 26th September, at 4:00 PM (CEST).
I attended Cloud Native Security Con in Seattle with the full intention of crushing OTEL day, then perusing the subject of security applied to Cloud Native workloads in the days leading up to the CTF, as a professional exercise. This was happily upended by a new understanding of eBPF, which gave my screens, career, workloads, and attitude a much-needed upgrade with new approaches to solving workload problems.
So I made it to the eBPF party and have been attending clinic after clinic on the subject ever since. Here I would like to "unbox" eBPF as a technical solution, mapped directly to what we do in practice (even if it's a bit off), and step through my experimentation with eBPF in support of InterSystems IRIS workloads, particularly on Kubernetes, though not exclusively so.
An IRIS.DAT file was removed, as it was not needed anymore. But the database was mirrored, so it still shows up in the mirror monitor and database list. How can this be fixed? There is no backup of the .DAT file so it cannot be restored.
w ##class(SYS.Mirror).RemoveMirroredDatabase("/mydir/")
throws a protect error.
(everything else works fine and this is not a production system, it only is annoying in the mirror monitor and database list)
In today's data landscape, businesses encounter a number of challenges. One of them is running analytics on top of a unified and harmonized data layer available to all consumers; a layer that can deliver the same answers to the same questions regardless of the dialect or tool being used. InterSystems IRIS Data Platform answers that with the Adaptive Analytics add-on, which can deliver this unified semantic layer. There are a lot of articles on DevCommunity about using it via BI tools.
In SQL, NULL data and the empty string ('') are different data. The method for setting and checking each is as follows.
(1) NULL data
[SQL]
insert into test(a) values(NULL)
select * from test where a IS NULL

[InterSystems ObjectScript]
set x=##class(User.test).%New()
set x.a=""

(2) Empty string ('')
[SQL]
insert into test(a) values('')
select * from test where a = ''

[InterSystems ObjectScript]
set x=##class(User.test).%New()
set x.a=$C(0)

For more information, please refer to the following documents:
When I was searching for ideas to migrate an IRIS DB to Oracle, I found a tool called "ESF Data Migration tools" on the internet, with easy steps.
Can we use them to migrate?
Link for your reference
https://www.dbsofts.com/articles/cache_to_oracle/
Thanks in advance
I need to know whether a given package exists or not.
Currently I have found two solutions - one doesn't work, and the other works but I don't like it.
Solution 1.
I started, of course, with the %Dictionary package - it has a PackageDefinition class after all.
However, %ExistsId returned 0 on packages that clearly do exist, so I went to %LoadData, which uses this macro to determine if the package exists:
#define PACKAGE(%package) ^oddPKG($zcvt(%package,"u"))

And zw ^oddPKG showed the root cause - the ^oddPKG global only contains data for packages with tables (or something along those lines).
Solution 2.
For those of you who still use the Studio IDE for ObjectScript programming and are going through the process of migrating to VS Code, did you know there's a section in the VS Code documentation just for you? Have a look at the Migrating from Studio chapter. It covers:
And now there's a new section, Keyboard Shortcuts, that shows you the VS Code equivalent for shortcuts you may be used to, so your hands never have to leave the keyboard.
I'm trying to convert the date 2023-09-28T20:35:41Z to BST/GMT format. I tried $ZDT($ZDTH("2023-09-28T20:35:41Z",-2),8,1) but it gives the output '19700101 01:33:43', so it looks like the date format specified in $ZDTH is wrong. Any inputs or solutions would be appreciated.
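As a possible alternative, if embedded or standalone Python is available, the conversion can be sketched with the standard library instead of $ZDTH/$ZDT (Europe/London covers both BST and GMT, switching automatically by date):

```python
# Sketch: parse the UTC ISO-8601 timestamp and convert it to UK time.
from datetime import datetime
from zoneinfo import ZoneInfo

ts = "2023-09-28T20:35:41Z"
utc = datetime.fromisoformat(ts.replace("Z", "+00:00"))  # aware UTC datetime
uk = utc.astimezone(ZoneInfo("Europe/London"))           # BST (UTC+1) in September
print(uk.strftime("%Y-%m-%d %H:%M:%S"))  # 2023-09-28 21:35:41
```

This sidesteps the $ZDTH format-code question entirely, at the cost of leaving ObjectScript's date handling.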
Hello,
I've recently updated the Python version on a Linux server running Red Hat Enterprise Linux 8.10 (Ootpa). We have a 2023.1 instance running there, and whenever I run $System.Python.Shell() I can see it's still pointing to the old version. From within Linux, it runs the latest one (we've changed all the links to the new 3.11, so no scripts are broken).
So I guess the problem comes from the fact that irispython is still compiled against the old Python version. How can I force IRIS to use the current version on the server, or update the irispython file?
Thanks!
I encountered an unexpected behavior while working with the $ZTIMEH and $ZTIME functions, specifically with times between 12:00 and 13:00. Here's what I observed:
W $ZTIMEH("08:50:38.975411826")
Output: 31838
W $ZTIME(31838,1)
Output: 08:50:38
This behavior is correct as $ZTIME returns the expected time of 08:50:38.
However, with the following example:
W $ZTIMEH("12:05:38.975411826")
Output: 338
W $ZTIME(338,1)
Output: 00:05:38
This seems incorrect to me. $ZTIME should have returned 12:05:38, but instead it returns 00:05:38.
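The seconds arithmetic makes the discrepancy concrete: 338 is 12:05:38 with the 12 hours dropped, which looks consistent with the literal 12 being treated as 12 AM (midnight):

```python
# 12:05:38 expressed as seconds past midnight:
expected = 12 * 3600 + 5 * 60 + 38
print(expected)      # 43538, what $ZTIMEH should return
print(5 * 60 + 38)   # 338, the value actually returned (hour lost)
print(expected - 338)  # 43200 = exactly 12 hours
```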
Hi,
I'm getting unexpected behavior when using the pandas function to_sql(), which uses sqlalchemy-iris. After the first execution, a transaction seems to be left open, and all inserted rows are lost after closing the connection:
engine = create_engine(f"iris://{args['username']}:{args['password']}@{args['hostname']}:{args['port']}/{args['namespace']}")
conn = engine.connect()
# rows are kept after close connection
train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)
# rows **aren't** kept after close connection
train_df.

Hey Community,
Watch this video to learn how InterSystems can help you access, understand, and use your data quickly and easily:
⏯ Unleash the Power of Your Data with InterSystems @ Global Summit 2024
Currently we are exploring how to allocate additional disk space to our environment, as we have seen significant growth in our database files. We currently have 3 namespaces, each with a single IRIS.dat that contains both the globals and the routines.
Since we started down the route of a single IRIS.dat file per namespace, is it feasible, as we see growth, to split each namespace's current IRIS.dat into one IRIS.dat for globals and another IRIS.dat for routines, in a mirrored environment?
In the preceding section, we explored the installation process and began writing to IRIS with Native Python. We will now proceed to examine global traversal and interact with IRIS class objects.
get: this function is used to get values from the traversal node.
def traversal_firstlevel_subscript():
    """
    ^mygbl(235)="test66,62" and ^mygbl(912)="test118,78"
    """
    for i in irispy.node('^mygbl'):
        print(i, gbl_node.

Hello developers!
Last year, for the first time, we held the Technical Article Contest on Japan's InterSystems Developer Community, and 📣 we are holding it again this year!📣
The topics are the same as last year, and you can submit any content related to InterSystems IRIS/InterSystems IRIS for Health.
🖋 InterSystems Japan Technical Article Contest – 2024: Articles related to IRIS 🖋
🎁 Participation prize: Everyone who submits a post will receive our 👚 Developer Community's original T-shirt 👕!!
🏆 Special Prize: Authors of three selected works will receive special prizes.
Updated on 30/8: Prize information added! Please check it out! 👇
Hi community members!
I'm testing some functionality around Foreign Tables. It works smoothly with a PostgreSQL database, but I found an issue with a MySQL database. I followed the documentation:
CREATE FOREIGN SERVER Test.MySQLDB FOREIGN DATA WRAPPER JDBC CONNECTION 'MySQL'
CREATE FOREIGN TABLE Test.PatientMySQL SERVER Test.MySQLDB TABLE 'patient'