Clear filter
Announcement
Olga Zavrazhnova · Sep 13, 2022
Fall Hackathon Season is here!
InterSystems will take part in HackMIT, a long-weekend hackathon organized by MIT (Massachusetts Institute of Technology), where thousands of students come together to work on cool software and/or hardware projects. This year HackMIT is in person again, at the MIT campus, and will take place during the first weekend of October. This year the main tracks are Education, Sustainability, New Frontiers, and Entertainment. The InterSystems challenge will be related to 1 or 2 of the main tracks and will be revealed on September 28. Stay tuned!
HackMIT is a great event! I hope this year's could be bigger than the last one. Agree - last year's event was so much fun! And it was so successful, with the help of our superb mentors! :) This year they're doing it fully in person; I also hope it will be big and awesome!
Announcement
Anastasia Dyubaylo · Nov 13, 2022
Hi Developers,
We're pleased to announce that InterSystems is hosting its partner days in Germany – InterSystems Partnertage DACH 2022!
During this time you will be able to exchange product innovations and practical tips and tricks with InterSystems experts and your colleagues in Darmstadt. And of course, a lot of networking, because there is a lot to catch up on!
🗓 Dates: November 23-24, 2022
📍 Venue: Wissenschafts- und Kongresszentrum darmstadtium in the heart of Darmstadt, Schloßgraben 1, 64283 Darmstadt
The agenda of the two-day partner days consists of a mix of keynotes, informative sessions, and masterclasses. Read on for more details.
The agenda at a glance:
November 23 (focus on healthcare):
Innovations in healthcare
Transferring innovative ideas into concrete projects – with InterSystems technology
Outlook on the next development steps for InterSystems HealthShare and InterSystems IRIS for Health
Masterclass sessions with our Sales Engineering team (InterSystems SAM, Embedded Python for ObjectScript developers)
November 24:
Keynotes on the successful cross-industry use of InterSystems technologies
Our innovative data management concept "Smart Data Fabric" – what is it all about?
Round table "Migration to InterSystems IRIS" – our partners share their experiences
Presentations and live demos focusing on the following key topics: Columnar Storage, IntegratedML, Container-Support, InterSystems Reports
Masterclass sessions on "social selling"
✅ REGISTER HERE
We look forward to seeing you in Darmstadt!
Announcement
Anastasia Dyubaylo · Jan 13, 2023
Hey Developers,
We'd like to invite you to join our next contest dedicated to creating useful tools to make your fellow developers' lives easier:
🏆 InterSystems Developer Tools Contest 🏆
Submit an application that helps to develop faster, produce higher-quality code, and test, deploy, support, or monitor your solution with InterSystems IRIS.
Duration: January 23 - February 12, 2023
Prize pool: $13,500
The topic
💡 InterSystems IRIS developer tools 💡
In this contest, we expect applications that improve the developer experience with IRIS: tools that help to develop faster, produce higher-quality code, and test, deploy, support, or monitor your solution with InterSystems IRIS.
General Requirements:
Accepted applications: apps new to Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
The application should work on InterSystems IRIS Community Edition.
Types of applications that match: UI frameworks, IDEs, database management, monitoring, and deployment tools, etc.
The application should be an Open Source application and published on GitHub.
The README file for the application should be in English, contain the installation steps, and include a video demo and/or a description of how the application works.
One developer can enter the competition with a maximum of 3 applications.
Prizes
1. Experts Nomination - a specially selected jury will determine the winners:
🥇 1st place - $5,000
🥈 2nd place - $3,000
🥉 3rd place - $1,500
🏅 4th place - $750
🏅 5th place - $500
🌟 6-10th places - $100
2. Community winners - the applications that receive the most votes in total:
🥇 1st place - $1,000
🥈 2nd place - $750
🥉 3rd place - $500
If several participants receive the same number of votes, they are all considered winners, and the prize money is shared among them.
Important Deadlines:
🛠 Application development and registration phase:
January 23, 2023 (00:00 EST): Contest begins.
February 5, 2023 (23:59 EST): Deadline for submissions.
✅ Voting period:
February 6, 2023 (00:00 EST): Voting begins.
February 12, 2023 (23:59 EST): Voting ends.
Note: Developers can improve their apps throughout the entire registration and voting period.
Who can participate?
Any Developer Community member, except for InterSystems employees. Create an account!
👥 Developers can team up to create a collaborative application. Teams of 2 to 5 developers are allowed.
Do not forget to highlight your team members (their DC user profiles) in the README of your application.
Helpful resources
✓ Example applications:
iris-rad-studio - RAD for UI
cmPurgeBackup - backup tool
errors-global-analytics - errors visualization
objectscript-openapi-definition - open API generator
Test Coverage Tool - test coverage helper
and many more.
✓ Templates we suggest you start from:
iris-dev-template
rest-api-contest-template
native-api-contest-template
iris-fhir-template
iris-fullstack-template
iris-interoperability-template
iris-analytics-template
✓ For beginners with IRIS:
Build a Server-Side Application with InterSystems IRIS
Learning Path for beginners
✓ For beginners with ObjectScript Package Manager (ZPM):
How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS
Package First Development Approach with InterSystems IRIS and ZPM
✓ How to submit your app to the contest:
How to publish an application on Open Exchange
How to submit an application for the contest
Need Help?
Join the contest channel on InterSystems' Discord server or talk with us in the comments to this post.
We can't wait to see your projects! Good luck 👍
By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.
What a contest!
It'd be great if someone could implement a tool to export Interoperability components into a local folder, with every change saved in the Interoperability UI.
Currently git-source-control can do the job, but it is not complete. Some Interoperability components (e.g. business rules) are not being exported. And lookup tables are exported in a non-importable format.
I published an idea regarding it.
For your inspiration, see also the ideas with the "Community Opportunity" status on the Ideas Portal. Here are more ideas for your inspiration by @Guillaume.Rongier7183: https://community.intersystems.com/post/kick-webinar-intersystems-developer-tools-contest#comment-212426

@Evgeny.Shvarov as we've covered in GitHub issues, the business rule issue is a product-level issue (in the new Angular rule editor only, not the old Zen rule editor). I clarified https://github.com/intersystems/git-source-control/issues/225 re: the importable format.
The non-"wrapped" XML export format is importable by git-source-control and, I believe, IPM itself, although not by $System.OBJ.Load. It's just a matter of preference/readability. In a package manager context, being loadable by $System.OBJ.Load isn't as important, and while the enclosing <Export> and <Document> tags aren't as annoying for XML files as for XML-exported classes/routines/etc., they're still annoying and distract from the true content of the document. Also - git-source-control 2.1.0 fixes issues with import of its own export format. You should try it out. ;)

Nice job Tim! Thank you for continuing to improve this :)

Hey Developers,
Watch the recording of the Kick-off Webinar on InterSystems Developers YouTube:
⏯ [Kick-off Webinar] InterSystems Developer Tools Contest
Community!
There are 5 apps that have been added to the contest already!
GlobalStreams-to-SQL by @Robert.Cemper1003
helper-for-objectscript-language-extensions by @Robert.Cemper1003
gateway-sql by @MikhailenkoSergey
xml-to-udl by @Tommy.Heyding
iris-persistent-class-audit by @Stefan.Cronje1399
Which application is going to be next?!

Dear Developers!

Please use technology bonuses to collect more votes and get closer to victory. 🥳

Happy coding! ✌

I am now in the running with DX Jetpack for VS Code.

Hi Evgeny,
The ompare tool has a Schedule option that happily exports classes, routines, productions, lookup tables, HL7 schemas. It can be configured to traverse multiple namespaces in a single run, generating an export file for each namespace.
Wonder if this is useful for your case. A periodic export / snapshot of some components.
Cheers,
Alex Devs!
Four more applications have been uploaded to the contest!
DX Jetpack for VS Code by @John.Murray
JSONfile-to-Global by @Robert.Cemper1003
apptools-admin by @MikhailenkoSergey
irissqlcli by @Dmitry.Maslennikov
Check them out!

I just uploaded my app... It is the first time that I participate in one of the programming contests... I hope you like it.

My helper was eliminated today by censors: helper-for-objectscript-language-extensions, tag version 0.0.3, released 2023-01-26 22:33:33.
Question
Evgeny Shvarov · May 13, 2023
Hi folks!
Those who actively use unit tests with ObjectScript know that test methods are instance methods, not classmethods.
Sometimes this is not very convenient. Currently, when a test method fails, I COPY(!) the method somewhere else as a classmethod and run/debug it there.
Is there a handy way to call a particular unit test method in the terminal? And, more importantly, a handy way to debug a test method?
Why do we have unit test methods as instance methods?
VSCode has a way to help with running tests, but it requires some implementation on our side.

A handy way to call a UnitTest Case method in a terminal:
Do ##class(%UnitTest.Manager).DebugRunTestCase("", "[ClassName]", "", "[MethodName]")
Run all Test methods for a TestCase:
Do ##class(%UnitTest.Manager).DebugRunTestCase("", "[ClassName]", "", "")
Placing a BREAK line within a method can be useful when iterating on a test: inspect the variables, run other code, then type "g" + [Enter] to continue.
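To illustrate, here is a minimal sketch of a test class with a BREAK in place. The class and method names here are hypothetical, not from the thread:

```objectscript
Class Test.Example Extends %UnitTest.TestCase
{

Method TestAddition()
{
    Set result = 1 + 1
    // Execution pauses here when run via DebugRunTestCase;
    // inspect variables, then type "g" + [Enter] to continue
    BREAK
    Do $$$AssertEquals(result, 2, "1 + 1 should equal 2")
}

}
```

You could then run it with Do ##class(%UnitTest.Manager).DebugRunTestCase("", "Test.Example", "", "TestAddition").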
The instance gives context to the current test run when raising assertions and other functionality.

Thanks, Dima! Is there a related issue listed?

Thanks, Alex.
See the following:
USER>Do ##class(%UnitTest.Manager).DebugRunTestCase("", "dc.irisbi.unittests.TestBI", "", "TestPivots")
(root) begins ...LogStateStatus:0:Finding directories: ERROR #5007: Directory name '/usr/irissys/mgr/user/u:/internal/testing/unit_tests/' is invalid <<==== **FAILED** (root)::
In fact there is a handy way to run all the tests via:
zpm "test module-name"
But I'd love to see debugging of it. @Alexander.Woodhead, do you know by chance why unit test methods are instance methods and not classmethods? Could they be converted to classmethods, or could an option to do that be provided?

Hi Evgeny,
The global ^UnitTestRoot needs to be set to a real directory to satisfy the CORE UnitTest Runner.
As the first argument "Suite" is not specified, then no sub-directories are needed.
Set ^UnitTestRoot="\tmp\"
... may be sufficient for your purposes.

It is common to need to run unit tests for other modules that have overlapping / breaking functionality.
This is where the value of loading and running multiple TestSuites comes into its own.

There is no reason for them to be classmethods. Unit testing is quite a complex thing, and there are use cases where it needs to be this way.

With zpm you can use an additional parameter for it:
zpm "test module-name -only -D UnitTest.Case=Test.PM.Unit.CLI:TestParser"
Use the class name only to run all tests there, or add a method name to test only that method in the class.

Thanks! This is helpful.

@Evgeny.Shvarov I have a detailed writeup here (although Dmitry already hit the important point re: IPM): https://community.intersystems.com/post/unit-tests-and-test-coverage-objectscript-package-manager
A few other notes:
Unit test class instances have a property (..Manager) that refers to the %UnitTest.Manager instance. It may be helpful for referencing the folder from which unit tests were loaded (e.g., to load additional supporting data or do file comparisons without assuming an absolute path), or for accessing "user parameters" / "user fields" that were passed in the call to run tests (e.g., to support running a subset of tests defined in unit test code). Sure, you could do the same thing with PPGs or % variables, but using OO features is much better.
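As a small sketch of that idea (the class name and file name are illustrative, and this assumes ..Manager.CurrentDir holds the directory the tests were loaded from):

```objectscript
Class Test.DataDriven Extends %UnitTest.TestCase
{

Method TestSupportFilePresent()
{
    // Reference a supporting file relative to the unit test directory,
    // rather than hard-coding an absolute path
    Set file = ..Manager.CurrentDir _ "expected-output.txt"
    Do $$$AssertTrue(##class(%File).Exists(file), "support file is present")
}

}
```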
I'll also often write unit tests that do setup in OnBeforeAllTests and cleanup in %OnClose, so that even if something goes very horribly wrong it'll have the best chance of actually running. Properties of the unit test are useful to store state relevant to this setup - the initial $TLevel (although that should always be 0), device settings, global configuration flags, etc.

Thanks Tim! How do you debug a particular unit test in VSCode?

Unit testing from within VS Code is now possible. Please see https://community.intersystems.com/post/intersystems-testing-manager-new-vs-code-extension-unittest-framework

Thanks, @John.Murray! Will give it a try!

Why are unit test methods instance methods? Since a running unit test is an instantiated object (%RegisteredObject), the unit test class itself can have custom properties, and the instance methods can use those properties as a way to share information. For example, initialize the properties in %OnBeforeAllTests(), and then access/change the properties in the test methods.

💡 This question is considered a Key Question. More details here.

@Evgeny.Shvarov my hacky way of doing this is to create an untracked mac file with ROUTINE debug defined.
I just swap in the suite or method I want to run per Alex's instructions, set my breakpoints in VS code and make sure the debug configuration is in my VSCode settings:
"launch": {
"version": "0.2.0",
"configurations": [
{
"type": "objectscript",
"request": "launch",
"name": "debugRoutine",
"program": "^debug"
}
]
}

That's indeed hacky! Thanks @Michael.Davidovich! Thanks @Joel.Solon!
But all this could be achieved without instance methods, right? Anyway, I'm struggling to find an easy way to debug a failed unit test. @Michael.Davidovich suggested the closest way to achieve it, but I still want to find something really handy, e.g. an additional "clickable button" in VSCode above the test method that invites "debug this test method", similar to what we have for classmethods now: debug this classmethod and copy invocation.
That'd be ideal. That's one of the things the new extension is designed to achieve.

I am utilizing properties on test class instances to good effect.
I would not use a classmethod-only approach for normal development.
There is nothing stopping a community parallel UnitTest.Manager re-implementation that follows a classmethod pattern.
Some have reimplemented UnitTest.Manager:
1) Without deleting the test classes at the end of the run (with Run instead of DebugRun)
2) Without needing ^UnitTestRoot to be defined.

Looking forward!

Anything can be achieved without instance methods. The point here is that instance methods exist in object-oriented systems because they are considered a good, straightforward way to achieve certain things. In the case of unit tests sharing information using properties, that approach saves you from having to pass info around as method arguments, or declaring a list of public variables.

I understand the argumentation, makes sense. Just curious: how do you debug those unit tests that fail?
Announcement
Andreas Dieckow · Oct 24, 2019
InterSystems Atelier has been tested with OpenJDK 8. The InterSystems Eclipse plug-in is currently available for Eclipse Photon (4.8), which requires and works with Java 8.
Article
Evgeny Shvarov · Jul 22, 2019
Hi Community!
We've introduced Direct Messages on InterSystems Community.
What's that?
A direct message (DM) is a Developer Community feature that lets you send a message directly to any InterSystems Community member.
How to send it?
Open member's page, and click "Send Direct Message". Like here:
Or, open your account page and open the section "Direct Messages":
In Direct Messages you can see all your conversations and start a new one with Write new message:
A conversation can be between two or more people.
How will a member know about the message?
DC sends an email notification to a member when they have a new DM. Of course, you can configure whether you want to receive DM email notifications.
Privacy
Attention! Direct messages are not private messages. Direct messages are pretty much the same as posts and comments but with the difference that you can alter the visibility of the message to certain people.
E.g. if John sends a DM to Paul, this DM is visible to John, Paul, and the Developer Community admin. But it is hidden from other community members and from public access, e.g. from search crawlers.
So it is safe to send each other contact data that you consider acceptable to share with your recipient and the DC admin.
What About Spam?
Only registered members who have posted can send direct messages.
Any registered member can receive and answer messages.
So, no spam is expected.
Please report any issues on the Developers Issue Tracker or in the Community Feedback track.
Stay tuned!
Question
Joaquin Montero · Mar 2, 2020
Hi Everyone,
I've been working on deploying an IRIS for Health environment in EKS. There is a video session in the InterSystems learning portal about this feature but I have not succeeded in finding the proper documentation and resources to use this in my Kubernetes cluster.
Has this been deprecated/discontinued? Any idea where I can find the resources? Should I stick to StatefulSets instead of using the IrisCluster resource type provided by this operator? Tagging @Luca.Ravazzolo and @Steven.LeBlanc

I cannot find any real information about the Kubernetes operator either. They said in the video that it's ready and working, but it does not seem that way.

Hi guys,
Thank you for your interest on the InterSystems Kubernetes Operator.
We are working hard to prepare it for availability in the 2020.2 timeframe.
Stay tuned.
Article
Sergey Kamenev · Nov 11, 2019
InterSystems IRIS supports a unique data structure, called globals, for information storage. Essentially, globals are persistent arrays with multi-level indices, having several extra capabilities—transactions, quick traversal of tree structures, and a programming language known as ObjectScript.
I'd note that for the remainder of the article, or at least the code samples, we'll assume you have familiarised yourself with the basics of globals:
Globals Are Magic Swords For Managing Data. Part 1.
Globals - Magic swords for storing data. Trees. Part 2.
Globals - Magic swords for storing data. Sparse arrays. Part 3.
Globals are completely different structures for storing data than the usual tables, and they operate at a much lower level. That begs the question: how do transactions look when working with globals, and what peculiarities might you encounter along the way?
We know from relational database theory that a good transaction implementation needs to pass the ACID test (see ACID in Wikipedia).
Atomicity: All changes made in the transaction are recorded, or none at all. See Atomicity (database systems) in Wikipedia.
Consistency: After the transaction is completed, the logical state of the database should be internally consistent. In many ways, this requirement applies to the programmer, but in the case of SQL databases, it also applies to foreign keys.
Isolation: Transactions running in parallel shouldn’t affect one another.
Durability: After successful completion of the transaction, low-level problems (such as a power failure) should not affect the data changed by the transaction.
Globals are non-relational data structures. They were designed to support ultra-fast work on hardware with a minimal footprint. Let's look at how transactions are implemented in globals using the IRIS/docker-image.
1. Atomicity
Consider the situation when 3 values must be saved in database together, or none of them should be recorded.
The easiest way to check atomicity is to enter the following code in terminal:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TCOMMIT
Then conclude with:
ZWRITE ^a
The result should be this:
^a(1)=1
^a(2)=2
^a(3)=3
As expected, Atomicity is observed. But now let's complicate the task by introducing an error and see how the transaction is saved—partially, or not at all. We’ll start checking atomicity as we did before, like so:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
But this time we’ll forcibly stop the container using the command docker kill my-iris, which is almost equivalent to a forced power off as it sends a SIGKILL (halt process immediately) signal. After restarting the container, we check the contents of our global to see what happened. Maybe the transaction has been partially saved?
ZWRITE ^a
(no output)
No, nothing has been saved. So, in the case of accidental server stop, the IRIS database will guarantee the atomicity of your transactions.
But what if we want to cancel changes intentionally? So now let's try this with the rollback command, as follows:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TROLLBACK 1
ZWRITE ^a
(no output)
Once again, nothing has been saved.
2. Consistency
Recall that globals are lower-level structures for storing data than relational tables, and with a globals database, indices are also stored as globals. Thus, to meet the requirement of consistency, you need to include an index change in the same transaction as a global node value change.
Say, for example, we have a global ^person, in which we store personal data using the social security number (SSN) as the key:
^person(1234567, "firstname") = "Sergey"
^person(1234567, "lastname") = "Kamenev"
^person(1234567, "phone") = "+74995555555"
...
We’ve created an ^index key to enable rapid search by last or last and first names, as follows:
^index(‘Kamenev’, ‘Sergey’, 1234567) = 1
To keep the database consistent, we need to add persons like this:
TSTART
Set ^person(1234567, "firstname") = "Sergey"
Set ^person(1234567, "lastname") = "Kamenev"
Set ^person(1234567, "phone") = "+74995555555"
Set ^index("Kamenev", "Sergey", 1234567) = 1
TCOMMIT
Accordingly, when deleting a person, we must use the transaction:
TSTART
Kill ^person(1234567)
Kill ^index("Kamenev", "Sergey", 1234567)
TCOMMIT
In other words, when working with a low-level storage format such as globals, fulfilling the consistency requirement for your application logic is entirely up to the programmer.
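As an illustrative sketch of that responsibility (the method and its signature are hypothetical, not part of IRIS), the data and index updates can be wrapped in one transaction that rolls back on any error:

```objectscript
/// Hypothetical helper: insert a person and its index entry atomically
ClassMethod AddPerson(ssn As %Integer, first As %String, last As %String, phone As %String) As %Status
{
    TSTART
    Try {
        Set ^person(ssn, "firstname") = first
        Set ^person(ssn, "lastname") = last
        Set ^person(ssn, "phone") = phone
        // The index change is part of the same transaction,
        // so the data and the index can never diverge
        Set ^index(last, first, ssn) = 1
        TCOMMIT
    } Catch ex {
        TROLLBACK 1
        Return ex.AsStatus()
    }
    Return $$$OK
}
```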
Luckily, IRIS offers the commands to organise your transactions and deliver Consistency guarantees for your applications. When using SQL, IRIS will use these commands under the hood to ensure consistency of its underlying globals data structures when performing INSERT, UPDATE, and DELETE statements. Of course, IRIS SQL also offers corresponding SQL commands for starting and stopping transactions to leverage in your (SQL) application logic.
3. Isolation
Here’s where things get wild. Suppose many users are working on the same database at the same time, changing the same data. The situation is comparable to when many developers are working with the same code repository and trying to commit changes to many files at the same time.
The database needs to keep up with everything in real time. Given that serious companies typically have a person responsible for version control—merging branches, managing conflict resolution, and so forth—and that the database needs to take care of this in real time, the complexity of the problem and the importance of correctly designing the database and the code that serves it both become self-evident.
The database can’t understand the meaning of actions performed by users and try to prevent conflicts when they’re working on the same data. It can only cancel one transaction that contradicts another or execute them sequentially.
Moreover, as a transaction is executing (before the commit), the state of the database may be inconsistent. Other transactions should not have access to the inconsistent database state. In relational databases, this is achieved in many ways, such as by creating snapshots or using multi-versioned rows.
When transactions execute in parallel, it’s important that they not interfere with each other. This is what isolation is all about.
SQL defines four levels of isolation, in order of increasing rigor. They are:
READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
Let's consider each level separately. Note that the cost of implementing each level grows almost exponentially as you move up the stack.
READ UNCOMMITTED is the lowest level of isolation, but it’s also the fastest. Transactions can read the changes made by other transactions.
READ COMMITTED is the next level of isolation and represents a compromise. Transactions can’t read each other's changes before a commit, but can read any changes after a commit.
Say we have a long-running transaction (T1), during which commits have happened in transactions T2, T3... Tn while working on the same data as T1. In such cases, each time we request data in T1, we may well obtain a different result. This is called a non-repeatable read.
REPEATABLE READ is the next level of isolation, in which we no longer have non-repeatable reads because a snapshot of the result is taken each time we request to read data. The snapshot is used if the same data is requested again during the same transaction. However, at this isolation level, it's possible to read phantom data: new rows that were added by transactions committed in parallel.
SERIALIZABLE is the highest level of isolation. It’s characterized by the fact that any data used in a transaction (whether read or changed) becomes accessible to other transactions only after the first transaction has finished.
First, let’s see whether there’s isolation of operations between threads with transactions and threads without transactions. Open two terminal windows and enter the following:
Terminal 1:

Kill ^t
TSTART
Set ^t(1)=2

Terminal 2:

Write ^t(1)
2
There’s no isolation. One thread sees what the second one does when it opens a transaction.
Now let's see whether transactions in different threads can see what’s happening inside. Open two terminal windows and start two transactions in parallel.
Terminal 1:

Kill ^t
TSTART
Set ^t(1)=2

Terminal 2:

TSTART
Write ^t(1)
2

A 2 appears on the screen. What we have here is the simplest (but also the fastest) isolation level: READ UNCOMMITTED.
In principle, this is what we expect from a low-level data representation such as globals, which always prioritize speed. IRIS SQL provides different transaction isolation levels to choose from, but what if we need a higher level of isolation when working with globals directly?
Here we need to think about what isolation levels are actually for and how they work. For instance, lower levels of isolation are compromises designed to speed up database operations.
The highest isolation level, SERIALIZABLE, ensures that the result of transactions executed in parallel is equivalent to the result of executing them serially. This guarantees there will be no collisions. We can achieve this with properly used locks in ObjectScript, which can be applied in multiple ways. This means you can create regular, incremental, or multiple locks using the LOCK command.
Let's see how to use locks to achieve different levels of isolation. In ObjectScript, you use the LOCK operator. This operator permits not just exclusive locks, which are necessary for changing data, but also what are called shared locks. These shared locks can be accessed by several threads at once to read data that won’t be changed by other processes during the reading process.
For more details about locking, see the article “Locking and Concurrency Control”. To learn about two-phase locking, see the article "Two-phase locking" on Wikipedia.
The difficulty is that the state of the database may be inconsistent during the transaction, with the inconsistent data visible to other processes. How can this be avoided? For this example, we’ll use locks to create visibility windows within which the state of the database can be consistent. Access to any of these visibility windows will be through a lock.
Shared locks on the same data are reusable—several processes can take them. These locks prevent other processes from changing data. That is, they’re used to form windows of a consistent database state.
Exclusive locks, on the other hand, are used when you’re modifying data—only one process can take such a lock.
Exclusive locking can happen in two scenarios. First, if the data has no locks at all, any process can take an exclusive lock. Second, if the data has shared locks, only a process that already holds one of those shared locks, and is the first to request the exclusive lock, can take it.
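As a rough sketch of the second scenario (illustrative values; the amounts and node names follow the article's ^person example), a process holding a shared lock can request the exclusive lock on the same node before writing:

```objectscript
// Take a shared lock to read a consistent value
LOCK +^person(123)#"S"
Write ^person(123, "amount"),!
// Request the exclusive lock while still holding the shared one;
// this succeeds only for the first process that asks for it
LOCK +^person(123)
Set ^person(123, "amount") = ^person(123, "amount") - 100
// Release the exclusive lock first, then the shared one
LOCK -^person(123)
LOCK -^person(123)#"S"
```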

The narrower the visibility window, the longer the wait for other processes becomes—but the more consistent the state of the database in it will be.
READ COMMITTED ensures that we see only committed data from other threads. If data in another transaction hasn't yet been committed, we see the old version. This lets us parallelize the work instead of waiting for a lock to be released.
In IRIS, you can't see an old version of the data without using special tricks, so we'll have to make do with locks. We need to use shared locks to permit data to be read only at points where it’s consistent.
Let's say we have a database of users, ^person, who transfer money from one person to another. Here’s the point at which money is transferred from person 123 to person 242:
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
The point where the amount is requested for person 123 before the deduction should have an exclusive lock (by default):
LOCK +^person(123)
Write ^person(123)
But if we need to display the account status in the user's personal account, we can use a shared lock, or none at all:
LOCK +^person(123)#"S"
Write ^person(123)
LOCK -^person(123)#"S"
However, if we accept that database operations are carried out virtually instantaneously (remember that globals are a much lower-level structure than a relational table), then this level is no longer so necessary, in favor of higher isolation levels.
Full example, for READ COMMITTED:
LOCK +^person(123)#"S", +^person(242)#"S"
Read data (concurrent committed transactions can change the data)
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
Read data (concurrent committed transactions can change the data)
LOCK -^person(123)#"S", -^person(242)#"S"
REPEATABLE READ is the second-highest level of isolation. At this level we accept that data may be read several times with the same results in one transaction, but may be changed by parallel transactions.
The easiest way to ensure a REPEATABLE READ is to take an exclusive lock on the data, which automatically turns this isolation level into a SERIALIZABLE one.
LOCK +^person(123, "amount")
Read ^person(123, "amount")
Other operations (parallel streams try to change ^person(123, "amount"), but can't)
Change ^person(123, "amount")
Read ^person(123, "amount")
LOCK -^person(123, "amount")
If locks are separated by commas, they are taken in sequence. But they will be taken atomically, all at once, if they’re listed like this:
LOCK +(^person(123),^person(242))
SERIALIZABLE is the highest level of isolation and the most costly. When working with classic locks like we did in the above examples, we have to set the locks in such a way that all transactions with data in common will end up being performed serially. For this approach, most of the locks should be exclusive and taken to the smallest fields of the global, for performance.
If we’re talking about deducting funds from a ^person global, then SERIALIZABLE is the only acceptable level. Spending money needs to be strictly serialized, otherwise it’s possible to spend the same amount several times.
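Putting it together, here is a minimal sketch of such a serialized transfer. The Transfer label, the "amount" subscript, and the omission of error handling are illustrative assumptions, following the ^person examples above:

```objectscript
/// Sketch: move "amount" between two accounts at SERIALIZABLE level.
/// Both exclusive locks are taken atomically (one LOCK with a
/// parenthesized list), so concurrent transfers touching either
/// account are forced to run strictly one after another.
Transfer(from, to, amount)
    LOCK +(^person(from, "amount"), ^person(to, "amount"))
    TSTART
    If ^person(from, "amount") < amount {
        TROLLBACK 1  ; insufficient funds: undo only the current transaction level
    } Else {
        Set ^person(from, "amount") = ^person(from, "amount") - amount
        Set ^person(to, "amount") = ^person(to, "amount") + amount
        TCOMMIT
    }
    LOCK -(^person(from, "amount"), ^person(to, "amount"))
    Quit
```

Because the locks cover both accounts for the whole transaction, no other process can observe a half-finished transfer or deduct the same funds twice.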
4. Durable
I conducted tests with a hard cut-off of the container using the docker kill my-iris command. The database stood up well to these tests. No problems were identified.
Tools to manage globals and locks
You may find the following tools in the IRIS Management Portal useful:
View and manage locks.
View and manage globals.
Conclusion
InterSystems IRIS has support for transactions using globals, which are atomic and durable. To ensure database consistency with globals, some programming effort and the use of transactions are necessary, since there are no complex built-in constructions like foreign keys.
Globals without locks are equivalent to the READ UNCOMMITTED level of isolation, but this can be raised to the SERIALIZABLE level using locks. The correctness and transaction speed achievable with globals depend considerably on the programmer's skill and intent. The more widely shared locks are used when reading data, the higher the isolation level. And the more narrowly exclusive locks are used, the greater the speed.

Sergey, it's great that you are writing articles for newbies, nevertheless you don't explicitly mark it. Just a quick note on your samples: the ZWRITE command never returns <UNDEFINED> in IRIS, so to check a global's existence one should use something like
if '$data(^A) { write "Global ^A is UNDEFINED",! }
I'm sure that you are aware of it; just thinking of novices that should not be confused.

In the example for READ UNCOMMITTED, after the second (right-hand) Terminal session sets ^t(1)=2, when the first (left-hand) Terminal session writes ^t(1), the example shows/states that a "3" appears, but that's wrong; it should be a "2".

Since transactions can be nested, and TROLLBACK rolls back all transactions, it's best practice to pair each TSTART with TROLLBACK 1, which will roll back only the current transaction.

Thanks! I fixed the error

Thanks! You're right

Thanks! I fixed the error

As you are talking about locks and transactions, and as others have noted that transactions can be nested, it might be worth warning people about locks inside transactions and the fact that the unlock will not take place until the TCOMMIT. These can cause issues, especially where one method using transactions and locks calls another that does the same.
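Both remarks can be sketched together (routine labels, globals, and the ok flag are made up for illustration): TROLLBACK 1 undoes only the current nesting level, and a LOCK - issued inside a transaction is not actually released until the outermost transaction ends.

```objectscript
Outer()
    TSTART                      ; $TLEVEL becomes 1
    Set ^log($Increment(^log)) = "outer step"
    Do Inner(0)                 ; inner part fails and rolls itself back
    TCOMMIT                     ; only here is Inner's unlock fully released
    Quit

Inner(ok)
    LOCK +^data(1)
    TSTART                      ; $TLEVEL becomes 2 (nested)
    Set ^data(1) = "inner step"
    If ok { TCOMMIT } Else { TROLLBACK 1 }  ; undo just this level, not Outer's work
    LOCK -^data(1)              ; deferred: held until the outer transaction ends
    Quit
```

If Inner used a bare TROLLBACK instead, the outer step would be rolled back too; and because Inner's unlock is deferred, two such methods locking the same global can run into unexpected lock contention.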
Announcement
Anastasia Dyubaylo · Jan 13, 2020
Dear Community,
We're pleased to invite you to the InterSystems Benelux Symposium 2020, which will take place from February 11th to 12th in Antwerp, Belgium!
At the Symposium, both InterSystems experts and external thought leaders will discuss what it takes to make your IT innovation work. You're more than welcome to join us in the Radisson Blu Astrid Hotel in Antwerp.
Fastest Path to Possible - With Digital Innovation
"The future lies in the hands of those who use data the best", said InterSystems CEO & Founder Terry Ragon at the InterSystems Global Summit 2019. "However, this data is spread across an enormous set of databases and hardware devices, meaning we need to find ways to connect these systems together."
There is, of course, a way: interoperability. But how do you approach interoperability? And most importantly: how can you use it to create better applications and be a better partner to your customers?
Be part of the discussions! Don’t miss out and register for the event by clicking right here.
Can’t wait to see you in Antwerp!
So, remember:
⏱ Time: February 11-12, 2020
📍Venue: Radisson Blu Astrid Hotel, Antwerp, Belgium
✅ Registration: SAVE YOUR SEAT TODAY

I'm finally going to Antwerp this year.
So, you have a chance to meet me there, to see VSCode-ObjectScript in action, get quick help with the migration process, and give your feedback.
See you there. Will have two sessions on Feb 12th during the InterSystems Benelux Summit on behalf of the CUG meetup. See you there!

So, today I'm presenting a demo of VSCode-ObjectScript at the CUG meetup here in Antwerp. The ObjectScript Package Manager presentation is attached.

Please welcome a brief video overview of the InterSystems Benelux Symposium 2020:
Enjoy! 👍🏼
Announcement
Anastasia Dyubaylo · Nov 26, 2019
Hi Community,
We are pleased to invite you to the InterSystems Meetup in Moscow on December 10, 2019!
InterSystems Moscow Meetup is a pre-New Year meeting for users and developers on InterSystems technologies. The meetup will be devoted to the InterSystems IRIS Data Platform.
Please check out the agenda:
📌 18:40 Registration. Welcome coffee
📌 19:00 Review presentation on the InterSystems Russia news for 2019
📌 19:15 Technology News on InterSystems IRIS by @Eduard.Lebedyuk, InterSystems Sales Engineer
REST, JSON, IAM
PEX, Native API, etc.
📌 20:00 Coffee Break
📌 20:20 Migration to InterSystems IRIS
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS by @Evgeny.Shvarov, Startups and Community Manager
📌 21:15 Open Exchange and other resources & services for InterSystems Developers by @Evgeny.Shvarov, Startups and Community Manager
📌 21:30 Free time. Drinks and snacks
The agenda is full of interesting stuff. We look forward to seeing you!
So, remember:
🗓 Date: December 10, 2019
⏱ Time: 18:40-22:00
📍 Venue: Loft-Ministerstvo, Stoleshnikov Lane 6/3, Moscow
✅ Registration: Register for FREE here
Space is limited, so register today to secure your place. Admission is free, but registration is mandatory for attendees.
Save your seat today!

Hey!
The agenda is updated - I'll do the following sessions:
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS
📌 21:15 Open Exchange and other resources & services for InterSystems Developers
Come to chat!
Discussion
Eduard Lebedyuk · Mar 6, 2020
Temporary tables are tables available to the current process only (and destroyed when the process ends).
What are your approaches to creating temporary tables?
Here are the two I know:
Process-private Globals storage can be used as a data global in storage definition. That way, each process can have its own objects for the class with ppg storage. Here's how. Here's how 2.
InterSystems TSQL supports #tablename temporary tables. A #tablename temporary table is visible to the current procedure of the current process. It is also visible to any procedure called from the current procedure. #tablename syntax is only supported in TSQL procedures (class methods projected as procedures with language tsql).
Any other ways to achieve similar functionality?

CREATE GLOBAL TEMPORARY TABLE <mytable> {} has similar effects as process-private globals...

Indeed, that's the recommended SQL way of achieving what Eduard described about PPGs. Drawbacks are that queries on these tables cannot be parallelized (as that implies multiple processes, of course).
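A minimal sketch of the PPG-storage approach mentioned above (class name and storage globals are made up; the key point is the ^|| prefix, which makes the data process-private, so each process gets its own extent that disappears when the process ends):

```objectscript
/// Sketch: a persistent class whose extent lives in process-private globals.
Class Demo.TempData Extends %Persistent
{
Property Value As %String;

Storage Default
{
<Data name="TempDataDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Value</Value>
</Value>
</Data>
<DataLocation>^||Demo.TempDataD</DataLocation>
<DefaultData>TempDataDefaultData</DefaultData>
<IdLocation>^||Demo.TempDataD</IdLocation>
<IndexLocation>^||Demo.TempDataI</IndexLocation>
<StreamLocation>^||Demo.TempDataS</StreamLocation>
<Type>%Storage.Persistent</Type>
}
}
```

Objects and SQL against such a class work as usual, but each process sees only its own rows, with the parallelization drawback noted above.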
Our TSQL support is meant for Sybase customers wishing to redeploy their TSQL applications on IRIS (especially now that SAP/Sybase is terminating support for those platforms). Just temporary table support by itself wouldn't be a reason to start building TSQL applications and abandon IRIS SQL/ObjectScript, of course :-). However, for a recent TSQL migration we did some work on our TSQL temp table support and were considering rolling that out to regular IRIS SQL, so this thread is a good place to share your experiences and requirements so we can make sure to do that properly, as needed.

Hi,
I recently needed a temporary class (not just a table) to store data while it was manipulated (imported, queried, modified, etc., and eventually exported). I eventually set up all the storage locations with PPG refs as noted above, which works, but did wonder if there was a class parameter I had missed that just indicated the extent was temporary. Or maybe an alternative to extending %Persistent?
Mike

I'm not really sure what you mean by
if there was a class parameter I had missed that just indicated the extent was temporary
what do you want to achieve with this?

Hi,
I don't want to achieve anything else, it's just that "cache being cache" there's often another way to do the same thing and it might be easier. :-)
Mike
Article
Stefan Wittmann · Aug 14, 2019
As you might have heard, we just introduced the InterSystems API Manager (IAM); a new feature of the InterSystems IRIS Data Platform™, enabling you to monitor, control and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement.
In this article, I will show you how to set up IAM and highlight some of the many capabilities IAM allows you to leverage. InterSystems API Manager brings everything you need
to monitor your HTTP-based API traffic and understand who is using your APIs; what are your most popular APIs and which could require a rework.
to control who is using your APIs and restrict usage in various ways. From simple access restrictions to throttling API traffic and fine-tuning request payloads, you have fine-grained control and can react quickly.
to protect your APIs with central security mechanisms like OAuth2.0 or Key Token Authentication.
to onboard third-party developers and provide them with a superb developer experience right from the start by providing a dedicated Developer Portal for their needs.
to scale your API demands and deliver low-latency responses.
I am excited to give you a first look at IAM, so let's get started right away.
Getting started
IAM is available as a download from the WRC Software Distribution site and is deployed as a docker container of its own. So, make sure to meet the following minimum requirements:
The Docker engine is available. Minimum supported version is 17.04.0+.
The docker-compose CLI tool is available. Minimum supported version is 1.12.0+.
The first step requires you to load the docker image via
docker load -i iam_image.tar
This makes the IAM image available for subsequent use on your machine. IAM runs as a separate container so that you can scale it independently from your InterSystems IRIS backend. To start, IAM requires access to your IRIS instance to load the required license information. The following configuration changes have to happen:
Enable the /api/IAM web application
Enable the IAM user
Change the password of the IAM user
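These three changes can be made in the Management Portal, or scripted from the %SYS namespace. The following is a rough sketch using the Security API; the property names are assumptions to double-check against your IRIS version:

```objectscript
// Sketch: run from a terminal session with %SYS privileges.
ZN "%SYS"
// Enable the /api/IAM web application
Set props("Enabled") = 1
Set sc = ##class(Security.Applications).Modify("/api/IAM", .props)
// Enable the IAM user and change its password
Kill props
Set props("Enabled") = 1
Set props("Password") = "<your-new-IAM-password>"
Set sc = ##class(Security.Users).Modify("IAM", .props)
```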
Now we can configure our IAM container. In the distribution tarball, you will find a script for Windows and Unix-based systems named "iam-setup". This script helps you to set the environment variables correctly, enabling the IAM container to establish a connection with your InterSystems IRIS instance. This is an exemplary run from my terminal session on my Mac:
source ./iam-setup.sh
Welcome to the InterSystems IRIS and InterSystems API Manager (IAM) setup script.
This script sets the ISC_IRIS_URL environment variable that is used by the IAM container to get the IAM license key from InterSystems IRIS.
Enter the full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
Enter the IP address for your InterSystems IRIS instance. The IP address has to be accessible from within the IAM container, therefore, do not use "localhost" or "127.0.0.1" if IRIS is running on your local machine. Instead use the public IP address of your local machine. If IRIS is running in a container, use the public IP address of the host environment, not the IP address of the IRIS container. xxx.xxx.xxx.xxx
Enter the web server port for your InterSystems IRIS instance: 52773
Enter the password for the IAM user for your InterSystems IRIS instance:
Re-enter your password:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
Successfully got IAM license!
The ISC_IRIS_URL environment variable was set to: http://IAM:****************@xxx.xxx.xxx.xxx:52773/api/iam/license
WARNING: The environment variable is set for this shell only!
To start the services, run the following command in the top level directory: docker-compose up -d
To stop the services, run the following command in the top level directory: docker-compose down
URL for the IAM Manager portal: http://localhost:8002
I obfuscated the IP address and you can't see the password I used, but this should give you an idea how simple the configuration is. The full image name, IP address and port of your InterSystems IRIS instance and the password for your IAM user, that's everything you need to get started.
Now you can start your IAM container by executing
docker-compose up -d
This orchestrates the IAM containers and ensures everything is started in the correct order. You can check the status of your containers with the following command:
docker ps
Opening localhost:8002 in my browser brings up the web-based UI:
The global report does not show any throughput yet, as this is a brand new node. We will change that shortly. You can see that IAM supports a concept of workspaces to separate your work into modules and/or teams. Scrolling down and selecting the "default" workspace brings us to the Dashboard for, well, the "default" workspace we will use for our first experiments.
Again, the number of requests for this workspace is still zero, but you get a first look at the major concepts of the API Gateway in the menu on the left side. The first two elements are the most important ones: Services and Routes. A service is an API we want to expose to consumers. Therefore, a REST API in your IRIS instance is considered a service, as is a Google API you might want to leverage. A route decides to which service incoming requests should be routed. Every route has a certain set of conditions, and if the conditions are fulfilled, the request is routed to the associated service. To give you an idea, a route can match the IP or domain of the sender, HTTP methods, parts of the URI, or a combination of these.
Let's create a service targeting our IRIS instance, with the following values:
field | value | description
name | test-iris | the logical name of this service
host | xxx.xxx.xxx.xxx | public IP address of your IRIS instance
port | 52773 | the port used for HTTP requests
protocol | http | the protocols you want to support
Keep the default for everything else. Now let's create a route:
field | value | description
paths | /api/atelier | requests with this path will be forwarded to our IRIS instance
protocols | http | the protocols you want to support
service | test-iris | requests matching this route will be forwarded to this service
Again, keep the default for everything else. IAM is listening on port 8000 for incoming requests by default. From now on requests that are sent to http://localhost:8000 and start with the path /api/atelier will be routed to our IRIS instance. Let's give this a try in a REST client (I am using Postman).
Sending a GET request to http://localhost:8000/api/atelier/ indeed returns a response from our IRIS instance. Every request goes through IAM, and metrics like HTTP status code, latency, and consumer (if configured) are monitored. I went ahead and issued a couple more requests (including two requests to non-existing endpoints, like /api/atelier/test/) and you can see them all aggregated in the dashboard:
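If you'd rather test from an IRIS terminal than from a REST client, the same smoke test can be sketched with %Net.HttpRequest (localhost and port 8000 assume the defaults used in this walkthrough):

```objectscript
// Sketch: send a GET through the IAM gateway and inspect the response.
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost"  ; host where the IAM container is running
Set req.Port = 8000           ; IAM's default port for proxied API traffic
Set sc = req.Get("/api/atelier/")
If sc {
    Write "HTTP status: ", req.HttpResponse.StatusCode, !
    Do req.HttpResponse.OutputToDevice()  ; dump headers and body
}
```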
Working with plugins
Now that we have a basic route in place, we can start to manage the API traffic and add behavior that complements our service. This is where the magic happens.
The most common way to enforce a certain behavior is by adding a plugin. Plugins isolate a certain functionality and can be attached to different parts of IAM: they can affect the global runtime or only parts of it, such as a single user (group), a service, or a route. We will start by adding a Rate Limiting plugin to our route. To establish the link between the plugin and the route, we need the unique ID of the route. You can look it up by viewing the details of the route.
If you are following this article step by step, the ID of your route will be different. Copy the ID for the next step.
Click on Plugins on the left sidebar menu. Usually, you see all active plugins on this screen, but as this node is relatively new, there are no active plugins yet. So, move on by selecting "Add New Plugin".
The plugin we are after is in the category "Traffic Control" and is named "Rate Limiting". Select it. There are quite a few fields that you can define here as plugins are very flexible, but we only care about two fields:
field | value | description
route_id | d6a97e5b-7da6-4c98-bc69-6a09263039a8 | paste the ID of your route here
config.minute | 5 | number of calls allowed per minute
That's it. The plugin is configured and active. You probably have seen that we can pick from a variety of time intervals, like minutes, hours or days but I deliberately used minutes as this allows us to easily understand the impact of this plugin.
If you send the same request again in Postman, you will notice that the response comes back with 2 additional headers: X-RateLimit-Limit-minute (value 5) and X-RateLimit-Remaining-minute (value 4). This tells the client that it can make up to 5 calls per minute and has 4 more requests available in the current time interval.
If you keep making the same request over and over again, you will eventually run out of your available quota and instead get back an HTTP status code 429 with the following payload:
Wait until the minute is over and you will be able to get through again. This is a pretty handy mechanism allowing you to achieve a couple of things:
Protect your backend from spikes
Set an expectation for the client how many calls he is allowed to make in a transparent way for your services
Potentially monetize based on API traffic by introducing tiers (e.g. 100 calls per hour at the bronze level and unlimited with gold)
You can set values for different time intervals and thereby smooth out API traffic over a certain period. Let's say you allow 600 calls per hour for a certain route. That's 10 calls per minute on average. But that alone doesn't prevent clients from using up all of their 600 calls in the very first minute of the hour. Maybe that's what you want. Maybe you would like to ensure that the load is distributed more evenly over the hour. By also setting the config.minute field to 20, you ensure that your users make no more than 20 calls per minute AND no more than 600 per hour. This still allows some spikes above the 10-calls-per-minute average, but users can't use up the hourly quota in a single minute; at full capacity it will now take them at least 30 minutes. Clients will receive additional headers for each configured time interval, e.g.:
header | value
X-RateLimit-Limit-hour | 600
X-RateLimit-Remaining-hour | 595
X-RateLimit-Limit-minute | 20
X-RateLimit-Remaining-minute | 16
Of course, there are many different ways to configure your rate-limits depending on what you want to achieve.
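A client can also read these headers programmatically and back off before hitting the limit. A sketch with %Net.HttpRequest, using the header name as configured in this example:

```objectscript
// Sketch: check the remaining per-minute quota before sending more requests.
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost", req.Port = 8000
Do req.Get("/api/atelier/")
Set remaining = req.HttpResponse.GetHeader("X-RateLimit-Remaining-minute")
If remaining = 0 {
    Write "Quota exhausted - wait for the next minute", !
} Else {
    Write "Calls left this minute: ", remaining, !
}
```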
I will stop at this point, as this is probably enough for a first article about InterSystems API Manager. There are plenty more things you can do with IAM; we've only used one out of more than 40 plugins and haven't even covered all of the core concepts yet! Here are a couple of things you can do as well, which I might cover in future articles:
Add a central authentication mechanism for all your services
Scale-out by load-balancing requests to multiple targets that support the same set of APIs
Introduce new features or bugfixes to a smaller audience and monitor how it goes before you release it to everyone
Onboard internal and external developers by providing them a dedicated and customizable developer portal documenting all APIs they have access to
Cache commonly requested responses to reduce response latency and the load on the service systems
So, let's give IAM a try and let me know what you think in the comments below. We worked hard to bring this feature to you and are eager to learn what challenges you overcome with this technology. Stay tuned...
More resources
The official Press Release can be found here: InterSystems IRIS Data Platform 2019.2 introduces API Management capabilities
A short animated overview video: What is InterSystems API Manager
An 8-minute video walking you through some of the key highlights: Introducing InterSystems API Manager
The documentation is part of the regular IRIS documentation: InterSystems API Manager Documentation
Nice to have a clear step-by-step example to follow! Hi @Stefan.Wittmann !
Is the IRIS API Manager published on InterSystems Docker Hub? `docker load -i iam_image.tar` did not work for me. I used `docker import iam_image.tar iam` instead and it worked.

Have you unpacked the IAM-0.34-1-1.tar.gz?

Originally I was trying to import the whole archive, which failed. After unpacking, the container was imported successfully.

No, it is not, for multiple reasons. We have plans to publish the InterSystems API Manager on Docker repositories at a later point, but I can't give you an ETA.

Hi, I didn't have any problems loading the image, but when I run the setup script, I get the following error:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
No content. Either your InterSystems IRIS instance is unlicensed or your license key does not contain an IAM license.
Which license is required for IAM?
We have installed InterSystems IRIS for Health version 2019.3 on this server.
$ iris list
Configuration 'IRISDEV01' (default)
    directory:  /InterSystems
    versionid:  2019.3.0.304.0
    datadir:    /InterSystems
    conf file:  iris.cpf  (SuperServer port = 51773, WebServer = 52773)
    status:     running, since Tue Aug 13 08:27:34 2019
    state:      warn
    product:    InterSystems IRISHealth
Thanks for your help!

Please write to InterSystems to receive your license.

While trying to do this I've used both my public IP address and my VPN IP address, and I get the error
Couldn't reach InterSystems IRIS at xxx.xx.x.xx:52773. One or both of your IP and Port are incorrect.
Strangely I got this to work last week without having the IAM application or user enabled. That said I was able to access the IAM portal and set some stuff up but I'm not sure it was actually truly working.
Any troubleshooting advice?

Michael,
please create a separate question for this comment.