Clear filter
Announcement
Simon Player · Sep 12, 2017
Modern businesses need new kinds of applications — ones that are smarter, faster, and can scale more quickly and cost-effectively to accommodate larger data sets, greater workloads, and more users. With this in mind, we have unveiled InterSystems IRIS Data Platform™, a complete, unified solution that provides a comprehensive and consistent set of capabilities spanning data management, interoperability, transaction processing, and analytics. It redefines high performance for application developers, systems integrators, and end-user organizations that develop and deploy data-rich and mission-critical solutions. InterSystems IRIS Data Platform provides all of the following capabilities in a single unified platform:

Data Management
An ultra-high-performance, horizontally scalable, multi-model database stores and accesses data modeled as objects, schema-free data, relational data, and multi-dimensional arrays in a single, highly efficient representation. It simultaneously processes both transactional and analytic workloads in a single database at very high scale, eliminating latencies between event, insight, and action, and reducing the complexities associated with maintaining multiple databases.

Interoperability
A comprehensive integration platform provides application integration, data coordination, business process orchestration, composite application development, API management, and real-time monitoring and alerting capabilities to support the full spectrum of integration scenarios and requirements.

Analytics
A powerful open analytics platform supports a wide range of analytics, including business intelligence, predictive analytics, distributed big data processing, real-time analytics, and machine learning. It can analyze real-time and batch data simultaneously at scale, and developers can embed analytic processing into business processes and transactional applications, enabling sophisticated programmatic decisions based on real-time analyses. The analytics platform also provides natural language processing capabilities to extract meaning and sentiment from unstructured text, allowing organizations to streamline processes that reference customer emails, knowledge databases, social media content, and other unstructured text data.

Cloud Deployment
Automated "cloud-first" deployment options simplify public cloud, private cloud, on-premise, and virtual machine deployments and updates.

You can learn more about this new data platform by visiting our online learning page.

Simon Player, Director of Development, Data Platforms and TrakCare.

Hi, do we have an IRIS cube like the ones for Caché and Ensemble, or is IRIS different? Please explain how to work with IRIS — I am really confused about it.

Here is a lot of information on InterSystems IRIS to reduce the confusion. Your InterSystems sales rep will have more.
Announcement
Evgeny Shvarov · Apr 13, 2017
Hi, Community! You are very welcome to watch the just-uploaded InterSystems Atelier Welcome video on the InterSystems Developers YouTube Channel! Subscribe and stay tuned!
Question
Jose Sampaio · Sep 19, 2018
Hi community members! I'm looking for any references or experiences using InterSystems technologies with the MQTT (Message Queuing Telemetry Transport) protocol. Thanks in advance! Hi Evgeny! I will take a look at this. Thanks. Hi, Jose! Have you seen this article? Also pinging @Attila.Toth in the hope he can provide the most recent updates.
Announcement
Daniel Kutac · Oct 29, 2018
We had our first meetup of the Prague Meetup for InterSystems Data Platform last Thursday!
As it was our first such event, the attendance was not large, but we believe it was a good start. Those who attended could learn about new features that InterSystems IRIS brings to our partners and customers, as well as listen to a presentation discussing what it takes to migrate from Caché or Ensemble to InterSystems IRIS and eventually containerize their applications.
We all enjoyed an excellent assortment of teas, accompanied by vegetarian food (so you know what you can expect next time :) ).
Attached you find a picture taken at the meetup.
Looking forward to seeing you next time, perhaps in a bigger group!
Dan Kutac

Congratulations! In the past, we had a similar event in Austria named "Tech Talk" that formed a national user community over time. I wish you a lot of success, Robert

Thank you, Robert! Dan
Article
Evgeny Shvarov · Jul 22, 2019
Hi Community!
We've introduced Direct Messages on InterSystems Community.
What's that?
A Direct Message (DM) is a Developer Community feature which lets you send a direct message to any InterSystems Community member you want.
How to send it?
Open the member's page and click "Send Direct Message", like here:
Or, open your account page and open the section "Direct Messages":
In Direct Messages you can see all your conversations and start a new one with Write new message:
A conversation can be between two or more people.
How a member will know about the message?
DC sends an email notification to a member when there is a new DM. Of course, you can configure whether you want to receive DM email notifications.
Privacy
Attention! Direct messages are not private messages. Direct messages are much like posts and comments, with the difference that their visibility is limited to certain people.
E.g. if John sends a DM to Paul, this DM is visible to John, Paul, and the Developer Community admin. But it is hidden from other community members and from public access, e.g. from search crawlers.
So it is safe to exchange contact data that you are comfortable sharing with your recipient and the DC admin.
What About Spam?
Only registered members who have published posts can send direct messages.
Any registered member can receive and answer messages.
So, no spam is expected.
Please report any issues in the Developers Issue Tracker or in the Community Feedback track.
Stay tuned!
Article
Stefan Wittmann · Aug 14, 2019
As you might have heard, we just introduced the InterSystems API Manager (IAM), a new feature of the InterSystems IRIS Data Platform™ that enables you to monitor, control, and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement.
In this article, I will show you how to set up IAM and highlight some of the many capabilities IAM allows you to leverage. InterSystems API Manager brings everything you need
to monitor your HTTP-based API traffic and understand who is using your APIs, which APIs are your most popular, and which could require rework.
to control who is using your APIs and restrict usage in various ways. From simple access restrictions to throttling API traffic and fine-tuning request payloads, you have fine-grained control and can react quickly.
to protect your APIs with central security mechanisms like OAuth2.0 or Key Token Authentication.
to onboard third-party developers and provide them with a superb developer experience right from the start by providing a dedicated Developer Portal for their needs.
to scale your API demands and deliver low-latency responses.
I am excited to give you a first look at IAM, so let's get started right away.
Getting started
IAM is available as a download from the WRC Software Distribution site and is deployed as a docker container of its own. So, make sure to meet the following minimum requirements:
The Docker engine is available. Minimum supported version is 17.04.0+.
The docker-compose CLI tool is available. Minimum supported version is 1.12.0+.
The first step requires you to load the docker image via
docker load -i iam_image.tar
This makes the IAM image available for subsequent use on your machine. IAM runs as a separate container, so you can scale it independently from your InterSystems IRIS backend. Starting IAM requires access to your IRIS instance to load the required license information. The following configuration changes have to happen (a minimal ObjectScript sketch for scripting them follows the list):
Enable the /api/IAM web application
Enable the IAM user
Change the password of the IAM user
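If you prefer to script these changes rather than click through the Management Portal, something along these lines should work. This is only a sketch: it assumes the standard Security.Applications and Security.Users classes in the %SYS namespace, the /api/iam application and IAM user named above, and a placeholder password you must replace.

// Run in the %SYS namespace (for example via: zn "%SYS")
New $Namespace
Set $Namespace = "%SYS"
// Enable the /api/iam web application (listed as /api/IAM in the checklist above)
Set webProps("Enabled") = 1
Set sc = ##class(Security.Applications).Modify("/api/iam", .webProps)
Do:$system.Status.IsError(sc) $system.OBJ.DisplayError(sc)
// Enable the IAM user and set its password (placeholder value - replace with your own)
Set usrProps("Enabled") = 1
Set usrProps("Password") = "<new-IAM-password>"
Set sc = ##class(Security.Users).Modify("IAM", .usrProps)
Do:$system.Status.IsError(sc) $system.OBJ.DisplayError(sc)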
Now we can configure our IAM container. In the distribution tarball, you will find a script for Windows and Unix-based systems named "iam-setup". This script helps you set the environment variables correctly, enabling the IAM container to establish a connection with your InterSystems IRIS instance. Here is an example run from a terminal session on my Mac:
source ./iam-setup.sh
Welcome to the InterSystems IRIS and InterSystems API Manager (IAM) setup script.
This script sets the ISC_IRIS_URL environment variable that is used by the IAM container to get the IAM license key from InterSystems IRIS.
Enter the full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
Enter the IP address for your InterSystems IRIS instance. The IP address has to be accessible from within the IAM container, therefore, do not use "localhost" or "127.0.0.1" if IRIS is running on your local machine. Instead use the public IP address of your local machine. If IRIS is running in a container, use the public IP address of the host environment, not the IP address of the IRIS container.
xxx.xxx.xxx.xxx
Enter the web server port for your InterSystems IRIS instance: 52773
Enter the password for the IAM user for your InterSystems IRIS instance:
Re-enter your password:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
Successfully got IAM license!
The ISC_IRIS_URL environment variable was set to: http://IAM:****************@xxx.xxx.xxx.xxx:52773/api/iam/license
WARNING: The environment variable is set for this shell only!
To start the services, run the following command in the top level directory: docker-compose up -d
To stop the services, run the following command in the top level directory: docker-compose down
URL for the IAM Manager portal: http://localhost:8002
I obfuscated the IP address and you can't see the password I used, but this should give you an idea of how simple the configuration is. All you need to get started is the full image name, the IP address and port of your InterSystems IRIS instance, and the password for your IAM user.
Now you can start your IAM container by executing
docker-compose up -d
This orchestrates the IAM containers and ensures everything is started in the correct order. You can check the status of your containers with the following command:
docker ps
Opening localhost:8002 in my browser brings up the web-based UI:
The global report does not show any throughput yet, as this is a brand new node. We will change that shortly. You can see that IAM supports a concept of workspaces to separate your work into modules and/or teams. Scrolling down and selecting the "default" workspace brings us to the Dashboard for, well, the "default" workspace we will use for our first experiments.
Again, the number of requests for this workspace is still zero, but you get a first look at the major concepts of the API Gateway in the menu on the left side. The first two elements are the most important ones: Services and Routes. A service is an API we want to expose to consumers. Therefore, a REST API in your IRIS instance is considered a service, as is a Google API you might want to leverage. A route decides to which service incoming requests should be routed. Every route has a certain set of conditions, and if the conditions are fulfilled, the request is routed to the associated service. To give you an idea, a route can match the IP or domain of the sender, HTTP methods, parts of the URI, or a combination of these.
Let's create a service targeting our IRIS instance, with the following values:
field    | value           | description
name     | test-iris       | the logical name of this service
host     | xxx.xxx.xxx.xxx | the public IP address of your IRIS instance
port     | 52773           | the port used for HTTP requests
protocol | http            | the protocols you want to support
Keep the default for everything else. Now let's create a route:
field     | value        | description
paths     | /api/atelier | requests with this path will be forwarded to our IRIS instance
protocols | http         | the protocols you want to support
service   | test-iris    | requests matching this route will be forwarded to this service
Again, keep the default for everything else. IAM is listening on port 8000 for incoming requests by default. From now on requests that are sent to http://localhost:8000 and start with the path /api/atelier will be routed to our IRIS instance. Let's give this a try in a REST client (I am using Postman).
Sending a GET request to http://localhost:8000/api/atelier/ indeed returns a response from our IRIS instance. Every request goes through IAM, and metrics like HTTP status code, latency, and consumer (if configured) are monitored. I went ahead and issued a couple more requests (including two requests to non-existing endpoints, like /api/atelier/test/), and you can see them all aggregated in the dashboard:
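If you would rather drive the same test from ObjectScript instead of Postman, a minimal sketch like the one below should work; it assumes IAM is listening on localhost:8000 as configured above.

// Send a GET request through IAM (assumes IAM is listening on localhost:8000)
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost"
Set req.Port = 8000
Set sc = req.Get("/api/atelier/")
Do:$system.Status.IsError(sc) $system.OBJ.DisplayError(sc)
// Print the HTTP status code and the first part of the response body
Write "Status: ", req.HttpResponse.StatusCode, !
Write req.HttpResponse.Data.Read(1000), !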
Working with plugins
Now that we have a basic route in place, we can start to manage the API traffic and add behavior that complements our service. This is where the magic happens.
The most common way to enforce a certain behavior is by adding a plugin. Plugins encapsulate a certain piece of functionality and can be attached to different scopes in IAM: either globally, affecting the whole runtime, or to just parts of it, like a single user (group), a service, or a route. We will start by adding a Rate Limiting plugin to our route. What we need to establish the link between the plugin and the route is the unique ID of the route. You can look it up by viewing the details of the route.
If you are following this article step by step, the ID of your route will be different. Copy the ID for the next step.
Click on Plugins on the left sidebar menu. Usually, you see all active plugins on this screen, but as this node is relatively new, there are no active plugins yet. So, move on by selecting "Add New Plugin".
The plugin we are after is in the category "Traffic Control" and is named "Rate Limiting". Select it. There are quite a few fields that you can define here as plugins are very flexible, but we only care about two fields:
field         | value                                | description
route_id      | d6a97e5b-7da6-4c98-bc69-6a09263039a8 | paste the ID of your route here
config.minute | 5                                    | number of calls allowed per minute
That's it. The plugin is configured and active. You have probably seen that we can pick from a variety of time intervals, like minutes, hours, or days, but I deliberately used minutes, as this allows us to easily understand the impact of this plugin.
If you send the same request again in Postman, you will notice that the response comes back with two additional headers: X-RateLimit-Limit-minute (value 5) and X-RateLimit-Remaining-minute (value 4). This tells the client that it can make up to 5 calls per minute and has 4 more requests available in the current time interval.
If you keep making the same request over and over again, you will eventually run out of your available quota and instead get back an HTTP status code 429 with the following payload:
Wait until the minute is over and you will be able to get through again. This is a pretty handy mechanism allowing you to achieve a couple of things:
Protect your backend from spikes
Set a transparent expectation for clients about how many calls they are allowed to make against your services
Potentially monetize based on API traffic by introducing tiers (e.g. 100 calls per hour at the bronze level and unlimited with gold)
You can set values for different time intervals, thereby smoothing out API traffic over a certain period. Let's say you allow 600 calls per hour for a certain route. That's 10 calls per minute on average. But you are not preventing clients from using up all of their 600 calls in the very first minute of the hour. Maybe that's what you want. Maybe you would like to ensure that the load is distributed more evenly over the hour. By setting the config.minute field to 20, you ensure that your users make no more than 20 calls per minute AND no more than 600 per hour. This allows some spikes at the minute level (the average would be only 10 calls per minute), but users can't use up the hourly quota in a single minute; it will take them at least 30 minutes even if they hit your system at full capacity. Clients will receive additional headers for each configured time interval, e.g.:
header                       | value
X-RateLimit-Limit-hour       | 600
X-RateLimit-Remaining-hour   | 595
X-RateLimit-Limit-minute     | 20
X-RateLimit-Remaining-minute | 16
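To observe the rate limiting programmatically, a small ObjectScript sketch like the one below issues several requests through IAM and prints the remaining per-minute quota; it assumes the localhost:8000 listener, the /api/atelier route, and the header names shown above.

// Issue several requests through IAM and watch the per-minute quota shrink
For i = 1:1:7 {
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "localhost"
    Set req.Port = 8000
    Do req.Get("/api/atelier/")
    // A 429 status code means the limit for the current interval was exceeded
    Write "Request ", i, " status=", req.HttpResponse.StatusCode, " remaining/minute=", req.HttpResponse.GetHeader("X-RateLimit-Remaining-minute"), !
}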
Of course, there are many different ways to configure your rate-limits depending on what you want to achieve.
I will stop at this point as this is probably enough for a first article about InterSystems API Manager. There are plenty more things you can do with IAM; we've just used one out of more than 40 plugins and haven't even used all of the core concepts yet! Here are a couple of things you can do as well, which I might cover in future articles:
Add a central authentication mechanism for all your services
Scale out by load-balancing requests to multiple targets that support the same set of APIs
Introduce new features or bugfixes to a smaller audience and monitor how it goes before you release it to everyone
Onboard internal and external developers by providing them a dedicated and customizable developer portal documenting all APIs they have access to
Cache commonly requested responses to reduce response latency and the load on the service systems
So, let's give IAM a try and let me know what you think in the comments below. We worked hard to bring this feature to you and are eager to learn what challenges you overcome with this technology. Stay tuned...
More resources
The official Press Release can be found here: InterSystems IRIS Data Platform 2019.2 introduces API Management capabilities
A short animated overview video: What is InterSystems API Manager
An 8-minute video walking you through some of the key highlights: Introducing InterSystems API Manager
The documentation is part of the regular IRIS documentation: InterSystems API Manager Documentation
Nice to have a clear step-by-step example to follow! Hi @Stefan.Wittmann!
Is the IRIS API Manager published on InterSystems Docker Hub? `docker load -I iam_image.tar` did not work for me.
I used `docker import aim_image.tar iam` instead and it worked. Have you unpacked the IAM-0.34-1-1.tar.gz?
Originally I was trying to import the whole archive, which failed. After unpacking, the container image was imported successfully. No, it is not, for multiple reasons. We have plans to publish the InterSystems API Manager on Docker repositories at a later point, but I can't give you an ETA. Hi, I didn't have any problems loading the image, but when I run the setup script, I get the following error:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
No content. Either your InterSystems IRIS instance is unlicensed or your license key does not contain an IAM license.
Which license is required for IAM?
We have installed InterSystems IRIS for Health version 2019.3 on this server.
$ iris list
Configuration 'IRISDEV01' (default)
    directory:  /InterSystems
    versionid:  2019.3.0.304.0
    datadir:    /InterSystems
    conf file:  iris.cpf (SuperServer port = 51773, WebServer = 52773)
    status:     running, since Tue Aug 13 08:27:34 2019
    state:      warn
    product:    InterSystems IRISHealth
Thanks for your help! Please write to InterSystems to receive your license. While trying to do this I've used both my public IP address and my VPN IP address and I get the error
Couldn't reach InterSystems IRIS at xxx.xx.x.xx:52773. One or both of your IP and Port are incorrect.
Strangely, I got this to work last week without having the IAM application or user enabled. That said, I was able to access the IAM portal and set some things up, but I'm not sure it was actually working.
Any troubleshooting advice? Michael,
please create a separate question for this comment. Hello @Stefan.Wittmann, thanks for the detailed tutorial. Is there a free Community version of API Manager that can be used with the IRIS Community edition?
Thank you. The documentation link at the end of the post has changed. Here's the current one.
Article
Sergey Kamenev · Nov 11, 2019
InterSystems IRIS supports a unique data structure, called globals, for information storage. Essentially, globals are persistent arrays with multi-level indices, having several extra capabilities—transactions, quick traversal of tree structures, and a programming language known as ObjectScript.
I'd note that for the remainder of the article, or at least the code samples, we'll assume you have familiarised yourself with the basics of globals:
Globals Are Magic Swords For Managing Data. Part 1.
Globals - Magic swords for storing data. Trees. Part 2.
Globals - Magic swords for storing data. Sparse arrays. Part 3.
Globals are completely different structures for storing data than the usual tables, and operate at a much lower level. And that begs the question, how would transactions look when working with globals, and what peculiarities might you encounter in the effort?
We know from relational database theory that a good transaction implementation needs to pass the ACID test (see ACID in Wikipedia).
Atomicity: All changes made in the transaction are recorded, or none at all. See Atomicity (database systems) in Wikipedia.
Consistency: After the transaction is completed, the logical state of the database should be internally consistent. In many ways, this requirement applies to the programmer, but in the case of SQL databases, it also applies to foreign keys.
Isolation: Transactions running in parallel shouldn’t affect one another.
Durability: After successful completion of the transaction, low-level problems (such as a power failure) should not affect the data changed by the transaction.
Globals are non-relational data structures. They were designed to support ultra-fast work on hardware with a minimal footprint. Let's look at how transactions are implemented in globals, using the IRIS Docker image.
1. Atomicity
Consider the situation when 3 values must be saved in database together, or none of them should be recorded.
The easiest way to check atomicity is to enter the following code in terminal:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TCOMMIT
Then conclude with:
ZWRITE ^a
The result should be this:
^a(1)=1
^a(2)=2
^a(3)=3
As expected, Atomicity is observed. But now let's complicate the task by introducing an error and see how the transaction is saved—partially, or not at all. We’ll start checking atomicity as we did before, like so:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
But this time we’ll forcibly stop the container using the command docker kill my-iris, which is almost equivalent to a forced power off as it sends a SIGKILL (halt process immediately) signal. After restarting the container, we check the contents of our global to see what happened. Maybe the transaction has been partially saved?
ZWRITE ^a
Nothing is output.
No, nothing has been saved. So, in the case of accidental server stop, the IRIS database will guarantee the atomicity of your transactions.
But what if we want to cancel changes intentionally? So now let's try this with the rollback command, as follows:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TROLLBACK 1
ZWRITE ^a
Nothing is output.
Once again, nothing has been saved.
2. Consistency
Recall that globals are lower-level structures for storing data than relational tables, and with a globals database, indices are also stored as globals. Thus, to meet the requirement of consistency, you need to include an index change in the same transaction as a global node value change.
Say, for example, we have a global ^person, in which we store personal data using the social security number (SSN) as the key:
^person(1234567, "firstname") = "Sergey"
^person(1234567, "lastname") = "Kamenev"
^person(1234567, "phone") = "+74995555555"
...
We’ve created an ^index key to enable rapid search by last or last and first names, as follows:
^index("Kamenev", "Sergey", 1234567) = 1
To keep the database consistent, we need to add persons like this:
TSTART
Set ^person(1234567, "firstname") = "Sergey"
Set ^person(1234567, "lastname") = "Kamenev"
Set ^person(1234567, "phone") = "+74995555555"
Set ^index("Kamenev", "Sergey", 1234567) = 1
TCOMMIT
Accordingly, when deleting a person, we must use the transaction:
TSTART
Kill ^person(1234567)
Kill ^index("Kamenev", "Sergey", 1234567)
TCOMMIT
In other words, fulfilling the consistency requirement for your application logic is entirely up to the programmer when working with a low-level storage format such as globals.
Luckily, IRIS offers the commands to organise your transactions and deliver Consistency guarantees for your applications. When using SQL, IRIS will use these commands under the hood to ensure consistency of its underlying globals data structures when performing INSERT, UPDATE, and DELETE statements. Of course, IRIS SQL also offers corresponding SQL commands for starting and stopping transactions to leverage in your (SQL) application logic.
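For illustration, the paired updates above can be wrapped in a small helper method that rolls everything back on any error. This is only a sketch based on the ^person/^index structure shown earlier; the Utils.PersonStore class name is made up for this example.

Class Utils.PersonStore [ Abstract ]
{

/// Add a person and the matching index entry in a single transaction.
/// Both globals are updated, or neither is.
ClassMethod AddPerson(ssn As %Integer, first As %String, last As %String, phone As %String) As %Status
{
    TSTART
    Try {
        Set ^person(ssn, "firstname") = first
        Set ^person(ssn, "lastname") = last
        Set ^person(ssn, "phone") = phone
        Set ^index(last, first, ssn) = 1
        TCOMMIT
    } Catch ex {
        // Roll back only the current (innermost) transaction level
        TROLLBACK 1
        Return ex.AsStatus()
    }
    Return $$$OK
}

}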
3. Isolation
Here’s where things get wild. Suppose many users are working on the same database at the same time, changing the same data. The situation is comparable to when many developers are working with the same code repository and trying to commit changes to many files at the same time.
The database needs to keep up with everything in real time. Given that serious companies typically have a person responsible for version control—merging branches, managing conflict resolution, and so forth—and that the database needs to take care of this in real time, the complexity of the problem and the importance of correctly designing the database and the code that serves it both become self-evident.
The database can’t understand the meaning of actions performed by users and try to prevent conflicts when they’re working on the same data. It can only cancel one transaction that contradicts another or execute them sequentially.
Moreover, as a transaction is executing (before the commit), the state of the database may be inconsistent. Other transactions should not have access to the inconsistent database state. In relational databases, this is achieved in many ways, such as by creating snapshots or using multi-versioned rows.
When transactions execute in parallel, it’s important that they not interfere with each other. This is what isolation is all about.
SQL defines four levels of isolation, in order of increasing rigor. They are:
READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
Let's consider each level separately. Note that the cost of implementing each level grows almost exponentially as you move up the stack.
READ UNCOMMITTED is the lowest level of isolation, but it’s also the fastest. Transactions can read the changes made by other transactions.
READ COMMITTED is the next level of isolation and represents a compromise. Transactions can’t read each other's changes before a commit, but can read any changes after a commit.
Say we have a long-running transaction (T1), during which commits have happened in transactions T2, T3... Tn while working on the same data as T1. In such cases, each time we request data in T1, we may well obtain a different result. This is called a non-repeatable read.
REPEATABLE READ is the next level of isolation, in which we no longer have non-repeatable reads because a snapshot of the result is taken each time we request to read data. The snapshot is used if the same data is requested again during the same transaction. However, at this isolation level, it's possible that what will be read is phantom data: new rows that were added by transactions committed in parallel.
SERIALIZABLE is the highest level of isolation. It’s characterized by the fact that any data used in a transaction (whether read or changed) becomes accessible to other transactions only after the first transaction has finished.
First, let’s see whether there’s isolation of operations between threads with transactions and threads without transactions. Open two terminal windows and enter the following:
Terminal 1 (no transaction):

Kill ^t

Terminal 2 (inside a transaction):

TSTART
Set ^t(1)=2

Terminal 1:

Write ^t(1)
2
There’s no isolation. One thread sees what the second one does when it opens a transaction.
Now let's see whether transactions in different threads can see what’s happening inside. Open two terminal windows and start two transactions in parallel.
Terminal 1:

Kill ^t
TSTART

Terminal 2:

TSTART
Set ^t(1)=2

Terminal 1:

Write ^t(1)
2
A 2 appears on the screen. What we have here is the simplest (but also the fastest) isolation level: READ UNCOMMITTED.
In principle, this is what we expect from a low-level data representation such as globals, which always prioritize speed. IRIS SQL provides different transaction isolation levels to choose from, but what if we need a higher level of isolation when working with globals directly?
Here we need to think about what isolation levels are actually for and how they work. For instance, lower levels of isolation are compromises designed to speed up database operations.
The highest isolation level, SERIALIZABLE, ensures that the result of transactions executed in parallel is equivalent to the result of executing them serially. This guarantees there will be no collisions. We can achieve this with properly used locks in ObjectScript, which can be applied in multiple ways. This means you can create regular, incremental, or multiple locks using the LOCK command.
Let's see how to use locks to achieve different levels of isolation. In ObjectScript, you use the LOCK operator. This operator permits not just exclusive locks, which are necessary for changing data, but also what are called shared locks. These shared locks can be accessed by several threads at once to read data that won’t be changed by other processes during the reading process.
For more details about locking, see the article “Locking and Concurrency Control”. To learn about two-phase locking, see the article "Two-phase locking" on Wikipedia.
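As a quick syntax reference for what follows, here is a small sketch of the LOCK forms used in this section: exclusive and shared (#"S") locks, each with a timeout, after which $TEST indicates whether the lock was acquired. The node and timeout values are only examples.

// Exclusive lock on a node, waiting at most 5 seconds
LOCK +^person(123):5
If '$TEST { Write "Could not obtain the exclusive lock", ! }

// Shared lock on the same node - several readers may hold it at once
LOCK +^person(123)#"S":5
If '$TEST { Write "Could not obtain the shared lock", ! }

// Release the locks (the modifier must match the one used when acquiring)
LOCK -^person(123)#"S"
LOCK -^person(123)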
The difficulty is that the state of the database may be inconsistent during the transaction, with the inconsistent data visible to other processes. How can this be avoided? For this example, we’ll use locks to create visibility windows within which the state of the database can be consistent. Access to any of these visibility windows will be through a lock.
Shared locks on the same data are reusable—several processes can take them. These locks prevent other processes from changing data. That is, they’re used to form windows of a consistent database state.
Exclusive locks, on the other hand, are used when you’re modifying data—only one process can take such a lock.
Exclusive locking can be employed in two scenarios. First, any process can take an exclusive lock if the data has no locks on it. Second, if the data already has shared locks, only the process that holds a shared lock on the data and was the first to request an exclusive lock can take it.

The narrower the visibility window, the longer the wait for other processes becomes—but the more consistent the state of the database in it will be.
READ COMMITTED ensures that we see only committed data from other threads. If data in another transaction hasn't yet been committed, we see the old version. This lets us parallelize the work instead of waiting for a lock to be released.
In IRIS, you can't see an old version of the data without using special tricks, so we'll have to make do with locks. We need to use shared locks to permit data to be read only at points where it’s consistent.
Let's say we have a database of users, ^person, who transfer money from one person to another. Here’s the point at which money is transferred from person 123 to person 242:
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
The point where the amount is requested for person 123 before the deduction should have an exclusive lock (by default):
LOCK +^person(123)
Write ^person(123)
But if we need to display the account status in the user's personal account, we can use a shared lock, or none at all:
LOCK +^person(123)#"S"
Write ^person(123)
LOCK -^person(123)#"S"
However, if we accept that database operations are carried out virtually instantaneously (remember that globals are a much lower-level structure than a relational table), then this level is no longer as necessary, in favor of higher isolation levels.
Full example, for READ COMMITTED:
LOCK +^person(123)#"S", +^person(242)#"S"
Read data (concurrent committed transactions can change the data)
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
Read data (concurrent committed transactions can change the data)
LOCK -^person(123)#"S", -^person(242)#"S"
REPEATABLE READ is the second-highest level of isolation. At this level we accept that data may be read several times with the same results in one transaction, but may be changed by parallel transactions.
The easiest way to ensure a REPEATABLE READ is to take an exclusive lock on the data, which automatically turns this isolation level into a SERIALIZABLE one.
LOCK +^person(123, "amount")
read ^person(123, "amount")
other operations (parallel processes try to change ^person(123, "amount"), but can't)
change ^person(123, "amount")
read ^person(123, "amount")
LOCK -^person(123, "amount")
If locks are separated by commas, they are taken in sequence. But they will be taken atomically, all at once, if they’re listed like this:
LOCK +(^person(123),^person(242))
SERIALIZABLE is the highest level of isolation and the most costly. When working with classic locks like we did in the above examples, we have to set the locks in such a way that all transactions with data in common end up being performed serially. For this approach, most of the locks should be exclusive and taken on the most granular global nodes possible, for performance.
If we’re talking about deducting funds from a ^person global, then SERIALIZABLE is the only acceptable level. Money spent needs to be strictly serial, otherwise it’s possible to spend the same amount several times.
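Putting it together, a SERIALIZABLE transfer might look like the following sketch: both nodes are locked exclusively and atomically using the parenthesized form shown above, the changes run inside a transaction, and the locks are released afterwards. The account numbers, amount, and timeout are example values only.

// Transfer an example amount between person 123 and person 242
Set amount = 100

// Take both exclusive locks atomically, waiting at most 10 seconds
LOCK +(^person(123, "amount"), ^person(242, "amount")):10
If '$TEST {
    Write "Accounts are busy, try again later", !
} Else {
    TSTART
    Set ^person(123, "amount") = ^person(123, "amount") - amount
    Set ^person(242, "amount") = ^person(242, "amount") + amount
    TCOMMIT
    // Release both locks only after the transaction has been committed
    LOCK -(^person(123, "amount"), ^person(242, "amount"))
}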
4. Durability
I conducted tests with a hard cut-off of the container using the docker kill my-iris command. The database stood up well to these tests. No problems were identified.
Tools to manage globals and locks
You may find useful the following tools in IRIS Management portal:
View and manage locks.
View and manage globals.
Conclusion
InterSystems IRIS has support for transactions using globals, which are atomic and durable. To ensure database consistency with globals, some programming effort and the use of transactions are necessary, since there are no complex built-in constructions like foreign keys.
Globals without locks are equivalent to the READ UNCOMMITTED level of isolation, but this can be raised to the SERIALIZABLE level using locks. The correctness and transaction speed achievable with globals depend considerably on the programmer's skill and intent. The more widely shared locks are used when reading data, the higher the isolation level. And the more narrowly exclusive locks are used, the greater the speed. Sergey, it's great that you are writing articles for newbies; nevertheless, you don't explicitly mark it as such. Just a quick note on your samples: the ZWRITE command never returns <UNDEFINED> in IRIS, so to check whether a global exists one should use something like
if '$data(^A) { write "Global ^A is UNDEFINED",! }
I'm sure that you are aware of it; just thinking of novices who should not be confused. In the example for READ UNCOMMITTED, after the second (right-hand) Terminal session sets ^t(1)=2, when the first (left-hand) Terminal session writes ^t(1), the example shows/states that a "3" appears, but that's wrong; it should be a "2". Since transactions can be nested, and TROLLBACK rolls back all transactions, it's best practice to pair each TSTART with TROLLBACK 1, which will roll back only the current transaction. Thanks! I fixed the error. Thanks! You're right. Thanks! I fixed the error. As you are talking about locks and transactions, and as others have noted, you can nest transactions, so it might be worth warning people about locks inside transactions and the fact that the unlock will not take place until the TCOMMIT.
These can cause issues, especially where a method using transactions and locks calls another that does the same.
Announcement
Anastasia Dyubaylo · Nov 26, 2019
Hi Community,
We are pleased to invite you to the InterSystems Meetup in Moscow on December 10, 2019!
InterSystems Moscow Meetup is a pre-New Year meeting for users and developers on InterSystems technologies. The meetup will be devoted to the InterSystems IRIS Data Platform.
Please check out the agenda:
📌 18:40 Registration. Welcome coffee
📌 19:00 Review presentation on the InterSystems Russia news for 2019
📌 19:15 Technology News on InterSystems IRIS by @Eduard.Lebedyuk, InterSystems Sales Engineer
REST, JSON, IAM
PEX, Native API, etc.
📌 20:00 Coffee Break
📌 20:20 Migration to InterSystems IRIS
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS by @Evgeny.Shvarov, Startups and Community Manager
📌 21:15 Open Exchange and other resources & services for InterSystems Developers by @Evgeny.Shvarov, Startups and Community Manager
📌 21:30 Free time. Drinks and snacks
The agenda is full of interesting stuff. We look forward to seeing you!
So, remember:
🗓 Date: December 10, 2019
⏱ Time: 18:40-22:00
📍 Venue: Loft-Ministerstvo, Stoleshnikov Lane 6/3, Moscow
✅ Registration: Register for FREE here
Space is limited, so register today to secure your place. Admission is free; registration is mandatory for attendees.
Save your seat today! Hey!
The agenda is updated - I'll do the following sessions:
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS
📌 21:15 Open Exchange and other resources & services for InterSystems Developers
Come to chat!
Announcement
Andreas Dieckow · Oct 24, 2019
InterSystems Atelier has been tested with OpenJDK 8. The InterSystems Eclipse plug-in is currently available for Eclipse Photon (4.8), which requires and works with Java 8.
Announcement
Jon Jensen · May 23, 2019
InterSystems Global Summit 2019
Boston Marriott Copley Place
September 22-25, 2019
Registration is now open!

InterSystems Global Summit 2019 is the premier event for the InterSystems technology community – a gathering of industry leaders and developers at the forefront of their respective industries. This event attracts a wide range of attendees, from C-level executives, top subject matter experts and visionary leaders, to managers, directors, and developers. Attendees gather to network with peers, connect with InterSystems partners, learn best practices, and get a firsthand look at upcoming features and future innovations from InterSystems.

Global Summit will be a three-and-a-half-day event held at the Boston Marriott Copley Place, September 22-25, 2019. Nestled in the bustling cosmopolitan neighborhood of Boston's Back Bay, the Marriott Hotel is conveniently located next to the Prudential Center, just minutes from historic Trinity Church and Boston Common, and walking distance to Fenway Park and the Charles River.

Global Summit abounds with opportunities to connect with the larger InterSystems Community:

Tech Exchange - talk with our developers about current and future product capabilities
Partner Pavilion - discover products and services to help you build applications that matter
HealthShare User Group - share your stories & hear what's new and next for InterSystems HealthShare
Experience Lab - get hands-on with our latest innovations
Personalized Training - Learning Services experts available for one-on-one consultations
And much more...

Register Now!
Announcement
Andreas Dieckow · Jun 28, 2019
Conversion Sequence step 4 (see table below)

Over the last few months, we have made changes to InterSystems IRIS to make it easier for you to move from Caché/Ensemble (C/E) to InterSystems IRIS. The most significant adjustments are the re-introduction of non-Unicode databases and the in-place conversion. InterSystems IRIS now fully supports non-Unicode databases with all the functionality that already exists with Caché. The in-place conversion supports an upgrade-like process to install InterSystems IRIS right on top of your existing C/E deployments. We call it a "conversion" because it transforms your C/E instance into a true InterSystems IRIS instance.

InterSystems is excited to invite you to our in-place conversion field test program. This program will be active until the end of July and provides you with early access for testing and experiencing your move from C/E to InterSystems IRIS. We have already concluded a limited pre-field test and are pleased that all customers have been able to successfully move their application and convert their existing instances to InterSystems IRIS.

What will you need to participate? InterSystems will give you access to two documents, a special kit of InterSystems IRIS that offers the features for this field test, and of course a license key.

InterSystems IRIS Adoption Guide
The journey begins here, where you can discover the differences between the two product lines and learn all the information you need to port your application to InterSystems IRIS. Once you have your application running on InterSystems IRIS, you can move to the next step. By the way, you don't need to do anything special to activate the non-Unicode aspect.

InterSystems IRIS Conversion Guide
This document describes, in great detail, all the aspects and considerations for converting a single instance or instances that are part of mirror configurations.

The guides, InterSystems IRIS kit, and license key are available from our WRC download server. Look for the files that have the word "conversion" in the name.

Support for the in-place conversion and non-Unicode support will be released with InterSystems IRIS 2019.1.1 before the summer is over. Please do not use the field test kit to convert production installations.

Please send all feedback to conversionft@intersystems.com, which will route your message straight to the team that will assist you.

We hope to engage with you during this field test and include your feedback in the official release.

IRIS Adoption Initiative - Sequencing

Step | Migration or Conversion from/to                     | Status
1    | Migration to IRIS or IRIS for Health                | Available today
2    | Migration of TrakCare to IRIS for Health            | Available today
3    | In-place conversion to HealthShare Health Connect   | Available today; contact InterSystems for details
4    | In-place conversion to IRIS                         | Limited Field Test: completed; Public Field Test: June 27, 2019; Expected completion: July 31, 2019
5    | In-place conversion to IRIS for Health              | Q3 2019
6    | In-place conversion to HealthShare {IE, PD, HI, ...} | Q4 2019

In-place conversions are supported for Caché and Ensemble versions 2016.2 and later.

I love IRIS and am already using it in a couple of solutions. It is even better to be able to adopt IRIS across the full product line. Keep up the good work!

IRIS is a very good product, with many new capabilities. However, as IRIS is available on 64-bit OS only, customers running Caché/Ensemble on (old) 32-bit OS will have to migrate those to a 64-bit OS before they can migrate to IRIS.

That is correct, InterSystems IRIS will not run on 32-bit OS systems. Thank you, Marco.
Andreas: I'm not technical, so please forgive my inability to answer this question from reading your post. Is any of the above relevant to a conversion from MapR to IRIS?

No, the focus of this conversion is to enable existing C/E customers to move to InterSystems IRIS.

Another concern is if your current Caché/Ensemble installation makes external calls using COM objects (proxy classes to external DLLs). It looks like this functionality was totally removed in IRIS:
- The "activate wizard" in Studio no longer exists
- The %Activate package is also removed

Hello, is there any difference in the in-place conversion feature between different HealthShare instances? I am trying to do an in-place conversion of HealthShare 2018 to IRIS; the installer is supposed to ask me which instance (Caché, Ensemble, HealthShare, ...) I want to convert, but it doesn't. It just goes on to do a full install of a new IRIS. Am I missing something?

Andreas,
What is the recommendation for migrating to IRIS if an Ensemble license is like this?
Elite, Multi-Server, Platform Independent
What happens with the license, since it is not compatible with IRIS? I suggest you work with your Sales Rep to discuss licensing options.
As a licensed customer you can go to evaluation.intersystems.com in order to grab an InterSystems IRIS evaluation key for trying out the product.
Question
Ponnumani Gurusamy · Jul 7, 2019
Hi Team, is it possible to add InterSystems Caché certification to the rewards list of InterSystems Global Masters? For example, if a developer or Global Masters user has 10,000 points, we could offer them the chance to take the InterSystems Caché/Ensemble/IRIS certification exam. That way, more developers would try to take the exam and get certified. This would be very useful for developers' careers, and we also have a lot of Caché developers in the market. Please correct me if I am wrong. Thanks, Ponnumani Gurusamy. Hi Ponnumani!
Here are InterSystems certification offers
Announcement
Josh Lubarr · Oct 19, 2017
Hi, Community! We are pleased to invite you to participate in the InterSystems Documentation Satisfaction survey. As part of ongoing efforts to make our content more usable and helpful, Learning Services wants your feedback about InterSystems documentation. The survey covers many different areas of documentation, and you can complete it in about five minutes. The deadline for responses is October 30. Also, responses are anonymous. The survey is over, thanks! You can also earn 300 points in Global Masters for completing the survey.
Article
Eduard Lebedyuk · Feb 5, 2016
Class queries in InterSystems IRIS (and Caché, Ensemble, HealthShare) are a useful tool that separates SQL queries from Caché Object Script code. Basically, it works like this: suppose you want to use the same SQL query with different arguments in several different places. In this case you can avoid code duplication by declaring the query body as a class query and then calling this query by name. This approach is also convenient for custom queries, in which the task of obtaining the next row is defined by a developer. Sounds interesting? Then read on!

Basic class queries

Simply put, basic class queries allow you to represent SQL SELECT queries. The SQL optimizer and compiler handle them just as they would standard SQL queries, but they are more convenient when it comes to executing them from Caché Object Script context. They are declared as Query items in class definitions (similar to Method or Property) in the following way:

Type: %SQLQuery
All arguments of your SQL query must be listed in the argument list
Query type: SELECT
Use the colon to access each argument (similar to static SQL)
Define the ROWSPEC parameter, which contains information about the names and data types of the output results along with the order of fields
(Optional) Define the CONTAINID parameter, which corresponds to the numeric position of the field containing the ID. If you don't need to return an ID, don't assign any value to CONTAINID
(Optional) Define the COMPILEMODE parameter, which corresponds to the similar parameter in static SQL and specifies when the SQL expression must be compiled. When this parameter is set to IMMEDIATE (the default), the query is compiled together with the class. When it is set to DYNAMIC, the query is compiled before its first execution (similar to dynamic SQL)
(Optional) Define the SELECTMODE parameter, which specifies the format of the query results
Add the SqlProc keyword if you want to call this query as an SQL procedure
Set the SqlName keyword if you want to rename the query. The default name of a query in SQL context is: PackageName.ClassName_QueryName
Caché Studio provides a built-in wizard for creating class queries.

Here is a sample definition of the Sample.Person class with the ByName query, which returns all user names that begin with a specified letter:
Class Sample.Person Extends %Persistent
{
Property Name As %String;
Property DOB As %Date;
Property SSN As %String;
Query ByName(name As %String = "") As %SQLQuery
(ROWSPEC="ID:%Integer,Name:%String,DOB:%Date,SSN:%String",
CONTAINID = 1, SELECTMODE = "RUNTIME",
COMPILEMODE = "IMMEDIATE") [ SqlName = SP_Sample_By_Name, SqlProc ]
{
SELECT ID, Name, DOB, SSN
FROM Sample.Person
WHERE (Name %STARTSWITH :name)
ORDER BY Name
}
}
You can call this query from Caché Object Script in the following way:
Set statement=##class(%SQL.Statement).%New()
Set status=statement.%PrepareClassQuery("Sample.Person","ByName")
If $$$ISERR(status) {
Do $system.OBJ.DisplayError(status)
}
Set resultset=statement.%Execute("A")
While resultset.%Next() {
Write !, resultset.%Get("Name")
}
Alternatively, you can obtain a resultset using the automatically generated method queryNameFunc:
Set resultset = ##class(Sample.Person).ByNameFunc("A")
While resultset.%Next() {
Write !, resultset.%Get("Name")
}
This query can also be called from SQL context in these two ways:
Call Sample.SP_Sample_By_Name('A')
Select * from Sample.SP_Sample_By_Name('A')
This class can be found in the default Caché SAMPLES namespace. And that's all about simple queries. Now let's proceed to custom ones.
Custom class queries
Though basic class queries work fine in most cases, sometimes it is necessary to execute full control over the query behavior in applications, e.g.:
Sophisticated selection criteria. Since in custom queries you implement a Caché Object Script method that returns the next row on your own, these criteria can be as sophisticated as you require.
If data is accessible only via API in a format that you don't want to use
If data is stored in globals (without classes)
If you need to escalate rights in order to access data
If you need to call an external API in order to access data
If you need to gain access to the file system in order to access data
If you need to perform additional operations before running the query (e.g. establish a connection, check permissions, etc.)
So, how do you create custom class queries? First of all, you should define 4 methods which implement the entire workflow for your query, from initialization to destruction:
queryName — provides information about a query (similar to basic class queries)
queryNameExecute — constructs a query
queryNameFetch — obtains the next row result of a query
queryNameClose — destructs a query
Now let's analyze these methods in more detail.
The queryName method
The queryName method represents information about a query.
Type: %Query
Leave the body blank
Define the ROWSPEC parameter, which contains the information about names and data types of the output results along with the field order
(Optional) Define the CONTAINID parameter, which corresponds to the numeric position of the field containing the ID. If you don't return an ID, don't assign any value to CONTAINID
For example, let's create the AllRecords query (queryName = AllRecords, and the method is simply called AllRecords) which will be outputting all instances of the new persistent class Utils.CustomQuery, one by one. First, let's create a new persistent class Utils.CustomQuery:
Class Utils.CustomQuery Extends (%Persistent, %Populate)
{
Property Prop1 As %String;
Property Prop2 As %Integer;
}
Now let's write the AllRecords query:
Query AllRecords() As %Query(CONTAINID = 1, ROWSPEC = "Id:%String,Prop1:%String,Prop2:%Integer") [ SqlName = AllRecords, SqlProc ]
{
}
The queryNameExecute method

The queryNameExecute method fully initializes a query. The signature of this method is as follows:
ClassMethod queryNameExecute(ByRef qHandle As %Binary, args) As %Status
where:
qHandle is used for communication with the other methods of the query implementation
This method must set qHandle to the state which will then be passed to the queryNameFetch method
qHandle can be set to an OREF, a variable, or a multi-dimensional variable
args are additional parameters passed to the query. You can add as many args as you need (or don't use them at all)
The method must return the query initialization status
But let's get back to our example. You are free to iterate through the extent in multiple ways (I will describe the basic working approaches for custom queries below), but as for this example let's iterate through the global using the $Order function. In this case, qHandle will be storing the current ID, and since we don't need any additional arguments, the arg argument is not required. The result looks like this:
ClassMethod AllRecordsExecute(ByRef qHandle As %Binary) As %Status
{
    Set qHandle = ""
    Quit $$$OK
}
The queryNameFetch method

The queryNameFetch method returns a single result in $List form. The signature of this method is as follows:
ClassMethod queryNameFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = queryNameExecute ]
where:
qHandle is used for communication with the other methods of the query implementation
When the query is executed, qHandle holds the values assigned by queryNameExecute or by the previous call of queryNameFetch
Row will be set either to a %List value or to an empty string if all data has been processed
AtEnd must be set to 1 once the end of the data is reached
The PlaceAfter keyword identifies the method's position in the int code. The "Fetch" method must be positioned after the "Execute" method, but this is important only for static SQL, i.e. cursors inside queries
In general, the following operations are performed within this method:
Check whether we've reached the end of the data
If there is still some data left: create a new %List and assign it to the Row variable
Otherwise, set AtEnd to 1
Prepare qHandle for the next fetch
Return the status
This is how it will look like in our example:
ClassMethod AllRecordsFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status {
#; Iterating through ^Utils.CustomQueryD
#; Writing the next id to qHandle and writing the global's value with the new id into val
Set qHandle = $Order(^Utils.CustomQueryD(qHandle),1,val)
#; Checking whether there is any more data left
If qHandle = "" {
Set AtEnd = 1
Set Row = ""
Quit $$$OK
}
#; If not, create %List
#; val = $Lb("", Prop1, Prop2) see Storage definition
#; Row =$lb(Id,Prop1, Prop2) see ROWSPEC for the AllRecords request
Set Row = $Lb(qHandle, $Lg(val,2), $Lg(val,3))
Quit $$$OK
}
The queryNameClose method

The queryNameClose method terminates the query once all the data is obtained. The signature of this method is as follows:
ClassMethod queryNameClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = queryNameFetch ]
where:
Caché executes this method after the final call to the queryNameFetch method
In other words, this is a query destructor
Therefore, you should dispose of all SQL cursors, queries, and local variables in its implementation
The method returns the current status
In our example, we have to delete the local variable qHandle:
ClassMethod AllRecordsClose(ByRef qHandle As %Binary) As %Status {
Kill qHandle
Quit $$$OK
}
And here we are! Once you compile the class, you will be able to use the AllRecords query from %SQL.Statement, just like the basic class queries.
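For example, it can be prepared and iterated the same way as the ByName query earlier in this article; this short sketch assumes the Utils.CustomQuery class above is compiled and contains data:

Set statement = ##class(%SQL.Statement).%New()
Set status = statement.%PrepareClassQuery("Utils.CustomQuery", "AllRecords")
If $$$ISERR(status) {
    Do $system.OBJ.DisplayError(status)
}
Set resultset = statement.%Execute()
While resultset.%Next() {
    // Column names match the ROWSPEC defined for the AllRecords query
    Write !, resultset.%Get("Id"), " ", resultset.%Get("Prop1"), " ", resultset.%Get("Prop2")
}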
Iteration logic approaches for custom queries
So, what approaches can be used for custom queries? In general, there exist 3 basic approaches:
Iteration through a global
Static SQL
Dynamic SQL
Iteration through a global

The approach is based on using $Order and similar functions for iteration through a global. It can be used in the following cases:

Data is stored in globals (without classes)
You want to reduce the number of glorefs in the code
The results must be/can be sorted by the global's subscript
Static SQL

The approach is based on cursors and static SQL. This is used for:

Making the int code more readable
Making the work with cursors easier
Speeding up the compilation process (static SQL is included in the class query and is therefore compiled only once).
Note:
Cursors generated from queries of the %SQLQuery type are named automatically, e.g. Q14.
All cursors used within a class must have different names.
Error messages refer to the internal names of cursors, which have additional characters at the end. For example, an error in cursor Q140 is actually caused by cursor Q14.
Use PlaceAfter and make sure that cursors are used in the same int routine where they have been declared.
INTO must be used in conjunction with FETCH, not DECLARE.
Example of static SQL for Utils.CustomQuery:
Query AllStatic() As %Query(CONTAINID = 1, ROWSPEC = "Id:%String,Prop1:%String,Prop2:%Integer") [ SqlName = AllStatic, SqlProc ]
{
}
ClassMethod AllStaticExecute(ByRef qHandle As %Binary) As %Status
{
&sql(DECLARE C CURSOR FOR
SELECT Id, Prop1, Prop2
FROM Utils.CustomQuery
)
&sql(OPEN C)
Quit $$$OK
}
ClassMethod AllStaticFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = AllStaticExecute ]
{
#; INTO must be with FETCH
&sql(FETCH C INTO :Id, :Prop1, :Prop2)
#; Check if we reached end of data
If (SQLCODE'=0) {
Set AtEnd = 1
Set Row = ""
Quit $$$OK
}
Set Row = $Lb(Id, Prop1, Prop2)
Quit $$$OK
}
ClassMethod AllStaticClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = AllStaticFetch ]
{
&sql(CLOSE C)
Quit $$$OK
}
Dynamic SQL

This approach is based on other class queries and dynamic SQL. It is reasonable when, in addition to the SQL query itself, you also need to perform some additional operations, e.g. execute an SQL query in several namespaces or escalate permissions before running the query (see the sketch after the example below).
Example of dynamic SQL for Utils.CustomQuery:
Query AllDynamic() As %Query(CONTAINID = 1, ROWSPEC = "Id:%String,Prop1:%String,Prop2:%Integer") [ SqlName = AllDynamic, SqlProc ]
{
}
ClassMethod AllDynamicExecute(ByRef qHandle As %Binary) As %Status
{
Set qHandle = ##class(%SQL.Statement).%ExecDirect(,"SELECT * FROM Utils.CustomQuery")
Quit $$$OK
}
ClassMethod AllDynamicFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status
{
If qHandle.%Next()=0 {
Set AtEnd = 1
Set Row = ""
Quit $$$OK
}
Set Row = $Lb(qHandle.%Get("Id"), qHandle.%Get("Prop1"), qHandle.%Get("Prop2"))
Quit $$$OK
}
ClassMethod AllDynamicClose(ByRef qHandle As %Binary) As %Status
{
Kill qHandle
Quit $$$OK
}
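To illustrate the multi-namespace case mentioned above, here is a minimal sketch; the target namespace "SAMPLES" and the buffering of rows inside qHandle are assumptions made for this example, not something prescribed by the article. The Execute method switches namespaces, runs the dynamic statement there, and copies the rows into qHandle, so the subsequent Fetch calls do not depend on that namespace still being current:

Query AllOtherNS() As %Query(CONTAINID = 1, ROWSPEC = "Id:%String,Prop1:%String,Prop2:%Integer") [ SqlName = AllOtherNS, SqlProc ]
{
}
ClassMethod AllOtherNSExecute(ByRef qHandle As %Binary) As %Status
{
    #; Run the query in another namespace; New $Namespace restores
    #; the original namespace when this method returns
    New $Namespace
    Set $Namespace = "SAMPLES"
    Set rs = ##class(%SQL.Statement).%ExecDirect(,"SELECT Id, Prop1, Prop2 FROM Utils.CustomQuery")
    #; Buffer the rows in qHandle: the top node holds the row count,
    #; numeric subscripts hold the rows, "pos" tracks the fetch position
    Set qHandle = 0
    While rs.%Next() {
        Set qHandle = qHandle + 1
        Set qHandle(qHandle) = $Lb(rs.%Get("Id"), rs.%Get("Prop1"), rs.%Get("Prop2"))
    }
    Set qHandle("pos") = 0
    Quit $$$OK
}
ClassMethod AllOtherNSFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status
{
    Set pos = qHandle("pos") + 1
    If pos > qHandle {
        Set AtEnd = 1
        Set Row = ""
        Quit $$$OK
    }
    Set qHandle("pos") = pos
    Set Row = qHandle(pos)
    Quit $$$OK
}
ClassMethod AllOtherNSClose(ByRef qHandle As %Binary) As %Status
{
    Kill qHandle
    Quit $$$OK
}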
Alternative approach: %SQL.CustomResultSet
Alternatively, you can create a query by subclassing from the %SQL.CustomResultSet class. Benefits of this approach are as follows:
Slight increase in speed
ROWSPEC is unnecessary, since all metadata is obtained from the class definition
Compliance with object-oriented design principles
To create a query as a subclass of %SQL.CustomResultSet, make sure to perform the following steps:
Define the properties corresponding to the resulting fields
Define the private properties where the query context will be stored
Override the %OpenCursor method (similar to queryNameExecute), which initializes the context. In case of any errors, set %SQLCODE and %Message as well
Override the %Next method (similar to queryNameFetch), which obtains the next result and fills in the properties. The method returns 0 if all the data has been processed and 1 if some data still remains
Override the %CloseCursor method (similar to queryNameClose) if necessary
Example of %SQL.CustomResultSet for Utils.CustomQuery:
Class Utils.CustomQueryRS Extends %SQL.CustomResultSet
{
Property Id As %String;
Property Prop1 As %String;
Property Prop2 As %Integer;
Method %OpenCursor() As %Library.Status
{
Set ..Id = ""
Quit $$$OK
}
Method %Next(ByRef sc As %Library.Status) As %Library.Integer [ PlaceAfter = %Execute ]
{
Set sc = $$$OK
Set ..Id = $Order(^Utils.CustomQueryD(..Id),1,val)
Quit:..Id="" 0
Set ..Prop1 = $Lg(val,2)
Set ..Prop2 = $Lg(val,3)
Quit 1
}
}
You can call it from Caché ObjectScript code in the following way:
Set resultset = ##class(Utils.CustomQueryRS).%New()
While resultset.%Next() {
Write resultset.Id,!
}
Another example is available in the SAMPLES namespace – it's the Sample.CustomResultSet class which implements a query for Samples.Person.
Summary
Custom queries will help you separate SQL expressions from Caché ObjectScript code and implement sophisticated behavior that may be too difficult for pure SQL.
References
Class queries
Iteration through a global
Static SQL
Dynamic SQL
%SQL.CustomResultSet
The Utils.CustomQuery class
The Utils.CustomQueryRS class
The author would like to thank Alexander Koblov for his assistance in writing this article.

Regarding the first part of your write-up: as a developer, if you want a quick and easy way to test the query from your terminal window (e.g. Cache Terminal, PowerTerm, Putty, Reflections, etc.), then you can run the query as follows:
Do ##class(%ResultSet).RunQuery({className},{queryName}) - if you have input parameters, then you would pass those values as arguments 3 through n;
Example of running query with two input parameters:
Do ##class(%ResultSet).RunQuery({className},{queryName},{inputValue1},{inputValue2})
So an example of running the first query in your write-up from a terminal window might be:
Do ##class(%ResultSet).RunQuery("Sample.Person","ByName","A")
The output from the query might be:
ID:Name:DOB:SSN:
53:Adam,Ralph O.:41730:657-43-6149:
33:Adam,Zoe X.:56117:982-36-6928:
80:Ahmed,Edgar Q.:33719:546-61-2688:
110:Alton,Umberto B.:30116:877-79-1057:
146:Avery,Valery P.:39515:310-11-8847:
Hope this Helps and Have a Great Day!!! Happy Coding/Testing those queries!
Quick followup note - so folks don't freak out about the output from the query example - the names and birthdates and SSN's displayed in the example were from a SAMPLES testing environment where data was auto-generated at random - so the data is fake and not for real people :) Thanks!
I often need to run queries from a terminal, so I extended Caché ObjectScript with zsql command to run queries and display the results. Here's how it works:
zsql "SELECT TOP 2 Name FROM Sample.Person"
Would output:
Name
Adams,Chris Z.
Adams,Danielle P
To achieve it I created %ZLANGC00 mac routine with the following code:
; %ZLANGC00
; custom commands for ObjectScript
; http://docs.intersystems.com/cache20141/csp/docbook/DocBook.UI.Page.cls?KEY=GSTU_customize
Quit
/// Execute Query and display the results
/// Call like this:
/// zsql "SELECT TOP 10 Name FROM Sample.Person"
ZSQL(Query)
#Dim ResultSet As %SQL.StatementResult
Set ResultSet = ##class(%SQL.Statement).%ExecDirect(, Query)
Do ResultSet.%Display()
Quit
Save and compile it and then you can execute sql (class queries too) in a terminal with zsql command:
zsql "SELECT * FROM Sample.SP_Sample_By_Name('Z')"
That said, I myself prefer executing SQL queries in the SMP because you don't need to type them there (drag&drop from the left panel or copy&paste from the code) - it's very convenient.

I typically use the SQL Shell in the terminal when doing this. It is very powerful.

SAMPLES>do $system.SQL.Shell()
SQL Command Line Shell
----------------------------------------------------
The command prefix is currently set to: <<nothing>>.
Enter q to quit, ? for help.
SAMPLES>>?
Interactive SQL Shell (c) InterSystems Corporation 1999-2009 All Rights Reserved
----------------------------------------------------
To execute an SQL statement, type it in and press ENTER.
To execute a multiline SQL statement, press <enter> to enter
multiline statement mode, enter each statement line and
enter GO to exit multiline statement mode and execute the statement.
etc...

This uses %SQL.Statement to do all the work and has a number of options, including saving of queries, etc.

I would also recommend trying the Caché Web Terminal SQL mode. This mode appeared with version 4.0.

For me personally, the greatest benefit of custom SQL queries is that you don't need any (persistent) data - you can just generate it on the fly, either by computing it or by bringing it from any arbitrary external system. Dan

Fantastic post @Eduard.Lebedyuk
Alternative approach: %SQL.CustomResultSet is icing on the cake which is not covered by the InterSystems documentation. It really should be. This is exactly what I needed for my current project. Thanks.

💡 The article is considered an InterSystems Data Platform Best Practice.

For 2017.1CE and later, as well as all IRIS versions, %SQL.CustomResultSet should not be used. Instead, use %SQL.CustomQuery. There are several good reasons for this. There is good class documentation available. I am happy to post examples if anyone is interested.

Hi @Daniel.Pasco
It would be great if you could share some examples with us.
Best Regards,
Henrique

First, keep in mind that all implementations (faithful implementations, that is) of %SQL.CustomQuery are also projected as table-valued functions. That means you can include the function in the FROM clause of a SELECT statement.
The process of implementing a custom query is simple. These steps are described in the %SQL.CustomQuery class documentation so I'll just summarize here.
Define a new class that extends %SQL.CustomQuery;
Override the SQLNAME parameter, assign a valid SQL identifier that is to be the name of the TVF;
Define properties, in order, that are the columns of each row returned by this query. Let's call these "result columns". Each result column is defined as a non-private property;
Define properties that you will need to maintain the source data, pointers, etc. that you will use to manage the data used to produce rows. These properties are defined as "private";
Override %OpenCursor. Add parameters to this method override that correspond to the input parameters that will be passed when instantiating the custom query;
Override %FetchCursor. In this method, check for end of data. If not at the end then populate all of the result properties with data and return 1 (true). Otherwise, clear all result properties and return 0;
Override %CloseCursor. In this override, release any resources acquired during instantiation and perform any necessary cleanup.
I won't post the version of the class that produces this output, since the version of %Net.Http in current versions of CE/IRIS does not have a working GetJSON() method. The version of the class I'm posting simply passes in the raw JSON data as an argument.
The query:
SELECT top 5 stateCode,name,population FROM example_custom.sample_custom_query('https://api.census.gov/data/2014/pep/natstprc?get=STNAME,POP&for=state:*&DATE_=7','Default') ORDER BY population DESC
and the results:
stateCode   name         population
06          California   38802500
48          Texas        26956958
12          Florida      19893297
36          New York     19746227
17          Illinois     12880580

5 row(s) affected
Using this version of a custom query class:
Class example.custom.JsonQuery Extends %SQL.CustomQuery
{
Parameter SQLNAME As String = "sample_custom_json_query";
Property data As %Library.DynamicAbstractObject [ Private ];
Property iterator As %Iterator.AbstractIterator [ Private ];
Property atEnd As %Integer [ InitialExpression = 0, Private ];
Property stateCode As %String;
Property name As %String;
Property population As %Integer;
Method %OpenCursor(data As %String(MAXLEN="")) [ Private ]
{
try {
if $isobject(data) {
set ..data = data
} else {
set ..data = [].%FromJSON(data)
}
set ..iterator = ..data.%GetIterator()
if '..iterator.%GetNext(.key,.value) {
set ..atEnd = 0
set ..iterator = ""
}
} catch exception {
// this is just a place holder, this method reports errors by throwing an exception
// but a catch can allow the user to log errors or perform some self-healing action
throw exception
}
}
Method %FetchCursor() As %Library.Integer
{
set response = 0
if ($isObject(..iterator)) && ('..atEnd) {
if ..iterator.%GetNext(.key,.value) {
set ..name = value.%Get(0)
set ..population = value.%Get(1)
set ..stateCode = value.%Get(3)
set response = 1
} else {
set ..atEnd = 1
set ..iterator = ""
}
} else {
set ..name = ""
set ..population = ""
set ..stateCode = ""
}
return response
}
Method %CloseCursor() [ PlaceAfter = %Next, Private ]
{
// not really necessary as %OnClose will automatically close the cursor during destruction
// but users can place code here to clean up other resources allocated for this query instance
// that are external to the query instance. Like a temporary global.
set ..iterator = ""
set ..data = ""
}
}
and this query
SELECT top 5 stateCode,name,population FROM example_custom.sample_custom_json_query('[["STNAME","POP","DATE_","state"],["Alabama","4849377","7","01"],["Alaska","736732","7","02"],["Arizona","6731484","7","04"],["Arkansas","2966369","7","05"],["California","38802500","7","06"],["Colorado","5355866","7","08"],["Connecticut","3596677","7","09"],["Delaware","935614","7","10"],["District of Columbia","658893","7","11"],["Florida","19893297","7","12"],["Georgia","10097343","7","13"],["Hawaii","1419561","7","15"],["Idaho","1634464","7","16"],["Illinois","12880580","7","17"],["Indiana","6596855","7","18"],["Iowa","3107126","7","19"],["Kansas","2904021","7","20"],["Kentucky","4413457","7","21"],["Louisiana","4649676","7","22"],["Maine","1330089","7","23"],["Maryland","5976407","7","24"],["Massachusetts","6745408","7","25"],["Michigan","9909877","7","26"],["Minnesota","5457173","7","27"],["Mississippi","2994079","7","28"],["Missouri","6063589","7","29"],["Montana","1023579","7","30"],["Nebraska","1881503","7","31"],["Nevada","2839099","7","32"],["New Hampshire","1326813","7","33"],["New Jersey","8938175","7","34"],["New Mexico","2085572","7","35"],["New York","19746227","7","36"],["North Carolina","9943964","7","37"],["North Dakota","739482","7","38"],["Ohio","11594163","7","39"],["Oklahoma","3878051","7","40"],["Oregon","3970239","7","41"],["Pennsylvania","12787209","7","42"],["Rhode Island","1055173","7","44"],["South Carolina","4832482","7","45"],["South Dakota","853175","7","46"],["Tennessee","6549352","7","47"],["Texas","26956958","7","48"],["Utah","2942902","7","49"],["Vermont","626562","7","50"],["Virginia","8326289","7","51"],["Washington","7061530","7","53"],["West Virginia","1850326","7","54"],["Wisconsin","5757564","7","55"],["Wyoming","584153","7","56"],["Puerto Rico Commonwealth","3548397","7","72"]]') ORDER BY population DESC
produces the same result:
stateCode   name         population
06          California   38802500
48          Texas        26956958
12          Florida      19893297
36          New York     19746227
17          Illinois     12880580

5 row(s) affected
I am happy to post other examples if you wish.
Dan

Custom queries can also be instantiated without using SQL. Simply call %New and pass in the arguments that are defined by the %OpenCursor method. There is one difference here - the first argument of %New is the SELECTMODE, and subsequent arguments correspond to the %OpenCursor arguments. Once instantiated, the interface is like any other %SQL.IResultSet.
USER>set result = ##class(example.custom.Query).%New(,"https://api.census.gov/data/2014/pep/natstprc?get=STNAME,POP&for=state:*&DATE_=7","Default")
USER>write result.%Next()
1
USER>write result.name
Alabama
USER>write result.population
4849377
USER>while result.%Next() { write !,result.name,": ",result.population }
Alaska: 736732
Arizona: 6731484
Arkansas: 2966369
California: 38802500
Another unknown feature that isn't mentioned anywhere else.

I am a fan, Robert... but...
https://community.intersystems.com/post/new-video-sql-%E2%80%93-things-you-should-know
Somewhere around the 36-minute mark of this video. Buried, perhaps. Still, %SQL.CustomQuery has a lot of interesting capabilities.

Incredible! I really missed something important! @Daniel.Pasco Thanks for the pointer. I never stop learning.

This is a great feature, Dan!
I'm using it to write SQL queries to Pandas dataframes in Python (code).

Some of the reasons why I focus on utilizing class queries include:
Studio and other editors are much better at providing coloring/syntax checking, compared with the strategy of setting some arbitrary SQL statement into a tSQL string
As mentioned in the original post, it provides the separation that allows for easy re-use and testing. If I have a class query, I can decide to allow it to be reflected as a stored procedure, expose it to ZEN reports, ODBC/JDBC, JSON projection, %XML.DataSet, use it to satisfy a control in ZEN that allows for binding to a class query, and so on (see the sketch after this list). Basically, it provides great separation.
I also like the idea of first writing the statement using SQL as a %SQLQuery. If for some reason I run into performance issues that I cannot overcome, I can change the implementation to %Query and implement COS code; the calling application sees the improved performance but does not have to understand the internal implementation. However, I've only done this on extremely rare occasions.
I think one of the most important aspects of SQL is to always read the query plan; that's where you can understand what the optimizer is going to do. I care a lot about the query plan and almost always validate what it reports. Having a class query allows for easy Show Plan examination, whereas that's generally hard to do if you take the approach of setting a tSQL string (sometimes with concatenation over several lines).
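As a quick illustration of that stored-procedure projection, here is a hedged sketch: it assumes the Utils.CustomQuery class from the article above is compiled with the SqlProc/SqlName keywords shown there, and calls the projected table-valued procedure through dynamic SQL exactly as an ODBC/JDBC client could:

#; The AllStatic query is projected to SQL as Utils.AllStatic() thanks to SqlName/SqlProc
Set rs = ##class(%SQL.Statement).%ExecDirect(,"SELECT Id, Prop1, Prop2 FROM Utils.AllStatic()")
Do rs.%Display()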
Class Queries are really worth investing in IMHO.