Clear filter
Question
Ankita Jain · May 31, 2017
Hi, I am stuck on a unit test failure with InterSystems. When a unit test fails, the Jenkins build still succeeds, whereas it should fail. In Caché I am using the %UnitTest.Manager class and its DebugRunTestCase method. I am able to link Studio with Jenkins, and I want my Jenkins build to fail if any of the test cases fails. Could anyone help? Please share how you call Caché from Jenkins, and explain clearly how to exit the Caché terminal from the command line so that the batch script can fail the Jenkins job. Is this feasible with InterSystems at all? We have tried all the methods from the InterSystems documentation library but still could not make progress (the Jenkins job is not reacting to the boolean value passed from Caché). Please suggest.

Hi Ankita, would you be able to provide some more details about how you currently run the unit tests from your batch script? Regards, Caza

Hi Caza,
We are facing the following issues:
1. We are getting the boolean value for passed/failed unit test cases in Caché, but we are not able to assign this boolean value to a variable in the batch script (r3 in this case).
2. @ECHO %FAILUREFLAG% is not producing any output; can you help with that as well?
Please suggest fixes for these problems or an alternative approach. The batch script is here:
:: Switch output mode to utf8 - for comfortable log reading
@chcp 65001
@SET WORKSPACE="https://github.com/intersystems-ru/CacheGitHubCI.git"
:: Check the presence of the variable initialized by Jenkins
@IF NOT DEFINED WORKSPACE EXIT 1
@SET SUCCESSFLG =%CD%\successflag.txt
@SET FAILUREFLAG =%CD%\failureflag.txt
@ECHO %FAILUREFLAG%
@DEL "%SUCCESSFLG%"
@DEL "%FAILUREFLAG%"
:: The build may end with different results
:: The presence of a specific file in the directory will tell us about problems with the build
:: %CD% - [C]urrent [D]irectory is a system variable
:: it contains the name of the directory in which the script (bat) is run
:: Delete the bad-completion flag file from the previous run
::@DEL "%ERRFLAG%"
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: CREATING THE CACHE CONTROL SCRIPT
:: write the commands that manage Caché into build.cos line by line
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: If Caché is installed with normal or increased security,
:: the first two lines must be the Caché user name and password
@ECHO user>build.cos
@ECHO password>>build.cos
:: Go to the necessary NAMESPACE
@ECHO zn "MYNAMESPACE" >>build.cos
@ECHO do ##class(Util.SourceControl).Init() >>build.cos
@ECHO zn "User" >>build.cos
@ECHO set ^^UnitTestRoot="C:\test" >>build.cos
@ECHO do ##class(%%UnitTest.Manager).RunTest("myunittest","/nodelete") >>build.cos
@Echo set i=$order(^^UnitTest.Result(""),-1) >>build.cos
@Echo write i >>build.cos
@Echo set unitResults=^^UnitTest.Result(i,"myunittest") >>build.cos
@ECHO Set r3=$LISTGET(unitResults,1,1) >>build.cos
@ECHO if r3'=0 set file="C:/unittests/successflag.txt" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(r3) >>build.cos
@ECHO if r3'=1 set file="C:/unittests/failureflag.txt" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(r3) >>build.cos
::@ECHO if 'r3 do $SYSTEM.Process.Terminate(,1)
::@ECHO halt
@IF EXIST "%FAILUREFLAG%" EXIT 1
::@ECHO do ##class(%%UnitTest.Manager).RunTest("User") >>build.cos
::@ECHO set ut = do ##class(%%UnitTest.Manager).RunTest("User") >>build.cos
::set var=1 >>build.cos
:: If it did not work out, we will show the error to see it in the logs of Jenkins
::@ECHO if ut'= 1 do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
::echo %var%
@Echo On
@ECHO zn "%%SYS" >>build.cos
@ECHO set pVars("Namespace")="MYDEV" >>build.cos
@ECHO set pVars("SourceDir")="C:\source\MYNAMESPACE\src\cls\User" >>build.cos
@ECHO do ##class(User.Import).setup(.pVars)>>build.cos
:: Load and compile all the sources in the build directory;
:: %WORKSPACE% - a Jenkins variable
::@ECHO set sc=$SYSTEM.OBJ.ImportDir("%WORKSPACE%","*.xml","ck",.err,1) >>build.cos
:: If it did not work out, we will show the error so it appears in the Jenkins logs
::@ECHO if sc'=1 do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
:: and from the cos script create an error flag file to notify the bat script
::@ECHO if sc'=1 set file="%ERRFLAG%" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
:: Finish the Cache process
::@ECHO if sc'=1 set file="%ERRFLAG%" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Call the Caché control program and pass the generated script to it ::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
C:\InterSystems\HealthShare\bin\cache.exe -s C:\InterSystems\HealthShare\mgr -U %%SYS <%CD%\build.cos
:: If the error file was created during the cos script execution - we notify Jenkins about this
::@IF EXIST "%ERRFLAG%" EXIT 1
Hi Ankita,
I found a couple of issues in the script that might affect your end results:
- the folder C:\unittests doesn't exist (at least not on my computer); unless the value of the WORKSPACE env variable is C:\unittests, you have to ensure the folder exists (you can create it either with batch mkdir or with the COS ##class(%File).CreateDirectoryChain() method)
- what is stored in the ^UnitTest.Result(i,"myunittest") global is not a status code but a numeric value, so I would suggest replacing Do $system.OBJ.DisplayError(r3) with a simple write command, like this:
@ECHO if r3'=0 set file="C:/unittests/successflag.txt" o file:("NWS") u file write r3 >>build.cos
@ECHO if r3'=1 set file="C:/unittests/failureflag.txt" o file:("NWS") u file write r3 >>build.cos
Regarding "@ECHO %FAILUREFLAG%" - make sure there are no spaces before or after the = character in the following two commands:
@SET SUCCESSFLG =%CD%\successflag.txt
@SET FAILUREFLAG =%CD%\failureflag.txt
When I did copy/paste of the example script I ended up with a space character before the = character.
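In other words, those two lines should read:

@SET SUCCESSFLG=%CD%\successflag.txt
@SET FAILUREFLAG=%CD%\failureflag.txt

With the space removed, batch defines the variable FAILUREFLAG itself (rather than a variable whose name ends in a space), and @ECHO %FAILUREFLAG% will print the expected path.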
Can you try these changes and let me know how you go?
Cheers, Caza

Thanks for your response, it worked!

If you add to your batch script something like:
do ##class(com.isc.UnitTest.Manager).OutputResultsXml("c:/unittests/unit_test_results.xml")
then you could pass on the unit_test_results.xml file to the JUnit plugin from Jenkins. This will give you useful reports, duration and individual results breakdown, etc.
For example: https://support.smartbear.com/testcomplete/docs/_images/working-with/integration/jenkins/trend-graph.png

Yay!

You can create a Windows batch file in which you write commands that call the Caché terminal using cterm.exe.

Here are object and SQL approaches to get the unit test status. One gotcha on Windows is making sure the encoding of your .scr file is UTF-8. Windows thinks a .scr file is a screensaver file and can give it a strange encoding. Once I switched it to UTF-8, it worked as described in the documentation.

I don't know how you call the Caché method from Jenkins, but in any case you can use $SYSTEM.Process.Terminate in a Caché script to exit with an exit status. Something like this.
set tSC=##class(%UnitTest.Manager).DebugRunTestCase(....)
if 'tSC do $SYSTEM.Process.Terminate(,1)
halt
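On the calling side, the exit status set by $SYSTEM.Process.Terminate can then be checked by the batch script. A minimal sketch (the installation paths are hypothetical, and note that exit-code propagation is most reliable with csession on UNIX; on Windows you may still need the flag-file approach shown earlier):

C:\InterSystems\HealthShare\bin\cache.exe -s C:\InterSystems\HealthShare\mgr -U %%SYS <%CD%\build.cos
:: A non-zero exit status from the Cache process fails the Jenkins build
@IF ERRORLEVEL 1 EXIT /B 1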
I suggest you use csession or cterm to call the Caché code; you will then get an exit code to pass to Jenkins, which Jenkins will recognize as an error and fail the job.

Hi, thanks Dmitry Maslennikov for the response. Could you please help clarify the following query: is there any method or way to find out whether any of the unit test cases failed? (We are getting a URL to the CSP page with the report that has the passed/failed status.) We need to send this failure status to Jenkins for the build to fail (so far we have achieved this only with a hardcoded boolean).

Hi,
We use something like the below to output the unit test results to an xml file in JUnit format.
/// Extend %UnitTest manager to output unit test results in JUnit format.
/// This relies on the fact that unit test results are stored in <b>^UnitTest.Result</b> global. Results displayed on CSP pages come from this global.
Class com.isc.UnitTest.Manager Extends %UnitTest.Manager
{
ClassMethod OutputResultsXml(pFileName As %String) As %Status
{
set File=##class(%File).%New(pFileName)
set i=$order(^UnitTest.Result(""),-1)
if i="" quit $$$OK // no results
kill ^||TMP // results global
set suite="" for {
set suite=$order(^UnitTest.Result(i,suite))
quit:suite=""
set ^||TMP("S",suite,"time")=$listget(^UnitTest.Result(i,suite),2)
set case="" for {
set case=$order(^UnitTest.Result(i,suite,case))
quit:case=""
if $increment(^||TMP("S",suite,"tests"))
set ^||TMP("S",suite,"C",case,"time")=$listget(^UnitTest.Result(i,suite),2)
set method="" for {
set method=$order(^UnitTest.Result(i,suite,case,method))
quit:method=""
set ^||TMP("S",suite,"C",case,"M",method,"time")=$listget(^UnitTest.Result(i,suite,case,method),2)
set assert="" for {
set assert=$order(^UnitTest.Result(i,suite,case,method,assert))
quit:assert=""
if $increment(^||TMP("S",suite,"assertions"))
if $increment(^||TMP("S",suite,"C",case,"assertions"))
if $increment(^||TMP("S",suite,"C",case,"M",method,"assertions"))
if $listget(^UnitTest.Result(i,suite,case,method,assert))=0 {
if $increment(^||TMP("S",suite,"failures"))
if $increment(^||TMP("S",suite,"C",case,"failures"))
if $increment(^||TMP("S",suite,"C",case,"M",method,"failures"))
set ^||TMP("S",suite,"C",case,"M",method,"failure")=$get(^||TMP("S",suite,"C",case,"M",method,"failure"))
_$listget(^UnitTest.Result(i,suite,case,method,assert),2)
_": "_$listget(^UnitTest.Result(i,suite,case,method,assert),3)
_$char(13,10)
}
}
if ($listget(^UnitTest.Result(i,suite,case,method))=0)
&& ('$data(^||TMP("S",suite,"C",case,"M",method,"failures"))) {
if $increment(^||TMP("S",suite,"failures"))
if $increment(^||TMP("S",suite,"C",case,"failures"))
if $increment(^||TMP("S",suite,"C",case,"M",method,"failures"))
set ^||TMP("S",suite,"C",case,"M",method,"failure")=$get(^||TMP("S",suite,"C",case,"M",method,"failure"))
_$listget(^UnitTest.Result(i,suite,case,method),3)
_": "_$listget(^UnitTest.Result(i,suite,case,method),4)
_$char(13,10)
}
}
if $listget(^UnitTest.Result(i,suite,case))=0
&& ('$data(^||TMP("S",suite,"C",case,"failures"))) {
if $increment(^||TMP("S",suite,"failures"))
if $increment(^||TMP("S",suite,"C",case,"failures"))
if $increment(^||TMP("S",suite,"C",case,"M",case,"failures"))
set ^||TMP("S",suite,"C",case,"M",case,"failure")=$get(^||TMP("S",suite,"C",case,"M",case,"failure"))
_$listget(^UnitTest.Result(i,suite,case),3)
_": "_$listget(^UnitTest.Result(i,suite,case),4)
_$char(13,10)
}
}
}
do File.Open("WSN")
do File.WriteLine("<?xml version=""1.0"" encoding=""UTF-8"" ?>")
do File.WriteLine("<testsuites>")
set suite="" for {
set suite=$order(^||TMP("S",suite))
quit:suite=""
do File.Write("<testsuite")
do File.Write(" name="""_$zconvert(suite,"O","XML")_"""")
do File.Write(" assertions="""_$get(^||TMP("S",suite,"assertions"))_"""")
do File.Write(" time="""_$get(^||TMP("S",suite,"time"))_"""")
do File.Write(" tests="""_$get(^||TMP("S",suite,"tests"))_"""")
do File.WriteLine(">")
set case="" for {
set case=$order(^||TMP("S",suite,"C",case))
quit:case=""
do File.Write("<testsuite")
do File.Write(" name="""_$zconvert(case,"O","XML")_"""")
do File.Write(" assertions="""_$get(^||TMP("S",suite,"C",case,"assertions"))_"""")
do File.Write(" time="""_$get(^||TMP("S",suite,"C",case,"time"))_"""")
do File.Write(" tests="""_$get(^||TMP("S",suite,"C",case,"tests"))_"""")
do File.WriteLine(">")
set method="" for {
set method=$order(^||TMP("S",suite,"C",case,"M",method))
quit:method=""
do File.Write("<testcase")
do File.Write(" name="""_$zconvert(method,"O","XML")_"""")
do File.Write(" assertions="""_$get(^||TMP("S",suite,"C",case,"M",method,"assertions"))_"""")
do File.Write(" time="""_$get(^||TMP("S",suite,"C",case,"M",method,"time"))_"""")
do File.WriteLine(">")
if $data(^||TMP("S",suite,"C",case,"M",method,"failure")) {
do File.Write("<failure type=""cache-error"" message=""Cache Error"">")
do File.Write($zconvert(^||TMP("S",suite,"C",case,"M",method,"failure"),"O","XML"))
do File.WriteLine("</failure>")
}
do File.WriteLine("</testcase>")
}
do File.WriteLine("</testsuite>")
}
do File.WriteLine("</testsuite>")
}
do File.WriteLine("</testsuites>")
do File.Close()
kill ^||TMP
quit $$$OK
}
}
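As a usage sketch (the suite name and paths are illustrative), the class above is a drop-in replacement for %UnitTest.Manager, so a build script could run the tests and export the XML in one go:

zn "USER"
set ^UnitTestRoot = "C:\test"
do ##class(com.isc.UnitTest.Manager).RunTest("myunittest","/nodelete")
do ##class(com.isc.UnitTest.Manager).OutputResultsXml("c:/unittests/unit_test_results.xml")

The generated file can then be published with the Jenkins JUnit plugin as described above.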
Article
Benjamin De Boe · Sep 19, 2017
Last week, we announced the InterSystems IRIS Data Platform, our new and comprehensive platform for all your data endeavours, whether transactional, analytic, or both. We've included many of the features our customers know and love from Caché and Ensemble, but in this article we'll shed a little more light on one of the new capabilities of the platform: SQL sharding, a powerful new feature in our scalability story.
Should you have exactly 4 minutes and 41 seconds, take a look at this neat video on scalability. If you can't find your headphones and don't trust our soothing voiceover will please your co-workers, just read on!
Scaling up and out
Whether it's processing millions of stock trades a day or treating tens of thousands of patients a day, a data platform supporting those businesses should cope with such scale transparently. Transparently means that developers and business users shouldn't have to worry about those numbers and can concentrate on their core business and applications, with the platform taking care of the scale aspect.
For years, Caché has supported vertical scalability, where advancements in hardware are taken advantage of transparently by the software, efficiently leveraging very high core counts and vast amounts of RAM. This is called scaling up, and while a good upfront sizing effort can get you a perfectly balanced system, there's an inherent limit to what you can achieve on a single system in a cost-effective way.
In comes horizontal scalability, where the workload is spread over a number of separate servers working in a cluster, rather than a single one. Caché has supported ECP Application Servers as a means to scale out for a while already, but InterSystems IRIS now also adds SQL sharding.
What's new?
So what's the difference between ECP Application Servers and the new sharding capability? To understand how they differ, let's take a closer look at workloads. A workload may consist of tens of thousands of small devices continuously writing small batches of data to the database, or just a handful of analysts issuing analytical queries, each spanning GBs of data at a time. Which one has the larger scale? Hard to tell, just like it's hard to say whether a fishing rod or a beer keg is larger. Workloads have more than one dimension, and scaling to support them therefore needs a little more subtlety too.
As a rough simplification, let's consider two components of an application workload: N represents the user workload and Q the query size. In our earlier examples, the first workload has a high N but low Q, and the latter a low N but high Q. ECP Application Servers are very good at supporting a large N, as they allow partitioning the application users across different servers. However, they don't necessarily help as much if the dataset gets very large and the working set doesn't fit in a single machine's memory. Sharding addresses large Q, allowing you to partition the dataset across servers, with work also being pushed down to those shard servers as much as possible.
SQL Sharding
So what does sharding really do? It's a SQL capability that will split the data in a sharded table into disjoint sets of rows that are stored on the shard servers. When connecting to the shard master, you'll still see this table as if it were a single table that contains all the data, but queries against it are split into shard-local queries that are sent to all shard servers. There, the shard servers calculate the results based on the data they have stored locally and send their results back to the shard master. The shard master aggregates these results, performs any relevant combination logic and returns the results back to the application.
While this may sound trivial for a simple SELECT * FROM table, there's a lot of smart logic under the hood that ensures you can use (almost) any SQL query, with the maximum amount of work pushed to the shards to maximize parallelism. The shard key, which defines which rows go where, should be chosen based on the query patterns you anticipate. Most importantly, if you can ensure that tables that are often JOINed together are sharded along the same keys, the JOINs can be fully resolved at the shard level, giving you the high performance you're looking for.
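To make this concrete, here is a hedged sketch of what that looks like in DDL (the table and column names are invented, and the exact placement of the shard key clause may vary by product version):

CREATE TABLE Demo.Customer (CustomerId INT, Name VARCHAR(100), SHARD KEY (CustomerId))
CREATE TABLE Demo.Orders (OrderId INT, CustomerId INT, Total NUMERIC(10,2), SHARD KEY (CustomerId))

Because both tables are sharded on CustomerId, a JOIN between them on that column can be resolved entirely on each shard, and the shard master only has to merge the partial results.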
Of course this is only a teaser and there is much more to explore, but the essence is what's pictured above: SQL sharding is a new recipe in the book of highly scalable dishes you can cook up with InterSystems IRIS. It's complementary to ECP Application Servers and focuses on challenging dataset sizes, making it a good fit for many analytical use cases. Like ECP app servers, it's entirely transparent to the application, and it has a few more creative architectural variations for very specific scenarios.
Where can I learn more?
Recordings from the following Global Summit 2017 sessions on the topic are available on http://learning.intersystems.com:
What's Lurking in Your Data Lake, a technical overview of scalability & sharding in particular
We Want More! Solving Scalability, an overview of relevant use cases demanding a highly scalable platform
See also this resource guide on InterSystems IRIS on learning.intersystems.com for more on the other capabilities of the new platform. If you'd like to give sharding a try on your particular use case, check out http://www.intersystems.com/iris and fill out the form at the bottom to apply for our early adopter program, or watch out for the field test version due later this year.
Great article!

awesome

But what will this do with Object storage/access or (direct) global storage/access? Is this data SQL-only?

Hi Herman, We're supporting SQL only in this first release, but are working hard to add Object and other data models in the future. Sharding arbitrary globals is unfortunately not possible, as we need some level of abstraction (such as SQL tables or Objects) to hook into in order to automate the distribution of data and work to shards. This said, if your SQL (or soon Object) based application has the odd direct global reference to a "custom" global (not related to a sharded table), we'll still support that by just mapping those to the shard master database. Thanks, benjamin

Can we infer from this that sharding can be applied to globals that have been mapped to classes (thus providing SQL access)?

Hi Warlin, I'm not sure whether you have something specific in mind, but it sort of works the other way around. You shard a table and, under the hood, invisible to application code, the table's data gets distributed to globals in the data shards. You cannot shard globals. Thanks, benjamin

Let's say I have an orders global with the following structure: ^ORD(<ID>)=customerId~locationId..... And I create a mapping class for this global: MyPackage.Order. Can I use sharding over this table?

To my understanding, the structure of your global is irrelevant in this context. If you want to use sharding, forget about ALL direct global access. Your only access is via SQL! (at least at the moment; objects may follow in some future release). It's the decision of the sharding logic where and how data are stored in globals. If you ignore this and continue with direct global access, you have a good chance of breaking it.

I understand the accessing part, but by creating a class mapping I'm enabling SQL access to the existing global. I guess the question is really whether sharding will be able to properly partition (shard) SQL tables that are the result of global mappings. Are there any constraints on how the %Persistent class (and the storage) is defined in order for it to work with sharding? Should they all use %CacheStorage, or can they use %CacheSQLStorage (as with mappings)?

If you have a global structure that you mapped a class to afterwards, that data is already in one physical database and therefore not sharded or shardable. Sharding really is a layer in between your SQL accesses and the physical storage, and it expects you not to touch that physical storage directly. So yes, you can still picture how that global structure looks and, under certain circumstances (and when we're not looking ;-) ), read from those globals, but new records have to go through INSERT statements (or %New in a future version) and can never be written to the global directly. We currently only support sharding for %CacheStorage. There have been so many improvements in that model over the past 5-10 years that there aren't many reasons left to choose %CacheSQLStorage for new SQL/Object development. The only likely reason would be that you still have legacy global structures to start from, but as explained above, that's not a scenario we can support with sharding. Maybe a nice reference in this context is one of our early adopters, who was able to migrate their existing SQL-based application to InterSystems IRIS in less than a day without any code changes, so they could use the rest of the day to start sharding a few of their tables and were ready to scale before dinner, so to speak.
With the release of InterSystems IRIS, we're also publishing a few new great online courses on the technologies that come with it. That includes two on sharding, so don't hesitate to check them out!
Announcement
Simon Player · Sep 12, 2017
Modern businesses need new kinds of applications — ones that are smarter, faster, and can scale more quickly and cost-effectively to accommodate larger data sets, greater workloads, and more users.

With this in mind, we have unveiled InterSystems IRIS Data Platform™, a complete, unified solution that provides a comprehensive and consistent set of capabilities spanning data management, interoperability, transaction processing, and analytics. It redefines high performance for application developers, systems integrators, and end-user organizations who develop and deploy data-rich and mission-critical solutions. InterSystems IRIS Data Platform provides all of the following capabilities in a single unified platform:

Data Management
An ultra-high performance, horizontally scalable, multi-model database stores and accesses data modeled as objects, schema-free data, relational data, and multi-dimensional arrays in a single, highly efficient representation. It simultaneously processes both transactional and analytic workloads in a single database at very high scale, eliminating latencies between event, insight, and action, and reducing the complexities associated with maintaining multiple databases.

Interoperability
A comprehensive integration platform provides application integration, data coordination, business process orchestration, composite application development, API management, and real-time monitoring and alerting capabilities to support the full spectrum of integration scenarios and requirements.

Analytics
A powerful open analytics platform supports a wide range of analytics, including business intelligence, predictive analytics, distributed big data processing, real-time analytics, and machine learning. It is able to analyze real-time and batch data simultaneously at scale, and developers can embed analytic processing into business processes and transactional applications, enabling sophisticated programmatic decisions based on real-time analyses. The analytics platform also provides natural language processing capabilities to extract meaning and sentiment from unstructured text, allowing organizations to streamline processes that reference customer emails, knowledge databases, social media content, and other unstructured text data.

Cloud Deployment
Automated "cloud-first" deployment options simplify public cloud, private cloud, on-premise, and virtual machine deployments and updates.

You can learn more about this new data platform by visiting our online learning page.

Simon Player, Director of Development, Data Platforms and TrakCare

Hi, is there an IRIS cube like the ones for Caché and Ensemble, or is IRIS different? Please explain how to work with IRIS; I am really confused about it.

Here is a lot of information on InterSystems IRIS to reduce confusion. Your InterSystems sales rep will have more.
Announcement
Evgeny Shvarov · Apr 13, 2017
Hi, Community! You are very welcome to watch the just-uploaded InterSystems Atelier Welcome video on the InterSystems Developers YouTube channel! Subscribe and stay tuned!
Question
Jose Sampaio · Sep 19, 2018
Hi community members!

Please, I'm looking for any references or experiences using InterSystems technologies with the MQTT (Message Queuing Telemetry Transport) protocol. Thanks in advance!

Hi Evgeny! I will take a look at this. Thanks.

Hi, Jose! Have you seen this article? Also pinging @Attila.Toth in the hope he can provide the most recent updates.
Announcement
Daniel Kutac · Oct 29, 2018
We had our first meetup of the Prague Meetup for InterSystems Data Platform last Thursday!
As it was our first such event, attendance was not large, but we believe it was a good start. Those who attended could learn about new features that InterSystems IRIS brings to our partners and customers, as well as listen to a presentation discussing what it takes to migrate from Caché or Ensemble to InterSystems IRIS and eventually containerize their applications.
We all enjoyed an excellent assortment of teas, accompanied by vegetarian food (so you know what you can expect next time :) ).
Attached you'll find a picture taken at the meetup.
Looking forward to seeing you next time, perhaps in a bigger group!
Dan Kutac

Congratulations! In the past, we had a similar event in Austria named "Tech Talk" that formed a national user community over time. I wish you a lot of success, Robert

Thank you Robert! Dan
Article
Evgeny Shvarov · Jul 22, 2019
Hi Community!
We've introduced Direct Messages on InterSystems Community.
What's that?
A direct message (DM) is a Developer Community feature that lets you send a message directly to any InterSystems Community member.
How to send it?
Open the member's page and click "Send Direct Message", like here:
Or open your account page and go to the "Direct Messages" section:
In Direct Messages you can see all your conversations and start a new one with "Write new message":
A conversation can be between two or more people.
How will a member know about the message?
DC sends an email notification to a member when they have a new DM. Of course, you can configure whether you want to receive DM email notifications.
Privacy
Attention! Direct messages are not private messages. Direct messages are pretty much the same as posts and comments, with the difference that you can restrict the visibility of the message to certain people.
E.g. if John sends a DM to Paul, this DM is visible to John, Paul, and the Developer Community admin. But it is hidden from other community members and from public access, e.g. from search crawlers.
So only send contact details that you are comfortable sharing with your recipient and the DC admin.
What About Spam?
Only registered members who have posted content can send direct messages.
Any registered member can receive and answer messages.
So no spam is expected.
Please report any issues on the Developers Issue Tracker or in the Community Feedback track.
Stay tuned!
Article
Stefan Wittmann · Aug 14, 2019
As you might have heard, we just introduced the InterSystems API Manager (IAM); a new feature of the InterSystems IRIS Data Platform™, enabling you to monitor, control and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement.
In this article, I will show you how to set up IAM and highlight some of the many capabilities IAM allows you to leverage. InterSystems API Manager brings everything you need
to monitor your HTTP-based API traffic and understand who is using your APIs, what your most popular APIs are, and which ones might require rework.
to control who is using your APIs and restrict usage in various ways. From simple access restrictions to throttling API traffic and fine-tuning request payloads, you have fine-grained control and can react quickly.
to protect your APIs with central security mechanisms like OAuth2.0 or Key Token Authentication.
to onboard third-party developers and provide them with a superb developer experience right from the start by providing a dedicated Developer Portal for their needs.
to scale your API demands and deliver low-latency responses.
I am excited to give you a first look at IAM, so let's get started right away.
Getting started
IAM is available as a download from the WRC Software Distribution site and is deployed as a docker container of its own. So, make sure to meet the following minimum requirements:
The Docker engine is available. Minimum supported version is 17.04.0+.
The docker-compose CLI tool is available. Minimum supported version is 1.12.0+.
The first step requires you to load the docker image via
docker load -i iam_image.tar
This makes the IAM image available for subsequent use on your machine. IAM runs as a separate container, so you can scale it independently from your InterSystems IRIS backend. Starting IAM requires access to your IRIS instance to load the required license information. The following configuration changes have to happen (a scripted sketch follows the list below):
Enable the /api/IAM web application
Enable the IAM user
Change the password of the IAM user
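These changes can be made through the Management Portal, or scripted. Here is a hedged ObjectScript sketch using the %SYS security API (the property names follow the Security.Applications and Security.Users classes but should be verified against your version; the password is a placeholder):

zn "%SYS"
// enable the /api/iam web application
set webProps("Enabled") = 1
write ##class(Security.Applications).Modify("/api/iam", .webProps)
// enable the predefined IAM user and set its password
set userProps("Enabled") = 1
set userProps("Password") = "<your-IAM-password>"
write ##class(Security.Users).Modify("IAM", .userProps)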
Now we can configure our IAM container. In the distribution tarball, you will find a script for Windows and Unix-based systems named "iam-setup". This script helps you to set the environment variables correctly, enabling the IAM container to establish a connection with your InterSystems IRIS instance. This is an exemplary run from my terminal session on my Mac:
source ./iam-setup.sh
Welcome to the InterSystems IRIS and InterSystems API Manager (IAM) setup script.
This script sets the ISC_IRIS_URL environment variable that is used by the IAM container to get the IAM license key from InterSystems IRIS.
Enter the full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
Enter the IP address for your InterSystems IRIS instance. The IP address has to be accessible from within the IAM container, therefore, do not use "localhost" or "127.0.0.1" if IRIS is running on your local machine. Instead use the public IP address of your local machine. If IRIS is running in a container, use the public IP address of the host environment, not the IP address of the IRIS container.
xxx.xxx.xxx.xxx
Enter the web server port for your InterSystems IRIS instance: 52773
Enter the password for the IAM user for your InterSystems IRIS instance:
Re-enter your password:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
Successfully got IAM license!
The ISC_IRIS_URL environment variable was set to: http://IAM:****************@xxx.xxx.xxx.xxx:52773/api/iam/license
WARNING: The environment variable is set for this shell only!
To start the services, run the following command in the top level directory: docker-compose up -d
To stop the services, run the following command in the top level directory: docker-compose down
URL for the IAM Manager portal: http://localhost:8002
I obfuscated the IP address, and you can't see the password I typed, but this should give you an idea of how simple the configuration is. The full image name, the IP address and port of your InterSystems IRIS instance, and the password for your IAM user: that's everything you need to get started.
Now you can start your IAM container by executing
docker-compose up -d
This orchestrates the IAM containers and ensures everything is started in the correct order. You can check the status of your containers with the following command:
docker ps
Opening localhost:8002 in my browser brings up the web-based UI:
The global report does not show any throughput yet, as this is a brand new node. We will change that shortly. You can see that IAM supports a concept of workspaces to separate your work into modules and/or teams. Scrolling down and selecting the "default" workspace brings us to the Dashboard for, well, the "default" workspace we will use for our first experiments.
Again, the number of requests for this workspace is still zero, but you get a first look at the major concepts of the API Gateway in the menu on the left side. The first two elements are the most important ones: Services and Routes. A service is an API we want to expose to consumers. Therefore, a REST API in your IRIS instance is considered a service, as is a Google API you might want to leverage. A route decides to which service incoming requests should be routed. Every route has a certain set of conditions, and if the conditions are fulfilled the request is routed to the associated service. To give you an idea, a route can match the IP or domain of the sender, HTTP methods, parts of the URI, or a combination of these.
Let's create a service targeting our IRIS instance, with the following values:
field | value | description
name | test-iris | the logical name of this service
host | xxx.xxx.xxx.xxx | the public IP address of your IRIS instance
port | 52773 | the port used for HTTP requests
protocol | http | the protocols you want to support
Keep the default for everything else. Now let's create a route:
field | value | description
paths | /api/atelier | requests with this path will be forwarded to our IRIS instance
protocols | http | the protocols you want to support
service | test-iris | requests matching this route will be forwarded to this service
Again, keep the default for everything else. IAM is listening on port 8000 for incoming requests by default. From now on requests that are sent to http://localhost:8000 and start with the path /api/atelier will be routed to our IRIS instance. Let's give this a try in a REST client (I am using Postman).
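If you'd rather use the command line than a REST client, the equivalent request with curl would be (add -u with credentials if your instance requires authentication):

curl -i http://localhost:8000/api/atelier/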
Sending a GET request to http://localhost:8000/api/atelier/ indeed returns a response from our IRIS instance. Every request goes through IAM, and metrics like HTTP status code, latency, and consumer (if configured) are monitored. I went ahead and issued a couple more requests (including two to non-existent endpoints, like /api/atelier/test/), and you can see them all aggregated in the dashboard:
Working with plugins
Now that we have a basic route in place, we can start to manage the API traffic and add behavior that complements our service. This is where the magic happens.
The most common way to enforce a certain behavior is by adding a plugin. Plugins isolate a certain piece of functionality and can be attached to different parts of IAM: the global runtime, or just parts of it, like a single user (or group), a service, or a route. We will start by adding a Rate Limiting plugin to our route. To establish the link between the plugin and the route, we need the unique ID of the route. You can look it up by viewing the details of the route.
If you are following this article step by step, the ID of your route will be different. Copy the ID for the next step.
Click on Plugins on the left sidebar menu. Usually, you see all active plugins on this screen, but as this node is relatively new, there are no active plugins yet. So, move on by selecting "Add New Plugin".
The plugin we are after is in the category "Traffic Control" and is named "Rate Limiting". Select it. There are quite a few fields that you can define here as plugins are very flexible, but we only care about two fields:
field | value | description
route_id | d6a97e5b-7da6-4c98-bc69-6a09263039a8 | paste the ID of your route here
config.minute | 5 | the number of calls allowed per minute
That's it. The plugin is configured and active. You probably have seen that we can pick from a variety of time intervals, like minutes, hours or days but I deliberately used minutes as this allows us to easily understand the impact of this plugin.
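As an aside, IAM is built on the Kong gateway, which also exposes an Admin API, conventionally on port 8001; treat the port and endpoint as assumptions for your deployment. Assuming they hold, the same plugin could be attached from the command line instead of the UI:

curl -X POST http://localhost:8001/routes/d6a97e5b-7da6-4c98-bc69-6a09263039a8/plugins --data "name=rate-limiting" --data "config.minute=5"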
If you send the same request again in Postman, you will notice that the response comes back with two additional headers: X-RateLimit-Limit-minute (value 5) and X-RateLimit-Remaining-minute (value 4). This tells the client that it can make up to 5 calls per minute and has 4 more requests available in the current time interval.
If you keep making the same request over and over again, you will eventually run out of your available quota and instead get back an HTTP status code 429 with the following payload:
Wait until the minute is over and you will be able to get through again. This is a pretty handy mechanism allowing you to achieve a couple of things:
Protect your backend from spikes
Set a transparent expectation for clients of how many calls they are allowed to make against your services
Potentially monetize based on API traffic by introducing tiers (e.g. 100 calls per hour at the bronze level and unlimited with gold)
You can set values for different time intervals, thereby smoothing out API traffic over a certain period. Let's say you allow 600 calls per hour for a certain route. That's 10 calls per minute on average. But nothing prevents clients from using up all 600 calls in the very first minute of the hour. Maybe that's what you want. Maybe you would rather ensure that the load is distributed more evenly over the hour. By setting the config.minute field to 20 you ensure that your users make no more than 20 calls per minute AND no more than 600 per hour. This still allows some spikes above the 10-calls-per-minute average, but users can't burn through the hourly quota in a single minute; even at full capacity it will now take them at least 30 minutes. Clients will receive additional headers for each configured time interval, e.g.:
header | value
X-RateLimit-Limit-hour | 600
X-RateLimit-Remaining-hour | 595
X-RateLimit-Limit-minute | 20
X-RateLimit-Remaining-minute | 16
Of course, there are many different ways to configure your rate-limits depending on what you want to achieve.
I will stop at this point, as this is probably enough for a first article about InterSystems API Manager. There are plenty more things you can do with IAM; we've just used one out of more than 40 plugins and haven't even touched all of the core concepts yet! Here are a couple of things you can do as well, which I might cover in future articles:
Add a central authentication mechanism for all your services
Scale-out by load-balancing requests to multiple targets that support the same set of APIs
Introduce new features or bugfixes to a smaller audience and monitor how it goes before you release it to everyone
Onboard internal and external developers by providing them a dedicated and customizable developer portal documenting all APIs they have access to
Cache commonly requested responses to reduce response latency and the load on the service systems
So, let's give IAM a try and let me know what you think in the comments below. We worked hard to bring this feature to you and are eager to learn what challenges you overcome with this technology. Stay tuned...
More resources
The official Press Release can be found here: InterSystems IRIS Data Platform 2019.2 introduces API Management capabilities
A short animated overview video: What is InterSystems API Manager
An 8-minute video walking you through some of the key highlights: Introducing InterSystems API Manager
The documentation is part of the regular IRIS documentation: InterSystems API Manager Documentation
Nice to have a clear step-by-step example to follow!

Hi @Stefan.Wittmann! Is InterSystems API Manager published on the InterSystems Docker Hub? `docker load -i iam_image.tar` did not work for me. I used `docker import iam_image.tar iam` instead and it worked.

Have you unpacked the IAM-0.34-1-1.tar.gz?

Originally I was trying to import the whole archive, which failed. After unpacking it, the container image was imported successfully.

No, it is not, for multiple reasons. We have plans to publish the InterSystems API Manager on Docker repositories at a later point, but I can't give you an ETA.

Hi, I didn't have any problems loading the image, but when I run the setup script, I get the following error:
Your inputs are:
Full image repository, name and tag for your IAM docker image: intersystems/iam:0.34-1-1
IP address for your InterSystems IRIS instance: xxx.xxx.xxx.xxx
Web server port for your InterSystems IRIS instance: 52773
Would you like to continue with these inputs (y/n)? y
Getting IAM license using your inputs...
No content. Either your InterSystems IRIS instance is unlicensed or your license key does not contain an IAM license.
Which license is required for IAM?
We have installed InterSystems IRIS for Health version 2019.3 on this server.
$ iris list
Configuration 'IRISDEV01'   (default)
    directory:    /InterSystems
    versionid:    2019.3.0.304.0
    datadir:      /InterSystems
    conf file:    iris.cpf  (SuperServer port = 51773, WebServer = 52773)
    status:       running, since Tue Aug 13 08:27:34 2019
    state:        warn
    product:      InterSystems IRISHealth
Thanks for your help!

Please write to InterSystems to receive your license.

While trying to do this I've used both my public IP address and my VPN IP address, and I get the error:
Couldn't reach InterSystems IRIS at xxx.xx.x.xx:52773. One or both of your IP and Port are incorrect.
Strangely I got this to work last week without having the IAM application or user enabled. That said I was able to access the IAM portal and set some stuff up but I'm not sure it was actually truly working.
Any troubleshooting advice?

Michael, please create a separate question for this comment.

Hello @Stefan.Wittmann, thanks for the detailed tutorial. Is there a free Community version of API Manager that can be used with the IRIS Community edition? Thank you.

The documentation link at the end of the post has changed. Here's the current one.
Article
Sergey Kamenev · Nov 11, 2019
InterSystems IRIS supports a unique data structure, called globals, for information storage. Essentially, globals are persistent arrays with multi-level indices, having several extra capabilities—transactions, quick traversal of tree structures, and a programming language known as ObjectScript.
I'd note that for the remainder of the article, or at least the code samples, we'll assume you have familiarised yourself with the basics of globals:
Globals Are Magic Swords For Managing Data. Part 1.
Globals - Magic swords for storing data. Trees. Part 2.
Globals - Magic swords for storing data. Sparse arrays. Part 3.
Globals are completely different structures for storing data than the usual tables, and operate at a much lower level. And that begs the question, how would transactions look when working with globals, and what peculiarities might you encounter in the effort?
We know from relational database theory that a good transaction implementation needs to pass the ACID test (see ACID in Wikipedia).
Atomicity: All changes made in the transaction are recorded, or none at all. See Atomicity (database systems) in Wikipedia.
Consistency: After the transaction is completed, the logical state of the database should be internally consistent. In many ways, this requirement applies to the programmer, but in the case of SQL databases, it also applies to foreign keys.
Isolation: Transactions running in parallel shouldn’t affect one another.
Durability: After successful completion of the transaction, low-level problems (such as a power failure) should not affect the data changed by the transaction.
Globals are non-relational data structures. They were designed to support ultra-fast work on hardware with a minimal footprint. Let's look at how transactions are implemented for globals using the InterSystems IRIS Docker image.
1. Atomicity
Consider the situation when 3 values must be saved in database together, or none of them should be recorded.
The easiest way to check atomicity is to enter the following code in the terminal:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TCOMMIT
Then conclude with:
ZWRITE ^a
The result should be this:
^a(1)=1
^a(2)=2
^a(3)=3
As expected, atomicity is observed. But now let's complicate the task by interrupting execution, and see whether the transaction is saved partially or not at all. We'll start checking atomicity as we did before, like so:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
But this time we’ll forcibly stop the container using the command docker kill my-iris, which is almost equivalent to a forced power off as it sends a SIGKILL (halt process immediately) signal. After restarting the container, we check the contents of our global to see what happened. Maybe the transaction has been partially saved?
ZWRITE ^a
(nothing is output)
No, nothing has been saved. So, in the case of accidental server stop, the IRIS database will guarantee the atomicity of your transactions.
But what if we want to cancel changes intentionally? So now let's try this with the rollback command, as follows:
Kill ^a
TSTART
Set ^a(1) = 1
Set ^a(2) = 2
Set ^a(3) = 3
TROLLBACK 1
ZWRITE ^a
(nothing is output)
Once again, nothing has been saved.
2. Consistency
Recall that globals are lower-level structures for storing data than relational tables, and with a globals database, indices are also stored as globals. Thus, to meet the requirement of consistency, you need to include an index change in the same transaction as a global node value change.
Say, for example, we have a global ^person, in which we store personal data using the social security number (SSN) as the key:
^person(1234567, "firstname") = "Sergey"
^person(1234567, "lastname") = "Kamenev"
^person(1234567, "phone") = "+74995555555"
...
We've created an ^index global to enable rapid search by last name, or by last and first name together, as follows:
^index("Kamenev", "Sergey", 1234567) = 1
To keep the database consistent, we need to add persons like this:
TSTART
Set ^person(1234567, "firstname") = "Sergey"
Set ^person(1234567, "lastname") = "Kamenev"
Set ^person(1234567, "phone") = "+74995555555"
Set ^index("Kamenev", "Sergey", 1234567) = 1
TCOMMIT
Accordingly, when deleting a person, we must use the transaction:
TSTART
Kill ^person(1234567)
Kill ^index("Kamenev", "Sergey", 1234567)
TCOMMIT
In other words, fulfilling the consistency requirement for your application logic is entirely up to the programmer when working with a low-level storage format such as globals.
Luckily, IRIS offers the commands to organise your transactions and deliver Consistency guarantees for your applications. When using SQL, IRIS will use these commands under the hood to ensure consistency of its underlying globals data structures when performing INSERT, UPDATE, and DELETE statements. Of course, IRIS SQL also offers corresponding SQL commands for starting and stopping transactions to leverage in your (SQL) application logic.
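For example, the SQL equivalent of the global-level example above might look like this sketch (the table and column names are illustrative, and the table is assumed to have been created elsewhere):

START TRANSACTION
INSERT INTO Demo.Person (SSN, FirstName, LastName) VALUES (1234567, 'Sergey', 'Kamenev')
COMMIT

Here the row data and any index entries are written within the same transaction, which is exactly the consistency guarantee we built by hand with globals above.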
3. Isolation
Here’s where things get wild. Suppose many users are working on the same database at the same time, changing the same data. The situation is comparable to when many developers are working with the same code repository and trying to commit changes to many files at the same time.
The database needs to keep up with everything in real time. Given that serious companies typically have a person responsible for version control—merging branches, managing conflict resolution, and so forth—and that the database needs to take care of this in real time, the complexity of the problem and the importance of correctly designing the database and the code that serves it both become self-evident.
The database can’t understand the meaning of actions performed by users and try to prevent conflicts when they’re working on the same data. It can only cancel one transaction that contradicts another or execute them sequentially.
Moreover, as a transaction is executing (before the commit), the state of the database may be inconsistent. Other transactions should not have access to the inconsistent database state. In relational databases, this is achieved in many ways, such as by creating snapshots or using multi-versioned rows.
When transactions execute in parallel, it’s important that they not interfere with each other. This is what isolation is all about.
SQL defines four levels of isolation, in order of increasing rigor. They are:
READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
Let's consider each level separately. Note that the cost of implementing each level grows almost exponentially as you move up the stack.
READ UNCOMMITTED is the lowest level of isolation, but it’s also the fastest. Transactions can read the changes made by other transactions.
READ COMMITTED is the next level of isolation and represents a compromise. Transactions can’t read each other's changes before a commit, but can read any changes after a commit.
Say we have a long-running transaction (T1), during which commits have happened in transactions T2, T3... Tn while working on the same data as T1. In such cases, each time we request data in T1, we may well obtain a different result. This is called a non-repeatable read.
REPEATABLE READ is the next level of isolation, in which we no longer have non-repeatable reads because a snapshot of the result is taken each time we request to read data. The snapshot is used if the same data is requested again during the same transaction. However, at this isolation level it's possible to read phantom data: new rows that were added by transactions committed in parallel.
SERIALIZABLE is the highest level of isolation. It’s characterized by the fact that any data used in a transaction (whether read or changed) becomes accessible to other transactions only after the first transaction has finished.
First, let’s see whether there’s isolation of operations between threads with transactions and threads without transactions. Open two terminal windows and enter the following:
Terminal 1:

Kill ^t

Terminal 2:

TSTART
Set ^t(1)=2

Terminal 1:

Write ^t(1)
2
There's no isolation: the first process sees what the second one does inside its still-open transaction.
Now let's see whether transactions in different threads can see what’s happening inside. Open two terminal windows and start two transactions in parallel.
Terminal 1:

Kill ^t
TSTART

Terminal 2:

TSTART
Set ^t(1)=2

Terminal 1:

Write ^t(1)
2
A 2 appears on the screen: the value set by the second, still-uncommitted transaction. What we have here is the simplest (but also the fastest) isolation level: READ UNCOMMITTED.
In principle, this is what we expect from a low-level data representation such as globals, which always prioritize speed. IRIS SQL provides different transaction isolation levels to choose from, but what if we need a higher level of isolation when working with globals directly?
Here we need to think about what isolation levels are actually for and how they work. For instance, lower levels of isolation are compromises designed to speed up database operations.
The highest isolation level, SERIALIZABLE, ensures that the result of transactions executed in parallel is equivalent to the result of executing them serially. This guarantees there will be no collisions. We can achieve this with properly used locks in ObjectScript, which can be applied in multiple ways. This means you can create regular, incremental, or multiple locks using the LOCK command.
Let's see how to use locks to achieve different levels of isolation. In ObjectScript, you use the LOCK operator. This operator permits not just exclusive locks, which are necessary for changing data, but also what are called shared locks. These shared locks can be accessed by several threads at once to read data that won’t be changed by other processes during the reading process.
For more details about locking, see the article “Locking and Concurrency Control”. To learn about two-phase locking, see the article "Two-phase locking" on Wikipedia.
The difficulty is that the state of the database may be inconsistent during the transaction, with the inconsistent data visible to other processes. How can this be avoided? For this example, we’ll use locks to create visibility windows within which the state of the database can be consistent. Access to any of these visibility windows will be through a lock.
Shared locks on the same data are reusable—several processes can take them. These locks prevent other processes from changing data. That is, they’re used to form windows of a consistent database state.
Exclusive locks, on the other hand, are used when you’re modifying data—only one process can take such a lock.
Exclusive locking can be employed in two scenarios. First, any process can take the lock if the data is not locked at all. Second, if the data has shared locks, only a process that already holds one of those shared locks, and is the first to request the exclusive lock, can take it.

The wider the visibility window, the longer other processes have to wait, but the more consistent the state of the database within it will be.
READ COMMITTED ensures that we see only committed data from other threads. If data in another transaction hasn't yet been committed, we see the old version. This lets us parallelize the work instead of waiting for a lock to be released.
In IRIS, you can't see an old version of the data without using special tricks, so we'll have to make do with locks. We need to use shared locks to permit data to be read only at points where it’s consistent.
Let's say we have a database of users, ^person, who transfer money from one person to another. Here’s the point at which money is transferred from person 123 to person 242:
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
The point where the amount is requested for person 123 before the deduction should have an exclusive lock (by default):
LOCK +^person(123)
Write ^person(123)
But if we need to display the account status in the user's personal account, we can use a shared lock, or none at all:
LOCK +^person(123)#"S"
Write ^person(123)
LOCK -^person(123)#"S"
However, if we accept that database operations are carried out virtually instantaneously (remember that globals are a much lower-level structure than a relational table), then this level becomes less attractive compared to the higher isolation levels.
Full example, for READ COMMITTED:
LOCK +^person(123)#"S", +^person(242)#"S"
Read data (concurrent committed transactions can change the data)
LOCK +^person(123), +^person(242)
TSTART
Set ^person(123, "amount") = ^person(123, "amount") - amount
Set ^person(242, "amount") = ^person(242, "amount") + amount
TCOMMIT
LOCK -^person(123), -^person(242)
Read data (concurrent committed transactions can change the data)
LOCK -^person(123)#"S", -^person(242)#"S"
REPEATABLE READ is the second-highest level of isolation. At this level we accept that data may be read several times with the same results in one transaction, but may be changed by parallel transactions.
The easiest way to ensure a REPEATABLE READ is to take an exclusive lock on the data, which automatically turns this isolation level into a SERIALIZABLE one.
LOCK +^person(123, amount)
read ^person(123, amount)
other operations (parallel streams try to change ^person(123, amount), but can't)
change ^person(123, amount)
read ^person(123, amount)
LOCK -^person(123, amount)
If locks are separated by commas, they are taken in sequence. But they will be taken atomically, all at once, if they’re listed like this:
LOCK +(^person(123),^person(242))
SERIALIZABLE is the highest level of isolation and the most costly. When working with classic locks like we did in the above examples, we have to set the locks in such a way that all transactions with data in common will end up being performed serially. For this approach, most of the locks should be exclusive and taken to the smallest fields of the global, for performance.
If we're talking about deducting funds from a ^person global, then SERIALIZABLE is the only acceptable level: spending must be strictly serialized, otherwise it would be possible to spend the same amount several times.
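Putting this together, a serializable transfer might look like the following sketch (the balance check and the rollback branch are illustrative additions to the earlier example):

LOCK +(^person(123,"amount"),^person(242,"amount"))
TSTART
If ^person(123,"amount")>=amount {
    Set ^person(123,"amount") = ^person(123,"amount") - amount
    Set ^person(242,"amount") = ^person(242,"amount") + amount
    TCOMMIT
} Else {
    TROLLBACK 1
}
LOCK -(^person(123,"amount"),^person(242,"amount"))

Because both exclusive locks are taken atomically and held for the duration of the transaction, concurrent transfers touching either account are forced to execute serially.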
4. Durability
I conducted tests with a hard cut-off of the container using the docker kill my-iris command. The database stood up well to these tests. No problems were identified.
Tools to manage globals and locks
You may find the following tools in the IRIS Management Portal useful:
View and manage locks.
View and manage globals.
Conclusion
InterSystems IRIS has support for transactions using globals, which are atomic and durable. To ensure database consistency with globals, some programming effort and the use of transactions are necessary, since there are no complex built-in constructions like foreign keys.
Globals without locks are equivalent to the READ UNCOMMITTED level of isolation, but locks can raise this to SERIALIZABLE. The correctness and speed of transactions on globals depend considerably on the programmer's skill and intent: the more widely shared locks are used when reading data, the higher the isolation level; the more narrowly exclusive locks are taken, the greater the speed.

Sergey, it's great that you are writing articles for newbies, though you don't explicitly mark them as such. Just a quick note on your samples: the ZWRITE command never returns <UNDEFINED> in IRIS, so to check a global's existence one should use something like
if '$data(^A) { write "Global ^A is UNDEFINED",! }
I'm sure that you are aware of it; I'm just thinking of novices who should not be confused.

In the example for READ UNCOMMITTED, after the second (right-hand) Terminal session sets ^t(1)=2, when the first (left-hand) Terminal session writes ^t(1), the example states that a "3" appears, but that's wrong; it should be a "2".

Since transactions can be nested, and TROLLBACK rolls back all transactions, it's best practice to pair each TSTART with TROLLBACK 1, which rolls back only the current transaction.

Thanks! I fixed the error

Thanks! You're right

Thanks! I fixed the error

As you are talking about locks and transactions, and as others have noted that transactions can be nested, it might be worth warning people about locks inside transactions: the unlock will not take place until the TCOMMIT.
This can cause issues, especially where a method using transactions and locks calls another method that does the same.
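A minimal sketch of both points (the ^account node and the value written are hypothetical):
TSTART                ; outer transaction, $TLEVEL = 1
LOCK +^account(1)
TSTART                ; inner transaction, e.g. opened by a called method, $TLEVEL = 2
Set ^account(1) = 100
TROLLBACK 1           ; rolls back only the innermost level; $TLEVEL drops back to 1
LOCK -^account(1)     ; inside a transaction this only marks the lock for release...
TCOMMIT               ; ...the lock is actually freed here, when the transaction ends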
Announcement
Anastasia Dyubaylo · Nov 26, 2019
Hi Community,
We are pleased to invite you to the InterSystems Meetup in Moscow on December 10, 2019!
InterSystems Moscow Meetup is a pre-New Year meeting for users and developers on InterSystems technologies. The meetup will be devoted to the InterSystems IRIS Data Platform.
Please check out the agenda:
📌 18:40 Registration. Welcome coffee
📌 19:00 Review presentation on the InterSystems Russia news for 2019
📌 19:15 Technology News on InterSystems IRIS by @Eduard.Lebedyuk, InterSystems Sales Engineer
REST, JSON, IAM
PEX, Native API, etc.
📌 20:00 Coffee Break
📌 20:20 Migration to InterSystems IRIS
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS by @Evgeny.Shvarov, Startups and Community Manager
📌 21:15 Open Exchange and other resources & services for InterSystems Developers by @Evgeny.Shvarov, Startups and Community Manager
📌 21:30 Free time. Drinks and snacks
The agenda is full of interesting stuff. We look forward to seeing you!
So, remember:
🗓 Date: December 10, 2019
⏱ Time: 18:40-22:00
📍 Venue: Loft-Ministerstvo, Stoleshnikov Lane 6/3, Moscow
✅ Registration: Register for FREE here
Space is limited, so register today to secure your place. Admission is free, but registration is mandatory for attendees.
Save your seat today!

Hey!
The agenda is updated - I'll do the following sessions:
📌 21:00 ObjectScript Package Manager Introduction — Package Manager Client for InterSystems IRIS
📌 21:15 Open Exchange and other resources & services for InterSystems Developers
Come to chat!
Announcement
Andreas Dieckow · Oct 24, 2019
InterSystems Atelier has been tested with OpenJDK 8. The InterSystems Eclipse plug-in is currently available for Eclipse Photon (4.8), which requires and works with Java 8.
Announcement
Jon Jensen · May 23, 2019
InterSystems Global Summit 2019
Boston Marriott Copley Place, September 22-25, 2019
Registration is now open!

InterSystems Global Summit 2019 is the premier event for the InterSystems technology community – a gathering of industry leaders and developers at the forefront of their respective industries. This event attracts a wide range of attendees, from C-level executives, top subject matter experts, and visionary leaders to managers, directors, and developers. Attendees gather to network with peers, connect with InterSystems partners, learn best practices, and get a firsthand look at upcoming features and future innovations from InterSystems.

Global Summit will be a three-and-a-half-day event held at the Boston Marriott Copley Place, September 22-25, 2019. Nestled in the bustling cosmopolitan neighborhood of Boston's Back Bay, the Marriott Hotel is conveniently located next to the Prudential Center, just minutes from historic Trinity Church and Boston Common, and within walking distance of Fenway Park and the Charles River.

Global Summit abounds with opportunities to connect with the larger InterSystems Community:
Tech Exchange - talk with our developers about current and future product capabilities
Partner Pavilion - discover products and services to help you build applications that matter
HealthShare User Group - share your stories & hear what's new and next for InterSystems HealthShare
Experience Lab - get hands-on with our latest innovations
Personalized Training - Learning Services experts available for one-on-one consultations
And much more...

Register Now!
Announcement
Andreas Dieckow · Jun 28, 2019
Conversion Sequence step 4 (see table below)

Over the last few months, we have made changes to InterSystems IRIS to make it easier for you to move from Caché/Ensemble (C/E) to InterSystems IRIS. The most significant adjustments are the re-introduction of non-Unicode databases and the in-place conversion. InterSystems IRIS now fully supports non-Unicode databases with all the functionality that already exists with Caché. The in-place conversion supports an upgrade-like process to install InterSystems IRIS right on top of your existing C/E deployments. We call it a "conversion" because it transforms your C/E instance into a true InterSystems IRIS instance.

InterSystems is excited to invite you to our in-place conversion field test program. This program will be active until the end of July and provides you with early access for testing and experiencing your move from C/E to InterSystems IRIS. We have already concluded a limited pre-field test and are pleased that all customers were able to successfully move their applications and convert their existing instances to InterSystems IRIS.

What will you need to participate? InterSystems will give you access to two documents, a special kit of InterSystems IRIS that offers the features for this field test, and of course a license key.

InterSystems IRIS Adoption Guide
The journey begins here, where you can discover the differences between the two product lines and learn all the information you need to port your application to InterSystems IRIS. Once you have your application running on InterSystems IRIS, you can move to the next step. By the way, you don't need to do anything special to activate the non-Unicode aspect.

InterSystems IRIS Conversion Guide
This document describes, in great detail, all the aspects and considerations for converting a single instance or instances that are part of mirror configurations.

The guides, InterSystems IRIS kit, and license key are available from our WRC download server. Look for the files that have the word "conversion" in the name.

Support for the in-place conversion and non-Unicode databases will be released with InterSystems IRIS 2019.1.1 before the summer is over. Please do not use the field test kit to convert production installations.

Please send all feedback to conversionft@intersystems.com, which will route your message straight to the team that will assist you. We hope to engage with you during this field test and include your feedback in the official release.

IRIS Adoption Initiative - Sequencing
Step | Migration or Conversion from/to | Status
1 | Migration to IRIS or IRIS for Health | Available today
2 | Migration of TrakCare to IRIS for Health | Available today
3 | In-place conversion to HealthShare Health Connect | Available today; contact InterSystems for details
4 | In-place conversion to IRIS | Limited Field Test: completed; Public Field Test: June 27, 2019; Expected completion: July 31, 2019
5 | In-place conversion to IRIS for Health | Q3 2019
6 | In-place conversion to HealthShare {IE, PD, HI, ….} | Q4 2019

In-place conversions are supported for Caché and Ensemble versions 2016.2 and later.

I love IRIS and am already using it in a couple of solutions. It is even better to be able to adopt IRIS across the full line. Keep up the good work!

IRIS is a very good product, with many new capabilities. However, as IRIS is available on 64-bit OSes only, customers running Caché/Ensemble on (old) 32-bit OSes will have to migrate those to a 64-bit OS before they can migrate to IRIS.

That is correct, InterSystems IRIS will not run on 32-bit OS systems. Thank you, Marco.
Andreas: I'm not technical, so please forgive my inability to answer this question from reading your post. Is any of the above relevant to a conversion from MapR to IRIS?

No, the focus of this conversion is to enable existing C/E customers to move to InterSystems IRIS.

Another concern is if your current Caché/Ensemble installation makes external calls using COM objects (proxy classes to external DLLs). It looks like this functionality was totally removed in IRIS: the "activate wizard" in Studio no longer exists, and the %Activate package is also removed.

Hello, is there any difference in the in-place conversion feature on different HealthShare instances? I am trying to do an in-place conversion from HealthShare 2018 to IRIS. The installer is supposed to ask me which instance (Caché, Ensemble, HealthShare, ...) I want to convert, but it doesn't. It just goes on to do a full install of a new IRIS. Am I missing something?

Andreas, what is the recommendation for migrating to IRIS if an Ensemble license is like this?
Elite, Multi-Server, Platform Independent
What happens with the license, since it is not compatible with IRIS?

I suggest you work with your Sales Rep to discuss licensing options.
As a licensed customer you can go to evaluation.intersystems.com in order to grab an InterSystems IRIS evaluation key for trying out the product.
Question
Ponnumani Gurusamy · Jul 7, 2019
Hi Team, would it be possible to add InterSystems certification to the rewards list on InterSystems Global Masters? For example, when a developer or Global Masters user reaches 10,000 points, we could offer them the chance to take the InterSystems Caché/Ensemble/IRIS certification exam. Then many developers would try to take the exam and get certified. This would be very useful for developers' careers, and we also have a lot of Caché developers in the market. Please correct me if I am wrong. Thanks, Ponnumani Gurusamy.

Hi Ponnumani!
Here are InterSystems certification offers
Announcement
Josh Lubarr · Oct 19, 2017
Hi, Community!

We are pleased to invite you to participate in the InterSystems Documentation Satisfaction survey. As part of ongoing efforts to make our content more usable and helpful, Learning Services wants your feedback about InterSystems documentation. The survey covers many different areas of documentation, and you can complete it in about five minutes. The deadline for responses is October 30. Also, responses are anonymous.

The survey is over, thanks!

And also you can earn 300 points in Global Masters for the survey completion.