I would expect the content type to be 'application/json'. (I assume the responseType is the contentType? In every HTTP Operation I have worked with that used JSON, that was the content type.)

The character set must be set to UTF-8 unless you need the Latin-1 (ISO-8859-1) character set, in which case use that. Bear in mind that both the server and the client must be consistent in declaring what they are sending and what they expect to receive.

I found the following article on Angular and HTTP (including the Interceptors introduced alongside the HttpClient):


We add HTTP Headers using the HttpHeaders helper class. It is passed as one of the arguments to the GET, POST, PUT, DELETE, PATCH & OPTIONS request methods.

To use HttpHeaders in your app, you must import it into your component or service.

Then create an instance of the class.

And then call the httpClient.get method, passing the headers as an argument.
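Put together, the steps above look roughly like this. Angular itself isn't available in a plain snippet, so the HttpHeaders class below is a minimal stand-in that mimics the immutable behaviour; in a real app you would import the genuine class with `import { HttpClient, HttpHeaders } from '@angular/common/http';` and inject HttpClient into your service.

```typescript
// Minimal stand-in for Angular's immutable HttpHeaders (illustration only).
class HttpHeaders {
  constructor(private readonly headers: Record<string, string[]> = {}) {}

  set(name: string, value: string | string[]): HttpHeaders {
    const values = Array.isArray(value) ? value : [value];
    return new HttpHeaders({ ...this.headers, [name.toLowerCase()]: values });
  }

  get(name: string): string | null {
    const v = this.headers[name.toLowerCase()];
    return v ? v.join(",") : null;
  }
}

// 1. Create an instance of the class, setting the content type.
const headers = new HttpHeaders().set(
  "Content-Type",
  "application/json; charset=utf-8"
);

// 2. Pass the headers as an argument to the request.
//    In Angular this would be:
//    this.httpClient.get(url, { headers });
console.log(headers.get("Content-Type")); // application/json; charset=utf-8
```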

Note that HttpHeaders is immutable, i.e. every method on an HttpHeaders object leaves it unchanged and returns a new HttpHeaders object.

The HttpHeaders class has several methods with which you can manipulate the headers.


set(name: string, value: string | string[]): HttpHeaders

The set method returns a new instance after modifying the given header. If the header already exists, its value is replaced with the given value in the returned object.

HttpHeaders are immutable

The HTTP headers are immutable. The following example does not work, as each set method returns a new headers object rather than updating the original.

As a workaround, you can use code like the following:

You can also use the following code:
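To illustrate all three patterns (using a minimal stand-in for the immutable HttpHeaders class, not Angular's actual implementation):

```typescript
// Minimal stand-in for Angular's immutable HttpHeaders (illustration only).
class HttpHeaders {
  constructor(private readonly headers: Record<string, string[]> = {}) {}

  set(name: string, value: string | string[]): HttpHeaders {
    const values = Array.isArray(value) ? value : [value];
    return new HttpHeaders({ ...this.headers, [name.toLowerCase()]: values });
  }

  has(name: string): boolean {
    return name.toLowerCase() in this.headers;
  }
}

// Does NOT work: each set() returns a NEW object; the original is unchanged.
const broken = new HttpHeaders();
broken.set("Content-Type", "application/json"); // result thrown away

// Workaround 1: chain the calls, so each set() operates on the previous result.
const chained = new HttpHeaders()
  .set("Content-Type", "application/json")
  .set("Accept", "application/json");

// Workaround 2: reassign the variable on every call.
let rebuilt = new HttpHeaders();
rebuilt = rebuilt.set("Content-Type", "application/json");

console.log(broken.has("Content-Type"));  // false
console.log(chained.has("Accept"));       // true
console.log(rebuilt.has("Content-Type")); // true
```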


append(name: string, value: string | string[]): HttpHeaders

The append method appends a new value to the existing set of values for a header and returns a new instance. The append method does not check if the value exists.
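The append behaviour can be sketched with the same kind of stand-in class (illustration only, not Angular's actual implementation):

```typescript
// Minimal stand-in for Angular's immutable HttpHeaders (illustration only).
class HttpHeaders {
  constructor(private readonly headers: Record<string, string[]> = {}) {}

  set(name: string, value: string | string[]): HttpHeaders {
    const values = Array.isArray(value) ? value : [value];
    return new HttpHeaders({ ...this.headers, [name.toLowerCase()]: values });
  }

  // append adds to the existing values without checking for duplicates.
  append(name: string, value: string | string[]): HttpHeaders {
    const values = Array.isArray(value) ? value : [value];
    const existing = this.headers[name.toLowerCase()] ?? [];
    return new HttpHeaders({
      ...this.headers,
      [name.toLowerCase()]: [...existing, ...values],
    });
  }

  get(name: string): string | null {
    const v = this.headers[name.toLowerCase()];
    return v ? v.join(",") : null;
  }
}

const headers = new HttpHeaders()
  .set("Content-Type", "application/json")
  .append("Content-Type", "application/x-www-form-urlencoded");

console.log(headers.get("Content-Type"));
// application/json,application/x-www-form-urlencoded
```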

The above results in a content-type request header of content-type: application/json,application/x-www-form-urlencoded


has(name: string): boolean

Returns true if a header with the given name exists in the HttpHeaders. The following code checks whether the content-type header is present in the request headers and, if not, adds it.
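A sketch of that check (again with a minimal stand-in class rather than Angular's real one):

```typescript
// Minimal stand-in for Angular's immutable HttpHeaders (illustration only).
class HttpHeaders {
  constructor(private readonly headers: Record<string, string[]> = {}) {}

  set(name: string, value: string | string[]): HttpHeaders {
    const values = Array.isArray(value) ? value : [value];
    return new HttpHeaders({ ...this.headers, [name.toLowerCase()]: values });
  }

  has(name: string): boolean {
    return name.toLowerCase() in this.headers;
  }
}

let headers = new HttpHeaders();

// Add the content-type header only if it is not already present.
if (!headers.has("Content-Type")) {
  headers = headers.set("Content-Type", "application/json");
}

console.log(headers.has("Content-Type")); // true
```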


I started writing a response to your questions yesterday afternoon, using LabTrak as the basis for my answer. I wrote about 400 lines of explanation and then my laptop rebooted to install some updates and I lost everything I had written. So I am still going to answer, but I will abbreviate the explanation as I don't have enough time to rewrite the whole document from yesterday.

1) Different database Storage Models

  • DefaultStorage
    • Is the Storage model used when you create a new Class/Table in Ensemble/IRIS.
    • Stores data in a Global whose name ends with a "D". Indices for the Class/Table go in a Global ending with an "I", and Stream data in a Global whose name ends with an "S"
    • Data is stored in a $list structure. $list prepends a couple of bytes to each field that tell the Compiler how long the field is and the base datatype of the field: String, Number, Boolean or Bit.
    • By default, the RowId is a Sequential Contiguous Integer starting from 1. The RowId is referenced by the column named "ID"
    • The RowId can be overridden by creating an Index on a property of the class and assigning it the attributes [Unique, PrimaryKey, IDKey]. PrimaryKey says that the property value is the RowId for each row in the Table. IDKey says that the property Value is the Object.%Id() and is the Object Identifier. Unique is implicit in that you cannot have two different records with the same RowId value.
    • The Global reference for the Data Global looks like this:
    • ^GlobalD(RowId)=$lb(Property1,Property2,...,PropertyN)
    • An index on Property X will generate a Global Reference: ^GlobalI("{IndexName}",{Property_Value},{RowId_Value})=""
    • The RowId, irrespective of whether it is system generated or based on the value of a specific property or properties is the "ID" in an SQL statement
  • SQLStorage
    • This model was implemented to allow InterSystems customers who had developed applications that used direct Global Access to create a Mapping of the Global Structures in such a way that the data in the global can be exposed as Class Objects or Table Rows.
    • The RowId can be referenced as "ID" in an SQL Statement. The RowId has to be generated by the application even if it is a sequential Integer; many applications, including LabTrak, subscript their Application Globals with a data value generated by the application. For example, the Episode Number in LabTrak is the 2-character Site Code followed by a string of numerics, e.g. AA1345363
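As a sketch only (TypeScript stand-ins, not actual IRIS storage code), the DefaultStorage shape described above can be modelled like this: the data global maps a sequential RowId to a $list-like array of property values, and each index entry is a key of the form {IndexName},{Property_Value},{RowId} with an empty value.

```typescript
// Sketch of DefaultStorage: ^GlobalD(RowId) = $lb(Prop1, ..., PropN)
// and ^GlobalI(IndexName, PropValue, RowId) = "".
const dataGlobal = new Map<number, string[]>();
const indexGlobal = new Set<string>(); // key: IndexName|PropValue|RowId

let nextRowId = 0; // sequential contiguous integer RowId, starting at 1

function insertRow(props: string[], indexedProp: number, indexName: string): number {
  const rowId = ++nextRowId;
  dataGlobal.set(rowId, props); // ^GlobalD(rowId) = $lb(...)
  indexGlobal.add(`${indexName}|${props[indexedProp]}|${rowId}`); // ^GlobalI(...) = ""
  return rowId;
}

const id = insertRow(["Smith", "Nigel", "M"], 0, "SurnameIdx");
console.log(id, dataGlobal.get(id));
```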

2) Links between Tables

  • A Designative Reference is the term we use for a Property/Column whose value is the RowId of a row in another Class/Table. The most common examples are code tables. Suppose you have a code table CT_Country where the Country_Code is the RowId of the table; a column called CountryDR on the table EP_VisitNumber (a Trak Episode) is then linked to a row in CT_Country.
  • The CT_Country Global will look like this: ^User.CTCountryD({Country_Code})="{Country_Name}\{Country_Currency}\{Country_Telephone_Code}" and an example would be: ^User.CTCountryD("ZA")="South Africa\ZAR\+27"
  • The Episode Record will look like this: ^TEPI({Episode_Number})="field1\Field2\...\ZA\...\FieldN".  The SQL query "Select Country_Code from SQLUser.EP_VisitNumber where EP_VisitNumber='AA12347690'" will display ZA from the Country_Code column. You can use a "->" (Points Into) construct supported in Ensemble/IRIS SQL to access the Country_Name without having to specify the CT_Country TableName in the FROM clause or the WHERE clause "WHERE EP_VisitNumber.Country_Code=CT_Country.ID"
  • The Select statement looks like this: "SELECT EP_VisitNumber,EP_VisitSurname,Country_Code->CountryName FROM SQLUser.EP_VisitNumber"
  • Designative References do not support Referential Integrity. That means that you could delete the "ZA" row in CT_Country even though there may be thousands of Episodes that point to that row. In order to enforce Referential Integrity, you declare a ForeignKey definition on CountryCode in the Class Definition. Ensemble will generate an Index of Country Codes and Episode Numbers, and if there are any entries in this Index for the Country Code "ZA", i.e. Episodes linked to the Code "ZA", then an Error Status is generated indicating that the ForeignKey Constraint prevents the delete of the "ZA" row in CT_Country, as there are rows in other Tables that reference that Row.
  • LabTrak doesn't use ForeignKeys (generally speaking); the application does, however, prevent you from deleting rows in Code Tables if those Codes are referenced by other LabTrak Tables.
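The difference between a plain Designative Reference and a ForeignKey constraint can be sketched like this (TypeScript stand-ins; the table contents mirror the CT_Country example above):

```typescript
// Code table: ^User.CTCountryD(CountryCode) = "Name\Currency\TelCode"
const ctCountry = new Map<string, string>([["ZA", "South Africa\\ZAR\\+27"]]);

// Episodes holding a designative reference (the CountryDR column).
const episodes = new Map<string, { countryDR: string }>([
  ["AA12347690", { countryDR: "ZA" }],
]);

// A designative reference alone enforces nothing: deleting "ZA" would succeed
// and orphan the episodes. A foreign-key constraint performs the check below
// before allowing the delete.
function deleteCountry(code: string): boolean {
  for (const ep of episodes.values()) {
    if (ep.countryDR === code) return false; // FK constraint: delete refused
  }
  ctCountry.delete(code);
  return true;
}

console.log(deleteCountry("ZA")); // false: an episode still references "ZA"
```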

3) Relationships

There are two types of relationships most commonly used in Ensemble/IRIS Classes/Tables: Parent-Child and One-Many. Many-Many relationships can be defined. Refer to the documentation on Relationships to see how Many-Many relationships are defined. I can't think of any examples of Many-Many relationships in LabTrak so instead, I will just focus on the two main relationship Types.

Parent-Child Relationships

If you look at the TEPI global in LabTrak, which is the global that stores Episodes, you will see something like this:

^TEPI({Episode_Number})="{Episode Fields}"



for example:

^TEPI("AA00000401")="SALM\NIGEL\M\44453\\\S2\61839\646\61838\480\61839\0DR\\R\\\1039444\\41C028W000\\\\\40\\\F\\fred.smith\\61839\646\N\\AA\,690425\\\\\\N\0\\\\N\H\38768\\\\41C028W000\\\\\\\\\\\\\\\Z03.9\52ZETH000003" - This is the Episode Record

^TEPI("AA00000401",1,"H065",1)="61842\594\jane.doe\61842\597\jane.doe\\N\\\\\\\\\\\\\61839\646\\\N\\\Y\\\A\~~~\ \N\\fred.smith\\\\\\N\\61838\28800\\\1\\\\\Y\\\\61839\38760\\\N\\\\\\\N\61843\8122\\\P\Z03.9\" - This is a Test Set Record within the Episode

^TEPI("AA00000401",1,"H065",1,"DATA","H0440")="4.14\\\\\AAH02\\\\\\\\\" - This is a Test Item Record within the Test Set.

As you can see, this global is a Multidimensional Array with many subscript levels. The ability to create Arrays like this is one of the features that differentiates Cache/Ensemble/IRIS from other Database Technologies, and it is one of the reasons why Ensemble/IRIS creates databases that are, on average, half the size of other database technologies on a like-for-like basis. Partly this is because the data records do not use fixed-length fields: by using Delimiters or $list we do not need to pad out field values with leading '0's, trailing spaces or NULL characters. Secondly, storing the Test Sets within the Episode means that when Ensemble/IRIS loads the data page containing that Episode into memory, it brings the Test Sets and Test Items with it. When you walk through the Test Sets, the Test Set Records are already in memory; we haven't had to go to another data structure in the Database to fetch them from a Test Set Table, as you would need to do in other Relational Database Technologies.

If you delete an Episode the Test Sets and Test Items are deleted as well. You do not need to delete the Test Items first, then delete the Test Sets and then, finally, delete the Episode.

Nor do you need to maintain Indices to link Test Items to Test Sets or Test Sets to Episode.
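The nesting of Episode, Test Set and Test Item shown above can be modelled (purely as an illustration, not IRIS code) as one nested structure, which is why deleting the Episode takes the children with it:

```typescript
// Sketch of the ^TEPI parent-child nesting: an episode record carries its
// test sets, and each test set carries its test items, in one structure.
interface TestSet { record: string; items: Map<string, string> }
interface Episode { record: string; testSets: Map<string, TestSet> }

const tepi = new Map<string, Episode>();

tepi.set("AA00000401", {
  record: "SALM\\NIGEL\\M\\...",               // the Episode Record
  testSets: new Map([
    ["H065|1", {
      record: "61842\\594\\jane.doe\\...",     // a Test Set within the Episode
      items: new Map([["H0440", "4.14\\\\\\\\\\AAH02"]]), // a Test Item
    }],
  ]),
});

// Deleting the episode removes the test sets and items in one operation:
// no child-first deletes and no linking indices to maintain.
tepi.delete("AA00000401");
console.log(tepi.size); // 0
```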

In LabTrak, the property in the Parent Table that references the Child rows in the Child Table is prefixed with the word "Child", and the corresponding property in the Child Table that points to the row in the Parent Table is suffixed with "ParRef".

One-Many Relationships

In the One-to-Many relationship there are two tables; a row in the "One" table is linked to many rows in the "Many" table. From an Objects point of view, there is a Property in the "One" class that points to the related rows in the "Many" table. In LabTrak the property name is prefixed with the word "Child". There is a corresponding property in the "Many" Table whose name is suffixed with "ParRef".

Your Question

When I read your question and I looked at the Schema Diagram I was a bit confused as there is no SQLUser.CT_Loc table in LabTrak. There is however a table called SQLUser.CT_UserLocation.

If you use Management Portal -> System Explorer -> SQL in the LabTrak namespace and select the SQLUser schema in the Schema drop-down list on the left of the SQL form, you will be presented with a list of all Tables in the SQLUser schema. If you scroll down to the table "CT_UserLocation" and drag it into the "Execute SQL" text area on the form, it will display the following SQL Statement:

SELECT CTUSL_RowId, CTUSL_Code, CTUSL_Desc, CTUSL_Destination_DR, CTUSL_UniqueSite, CTUSL_MoveDirectory, CTUSL_AccreditationNumber, CTUSL_WebReportFooter, CTUSL_DisplaySequence, CTUSL_RegionCode, CTUSL_Address1, CTUSL_Address2, CTUSL_Address3_Suburb, CTUSL_Address4_State_DR, CTUSL_Address5_PostCode, CTUSL_Phone, CTUSL_Fax, CTUSL_email, CTUSL_StructuredCode, CTUSL_StructuredCodeLength, CTUSL_StructuredCodeLevel, CTUSL_Address, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR, CTUSL_ActiveFlag, CTUSL_DefaultNID, CTUSL_UnixMoveDirectory, CTUSL_PFforBatchEntry
FROM SQLUser.CT_UserLocation

Note the Columns  CTUSL_Address4_State_DR, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR. These are all Designative References to rows in other Tables.

The EP_VisitTestSet table contains the following Columns:

VISTS_ParRef, VISTS_RowId, VISTS_TestSetCounter, VISTS_TestSet_DR, VISTS_DateOfEntry, VISTS_TimeOfEntry, VISTS_UserEntered_DR, VISTS_DateOfAuthorisation, VISTS_TimeOfAuthorisation, VISTS_UserAuthorised_DR, VISTS_PathologistID_DR, VISTS_ExcludeFromCCR, VISTS_Rule3Exempt_Sequence, VISTS_Priority_DR, VISTS_Rule3Exempt_Max, VISTS_TherapeutDosage, VISTS_TimeOfDosage, VISTS_24HUVolume, VISTS_24HUTimePeriod, VISTS_BB_TransfEvents_DR, VISTS_Rule3Exempt_Date, VISTS_DateOfPathologistAtt, VISTS_TimeOfPathologistAtt, VISTS_StaffNotes, VISTS_DateOfCreation, VISTS_TimeOfCreation, VISTS_StandardLettersChecked, VISTS_BB_DateRequired, VISTS_ExcludeFromPatientMean, VISTS_UserSite_DR, VISTS_Machine, VISTS_Printed, VISTS_SuperSet_DR, VISTS_StatusResult, VISTS_DFT_TimeOfFirstCollection, VISTS_HISTO_Extra, VISTS_HISTO_BillingItem, VISTS_SupressBilling, VISTS_HospitalRefNumber, VISTS_UserCreated_DR, VISTS_MoveToReferralLab_DR, VISTS_MoveToUserSite_DR, VISTS_DFT_DR, VISTS_DFT_Position, VISTS_DFT_DateOfFirstCollection, VISTS_StatusEntry, VISTS_Confidential, VISTS_SpecimenType_DR, VISTS_SpecimenNo, VISTS_DateOfCollection, VISTS_TimeOfCollection, VISTS_CollectedBy_DR, VISTS_DisplaySequence, VISTS_SupressReason, VISTS_UserSupress_DR, VISTS_DateOfSupressBilling, VISTS_Rule3Exempt_Comments, VISTS_AnatomicalSite_DR, VISTS_Reason_DR, VISTS_DateOfReason, VISTS_TimeOfReason, VISTS_UserReason_DR, VISTS_DateOfReceive, VISTS_TimeOfReceive, VISTS_PaymentCode_DR, VISTS_SpecimenGroup_DR, VISTS_Document_DR, VISTS_BB_Neonatal, VISTS_RR_Date, VISTS_RR_Time, VISTS_RR_User_DR, VISTS_RV_Date, VISTS_RV_Time, VISTS_RV_User_DR, VISTS_BB_DoNotFile, VISTS_DateOfLastChange, VISTS_TimeOfLastChange, VISTS_UserOfLastChange_DR, VISTS_DateOfSentSTM, VISTS_TimeOfSentSTM, VISTS_DateOfReceivedSTM, VISTS_TimeOfReceivedSTM, VISTS_MovementStatus, VISTS_AddedByAction, VISTS_PricingStatus, VISTS_ICD10List, VISTS_LongTerm, VISTS_RequestBy_DR, VISTS_LongTermReason, VISTS_PairedSeraQueue, VISTS_DoctorDR, VISTS_BB_TimeRequired, 
VISTS_BB_Tags, VISTS_DTOfResultChange, VISTS_DateOfFirstAuthorisation, VISTS_TimeOfFirstAuthorisation, VISTS_AnatomicalSiteFT, VISTS_ReasonReportedTo, VISTS_ReasonTelephone, VISTS_ReasonObservations, VISTS_ManualPCEntry, VISTS_ReasonClearResults, VISTS_ReasonClearSN, VISTS_ReasonResultsNR
FROM SQLUser.EP_VisitTestSet

If I add a WHERE Clause: WHERE VISTS_ParRef %STARTSWITH 'AA', it will return records from the site whose Site Code is "AA".

If you run this query you will see rows of data that look like this

AA00000005 AA00000005||H065||1 1 H065 04/19/2010 1209

The ParRef is the Episode Number. The RowId for the Test Set is AA00000005||H065||1, i.e. {Episode}||{TestSetCode}||{TestSetCounter}.
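Composing and decomposing that RowId is straightforward; a small sketch (the helper names are hypothetical, not LabTrak functions):

```typescript
// The Test Set RowId is composed as {Episode}||{TestSetCode}||{TestSetCounter}.
function makeTestSetRowId(episode: string, testSet: string, counter: number): string {
  return [episode, testSet, String(counter)].join("||");
}

function parseTestSetRowId(rowId: string): { episode: string; testSet: string; counter: number } {
  const [episode, testSet, counter] = rowId.split("||");
  return { episode, testSet, counter: Number(counter) };
}

console.log(makeTestSetRowId("AA00000005", "H065", 1)); // AA00000005||H065||1
console.log(parseTestSetRowId("AA00000005||H065||1").episode); // AA00000005
```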

I created an ODBC connection to a LabTrak Namespace on our UAT server. I specified the Server IP address, the Super Server Port Number, the Namespace and my Credentials, and tested the connection, which was successful. The ODBC connection can be used by Excel and other reporting tools to create Reports and Charts.

I then used an application called DBeaver (available on the Open Exchange site) to create ERD diagrams of the LabTrak classes. DBeaver uses the InterSystems JDBC driver and you set up the connection in DBeaver itself.



If you are using the HS FHIR Resource classes in IRIS for Health then you will see that attributes such as Patient.Picture are BinaryStreams. If you are using a BinaryStream you need to convert it into Base64 encoding, which generates a stream of characters that does not include any control characters that might cause a JSON or XML parser to think it has encountered a line terminator. If you look at the FHIR to SDA and SDA to FHIR data transformations you will see how they transform the Inbound Stream from Base64 to Binary, and from Binary to Base64 encoding for the outbound SDA to FHIR JSON or XML.


I have spent the last 4 months developing a generic HL7 and FHIR Codebase. The way it works is as follows:

1) I have a Codebase Namespace that consists of a number of common classes:

a) A Message Queue Service that processes messages in a Message Queue. The Message Queue class has a number of standard properties including: CreateTS, ProcessTS, CompletedTS, Target Message Type (HL7 or FHIR), fields for the generated Request FHIR or HL7 message, fields for the Response HL7 or FHIR message, fields for the HL7 ACK/NACK Code and Text, HTTP Response Status, and Overall Message Status.

b) There are a number of standard methods for every Message Queue: CreateMessage(), UpdateMessage(), GetNextMessage(), CompleteMessage(), ResendMessage(), ResendMessageRange() and PurgeMessages(). These methods are called by the Business Service that processes the Message Queue and by the Business Process to resend or complete messages; the UpdateMessage() method is used to update the fields in the message with data collected as the message passes through the Production.
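As a rough sketch (the method names come from the description above; the state values and field types are my assumptions, and the real implementation is ObjectScript, not TypeScript), the queue lifecycle looks like this:

```typescript
// Hedged sketch of the message-queue lifecycle described above.
type Status = "Created" | "Processing" | "Completed"; // assumed state names

interface QueueMessage {
  id: number;
  createTS: Date;       // set by CreateMessage()
  processTS?: Date;     // set when the Business Service picks the message up
  completedTS?: Date;   // set by CompleteMessage()
  status: Status;
}

class MessageQueue {
  private messages: QueueMessage[] = [];
  private nextId = 0;

  createMessage(): QueueMessage {
    const msg: QueueMessage = { id: ++this.nextId, createTS: new Date(), status: "Created" };
    this.messages.push(msg);
    return msg;
  }

  // Called by the Business Service that polls the queue.
  getNextMessage(): QueueMessage | undefined {
    const msg = this.messages.find((m) => m.status === "Created");
    if (msg) { msg.status = "Processing"; msg.processTS = new Date(); }
    return msg;
  }

  // Called by the Business Process once the response has been handled.
  completeMessage(id: number): void {
    const msg = this.messages.find((m) => m.id === id);
    if (msg) { msg.status = "Completed"; msg.completedTS = new Date(); }
  }
}

const q = new MessageQueue();
const m = q.createMessage();
q.getNextMessage();
q.completeMessage(m.id);
console.log(m.status); // Completed
```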

c) Code tables to convert some application field values into HL7 or FHIR equivalents.

d) The following Business Operations: HL7HTTPOperation, HL7HTTPSOperation, FHIRHTTPOperation, FHIRHTTPSOperation, HL7FileOperation and FHIRFileOperation 

e) A Business Service that calls the GetNextMessage() method and if it finds a message it puts the MessageId into an Ensemble Request Message and sends it to the Business Process.

f) The Business Process uses the MessageId to open the Message Queue Object which, apart from the standard fields mentioned in (a), contains any fields derived from the source application that identify the source data you want to transform into an HL7 or FHIR Message. In the OnRequest() method of the Process you use the fields in the Message to retrieve the data from the data source and then invoke a DTL to transform the data into an HL7 or FHIR Message. The resultant message is written to the Message Queue object. Then, depending on how the Interface is configured, the Ensemble Request Message is sent on to any of the Business Operations I listed. The Business Operations then do the work of writing the HL7 or FHIR message into the HTTP Request Message. The HTTP/S Operations then send the HTTP request to the Target Server. When a response is received, the response status is noted and the Message Queue object updated. If there is content in the response then that too is written into the Message Queue Object, and an Ensemble response message is instantiated and sent back to the Business Process.

g) The Business Process OnResponse() method processes the response it receives from the HTTP Operation and updates the Message Queue Object with the overall status of the transaction. It also calls the CompleteMessage() method, which tidies everything up and sets the CompletedTS to the current Date/Time. There are two calculated fields: TimeFromCreateToComplete and TimeFromProcessToComplete. The second is based on the time from when the message was picked up by the Business Service through to completion; the first records the time from when the message was originally created through to completion.

h) There is a Housekeeping Business Service that purges the Message Queue, Ensemble Messages, Ensemble Log Files ($$$TRACE etc.) and the Inbound and Outbound file directories. The purges are based on configurable "Number of Days to Keep {messages/logs/etc...}" settings.

i) There is a Debug Logger that you can call anywhere in your code to record some debug information. I did this because you can't use $$$TRACE or any of the $$$LOG... calls in non-Production classes. It also encourages developers to use a standard debugging approach with a standard format, rather than having lots of ad hoc lines like set ^Nigel($i(^Nigel),$zdt($h,3))="The value of variable x is "_$g(x).

j) There is an Alert Notification System. I have the concept of Error Alert Notifications, such as a comms failure in the HTTP Operation or an exception caught in a Try/Catch. Then there are Condition Notifications. They are intended for a different audience; an example would be when the Interface discovers that there is no entry in a code table for a specific code. An email will be sent to the appropriate person/people to update the code table, and if the code isn't there a day later a follow-up email will be sent. The Alert Notification Service also checks the number of items in the Production Queues and will send emails to notify someone if the queues are empty (which could mean that something has broken in the source database/application and it is no longer creating messages for the message queue).

k) There is a single configuration object. There is not a single hard-coded value in the entire codebase; every element of the interface gets its context from the configuration object. The configuration object has about 40 properties, including the name of the Message Queue class, the Business Process Production Item Name and Business Process Class Name, the Inbound and Outbound File Directory names, and the HL7 or FHIR Request and Response File names (if you are using the File Operation, which is intended mainly for testing purposes but can also be used if you are using FTP to send a message on to some other system).

l) The Codebase sits in one database along with the code tables and code table mapping classes and their contents.

m) When you want to create a new Interface you create a new Namespace and map the Codebase classes into that Namespace. You also map the code tables and Code table mapping globals into the new Interface Namespace.

n) In the source application database you import one class: the Interface Mapping Class. If you have an application or Ensemble interface in your source, then in some appropriate class where you have the details of a Patient or other Entity that you want to send into the new interface, you add two lines of code that populate an array of name/value pairs corresponding to the message-queue-specific fields in the Message Queue class created for that interface. As mentioned earlier, the Message Queue Class has a number of standard properties and methods. When you create a new Interface you create a new Message Queue Class, inherit the standard Properties and Methods, and then add properties for any identifiers the interface will use to access the data used in the Data Transformation to create your HL7 or FHIR message. The name/value pairs in the array you pass to the Interface Mapping class method correspond to those identifying property names.

o) The alternative is that an Interface will access the source database and walk through a given table and for every entry it finds in that table it will call the Interface Mapping class method to create a new Message Queue Object with the identifier for that item in the table you are processing. This is really useful for Bulk Loads of data from one source into a target destination.

p) There is also a testing module that will take data from any source. You can define test conditions, for example: scramble the data in one of the source table fields, generate random data for any field, insert or remove items from any collection, or remove values from properties. This module uses a combination of the Populate Utils methods and a whole host of Patient Names, Addresses and Identifiers, and you can write custom methods to manipulate the data from the data source.

Tests are conducted on one or more source records which are grouped into a "Manifest". You tell the Manifest how many records you want to process and where to get the records from. The Manifest Records are then processed by the Test Interface, which runs some SQL or object code to get the source data for each Manifest Record, calls the DTL you have created to convert that source data into an HL7 or FHIR message, and then runs through the test conditions and applies them to each generated message before the message is sent to the HTTP Operations and optionally the File Operations. Details of the Request and Response messages are captured in the Manifest Records.

You can also specify documentation on what you expect to happen when the transformed data is processed by the target application you are testing. Once completed, a manifest report will generate a .csv file containing the data from every record in the manifest. Having analysed this, you update the manifest with the outcome; if the outcome differs from the expected outcome, you either send the whole manifest back to the developers and tell them to fix the bug, or you look at why the tests produced a different outcome and whether that is due to issues in the source database or application. Tests are defined in a Tests Table and are then grouped into a Test Set which is linked to a Manifest, which means that Tests can be reused for other test scenarios.

The entire collection of classes is extensively documented. Once I have monitored the first Interface that is going into testing in the very near future, and I have resolved any bugs that haven't been picked up by the various unit tests I have done so far, then, assuming it is stable (lol, I wrote it, how could it not be?), it should take only a few days to generate any new interface. I wrote this architecture as I have 14 HL7 and FHIR Interfaces to write and they are all almost identical.

The whole framework comes with a few CSP pages that I created to maintain the code tables, code table mappings and the configuration class. I am busy learning Angular at the moment and once I have that under my belt then I will create a REST API that any UI can call so that anyone who uses the Framework can build their own UI on top of it. 

If anyone is interested in this framework please contact me. If there are enough people interested then I will do the work to publish this on Open Exchange.

If anyone wants the Framework but doesn't want to do any of the development to make it work in their environment, then I will do the implementation, create the interfaces and whatever else is required on a contract basis.

There are some aspects of this framework that could make better use of some features in Ensemble, such as Code Tables, uploading CSV files, and the Managed Alert Monitoring System (which I rather like but have not built into this framework, as my Alert Notification Module does pretty much the same thing but does not have the "Help Desk" feel of the Ensemble Alert Monitoring System).

If there are other adapters that are required then I can add them into the framework codebase.


$$$ASSERT(condition) Writes an entry of type Assert, if the argument is false. condition is an ObjectScript expression that evaluates to true or false.

$$$LOGASSERT(condition) Writes an entry of type Assert, for any value of the argument. condition is an ObjectScript expression that evaluates to true or false.

So, in the same way that you would have a $$$LOGERROR() statement or a $$$TRACE in your code, $$$ASSERT and $$$LOGASSERT() will write to ^Ens.Log.UtilD ($$$ASSERT only when the condition fails).

The exact code for each is defined in the Ensemble.inc Include file

#define ASSERT(%condition) If '(%condition) { $$$LOGASSERT("Assert Condition Failed: "##quote(%condition)$char(13,10)) BREAK }

#define LOGALERT(%arg) Do ##class(Ens.Util.Log).LogAlert($$$CurrentClass,$$$CurrentMethod,%arg)

Interesting: the $$$ASSERT macro has a BREAK statement in it.

So if you have issued BREAK L/L+/S/S+ and the task is running in the foreground, it will automatically BREAK at that point.

If the Production item is not running in the foreground the BREAK will be ignored.

It's basically the same as saying 'If condition $$$LOGINFO("message")'. I suspect it is probably useful in BPLs, where writing an IF construct is tedious in comparison with a single $$$... statement in ObjectScript.


Just one additional comment. If the databases/namespaces are in different instances on the same or different servers, then you need to create a remote database connection to the databases on the secondary server (the server instance which contains the tables you want to use in queries in your primary server instance). Once you have made the secondary databases available to your primary server then you can create a namespace on the primary server to include the database or databases (if you split your globals into one database and your routines into another database) and then you can use Package Mapping on the primary Server Namespace to map the Class/Table Packages into the Primary server Instance. What you don't want to do is map the globals associated with the secondary server namespace into the primary server namespace as this will cause the globals to be created in the primary server namespace database which means that if you ever disconnect the servers then the data that rightfully belongs to the secondary server will be orphaned in the databases of the primary server.




Undoubtedly VS Code. There are new ObjectScript plugins being developed all the time, and now that you can start using Python and R in the latest IRIS 2021.1 (in the Early Adopter Program) you just add the appropriate Extensions for these languages as well. It is very obvious that VS Code is growing in popularity just by the sheer volume of extensions available. The extensions for ObjectScript include the Server Manager, gj::locate, gj::connect, the ObjectScript Language Server, and CacheQuality for both VS Code and Cache Studio, which helps you write better code. The built-in Source Control connects to GitHub or private Git servers and there are a number of Git extensions. I'm just waiting for someone to add support for the graphical DTL, BPL and Business Rules editors (though of course you can do those through the Management Portal, which you can also run within VS Code). Personally, I hated Atelier, and as I had used Cache Studio for roughly 30 years the idea that I might want to use a different editor never crossed my mind; Atelier was just scary for a simple developer like me, but VS Code changed all that. I still use Cache Studio though (it's a hard habit to break, not that I particularly want to break it).


That sounds really cool. I have a friend who had a house in Cape Town: he had installed Solar Panels and water collection systems, and everything in his house, from his music system, alarm system, gates, garage doors, lights and curtains to his fridge, was controlled through an app on his phone. His solar panels and storage cells effectively covered all of his electricity needs, and his City Power electricity bill came to about R5 a month.


Glad to hear that you have IRIS on your Raspberry Pi. Before I send you some suggestions, can you give me a bit of background on what aspects of IRIS you are most interested in? I take it you have access to the Open Exchange; also, how much experience do you have with IRIS or Ensemble?

In the meantime here are some suggestions:


https://openexchange.intersystems.com/package/ObjectScript-Package-Manager (ZPM is the most useful tool for installing Open Exchange Applications)

https://openexchange.intersystems.com/package/zpm-registry This will show you a list of OEX apps that are zpm ready

Are you using Cache Studio or VS Code, and do you use Git? If you use VS Code then there are a number of ObjectScript Extensions available and they are all useful.


https://openexchange.intersystems.com/package/Trying-Embedded-Python  There are going to be a lot of Python apps appearing on OEX now that we have a version with Python fully integrated into IRIS (the first implementation of Native Python within IRIS).

Python opens the gateway to more adventurous use of ML, NLP and AI, and there are a number of ML and AI examples (with or without Python).

Some useful information on Python at



DBeaver is an excellent Database Viewer/Creator with native support for the IRIS JDBC driver, and it's free.



That should get you started.

Please let me know how you get on with these, and if there is anything more I can help you with, just message me.



I have done some more experimenting. In the Contact class I have a ContactPhoneNumbers property which I defined as %ListOfDataTypes, and I noticed that the values were being generated but not exported to JSON, so I changed the type to %ArrayOfDataTypes and that didn't work either. I played around with the %JSON attributes to no avail. I read the documentation on the %JSON.Adaptor class; there are strict rules that Arrays and Lists must contain literals or objects, so I wrapped the Phone Numbers in quotes even though I was generating them as +27nn nnn nnn, but that made no difference. I suspect that the ElementType attribute should be set. In the ParentClass I specify that the array of object Oids has an ElementType of %Persistent (the default is %RegisteredObject) and I think that I should do the same with the Phone Number array/list.



I should have included the class definition for Parent

Include DFIInclude

Class DFI.Common.JSON.ParentClass Extends (%Persistent, %JSON.Adaptor, %XML.Adaptor, %ZEN.DataModel.Adaptor)
{

Property ParentId As %String(%JSONFIELDNAME = "parentId", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {"Parent"_$i(^Parent)} ];

Property ParentName As %String(%JSONFIELDNAME = "parentName", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {..ParentName()} ];

/// Array of schema objects, keyed by schema name (see the description below)
Property Schemas As %ArrayOfObjectsWithClassName;

ClassMethod ParentName() As %String
{
    quit "Example: "_$i(^Example)
}

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(12)
            for j=1:1:count {
                set k=$r(4)+1
                set schema=$classmethod(array(k),"%New")
                set tSC=schema.%Save() quit:'tSC
                do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
            }
            quit:'tSC
            set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex { set tSC=ex.AsStatus() }
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

}



I believe that I have a solution for this.

I worked on the basis that there is a 'Parent' object that has a property Schemas of type %ArrayOfObjectsWithClassName. This allows you to create an array of objects where the 'key' is the schema name and the 'id' is the instance's %Oid().

I then defined 4 classes:

Reference, Contact, Patient, Practitioner

I then created a method to Build N instances of the ParentClass. That code reads as follows:

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(10)
            for j=1:1:count {
                set k=$r(4)+1
                set schema=$classmethod(array(k),"%New")
                set tSC=schema.%Save() quit:'tSC
                do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
            }
            quit:'tSC
            set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex { set tSC=ex.AsStatus() }
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

Initially I wanted to see if I could (a) insert different object types into the array and (b) export the parent object to JSON, so to make life easier I specified [ InitialExpression = {some expression} ] to generate a value for each field, sort of like %Populate would do, because I didn't want to pre-create instances in the 4 schema tables and then manually go and link them together.

When I ran my Method to create 10 Parents it created them and as you can see in the logic I generate a random number of schemas.

That all worked, and I then exported the object to a JSON string, resulting in this:

{"%seriesCount":"1","parentId":"Parent36","parentName":"Example: 38","schemas":{"Contact_1":{"%seriesCount":"1","contactGivenName":"Zeke","contactSurname":"Zucherro"},"Contact_11":{"%seriesCount":"1","contactGivenName":"Mark","contactSurname":"Nagel"},"Contact_3":{"%seriesCount":"1","contactGivenName":"Brendan","contactSurname":"King"},"Contact_8":{"%seriesCount":"1","contactGivenName":"George","contactSurname":"O'Brien"},"Patient_10":{"%seriesCount":"1","patientId":"PAT-000-251","patientDateOfBirth":"2021-05-05T03:38:33Z"},"Patient_2":{"%seriesCount":"1","patientId":"PAT-000-401","patientDateOfBirth":"2017-09-30T21:56:00Z"},"Patient_4":{"%seriesCount":"1","patientId":"PAT-000-305","patientDateOfBirth":"2019-04-19T14:04:11Z"},"Patient_5":{"%seriesCount":"1","patientId":"PAT-000-366","patientDateOfBirth":"2017-07-03T18:57:58Z"},"Patient_7":{"%seriesCount":"1","patientId":"PAT-000-50","patientDateOfBirth":"2016-11-26T03:39:36Z"},"Patient_9":{"%seriesCount":"1","patientId":"PAT-000-874","patientDateOfBirth":"2019-03-28T15:22:37Z"},"Practitioner_6":{"%seriesCount":"1","practitionerId":{"%seriesCount":"1","practitionerId":"PR0089","practitionerTitle":"Dr.","practitionerGivenName":"Angela","practitionerSurname":"Noodleman","practitionerSpeciality":"GP"},"practitionerIsActive":false}}}

Because I am effectively using an array of objects, the array is subscripted by 'key', so if there are multiple instances of, say, "Patient", then each instance of "Patient" would overwrite the existing "Patient" in the array. So, in creating the array, I concatenated the counter 'j' to the schema name.

In object terms, if you open an instance of ParentClass and use the GetAt('key') method on the Schemas array, you will be returned the full object OID, and from that you can extract the class name and the %Id().
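A sketch of that extraction (the key "Contact_1" is just an example taken from the JSON above; an object OID is a $LIST whose first element is the id and whose second is the class name):

```objectscript
    set oid=parent.Schemas.GetAt("Contact_1")   ; full object OID
    set id=$listget(oid,1)                      ; the %Id() of the instance
    set cls=$listget(oid,2)                     ; e.g. "DFI.Common.JSON.Contact"
    set contact=$classmethod(cls,"%OpenId",id)  ; open the instance dynamically
```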

The only way I can see to avoid having to uniquely identify each 'Schema' %DynamicObject in the JSON string is to give the Parent class a separate array for each schema type, i.e. an Array of Patient, an Array of Contact, and so on.

In terms of nesting, you will see that Patient has a Practitioner, and Practitioner is linked to a table of Practitioners; in the JSON above you can see that it picks up the Patient, the Practitioner, and the Practitioner details from the table of Practitioners.

I haven't tried importing the JSON, as I would have to remove all of the code that I put in the schema classes to generate values when a field is NULL, but that can be overcome by setting the attribute %JSONIGNORENULL to 0 and then making sure that you specify NULL for any property that has no value.
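Importing should then be the mirror operation; a minimal sketch using the %JSON.Adaptor methods (untested against the classes above):

```objectscript
    set parent=##class(DFI.Common.JSON.ParentClass).%New()
    set tSC=parent.%JSONImport(jsonString)   ; jsonString as produced by %JSONExportToString()
    if $$$ISOK(tSC) { set tSC=parent.%Save() }
```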

I would carry on experimenting but we are in the middle of a Power Cut (Thank you South African State Utility Company)

If you want to see the classes I wrote and play with them let me know and I'll email them as I can't upload them



I'm sure that the documentation will answer your question but if you want a short answer that explains the difference here you go.

Globals are defined as persisted sparse multidimensional arrays.

A multidimensional array basically means that you can create an array with multiple subscripts. Visually this is represented by a tree structure: you have a trunk that branches, each branch then branches, each of those branches can have many branches, and so on, until eventually, at the tips of the very outer branches, you have a leaf. This tree, however, can also have leaves at the points where a branch branches again.

The leaves in this analogy are your data pages i.e. a database block that contains some data.  In your application you want to go and fetch the data in those leaves.

But as in nature, some of those branches don't have leaves; it might be a new branch where a leaf is only beginning to develop. Likewise, some of the leaves at the join between a branch and the branches that sprout from it have fallen off, or for whatever reason a leaf never grew at that particular intersection.

So what is the most effective way of finding all of the leaves on the tree? Worse still, depending on how old that tree is we don't necessarily know how many branches on branches on branches... on branches there are.

So, if you only had $order and you were a hungry grasshopper 🦗, you would have to walk up the trunk, choose the furthest branch to the left and walk up it. Ah! Success, you find a leaf. You eat the leaf. You're still hungry, so you take the leftmost branch at the next level of branches, and eventually you reach the very top of the tree: the thinnest little twig, where you eat the little leaf that has just budded. Then you walk back down the twig, move one twig to the right, climb up, and eat the leaf at the end of that twig, and you repeat this process until you have processed every twig on that outermost branch. Then you move one branch to the right, climb every twig on that branch, and so on, and once you have traversed every single branch on every single branch you eventually get back to the trunk, where a very hungry sparrow has been watching your progress with interest. As soon as you stumble back down the trunk, tired, dusty, full to the brim with leaf 🍃, the sparrow makes his move and eats you.

Bummer. So much effort, so little reward, the amount of leaf you ate barely replenished the energy you used to traverse the tree 🌴.

Now, if you are a smart grasshopper with good eyesight, you remember you can hop; in fact you can hop like a grasshopper on steroids. So you bound up the trunk, scan the branches, spot a leaf within hopping distance, hop to it and eat it, and on you go, hopping from one leafy branch to the next, ignoring all of the branches that have no leaves.

Well, that's how $query works: $query returns the next set of subscripts in your sparse array that has some data stored at that subscript level.

Of course, the grasshopper was so pleased with himself and how well he hopped that he forgot to keep his eyes open for sparrows. The sparrow that had sat patiently watching the first grasshopper traverse the tree using $order has been quietly sitting (do sparrows sit or clutch?) watching you bound effortlessly from leafy branch to leafy branch, and when the grasshopper eventually gets back to the trunk the sparrow eats him too.

Moral of the story: I'd rather be a sparrow than a grasshopper and if I am a developer and I have a large sparse array I will use $query rather than $order.
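To make the difference concrete, here is a small sketch against a hypothetical sparse global ^Tree; $query hops straight from one data-bearing node to the next, whereas a $order walk would have to visit every intermediate subscript level:

```objectscript
    ; build a sparse global - only three nodes actually hold data
    set ^Tree(1,1,1)="leaf A"
    set ^Tree(1,3)="leaf B"
    set ^Tree(2,5,7,1)="leaf C"

    ; visit only the nodes that contain data
    set node=$query(^Tree(""))
    while node'="" {
        write !,node," = ",@node
        set node=$query(@node)
    }
```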


In general, the following concepts should be noted when working with REST.

When you create a Namespace Definition, unless you specify otherwise, a default Web Application will be defined. If your namespace is named "Application-DEV", then when you save the namespace definition a Web Application will be created in the form /csp/application-dev/.

Likewise, you are likely to have QC and PROD namespaces as well, and the default Web Applications will be /csp/application-qc/ and /csp/application-prod/.

This default Web Application is used by the "Management Portal" and "View Class Documentation".

If you design normal CSP Pages bound to a class in your application then the default Web Application will be used. You can customise this by specifying your own Login Page and a couple of other properties.

When it comes to REST, and specifically the REST Dispatcher, you will define one or more Web Applications, all of which point to the same namespace.

For example, if you have three namespaces, DEV, QC and PRD, and you have a Business Service and you want to channel HTTP requests based on 3 different user roles, then you define a Web Application for each role in each namespace.

Let's assume that the roles are User, Administrator and SuperUser; then you would create the following Web Applications:

/csp/dev/user, /csp/dev/administrator, /csp/dev/superuser (and likewise for /csp/qc/... and /csp/prd/...)

Generically, the format is /csp/{namespace_abbreviation}/{role}

When you define these Web Applications you need to have written a class that inherits from %CSP.REST

In your Interface Production you add three Business Services named "User Service", "Administrator Service" and "SuperUser Service". Every service production item has the same underlying Business Service class. In your Web Application definition there is a field called REST Dispatcher; you enter the name of your REST Dispatcher class there, and the rest of the form greys out.

In your REST Dispatcher class, let's call it REST.Dispatcher, there is an XData routing block that defines what to do when different HTTP methods are used to pass in your request. They are, very simply, POST, PUT, DELETE, GET and GET with parameters (essentially a search).

Let's assume that you are going to InsertData, UpdateData, DeleteData, FetchData and SearchData.

Then the XData route map would look something like this:

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/InsertData" Method="POST" Call="InsertDataMethod" />
<Route Url="/UpdateData/:id" Method="PUT" Call="UpdateDataMethod" />
<Route Url="/SearchData/:id" Method="GET" Call="SearchDataMethod" />
</Routes>
}

The methods are defined in the REST.Dispatcher class, and one of the variables available to you is %request.URL, which contains the full /csp/{ns}/{role}/ path; from this you can determine which production service name you want to pass the request to.

So your production item name turns out to be "Administrator Service" and is held in a variable tProductionItem.

You then execute the following line of code:

Set tSC = ##class(Ens.Director).CreateBusinessService(tProductionItem,.tService) if 'tSC quit

You then create your Ensemble request message (call it tMyAppRequest) based on data from the %request object: the %request.Headers list and %request.Data(tParam,tKey)=tValue. You $order through the parameters, then the keys; for each combination of param and key there will be a value, even if it is null. Bear in mind that in a URL you can specify a parameter name more than once, so it is best to build a $LIST, tParamList=$lb(tParam,tValue), insert the list into a property in your request message, e.g. MyApplicationRequest.Parameters.Insert(tParamList), and then move on to the next parameter.

Once your request message is constructed, you pass it to the instantiated Business Service as follows:

Set tSC = tService.ProcessInput(.tMyApplicationRequest,.tMyApplicationResponse) and that will invoke the OnProcessInput(tRequest, .tResponse) method of your Business Service.
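Putting those steps together, one of the dispatch methods might be sketched like this (the message class MyApp.Msg.Request, the service names, and the path test are all illustrative assumptions, not a definitive implementation):

```objectscript
ClassMethod UpdateDataMethod(pId As %String) As %Status
{
    set tSC=$$$OK
    try {
        ; choose the production item from the web application path
        set tProductionItem="User Service"
        if %request.URL["/administrator/" set tProductionItem="Administrator Service"
        if %request.URL["/superuser/" set tProductionItem="SuperUser Service"
        set tSC=##class(Ens.Director).CreateBusinessService(tProductionItem,.tService)
        quit:$$$ISERR(tSC)
        ; build the Ensemble request message from the HTTP request
        set tRequest=##class(MyApp.Msg.Request).%New()   ; hypothetical message class
        set tRequest.Id=pId
        set tParam=""
        for {
            set tParam=$order(%request.Data(tParam)) quit:tParam=""
            set tKey=""
            for {
                set tKey=$order(%request.Data(tParam,tKey)) quit:tKey=""
                do tRequest.Parameters.Insert($lb(tParam,%request.Data(tParam,tKey)))
            }
        }
        set tSC=tService.ProcessInput(tRequest,.tResponse)
    }
    catch ex { set tSC=ex.AsStatus() }
    quit tSC
}
```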

When it comes to CSP and REST, I suspect that similar logic is invoked to determine which Business Service in which namespace the request will be directed to.

CSP calls do not use the REST Dispatcher, though you can still instantiate the correct Business Service by name. I see no reason why CSP can't use the dispatcher, but I would need to dig deep into the documentation to make sure.



Use the %Stream.FileBinary class. A simple example is as follows:

Set stream=##class(%Stream.FileBinary).%New()

Set sc=stream.LinkToFile("c:\myfile.txt")

While 'stream.AtEnd {
    Set line=stream.Read()
    ; Process the chunk here
}

Typically you would read each chunk from the file into your object 'stream', and once you have reached the end of the file (AtEnd=1) you would then use the stream CopyFrom method to copy the stream object into your class property, e.g.

Property MyPicture As %Stream.GlobalBinary

and likewise you can copy the MyPicture stream into a new binary stream and then write that stream out to a file.

Of course, you can also just read from the file directly into your property MyPicture and write it out to another file or another stream object.
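A sketch of that flow, assuming a persistent class Company.Staff with a stream property (the class, property, and file names are illustrative):

```objectscript
    set staff=##class(Company.Staff).%OpenId(1)
    set file=##class(%Stream.FileBinary).%New()
    set sc=file.LinkToFile("c:\photos\staff1.jpg")
    do staff.StaffMemberPicture.CopyFrom(file)   ; copy the file contents into the property
    set sc=staff.%Save()                         ; stream data lands in ^Company.StaffS
```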

Strictly speaking, you don't need the intermediary step of reading the file contents into a separate binary stream; you can read it directly into your 'MyPicture' property. However, there may be cases where you want to analyse the stream object before writing it into your class.

When you %Save() your class that contains the property 'MyPicture', the stream data is written into the 'S' global in the Caché default storage architecture. That is to say, I might have a class "Company.Staff" where, for each staff member, apart from their names, addresses and so on, you may have indices on certain properties and you may have a property such as StaffMemberPicture As %Stream.FileBinary.

By the way, the IRIS documentation on LinkToFile() for binary streams warns that if the picture is edited outside of IRIS then the version of the picture stored in IRIS will differ from the edited picture external to IRIS; hence the reason why you would read it into an intermediary binary stream and then copy it into your class property. If you suspect that the external picture may have changed, then if you ever export the binary stream back to file you might want to write it to a different filename, so that you won't overwrite the edited photo; you can then compare the file sizes to see if there is a difference, which will tell you whether the original has been edited. Or that's how I interpreted the documentation.

When IRIS creates the storage definition the regular data fields go into the global ^Company.StaffD, the Indices are created in the global ^Company.StaffI and the stream data will be stored in ^Company.StaffS




DBeaver uses JDBC to connect to Caché, so each instance of DBeaver you have open and connected to Caché will consume a license. Each terminal connection, whether it be Caché Terminal or PuTTY, will also consume one license per connection. The error condition you see means the Caché license limit has been exceeded. If you have Caché Studio open, that will consume a license too, and so on.

From the Management Portal you can view the license information:

Management Portal -> System Administration -> Licensing -> License Key, and you will see this in the Community version of IRIS / IRIS for Health:

Current license key information for this system:

License Capacity:    InterSystems IRIS Community license
Customer Name:       InterSystems IRIS Community
Order Number:        54702
Expiration Date:     10/30/2021
Authorization Key:   8116600000500000500008000084345EF8F2473A5F13003
License Type:        Concurrent User
Platform:            IRIS Community
License Units:       5
Licensed Cores:      8
Authorized Cores:    8
Extended Features:   3A5F1300
- Interoperability
- BI User
- BI Development
- HealthShare
- Analytics Run
- Analytics Analyzer
- Analytics Architect
- HealthShare Foundation
- Analytics VR Execute
- Analytics VR Format
- Analytics VR Data Define
- InterSystems IRIS
Machine ID:

From the Management Portal -> System Operation -> License Usage you can see how many licenses are in use and by which user:


License Usage (last update: 2021-02-03 09:05:30.772)

Current license activity summary:

License Unit Use             Local   Distributed
Current License Units Used   1       Not connected to license server
Maximum License Units Used   2       Not connected to license server
License Units Authorized     5       Not connected to license server
Current Connections          1       Not connected to license server
Maximum Connections          2       Not connected to license server




Dimitry is correct in his reply that this is a memory issue. Every Caché connection or Ensemble production job, as well as all of the system processes, runs in an individual instance of cache.exe (or iris.exe in IRIS). Each of these processes is in effect an operating system process (or job), and when a new user process is created Caché allocates a certain amount of memory to it. That memory is divided into chunks: a chunk where the code being executed is stored, a chunk where system variables and other process information are stored, and a chunk used to store the variables created by the code being executed, whether a simple variable [variable1="123"] or a complex structure such as an object (which is basically a whole load of variables and arrays related together as the attributes of an object instance).

If you are using Caché Objects, then when you create variables or manipulate objects in a (class)method, those variables are killed when the method quits. Another feature of Caché Objects is that if you open an instance of a very large object with lots of properties, some of which are embedded objects, collections, streams and relationships, Caché does not load the entire object into memory; it loads the basic object, and only as you reference properties that are serial objects, collections and so on does Caché pull that data into your variable memory area. In normal situations you can generally create a lot of variables and open many objects and still have memory left over. However, there are a couple of things that can mess with this memory management:

1) Specifying variables used in a method as PUBLIC, which means that once they are created they remain in memory until you either kill them or use the NEW command on them.

2) Code that gets into a nested loop where, within each iteration, more variables are created and more objects are created or opened; eventually you run out of memory and a <STORE> error is generated.

I did a quick check to see where %SYS.BINDSRV is referenced, and there is one line of code in the %Library.RegisteredObject class, in a method called %BindExport, which calls a method in %SYS.BINDSRV. The documentation for %BindExport says the following:

/// This method is used by the Language Binding Engine to
/// send the whole object, and all objects it refers to,
/// to the client.

So my guess is that you have a Java, .NET or some other binding, and when %BindExport is called it tries to pass the contents of your object (and any directly linked objects) to the client, and that is filling up your variable memory and generating the <STORE> error.

I also see that the %Atelier class references %SYS.BINDSRV.

So, to investigate further: do you use Atelier, and/or are you using class bindings (Java, etc.)?

If you are, then something you are doing with Atelier or in your application is periodically trying to manipulate a lot of objects all at once and exhausting your process memory. You can increase the amount of memory allocated to Caché processes, but bear in mind that if you increase the process memory allocation then that setting will be applied to all Caché processes. I suspect there may be a way of giving a larger memory allocation to just one process, but I have no idea whether it is possible or how to do it.
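For what it's worth, there is a per-process special variable, $ZSTORAGE, that holds the maximum memory (in KB) available to the current process and can be set at runtime; a sketch (the values shown are illustrative):

```objectscript
    ; inspect and raise the memory ceiling for the current process only
    write $zstorage,!       ; current per-process limit in KB
    set $zstorage=524288    ; allow this process up to 512 MB
```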

It is quite likely that even if you increase the process memory it may not cure the problem, in which case I would suggest that you contact the WRC and log a call with them.