Ha ha ha, love the Zombies 🧟‍♂️. I have written little utilities based around ^$GLOBAL so that I can select a range of globals with a wildcard *{partial_global_name}*, plus an exclusion clause -*{partial_global_name}* that removes matching globals from the resultant selection. I then use the resultant array to perform actions such as exporting, deleting, or killing the index globals for a range of D (data) globals and then running %BuildIndices(). I also wrote a global traverse function that walks through a global, nesting down through sub-nodes to N levels, and then exports the resultant tree to file, using $ListToString for the $List forms of globals. This is obviously something that Management Portal -> Globals does, but that code is well hidden.

You could also specify an array of pseudo-node syntax (global.level1...,levelN) followed by a field number and either an expression or a classmethod call that would modify that value. The modified global(s) were then written to a new file which I could import. The import would kill the original globals and, during the import, convert the de-listed strings back into lists. It also allowed for fields to be inserted or removed, again with an expression or method call to populate the inserted field with a value. It was very useful when working with old-style globals, and for retro-populating existing data for classes that had been modified: where the data type of a field had changed, a required field had been added, or a property had been removed and the storage definition changed so that the global was out of sync with the new storage definition.

It was very useful in the days when we created globals with many levels, as we used to do years ago. These days you seldom design globals like that, especially if you want to use SQL on them.

But it was fun to work out the traverse algorithm using @gblref@() syntax and avoid stack overflows.
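For anyone curious, the core of such a traversal might look something like this (a minimal sketch; the class and method names are mine, not the original utility's):

ClassMethod Traverse(gblref As %String, maxDepth As %Integer, depth As %Integer = 1)
{
    Set sub = ""
    For {
        // walk the subscripts at this level using subscript indirection
        Set sub = $Order(@gblref@(sub)) Quit:sub=""
        Set node = $Name(@gblref@(sub))
        Write node,!
        // recurse one level down, bounded by maxDepth to avoid blowing the stack
        If depth < maxDepth Do ..Traverse(node, maxDepth, depth + 1)
    }
}

Called as, for example, Do ##class(MyUtil.Globals).Traverse("^MyGlobal", 5).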

It could also be the URL of the Patient resource. In the REST Dispatcher's XData routing map block, check the URL syntax:

Normally, for FHIR REST Dispatchers, the routes will look like this:

<Route Url="/Patient" Method="POST" Call="PatientHandler"/>

<Route Url="/Patient/:id" Method="PUT" Call="PatientHandler"/>

In IRIS for Health, the HS.FHIRServer.resthandler XData block looks like this:

XData UrlMap
{
<Routes>
<Route Url="/(.*)" Method="GET" Call="processRequest"/>
<Route Url="/(.*)" Method="POST" Call="processRequest"/>
<Route Url="/(.*)" Method="PUT" Call="processRequest"/>
<Route Url="/(.*)" Method="DELETE" Call="processRequest"/>
<Route Url="/(.*)" Method="PATCH" Call="processRequest"/>
<Route Url="/(.*)" Method="HEAD" Call="processRequest"/>
</Routes>
}

processRequest will see that it is a Bundle and ultimately calls the class HS.FHIRServer.DefaultBundleProcessor. The Bundle processor analyzes the entries in the Bundle to check whether there are any dependencies between entries whose request method is POST. There is a comment in the code that says:

// Transaction Processing:
// 1. Map 'fullUrl' -> entry# for each POST entry
// 2. Visit all references, capture:
//      References that are "search urls"
//      References that target one of the 'POST' entry fullUrl values
// 3. Resolve the search urls to resource references
// 4. Resolve references for each POST result
// 5. Execute other operations and return result.

Further on, you find the following code:

if isTransaction {
    if (reqObj.method = "POST") {
        // Grab the UUID if the fullUrl is specified
        Set referenceKey = ..ExtractUUID(entryObj.fullUrl)
        ...

It uses this for the response, where it returns the 'Location' header, and the response Bundle will specify entry.response.fullUrl.

In your example you have specified the 'url' as "Patient".

In all of the rest of the code I searched through, I could not find any code that checks this value for a leading "/".

Ultimately each POST is then dispatched to the FHIRService, which performs the POST (Interaction="Insert") and sends back the response status, which will potentially be converted into an OperationOutcome.

It's just a hunch, and I suspect that adding the "/" after R4 might have the same effect, but try putting a "/" into the url, i.e. "url" : "/Patient"

Or you could use the property 'fullUrl' instead of 'url' and see if that makes a difference.
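For reference, a transaction Bundle entry with both properties set would look roughly like this (a sketch, assuming the standard FHIR Bundle entry layout; the resource body is omitted):

{
  "fullUrl": "urn:uuid:61ebe359-bfdc-4613-8bf2-c5e300945f0a",
  "resource": { ... },
  "request": {
    "method": "POST",
    "url": "/Patient"
  }
}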

Yours

Nigel  

That is very neat. I have a situation at the moment where I want to force a single object in a class, and I use %OnBeforeSave() to check that the field that I have used as PK, IDKey and Unique has a specific value. Though this is not quite the same as a one:one relationship, it is similar in the sense that I am trying to do something that is not specifically addressed in the documentation or through a class attribute.
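For what it's worth, the check looks roughly like this (a sketch; the property name and the permitted value are made up for illustration):

Method %OnBeforeSave(insert As %Boolean) As %Status [ Private, ServerOnly = 1 ]
{
    // InstanceKey is the PK/IDKey/Unique property; only one value is allowed,
    // which effectively forces a single object in the class
    If ..InstanceKey '= "ONLY" Quit $$$ERROR($$$GeneralError, "Only one instance of this class is permitted")
    Quit $$$OK
}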

In general, the following concepts should be noted when working with REST.

When you create a Namespace definition, unless you specify otherwise, a default Web Application will be defined. If your namespace is named "Application-DEV", then when you save the Namespace definition a Web Application will be created in the form /csp/application-dev/.

Likewise, you are likely to have QC and PROD namespaces as well, and the default Web Applications will be /csp/application-qc/ and /csp/application-prod/.

This default Web Application is used by the "Management Portal" and "View Class Documentation".

If you design normal CSP pages bound to a class in your application, then the default Web Application will be used. You can customise this by specifying your own login page and a couple of other properties.

When it comes to REST, and specifically the REST Dispatcher, you will define one or more Web Applications, all of which point to the same namespace.

For example, if you have three namespaces, DEV, QC and PRD, and you have a Business Service and want to channel HTTP requests based on three different user roles, then you define a Web Application for each role in each namespace.

Let's assume that the roles are User, Administrator and SuperUser; then you would create the following Web Applications:

/csp/dev/user/

/csp/dev/administrator/

/csp/dev/superuser/

Generically, the format is /csp/{namespace_abbreviation}/{role}/

When you define these Web Applications you need to have written a class that inherits from %CSP.REST.

In your Interface Production you add three Business Services named "User Service", "Administrator Service" and "SuperUser Service". Every service production item has the same underlying Business Service class. In your Web Application definition there is a field called REST Dispatcher; you enter the name of your REST Dispatcher class there, and the rest of the form greys out.

In your REST Dispatcher class, let's call it REST.Dispatcher, there is an XData routing block that defines what to do when different HTTP methods are used to pass in your request. They are, very simply, POST, PUT, DELETE, GET and GET with parameters (essentially a search).

Let's assume that you are going to InsertData, UpdateData, DeleteData, FetchData and SearchData.

Then the XData route maps would look something like this:

<Route URL="/InsertData" Method="POST" Call "InsertDataMethod" />

<Route URL="/UpdateData/:id" Method="PUT" call "UpdateDataMethod" />

<Route URL="/SearchData/:id" Method="GET" call "SearchDataMethod" />

The methods are defined in the REST.Dispatcher class, and one of the variables available to you is %request.URL, which holds the full /csp/{ns}/{role}/... URL; from this you can determine which production service name you want to pass the request to.
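A sketch of that determination (the $Piece position depends on your exact URL layout):

// /csp/{ns}/{role}/... -> "user", "administrator" or "superuser"
Set tRole = $Piece(%request.URL, "/", 4)
Set tProductionItem = $Case(tRole, "user":"User Service", "administrator":"Administrator Service", "superuser":"SuperUser Service", :"")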

So your production item name turns out to be "Administrator Service" and is held in a variable tProductionItem.

You then execute the following line of code:

Set tSC = ##class(Ens.Director).CreateBusinessService(tProductionItem,.tService) If 'tSC Quit

You then create your Ensemble request message (call it tMyAppRequest) based on data from the %request object, the %request.Headers list, and %request.Data(tParam, tKey)=tValue. You $Order through the parameters, and then through the keys; for each combination of param and key there will be a value, even if it is null. Bear in mind that in a URL you can specify a parameter name more than once, so it is best to build a $List, tParamList=tParamList_$lb(tValue, tParam), insert the list into a property of your message, e.g. tMyAppRequest.Parameters.Insert(tParamList), and then move on to the next parameter.
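A minimal sketch of that double $Order loop (names hypothetical, assuming Parameters is a list collection on the request message):

Set tParam = ""
For {
    Set tParam = $Order(%request.Data(tParam)) Quit:tParam=""
    Set (tKey, tParamList) = ""
    For {
        Set tKey = $Order(%request.Data(tParam, tKey)) Quit:tKey=""
        // one $lb(value, name) pair per occurrence of the parameter
        Set tParamList = tParamList_$ListBuild($Get(%request.Data(tParam, tKey)), tParam)
    }
    Do tMyAppRequest.Parameters.Insert(tParamList)
}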

Once your Request message is constructed you pass the request message to the Instantiated Business Service as follows:

Set tSC = tService.ProcessInput(.tMyApplicationRequest,.tMyApplicationResponse), and that will invoke the OnProcessInput(tRequest, .tResponse) method of your Business Service.

When it comes to CSP and REST, I suspect similar logic is invoked to determine which Business Service, in which namespace, the request will be directed to.

Plain CSP calls do not use the REST Dispatcher, but you can still instantiate the correct Business Service by name; I see no reason why that can't work, though I would need to dig deep into the documentation to make sure.

Nigel

That is why we differentiate between GlobalCharacter streams for plain text and binary streams for binary data such as photos. The reason for this is that binary data can contain characters that IRIS might interpret as terminators, or that have other undesirable effects, and so we handle the content differently.

Character streams are essentially limited to text, the first 128 ASCII characters excluding the control characters, whereas a binary stream can hold all 256 possible byte values without worrying about control characters and the like.

You can of course convert the binary data into Base64 encoding, which results in a pure character stream without control characters, and then use a character stream. When you want to export the data again you would have to Base64-decode it to get back to pure binary (I might have got my encoding and decoding in the wrong order, but the documentation will assist you with that).
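The encode/decode pair is in $System.Encryption, e.g. (a sketch; for large streams you would do this chunk by chunk, keeping each chunk a multiple of 3 bytes on the way in so the Base64 padding stays at the end):

// binary -> text-safe Base64 on the way into the character stream
Set tEncoded = $System.Encryption.Base64Encode(tBinaryChunk)
// Base64 -> original binary on the way back out
Set tBinary = $System.Encryption.Base64Decode(tEncoded)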

Nigel

Hi

Use the %Stream.FileBinary class. A simple example is as follows:

Set stream=##class(%Stream.FileBinary).%New()
Set sc=stream.LinkToFile("c:\myfile.txt")
While 'stream.AtEnd {
    Set line=stream.Read()
    ; Process the chunk here
}

Typically you would read each chunk from the file into your object 'stream', and once you have reached the end of the file (AtEnd=1) you would use the stream CopyFrom() method to copy the stream object into your class property, e.g.

Property MyPicture As %Stream.GlobalBinary

and likewise you can copy the MyPicture stream into a new binary stream and then write that stream out to file.

Of course you can also just read from file directly into your Property MyPicture and write it out to another file or another stream object.

You don't, strictly speaking, need the intermediary step of reading the file contents into a separate stream object; you can read the file directly into your 'MyPicture' property. However, there may be cases where you want to analyse the stream object before writing it into your class.
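Putting the pieces together, the flow might look like this (a sketch, using the Company.Staff example below and CopyFrom(); in real code you would check each returned status):

Set staff = ##class(Company.Staff).%New()
Set file = ##class(%Stream.FileBinary).%New()
Set tSC = file.LinkToFile("c:\myfile.jpg")
// copy the linked file stream into the object's stream property, then save
Set tSC = staff.MyPicture.CopyFrom(file)
Set tSC = staff.%Save()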

When you %Save() an object of the class that contains the property 'MyPicture', the stream data is written into the 'S' global in the Cache default storage architecture. That is to say, if I have a class "Company.Staff", then apart from each staff member's name, address and so on, you may have indices on certain properties, and you may have a property such as "StaffMemberPicture As %Stream.GlobalBinary".

By the way, the IRIS documentation on LinkToFile() for binary streams warns that if the picture is edited outside of IRIS, then the version of the picture stored in IRIS will differ from the edited picture external to IRIS. Hence the reason why you would read it into an intermediary binary stream and then copy it into your class property. If you suspect that the external picture may have changed, then if you ever export the binary stream back to file you might want to write it to a different filename, so that you won't overwrite the edited photo; you can then compare the file sizes to see if there is a difference, which will tell you whether the original has been edited. Or that's how I interpreted the documentation.

When IRIS creates the storage definition, the regular data fields go into the global ^Company.StaffD, the indices are created in the global ^Company.StaffI, and the stream data is stored in ^Company.StaffS.

Yours

Nigel

Hi John, Dmitriy and Tani

Please can you point me to the instructions on how to invoke the IRIS Terminal from VS Code. I am busy creating documents that are collections of comments/replies/examples from the DC on a particular topic, e.g. terminal options from within VS Code.

I am also creating documents that link all VS Code extensions, examples and so on into a common reference index, to make it easier for a developer who is new to VS Code to have a single source listing all DC or Open Exchange articles/questions and solutions, and to use the hyperlinks in the document to pull some or all of the contents to their laptop for installation/evaluation and, probably, use.

So any documents or examples that you have created and are willing to share with me will be greatly appreciated.

Once the document(s) are complete I will post them to DC.

Yours

Nigel

Hi Robert

I noticed your Adopted Bitmap post but didn't read it; I shall do so now.

If it works for me I will factor it into my code and test it, and if I like it I will flag your answer as accepted, unless someone manages to come up with a way of calling a classmethod in another namespace. I suspect it is not possible, as there are issues such as where the objects would be instantiated and whether the source namespace would be able to access them, and I suspect concurrency could be an issue if you are creating objects in the same class but in different namespaces. It would be a nightmare to keep track of who has which objects open.

Nigel

Hi

I have developed a Debug Logging system that is not quite as expansive as this version. The Debug Log class has the following properties:

Classname

Username

CreatedTS

Key

Message

It is called with one macro

#define DebugLog(%s1,%s2,%s3,%s4)

where %s1 = Username (defaults to $username)

%s2 is the Key which could be a method name or some other meaningful way to locate logs from a certain area in your code

%s3 is the Message, which is a string of human-readable text

%s4 is a %Status returned by reference (in case the code has crashed)
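For illustration, the macro expands to a single call on the logging class, something like this (the class and method names here are hypothetical):

#define DebugLog(%s1,%s2,%s3,%s4) Do ##class(MyApp.DebugLog).Log(%s1,%s2,%s3,.%s4)

and in application code:

$$$DebugLog($Username,"ImportFile","Starting import",.tSC)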

I use it in all of my developments

There is a Configuration class as well that basically determines whether logging is turned on or off depending on whether you are working in DEV or PRODUCTION.

The class comes with a number of methods including a Purge method that will purge all logs older than a certain number of days.

If anyone is interested in this please let me know

Nigel

Hi

DBeaver uses JDBC to connect to Cache, so each instance of DBeaver you have open and connected to Cache will consume a license. Each terminal connection, whether it be Cache Terminal or PuTTY, will also consume one license per connection. The error condition you see means the Cache license limit has been exceeded. If you have Cache Studio open, that will consume a license, and so on.

From the Management Portal you can view the license information:

Management Portal -> System Administration -> Licensing -> License Key. You will see something like this in the Community version of IRIS/IRIS for Health:

Current license key information for this system:

License Capacity: InterSystems IRIS Community license
Customer Name: InterSystems IRIS Community
Order Number: 54702
Expiration Date: 10/30/2021
Authorization Key: 8116600000500000500008000084345EF8F2473A5F13003
Product=Server
License Type=Concurrent User
Server=Single
Platform=IRIS Community
License Units=5
Licensed Cores=8
Authorized Cores=8
Extended Features=3A5F1300
- Interoperability
- BI User
- BI Development
- HealthShare
- Analytics Run
- Analytics Analyzer
- Analytics Architect
- NLP
- HealthShare Foundation
- Analytics VR Execute
- Analytics VR Format
- Analytics VR Data Define
- InterSystems IRIS
Non-Production
Machine ID

From Management Portal -> System Operation -> License Usage you can see how many licenses are in use and by which user:
Current license activity summary:

License Unit Use             Local   Distributed
Current License Units Used   1       Not connected to license server
Maximum License Units Used   2       Not connected to license server
License Units Authorized     5       Not connected to license server
Current Connections          1       Not connected to license server
Maximum Connections          2       Not connected to license server

Yours

Nigel 

Hi

One question I meant to ask: are you building local arrays to store temporary or similar data, and are they growing too large? That can quickly eat up all your memory if they are very large. It would be better to use ^CacheTemp($j,...), or to replace $j with %session.SessionId if you are using a web service or REST ($j is not reliable there, as $j will be the PID of whichever process handles your CSP request). In fact there are various ways of creating temporary globals which reside on disk rather than in your local variable partition. My previous reply focused on where the error was being reported, but it occurred to me that there could be something else the application code is doing that is eating up your memory, effectively leaving none for your BINDSRV. But my gut instinct is that it is an issue with the objects being passed into the %BindExport method, which, if I remember correctly, is passed $this, the current object instance.
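For example (a sketch; the subscript layout is yours to choose):

// instead of a large local array eating the variable partition...
Set tRows(i) = tValue
// ...stage the rows in a temp global, which lives in the CACHETEMP database on disk and is not journalled
Set ^CacheTemp($Job, "MyApp", i) = tValue
// ...or use a process-private global, killed automatically when the process ends
Set ^||MyAppRows(i) = tValue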

Hi

There are a couple of options. You can install PuTTY, which allows you to connect to IRIS and behaves exactly like Terminal, but I suspect you will run into similar size constraints.

The other option, and the one that I like and use all the time, is called Web Terminal and is available on the Open Exchange.

Essentially, once you have downloaded the Web Terminal XML file and imported it into Cache/Ensemble/IRIS (it doesn't matter which namespace), you can open your browser and type in http://server:port/terminal/ and you will be presented with a login popup where you enter your Cache credentials. You then have a fully functional terminal running in your browser, in colour, with IntelliSense, so as you start typing class names it will auto-suggest the rest of the class/property/method names. You can also run an SQL shell.

Just one word of advice: on very rare occasions the browser tab freezes, and if refreshing the page does not resolve the problem and you end up killing the tab or the browser, then it is possible that the process created on your Cache server for the terminal connection is left orphaned. I have yet to work out a way of reconnecting to that process ID (PID), and you may have to kill the process on the server.

Apart from that, it is a great alternative to Terminal; I have been using it for some years now and almost never use Cache Terminal or PuTTY.

Nigel

If you have a Business Service that is acting as a web service server, then every client connection will result in the server process spawning a sub-process to deal with the incoming client request. The server will also detect where the request has come from (i.e. the client IP address and other CGI variables), and therefore will be able to detect whether the requests are coming from one client or many. If the requests are coming from one client, then you will be able to manipulate %session to a certain extent, and that client will be able to have a number of simultaneous connections to the server yet consume one license. However, if there are multiple clients connecting simultaneously, then Cache will treat them as individual 'users', each of whom will consume a license; depending on the license key you have installed/purchased, this may result in you running out of licenses. Even if you were able to successfully kill your %session (where each session is viewed as a 'user'), there is a grace period of several minutes before the license is released back into the available license pool.

The reason the system works like this is to prevent applications from supporting more users than the T&Cs of your license agreement with InterSystems allow. If this functionality didn't exist, then developers would be able to build applications supporting many users on a single-user license, and that would not be good business practice for ISC, who license their software based on the number of users the application is going to support.

Nigel

Hi

Dimitry is correct in his reply that this is a memory issue. Every Cache connection or Ensemble production class, as well as all of the system processes, runs in an individual instance of cache.exe or iris.exe (in IRIS). Each of these processes is in effect an operating system process (or job), and when a new user process is created, Cache allocates a certain amount of memory to it. The memory is divided into chunks: a chunk where the code being executed is stored, a chunk where system variables and other process information are stored, and a chunk used to store the variables created by the code being executed, whether a simple variable [variable1="123"] or a complex structure such as an object (which is basically a whole load of variables and arrays related together as the attributes of an object instance).

If you are using Cache Objects, then the variables and objects you create in a (class)method are killed when the method quits. Another feature of Cache Objects is that if you open an instance of a very large object with lots of properties, some of which are embedded objects, collections, streams and relationships, Cache does not load the entire object into memory; it loads the basic object, and only as you reference properties that are serial objects, collections and so on does Cache pull that data into your variable memory area. In normal situations you can, generally speaking, create a lot of variables and open many objects and still have memory left over. However, there are a couple of things that can mess with this memory management:

1) Specifying variables used in a method as PUBLIC, which means that once they are created they remain in memory until you either kill them or use the NEW command on them.

2) Writing code that gets into a nested loop, where within each loop more variables are created and more objects are created or opened, until eventually you run out of memory and a <STORE> error is generated.

I did a quick check to see where %SYS.BINDSRV is referenced, and there is one line of code in the %Library.RegisteredObject class, in a method called %BindExport, which calls a method in %SYS.BINDSRV. The documentation for %BindExport says the following:

/// This method is used by Language Binding Engine to
/// send the whole object and all objects it referes to
/// to the client.

So my guess is that you have a Java, .NET or some other binding, and when %BindExport is called it is trying to pass the contents of your object (and any directly linked objects) to the client, and that is filling up your variable memory and generating the <STORE> error.

I also see that the %Atelier class references %SYS.BINDSRV.

So, to investigate further: do you use Atelier, and/or are you using class bindings (Java etc.)?

If you are, then something you are doing with Atelier, or in your application, is periodically trying to manipulate a lot of objects all at once and exhausting your process memory. You can increase the amount of memory allocated to Cache processes, but bear in mind that if you increase the process memory allocation, that setting will be applied to all Cache processes. I suspect there may be a way of creating a Cache process with a larger memory allocation for just that process.
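In fact, the $ZSTORAGE special variable should do exactly that (worth verifying against the documentation for your version): it holds the per-process memory limit in KB and can be set for the current process only:

Write $ZSTORAGE         ; current per-process limit in KB
Set $ZSTORAGE = 262144  ; raise the limit for this process only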

It is quite likely that even if you increase the process memory it may not cure the problem, in which case I would suggest that you contact the WRC and log a call with them.

Nigel

Hi

There is an application, DBeaver, that will connect to your Cache/Ensemble/IRIS database and generate a visual representation of your schema. I haven't tried creating tables in DBeaver to see whether it will generate a valid Cache table/class definition, but I suspect that it will generate classes in the same way as the SQL Server import utility, which executes an SQL CREATE TABLE script and generates Cache classes/tables. Likewise, you can use the SQL Gateway to connect to a SQL database: you can either create class definitions in Cache that connect to the tables in SQL Server and leave the data in SQL Server, or you can opt to create the table definitions in Cache and pull the data from SQL Server into Cache, removing SQL Server altogether.

In the system configuration, and in the $SYSTEM.SQL class, you can control how Cache classes/tables are created from an SQL script or through the SQL Gateway, including how to handle integers and the support for NULL vs empty string.

I also found a couple of Android apps, called DB Scheme and Database, that support SQL script generation for about 8 common relational database technologies (ISC is not one of them, but then we don't really push ourselves as a pure relational database, as we are primarily an object-orientated technology with excellent relational functionality and relational connectivity). The apps allow you to design tables, including relationships; you then select the SQL database from the list of common relational technologies and it will generate an SQL CREATE TABLE script. It also supports UPDATE and DELETE, and may support other functionality as well, but I basically played with it for 10 minutes, built a simple table, generated the SQL Server SQL script, ran it through Ensemble, and it created a valid Cache table/class.

There is a Cache-based UML application that you can find on the Open Exchange. Download the zip file and import the classes; the documentation will tell you how to use the application. I like it because it is based on Cache classes and tables and therefore gives you a far more realistic view of your Cache classes/tables, including all those nice weird things like parent-child relationships, streams, and certain Cache data types that we are so accustomed to using in Cache but that are just not supported in other relational databases. The only word of caution is that if you select a schema with loads of tables in it, it can take ages to render the display. I have attached a screenshot of the UML Class Explorer.

Nigel