Hi

I should have included the class definition for Parent

Include DFIInclude

Class DFI.Common.JSON.ParentClass Extends (%Persistent, %JSON.Adaptor, %XML.Adaptor, %ZEN.DataModel.Adaptor)
{

Property ParentId As %String(%JSONFIELDNAME = "parentId", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {"Parent"_$i(^Parent)} ];

Property ParentName As %String(%JSONFIELDNAME = "parentName", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {..ParentName()} ];

Property Schemas As %ArrayOfObjectsWithClassName(%JSONFIELDNAME = "schemas", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT", CLASSNAME = 2, ELEMENTQUALIFIED = 1, REFELEMENTQUALIFIED = 1);

ClassMethod ParentName() As %String
{
    quit "Example: "_$i(^Example)
}

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(12)
            for j=1:1:count {
                set k=$r(4)+1
                set schema=$classmethod(array(k),"%New"),tSC=schema.%Save() quit:'tSC
                do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
            }
            quit:'tSC
            set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex {set tSC=ex.AsStatus()}
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

}

Nigel

Hi

I believe that I have a solution for this.

I worked on the basis that there is a 'Parent' object that has a property Schemas of type %ArrayOfObjectsWithClassName. This allows you to create an array of objects where the 'key' is the schema name and the 'id' is the instance's %Oid().

I then defined 4 classes:

Reference, Contact, Patient, Practitioner
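
To give an idea of their shape, here is a minimal sketch of what one of these schema classes might look like (the property names are taken from the JSON output further down; the rest is an assumption):

Class DFI.Common.JSON.Contact Extends (%Persistent, %JSON.Adaptor)
{

Property ContactGivenName As %String(%JSONFIELDNAME = "contactGivenName", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT");

Property ContactSurname As %String(%JSONFIELDNAME = "contactSurname", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT");

}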

I then created a method to Build N instances of the ParentClass. That code reads as follows:

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(10)
            for j=1:1:count {
                set k=$r(4)+1
                set schema=$classmethod(array(k),"%New"),tSC=schema.%Save() quit:'tSC
                do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
            }
            quit:'tSC
            set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex {set tSC=ex.AsStatus()}
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

Initially I wanted to see if I could (a) insert different object types into the array and (b) export the Parent object to JSON, so to make life easier I specified [ InitialExpression = {some expression} ] to generate a value for each field, rather like %Populate would, because I didn't want to pre-create instances in the 4 schema tables and then manually go and link them together.

When I ran my method to create 10 Parents, it created them, and as you can see in the logic I generate a random number of schemas for each one.

That all worked, and I then exported the object to a JSON string, resulting in this:

{"%seriesCount":"1","parentId":"Parent36","parentName":"Example: 38","schemas":{"Contact_1":{"%seriesCount":"1","contactGivenName":"Zeke","contactSurname":"Zucherro"},"Contact_11":{"%seriesCount":"1","contactGivenName":"Mark","contactSurname":"Nagel"},"Contact_3":{"%seriesCount":"1","contactGivenName":"Brendan","contactSurname":"King"},"Contact_8":{"%seriesCount":"1","contactGivenName":"George","contactSurname":"O'Brien"},"Patient_10":{"%seriesCount":"1","patientId":"PAT-000-251","patientDateOfBirth":"2021-05-05T03:38:33Z"},"Patient_2":{"%seriesCount":"1","patientId":"PAT-000-401","patientDateOfBirth":"2017-09-30T21:56:00Z"},"Patient_4":{"%seriesCount":"1","patientId":"PAT-000-305","patientDateOfBirth":"2019-04-19T14:04:11Z"},"Patient_5":{"%seriesCount":"1","patientId":"PAT-000-366","patientDateOfBirth":"2017-07-03T18:57:58Z"},"Patient_7":{"%seriesCount":"1","patientId":"PAT-000-50","patientDateOfBirth":"2016-11-26T03:39:36Z"},"Patient_9":{"%seriesCount":"1","patientId":"PAT-000-874","patientDateOfBirth":"2019-03-28T15:22:37Z"},"Practitioner_6":{"%seriesCount":"1","practitionerId":{"%seriesCount":"1","practitionerId":"PR0089","practitionerTitle":"Dr.","practitionerGivenName":"Angela","practitionerSurname":"Noodleman","practitionerSpeciality":"GP"},"practitionerIsActive":false}}}

Because this is effectively an array of objects, the array is subscripted by 'key', so if there were multiple instances of, say, "Patient", each instance of "Patient" would overwrite the existing "Patient" in the array; that is why, in creating the array, I concatenated the counter 'j' to the schema name.

In object terms, if you open an instance of ParentClass and use the GetAt('key') method on the Schemas array, you will get back a full object %Oid(), and from that you can extract the class name and the %Id().
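
An %Oid() is just a $List of the Id and the class name, so unpacking it looks something like this (a sketch; the instance Id and the array key are assumptions):

Set parent=##class(DFI.Common.JSON.ParentClass).%OpenId(1)
Set oid=parent.Schemas.GetObjectAt("Contact_1")   ; the full %Oid() stored by SetObjectAt
Set id=$ListGet(oid,1),className=$ListGet(oid,2)
Write !,"Class: ",className,"  Id: ",id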

The only way I can see around having to uniquely identify each 'Schema' %DynamicObject in the JSON string is for the Parent class to have an array for each schema type, i.e. an Array of Patient, an Array of Contact, and so on.

In terms of nesting, you will see that Patient has a Practitioner, and Practitioner is linked to a table of Practitioners; in the JSON above you can see that it picks up the Patient, the Practitioner and the Practitioner details from the Practitioners table.

I haven't tried importing the JSON, as I would have to remove all of the code I put into the schema classes to generate values when a field is NULL, but that can be overcome by setting the parameter %JSONIGNORENULL to 0 and then making sure that you specify NULL for any property that has no value.
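
On one of the schema properties that would look something like:

Property ContactSurname As %String(%JSONFIELDNAME = "contactSurname", %JSONIGNORENULL = 0, %JSONINCLUDE = "INOUT");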

I would carry on experimenting but we are in the middle of a Power Cut (Thank you South African State Utility Company)

If you want to see the classes I wrote and play with them let me know and I'll email them as I can't upload them

Nigel

Hi

I'm sure that the documentation will answer your question but if you want a short answer that explains the difference here you go.

Globals are defined as persisted sparse multidimensional arrays.

A multidimensional array basically means that you can create an array with multiple subscripts. Visually this is represented by a tree structure: you have a trunk that branches, each branch then branches, each of those branches can have many branches, and so on and so on, until eventually, at the tips of the very outer branches, you have a leaf. This tree, however, can also have leaves at the points where a branch branches again.

The leaves in this analogy are your data pages, i.e. database blocks that contain some data. In your application you want to go and fetch the data in those leaves.

But as in nature, some of those branches don't have leaves: it might be a new branch where a leaf is only beginning to develop; likewise, some of the leaves at the join between a branch and the branches that sprout from it have fallen off, or for whatever reason a leaf never grew at that particular intersection.

So what is the most effective way of finding all of the leaves on the tree? Worse still, depending on how old that tree is we don't necessarily know how many branches on branches on branches... on branches there are.

So, if you only had $order and you were a hungry grasshopper 🦗, you would have to walk up the trunk, choose the furthest branch to the left and walk up it. Ah! Success, you find a leaf. You eat the leaf. You're still hungry, so you take the furthest branch on the left of the next level of branches, and eventually you reach the very top of the tree: the thinnest little twig, and you eat the little leaf that has just budded on it. Then you walk back down the twig, move one twig to the right, climb up, and eat the leaf at the end of that twig, and you repeat this process until you have processed every twig on that outermost branch. Now you move one branch to the right and up you climb, working through every twig on that branch, and so on, and once you have traversed every single branch on every single branch you eventually get back to the trunk, where a very hungry sparrow has been watching your progress with interest. As soon as you stumble back down the trunk, tired, dusty, full to the brim with leaf 🍃, the sparrow makes his move and eats you.

Bummer. So much effort, so little reward, the amount of leaf you ate barely replenished the energy you used to traverse the tree 🌴.

Now, if you are a smart grasshopper and you have good eyesight, you remember you can hop; in fact you can hop like a grasshopper on steroids. So you bound up the trunk, you scan the branches, you spot a leaf within hopping distance, and you hop to the leaf and eat it, and on you go, hopping from one leafy branch to the next, ignoring all of the branches that have no leaves.

Well, that's how $query works: $query returns the next set of subscripts in your sparse array that actually has data stored at it, at whatever subscript level.
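
As a minimal sketch, with a made-up global:

Set ^Tree(1,1)="leaf A"
Set ^Tree(1,3,2)="leaf B"
Set ^Tree(5)="leaf C"

; $Order walks sibling subscripts one level at a time,
; but $Query jumps straight to the next node that holds data
Set node=$Query(^Tree(""))
While node'="" {
    Write !,node," = ",@node
    Set node=$Query(@node)
}

; output:
; ^Tree(1,1) = leaf A
; ^Tree(1,3,2) = leaf B
; ^Tree(5) = leaf C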

Of course, the grasshopper was so pleased with himself and how well he hopped that he forgot to keep his eyes open for sparrows, and that sparrow that had sat patiently watching the first grasshopper traverse the tree using $order has been quietly sitting (do sparrows sit or clutch?) watching you bound effortlessly from leafy branch to leafy branch, and when the grasshopper eventually gets back to the trunk the sparrow eats him too.

Moral of the story: I'd rather be a sparrow than a grasshopper and if I am a developer and I have a large sparse array I will use $query rather than $order.

Nigel

I have found the answer. ZPM will only run on IRIS. So I will load it into my IRIS installation instead. 

Hi George

It is a top-level folder on my laptop, and no, I haven't exported all of the classes. That I shall now do, and confirm that it works; I will have an answer by tomorrow. I'm busy deploying an application at a customer site but wanted to acknowledge that I have read your reply.

Thanks

Nigel

In general, the following concepts should be noted when working with REST.

When you create a Namespace definition, unless you specify otherwise, a default Web Application will be defined. If your namespace is named "Application-DEV", then when you save the namespace definition a Web Application will be created in the form /csp/application-dev/.

Likewise, you are likely to have QC and PROD namespaces as well, and the default Web Applications will be /csp/application-qc/ and /csp/application-prod/.

This default Web Application is used by the "Management Portal" and "View Class Documentation".

If you design normal CSP pages bound to a class in your application, this default Web Application will be used. You can customise it by specifying your own login page and a couple of other properties.

When it comes to REST, and specifically the REST Dispatcher, you will define one or more Web Applications, all of which point to the same namespace.

For example, if you have three namespaces, DEV, QC and PRD, and you have a Business Service and want to channel HTTP requests based on 3 different user Roles, then for each namespace you define a Web Application for each Role.

Let's assume that the roles are User, Administrator and SuperUser; then you would create the following Web Applications:

/csp/dev/user/

/csp/dev/administrator/

/csp/dev/superuser/

Generically, the format is /csp/{namespace_abbreviation}/{role}.

When you define these Web Applications you need to have written a class that inherits from %CSP.REST

In your Interface Production you add three Business Services named "User Service", "Administrator Service" and "SuperUser Service"; every one of these production items has the same underlying Business Service class. In your Web Application definition there is a field called REST Dispatcher: you enter the name of your REST Dispatcher class there, and the rest of the form greys out.

In your REST Dispatcher class, let's call it REST.Dispatcher, there is an XData routing block that defines what to do when different HTTP methods are used to pass in your request. These are, very simply, POST, PUT, DELETE, GET and GET with parameters (essentially a search).

Let's assume that you are going to InsertData, UpdateData, DeleteData, FetchData and SearchData.

Then the XData route maps would look something like this:

<Route URL="/InsertData" Method="POST" Call "InsertDataMethod" />

<Route URL="/UpdateData/:id" Method="PUT" call "UpdateDataMethod" />

<Route URL="/SearchData/:id" Method="GET" call "SearchDataMethod" />

The methods are defined in the REST.Dispatcher class, and one of the variables available to you is %request.URL, which holds the full /csp/{ns}/{role}/ path; from this you can determine which production service name you want to pass the request to.

So your production item name turns out to be "Administrator Service" and is held in a variable, tProductionItem.

You then execute the following line of code:

Set tSC = ##class(Ens.Director).CreateBusinessService(tProductionItem,.tService) if 'tSC quit

You then create your Ensemble request message (call it tMyAppRequest) based on data from the %request object, the %request headers list, and %request.Data(tParam,tKey)=tValue. You $order through the parameters, and then the keys, and for each combination of param and key there will be a value, even if it is null. Bear in mind that in a URL you can specify a parameter name more than once, so it is best to build a $List, tParamList=tParamList_$lb(tValue,tParam), insert the list into a property in your request message with MyApplicationRequest.Parameters.Insert(tParamList), and then move on to the next parameter.
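
As a sketch (the request message class and its Parameters list property are assumptions):

Set tParam=""
For {
    Set tParam=$Order(%request.Data(tParam)) Quit:tParam=""
    Set tParamList="",tKey=""
    For {
        Set tKey=$Order(%request.Data(tParam,tKey),1,tValue) Quit:tKey=""
        Set tParamList=tParamList_$lb(tValue,tParam)
    }
    Do tMyAppRequest.Parameters.Insert(tParamList)
}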

Once your request message is constructed you pass it to the instantiated Business Service as follows:

Set tSC = tService.ProcessInput(.tMyApplicationRequest,.tMyApplicationResponse), and that will invoke the OnProcessInput(pRequest,.pResponse) method of your Business Service.

When it comes to CSP and REST, I suspect similar logic is invoked to determine which Business Service in which namespace the request will be directed to.

CSP REST calls do not use the REST Dispatcher, but you can still instantiate the correct Business Service by name. I see no reason why it can't work, though I would need to dig deep into the documentation to make sure.

Nigel

Hi

Use the %Stream.FileBinary class. A simple example is as follows:

Set stream=##class(%Stream.FileBinary).%New()
Set sc=stream.LinkToFile("c:\myfile.txt")
While 'stream.AtEnd {
    Set line=stream.Read()
    ; Process the chunk here
}

Typically you would read each chunk from the file into your object 'stream', and once you have reached the end of the file (AtEnd=1) you would use the stream CopyFrom() method to copy the stream object into your class property, e.g.

Property MyPicture As %Stream.GlobalBinary;

and likewise you can copy the MyPicture stream into a new binary stream and then write that stream out to file.

Of course, you can also just read from the file directly into your property MyPicture and write it out to another file or another stream object.

You don't, strictly speaking, need the intermediary step of reading the file contents into a separate binary stream; you can read it directly into your 'MyPicture' property. However, there may be cases where you want to analyse the stream object before writing it into your class.
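
As a hedged sketch, assuming the Company.Staff class described below with a MyPicture stream property (the file names are made up):

Set staff=##class(Company.Staff).%New()
Set file=##class(%Stream.FileBinary).%New()
Set sc=file.LinkToFile("c:\myfile.jpg")
Set sc=staff.MyPicture.CopyFrom(file)        ; copy the file stream into the property
Set sc=staff.%Save()

; and back out again to a different file
Set out=##class(%Stream.FileBinary).%New()
Set sc=out.LinkToFile("c:\myfile-copy.jpg")
Set sc=out.CopyFrom(staff.MyPicture)
Set sc=out.%Save()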

When you %Save() your class that contains the property 'MyPicture', the stream data is written into the 'S' global in the Cache default storage architecture. That is to say, if I have a class "Company.Staff", then apart from each staff member's names, addresses and so on, you may have indices on certain properties, and you may have a property such as StaffMemberPicture As %Stream.FileBinary.

By the way, the IRIS documentation on LinkToFile() for binary streams warns that if the picture is edited outside of IRIS then the version of the picture stored in IRIS will differ from the edited picture external to IRIS; hence the reason you would read it into an intermediary binary stream and then copy it into your class property. If you suspect that the external picture may have changed, then if you ever export the binary stream back to file you might want to write it to a different filename, so that you won't overwrite the edited photo; you can then compare the file sizes to see if there is a difference, which will tell you whether the original has been edited. Or that's how I interpreted the documentation.

When IRIS creates the storage definition, the regular data fields go into the global ^Company.StaffD, the indices are created in the global ^Company.StaffI, and the stream data is stored in ^Company.StaffS.

Yours

Nigel

Hi

DBeaver uses JDBC to connect to Cache, so each instance of DBeaver you have open and connected to Cache will consume a license. Each terminal connection, whether it be Cache Terminal or PuTTY, will also consume one license per connection. The error condition you see means the Cache license limit has been exceeded. If you have Cache Studio open, that will consume a license, and so on.

From the Management Portal you can view the license information:

Management Portal -> System Administration -> Licensing -> License Key. In the Community version of IRIS / IRIS for Health you will see something like this:

Current license key information for this system:

License Capacity: InterSystems IRIS Community license
Customer Name: InterSystems IRIS Community
Order Number: 54702
Expiration Date: 10/30/2021
Authorization Key: 8116600000500000500008000084345EF8F2473A5F13003
Product=Server
License Type=Concurrent User
Server=Single
Platform=IRIS Community
License Units=5
Licensed Cores=8
Authorized Cores=8
Extended Features=3A5F1300
- Interoperability
- BI User
- BI Development
- HealthShare
- Analytics Run
- Analytics Analyzer
- Analytics Architect
- NLP
- HealthShare Foundation
- Analytics VR Execute
- Analytics VR Format
- Analytics VR Data Define
- InterSystems IRIS
Non-Production
Machine ID

From Management Portal -> System Operation -> License Usage you can see how many licenses are in use and by which user:

Current license activity summary (System > License Usage, last update: 2021-02-03 09:05:30.772):

License Unit Use              Local   Distributed
Current License Units Used    1       Not connected to license server
Maximum License Units Used    2       Not connected to license server
License Units Authorized      5       Not connected to license server
Current Connections           1       Not connected to license server
Maximum Connections           2       Not connected to license server

Yours

Nigel 

Hi

Dimitry is correct in his reply that this is a memory issue. Every Cache connection or Ensemble production class, as well as all of the system processes, runs in an individual instance of cache.exe (or iris.exe in IRIS). Each of these processes is in effect an operating system process (or job), and when a new user process is created Cache allocates a certain amount of memory to that process. The memory is divided into chunks: there is a chunk where the code being executed is stored, a chunk where system variables and other process information are stored, and a chunk used to store the variables created by the code being executed, whether a simple variable [variable1="123"] or a complex structure such as an object (which is basically a whole load of variables and arrays that are related together as the attributes of an object instance).

If you are using Cache Objects, then the variables you create and the objects you manipulate in a (class)method are killed when the method quits. Another feature of Cache Objects is that if you open an instance of a very large object with lots of properties, some of which are embedded objects, collections, streams and relationships, Cache does not load the entire object into memory; it loads the basic object, and only as you reference properties that are serial objects, collections and so on does Cache pull that data into your variable memory area. In normal situations you can, generally speaking, create a lot of variables and open many objects and still have memory left over. However, there are a couple of things that can mess with this memory management:

1) Specifying variables used in a method as PUBLIC, which means that once they are created they remain in memory until you either kill them or use the NEW command on them. 2) It is possible to write code that gets into a nested loop where each iteration creates more variables and creates or opens more objects, and eventually you run out of memory and a <STORE> error is generated.
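
By way of illustration, a contrived sketch of the second case (don't run this on a system you care about):

For i=1:1:100000000 {
    ; keep allocating local variable memory and never kill any of it;
    ; sooner or later the process exhausts its allocation and throws <STORE>
    Set data(i)=$Justify("",32000)
}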

I did a quick check to see where %SYS.BINDSRV is referenced, and there is one line of code in the %Library.RegisteredObject class, in a method called %BindExport, that calls a method in %SYS.BINDSRV. The documentation for %BindExport says the following:

/// This method is used by Language Binding Engine to
/// send the whole object and all objects it refers to
/// to the client.

So my guess is that you have a Java, .Net or some other language binding, and when %BindExport is called it is trying to pass the contents of your object (and any directly linked objects) to the client, and that is filling up your variable memory and generating the <STORE> error.

I also see that the %Atelier class references %SYS.BINDSRV.

So, to investigate further: do you use Atelier, and/or are you using class bindings (Java etc.)?

If you are, then something you are doing with Atelier or in your application is periodically trying to manipulate a lot of objects all at once and exhausting your process memory. You can increase the amount of memory allocated to Cache processes, but bear in mind that if you increase the process memory allocation then that setting will be applied to all Cache processes. I suspect there may be a way of creating a Cache process with a larger memory allocation for just that process, but I have no idea if it is possible or how to do it.

It is quite likely that even if you increase the process memory it may not cure the problem, in which case I would suggest that you contact WRC and log a call with them.

Nigel

Hi

You can run the method in the 'Output' window which you can access from the 'View' menu.

Just a note on the difference between Cache Classes and Classes in other OO based languages. 

Cache supports two types of method:

Method ABC(Param1, param2, ... paramN)

and

ClassMethod XYZ(param1, param2, ... ParamN)

A Method acts on an instance of your class, whereas a ClassMethod is independent of any object instance, i.e. you do not need to create a new instance or open an existing instance of a class in order to invoke a class method.
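
A quick sketch of the difference (Demo.Person is a made-up class carrying the two methods above):

Set person=##class(Demo.Person).%New()
Do person.ABC(1,2)                    ; Method: invoked on an instance (oref)
Do ##class(Demo.Person).XYZ(1,2)      ; ClassMethod: invoked on the class itself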

The second thing is that there are developer hooks that allow you to run code based on the following actions:

%New(): if you write a method %OnNew(), it will be called when %New() is run, allowing you to do any initialization logic on the newly created instance (see the sketch after this list).

%OnOpen(): This method is called after the instance of the class has been opened.

%Save(): there are two methods, %OnBeforeSave() and %OnAfterSave(), that you can write and that are called before and after the %Save() processing respectively.

%Delete(): likewise there are two methods, %OnDelete() and %OnAfterDelete(), where you can write code that is executed before or after the deletion.
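
For example, a minimal sketch of the %OnNew() hook (class and property names are made up):

Class Demo.Person Extends %Persistent
{

Property Name As %String;

/// Called when %New() runs; a good place for initialization logic
Method %OnNew() As %Status
{
    Set ..Name="Unknown"
    Quit $$$OK
}

}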

We don't really have the concept of a "Main" method that is automatically invoked when you reference the class other than through the %OnNew() and %OnOpen() methods.

In Ensemble things are slightly different, in that Business Service, Business Process and Business Operation classes have methods such as OnProcessInput() and OnRequest(), and in the case of Business Operations there is an XData block where you can specify the method to be invoked based on the message type being processed by the Business Operation. You could consider these the 'Main' methods of the class, but in reality they are the methods where the developer is expected to put the application logic for the class. There are other methods that the Ensemble production will execute before it is ready to run your code, and there are methods that the Ensemble production will run after your methods have completed.
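
For a Business Service that looks something like this (a sketch; the class name is made up):

Class Demo.Service Extends Ens.BusinessService
{

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    ; application logic for the service goes here
    Quit $$$OK
}

}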

Cache/Ensemble/IRIS also support the ability to write application- or system-specific code that will be executed when Cache starts up or shuts down, and likewise when an Ensemble production starts up or shuts down.

Have a look at the documentation on %ZSTART for more information on what you can do when Cache starts up or shuts down. There is similar documentation in relation to Ensemble Productions.

Nigel