The way that I have resolved this in the past is to have two properties in my Production Item.

The first is called PropertyDescription and the second is called PropertyID. PropertyDescription is referenced in the SETTINGS parameter and the query returns the Display Value of my property. The second property is flagged as Calculated, and when it is referenced it invokes the PropertyIDGet() method, which looks something like this:

Method PropertyIDGet() As %String
{
    // {table} is a placeholder for the table that holds the Description/ID pairs
    &sql(SELECT ID INTO :tId FROM {table} WHERE Description = :..PropertyDescription)
    if SQLCODE'=0 set tId=""
    quit tId
}

In your case you would use the syntax:

..PropertyLogicalToDisplay(..Property)
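For context, here is a minimal sketch (the class and property names are hypothetical) of the property-level LogicalToDisplay() method that the class compiler generates for a property defined with a VALUELIST/DISPLAYLIST:

Class Example.Demo.Item Extends %Persistent
{

Property Status As %String(DISPLAYLIST = ",Active,Inactive", VALUELIST = ",A,I");

ClassMethod ShowDisplayValue(pId As %String)
{
    set obj=##class(Example.Demo.Item).%OpenId(pId)
    // converts the stored logical value (e.g. "A") to its display value (e.g. "Active")
    write obj.StatusLogicalToDisplay(obj.Status)
}

}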

Nigel

/// This is a nice little debugging class<BR><BR>
/// All of my classes have an Include statement in them<BR><BR>
/// 
/// Include Example<BR><BR>
/// 
/// Then the Include Routine Example.inc has the following #define in it<BR><BR>
/// 
/// #define DebugLog(%s1,%s2,%s3) do ##class(Example.Debug.Logging).CreateDebugLog($classname(),%s1,%s2,%s3)<BR><BR>
/// 
/// Then in your code you can add calls to the Debug Logger as follows:<BR><BR>
/// 
/// $$$DebugLog("MyKey","This is my Debug Message",.dSC)<BR><BR>
/// 
/// To enable Debug Logging execute the following code in the namespace where your production is running<BR><BR>
/// 
/// do ##class(Example.Debug.Logging).DebuggingOnOff(1)<BR>
Class Example.Debug.Logging Extends %Persistent
{
Property CreateTS As %TimeStamp [ InitialExpression = {$zdt($now(),3,1,6)} ];
Property ClassName As %String(MAXLEN = 150);
Property Username As %String [ Required ];
Property Key As %String(MAXLEN = 100) [ Required ];
Property Message As %String(MAXLEN = 3641144, TRUNCATE = 1);
Index CDT On CreateTS;
Index CN On ClassName;
Index UN On Username;
Index K1 On Key;
ClassMethod CreateDebugLog(pClassName As %String = "", pKey As %String = "", pMessage As %String(MAXLEN=3641144,TRUNCATE=1), ByRef pStatus As %Status)
{
               set pStatus=$$$OK
               try {
                              // You might want to put a check in here to test whether you want to create a debug log
                              // So if you port the code to Production you can leave the debug calls in your code but
                              // turn off debugging
                              if '(+$get(^Example.Debugging)) quit
                               set obj=##class(Example.Debug.Logging).%New()
                               set obj.ClassName=pClassName,obj.Key=pKey,obj.Message=pMessage,obj.Username=$username
                              set pStatus=obj.%Save() if 'pStatus quit
               }
               catch ex {
                              set pStatus=ex.AsStatus()
               }
               quit
}
ClassMethod DebuggingOnOff(pOnOff As %Boolean)
{
               set ^Example.Debugging=pOnOff
}
ClassMethod PurgeDebugLog(pNumberOfDays As %Integer = 30, ByRef pRowCount As %Integer) As %Status
{
               set tSC=$$$OK,pRowCount=0
               try {
                              set date=$zdt($h-pNumberOfDays,3),id=""
                              for {
                                             set date=$o(^Example.Debug.LoggingI("CDT",date),-1) quit:date=""
                                             for {
                                                            set id=$o(^Example.Debug.LoggingI("CDT",date,id)) quit:id=""
                                                             set tSC=##class(Example.Debug.Logging).%DeleteId(id)
                                                             if 'tSC {write !,"Unable to delete Debug Log with ID: "_id set tSC=$$$OK continue}
                                                            else {set pRowCount=pRowCount+1}
                                             }             
                              }
               }
               catch ex {
                              set tSC=ex.AsStatus()
               }
               quit tSC
}

}
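If you want the purge to run automatically, one option (a sketch, assuming you are happy to create a custom Task Manager task; the class name and the NumberOfDays property are illustrative) is to wrap PurgeDebugLog() in a %SYS.Task.Definition subclass and schedule it from the Management Portal:

Class Example.Debug.PurgeTask Extends %SYS.Task.Definition
{

Parameter TaskName = "Purge Debug Logs";

/// Number of days of debug logs to keep (becomes a configurable setting on the task)
Property NumberOfDays As %Integer [ InitialExpression = 30 ];

Method OnTask() As %Status
{
    set tRowCount=0
    set tSC=##class(Example.Debug.Logging).PurgeDebugLog(..NumberOfDays,.tRowCount)
    quit tSC
}

}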

Hi

Let's assume you have an Ensemble request message with the following properties:

Class MyPackage.MyRequestMessage Extends Ens.Request

{

Property Action As %String(VALUELIST = ",Add,Update,Delete");

Property HL7Message As EnsLib.HL7.Message;

Property Patient As ?;

}

// Assume I have received or created an HL7 message in tHL7Message and I want to send it to a Business Process, and the BP will perform the specified action on my Patient based on the content of the HL7 message (bear in mind that this is just an example for illustration purposes; you probably wouldn't do exactly this in real life).

set tRequest=##class(MyPackage.MyRequestMessage).%New(),tRequest.Action="Add",tRequest.HL7Message=tHL7Message

set tSC=..SendRequestSync("My Business Process",tRequest,.tResponse)

The question is whether to open the Patient object and pass it in the request message, or to send the Patient ID instead,

i.e.

Property Patient as MyPackage.Patient; or

Property Patient as %String;

If you pass the patient as an object you have to take into consideration the following factors:

How long will the request message sit in the BP queue before it gets processed?

Once it is linked to the request message it will remain linked to the message until the message is purged

If another request comes along that wants to delete that Patient then you will run into a referential integrity problem, and %DeleteId() will probably fail because the patient is linked to the request message. (You can get around this by setting tRequest.Patient = "" in the BP once you have finished processing the request message, but if at some point in the future you want to see which patient was modified by that request message you won't be able to tell, as the property is now null.)

Likewise, if some other Business Process or class method elsewhere in your application wants to update that patient, you will run into concurrency issues where either the other process will not be able to update the patient because it has been modified by your process, or vice versa.

So my recommendation is that you pass the Patient RowId in the Patient property, and then in your BP open the Patient, do your update, save the change, and when your BP method quits the patient object will be released.
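A minimal sketch of what that looks like in the BP (assuming Patient is now a %String holding the RowId, and using the illustrative class names from above):

Class MyPackage.PatientBP Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As MyPackage.MyRequestMessage, Output pResponse As Ens.Response) As %Status
{
    set tSC=$$$OK
    set pResponse=##class(Ens.Response).%New()
    // open the patient by RowId rather than receiving the object inside the message
    set tPatient=##class(MyPackage.Patient).%OpenId(pRequest.Patient,,.tSC)
    if '$isobject(tPatient) quit tSC
    // ... apply the Add/Update/Delete logic here ...
    set tSC=tPatient.%Save()
    // tPatient goes out of scope when this method quits, so the object (and its lock) is released
    quit tSC
}

}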

I have been caught out in the past by passing objects by reference in Ensemble Productions, and it took me ages to work out why the object was not getting updated, or why another part of the application was unable to update the patient because it was effectively locked elsewhere in the application (i.e. in the BP processing the request message).

Nigel

Hi

There are a few things to understand here. Cache (including Ensemble and IRIS) is different from other DBMS systems in that if a field/property/column is defined as having a size of 50 characters but you only write 20 characters into that field, then only 20 characters will be used in the database record for that field. Unlike other DBMSs it does not preallocate 50 characters for that field on every record it creates in the table. Records are created using $LIST. So let us assume you have a table/class with 5 fields; then Cache will create a record as:

$listbuild(field1,field2,field3,field4,field5)

So our class has the following definition:

Property Field1 As %String(MAXLEN = 100);

Property Field2 As %String;

Property Field3 As %Date;

Property Field4 As %Boolean;

Property Field5 As %Integer;

and our values for these fields are "Nigel","Salm","1962-09-16","1","0608449093" then our record will look like this:

$lb("Nigel","Salm",44453,1,0608449093)

Internally (in layman's terms) Cache stores each field in the list, and at the beginning of each field it stores some information about the field: the actual length of the field and the internal datatype, which tells it how many characters are actually used. So the record looks something like this:

{5,S}Nigel,{4,S}Salm,{5,N}44453,{1,B}1,{10,N}0608449093

This is a simplistic representation for the purposes of illustration. Alphabetic characters consume one character each, while numerics only use as many bytes as are required to store the number internally.

As I say, this is a simplistic explanation for exactly what is happening  at the disk level and I'm sure the ISC database experts would be able to explain it far better than I can but for the purposes of illustration this will suffice.
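If you want to convince yourself of the "only the bytes actually used" point, a quick way is to build the same $LIST and inspect it (e.g. in a small test method):

    set rec=$listbuild("Nigel","Salm",44453,1,0608449093)
    write $listlength(rec)     // 5 - the number of fields in the record
    write $list(rec,1)         // "Nigel"
    write $list(rec,3)         // 44453 - the internal ($Horolog) form of 1962-09-16
    write $length(rec)         // total length of the record - only the bytes actually used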

The next thing we need to take into account is the B-tree structure of the cache.dat file itself. The cache.dat file stores a list of the globals in the database. Each global is represented by a B-tree structure, and the cache.dat file knows that, say, global ^Nigel starts at block 12567.

Beneath that there is a layer of what we refer to as map blocks. The first map block contains the block references for ^Nigel(1)-^Nigel(10), the next map block ^Nigel(11)-^Nigel(20), and so on. At the next map block layer, the first map block tells us the block location for ^Nigel(1,0)-^Nigel(5,0), the next block at that level tells us the block location for ^Nigel(6,0)-^Nigel(10,0), and so on. Each block has a Right Link Pointer (RLP) to the next logical block in the chain, so when traversing a global Cache can reference the block that contains the pointers for ^Nigel(1,0)-^Nigel(5,0), follow the RLP to the next block at that level (which contains ^Nigel(6,0)-^Nigel(10,0)), and keep following those RLPs until it finds the block that contains the pointers for ^Nigel(7,0).

As the global grows in size, the number of map block levels (the depth of the tree) increases.

Finally we get to the 'leaves' of the B-tree, which are the data blocks where the actual record data is stored.

Using our simple example of very short little records, a data block may contain say 10 records of data. So, using an example from the documentation, the following global

^Data(1999) = 100
^Data(1999,1) = "January"
^Data(1999,2) = "February"
^Data(2000) = 300
^Data(2000,1) = "January"
^Data(2000,2) = "February"

is stored in one data block in the format

Data(1999):100|1:January|2:February|2000:300|1:January|2:February|...

Like all blocks in the B-tree structure, the block has an RLP to the next data block, which contains the next N records.

So let us assume that we have a data block that contains 10 records and you delete a record, say record 7. The data block at this point is sitting in memory, and when you delete the record the data block is updated so that it now holds record1, record2, record3, record4, record5, record6, record8, record9, record10, and the rest of the block is filled with null characters. The block is flagged as 'dirty', which means that it has been modified. It will be written to the Write Image Journal (WIJ) file, which is a file on disk that holds a representation of all of the data blocks in play in the global buffer pool in memory, and a process known as the Write Daemon will write the block back to disk and flag the block in memory as clean. At some point that data block in memory will be removed, once no further activity has occurred on it and the space in the global buffer pool is required for new data coming in.

So let us assume that our 10 records completely filled the available space in the data block; essentially 100% of the available space is used. In removing record 7 we have removed 10% of the data in the block, so the data block is now only 90% full.

If you remove all of the records in the data block then Cache will unlink that block from the B-tree structure and flag it as free.

Let's look at a different scenario. Let's say that we want to add another record into our data block and that record is too large to fit into the available space in the block. Cache will then perform a block splitting action, where it grabs an empty (free) block and links it into the chain.

So in our chain we have Block A, Block B and Block C. Our data is in Block A. Cache knows that Block D is free, so it will grab Block D and move some of the data from Block A into Block D. It will then add the new record into Block A and adjust the RLPs so that our sequence is now Block A -> Block D -> Block B -> Block C.

This means that at least two of our blocks are only partially filled, and if we never add more data into Block A or Block D then they will remain partially empty.

At this point we might ask the question "Can I compress my global?" Effectively we want to take all of the data in our 4 blocks (A, D, B, C) and compress it into just 3 blocks (A, D, B).

There are global functions that will allow you to do this (and in very old versions of Cache and its predecessor MUMPS it was quite common to do this form of compression, as disk space was limited and costly; these days it is hardly ever considered). There is one big disadvantage to compressing globals: should you add a record into a compressed block structure, Cache will have to do a block splitting action, and if you add enough records over time Cache will have had to do many such block splitting actions, which from Cache's point of view are very costly due to the amount of work required to add a free block into the chain and shuffle the data around. Ultimately you will end up in the same position of having many data blocks that are only partially filled.

So to summarise at this point: Cache only stores the actual data written into any field. Records are only as long as the data they contain; they are not as long as the class/table definition. Cache likes to keep blocks roughly 80% filled so that there is some free space to add more data to a block without necessarily having to do a block splitting action.
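If you want to see how densely your own globals are packed, the ^%GSIZE utility will report this for you. Run it from a terminal in the relevant namespace:

    do ^%GSIZE

and answer the prompts, asking for the detailed report; it shows, per global, how many blocks are used and how full they are. (That is from memory, so check the utility's prompts on your own version.)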

The next thing to consider is fragmentation. In an ideal world an entire global B-tree structure (global A) would occupy a contiguous set of directory, map and data blocks starting from location 1 and ending at location N, and global B would occupy the next chunk of contiguous blocks from block N+1 through N+n. In reality this is seldom possible, and so it is quite likely that a global may occupy random chunks of non-contiguous blocks. In the very dark ages of the 1980s this was problematic given the hard disk platter technology available at the time: a global could be so fragmented that every block read sent the disk platter and disk head spinning and moving hectically as Cache attempted to get all of the required blocks from disk into memory, and so Cache technicians would spend a lot of time specifying the start position of a global and allocating a chunk of contiguous blocks from that start point based on the expected size of the global. As technology improved and we moved on to disk arrays and solid state drives these issues became inconsequential.

Generally speaking, the advice given to Cache IT operators is "Let Cache deal with the management of the disk system, the file system and the operating system, as every instance of Cache is highly optimised for the platform it is running on". In fact, the more you try and interfere, the more likely it is that you will introduce performance and integrity issues.

Then there is fragmentation at the disk level. Many of you who work on Windows will be familiar with disk fragmentation, where there is not enough contiguous space on the drive to store your file in one contiguous chunk and as a result your file gets split across many areas of the hard drive. It is for this reason that, for production systems, InterSystems will always advise that your Cache databases be allocated to their own disk system, so that they do not have to compete with files being created by a million and one other applications. You can use disk defrag tools to sort out this form of fragmentation, but you will notice that these tools will not attempt to move chunks of a cache.dat file from one sector to another in order to make it one contiguous file occupying a contiguous set of disk blocks. Cache won't allow it, as it has built up its own internal picture of where cache.dat blocks are physically located, and no defrag tool can just go and move chunks of a cache.dat file around without upsetting Cache's internal picture of where data is physically stored.

The final point to consider is whether you can shrink a cache.dat file, and I suspect that that is the real question you are asking here. In real life, in typical transaction-based applications, data comes in, gets processed, is kept in the database for a period of time and is then purged. Your data will consist of static data in the form of code tables, config tables and so on. There may be a large data take-on to build these tables to start with, but once in place they typically don't grow very much over time. Your transaction data will initially grow your database to a certain size and it will continue growing by a predictable amount every day/week/month based on the amount of incoming traffic. If you persist your transactional data for ever then your database will grow and grow, but again the growth will be predictable based on the quantity of incoming data; assuming that over time more users get added to the system and there is a predictable growth in your business, the quantity of incoming data will grow accordingly. Cache is designed to expand the database in configurable chunks (the default is 1MB at a time, if I remember correctly). Again, there is a formula that can be followed to calculate the optimal size for each new database expansion. If that number is too small then Cache will end up expanding the database lots and lots of times in small incremental chunks; make the number too big and you end up with a database that is way bigger than you actually need. Expanding the database is costly from Cache's point of view, therefore you don't want Cache constantly increasing the database size in small little chunks. When it expands it should expand by a reasonable amount, to allow for the expected growth in data for, say, the next day/week/month.

 Here are some things to consider though:

1) How long do you need to keep transaction data?

- Some applications require data to be held for a minimum period of time, ranging from 6 months through to 5 years or more, depending (in most cases) on what is required by law or for future data analysis. Most applications will archive data to a data warehouse where you can use DeepSee to cube it, slice and dice it and produce impressive management stats.

- Once data has been processed and has served its usefulness, you would typically have some form of housekeeping task that will archive it, transform it into an operational data store and ultimately purge it from the run-time production database.

-- If you are using Ensemble and your Production classes contain loads of $$$TRACE calls (and other forms of logging) then this can create huge amounts of data in ^Ens.Util.LogD and ^Ens.Util.LogI. Make sure that log entries are purged automatically after a number of days (the default is 7), and likewise the Ensemble message bank (see the sketch after this list).

- Developers have a habit of writing debugging information into globals, and then when the code goes into production they forget to take those debug statements out. At some point in the future someone will ask "What is this global ^NigelDebug?" and "Why is it so big?", and because no one can remember who Nigel was, and though they can probably find the places in the code where ^NigelDebug is being created, they don't want to remove those lines of code because they don't know if they will break something and cause the application to crash. Good developer teams will decide on a structured way of creating debug logs and build in a flag that says "if this is production, don't create this debug log".
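For the Ensemble purge mentioned above, the easiest route is the built-in "Purge Management Data" task, which you can also drive programmatically. A rough sketch (the property names are from memory, so check them against the Ens.Util.Tasks.Purge class in your version):

    // create and run the standard Ensemble purge task on demand
    set task=##class(Ens.Util.Tasks.Purge).%New()
    set task.NumberOfDaysToKeep=7      // keep a week of management data
    set task.BodiesToo=1               // purge message bodies as well as headers
    set task.TypesToPurge="all"        // event log, messages, business process instances, etc.
    set tSC=task.OnTask()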

So, strictly speaking, if your housekeeping is in order and Ensemble message and trace logs are purged automatically and frequently, then your database size and growth over time are merely predictable mathematical calculations.

If you have the situation where a database has grown very large and someone decides that it is time to run a purge that deletes massive amounts of data, then you can end up with a database that is far larger than it needs to be, and that is when the question arises "How do I compress a cache.dat?". The answer is that you can't, not yet anyway. Based on everything I have said so far, the expectation is that databases grow and grow in a predictable, manageable way. The concept of wanting to shrink a database was never really given much priority because of this expectation that databases will always grow, never shrink.

So how do you compress a database between now and that point in time when IRIS supports a database shrink function? The only way that I know of, and that I have done in the past, is to create a brand new database, make its initial size as large as you realistically expect it to be, make sure that the growth factor is calculated with some thought, and then copy your data across, starting with your static data. That's the easy bit. When it comes to your transaction data you have to be a bit clever: you need to set a point in time from which you can track any changes to the database. You then copy your transactional data across to the new database and, when you have a planned system downtime, you copy across all of the database changes that have occurred in the period between that fixed point in time and now. Once you are happy that that process is complete and you have data integrity, you move your users and application connections over to the new database, and once you are confident that all is well you go and delete the original cache.dat.
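For the static data, the copy itself can be as simple as a merge using an extended global reference (a sketch with illustrative global and namespace names; for large globals the ^GBLOCKCOPY utility is the faster route):

    // run in the new namespace; copies a code table from the old namespace
    new $namespace
    set $namespace="NEWNS"
    merge ^MyCodeTable = ^["OLDNS"]MyCodeTable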

It was a very simple question you asked, and I have probably given far more information than is strictly required to answer it, but I'm getting old, and with lockdown still in force I am enjoying getting some of the 30 years of knowledge of Cache that I have stored in my head down onto paper (or in this case onto some virtual server in the ethernet of everything).

Yours

Nigel

Hi Igor

That's great. I have a good understanding of how Ensemble works on Windows, but almost every Ensemble interface I have written has ended up running on some form of Linux, and most of those interfaces are based on 3rd party requests coming into the interface (in the form of Lab or Pharmacy orders, for example), with the interface sending back the results at some point in time. So though the internals of the interface may be quite complex, the quantity of data is not necessarily very high.

However, when I was writing the Ensemble engine for a prototype pharmacy dispensing robot, my Ensemble engine had to interact with the underlying Java-based ROS (Robot Operating System), and every single mechanical component of the robot, right down to the LED lights, motors, sensors and so on, was generating a massive stream of JSON event messages. These were grouped into queues with one or more Business Services handling each queue. As a Business Service callback can only be invoked every 1/10th of a second (the minimum call interval), I ended up writing my own loops within each callback, to the point where each service was processing around 3,000-4,000 messages per second. When we gave the robot the instruction to shut down, the ROS would start shutting down the mechanical parts and I had to wait for the last messages from the components to ensure that they had all shut down correctly. The database was being journalled as well.

We found that forcing the production to halt had all sorts of ramifications, some of which I mentioned in my first response. We couldn't leave data in the queues and pick up from where we left off when we restarted the robot, and so we saw this behaviour of lots of Ensemble processes firing up to help clear the queues; the WIJ file would grow very large and the system would ultimately freeze. That forced us to do a complete reboot, but Ensemble would then have to deal with rolling back the WIJ file and it would take ages for the system to finally become responsive again. I didn't have the option of throwing more hard drives or memory into the configuration, and eventually I got the Ubuntu guys to show me what was happening on the system during shutdown. That is where I saw this behaviour, which was quite different from what I am used to on Windows, and that is when I discovered that increasing the wait time for the production to stop did the trick: just increasing it to 60 seconds made all the difference. I know this doesn't really add to my original reply, but I thought I would give some context to my recommendation for other developers who are faced with similar issues.
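For reference, the shutdown timeout can also be passed when you stop the production programmatically; a small sketch (the 60 here simply mirrors the value that worked for me):

    // stop the currently running production, allowing up to 60 seconds
    // for queues and jobs to drain before anything is forced down
    set tSC=##class(Ens.Director).StopProduction(60,0)
    if 'tSC write !,$system.Status.GetErrorText(tSC)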

Nigel

Hi

So you can use global indirection here:

you can set a variable to the name of a global

set gbl="^MyGlobalName"

you can then do either of the following:

if $data(@gbl)#10 {write !,"Global: ",gbl," has a value of ",@gbl}
else {write !,"The global: ",gbl," is not defined"}

or you can use it to reference nodes within the global root:

for gbl="^A","^B","^C","^D","^E" {write !,"Global: ",gbl," for the node: ",gbl,"(""SYSTEM"") has a value: ",@gbl@("SYSTEM")}

you can also do this:

set gbl="^NigelGlobal(""Subscript1"",""Subscript2"")"
set y=$o(@gbl@(y))

which effectively reads

set y=$o(^NigelGlobal("Subscript1","Subscript2",y))

so, to demonstrate, I first executed the following lines of code:

set ^NigelGlobal("Subscript1","Subscript2","This is a Test")=""
set y=""
set y=$o(@gbl@(y))
if y'="" write !,y

and the output is:

This is a Test
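As a side note (just a tip on top of the above, using the same hypothetical global): rather than hand-building the quoted subscript string, you can let $name do the quoting for you, which is much less error-prone:

set gbl=$name(^NigelGlobal("Subscript1","Subscript2"))
set y=$o(@gbl@(""))

gbl now contains the string ^NigelGlobal("Subscript1","Subscript2") with the quoting handled for you, and y is again "This is a Test".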

if you want to use this across namespaces then you need to do the following:

set ns="DEV"          // the name of my other namespace (my current namespace is "QC")
set gbl1="^MyGlobalA",gbl2="^["""_ns_"""]MyGlobalB"
merge @gbl1@("NodeA")=@gbl2@("NodeB")

this then reads as:

merge ^MyGlobalA("NodeA")=^["DEV"]MyGlobalB("NodeB")

Yours

Nigel

Hi

What are your settings for charset, content type and content encoding?

I would expect to see something like this:

  1. ContentEncoding = "HL7-ER7"
  2. ContentCharset = "UTF-8"
  3. ContentType = "application/hl7-v2"

The most common type of framing is MLLP. If you are acting as an HL7 server and you don't know what the client's framing is, then set Framing to Flexible; that way Ensemble will try to detect the framing based on the properties listed above and by looking for telltale characters (segment terminators) such as LF $c(10) or CR,LF $c(13,10). Depending on the properties listed above you may see the terminator represented as "\r".

If you are the HL7 client then you can get away with Framing = None, provided both you and the 3rd party server are consistent on the content type, charset and encoding.

I hope that gives you some ideas of what to look out for and the questions you need to ask the 3rd party application you are trying to communicate with.

Yours

Nigel

Hi Vic

So a <PROTECT> error would indicate an attempt to write to a read-only database. Even though we typically use the Management Portal to manage Ensemble productions running in application namespaces, and theoretically we don't go anywhere near the ENSEMBLE, ENSLIB, CACHE, CACHELIB or MGR (%SYS) databases, the reality is that users and roles are maintained in the %SYS namespace and Ensemble itself writes data to the ENSEMBLE database. CACHE, ENSLIB and CACHELIB are all read-only databases, and the classes in those databases are mapped to all namespaces that require them.

So either the database resources you are trying to access only have Read (R) rights, or, in the case of security management, you can manage users and roles through the Management Portal no matter which namespace you are connected to, but if you attempt to programmatically create users and roles from an application namespace you will hit a <PROTECT> error, as you have to be in the %SYS namespace in order to perform these actions programmatically. I guess what I am trying to say is that it might not be sufficient to grant access to the application database resource alone; you might need to assign access to some of the other system database resources as well, though without actually attempting this exercise myself I can't be more specific than that. If I get an opportunity tonight, after I have completed my daily work task list, I will attempt to replicate what you are trying to do and see if I can get it to work and what resources/roles are required in addition to those listed by Carl.
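To illustrate the %SYS point above, a minimal sketch of creating a user programmatically (the user name, password and role here are obviously just placeholders, and your own process needs the appropriate security privileges for this to work):

    // switch to %SYS for the duration of this code, then switch back automatically
    new $namespace
    set $namespace="%SYS"
    set tSC=##class(Security.Users).Create("testuser","%EnsRole_Operator","SomePassword","Test User")
    if 'tSC write !,$system.Status.GetErrorText(tSC)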

Hi

I am wondering if this is a user security level issue. As you know, you need Windows/Unix administrator rights to install Cache/Ensemble/IRIS. The same is true if you want to run any of the executables in the 'bin' directory, such as cstart, the Cache system tray and so on. I am not a UNIX expert, so I can only speak from my Windows experience, but assuming this is Windows: close the Cache/Ensemble/IRIS system tray icon, then in the 'bin' directory find csystray.exe, right-click it and select 'Run as administrator', and then see if you can start or shut down your instance.

If that fails, and all other suggestions from the Developer Community fail as well, uninstall Cache/Ensemble/IRIS and reinstall it, making sure you run the installation as Administrator.

The final suggestion is to check your cache.key or iris.key and make sure your license is still valid.

Yours

Nigel

Hi

You have to differentiate between resources and roles. Assigning a database resource to a user with RW access will do exactly as it says: were the user able to access the database directly, they would indeed be able to read and write data from/to it. However, what you want to do is give them access to a select set of Management Portal menu options and forms, and for that you need to assign the appropriate roles. There are a number of Ensemble roles available, including the following:

 
%EnsRole_Administrator - Ensemble Administrator
%EnsRole_AlertAdministrator - Ensemble user with administrative Alert access
%EnsRole_AlertOperator - Ensemble user with Alert access
%EnsRole_Developer - Ensemble Developer
%EnsRole_Monitor - Ensemble Monitor
%EnsRole_Operator - Ensemble Operator
%EnsRole_PubSubDeveloper - Ensemble PubSub Developer
%EnsRole_RegistryManager - Administrator of the Public Registry
%EnsRole_RegistrySelect - Role for viewing Public Registry tables
%EnsRole_RulesDeveloper - Ensemble Rules Developer
%EnsRole_WebDeveloper - Ensemble Web Developer

There are other roles that give access to the general administration of your Cache/Ensemble/IRIS instance. These roles allow your user to do anything from monitoring the system to performing system operator functions (creating Task Manager tasks, managing journals and other system-related tasks). These roles include:

%Manager - A role for all System Managers
%Operator - System Operators

Then there are SQL related roles:

%SQL - Role for SQL access
%SQLTuneTable - Role for use by TuneTable to sample tables irrespective of row level security

These roles would allow the user to run SQL queries in the Management Portal -> System Explorer -> SQL and perform other DBA functions such as tuning a table, which is a process whereby Cache/Ensemble/IRIS analyses a class/table definition, the data in the table and the table indices, and based on this adds selectivity information to the class definition; this assists the SQL query generator in choosing the least costly and most efficient use of standard indices, bitmap indices and iFind indices to retrieve the requested data.

Finally, you have the %All role, which gives the user access to everything and should only be granted to the very select group of managers/developers who need the flexibility of accessing all aspects of your Cache/Ensemble/IRIS installation. This role should be used with great caution because of the possibility of misuse in the wrong hands.
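If you end up scripting the role assignments rather than clicking through the portal, a rough sketch (run in %SYS, with a hypothetical user name; check the exact property list against the Security.Users class in your version):

    new $namespace
    set $namespace="%SYS"
    // fetch the user's current definition, append the roles, and save it back
    set tSC=##class(Security.Users).Get("testuser",.props)
    if tSC {
        set roles=$get(props("Roles"))
        set:roles'="" roles=roles_","
        set props("Roles")=roles_"%EnsRole_Operator,%EnsRole_Monitor"
        set tSC=##class(Security.Users).Modify("testuser",.props)
    }
    if 'tSC write !,$system.Status.GetErrorText(tSC)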

As the previous commentator wrote, check out the documentation on 'Controlling Access to Management Portal Functions', but hopefully my response gives you a quick overview and understanding of resources and roles in general.

Nigel