Hi.

I'm not sure what your "persistent objects" contain, but if the repeating data is in the form of strings or streams then perhaps you could put them into an XPath document object (%XML.XPATH.Document) and use the evaluator in that. I support a system that has a message class with a transient property holding the document, so it is only built once per processing run, plus a method that builds that property if needed and then calls the EvaluateExpression method on it for an expression supplied as a parameter. This is used in transformations to extract data to post into HL7 v2 messages, so the same calls should work in rules as well.

Of course, the XPath expressions may be no easier to define than your exporting and importing methods. I certainly struggled with them, and in the end had to build lots of special inputs that added the huge long list of nested items that usually went at the front of each expression, resulting in calls like source.Pull("Baby","ep2:id/@extension"), where "Baby" added a prefix of over 160 characters to find the baby section of the mother's full document before tunnelling down to the hospital number. If you already have a sub-section of the full document then you may not need as much.
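A minimal sketch of the idea (the method and property names here are my own invention, and it ignores the prefix-expansion trick described above):

/// Hypothetical message method: build the XPath document once, then
/// evaluate expressions against it. ..RawXML is assumed to hold the
/// stored XML string, and ..Doc is a transient %XML.XPATH.Document property.
Method Pull(pContext As %String, pExpression As %String) As %String
{
 If '$IsObject(..Doc) {
  Set tSC = ##class(%XML.XPATH.Document).CreateFromString(..RawXML, .tDoc)
  If $$$ISERR(tSC) Quit ""
  Set ..Doc = tDoc
 }
 Set tSC = ..Doc.EvaluateExpression(pContext, pExpression, .tResults)
 If $$$ISERR(tSC)||(tResults.Count()=0) Quit ""
 ; For a simple value result, the text is in the Value property
 Quit tResults.GetAt(1).Value
}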

Mike

Hi. We had a site upgrade from 2012.2.5 to 2017.1.0 last year, and it included a mirror. We needed very few code changes - just an issue with it failing to save objects inside a Business Process where we had used some "unusual" structures. The upgrade itself went smoothly. The only issue was afterwards, when the next backup was a "full" one instead of the scheduled "partial", using more space than expected. Our Production was much smaller than yours, with only about 120 items, and it is hard to say how much effort went into pre-release testing as it was "fitted in" around other work by a team of people. Maybe a couple of man-months?

To be honest, it all depends on how much custom or unusual code you have, and how much testing the customer wants. We upgraded a development namespace and re-ran test messages through all the important paths, comparing the results before and after the upgrade, plus some connection testing to cover all the "types" we used: FTP, web service, HL7, etc. In our case the testers included people from the user side, so they could decide when they were happy with it.

InterSystems were very helpful. We raised a call a few months beforehand, and they gave advice on testing and desk-checked our detailed plan of the upgrade itself, including how to do the mirror.

Good luck.

Hi - we may be overcomplicating the solution here. Rather than comparing the age for every record, all you need is to work out a single cut-off date to compare against the DOB. This was given in Jill's answer above, but another variation that might be clearer is:

WHERE DOB < TO_DATE((TO_CHAR(CURRENT_DATE,'YYYY')-12)||'0101','YYYYMMDD')

Also, the usefulness of an index will depend on the ratio of under-13 to over-13 records. If the vast majority are to be included, then use of an index may slow access down as the system flips back and forth over the main global, whereas a straight run without an index could be quicker (hopefully the compiler would work this out for you).

Regards,

I won't embarrass myself by listing the MUMPS code from 1991 that does this in our application, but I will comment that you need to work out how many birthdays have gone by, so it must compare month and day values once the basic year subtraction has been done. It gets quite complicated. (You might also like to look at whether you need more than just "years old", and also need months or days for very low values.)
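As a minimal sketch of the year/month/day comparison (not the 1991 code!), assuming $Horolog-format dates:

ClassMethod AgeInYears(pDOB As %Date) As %Integer
{
 ; $ZDate(...,8) gives YYYYMMDD, so the year is the first 4 digits
 Set tToday = $ZDate(+$Horolog,8), tBorn = $ZDate(pDOB,8)
 Set tAge = $Extract(tToday,1,4) - $Extract(tBorn,1,4)
 ; Knock a year off if this year's birthday (MMDD) has not happened yet
 If $Extract(tToday,5,8) < $Extract(tBorn,5,8) Set tAge = tAge - 1
 Quit tAge
}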

Mike

I have used the %XML.Writer class to create a document, but only a fairly simple one that was destined for a SOAP outbound call. The SDA is tricky, so I would imagine using HealthShare (Ensemble) would be much easier. (I have used Ensemble classes like EnsLib.EDI.XML.Document, which can be added to a message, etc. and used as the target for transformations once you have an appropriate document definition loaded in. That reduces the coding required, though not entirely, as repeating groups are an issue.)
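For what it's worth, the simple %XML.Writer case looks something like this (element names invented):

 Set tWriter = ##class(%XML.Writer).%New()
 Set tWriter.Indent = 1
 Set tSC = tWriter.OutputToString()
 If $$$ISERR(tSC) Quit tSC
 Do tWriter.RootElement("Episode")
 Do tWriter.Element("MRN")
 Do tWriter.WriteChars("123456")
 Do tWriter.EndElement()
 Do tWriter.EndRootElement()
 Set tXML = tWriter.GetXMLString()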

Mike

I'm just thinking that maybe we need more information as to why this is needed before recommending anything. If the network connection is good enough for mirroring, then why not just map the classes to a central repository? Perhaps all that is needed is security settings to prevent updates from the "slave" systems. Perhaps there is no network connection, in which case mirroring or shadowing is not possible, and what is needed is a good way to automate export/import to OS files.

Mike

I may have misunderstood your requirement, but you may not need a Business Rule at all. To pull data from the request into the context, I've actually used Transforms.

To get this to work, I created a new class that extends (%SerialObject, %XML.Adaptor) and defined in it the properties that I need to store. I could then define a Transformation from the incoming message type to this new one, pulling in everything I needed. I then added a property called "TempStore" of that type to the Context object in my BPL. To pull the data, I added a Transform activity with a Source of "request" and a Target of "context.TempStore", using the Transform.

Later activity boxes could then use the fields with references like "context.TempStore.priorMRN" to do tests, etc. I've also used the same trick to update outgoing messages with data from the context (using Create = "existing" in the Transform).
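The serial class itself is tiny; a sketch (only "priorMRN" comes from my example above, so add whatever properties you need):

Class Demo.TempStore Extends (%SerialObject, %XML.Adaptor)
{
Property priorMRN As %String;
}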

I hope this is useful.

Regards,

Mike

If you open the class in Studio and view the "Inspector" pane, then pick "Property" from the first drop-down and your property in the other (or double-click it), it shows a very long list of possible things you can add. It may even show all of them; I don't know. You can then click on the values to enter them, sometimes getting drop-down lists of possible values.

I've often used the Inspector as a way of finding out what might be available, and then searched for any keywords in the Caché documentation to confirm how to use them.

Regards,

Mike

Amir's answer with option 2 is what we did. The XML had to be Base64-encoded before it could be sent, so our code looked a bit like this:

Method ImportEpisode(pRequest As EnsLib.EDI.XML.Document, Output pResponse As Ens.Response) As %Status
{
  ; Output as 8-bit UTF-8 regardless of the Cache default
  ; (else Base64Encode gives an ILLEGAL VALUE error)
  Set sendingXML = pRequest.OutputToString("C(utf-8)",.tSC)
  If $$$ISERR(tSC) Quit tSC
  $$$TRACE("Sending: "_sendingXML)
  Set sending = $system.Encryption.Base64Encode(sendingXML)
  Set tSC = ..Adapter.InvokeMethod("ImportEpisode",.result,sending,{plus some other id parameters})
  If $$$ISERR(tSC) Quit tSC
  Set resultXML = $system.Encryption.Base64Decode(result)

...etc.

I hope this is useful to you.

Mike

Many guides to "good programming" (in any language) advise that the return value of a function or method should be used for "real" data only, and any "exception" situations should be flagged as an error. While I'm not convinced this is always the best way, I can see the advantages. Code with repeated tests of returned status values can be messy and hard to read, and if the only thing it can do when a status is a fail is to quit out again with a status of "failed", then there is not a lot to be gained.
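To illustrate the trade-off (the method names here are invented, and the second style assumes the methods throw exceptions rather than return statuses):

 ; Status-checking style: every call is followed by a test
 Set tSC = ..LoadPatient(.tPat)
 If $$$ISERR(tSC) Quit tSC
 Set tSC = ..UpdateEpisode(tPat)
 If $$$ISERR(tSC) Quit tSC
 Quit $$$OK

 ; Exception style: the happy path reads straight through,
 ; and failures are dealt with (or rethrown) in one place
 Try {
     Do ..LoadPatient(.tPat)
     Do ..UpdateEpisode(tPat)
     Set tSC = $$$OK
 }
 Catch ex {
     Set tSC = ex.AsStatus()
 }
 Quit tSC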

Mike

I had a similar situation and ended up with an Ensemble Service reading in the metadata file (like your xml) and composing an Ensemble message with that information, including a file reference for the data file (your pdf). This meant that the metadata file could be automatically archived by Ensemble, but I then had to archive the data file myself, using calls to the OS like you have done for your xml file above.

In my case this did make some sense, as I wanted to convert the data file using an OS call to an "exe", and at least the messages in Ensemble had all the metadata, file name, etc. But I also think it was a bit clumsy, so I would be interested in any better ideas.

Regards,

Mike

The timeouts for the web front end can be frustrating. Where we had searches that we wanted to run regularly, we ended up creating a Business Service class that does embedded SQL queries on the Ens.MessageHeader table and puts the results into a simple text message that then gets sent as an attachment to an email. This gives us our daily stats in a CSV format to copy and paste into a spreadsheet. Yes, we could have built an XML spreadsheet file directly, but that is tricky, and not much of an advantage as we want to build on it each day without the query working through many days' worth of data.
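The core of it is along these lines (a hedged sketch: the column names are from Ens.MessageHeader, but the date handling is simplified, and TimeCreated may be stored as UTC depending on your version):

 ; Count yesterday's messages per target and build CSV lines
 Set tFrom = $ZDateTime(($Horolog-1)_",0",3)  ; start of yesterday, ODBC format
 Set tTo = $ZDateTime(+$Horolog_",0",3)       ; start of today
 Set tCSV = "Target,Count"_$Char(13,10)
 &sql(DECLARE C1 CURSOR FOR
      SELECT TargetConfigName, COUNT(*) INTO :tTarget, :tCount
      FROM Ens.MessageHeader
      WHERE TimeCreated >= :tFrom AND TimeCreated < :tTo
      GROUP BY TargetConfigName)
 &sql(OPEN C1)
 For {
     &sql(FETCH C1)
     Quit:SQLCODE'=0
     Set tCSV = tCSV_tTarget_","_tCount_$Char(13,10)
 }
 &sql(CLOSE C1)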

We also had a go at creating something using Ens.BusinessMetric for a "recent activity" graph, but the end result was a bit limited in how it could be displayed and analysed (using DeepSee), as we only had the Ensemble licence.

Mike

Hi,

We have an Ensemble Service class that extends EnsLib.SOAP.Service and provides a web method with a parameter that is a class that includes a property of %Stream.FileBinary, plus all the metadata in other properties. This allows the source application, written in .NET, to send us documents fairly easily, as all the translation back and forth into XML, etc. is done for you (I assume it is also easy to do at the .NET end). There is not a lot of code needed to define the class and web service; it just has to build a message object with that same input class as a property and send it onwards as usual.
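In outline it looks something like this (all names invented, and "DocumentRouter" stands in for whatever target your Production uses):

Class Demo.DocumentUpload Extends (%SerialObject, %XML.Adaptor)
{
Property Content As %Stream.FileBinary;
Property FileName As %String;
}

Class Demo.DocumentMessage Extends Ens.Request
{
Property Upload As Demo.DocumentUpload;
}

Class Demo.DocumentService Extends EnsLib.SOAP.Service
{
/// The .NET client sees this as an ordinary web method
Method SubmitDocument(pUpload As Demo.DocumentUpload) As %String [ WebMethod ]
{
 Set tMsg = ##class(Demo.DocumentMessage).%New()
 Set tMsg.Upload = pUpload
 Set tSC = ..SendRequestAsync("DocumentRouter", tMsg)
 Quit $Select($$$ISOK(tSC):"OK", 1:"Error")
}
}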

(Unfortunately, we then have to convert the file into Base64-encoded chunks and insert them into segments of one of those MDM^T02 messages, like you do. But that's another story.)

Mike

A lot of the code we have to deal with is like the "One Line" method, but then some of it was written back in the days when there was very little space in the partition for the program, and the source code was interpreted directly, so it had to be kept small. What is different is that while the code handling the traversal is all kept on one line, which I think makes the scanning method clear, other code for pulling and writing stuff is usually on lower "dotted" lines, like this:

S Sub1="" F  S Sub1=$O(^Trans(Sub1)) Q:Sub1=""  D
.W !,"Sub1: ",Sub1
.S Sub2="" F  S Sub2=$O(^Trans(Sub1,Sub2)) Q:Sub2=""  D
.. ; etc

And yes, it uses the short version of commands (as you did for the Quit, I notice). I'm not saying this is the best style to use, obviously, as things have moved on, but we do still tend to use it when amending the old code.

I think it is useful to be aware of all these variations, as you may come across old code using them. Also, sometimes the array structure can be better or faster than objects and SQL, whether in global arrays or in temporary local arrays. It's a very neat and flexible storage system.

Mike

Hi Jenny,

The FOREACH has to iterate over something, but there is only one input field, so I don't think there is any other way than a CODE action in a DTL (at least on v2012). I could have done the whole loop in the one CODE action, or called an outside function, but I preferred to use the GUI to set up the ASSIGN actions for me, to get the extra validation and visual links in the DT Builder. I have used DTLs wherever possible in other areas too, such as in BPL to extract data from messages into temporary variables. Maybe not always the quickest or easiest way, but I think it makes the system more transparent.

For the loop, the DT is actually called from a routing rule. The source is a simple message class with a few fields, and the target an HL7 v2 MDM_T02 message. I suppose if a full BP were involved then the loop could have been implemented in BPL in some way, but I think that would get complicated. Perhaps DTL needs a more general WHILE/UNTIL loop action added?

This may be slightly dodgy, as it depends on assumptions about how the DTL will be compiled, but I have used CODE actions to implement a loop around a set of standard DTL actions.

In one case the source message had a stream with a document in it, and it needed to be chopped up into chunks, each one being put in a field in an HL7 v2 OBX segment. So an initial CODE action opens the loop with something like "Do {", gets the next chunk, and quits if there is none. Then there are some normal ASSIGN actions setting up the new segment - easy to put in, and they appear in the DTL display. And finally there is a second CODE action to end the loop with "}".
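Pieced together, the generated code around the ASSIGNs ends up looking roughly like this (the stream property name and chunk size are invented, and in my sketch the closing CODE action carries a While clause to keep the ObjectScript valid):

 Do {
     ; first CODE action: get the next chunk, stop when done
     Set tChunk = source.DocStream.Read(32000)
     Quit:tChunk=""
     ; ...generated code for the normal ASSIGN actions lands here,
     ; building the new OBX segment from tChunk...
 } While (tChunk '= "")  ; second CODE action closes the loop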

It works, but only as long as InterSystems keep the compiled code for the assigns nice and simple.

A similar use was when dealing with XML documents. Again I needed a loop that enclosed normal ASSIGN actions, but this time for things like stepping backwards and deleting unwanted elements, or creating them from an array.

Also, I used a CODE action to work out if a DTL was running inside a business process, or just in the Studio for testing, so that some expected properties could be created to allow the test to run. Just for debugging.

Regards,

Mike

I agree, the Production class is a major problem for Configuration Management. In the past we tried System Defaults and found them very awkward to use. In the last project we started with separate classes for dev, test and live. The updates to the test and live versions were done in a release preparation namespace, with comparisons done to ensure they were in step, as mentioned earlier. Then a release was built from that and installed in the test slot, and later in the live slot.

But it all got harder and harder to handle, and when the system went fully live we stopped doing full releases and went over to releasing only changed classes, etc., never releasing the production classes. Now the problem is that changes to the production have to be done manually via the front end. Fortunately, there are far fewer of these now.

Regards,

Mike

Hi Steve,

I have done something like you describe. I used BPL, and at the time tried to keep away from using bits of code, but it got complex, and in retrospect I'm not sure it was the best way. The diagrams are nice, but I think a bit of well-written code might have been easier to follow!

First I created a "TempStore" class with an "MRN" (Medical Records Number) property and no permanent storage. This is used as the target class for a transform that pulls out the patient id and puts it in that property.

In the BPL Process I added an instance of the TempStore class to the BPL context object, and the first activity in the diagram is the transform with Source of "request" and Target "context.TempStore".

With the MRN found, I then use code like the following in the Value of an "assign" activity to open the target stored object into another context property, "context.BNetEpisode", already defined with that stored class.

##class(...).MRNIndexOpen(context.TempStore.MRN,4)
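(For anyone unfamiliar with it, MRNIndexOpen is a method Caché generates automatically when the stored class declares a unique index on the property, along these lines, with the class name invented:

Class Demo.BNetEpisode Extends (%Persistent, %XML.Adaptor)
{
Property MRN As %String;

/// A unique index generates MRNIndexOpen(), MRNIndexExists(), etc.
Index MRNIndex On MRN [ Unique ];
}

The second argument in the call above is the concurrency; 4 asks for an exclusive lock on the opened object.)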

An "if" activity with a Condition like "$IsObject(context.BNetEpisode)" is used to see if anything was found, and create a new one if required by setting the "context.BNetEpisode.MRN" property equal to "context.TempStore.MRN".

The "context.BNetEpisode" property is then be used as the Target for "transform" activities later on with Create = "existing" used. Ensemble does a save automatically when the Process completes.

I hope this makes sense. (I cannot provide the full code as it belongs to the customer, and anyway it gets a lot more complex as there are 3 types of inbound message, one of which was an HL7 v3 document, and it was actually using an xml document inside the stored object to hold much of the data - but that's another story).

Mike

Hi Andrew,

Thanks for posting this. I am in the process of coding a similar automated statistics email, so I am reading your code with interest and may well use some of it. My only comment is that since this is for monitoring Ensemble, I decided to use Ensemble itself to implement it. So I have a message class with properties like "Subject" etc., and this is sent to an Operation that uses the Ensemble email adapter to send it. There is also a Service that runs once a day to pull the required information and build the email message.
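In outline, the Operation side is something like this (class and property names are mine, and recipients, server, etc. come from the adapter settings in the Production):

Class Demo.Msg.StatsEmail Extends Ens.Request
{
Property Subject As %String(MAXLEN = 200);
Property Body As %String(MAXLEN = "");
}

Class Demo.Op.Email Extends Ens.BusinessOperation
{
Parameter ADAPTER = "EnsLib.EMail.OutboundAdapter";

Method OnMessage(pRequest As Demo.Msg.StatsEmail, Output pResponse As Ens.Response) As %Status
{
 Set tMail = ##class(%Net.MailMessage).%New()
 Set tMail.Subject = pRequest.Subject
 Do tMail.TextData.Write(pRequest.Body)
 Quit ..Adapter.SendMail(tMail)
}
}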

Mike