Many guides to "good programming" (in any language) advise that the return value from a function or method should be used for "real" data only, and that any "exception" situation should be signalled by raising an error instead. While I'm not convinced this is always the best way, I can see the advantages. Code with repeated tests of returned status values can be messy and hard to read, and if the only thing the caller can do when the status is a fail is to quit out again with a status of "failed", then there is not a lot to be gained.
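For example, inside a class method (so the system status macros are available), the two styles look something like this; LoadFile and ParseFile are made-up helpers that return a %Status:

; Status-return style: every step is tested and the failure passed back up
Set tSC = ..LoadFile(file)
If $$$ISERR(tSC) Quit tSC
Set tSC = ..ParseFile(file)
If $$$ISERR(tSC) Quit tSC
Quit $$$OK

; Exception style: let the error propagate to a single handler
Try {
    $$$ThrowOnError(..LoadFile(file))
    $$$ThrowOnError(..ParseFile(file))
    Set tSC = $$$OK
}
Catch ex {
    Set tSC = ex.AsStatus()
}
Quit tSC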

Mike

I had a similar situation and ended up with an Ensemble Service reading in the metadata file (like your xml) and composing an Ensemble message from that information, including a file reference for the data file (your pdf). This meant that the metadata file could be automatically archived by Ensemble, but it also meant I then had to archive the data file myself, using calls to the OS like you have done for your xml file above.

In my case this did make some sense, as I wanted to convert the data file using an OS call to an "exe", and at least the messages in Ensemble carried all the meta information, file name, etc. But I also think it was a bit clumsy, so I would be interested in any better ideas.
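To be concrete about the OS call (the program name, variables and paths here are invented): on newer versions you can use $ZF(-100), while older code typically uses $ZF(-1) with a single command string. Something like:

; Run the converter synchronously; the return value is the OS exit code
Set exit = $ZF(-100, "/SHELL", "convert.exe", dataFile, convertedFile)
If exit'=0 Write !,"Conversion failed, exit code: ",exit

; The archive move itself can stay in ObjectScript rather than another OS call
If '##class(%File).Rename(dataFile, archivePath) Write !,"Archive move failed"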

Regards,

Mike

A lot of the code we have to deal with is like the "One Line" method, but then much of it was written back in the days when there was very little space in the partition for the program and the source code was interpreted directly, so it had to be kept small. What is different is that, while the code handling the traversal is all kept on one line (which I think makes the scanning method clear), the code that pulls out and writes the data usually goes on lower "dotted" lines, like this:

S Sub1="" F  S Sub1=$O(^Trans(Sub1)) Q:Sub1=""  D
.W !,"Sub1: ",Sub1
.S Sub2="" F  S Sub2=$O(^Trans(Sub1,Sub2)) Q:Sub2=""  D
.. ; etc

And yes, it uses the short version of commands (as you did for the Quit, I notice). I'm not saying this is the best style to use, obviously, as things have moved on, but we do still tend to use it when amending the old code.
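For comparison, the same traversal in the modern block syntax looks something like this:

Set sub1=""
For {
    Set sub1=$Order(^Trans(sub1))
    Quit:sub1=""
    Write !,"Sub1: ",sub1
    Set sub2=""
    For {
        Set sub2=$Order(^Trans(sub1,sub2))
        Quit:sub2=""
        ; etc.
    }
}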

I think it is useful to be aware of all these variations, as you may come across old code using them. Also, sometimes the array structure can be better or faster than objects and SQL, whether in global arrays or in temporary local arrays. It's a very neat and flexible storage system.

Mike


I agree, the Production class is a major problem for Configuration Management. In the past we've tried System Defaults and found it very awkward to use. In the last project we started with separate production classes for dev, test and live. The updates to the test and live versions were done in a release preparation namespace, with comparisons done to ensure they were in step, as mentioned earlier. Then a release was built from that and installed in the test slot and later in the live slot.

But it all got harder and harder to handle, and when the system went fully live we stopped doing full releases and went over to releasing only changed classes, etc., and never releasing the production classes. Now the problem is that changes to the production have to be made manually via the front end. Fortunately, there are a lot fewer of those now.

Regards,

Mike

Hi Andrew,

Thanks for posting this. I am in the process of coding a similar automated statistics email, so I am reading your code with interest and may well use some of it. My only comment is that, since this is for monitoring Ensemble, I decided to use Ensemble itself to implement it. So I have a message class with properties like "Subject" etc., and this is sent to an Operation that uses the Ensemble email adapter to send it. There is also a Service that runs once a day to pull the required information and build the email message.
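As a rough sketch of what I mean (all the class and property names here are invented, and the real code has more to it), the message and the Operation look something like this:

Class Demo.Stats.EmailRequest Extends Ens.Request
{
Property Subject As %String(MAXLEN = 200);
Property Recipient As %String(MAXLEN = 200);
Property Body As %String(MAXLEN = "");
}

Class Demo.Stats.EmailOperation Extends Ens.BusinessOperation
{
Parameter ADAPTER = "EnsLib.EMail.OutboundAdapter";

Method SendStats(pRequest As Demo.Stats.EmailRequest, Output pResponse As Ens.Response) As %Status
{
    ; Build a standard mail message and hand it to the email adapter
    Set mail=##class(%Net.MailMessage).%New()
    Set mail.Subject=pRequest.Subject
    Do mail.To.Insert(pRequest.Recipient)
    Do mail.TextData.Write(pRequest.Body)
    Quit ..Adapter.SendMail(mail)
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Demo.Stats.EmailRequest">
    <Method>SendStats</Method>
  </MapItem>
</MapItems>
}
}

The Service side is just a business service with a daily call interval (or schedule) that gathers the figures, builds one of these requests and sends it with SendRequestAsync.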

Mike

Great idea. (I find SharePoint frustrating. All I want is a simple link to a folder in a library, but it's really hard to get.)

Yep, that's the first thing I noticed as well. I often go from one search to another and do not want to step back to the start each time. Search is probably the most important facility for me. (Thanks to InterSystems for opening it up for comment, by the way.)

There may be arguments in favour of either solution, depending on the types of data involved and programmer preference, but if you are to embrace the full Ensemble "model" then I think the second option is far better.

By putting the non-HL7 data into a message sent to a Business Process, it gets stored and becomes visible in Ensemble in its raw form (or as close as you can make it) on the message queue into that Process. This makes support much easier, as you can see the before and after messages in the Ensemble GUI. Also, a Business Service should do a minimum of work so that messages are input as fast as possible.
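As a minimal sketch of that "do as little as possible in the Service" idea (the adapter choice, target name and the Demo.Msg.Referral message class are all placeholders; the message class itself is sketched after the next paragraph):

Class Demo.Svc.ReferralFileIn Extends Ens.BusinessService
{
Parameter ADAPTER = "EnsLib.File.InboundAdapter";

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
{
    ; Wrap the raw document in a message and pass it straight on;
    ; all parsing and decision making happens in the Business Process
    Set tReq=##class(Demo.Msg.Referral).%New()
    Do tReq.RawXML.CopyFrom(pInput)
    Quit ..SendRequestAsync("Demo.Process.Referral", tReq)
}
}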

Using DTL is also the cleanest option, since it is meant for transforming messages, but I admit that sometimes this is more effort than it is worth. I have had to deal with complex XML documents, and ended up writing methods in my custom message class to make the extraction easier to understand in the DTL.
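This is the sort of helper method I mean. The custom message class keeps the raw XML so support can still see it, and the DTL just calls source.GetElementText("PatientName") rather than repeating the parsing logic; the names here are made up:

Class Demo.Msg.Referral Extends Ens.Request
{
/// The document as received, kept so it is visible in its raw form
Property RawXML As %Stream.GlobalCharacter;

/// Return the text content of the first element with the given name
Method GetElementText(pName As %String) As %String
{
    Set tValue=""
    Do ..RawXML.Rewind()
    Set tSC=##class(%XML.TextReader).ParseStream(..RawXML,.tReader)
    If $$$ISERR(tSC) Quit ""
    While tReader.Read() {
        If (tReader.NodeType="element")&&(tReader.Name=pName) {
            ; The next node should be the element's character data
            If tReader.Read(),(tReader.NodeType="chars") Set tValue=tReader.Value
            Quit
        }
    }
    Quit tValue
}
}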

Mike