Hi. It's been a long time since I dealt with this, but when I was trying to get a stats extract to run once a day I used ADAPTER = "Ens.InboundAdapter" (with some custom settings for which operation to send to and an email address). In the production I set up a schedule as described above to define a run for a few minutes each day, but then defined CallInterval as 999999, so it ran once at the start of the interval and not again. I don't know if it was the best solution, but it seemed to work, and it kept the schedule visible in the production should we need to change it. The service just ran queries on the Caché database, but you could use a SQL adapter to query other databases.
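
For what it's worth, the shape of that service was roughly as below. This is only a sketch, not the original code: the class and property names are illustrative, and the real extract queries go where the comment is.

    /// Illustrative once-a-day stats service (names invented for this sketch).
    Class xxx.Svc.DailyStats Extends Ens.BusinessService
    {

    /// Plain inbound adapter: OnProcessInput is called once per CallInterval.
    Parameter ADAPTER = "Ens.InboundAdapter";

    /// Operation to send the extract to, and who to email it to.
    Property TargetConfigName As %String;

    Property SendTo As %String;

    Parameter SETTINGS = "TargetConfigName,SendTo";

    Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
    {
        // Build the email message, run the extract into its attachment,
        // then pass it to the operation.
        Set tRequest = ##class(xxx.Mess.Email).%New()
        Set tRequest.To = ..SendTo
        // ... run the queries and write the CSV results here ...
        Quit ..SendRequestAsync(..TargetConfigName, tRequest)
    }

    }

With the Schedule set to a short daily window and Call Interval at 999999 in the item settings, OnProcessInput fires once at the start of the window and the huge interval stops it firing again.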

Update - I just found that I actually used the same trick for middle-of-the-night SQL extractions (a mass update of medical instrument reference data).

And looking back at the original question: the answer is that in the ideal solution (not always the best one) things should not "magically" appear in the middle of a BPL; there should be a service starting from the left sending stuff to the right in a visible path, even when the input comes from "inside" the system. So here you might need that SQL adapter reading rows and sending a message per row. (Though the solution can get a lot more complex - I know mine did, with special message objects, streams and stored procedures!)
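
For the "message per row" idea, the shape is roughly the sketch below (again illustrative only: the service, message class and column names are assumptions, not code from a real interface).

    /// Sketch of a service using the SQL inbound adapter: the adapter runs the
    /// configured Query setting and calls OnProcessInput once for each row.
    Class xxx.Svc.ReferenceRowReader Extends Ens.BusinessService
    {

    Parameter ADAPTER = "EnsLib.SQL.InboundAdapter";

    Property TargetConfigName As %String;

    Parameter SETTINGS = "TargetConfigName";

    Method OnProcessInput(pRow As EnsLib.SQL.Snapshot, Output pOutput As %RegisteredObject) As %Status
    {
        // One custom message per row; xxx.Mess.ReferenceUpdate and its
        // properties are hypothetical.
        Set tRequest = ##class(xxx.Mess.ReferenceUpdate).%New()
        Set tRequest.Code = pRow.Get("Code")
        Set tRequest.Description = pRow.Get("Description")
        Quit ..SendRequestAsync(..TargetConfigName, tRequest)
    }

    }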

Hi. It's an awfully long time since I looked at that tool, but Michel is probably right: it's to do with how the lines are defined. I have a vague memory that "multiline" was actually provided as an array, not one string with delimiters, and the top node of the array held a line count, e.g.

remarks=2

remarks(1)="first line"

remarks(2)="last line"

But I could be wrong.  :-)  Try the help on that "options" field.  / Mike

Hi. This was a few years ago, but we could not find any way to do this in the standard Ensemble setup at the time. The Message Viewer can be asked the right sort of questions, but then times out because the query takes too long.

For monthly stats we wrote embedded SQL in a class method that we ran at the command prompt. This wrote out the results as CSV to the terminal and once it finished we copied and pasted the various blocks into Excel. Very slow and manual, but only once a month so not so bad. (Our system only purged messages after many months.)

Then we wanted daily numbers to feed into another spreadsheet. So we built an Ensemble service running once a day that ran similar embedded SQL, but only for the one previous day of messages. It put the CSV results into a stream in a custom message that went to an operation, which then sent emails out to a user-defined list. The essential bits are below. Hope it helps.

    // Work out the date range: the whole of the previous day.
    Set today = $ZDATE($H,3)_" 00:00:00"
    Set yesterday = $SYSTEM.SQL.DATEADD("day",-1,today)
    // Build the email request (custom message class with a stream Attachment).
    Set tRequest = ##class(xxx.Mess.Email).%New()
    Set tRequest.To = ..SendTo
...etc
    // Cursor over yesterday's message headers, counting messages per route.
    &sql(DECLARE C1 CURSOR FOR
    SELECT
          SUBSTRING(DATEPART('sts',TimeCreated) FROM 1 FOR 10),
          SourceBusinessType,
          SourceConfigName,
          TargetBusinessType,
          TargetConfigName,
          COUNT(*)
        INTO :mDate,...
        FROM Ens.MessageHeader
        WHERE TimeCreated >= :yesterday AND TimeCreated < :today
        GROUP BY...
        ORDER BY...
    )
    &sql(OPEN C1)
    &sql(FETCH C1)
    While (SQLCODE = 0) {
        // One CSV line per result row, written to the attachment stream.
        Set tSC = tRequest.Attachment.WriteLine(mDate_","_...
        &sql(FETCH C1)
    }
    &sql(CLOSE C1)
    // Send the finished message to the email operation.
    Set tSC = ..SendRequestAsync($ZStrip(..TargetConfigName,"<>W"),tRequest) If $$$ISERR(tSC)...

Sadly, in my team we've all been writing MUMPS for so long that the abbreviated style comes naturally and is a hard habit to break. Yes, expanded is better for new starters in the language.

However... Playing devil's advocate you could say that abbreviated commands are:

1. Faster to type (as you said).

2. More compact. Allowing the reader to "see" more of the structure in one go.

You see, you can expand things out too much, in my opinion. Also, it only takes a few minutes for a (reasonable) programmer to get that "S" means "set", "I" means "if", etc. Commands always appear in the same part of the code (unlike some languages), there are not that many to learn, and once you know them, you can read them! So why bother with extra letters? After all "set" is itself only a token for "put the value on the right of the = into the variable on the left" or something like that. It could be "make" or "update" or "<-" (look up the programming language "APL" on Wikipedia if you want a real scare).

I think the main problem with "old fashioned" code is usually poor label/variable names, squeezing too much on one line, and lack of indenting. It's hard to read mainly because of the other parts of the code, not because the commands are single characters. Some things should be longer to better convey what they are for (though not as long as COBOL), and more lines can help convey program structure.

While I'm here, I'm not that keen on spurious spaces in "set x = 1", as opposed to "set x=1". It just spreads out the important stuff - spaces are there to split out the commands. :-)

Mike

Hi. It depends on what you mean by "certain criteria". If it's a special file name then you could amend the FileSpec property to skip the ones you don't want yet. If it's in the content, then maybe you should be reading in the file (creating a copy or allowing archive so the original continues to exist) and sending it as a message into Ensemble that can then be held up in a business process until it is ready to send out to an Operation that creates an output file. That is the way Ensemble is supposed to work, so you get a full record of what happened, etc.

(Otherwise, I'm pretty certain that there are actions that reset the list of processed files - maybe resetting that file path or restarting the job - but I cannot find the documentation about it at the moment.)

Hi. It sounds like a good idea! I can think of a number of interfaces I've seen where the target application - a small local system - struggled to keep up with the flow of updates from a large PAS.

The only thing I've done like it was complicated, and had to use a proper Business Process. In that case the "department" was neonatal, so we were only interested in patients admitted to a particular ward. The solution looked for HL7 admissions and transfers to that ward, and when found used the data to create a local record in Caché. Then all other types of message could be checked against those records to see if they needed further processing and passing on (to "BadgerNet" eventually, when a full episode was built up). Of course this only works if you can define a clear "starting point" that can be spotted in the message stream. / Mike

Hi, I also help support an NHS trust using Ensemble, and it also has ever-growing PDF files in its messages. We hold our incoming PDFs as external file streams and it helps, though you have to bear in mind that the files are not going to be part of the Caché backup for Disaster Recovery, etc. (Not sure about mirroring. I'd assume they don't get mirrored either, as the contents are not in the journal.)
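
If it helps, the pattern is just a file-backed stream property pointing at a directory outside the database, along these lines (the class, property name and directory here are only an illustration of the idea, not our real code):

    Class xxx.Mess.Document Extends Ens.Request
    {

    /// Held as an external file stream: the PDF bytes live in this directory
    /// on disk rather than inside the database (so they are not in its backup).
    Property Report As %Stream.FileBinary(LOCATION = "D:\EnsembleStreams\PDF");

    }

That directory then needs its own backup and replication arrangements, which is the Disaster Recovery point above.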

As yet we don't have as big a problem as you - fewer messages, and we only keep 92 days - but that is just as well, as the PDF files are converted to base64 and embedded in HL7 v2 messages, so they do take up space in the database, and the journal, and the backup, which has meant we needed to expand the disk space recently. I can recommend keeping Ensemble on a virtual server with disk expansion on demand.

I tend to think the problem is not going to go away whatever you do. I assume, like us, the PDFs come from 3rd party applications and they are always going to be producing ever more and prettier documents as time goes by. So I recommend looking at more disk. :-)  / Mike

An alternative solution that works for us is to use the "Schedule" setting to run it for 30 minutes (to allow some leeway, as the job takes a while), and then set the "Call Interval" setting to something very large like "999999". This is for an inbound SQL adapter. (If something goes wrong with the overnight run then we manually remove the "Schedule" setting and restart the Service. Once complete, we put the setting back ready for the next night.)
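
From memory, the two settings end up looking something like this (the times are just an example; the schedule string is a comma-separated list of START/STOP events, with "*" as a wildcard for any date part):

    Schedule:      START:*-*-*T02:00:00,STOP:*-*-*T02:30:00
    Call Interval: 999999

So the adapter wakes at 02:00, polls once, and the huge interval means it never polls again before the 02:30 stop.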

Hi. We had a site upgrade from 2012.2.5 to 2017.1.0 last year, and it included a mirror. We had very few code changes needed - just an issue with it failing to save objects inside a Business Process where we had used some "unusual" structures. The upgrade itself went smoothly. The only issue was afterwards, when the next backup was a "full" one instead of the scheduled "partial", using more space than expected. Our Production was much smaller than yours, with only about 120 items, and it is hard to say how much effort went into pre-release testing as it was "fitted in" around other work by a team of people. Maybe a couple of man-months?

To be honest, it all depends on how much custom or unusual code you have, and how much testing the customer wants. We upgraded a development namespace and re-ran test messages through all the important paths and compared the result before and after upgrade. Plus some connection testing to cover all the "types" we used: ftp, web service, HL7, etc.  In our case the testers included people from the user side, so they could decide when they were happy with it.

InterSystems were very helpful. We raised a call a few months before and they gave advice on testing and desk checked our detailed plan of the upgrade itself, including how to do the mirror.

Good luck.

Amir's answer, option 2, is what we did. The XML we sent had to be Base64 encoded to allow it to be sent, so our code looked a bit like this:

Method ImportEpisode(pRequest As EnsLib.EDI.XML.Document, Output pResponse As Ens.Response) As %Status
{
  ; Output as 8-bit UTF-8 regardless of the Caché default (else Base64Encode gives an ILLEGAL VALUE error)
  Set sendingXML = pRequest.OutputToString("C(utf-8)",.tSC)
  If $$$ISERR(tSC) Quit tSC
  $$$TRACE("Sending: "_sendingXML)
  ; The web service expects (and returns) the XML as a Base64-encoded string
  Set sending = $system.Encryption.Base64Encode(sendingXML)
  Set tSC = ..Adapter.InvokeMethod("ImportEpisode",.result,sending,{plus some other id parameters})
  If $$$ISERR(tSC) Quit tSC
  Set resultXML = $system.Encryption.Base64Decode(result)

...etc.

I hope this is useful to you.

Mike

I had a similar situation and ended up with an Ensemble Service reading in the metadata file (like your xml) and composing an Ensemble message with that information, including a file reference for the data file (your pdf). This meant that the metadata file could be automatically archived by Ensemble, but I then had to archive the data file instead, using calls to the OS like you have done for your xml file above.

In my case this did make some sense, as I wanted to convert the data file using an OS call to an "exe", and at least the messages in Ensemble held all the meta information, file name, etc. But I also think it was a bit clumsy, so I would be interested in any better ideas.
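
For what it's worth, the message ended up shaped roughly like the sketch below (invented names, not the real class): the parsed metadata travels as ordinary properties, and only the path of the big data file is carried along.

    /// Hypothetical message carrying the parsed metadata plus a file reference.
    Class xxx.Mess.DocumentRef Extends Ens.Request
    {

    Property PatientId As %String;

    Property DocumentType As %String;

    /// Full path of the data file (the pdf) left on disk for later steps;
    /// archiving that file is then our job rather than Ensemble's.
    Property DataFilePath As %String(MAXLEN = 500);

    }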

Regards,

Mike

The timeouts for the web front end can be frustrating. Where we had searches that we wanted to run regularly, we ended up creating a Business Service class that does embedded SQL queries on the Ens.MessageHeader table and puts the results into a simple text message that then gets sent as an attachment to an email. This gives us our daily stats in a CSV format to copy and paste into a spreadsheet. Yes, we could have built an XML spreadsheet file directly, but that is tricky, and not much of an advantage as we want to build on it each day without the query working through many days' worth of data.

We also had a go at creating something using Ens.BusinessMetric for a "recent activity" graph, but the end result was a bit limited in how it could be displayed and analysed (using DeepSee), as we only had the Ensemble license.

Mike

Hi,

We have an Ensemble Service class that extends "EnsLib.SOAP.Service" and provides a web method with a parameter that is a class that includes a property of "%Stream.FileBinary", plus all the metadata in other properties. This allows the source application, written in .net, to send us documents fairly easily, as all the translation back and forth into xml, etc. is done for you (I assume it is also easy to do at the .net end). There is not a lot of code needed to define the class and web service; then it just needs to build a message object with that same input class as a property, and send that onwards as usual.
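
A minimal sketch of that shape is below, with invented names throughout (the real classes have many more metadata properties). The pattern is just a web method that hands the call over to the normal business service machinery:

    /// The class the .net application sends us: XML-enabled so SOAP handles it.
    Class xxx.Mess.Document Extends (%Persistent, %XML.Adaptor)
    {

    Property FileName As %String;

    /// The document itself; SOAP carries it as base64 inside the envelope.
    Property Content As %Stream.FileBinary;

    }

    /// Onward Ensemble message wrapping the uploaded document.
    Class xxx.Mess.DocumentRequest Extends Ens.Request
    {

    Property Document As xxx.Mess.Document;

    }

    /// The Ensemble service exposing the web method.
    Class xxx.Svc.DocumentUpload Extends EnsLib.SOAP.Service
    {

    Parameter SERVICENAME = "DocumentUpload";

    Property TargetConfigName As %String;

    Parameter SETTINGS = "TargetConfigName";

    Method SendDocument(pDoc As xxx.Mess.Document) As %String [ WebMethod ]
    {
        // Hand the SOAP call to the standard business service plumbing.
        Set tSC = ..ProcessInput(pDoc, .tOutput)
        Quit $Select($$$ISOK(tSC):"OK", 1:$System.Status.GetErrorText(tSC))
    }

    Method OnProcessInput(pInput As xxx.Mess.Document, Output pOutput As %RegisteredObject) As %Status
    {
        // Build the onward message with the input class as a property, as usual.
        Set tRequest = ##class(xxx.Mess.DocumentRequest).%New()
        Set tRequest.Document = pInput
        Quit ..SendRequestAsync(..TargetConfigName, tRequest)
    }

    }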

(Unfortunately, we then have to convert the file into base 64 encoded chunks and insert into segments in one of those MDM^T02 messages like you do. But that's another story.)

Mike

Hi Steve,

I have done something like you describe. I used BPL, and at the time tried to keep away from using bits of code, but it got complex and in retrospect I'm not sure it was the best way. The diagrams are nice, but I think a bit of well written code might have been easier to follow!

First I created a "TempStore" class with an "MRN" (Medical Records Number) property and no permanent storage. This is used as the target class for a transform that pulls out the patient id and puts it in that property.

In the BPL Process I added an instance of the TempStore class to the BPL context object, and the first activity in the diagram is the transform with Source of "request" and Target "context.TempStore".

With the MRN found, I then use code like the following in the Value of an "assign" activity to put the matching stored object into another context property, "context.BNetEpisode", which is already defined as the same class.

##class(...).MRNIndexOpen(context.TempStore.MRN,4)

An "if" activity with a Condition like "$IsObject(context.BNetEpisode)" is used to see if anything was found, and create a new one if required by setting the "context.BNetEpisode.MRN" property equal to "context.TempStore.MRN".

The "context.BNetEpisode" property is then be used as the Target for "transform" activities later on with Create = "existing" used. Ensemble does a save automatically when the Process completes.

I hope this makes sense. (I cannot provide the full code as it belongs to the customer, and anyway it gets a lot more complex as there are 3 types of inbound message, one of which was an HL7 v3 document, and it was actually using an xml document inside the stored object to hold much of the data - but that's another story).

Mike

There may be arguments in favour of either solution, depending on the types of data involved and programmer preference, but if you are to embrace the full Ensemble "model" then I think the second option is far better.

By putting the non-HL7 data into a message sent to a business process, it gets stored and becomes visible in Ensemble in its raw form (or as close as you can make it) on the message queue into that Process. This makes support much easier, as you can see the before and after messages in the Ensemble GUI. Also, a Business Service should do a minimum of work so that messages are input as fast as possible.

Using DTL is also the cleanest option, since it is meant for transforming messages, but I admit that sometimes this is more effort than it is worth. I have had to deal with complex xml documents, and ended up writing methods in my custom message class to make the extraction easier to understand in the DTL.
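
To illustrate that last point: a small helper on the message class can hide the parsing so the DTL stays readable. The sketch below is only an assumption of how such a helper might look (the names, the ..Content stream property and the XPath approach are all invented, not the actual code).

    /// Return the text of a single node from the XML held in the ..Content stream.
    Method GetText(pContext As %String, pExpr As %String) As %String
    {
        // Parse the stored XML; a real version would cache the parsed document.
        Set tSC = ##class(%XML.XPATH.Document).CreateFromStream(..Content, .tDoc)
        If $$$ISERR(tSC) Quit ""
        // string() forces a simple value result rather than a node set.
        Set tSC = tDoc.EvaluateExpression(pContext, "string(" _ pExpr _ ")", .tResults)
        If $$$ISERR(tSC) Quit ""
        If tResults.Count() = 0 Quit ""
        Quit tResults.GetAt(1).Value
    }

A DTL "set" action can then use a value like source.GetText("/Episode/Patient","Name") rather than repeating the extraction logic in every rule.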

Mike