Hi Murillo.

From that screenshot - an ORU_R01 should go to ManageRIS every time, and go to ManageEDM as well if the OrderStatus is 160, so I'm at a loss as to why it isn't working as intended.

My next steps would be checking the general tab to be sure the rule type is set to "HL7 Message Routing Rule" or, if that fails, creating a new rule to make sure nothing has gone weird in the background for this specific rule.

Otherwise - if no one else is able to point you in the right direction here, it might be worth raising it with the WRC, as it could be a bug specific to that version of Ensemble.

Hi Murillo.

You will need a when to specify when the action should be triggered, and using WHEN 1 (a when whose condition is simply 1, so it always evaluates as true) is the easiest way of achieving what you need. However, I think I see your problem with the ordering.

The reason your rule only sends to ManageRIS when the ManageRIS item is above the OrderStatus check is the RETURN action.

The RETURN stops any actions beyond that point from being processed when the conditions of the WHEN are met. So in your case: when OrderStatus = 160, transform and send, then do not process anything else and move on to the next message.

So what you will want to do is remove the RETURN block to allow the second WHEN in rule two to be run regardless of the outcome of the first WHEN.
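
For reference, with the RETURN removed, the generated rule class ends up looking roughly like this. The class and target names are placeholders, and I'm assuming OrderStatus is being read from ORC:5 purely for the sake of the example:

Class Demo.Rules.ORURouting Extends Ens.Rule.Definition
{

Parameter RuleAssistClass = "EnsLib.HL7.MsgRouter.RuleAssist";

XData RuleDefinition [ XMLNamespace = "http://www.intersystems.com/rule" ]
{
<ruleDefinition alias="" context="EnsLib.HL7.MsgRouter.RoutingEngine">
<ruleSet name="" effectiveBegin="" effectiveEnd="">
<rule name="ORU_R01">
<when condition="1">
<send transform="" target="ManageRIS"/>
</when>
<when condition="HL7.{ORC:5}=&quot;160&quot;">
<send transform="" target="ManageEDM"/>
</when>
</rule>
</ruleSet>
</ruleDefinition>
}

}

With no return inside the first when, every ORU_R01 is sent to ManageRIS and the second when is still evaluated, so ManageEDM only receives the message when its condition matches.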

Within the Production under Production Settings, you should currently find a Document button which will produce a report of everything within your production. However, depending on the production size, this could easily be overkill.

2019 brings a new option called "Interface Maps" where you can get a graphical view of message flows, along with the processes, routers, and rules for the individual sections of your production. It's a lot cleaner than using the Documentation generator, but if you need to show multiple routes, you're likely to want to go through each one and take a screenshot. I also found that where I have a router with lots of rules/transforms, I need to scroll the screen to see the bottom of the display.

Information on this can be found here.

I haven't come across anything built in as standard that would do this in itself, but I guess it's something you could create within your environment.

I have something a bit similar, but the index is populated by a CSV received daily from another organisation, and the process then compares each HL7 message against the index and only sends it on if the patient is present in that table.
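
As a very rough sketch of that pattern (the table, column, and method names here are all just placeholders), the check in a custom code-based business process boils down to something like:

Method PatientIsInIndex(pRequest As EnsLib.HL7.Message) As %Boolean
{
    // Pull the patient identifier from PID:3 of the inbound message
    Set patientId = pRequest.GetValueAt("PID:3.1")

    // Check the locally held index table (populated from the daily CSV)
    Set found = 0
    &sql(SELECT COUNT(*) INTO :found FROM Demo_Index.Patient WHERE MRN = :patientId)

    // Only send onwards when the patient appears in the index
    Quit (found > 0)
}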

I had this exact issue last week, and this is how I got around it. For clarity, I wanted to pass the Dynamic Object from a process to an operation.

I created my dynamic object within the Process, and then used the %ToJSON Method to put the JSON within a GlobalBinaryStream (which can be passed through the request).

In the Operation, I then use the %FromJSON method of %DynamicAbstractObject to rebuild the Dynamic Object within the operation.
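
As a rough sketch of the two ends (the message class and property names are placeholders, and I'm assuming a code-based process and operation; the relevant parts are %ToJSON writing into the stream and %FromJSON reading it back):

Class Demo.Msg.JSONRequest Extends Ens.Request
{

Property JSONStream As %Stream.GlobalBinary;

}

// In the Process: serialise the dynamic object into the stream on the request
Set obj = {"PatientId":"12345", "Status":"F"}
Set request = ##class(Demo.Msg.JSONRequest).%New()
Do obj.%ToJSON(request.JSONStream)
Set tSC = ..SendRequestAsync("My.Target.Operation", request)

// In the Operation: rebuild the dynamic object from the stream
Set obj = ##class(%DynamicAbstractObject).%FromJSON(pRequest.JSONStream)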

Hi Andrew.

I don't think the operation to a downstream system would be the appropriate place for adjusting the HL7 content.

Using transforms within a router will be the best approach for this, and while creating a transform per message type might seem a little cumbersome, you should be able to remove some of the legwork by using sub-transforms.

For example, if you were looking to adjust the datestamp used in an admission date within a PV1, rather than repeating that work in every message transform that contains a PV1, you create a sub-transform for the PV1 segment once and then reference it in your transform for each message type. If you then have additional changes to the PV1 which are message specific (say, a change for an A01 but not an A02), you can make those in the message-specific transform.
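
As a rough illustration (the sub-transform class name is a placeholder), the DTL XData for the message-level transform then contains something along these lines, with the shared PV1 work delegated to the sub-transform:

<transform sourceClass='EnsLib.HL7.Message' targetClass='EnsLib.HL7.Message' sourceDocType='2.4:ADT_A01' targetDocType='2.4:ADT_A01' create='copy' language='objectscript'>
<subtransform class='Demo.Transforms.PV1Common' targetObj='target.{PV1}' sourceObj='source.{PV1}' />
</transform>

Any A01-only changes then sit as normal assign actions alongside that subtransform call in the message-level DTL.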

As for transforming date/times in HL7, I have faced my share of faff with suppliers when it comes to this. The best approach from within the transforms is to use the built-in Ensemble utility function "ConvertDateTime", listed here.
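
For what it's worth, the value expression in the assign usually ends up looking something like the line below; I'm quoting the format tokens from memory, so do check the ConvertDateTime entry in the utility function reference for your version:

ConvertDateTime(source.{PV1:44},"%Y%m%d%H%M%S","%Y-%m-%d %H:%M:%S")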

What type of business service are you using? If you are using a single job on the inbound, I guess you're hitting a limit on how fast the adapter can handle each message (in your case, around 15ms per message, which works out to roughly 65-70 messages per second from a single job).

You could look at increasing the pool size and jobs per connection if you're not worried about the order in which the messages are received into your process.
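
For reference, the pool size lives on the item definition in the production class, and Job Per Connection is a setting on the inbound TCP adapter. The item name below is a placeholder, and I'm assuming a TCP-based HL7 service:

<Item Name="From_PAS" ClassName="EnsLib.HL7.Service.TCPService" PoolSize="4" Enabled="true" Foreground="false" LogTraceEvents="false" Schedule="">
  <Setting Target="Adapter" Name="JobPerConnection">1</Setting>
</Item>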

Hi Alexandr.

If you are looking to run a task at specific times, you could create a new task which extends %SYS.Task.Definition to then be selectable as an option from the task manager.

For example, I have a folder from which I need to periodically delete files older than x days.

To achieve this, I have a class that looks like this:

Class DEV.Schedule.Purge Extends %SYS.Task.Definition
{

Parameter TaskName = "Purge Sent Folder";

Property Directory As %String;

Property Daystokeep As %Integer(VALUELIST = ",5,10,15,20,25,30") [ InitialExpression = "30" ];

Method OnTask() As %Status
{
    Set tSC = ..PurgeSentFolder(..Directory, ..Daystokeep, "txt")
    Quit tSC
}

Method PurgeSentFolder(Directory As %String, DaysToKeep As %Integer, Extension As %String) As %Status
{
    Set tSC = $$$OK

    // Calculate the cutoff date; files last modified before this date will be deleted
    Set BeforeThisDate = $ZDATETIME($HOROLOG - DaysToKeep _ ",0", 3)

    // Gather the list of files with the given extension in the specified directory
    Set rs = ##class(%ResultSet).%New("%File:FileSet")
    Set ext = "*." _ Extension
    Do rs.Execute(Directory, ext, "DateModified")

    // Step through the files in DateModified order
    While rs.Next() {
        Set DateModified = rs.Get("DateModified")

        // Delete files last modified before the cutoff date
        If BeforeThisDate ] DateModified {
            Set Name = rs.Get("Name")
            Do ##class(%File).Delete(Name)
        }

        // Stop when we get to files with last modified dates on or after the cutoff date
        If DateModified ] BeforeThisDate {
            Quit
        }
    }

    Quit tSC
}

}

Then I created a new task in the scheduler, selected the namespace where the new class exists, and then filled in the properties and times I want the task to run.

Hi Eric.

My first check would be looking at the console log for that instance to see if there's anything wobbling in the background, specifically any entries around the time the monitor thinks it has gone down.

Failing that, it's probably worth going to WRC. The last thing I think you need this close to Christmas is the Primary dropping and you needing the Mirror to be working.