Just to expand on David's response - the File Outbound adapter will create a new file per message, assuming the filename settings are configured so that the filenames it produces are unique per message.

If you were to set the Filename property to a specific value rather than using the timestamp specifiers (so, for example, setting it so that the filename is output.txt), then each message should write its data to the end of the file, giving you a single file with all of the entries.
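
As a concrete (hypothetical) pair of settings - %f being the original filename and %Q an ODBC-style timestamp, though your own specifiers may well differ:

Filename = %f_%Q.txt
Filename = output.txt

The first produces a uniquely named file per message, while the second appends every message to the same output.txt.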

You won't be able to do this using the built-in viewer; however, you can query the SQL tables directly and then interrogate the results using your preferred method.

For example, I had a spike in activity on the 20th of December which filled a disk a lot more than usual, but the purges meant I couldn't just check the messages as they had since been deleted. So I ran the following in the SQL option of the Management Portal:

SELECT *
FROM Ens_Activity_Data.Days
Where TimeSlot = '2020-12-20 00:00:00'

I then used the print option to export to CSV, and used a simple pivot table to work through each host name to see which had a dramatically higher number of messages than I would usually expect. (I actually exported a few days' worth of data to compare between days, but hopefully you get the idea.)
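
If you'd rather skip the spreadsheet step, a rough ObjectScript sketch along the same lines would be (the HostName and TotalCount column names are from memory, so double-check them against Ens.Activity.Data.Days on your version):

Set stmt = ##class(%SQL.Statement).%New()
Do stmt.%Prepare("SELECT HostName, SUM(TotalCount) AS Msgs FROM Ens_Activity_Data.Days WHERE TimeSlot = ? GROUP BY HostName ORDER BY Msgs DESC")
Set rs = stmt.%Execute("2020-12-20 00:00:00")
While rs.%Next() { Write rs.%Get("HostName"), ": ", rs.%Get("Msgs"), ! }

That gives a per-host message count for the day in question, which is essentially what the pivot table was doing for me.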

You could always explore using Grafana to produce a nice visual representation of the data that you've surfaced using a method like this.

Hey Werner.

I know I have ignored your request on how to call a class method (Jeffery has you covered by the looks of things), but you could use $PIECE to break the string apart and then insert what you need.

For example if "source.{PhoneNumberHome(1).Emailaddress}" is equal to "myemail@myemaildomain.co.uk" then 

"test"_$PIECE(source.{PhoneNumberHome(1).Emailaddress},".",1)_"test"_"."_$PIECE(source.{PhoneNumberHome(1).Emailaddress},".",2,*)

will return: 

"testmyemail@myemaildomaintest.co.uk"

The idea being that we

  • Start the new string with "test"
  • Take everything before the first period with $P(source.{PhoneNumberHome(1).Emailaddress},".",1)
  • Add "test" in again
  • Add back the period that $PIECE drops because the period is being used as the delimiter
  • Add everything from the second period onwards with $P(source.{PhoneNumberHome(1).Emailaddress},".",2,*) (see the sketch below)
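
If you want to try this outside of a DTL first, here is a quick terminal sketch of the same idea using a plain variable in place of the virtual property path:

Set email = "myemail@myemaildomain.co.uk"
Set newEmail = "test"_$PIECE(email,".",1)_"test"_"."_$PIECE(email,".",2,*)
Write newEmail

which writes testmyemail@myemaildomaintest.co.uk.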

Hey ED Coder.

There are built in classes to manage this in a nicer way.

##class(Ens.Util.Time).ConvertDateTime() is a good starting point. Here is the filled-in classmethod call for easy copy/pasting:

Set NewDate = ##class(Ens.Util.Time).ConvertDateTime(HL7Date,"%Y%m%d%H%M%S","%Y-%m-%d %H:%M:%S")

The values for each section of the date are defined by the following: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
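
As a quick sketch with a made-up input value (the HL7Date value here is purely for illustration):

Set HL7Date = "20201220153000"
Set NewDate = ##class(Ens.Util.Time).ConvertDateTime(HL7Date,"%Y%m%d%H%M%S","%Y-%m-%d %H:%M:%S")
Write NewDate

which should write something along the lines of 2020-12-20 15:30:00.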

Hey Ahmad.

If this is all happening in the one production, then you will have an inbound service for each port.

You could set up a router with a specific rule for each destination, and then use the rule constraint to restrict each rule to its source service. That way, if you point all three services at the one router, it will only apply the rule that has the originating service listed as a constraint.

For example:
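
To sketch the idea (the class, service, and operation names below are placeholders, and in practice you'd build this in the Rule Editor rather than editing the class by hand), the exported rule class could look something like:

Class Demo.MultiSourceRouterRule Extends Ens.Rule.Definition
{

Parameter RuleAssistClass = "EnsLib.HL7.MsgRouter.RuleAssist";

XData RuleDefinition [ XMLNamespace = "http://www.intersystems.com/rule" ]
{
<ruleDefinition alias="" context="EnsLib.HL7.MsgRouter.RoutingEngine">
<ruleSet name="" effectiveBegin="" effectiveEnd="">
<rule name="FromServiceA">
<constraint name="source" value="HL7.In.ServiceA"></constraint>
<when condition="1">
<send transform="" target="HL7.Out.DestinationA"></send>
<return></return>
</when>
</rule>
<rule name="FromServiceB">
<constraint name="source" value="HL7.In.ServiceB"></constraint>
<when condition="1">
<send transform="" target="HL7.Out.DestinationB"></send>
<return></return>
</when>
</rule>
</ruleSet>
</ruleDefinition>
}

}

Because each rule is constrained on its source, a message arriving from HL7.In.ServiceA will only ever be evaluated against the FromServiceA rule.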

I hope that helps!

Hey, this should be controllable using the AckMode setting on the service:

Control of ACK handling. The options are:
  • Never: Do not send back any ACK.
  • Immediate: Send back (commit) ACK reply message immediately upon receipt of the inbound message.
  • Application: If message passes validation, wait for ACK from target config item and forward it back when it arrives.
  • MSH-determined: Send back ACK reply messages as requested in the MSH header of the incoming message.
  • Byte: Send back an immediate single ACK-code byte instead of an ACK message. Byte ASCII code 6 = 'OK', ASCII code 21 = 'Error'

In your case, you'll be interested in the Application ACK.

So if your HisEmrRouter is running with 10 jobs and then sends to ADTRoutingRule, which has half the number of jobs available, I can see how this would introduce some form of bottleneck.

It's worth considering the impact of increasing your pool size beyond 1 when working with healthcare data; the details are noted here: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

It also mentions not having the pool size exceed the number of CPUs (I assume it means CPU cores); however, this has been contested in the past by other users, as seen in the comments from Eduard here: https://community.intersystems.com/post/ensemble-introduction-pool-size-...

If FIFO is not required for your use case, I would at the very least try setting the two pool sizes to the same value.

Hey Yone.

It doesn't look like there was an ORU^R30 in the standard until HL7 2.5, which would explain why there isn't a schema.

Depending on your source, you might want to look at how the source system thinks it is providing you a v2.3 ORU^R30, as it's possible they're using the 2.5 schema and then incorrectly calling it 2.3 in the header.

If that is the case, you could create your own schema based off of the 2.5 ORU^R30 to match what you're receiving.

Hey Yone - there are a few things going on here, but I'll try to explain as best I can.

For starters, calling "request.GetValueAt("5:3.2")" returns the value of field 3, component 2 of whatever segment is at row 5 of your inbound message. If in your case this is an OBX, then this only returns the content of OBX:3.2 (which is some variation of the Observation Identifier Text).

When you output the entire HL7 message to a string and then run "$PIECE(request.OutputToString(), "OBX", 2)", you are getting every character in the string after the literal text "OBX".

So if we use this fake message as an example:

MSH|blahblahblah
PID|blahblahblah
OBR|blahblahblah
NTE|1|blahblahblah
OBX|1|ST|NA^SODIUM^L||139|mmol/L|136-145|N|||F

Calling "request.GetValueAt("5:3.2")" and then evaluating its length would give you 6, as OBX:3.2 is "SODIUM" in the above. If you then output the above into a string and then checked the length of the output from "$PIECE(request.OutputToString(), "OBX", 2)"  you would be evaluating all that is highlighted above.

Now with that being said, it is not a good idea to assume that a specific row within a message will always be a set segment type. Certainly in my case, all it would take is a few NTE segments being present in a message, and "5:3.2" could easily end up being part of an OBR or NTE.

Hey Kyle - it looks like I gave you some bad advice. I have just noticed that you're working with a pre-2.4 HL7 schema, and my examples were all based on HL7 2.4 (even where I corrected myself in the second reply). Also in my corrections, I continued to provide examples for a common PID transform, whereas your screenshot shows you doing the transform at the whole message level.

This should work for you:

Sorry about that - hopefully it works for you now :)

Hey Kyle.

I would approach this by creating a separate count to the one created by the DTL foreach loops, and then iterating through the list of IDs and only moving across the ones you want, without leaving a blank entry in the target message (and remembering to increment the separate count for each ID you copy to the target message).
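
In rough ObjectScript terms, the idea looks something like the sketch below (PID:3 as the repeating ID field and the "NHS" assigning-authority filter are assumptions for illustration, not your actual message structure):

Set tKept = 0
For i=1:1:source.GetValueAt("PID:3(*)") {
    Set tId = source.GetValueAt("PID:3("_i_")")
    If $PIECE(tId, "^", 4) = "NHS" {
        Set tKept = tKept + 1
        Do target.SetValueAt(tId, "PID:3("_tKept_")")
    }
}

The key point is that tKept only increments when an ID is actually copied, so the target field ends up with no blank repetitions.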

I have made an example within a common PID transform:

This then gave me:

Hey Oliver.

This webinar is over on learning.intersystems.com and includes the code sample as an XML file that you can import into your environment. The link to the code can be found on this page: https://learning.intersystems.com/course/view.php?id=623

Make sure you give the code a good read before you try to run it :)

EDIT: Direct link to the code is here: https://learning.intersystems.com/mod/resource/view.php?id=2388