Hey Nigel.

The key takeaway from any attempt to back up a running environment is to use freeze/thaw scripts.

The idea is that the backup solution prompts the IRIS system to freeze the databases while the backup takes place, and then thaws them after the fact.

I recently embarked on this myself and posted an article showing my journey, based on using VMware and a Windows environment. However, this should be easily adaptable to other backup solutions.

The only change I have made since that article is that I am no longer passing the login credentials via a separate file; instead, I have OS authentication enabled within my IRIS system so that the user account running the script can automatically authenticate at runtime.
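As a rough sketch of what the freeze/thaw hooks can call: the ExternalFreeze and ExternalThaw class methods live in Backup.General in the %SYS namespace. The class and method names below (other than Backup.General itself) are illustrative, and error handling is minimal; your backup tool's pre/post scripts would invoke something like this.

```objectscript
/// Illustrative wrapper for backup freeze/thaw hooks.
/// Backup.General is the documented IRIS class; Demo.BackupHooks is a made-up name.
Class Demo.BackupHooks [ Abstract ]
{

/// Called by the backup solution before the snapshot is taken.
/// Writes are suspended; journaling continues so the instance stays responsive.
ClassMethod Freeze() As %Status
{
    New $Namespace
    Set $Namespace = "%SYS"
    Quit ##class(Backup.General).ExternalFreeze()
}

/// Called by the backup solution once the snapshot has completed.
ClassMethod Thaw() As %Status
{
    New $Namespace
    Set $Namespace = "%SYS"
    Quit ##class(Backup.General).ExternalThaw()
}

}
```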

Hey ED Coder.

You should be able to just use the EnsLib.HL7.Service.HTTPService class for your Service; the request can then be sent to the port you define, as with any other HL7 inbound. You can also specify the message schema in the same way.

The "url" you provide can be the IP of the machine with the port afterwards. For example, I just set up an inbound on a dev environment and then sent a message from Insomnia.

I sent it as a POST to http://IP:PORT with the HL7 message as the content of the HTTP body:
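If you wanted to do the same from ObjectScript rather than Insomnia, a minimal sketch using %Net.HttpRequest might look like this (the IP, port, and message content are placeholders):

```objectscript
// Hypothetical sender for testing an EnsLib.HL7.Service.HTTPService endpoint.
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "192.168.1.10"   // IP of the machine running the service
Set req.Port = 9980               // port defined on the inbound service
// The HL7 message goes in as the raw HTTP body
Do req.EntityBody.Write("MSH|^~\&|SENDER|FAC|RECEIVER|FAC|20210101120000||ADT^A01|1234|P|2.4")
Set sc = req.Post("/")
If 'sc Write $System.Status.GetErrorText(sc)
```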

I've cobbled the following together from other posts that come close to this (for example, here and here). Running each line in Terminal should disable all the services in the production of the namespace you run it in:

zn "NAMEOFNAMESPACEHERE"

Set tRS = ##class(%ResultSet).%New("Ens.Config.Production:EnumerateConfigItems")

Set tStatus = tRS.%Execute("Production.Name.Here", 1)

While tRS.%Next(.tStatus) {set sc = ##class(Ens.Director).EnableConfigItem(tRS.%Get("ConfigName"), 0, 1)}

Line 1 sets your namespace; lines 2 and 3 bring back the list of services (the flag set to 1 on the third line specifies listing services; setting it to 3 will return all operations instead).

Line 4 is a while loop that iterates through the result set and uses Ens.Director.EnableConfigItem to disable each config item by name (the second argument, 0, is the enable/disable flag, and the third, 1, tells it to update the production).

This could probably be made nicer and more efficient (e.g. disabling all of the config names without updating the production, and then updating the production once using "##class(Ens.Director).UpdateProduction()" to avoid doing it once per entry); however, I hope it works as a starting point.
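The more efficient variant described above could be sketched as follows (same assumptions as before: replace the namespace and production names with your own). Each item is disabled with the update flag set to 0, and the production is updated once at the end:

```objectscript
zn "NAMEOFNAMESPACEHERE"
Set tRS = ##class(%ResultSet).%New("Ens.Config.Production:EnumerateConfigItems")
Set tStatus = tRS.%Execute("Production.Name.Here", 1)
// Third argument 0 = don't update the production per item
While tRS.%Next(.tStatus) {
    Set sc = ##class(Ens.Director).EnableConfigItem(tRS.%Get("ConfigName"), 0, 0)
}
// Apply all the changes in a single production update
Set sc = ##class(Ens.Director).UpdateProduction()
```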

Hey Scott.

If you were open to having a Service in your production to which your function sends its two variables (with the Service then passing them on to your Operation), you could have something like this:

ClassMethod SendPage(PagerNumber As %String, Message As %String) As %Status
{
    //The String passed to Ens.Director must match a service name within the active production
    Set tsc = ##class(Ens.Director).CreateBusinessService("Pager From Function Service",.tService)
    If $IsObject(tService)
    {
        Set input = ##class(osuwmc.Page.DataStructures.Page).%New()
        Set input.PagerNumber = PagerNumber
        Set input.Message = Message
        Set tsc = tService.ProcessInput(input)
        Quit tsc
    }
    Else
    {
        Quit $$$ERROR($$$GeneralError,"Unable to create Business Service")
    }
}

and then you have a custom service that looks a little like this:

Class osuwmc.Services.PageService Extends Ens.BusinessService
{
Property TargetConfigName As Ens.DataType.ConfigName;

Parameter SETTINGS = "TargetConfigName";

Method OnProcessInput(pRequest As osuwmc.Page.DataStructures.Page) As %Status
{
    set tsc=..SendRequestAsync(..TargetConfigName, pRequest)
    
    Quit tsc
}

}

Then when you add the service to your production (remembering to match its name to the one passed to CreateBusinessService), you can select your target operation as a config item, and when the function is triggered it should go Function --> Service --> Operation.

Edit: my Service Class example had an error in the SETTINGS parameter, I have corrected it. 

Hey Scott.

I think you can achieve this by setting the second parameter to a comma-delimited list of the form item names, and then passing each value afterwards (in the same order as the names).

For example:

Method Sample(pReq As osuwmc.Page.DataStructures.Page, Output pResp As %Net.HttpResponse) As %Status
{
    Set FormItems = "PNo,PMsg"
    Set tSC = ..Adapter.Post(.pResp,FormItems,pReq.PagerNumber,pReq.Message)
    If ('tSC)
    {
        $$$LOGERROR($System.Status.GetErrorText(tSC))
    }
    Quit tSC
}

Hey Mufsi, I had this happen to me and according to WRC this is a known issue fixed in 2020.1.1.

It is caused by an attempt to get an exclusive lock on a specific node for the mirrored database (which is read only) where the task is being scheduled.

There is an alternative workaround to the steps you took: use the ^TASKMGR interface from the %SYS namespace in a Terminal session, as it doesn't try to perform any write operations on the read-only mirror databases.
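For reference, launching that interface from a Terminal session looks like this:

```objectscript
// Switch to the %SYS namespace, then start the character-based Task Manager
zn "%SYS"
Do ^TASKMGR
```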

Just to expand on David's response: the File Outbound Adapter will create a new file per message, assuming the filename settings are configured so that the filenames it produces are unique per message.

If you were to set the Filename property to a fixed value rather than using the timestamp specifiers (for example, setting the filename to output.txt), then each message will append its data to the end of the file, giving you a single file with all of the entries.

You won't be able to do this using the built-in viewer; however, you can query the SQL tables directly and then interrogate the results using your preferred method.

For example, I had a spike in activity on the 20th of Dec which made a disk fill a lot more than usual, but the purges meant I couldn't just check the messages as they had since been deleted. So I ran the following in the SQL option in the management portal:

SELECT *
FROM Ens_Activity_Data.Days
Where TimeSlot = '2020-12-20 00:00:00'

I then used the print option to export to CSV, and used a simple pivot table to work through each host name to see which had a dramatically higher number of messages than I would usually expect. (I actually exported a few days' worth of data to compare between days, but hopefully you get the idea.)

You could always explore using Grafana to produce a nice visual representation of the data that you've surfaced using a method like this.

Hey Ahmad.

If this is all happening in the one production, then you will have an inbound service for each port.

You could setup a router with a specific rule for each destination, and then use the rule constraint to be restricted on the source service. That way, if you point all three services at the one router, it will only use the rule within the router that has the service listed as a constraint.

For example:

I hope that helps!

So if your HisEmrRouter is running with 10 jobs and then sends to ADTRoutingRule, which has half that number of jobs available, I can see how this would introduce some form of bottleneck.

It's worth considering the impact of increasing your pool size beyond 1 when working with healthcare data; the details are noted here: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

It also mentions not having the pool size exceed the number of CPUs (I assume it means CPU cores); however, this has been contested in the past by other users, as seen in Eduard's comments here: https://community.intersystems.com/post/ensemble-introduction-pool-size-...

If FIFO is not required for your use case, I would at the very least try setting the two pool sizes to the same value.

Hey Yone.

It doesn't look like there was an ORU^R30 in the standard until HL7 2.5, so that would explain why there isn't a schema.

Depending on your source, you might want to look at how the source system thinks it is providing you a v2.3 ORU^R30, as it's possible they're using the 2.5 schema and incorrectly labelling it 2.3 in the header.

If that is the case, you could create your own schema based off of the 2.5 ORU^R30 to match what you're receiving.

Hey Yone - there's a few things going on here, but I'll try to explain as best I can.

For starters, calling "request.GetValueAt("5:3.2")" returns the value at field 3, component 2 of whatever segment happens to be on row 5 of your inbound message. If in your case that row is an OBX, then this only returns the content of OBX:3.2 (which is some variation of Observation Identifier Text).

When you are then outputting the entire HL7 message to a string and running "$PIECE(request.OutputToString(), "OBX", 2)", you are getting every character in the string after the literal text "OBX".

So if we use this fake message as an example:

MSH|blahblahblah
PID|blahblahblah
OBR|blahblahblah
NTE|1|blahblahblah
OBX|1|ST|NA^SODIUM^L||139|mmol/L|136-145|N|||F

Calling "request.GetValueAt("5:3.2")" and then evaluating its length would give you 6, as OBX:3.2 is "SODIUM" in the above. If you instead output the message to a string and checked the length of "$PIECE(request.OutputToString(), "OBX", 2)", you would be evaluating everything after the first "OBX" in the message: "|1|ST|NA^SODIUM^L||139|mmol/L|136-145|N|||F".

Now, with that said, it is not a good idea to assume that a specific row within a message will be a set segment type. Certainly in my case, all it would take is a few NTE segments being present in a message, and "5:3.2" could easily be part of an OBR or NTE.
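A safer approach is to find the OBX segments by name rather than by row number. A sketch of that (assuming request is an EnsLib.HL7.Message, whose "n:0" virtual path gives the segment name at row n):

```objectscript
// Walk every segment and act only on those that are actually OBX
Set count = request.SegCount
For i = 1:1:count {
    If request.GetValueAt(i_":0") = "OBX" {
        Set obsText = request.GetValueAt(i_":3.2")
        // ... work with obsText here ...
    }
}
```

This way, extra NTE (or any other) segments shifting the row positions no longer breaks the logic.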

Hey Kyle - it looks like I gave you some bad advice. I've just noticed that you're working with a pre-2.4 HL7 schema, while my examples were all based on HL7 2.4 (even where I corrected myself in the second reply). Also, in my corrections I continued to provide examples for a common PID transform, whereas your screenshot shows you doing the transform at the whole-message level.

This should work for you:

Sorry about that - hopefully it works for you now!

Hey Oliver.

This webinar is over on learning.intersystems.com and includes the code sample as an XML file that you can import into your environment. The link to the code can be found on this page: https://learning.intersystems.com/course/view.php?id=623

Make sure you give the code a good read before you try to run it!

EDIT: Direct link to the code is here: https://learning.intersystems.com/mod/resource/view.php?id=2388