Not sure what version of Caché or IRIS you're on; for future reference it's helpful to include that information. In IRIS 2021.2, you can do this from the IRIS SQL Shell:

JEFF>do $system.SQL.Shell()
SQL Command Line Shell
----------------------------------------------------
The command prefix is currently set to: <<nothing>>.
Enter <command>, 'q' to quit, '?' for help.

[SQL]JEFF>>set displaypath /home/jeff/tmp/
displaypath = /home/jeff/tmp/

[SQL]JEFF>>set displayfile sqlout
displayfile = sqlout

[SQL]JEFF>>set displaymode csv
displaymode = csv

[SQL]JEFF>>set selectmode display
selectmode = display

[SQL]JEFF>>select top 100 * from Ens_Util.Log
13.     select top 100 * from Ens_Util.Log

/home/jeff/tmp/sqlout.csv
/home/jeff/tmp/sqloutMessages.txt

statement prepare time(s)/globals/cmds/disk: 0.0002s/6/831/0ms
          execute time(s)/globals/cmds/disk: 0.0035s/467/20822/0ms
                          cached query class: %sqlcq.JEFF.cls115
---------------------------------------------------------------------------

The default delimiter is a comma, but you can change it. For example, to use a tab character:

[SQL]JEFF>>set displaydelimiter = $C(9)

A Business Process Component is the BPL analogue of a subroutine or function and is called exclusively from a BPL. The idea is that it can serve as a reusable component applicable to multiple different business processes. I don't think that's really what you're looking for.

If you don't have a FIFO concern with this database processing and are thinking that increasing the number of parallel processes performing these database activities might improve performance, you could try increasing the pool size for the BP.

There are methods for dealing with what are essentially embedded streams in HL7 Objects. See the methods GetFieldStreamRaw() and StoreFieldStreamRaw() in class EnsLib.HL7.Message; these are useful for copying streams from one message to another. If the need is to extract the Base64 stream as a binary stream for writing to a file, there's also GetFieldStreamBase64() in the same class; the stream obtained from it can be used with file-based streams to write to a disk file.
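
For example, to copy such a field from a source HL7 message to the target in a DTL Code action (the OBX field path and variable names below are only examples; use the path appropriate to your DocType):

// pull the raw field contents from the source message as a stream
Set tSC = source.GetFieldStreamRaw(.tStream,"OBXgrp(1).OBX:5",.tRemainder)
If $$$ISOK(tSC) {
    // store the stream into the same field of the target message;
    // make this the last change to that segment
    Set tSC = target.StoreFieldStreamRaw(tStream,"OBXgrp(1).OBX:5",tRemainder)
}

And a sketch of extracting a Base64-encoded field directly to a binary file (path, field and filename are again just examples):

Set tFile = ##class(%Stream.FileBinary).%New()
Set tFile.Filename = "/tmp/report.pdf"
Set tSC = source.GetFieldStreamBase64(.tFile,"OBXgrp(1).OBX:5")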

I'm not sure whether this will work in Ensemble 2018.1, but it does seem to work fine in IRIS for Health Interoperability 2022.2.

I'm testing with a simple JSON file that looks like this:

{
    "mrn": 12345678,
    "name": "Johann Smythe",
    "firstname": "Johann",
    "lastname": "Smythe",
    "dob": "1989-03-21 14:20:00",
    "phone": "(555) 555-4917",
    "mobile": "(555) 555-6401",
    "email": "johann@smythe.com",
    "address": "123 Anystreet St",
    "city": "Anytown",
    "state": "ME",
    "zip": "04121"
}

I've used the File Passthrough Service (EnsLib.File.PassthroughService) to read the JSON document into a stream, delivered in a message of class Ens.StreamContainer. Because this isn't an HL7 object, my router is based on the "General Message Routing Rule" rule type, and my constraint consists of the source service name with a message class of Ens.StreamContainer.

In the DTL called by the send action, I use Ens.StreamContainer as the source message type. The target message type is EnsLib.HL7.Message with whatever Document Category and Type are needed.

The first action in the DTL makes the JSON content available to the assignments that follow.
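
One straightforward way to do that is a Code action that parses the stream with %FromJSON; tJSON is just the variable name used in this sketch:

Set tJSON = ##class(%DynamicAbstractObject).%FromJSON(source.Stream)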

After setting a number of default values for the target HL7 message (Event Type/Trigger, Date/Time of message, etc.), I populate the PID fields from the parsed object.
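
The mappings look something like this, whether written as individual assign actions or grouped in a Code action (the exact PID fields and any date reformatting will depend on your target specification):

Set target.{PID:3(1).1} = tJSON.mrn
Set target.{PID:5(1).1} = tJSON.lastname
Set target.{PID:5(1).2} = tJSON.firstname
Set target.{PID:7} = $TRANSLATE($PIECE(tJSON.dob," "),"-","")
Set target.{PID:11(1).1} = tJSON.address
Set target.{PID:11(1).3} = tJSON.city
Set target.{PID:11(1).4} = tJSON.state
Set target.{PID:11(1).5} = tJSON.zip
Set target.{PID:13(1).1} = tJSON.phone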

And I now have an HL7 message created from JSON.

This isn't going to work with a batch of patient records in a JSON array; you'd need to create a BPL to process that. But for input that consists of a simple structure like the example I used, you can accomplish what you need without building a custom service or creating a BPL.

Are you using LDAP for authentication? I seem to remember running into this when the web applications created as part of enabling Ensemble/Interoperability weren't set to support LDAP.

Compare the settings for the web applications created for your new namespace in Security | Applications | Web Applications with those from other (working) Ensemble-enabled namespaces.

For those that use Interoperability/HealthConnect, nc/netcat is also an excellent tool for verifying that remote ports are accessible for HL7 MLLP, HTTP or other protocols that require a TCP socket client connection.

And while this thread is specifically for Unix/Linux, there's a Windows PowerShell analogue named Test-NetConnection (alias tnc) that provides a subset of nc's features.
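
For example, to verify that an HL7 listener on port 6661 is reachable (host and port here are placeholders):

nc -vz interface-host.example.org 6661

and the PowerShell equivalent:

Test-NetConnection interface-host.example.org -Port 6661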

Something like this, perhaps?

Class User.Util.StringFunctions Extends Ens.Util.FunctionSet
{

ClassMethod ReReplace(pStr As %String, pPat As %String, pRepl As %String = "") As %String
{
    Set tStrt = $LOCATE(pStr,pPat,,tEnd) - 1
    // in case the pattern isn't found, return source string
    Return:(tStrt < 0) pStr
    Set tPrefix = $EXTRACT(pStr,1,tStrt)
    Set tSuffix = $EXTRACT(pStr,tEnd,*)
    Return tPrefix_pRepl_tSuffix
}

}
USER> set mystr = "REASON->Blood(1.23)"
USER> set newstr = ##class(User.Util.StringFunctions).ReReplace(mystr,"->\w+")
USER> write newstr
REASON(1.23)
USER> set altstr =  ##class(User.Util.StringFunctions).ReReplace(mystr,"->\w+","-CODE")
USER> write altstr
REASON-CODE(1.23)

The included IsRecentManagedAlert() method expects a recent alert to have 100% identical SourceConfigName and AlertText values. Probably not suitable for your application.

Unfortunately, I haven't run into a scenario such as you describe where errors from multiple business hosts must be aggregated into one.

I can envision a solution where you would identify this group of business hosts under a single Managed Alert Group, log activity for alerts in that group to a table or global via Ens.Alert's routing rule, and check that table/global in the same rule for prior activity from the alert group within the desired time span. Matches could then be suppressed.

Since Managed Alert Groups aren't a property of Ens.AlertRequest, you would need to interrogate the business host (production item) for its membership from the rule.

So you'd need to create a table/global, write some methods (in a class extending Ens.Rule.FunctionSet) to query your custom table/global for prior alerts and to log the current alert, then configure a routing rule that checks for prior activity on the selected Alert Group, logs the current activity, and suppresses or sends the alert based on your criteria.
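
A very rough sketch of the function-set piece, using a scratch global and a time window in seconds; the class name, global name and method names below are all invented for illustration:

Class User.Alert.GroupFunctions Extends Ens.Rule.FunctionSet
{

/// Current time expressed as an absolute number of seconds, derived from $HOROLOG
ClassMethod NowSeconds() As %Integer
{
    Return (+$HOROLOG * 86400) + $PIECE($HOROLOG,",",2)
}

/// Returns 1 if an alert for pAlertGroup was logged within the last pSeconds seconds
ClassMethod IsRecentGroupAlert(pAlertGroup As %String, pSeconds As %Integer = 3600) As %Boolean
{
    // ^zAlertGroupLog is a scratch global chosen for this example
    Set tLast = $GET(^zAlertGroupLog(pAlertGroup),"")
    Return:(tLast = "") 0
    Return ((..NowSeconds() - tLast) <= pSeconds)
}

/// Records the current time for pAlertGroup; always returns 1 so it can be
/// ANDed into a rule condition without changing the condition's outcome
ClassMethod LogGroupAlert(pAlertGroup As %String) As %Boolean
{
    Set ^zAlertGroupLog(pAlertGroup) = ..NowSeconds()
    Return 1
}

}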

DocTypeCategory and DocTypeName are populated based on the contents of DocType. The DocType property can be changed even though the message is immutable.
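
In a DTL, that's just an assignment; the category and type here are only examples:

Set target.DocType = "2.5.1:ADT_A01"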

TypeVersion is populated based on the value of the MSH:12 field in the message body. If you're attempting to modify the properties of an inbound message received from a business service, I don't think you'll be able to change TypeVersion with Create set to "Existing" in the DTL editor, because you can't modify MSH:12.

Are you working with messages newly arrived through a service that haven't undergone any prior transformations?

Hi Blake,

This might get you started in the right direction:

Set tRuleName = "<rulename>"
// walk the target config items registered for the rule in ^Ens.Rule.Targets,
// collecting them in the local array tArr; the count is kept in tArr's root node
Set tTarget = $ORDER(^Ens.Rule.Targets(tRuleName,""))
Set tArr = 0
Set tCnt = 1
While tTarget '= ""
{
    Set tArr(tCnt) = tTarget
    Set tTarget = $ORDER(^Ens.Rule.Targets(tRuleName,tTarget))
    Set tArr = tCnt
    Set tCnt = tCnt + 1
}

Replace <rulename> with the name of the rule as it appears in the router configuration pane.

With some help from a fellow DC member, I wrote the method below. Its intent is to support auto-resolution of managed alerts:

/// Returns the connection status ("AdapterState") of the Business Service or Operation
/// named in <var>pItemName</var>
ClassMethod GetConnectionStatus(pItemName As %String) As %String [ Language = objectscript ]
{
    Set tStatement = ##class(%SQL.Statement).%New()
    Set tStatus = tStatement.%PrepareClassQuery("Ens.Util.Statistics","EnumerateJobStatus")
    If $$$ISERR(tStatus)
    {
        Return "Error in Status Query: "_$system.Status.GetErrorText(tStatus)
    }
    Set tRS = tStatement.%Execute(pItemName)
    If tRS.%SQLCODE = 0
    {
        Do tRS.%Next()
        Return tRS.%Get("AdapterState")
    }
    Return "Status not Found"
}
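
Exercised from the Terminal it looks something like this (the class and item names are placeholders):

USER> write ##class(User.Util.Functions).GetConnectionStatus("HL7.ADT.Out.Operation")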

Here's a little code snippet that the Management Portal uses to get the Arbiter state:

	Set state = $SYSTEM.Mirror.ArbiterState()
	Set thisConnected = $SELECT($ZB(+state,+$$$ArbiterConnected,1)'=0:1,1:0)
	Set otherConnected = $SELECT($ZB(+state,+$$$ArbiterPeerConnected,1)'=0:1,1:0)
	
	If 'thisConnected {
		Set stateString = $$$Text("This member is not connected to the arbiter")
	} ElseIf 'otherConnected {
		Set stateString = $$$Text("Only this member is connected to the arbiter")
	} Else {
		Set stateString = $$$Text("Both failover members are connected to the arbiter")
	}

You'll need to add an Include statement for %syMirror to use the $$$Arbiter* macros.
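
For example, at the top of the class definition (or, in a .mac routine, use #include %syMirror):

Include %syMirror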

Note that the ArbiterState() method is undocumented, and its behavior may change in future releases.