I experienced a similar issue when configuring the MSSQL ODBC driver for IRIS. It appears as though the default odbcgateway.so is 32 bit, and I was able to get it working through the following steps:

  1. Change the working directory to Caché's /<install-dir>/bin
  2. Copy odbcgateway.so to odbcgateway32.so
  3. Copy odbcgatewayur64.so to odbcgateway.so
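
In other words, from a shell prompt (a sketch only; adjust the install directory for your system):

    cd /<install-dir>/bin
    # keep a copy of the original 32-bit gateway library
    cp odbcgateway.so odbcgateway32.so
    # put the 64-bit library in its place
    cp odbcgatewayur64.so odbcgateway.so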

The NTE segment is commonly used for plain-text "notes" containing descriptive data that doesn't conform to the structured format used for most other data in HL7 messages. It could certainly be used for the storage of base64-encoded binary data, and the likely candidate for that would be the NTE:3 Comment field.

If the base64 content is less than ~3.5MB in size, it can be assigned directly to that field in a DTL or via the SetValueAt() method of the message object. (Note: the HL7 specification does not define a maximum length for the Comment field, but values approaching IRIS's maximum string size need special handling.) If it's larger than ~3.5MB, it must be stored using the StoreFieldStreamRaw() method of the EnsLib.HL7.Message object (there is also a StoreFieldStreamBase64() method, but it's not applicable if your data is already base64-encoded). If you use StoreFieldStreamRaw(), note that it must be the last method called against the NTE segment, as it makes the segment immutable.
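
A rough ObjectScript sketch of both approaches (the variable names and the "NTE:3" path are placeholders; verify the paths against your message's DocType, and assume tMsg is a mutable EnsLib.HL7.Message such as a clone):

    // small payload: assign the base64 text directly to the Comment field
    Set tSC = tMsg.SetValueAt(tBase64, "NTE:3")

    // large payload (greater than ~3.5MB): store it as a raw field stream instead
    Set tStream = ##class(%Stream.GlobalCharacter).%New()
    Do tStream.Write(tBase64)   // or write the content in chunks
    Set tSC = tMsg.StoreFieldStreamRaw(tStream, "NTE:3")
    // StoreFieldStreamRaw() must be the last call against the NTE segment,
    // since it makes the segment immutable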

The recipient of the ORL_O22 message must be informed of this atypical use of the NTE segment so that it can be handled appropriately.

You can use the "*" shorthand to obtain a count of repeating segments (or fields, or groups) in a when clause. Anything greater than 0 evaluates to "true," so wrapping the expression in Not() negates that:
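
For example, a when condition along these lines, where both the trigger event and the segment path are placeholders for your actual message:

    (Document.Name = "ORM_O01") && Not(Document.{ORCgrp(*)})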

Note: The original reply had Document.DocTypeName rather than Document.Name as the first part of the condition expression. The DocTypeName property refers to the message structure, not the actual message trigger event. That's in the Name property.

You're logging on to the sftp server as user test_user, but attempting to write a file to C:/user/testuser/desktop/orders/. Are you sure that's the correct path for that user? My suspicion is that it's a local permissions issue and not a problem with Caché's sftp support.

I'm not a SolarWinds user/expert; does it have anything useful in its log(s)?

If the content of the field that contains the base64-encoded PDF is greater than the maximum string size in IRIS, it will be stored as a stream.

I've written a custom function that will extract the full content of a message containing such a value; it was originally created to support fetching large HL7 messages in HL7 Spy. You can download it here: http://www.hl7spy.com/downloads/HICG_HL7.xml. I've been asked to add it to the OEX repository, but I just haven't gotten around to that yet ...

I'm not sure how the SQL editor in the Management Portal will handle a query that uses that function, though. It will likely cause some interesting behavior 😉

The specific SQL function in that class is HICG.GetBigMsg(<messagebodyid>).
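
For example, to pull the full content of a single message (the header ID below is just a placeholder):

    SELECT HICG.GetBigMsg(head.MessageBodyId) As FullMessage
        FROM Ens.MessageHeader head
        WHERE head.ID = 12345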

Can you share the error message generated by the BPL when it terminates?

You should be able to assign the return value of $EXTRACT(request.{OBX:5},1,510)* to a context variable and use it in place of the original OBX:5 path in the lookup function. However, if OBX:5 is longer than the maximum string size in IRIS (i.e., it's stored as a stream), you may need to include logic to check for that and then read just the first 510 bytes.

* The path to the OBX segment is likely different than what's in my example ... you'll need to supply the correct path.
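
A rough sketch of that check in a BPL code action (the OBX path and the context property name are placeholders; check the EnsLib.HL7.Message class reference for the exact method behavior in your version):

    Set tValue = request.GetValueAt("OBXgrp(1).OBX:5", , .tStatus)
    If $$$ISOK(tStatus) {
        Set context.OBX5short = $EXTRACT(tValue, 1, 510)
    } Else {
        // assume the failure means the field is stored as a stream;
        // read only the first 510 bytes of it
        Set tStream = ##class(%Stream.GlobalCharacter).%New()
        Do request.GetFieldStreamRaw(.tStream, "OBXgrp(1).OBX:5")
        Set context.OBX5short = tStream.Read(510)
    }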

I'm assuming you're talking about HL7 here ...

InterSystems does a bit of magic in the background to display the Message Body fields in the Message Viewer, and it's not immediately available to you. However, I've written a custom SQL function that fetches the field at the supplied path (you'll need to supply the path as it's defined by the Schema/DocType of the message).

Example:

SELECT TOP 100 head.ID As ID,
        {fn RIGHT(%EXTERNAL(head.TimeCreated),999 )} As TimeCreated,
        head.SessionId As Session,
        head.Status As Status,
        head.SourceConfigName As Source,
        head.TargetConfigName As Target,
        HICG.GetMsg(head.MessageBodyId,'MSH:9') As MessageType
    FROM
        %IGNOREINDEX Ens.MessageHeader.SessionId Ens.MessageHeader head
    WHERE
        head.SessionId = head.%ID
        AND head.MessageBodyClassname = 'EnsLib.HL7.Message'
        AND head.SourceConfigName = 'SourceBusinessHostName'
    ORDER BY
        head.ID Desc

You can download the class here: http://www.hl7spy.com/downloads/HICG_HL7.xml

I don't know of any other attributes, and while increasing the logging level may turn up something, it hasn't proved useful in my experience.

If your servers are reporting occasional disconnect errors, that's not necessarily a network issue, or even something to worry about. The mirror members may report disconnections due to things like snapshots/backups, where one or more hosts are momentarily suspended by the hypervisor in a virtualized environment, or OS updates that stop/start services as part of the update process. It's important to remember, though, that the arbiter is designed to operate optimally in a low-latency, same-network, same-data-center scenario. If your mirrored hosts are spread across a WAN, you're asking for errors.

That setting allows you to specify an adapter address over which arbitration traffic will flow. It's a local address for situations where the arbiter server may have multiple adapters on different networks.

The IRIS servers establish the arbiter connection rather than the other way around, and you can have multiple mirror pairs use a single arbiter instance.

The ISCAgent is normally installed by default with IRIS, but may not be activated. On my mirrored servers, it's active, but I don't remember whether that happened automatically as part of the setup process, or I did it manually.

No allow list that I'm aware of, but if you want to restrict access to the ISCAgent ports, add an adapter to each host, connected to a VLAN that is not routable from outside that network, and configure the ISCAgent's application_server.interface_address setting to use that adapter's address. You could also use the same network for mirror journal transmission/communication, and leverage QoS to allocate a fixed minimum amount of bandwidth that would be unavailable to other network traffic.
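
On Linux, for example, that setting typically lives in the ISCAgent configuration file (the path and address below are assumptions; adjust for your platform and network):

    # /etc/iscagent/iscagent.conf
    # bind agent traffic to the non-routable VLAN adapter's address
    application_server.interface_address=10.0.50.11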

Are you attempting to do this in a routing rule foreach or a DTL?

You're much better off working with a Business Process/BPL for this sort of thing. You would:

  • Create a context variable for the FT1 segment counter (e.g. FT1counter).
  • Use that as the key in a Foreach action, with request.{FT1()} as the property.
  • In the loop:
    • Add a Transform action that uses a DTL to map the required PID/PV1/etc. fields to the record map, selecting the current FT1 segment's fields with source.{FT1(context.FT1counter):Fieldname}.
    • Add a Call action in the same loop to invoke the operation (see the sketch after this list), and you're done.
  • Save, add it to your production, and point the service at it.
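
A rough sketch of what the loop portion of the BPL might look like in XML; the class, target, and property names are all placeholders, and the call's request mapping will depend on your record map's generated class:

    <foreach name="ForEachFT1" property="request.{FT1()}" key="context.FT1counter">
      <!-- map PID/PV1/etc. plus the current FT1 into the record map object -->
      <transform name="MapFT1" class="MyPkg.HL7ToChargeRecord" source="request" target="context.Row"/>
      <!-- hand the mapped row to the record map operation -->
      <call name="SendRow" target="ChargeFileOperation" async="1">
        <request type="MyPkg.Charge.Record">
          <assign property="callrequest" value="context.Row"/>
        </request>
      </call>
    </foreach>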

As part of the encryption negotiation process, there's an exchange of supported cipher suites between the client and server. If there's no match, no connection can be established. There's no need to force a specific cipher suite; all available suites should be presented by the client during connection negotiation.

If upgrading to a current version of HealthShare/Health Connect is not an option, you could script the transfers outside of the production (a batch/PowerShell/Python/Perl script running under the Windows Task Scheduler, or called from ObjectScript in a scheduled task via $ZF(-100)) and then use a File service/operation to pick the files up for processing or drop them off for delivery.
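
For example, a scheduled task or utility method could kick off the transfer script with something like this (the script path is purely illustrative):

    // run an external transfer script synchronously from ObjectScript
    Set tRC = $ZF(-100, "/SHELL", "powershell.exe", "-File", "C:\Scripts\SftpTransfer.ps1")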

There have been updates to OpenSSH within the last few years that retired older, less secure cipher suites. It's possible that 2017.2 is old enough to be incompatible with newer versions of the SSH libraries (which SFTP relies on).

Check with the vendor/customer at the other end of the connection to see if they've made recent changes to their version of SSH.