There are (at least) two sets of credentials used in the Web Gateway: one controls access to the Web Gateway's configuration forms, and the other authenticates the Web Gateway to an IRIS instance. Removing the Password entry from the [SYSTEM] section of CSP.ini will give you unauthenticated access to the Web Gateway's configuration, as long as you're either accessing it locally or connecting remotely from an IP address that matches the filter set in the System_Manager entry.
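As a sketch, the relevant portion of CSP.ini might look like this after the change (the System_Manager value is a hypothetical example; adjust for your network):

```ini
[SYSTEM]
; Password entry removed to allow unauthenticated access to the
; configuration forms (local access, or remote from a matching IP)
System_Manager=192.168.1.10, 192.168.1.11
```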

Once you have access to the management pages, you can then configure the gateway's credentials for connecting to IRIS in the Configuration | Server Access | [Select Server] | Edit Server page. The credentials entered in the Connection Security section must match the Web Gateway credentials in IRIS (usually user CSPSystem with whatever password was originally set at installation). If you no longer have access to that password, you can sign in via an IRIS terminal session and, in the %SYS namespace, execute d ^SECURITY. Option 1, User Setup, allows you to change passwords for users, provided your account has the necessary roles/permissions.
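If you'd rather script it than walk the ^SECURITY menus, the Security package exposes the same functionality. A minimal sketch (the password value is a placeholder, and your account needs sufficient privileges):

```objectscript
 // Run from a privileged terminal session
 New $NAMESPACE
 Set $NAMESPACE = "%SYS"
 // Set a new password for the Web Gateway service account
 Set props("Password") = "new-password-here"  // placeholder value
 Set tSC = ##class(Security.Users).Modify("CSPSystem", .props)
 If 'tSC Do $SYSTEM.Status.DisplayError(tSC)
```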

Back in the Web Gateway's configuration forms, you can add a password for access to its configuration in the Configuration | Default Parameters page.

^Ens.AppData is the target of the $$$EnsStaticAppData macro, which is referenced in these classes:

Ens.Adapter.cls
Ens.Util.File.cls
EnsLib.EDI.EDIFACT.Operation.BatchStandard.cls
EnsLib.EDI.X12.Document.cls
EnsLib.EDI.X12.Operation.BatchStandard.cls
EnsLib.EDI.X12.Operation.FileOperation.cls
EnsLib.File.InboundAdapter.cls
EnsLib.FTP.InboundAdapter.cls
EnsLib.HL7.Operation.BatchStandard.cls
EnsLib.RecordMap.Operation.FileOperation.cls
EnsLib.SQL.InboundAdapter.cls
EnsLib.SQL.InboundProcAdapter.cls
EnsLib.SQL.Operation.GenericOperation.cls
EnsLib.SQL.Snapshot.cls

You can't specify a DocType Name in a routing rule, at least not directly. By specifying the docName, you're both selecting the message by Message Type/Trigger Event and identifying the structure (DocType Name) that will be used to parse the message in the rule. If you look at the HL7 v2.3 DocType Category via the Management Portal in Interoperability | Interoperate | HL7 v2.x | HL7 v2.x Message Structures | 2.3, then select the ADT_A04 Message Type, you'll see this:

This means that an A04 event will be evaluated/parsed using the structure defined for an A01; the DocType Name (Message Structure) is ADT_A01.

docName is not the same as DocType Name. The former is the HL7 event (e.g. "ADT_A04") while the latter is the message structure associated with that event. Many events use the same structure, so there's not a one-to-one correspondence. For example, A01, A04, A05 and A08 messages all use the ADT_A01 DocType Name.

In your trace statements, you should use the "_" character for string concatenation, not the "&" character; in ObjectScript, "&" is the full-evaluation logical AND operator.
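A quick illustration of the difference:

```objectscript
 // "_" concatenates strings
 Set tMsg = "PatientClass: " _ "E"
 Write tMsg,!              // prints PatientClass: E
 // "&" is logical AND; both operands are always evaluated
 Write (1=1) & (2=2),!     // prints 1
 Write (1=1) & (2=3),!     // prints 0
```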

My guess at this point is that PV1:PatientClass is not equal to "E".

So I've created a custom message structure for stuffing PDFs extracted from HL7 messages into a COLD feed. I've been using %Stream.FileBinary as the object type for the stream property in the class. I hadn't given much thought to the fact that those streams might hang around after a message purge, so I went back and modified the class to use %Stream.TmpBinary. I mean, that seems to make sense, right?

Except that with %Stream.TmpBinary, the stream goes away as soon as the message leaves the business process, and no file gets written by the operation. Oops.

So I'm back to using %Stream.FileBinary ... I would hope that the Interoperability message purge task would "do the right thing" and delete the stream since the message object extends Ens.Request, but I suppose I should do some experimentin' 😉

Are you dealing with multiple DocTypes and Categories of messages going to Epic, or are all messages the same schema?

I know you're not crazy about a custom operation, but if you take a look at the source for EnsLib.HL7.Operation.TCPOperation, you'll see that it would be dead simple to copy/extend it, add a check for the population of the Ordering Provider field, and the logic to populate it with a default if it's empty.
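As a sketch of the idea (the class name, field path, and default value are all hypothetical; I'm assuming Ordering Provider is OBR:16 in your structure, so adjust for your schema):

```objectscript
Class My.HL7.DefaultProviderOperation Extends EnsLib.HL7.Operation.TCPOperation
{

Method OnMessage(pRequest As EnsLib.HL7.Message, Output pResponse As EnsLib.HL7.Message) As %Status
{
    // Saved messages are immutable, so work on a clone
    Set tMsg = pRequest.%ConstructClone(1)
    // Hypothetical path; adjust for your DocType structure
    If tMsg.GetValueAt("ORCgrp(1).OBR:16") = "" {
        Do tMsg.SetValueAt("DEFAULT^PROVIDER", "ORCgrp(1).OBR:16")
    }
    // Hand off to the stock TCP operation logic
    Quit ##super(tMsg, .pResponse)
}

}
```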

From within your task, you can obtain the task's classname with $CLASSNAME() and query the %SYS.Task table using the TaskClass of your task to fetch the OutputFilename column.

If you want to use that file under your direct control, you can set "Open output file when task is running" to "yes," enter the full path of the filename, then set the previous setting back to "no." The filename will remain in the table.

If you're calling the same class under multiple Task schedules or with different configurations, the schedule values and settings are also available to help you refine your selection. Settings are stored in $LIST() format.
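A sketch of the lookup from within your OnTask() method, assuming the column names above:

```objectscript
 // Look up this task's configured output file from the %SYS.Task table
 Set tStatement = ##class(%SQL.Statement).%New()
 Set tSC = tStatement.%Prepare("SELECT OutputFilename FROM %SYS.Task WHERE TaskClass = ?")
 If tSC {
     Set tResult = tStatement.%Execute($CLASSNAME())
     If tResult.%Next() Set tFile = tResult.OutputFilename
 }
```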

EDIT: You can also define a Filename property in the task class, assuming it's a custom class. It will show up as a configurable field in the task editor. That way you don't have to deal with the OutputFilename settings.
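For the custom-class approach, something like this (class and property names are just examples):

```objectscript
Class My.Util.ExportTask Extends %SYS.Task.Definition
{

Parameter TaskName = "My Export Task";

/// Shows up as an editable field in the task editor
Property Filename As %String;

Method OnTask() As %Status
{
    // ..Filename holds whatever was entered in the task editor
    Quit $$$OK
}

}
```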

Usually, OBX segments are either defined as repeating segments or members of repeating segment groups. The syntax you'll use will vary depending on the HL7 Document Category and Structure you're using in your DTL. In HL7 2.5.1, the OBX segment itself is non-repeating, but is part of a repeating group (OBXgrp) inside another repeating group (ORCgrp) inside yet another repeating group (PIDgrpgrp).

You first need to get the count of the total number of OBX segments, which you can do by supplying the "*" character in the iteration argument to the source path's OBXgrp(). Add 1 to that, and you have your iteration for the new OBX segment.

Use that value as the iteration for the new OBX segment and populate the fields as needed, as in the example below:
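In DTL code form, the actions would look roughly like this (group names follow the 2.5.1 structure discussed above; the field values are placeholders):

```xml
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(*)}+1' property='tIdx' action='set' />
<assign value='"OBX"' property='target.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(tIdx).OBX:0}' action='set' />
<assign value='tIdx' property='target.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(tIdx).OBX:SetIDOBX}' action='set' />
<assign value='"TX"' property='target.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(tIdx).OBX:ValueType}' action='set' />
```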

The above assumes that the OBX segments are the last segments in the message. If they're not and the message requires another OBX at the very end, it's a bit more complicated ... you'd create a new segment object, populate it, then use the AppendSegment method to slap it on the end of the target:
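A rough ObjectScript sketch of that second case (the segment content is a placeholder, and I'm assuming EnsLib.HL7.Segment's ImportFromString signature here, so verify against the class reference):

```objectscript
 // Build a new OBX segment from a string and append it to the target message
 Set tSegString = "OBX|1|TX|||Placeholder value"
 Set tSeg = ##class(EnsLib.HL7.Segment).ImportFromString(tSegString, .tSC, target.Separators)
 If $$$ISOK(tSC) Do target.AppendSegment(tSeg)
```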

Thanks for this! Although ... the answer shows the query running in the Management Portal, which wasn't available to me when I ran into the issue 😁

But this works:

[SQL]%SYS>>call %CSP.Session_SessionInfo()
10.     call %CSP.Session_SessionInfo()

Dumping result #1
ID          Username     Preserve  Application                Timeout              LicenseId  SesProcessId  AllowEndSession
P0AtBxzbL9  jeff         0         /ih/csp/healthshare/hicg/  2025-04-30 20:47:16  jeff                     1
zAOMQO8MC8  UnknownUser  0         /ih/csp/documatic/         2025-04-30 20:50:53                           1

Add your code snippet and I'm good to nuke some sessions even when the Management Portal is unavailable 😉

Until such time as InterSystems provides synchronization of security components across mirror members, you can save a bit of effort by exporting them on the primary and importing them on the alternate server via the ^SECURITY routine in the %SYS namespace. At least you won't need to recreate them manually.

You can do the same for users, roles, resources and a few other things as well. All of these have ObjectScript methods for accomplishing the same in the Security package.
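A sketch using those Security classes (filenames are placeholders; run the exports in %SYS on the primary, then the corresponding imports on the other member):

```objectscript
 // On the primary, in %SYS: export security definitions to XML
 New $NAMESPACE
 Set $NAMESPACE = "%SYS"
 Do ##class(Security.Users).Export("/tmp/users.xml")
 Do ##class(Security.Roles).Export("/tmp/roles.xml")
 Do ##class(Security.Resources).Export("/tmp/resources.xml")
 // On the alternate member, in %SYS:
 // Do ##class(Security.Users).Import("/tmp/users.xml") ... etc.
```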

There really is no list of "Standard" settings. You can use whatever section name you desire and it will appear in the list of settings for the business host in the production.

Most adapters provide the categories Basic, Connection, and Additional. However, if the setting you're creating doesn't fit any of those, you can create your own category with the "MySetting:MyCategory" format.
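In the business host class, that looks something like this (the class, property, and category names are just examples):

```objectscript
Class My.Production.Operation Extends Ens.BusinessOperation
{

/// "MySetting" appears under a custom "MyCategory" section
/// on the production configuration page
Parameter SETTINGS = "MySetting:MyCategory";

Property MySetting As %String;

}
```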

The documentation for creating business host settings can be found here.

EDIT: After reviewing the documentation, I discovered that there are a set of predefined categories and they're listed here.