Jeffrey Drumm · May 29

There are (at least) two sets of credentials used in the Web Gateway: one controls access to the Web Gateway's configuration forms, and the other authenticates the Web Gateway to an IRIS instance.

Removing the Password entry from the [SYSTEM] section of CSP.ini will give you unauthenticated access to the Web Gateway's configuration, as long as you're either accessing it locally, or are remote but have an IP address that matches the filter set for the System_Manager entry.

Once you have access to the management pages, you can configure the gateway's credentials for connecting to IRIS in the Configuration | Server Access | [Select Server] | Edit Server page. The credentials entered in the Connection Security section must match the Web Gateway credentials in IRIS (usually user CSPSystem, with whatever password was originally set at installation).

If you no longer have access to that password, you can sign on via IRIS terminal and, in the %SYS namespace, execute d ^SECURITY. Option 1, User Setup, lets you change passwords for users, as long as your account has the necessary roles/permissions.

Back in the Web Gateway's configuration forms, you can add a password for access to its configuration in the Configuration | Default Parameters page.
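If the ^SECURITY menus aren't convenient, the same reset can be scripted with the Security.Users API (a minimal sketch, assuming your account has sufficient privileges in %SYS; the password value is a placeholder):

    // Run in the %SYS namespace: reset the CSPSystem password
    // (the programmatic equivalent of ^SECURITY option 1, User Setup)
    Set tProps("Password") = "replace-with-new-password"
    Set tSC = ##class(Security.Users).Modify("CSPSystem", .tProps)
    If 'tSC Write $SYSTEM.Status.GetErrorText(tSC)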
Jeffrey Drumm · May 28

^Ens.AppData is the target of the $$$EnsStaticAppData macro, which is referenced in these classes:

Ens.Adapter.cls
Ens.Util.File.cls
EnsLib.EDI.EDIFACT.Operation.BatchStandard.cls
EnsLib.EDI.X12.Document.cls
EnsLib.EDI.X12.Operation.BatchStandard.cls
EnsLib.EDI.X12.Operation.FileOperation.cls
EnsLib.File.InboundAdapter.cls
EnsLib.FTP.InboundAdapter.cls
EnsLib.HL7.Operation.BatchStandard.cls
EnsLib.RecordMap.Operation.FileOperation.cls
EnsLib.SQL.InboundAdapter.cls
EnsLib.SQL.InboundProcAdapter.cls
EnsLib.SQL.Operation.GenericOperation.cls
EnsLib.SQL.Snapshot.cls
Jeffrey Drumm · May 28

You can't specify a DocType Name in a routing rule, at least not directly. By specifying the docName, you're both selecting the message by Message Type/Trigger Event and identifying the structure (DocType Name) that will be used to parse the message in the rule.

If you look at the HL7 v2.3 DocType Category via the Management Console in Interoperability | Interoperate | HL7 v2.x | HL7 v2.x Message Structures | 2.3, then select the ADT_A04 Message Type, you'll see that its Message Structure is ADT_A01. This means that an A04 event will be evaluated/parsed using the structure defined for an A01; the DocType Name (Message Structure) is ADT_A01.
Jeffrey Drumm · May 26

docName is not the same as DocType Name. The former is the HL7 event (e.g. "ADT_A04"), while the latter is the message structure associated with that event. Many events use the same structure, so there's not a 1:1 correspondence. For example, A01, A04, A05 and A08 messages all use the ADT_A01 DocType Name.

In your trace statements, you should use the "_" character for string concatenation, not the "&" character; that's the full-evaluation logical AND operator. My guess at this point is that PV1:PatientClass is not equal to "E".
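To illustrate the difference between the two operators (a quick terminal sketch; the values are made up):

    Set tClass = "E"
    Write "PatientClass is "_tClass    // "_" concatenates: PatientClass is E
    Write "PatientClass is "&tClass    // "&" is logical AND; non-numeric strings
                                       // evaluate as 0, so this writes 0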
Jeffrey Drumm · May 26

The paths you're using for the trace and the 2nd when condition don't look correct for the ISC-supplied 2.3 Document Category. I would expect to see HL7.{PV1:3.1} and HL7.{PV1:PatientClass} used with a 2.3 DocType Category and the ADT_A01 DocType Name (which is used for A04s).
Jeffrey Drumm · May 22

So I've created a custom message structure for stuffing PDFs extracted from HL7 messages into a COLD feed. I've been using %Stream.FileBinary as the type of the stream property in the class. I hadn't given much thought to the fact that those streams might hang around after a message purge, so I went back and modified the class to use %Stream.TmpBinary. I mean, that seems to make sense, right?

Except that with %Stream.TmpBinary, the stream goes away as soon as the message leaves the business process, and no file gets written by the operation. Oops.

So I'm back to using %Stream.FileBinary ... I would hope that the Interoperability message purge task would "do the right thing" and delete the stream since the message object extends Ens.Request, but I suppose I should do some experimentin' 😉
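For reference, the message class looks something like this (a minimal sketch; the class and property names here are illustrative, not the actual ones):

    /// Custom request carrying a PDF pulled from an HL7 message.
    Class Demo.Msg.ColdFeedRequest Extends Ens.Request
    {
    /// %Stream.FileBinary persists alongside the message; swapping in
    /// %Stream.TmpBinary deletes the stream as soon as the last reference
    /// goes out of scope, before the operation ever writes the file.
    Property PDFContent As %Stream.FileBinary;

    Property OriginalFilename As %String(MAXLEN = 255);
    }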
Jeffrey Drumm · May 21

Must be a WinSQL issue. I tested this via the Management Console, IRIS SQL Shell, and DBeaver/JDBC; all worked as expected.
Jeffrey Drumm · May 16

The 2nd argument to the GetValueAt() method is for delimiters, not status. The status is returned in the 3rd argument. The correct syntax for what you were trying to do is:

    Set tOBXText = pOutput.GetValueAt("ORCgrp(1).OBRuniongrp.OBXgrp("_i_").OBX:5", , .tStatus)

Note the additional comma before .tStatus.
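And since tStatus comes back as a %Status, it's worth checking after the call (a small sketch; the logging macro assumes this runs inside a business host):

    Set tOBXText = pOutput.GetValueAt("ORCgrp(1).OBRuniongrp.OBXgrp("_i_").OBX:5", , .tStatus)
    If $$$ISERR(tStatus) {
        // bad path or missing segment; log it and bail out
        $$$LOGSTATUS(tStatus)
        Quit
    }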
Jeffrey Drumm · May 14

An UPDATE example:

    UPDATE Sample_DB.Greetings SET Greeting = {fn CONCAT(Greeting,' World')}
Jeffrey Drumm · May 6

Another option would be to "chain" a 2nd DTL in the Send action for each sender that has its "Create" setting as Existing. It would handle the evaluation and population of the ordering provider. You wouldn't have to touch a bunch of DTLs, but you'd be modifying a bunch of routes.
Jeffrey Drumm · May 5

Are you dealing with multiple DocTypes and Categories of messages going to Epic, or are all messages the same schema?

I know you're not crazy about a custom operation, but if you take a look at the source for EnsLib.HL7.Operation.TCPOperation, you'll see that it would be dead simple to copy/extend it, add a check for the population of the Ordering Provider field, and the logic to populate it with a default if it's empty.
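Something along these lines (a sketch only; the class name, the setting, and the ORC:12 path are illustrative assumptions and depend on your DocType):

    /// Extends the stock HL7 TCP operation to default the Ordering
    /// Provider when it arrives empty.
    Class Demo.HL7.Operation.TCPOperation Extends EnsLib.HL7.Operation.TCPOperation
    {
    /// Value to insert when the Ordering Provider field is empty
    Property DefaultOrderingProvider As %String(MAXLEN = 220);

    Parameter SETTINGS = "DefaultOrderingProvider:Basic";

    Method OnMessage(pRequest As EnsLib.HL7.Message, Output pResponse As EnsLib.HL7.Message) As %Status
    {
        Set tMsg = pRequest
        If (..DefaultOrderingProvider '= "") && (pRequest.GetValueAt("ORCgrp(1).ORC:12") = "") {
            // saved messages shouldn't be modified in place; clone first
            Set tMsg = pRequest.%ConstructClone(1)
            Do tMsg.SetValueAt(..DefaultOrderingProvider, "ORCgrp(1).ORC:12")
            Set tSC = tMsg.%Save()
            If $$$ISERR(tSC) Quit tSC
        }
        Quit ##super(tMsg, .pResponse)
    }
    }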
Jeffrey Drumm · May 5

From within your task, you can obtain the task's classname with $CLASSNAME() and query the %SYS.Task table using the TaskClass of your task to fetch the OutputFilename column.

If you want to use that file under your direct control, you can set "Open output file when task is running" to "yes," enter the full path of the filename, then set the previous setting back to "no." The filename will remain in the table.

If you're calling the same class under multiple Task schedules or with different configurations, the schedule values and settings are also available to help you refine your selection. Settings are stored in $LIST() format.

EDIT: You can also define a Filename property in the task class, assuming it's a custom class. It will show up as a configurable field in the task editor. That way you don't have to deal with the OutputFilename settings.
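A sketch of the lookup, assuming it runs inside the task's OnTask() method and that only one scheduled task uses the class:

    // Fetch this task's configured output file from the %SYS.Task table
    Set tStmt = ##class(%SQL.Statement).%New()
    Set tSC = tStmt.%Prepare("SELECT OutputFilename FROM %SYS.Task WHERE TaskClass = ?")
    If $$$ISERR(tSC) Quit tSC
    Set tRS = tStmt.%Execute($CLASSNAME())
    If tRS.%Next() Set tFilename = tRS.%Get("OutputFilename")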
Jeffrey Drumm · May 4

Usually, OBX segments are either defined as repeating segments or as members of repeating segment groups. The syntax you'll use will vary depending on the HL7 Document Category and Structure you're using in your DTL.

In HL7 2.5.1, the OBX segment itself is non-repeating, but is part of a repeating group (OBXgrp) inside another repeating group (ORCgrp) inside yet another repeating group (PIDgrpgrp). You first need to get the count of the total number of OBX segments, which you can do by supplying the "*" character in the iteration argument to the source path's OBXgrp(). Add 1 to that, and you have your iteration for the new OBX segment. Use that value as the iteration for the new OBX segment and populate the fields as needed (see the sketch following this post).

The above assumes that the OBX segments are the last segments in the message. If they're not, and the message requires another OBX at the very end, it's a bit more complicated ... you'd create a new segment object, populate it, then use the AppendSegment method to slap it on the end of the target.
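A rough sketch of the first approach, written as a DTL code action (the group names assume the 2.5.1 ORU_R01 structure; the field values are placeholders):

    // Count the existing OBX groups; "*" as the iteration returns the count
    Set tCount = source.GetValueAt("PIDgrpgrp(1).ORCgrp(1).OBXgrp(*)")
    Set tNew = tCount + 1
    // Populate the new (appended) OBX using the next iteration number
    Do target.SetValueAt(tNew, "PIDgrpgrp(1).ORCgrp(1).OBXgrp("_tNew_").OBX:1")  // Set ID
    Do target.SetValueAt("TX", "PIDgrpgrp(1).ORCgrp(1).OBXgrp("_tNew_").OBX:2")  // Value Type
    Do target.SetValueAt("Comment text here", "PIDgrpgrp(1).ORCgrp(1).OBXgrp("_tNew_").OBX:5")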
Jeffrey Drumm · May 2

It's less risky than losing access to the images completely on a failover, no? If you want true on-prem HA/business continuity, you need to get a systems engineer involved. I'm not one, at least not anymore 😁
Jeffrey Drumm · May 2

Use a NAS share that's mounted at the same location on both servers. There are other options involving SANs and mirrored filesystems, but they're generally a bit pricier.
Jeffrey Drumm · Apr 30

Thanks for this! Although ... the answer shows the query running in the Management Portal, which wasn't available to me when I ran into the issue 😁 But this works:

    [SQL]%SYS>>call %CSP.Session_SessionInfo()
    10.     call %CSP.Session_SessionInfo()

    Dumping result #1
    ID          Username     Preserve  Application                Timeout              LicenseId  SesProcessId  AllowEndSession
    P0AtBxzbL9  jeff         0         /ih/csp/healthshare/hicg/  2025-04-30 20:47:16  jeff                     1
    zAOMQO8MC8  UnknownUser  0         /ih/csp/documatic/         2025-04-30 20:50:53                           1

Add your code snippet and I'm good to nuke some sessions even when the Management Portal is unavailable 😉
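For the record, a terminal-side sketch for ending one of those sessions (the session ID comes from the output above; the assumption is that flagging EndSession and saving is enough to terminate it):

    Set tSess = ##class(%CSP.Session).%OpenId("zAOMQO8MC8")
    If $IsObject(tSess) {
        Set tSess.EndSession = 1   // flag the session for termination
        Do tSess.%Save()
    }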
Jeffrey Drumm · Apr 28

I'm not crazy about the change from "Services" and "Operations" to "Inbound Hosts" and "Outbound Hosts" respectively. Considering that most of the protocols we use are bi-directional (some more than others), the new nomenclature seems less accurate.
Jeffrey Drumm · Apr 27

Until such time as InterSystems provides synchronization of security components across mirror members, you can save a bit of effort by exporting them on the primary and importing them on the alternate server via the ^SECURITY routine in the %SYS namespace. At least you won't need to create them manually. You can do the same for users, roles, resources and a few other things as well. All of these have ObjectScript methods for accomplishing the same thing in the Security package.
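The programmatic route looks something like this (a sketch; the file paths are placeholders, and both halves run in the %SYS namespace):

    // On the primary:
    Do ##class(Security.Users).Export("/tmp/UsersExport.xml")
    Do ##class(Security.Roles).Export("/tmp/RolesExport.xml")
    Do ##class(Security.Resources).Export("/tmp/ResourcesExport.xml")

    // On the alternate mirror member, after copying the files over:
    Do ##class(Security.Users).Import("/tmp/UsersExport.xml")
    Do ##class(Security.Roles).Import("/tmp/RolesExport.xml")
    Do ##class(Security.Resources).Import("/tmp/ResourcesExport.xml")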
Jeffrey Drumm · Apr 17

There really is no list of "standard" settings. You can use whatever section name you desire, and it will appear in the list of settings for the business host in the production. Most adapters provide the categories Basic, Connection, and Additional. However, if the setting you're creating doesn't fit any of those categories, you can create your own with the "MySetting:MyCategory" format. The documentation for creating business host settings can be found here.

EDIT: After reviewing the documentation, I discovered that there is a set of predefined categories; they're listed here.
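In class terms, that looks something like this (a sketch; the property, category, and class names are illustrative):

    Class Demo.MyOperation Extends Ens.BusinessOperation
    {
    Property MySetting As %String;

    /// "MySetting:MyCategory" files the setting under a custom category;
    /// omitting the category drops the setting under Additional.
    Parameter SETTINGS = "MySetting:MyCategory";
    }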
Jeffrey Drumm · Apr 14

I think the benefits of maintainability and separation of responsibilities far outweigh any performance considerations. You can always add more CPU and storage.

What's the purpose of the "first mapping"? Data enrichment, canonicalization of message format(s), fixing message sequencing issues (the last is common in Epic environments)?