Sounds like a separators issue. I set that as empty on both the service and operation, which assumes the messages will be formatted with whatever separators are indicated in the ISA segment.
Sooooo ...
It looks like those macros don't exist anymore. They're not in %occKeyword.inc or in any other .inc file as far as I can tell.
Any other thoughts?
Ah ... maybe never mind.
$$$comMemberArrayGet(class, $$$cCLASSproperty, property, $$$cPROPparameter, param)
appears to do what I want.
EnsLib.SQL.Snapshot has a GetData() method that takes the column number as its first argument (the second takes a row number but defaults to the current row). So that in conjunction with GetColumnCount() should allow you to iterate across columns.
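As a sketch of that approach (assuming tSnapshot holds the snapshot returned by the adapter, and that GetColumnName() is available in your version):

```objectscript
// Walk every row and column of an EnsLib.SQL.Snapshot
while tSnapshot.Next() {
    for i=1:1:tSnapshot.GetColumnCount() {
        // GetData(col) returns the value at the current row
        write tSnapshot.GetColumnName(i),"=",tSnapshot.GetData(i),!
    }
}
```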
Edit: And of course Marc beat me to it ... 😁
I used Single-Session batch and used a business process/router to send only the Interchange DocType to the operation. The Group and Transaction sets within are referenced from the Interchange and are automatically re-assembled by the operation.
Ubuntu is worse in that respect, at least in my experience ... I run Ubuntu on a bunch of systems in my home office. Seems like every update requires a reboot. At least with Redhat you have more granular control over what updates are installed.
You can select from a number of Linux vendors/versions for an AWS installation. I would recommend you select Red Hat or Ubuntu rather than Amazon Linux; InterSystems officially supports those.
In my experience Red Hat is the more stable/compatible version and is the most widely used for IRIS implementations.
You would not install Ubuntu or Red Hat "on top of" Amazon Linux; you would select the Linux flavor when creating your EC2 instance.
Is Java installed and the appropriate version for the driver? Is the $JAVA_HOME environment variable set for the account under which IRIS is running?
Because it's a method defined with the [ Internal ] keyword, which the class documentation generator excludes. That keyword means that it's not recommended for use by anyone other than InterSystems. Its behavior may change or it may go away, and you're taking a chance by implementing it in your own code.
GetSegmentAt() provides the same functionality but is documented for use by anyone. It's defined in a class (EnsLib.EDI.Segmented) that is inherited by EnsLib.HL7.Message and other virtual document classes.
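A minimal sketch of the documented approach (pHL7 is assumed to be an EnsLib.HL7.Message; check the returned status before using the segment):

```objectscript
// Fetch the segment at ordinal position 3 as an EnsLib.HL7.Segment
set tSeg = pHL7.GetSegmentAt(3,.tSC)
if $$$ISOK(tSC) write tSeg.Name,!
```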
I don't personally see the need, and I think InterSystems has better things to spend their time on 😁
To use STARTTLS, you need to do the following:
The conventional method is with an <assign> action. If the classmethod has output or byref variables in its signature, I think a <code> action would be appropriate (I've never tried to set context variables by reference in an <assign>).
This likely goes without saying, but context variables remain available/usable in a code action.
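To illustrate both options, here's a hedged BPL sketch (MyPkg.Utils and its Lookup() method are hypothetical):

```xml
<!-- Conventional: an <assign> whose value is a classmethod call -->
<assign property='context.PatientName' value='##class(MyPkg.Utils).Lookup(context.MRN)' action='set' />

<!-- Output/byref arguments in the signature: use a <code> action instead -->
<code>
<![CDATA[ do ##class(MyPkg.Utils).Lookup(context.MRN,.tName)
 set context.PatientName = tName ]]>
</code>
```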
I believe the host name is smtp.office365.com. You have the t and p reversed.
EDIT: For port 587, you'll also need to use STARTTLS.
I did:
Class OrdRes.VendorMDM Extends Ens.DataTransformDTL [ DependsOn = EnsLib.HL7.Message ]
{
Parameter IGNOREMISSINGSOURCE = 1;
Parameter REPORTERRORS = 1;
Parameter TREATEMPTYREPEATINGFIELDASNULL = 0;
XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
{
<transform sourceClass='EnsLib.HL7.Message' targetClass='EnsLib.HL7.Message' sourceDocType='2.3:ORU_R01' targetDocType='2.5:MDM_T02' create='new' language='objectscript' >
<assign value='source.{MSH}' property='target.{MSH}' action='set' />
<assign value='"MDM"' property='target.{MSH:MessageType.MessageCode}' action='set' />
<assign value='"T02"' property='target.{MSH:MessageType.TriggerEvent}' action='set' />
<assign value='"2.5"' property='target.{MSH:VersionID.VersionID}' action='set' />
<assign value='source.{MSH:DateTimeofMessage}' property='target.{EVN:2}' action='set' />
<assign value='source.{PIDgrpgrp(1).PIDgrp.PID}' property='target.{PID}' action='set' />
<assign value='source.{PIDgrpgrp(1).PIDgrp.PV1grp.PV1}' property='target.{PV1}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).ORC}' property='target.{ORCgrp(1).ORC}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBR}' property='target.{ORCgrp(1).OBR}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).NTE()}' property='target.{ORCgrp(1).NTE()}' action='set' />
<assign value='"Endoscopy Image"' property='target.{TXA:DocumentType}' action='set' />
<assign value='"AU"' property='target.{TXA:DocumentCompletionStatus}' action='set' />
<assign value='"AV"' property='target.{TXA:DocumentAvailabilityStatus}' action='set' />
<foreach property='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp()}' key='k1' >
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:SetIDOBX}' property='target.{OBXgrp(k1).OBX:SetIDOBX}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:ValueType}' property='target.{OBXgrp(k1).OBX:ValueType}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:ObservationIdentifier}' property='target.{OBXgrp(k1).OBX:ObservationIdentifier}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:ObservationSubID}' property='target.{OBXgrp(k1).OBX:ObservationSubID}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:Units.identifier}' property='target.{OBXgrp(k1).OBX:5.3}' action='set' />
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:Units.identifier}' property='target.{OBXgrp(k1).OBX:5.4}' action='set' />
<if condition='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX:SetIDOBX}' >
<true>
<code>
<![CDATA[ do source.GetFieldStreamRaw(.tStream,"PIDgrpgrp(1).ORCgrp(1).OBXgrp("_k1_").OBX:5(1).1",.tRem)
//
set tRem = "|PDF|||||F|"
//
// Store the stream to the appropriate target field
do target.StoreFieldStreamRaw(tStream,"OBXgrp("_k1_").OBX:5(1).5",tRem)]]></code>
</true>
<false>
<assign value='source.{PIDgrpgrp(1).ORCgrp(1).OBXgrp(k1).OBX}' property='target.{OBXgrp(k1).OBX}' action='set' />
</false>
</if>
</foreach>
<assign value='source.{PID:18}' property='target.{TXA:12.3}' action='set' />
</transform>
}
}
Now, I used PDFs rather than BMPs, and I'm a little OCD, so my output looks slightly different from yours. But it does work. Notice, though, that I used the numeric syntax to reference OBX:5's components. There are no symbolic names for those components in HL7, but they're still recognized using the numeric syntax.
Also, I think one of the OBX:5 components should probably contain "Base64" since that's probably how OBX:5.5 is encoded.
Here's the output:
[screenshot of the output message]
The "length" of the OBX segment is only relevant if you're attempting to treat it as a string. If you treat it as an object and use the GUI's copy rules (which leverage the EnsLib.HL7.Message and EnsLib.HL7.Segment classes' methods), those fields should be readily accessible.
Hi Anthony,
I think the issue is that you're using GetFieldStreamRaw() against the entire OBX segment, when you should be using it against the field that contains the stream: OBX:5.1. The method can take 3 arguments, the 3rd being a variable passed by reference that contains the remainder of the current OBX segment. That variable is of type %String and can be modified to include different values for the remaining fields, and then supplied as the 3rd argument to StoreFieldStreamRaw() ... which you would use to populate OBX:5.5.
These methods are usually used in a code block, where passing a variable by reference is supported (precede it with a period). You'll need to do that with both the first and 3rd arguments in GetFieldStreamRaw().
It's also important to note that once you've used StoreFieldStreamRaw(), the target segment becomes immutable; no further changes can be made to it. That's why the remainder variable is so important as it populates the remainder of the segment at the time the stream is stored to the field.
The DTL flow would look like this:
// Get the stream data (no need to instantiate a stream object in advance)
do source.GetFieldStreamRaw(.tStream,"PIDgrpgrp(1).ORCgrp(1).OBXgrp("_k1_").OBX:5(1).1",.tRem)
//
// Insert code here to modify tRem to accommodate any changes needed to
// fields after OBX:5(1).5
//
// Store the stream to the appropriate target field
do target.StoreFieldStreamRaw(tStream,"OBXgrp("_k1_").OBX:5(1).5",tRem)

Then populate any remaining segments as you normally would.
All Business Host classes that inherit from Ens.Host have the callback method OnProductionStop(). When the production is shut down, that method is called, and in it you can insert code to allow you to control what happens during shutdown of a production.
Edit: OnProductionStop, not OnProductionShutdown
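A minimal sketch of overriding that callback, with the signature as I recall it from Ens.Host (verify against the class definition in your version before relying on it):

```objectscript
Class MyPkg.Operation Extends EnsLib.HL7.Operation.TCPOperation
{

/// Called when the production is stopped; pTimeout is the shutdown timeout
ClassMethod OnProductionStop(pTimeout As %Numeric)
{
    // Insert any cleanup or notification logic here
    $$$LOGINFO("Production stopping; performing cleanup")
}

}
```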
I think we'll need more information to provide an answer.
Since you've indicated that you're using Caché 2012, you don't have IRIS for Health, which you've tagged. Are you working with Ensemble, and looking to report on MRNs received via HL7 and/or other messaging formats into Ensemble?
Or are you working with a custom application built on Caché 2012?
Properties defined in the BPL class can be accessed as process.PropertyName, or as in your case, process.Scope.
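For example, with Property Scope As %String defined in the BPL class, a <code> action could read or set it like this (context.Facility is hypothetical):

```objectscript
 // Inside a BPL <code> action: BP class properties are reachable via "process"
 set process.Scope = context.Facility
 $$$TRACE("Scope is now "_process.Scope)
```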
You cannot import formatted text into Excel from a tab-delimited text file; the file must be created in either native Excel format or HTML.
There are many posts and articles on this Developer Community that discuss the generation of Excel-compatible files that will support text formatting; too many options to list them all here. Search for "Excel files" and you may find an answer that will work for you.
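As one hedged example of the HTML route (the path and styling here are arbitrary): writing an HTML table to a file with an .xls extension lets Excel open it with the text formatting preserved.

```objectscript
// Sketch: generate an Excel-openable HTML table with formatted text
set tFile = ##class(%Stream.FileCharacter).%New()
set tFile.Filename = "/tmp/report.xls"  // hypothetical path
do tFile.WriteLine("<table border='1'>")
do tFile.WriteLine("<tr><th>MRN</th><th>Name</th></tr>")
do tFile.WriteLine("<tr><td>12345</td><td><b>Smith, John</b></td></tr>")
do tFile.WriteLine("</table>")
do tFile.%Save()
```

Excel will warn that the file format doesn't match the extension, but it will render the table with the bold/colored formatting intact.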
As Linux user irisusr:
jeff@host:~$ sudo su - irisusr
irisusr@host:~$ /isc/iris/sys/bin/irisdb -s/isc/iris/sys/mgr
Node: host, Instance: IH
USER>!whoami
irisusr
USER>
That command does get me directly to an IRIS shell prompt when logged into Linux as a user with the same name as an IRIS user.
Yes, I can launch an IRIS Lite Terminal from VS Code.
@John Murray I don't get the cookie errors, but I still get this:
[screenshot of the error]
In my case, on Ubuntu, I get the same behavior but different errors in the console:
[screenshot of the console errors]
The network trace simply shows a continuous stream of GETs:
GET ws://hostname.domain:52773/iterm/pty/?EIO=4&transport=websocket
I ran into this problem as well, and I vaguely recalled that the passthrough file service has issues with multiple targets. So I went to the documentation and tried to find any reference to the need to ensure the operations were called synchronously and (initially) couldn't. So I went ahead and configured the services with Work and Archive directories and waited for something bad to happen.
And of course, something bad did happen.
I later found the file adapter documentation that provides a clear description of how the Work and Archive directories impact synchronous delivery. It seems to indicate that the Work directory is more likely to break sync than the Archive directory, and that you can have an Archive directory configured while still supporting sync as long as the Work directory is not configured.
I get a ModuleNotFoundError with
import Pygments
but not
import pygments
Everything else in the dependencies imports ok.
I normally have OS Auth enabled, but I also tried disabling it with the same result.
After re-installing in %SYS, I'm almost there ...
I'm getting a login screen when launching, but after authenticating I get a black page. When I click the page, I get a flashing cursor in the upper left corner. No IRIS prompt, though.
The audit log is showing this error:
[screenshot of the audit log error]
This is IRIS for Health 2024.2, running on Ubuntu 24.04.1 LTS.
Oh yes, it finally settled down and worked normally, but it seemed to be spending a lot of time talking to the servers before displaying the package listing. The servers are on AWS and I'm connected via a VPN, but my connection is quite fast for a home office and I've never noticed that before.
I've also exited and relaunched VS Code since without any significant delay, so it must've been a one-off.
I upgraded to the new beta that uses the proposed APIs (v2.12.9-beta.1), and when I reconnected to the server, it took quite some time (5-10 minutes) before I could get to work. Since then, performance has been normal. Is there some new feature that indexes classes/routines locally or something? I couldn't find anything in the usual logs that indicated what was going on.