Well, everyone has a coding style, and ObjectScript offers several - I could have made this prettier (to me, anyway), as I'm more accustomed to the single-letter-command and dot-loop styles... but I tried to keep this in your coding style.

My code isn't pretty - I focused on making it (barely) functional and demonstrative of the $DATA function, which tells you whether a global node holds a value, has further subscripts beneath it, or both. The documentation page is here:

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_FDATA

Anyway, here's my code - I didn't have a chance to create a class method (again, I prefer the older styles), but just copy and paste the body of the routine into your method and it should work. Again, it's not pretty, but it will demonstrate what you need.

If you wanted to make this more efficient, recoding it to handle subscripts recursively would be much shorter and could handle any number of subscripts, not just three - see the sketch after the routine below.

ZSUBFINDER ; demo of $DATA across three subscript levels of ^Book
    ; note: $DATA can also return 11 (a node with BOTH a value and descendants);
    ; for simplicity this demo only tests for 1 (value only) and 10 (descendants only).
    set subscript = ""
    for {
        set subscript = $order(^Book(subscript))
        quit:(subscript = "")
        set moresub = $data(^Book(subscript))
        if moresub = 10 {
            set sub2 = ""
            for {
                set sub2 = $order(^Book(subscript,sub2))
                quit:(sub2 = "")
                set moresub2 = $data(^Book(subscript,sub2))
                if moresub2 = 10 {
                    set sub3 = ""
                    for {
                        set sub3 = $order(^Book(subscript,sub2,sub3))
                        quit:(sub3 = "")
                        set moresub3 = $data(^Book(subscript,sub2,sub3))
                        if moresub3 = 1 {
                            write !, "subscript=", subscript, ", sub2=", sub2, ", sub3=", sub3, ", value=", ^Book(subscript,sub2,sub3)
                        }
                    }
                }
                else {
                    if moresub2 = 1 {
                        write !, "subscript=", subscript, ", sub2=", sub2, ", value=", ^Book(subscript,sub2)
                    }
                }
            }
        }
        else {
            if moresub = 1 {
                write !, "subscript=", subscript, ", value=", ^Book(subscript)
            }
        }
    }
    quit
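And since I mentioned recursion above, here's a minimal sketch of that shorter version (untested; WALK is my own label name, and it assumes the code is saved in a routine so it can call itself):

WALK(glo) ; glo is a global reference passed as a string, e.g. "^Book"
    new sub,ref
    set sub=""
    for {
        set sub=$order(@glo@(sub))
        quit:sub=""
        set ref=$name(@glo@(sub))
        ; $DATA: 1 or 11 = node holds a value; 10 or 11 = node has descendants
        if $data(@ref)#2 write !,ref,"=",@ref
        if $data(@ref)>9 do WALK(ref)
    }
    quit

Kick it off with do WALK^YOURROUTINE("^Book") and it prints every node holding a value, at any depth.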

Hope this helps!

Would this section of documentation help with your situation?

https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_create#GJSON_create_serialize_streams

(I'm aware this is from the latest documentation online, but I did confirm that this section exists in my HealthShare 2017.2 documentation as well.)

That section appears to describe how to save the JSON to a temporary file on the filesystem, then re-open the file as an object and access the key without hitting <MAXSTRING>. Yes, I'm aware that causes extra storage I/O, so on busy servers it could hurt I/O performance.

It looks like they also offer an alternative: changing the JSON entity to a %Stream.GlobalCharacter, which can hold data much larger than the <MAXSTRING> limit and shouldn't add nearly as much storage I/O as the file-based approach in the previous paragraph.
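If you go that route, the core of it should look something like this (a minimal sketch I haven't run on my 2017.2 instance; the variable names are mine):

 ; parse JSON too large for a single string directly from a stream
 Set stream=##class(%Stream.GlobalCharacter).%New()
 ; ... populate the stream with the JSON text ...
 Set obj=##class(%DynamicAbstractObject).%FromJSON(stream)
 ; and serialize back out to a stream instead of a string
 Set out=##class(%Stream.GlobalCharacter).%New()
 Do obj.%ToJSON(out)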

Hope this helps!

43 characters, if you have to read in the variable; -4 (39) if it's assumed to already be there; +4 (47) if you need to see a carriage return between the input and the output. (For each digit in the input, the code writes that digit as many times as its value, so an input of 123 prints 122333.)

R Z F I=1:1:$L(Z) S J=$E(Z,I) F K=1:1:J W J

Here's the '-4' code if you assume the Z variable has the initial integers:

F I=1:1:$L(Z) S J=$E(Z,I) F K=1:1:J W J

And, here's the +4 code to add the carriage return between input & output:

R Z W ! F I=1:1:$L(Z) S J=$E(Z,I) F K=1:1:J W J

Enjoy!

[edited to add other cases.]

Brandon,

All may not be lost; you may just need a different (smaller) FOIA database. Indian Health Service uses an offshoot of the VA's VistA database which IHS calls RPMS (Resource and Patient Management System). As IHS is a government entity, they also have to release a FOIA version of their database, and last I checked it's around 4 GB, which should allow you to run it under the free release of IRIS.

I will warn you - a lot of the menus will look different, and there's a lot of added/modified functionality from customizations made to bring the database in line with the needs of the Native American population, but the core is still VistA and can still be used free as a learning tool.

To download it, go here: https://www.ihs.gov/rpms/applications/ftp/

and click on the FOIA link to see the .zip downloads. The most recent version was released on 03 March 2021. Oh, and for some reason, Firefox on Linux tries to download the .zip as a .pdf, so either use Chrome or download it and manually change the extension back to .zip to extract the database.

Hope this helps!

The 31-character limitation is there in 2018 (I'm using 2017 for this demonstration) - although anything longer doesn't technically error out, only the first 31 characters are recognized.

A quick demo I pulled from a test server:

NAME>s ^HH.LookupLabResultsToPhysiciansD(0)="fluffy"
 
NAME>zw ^HH.LookupLabResultsToPhysiciansD
^HH.LookupLabResultsToPhysicians(0)="fluffy"
 
NAME>s ^HH.LookupLabResultsToPhysiciansDoTryToDemonstrateLongGlobalNames(0)="More Fluffy"
 
NAME>zw ^HH.LookupLabResultsToPhysiciansDoNoFluffy
^HH.LookupLabResultsToPhysicians(0)="More Fluffy"

Everything past the 31st character is ignored - you can see in the ZWRITE output that the final 'D' (and anything after it) isn't displayed, and you can type all sorts of characters after that point and still change the same 'base' 31-character global.

InterSystems probably put that check in because folks were using longer global names thinking all of the characters were significant, but some data was getting changed inadvertently.

Does that help?

Give the ^rINDEX global a look.

I made a QTEST routine in Studio, and saved it but did not compile it.

QTEST ; JUST A TEST.
 Q
EN ; JUST A TEST.
 Q
 ;

and then I executed this at the programmer prompt:

ZW ^rINDEX("QTEST")
^rINDEX("QTEST","INT")=$lb("2021-08-06 13:21:58.061867",49)

I changed the routine a bit:

QTEST ; JUST A BIGGER TEST.
 Q
EN ; JUST A BIGGER TEST.
 Q
 ;

and I ZW'd the global again:

ZW ^rINDEX("QTEST")
^rINDEX("QTEST","INT")=$lb("2021-08-06 13:24:50.38743",63)

It may be safe to assume that the second $LISTBUILD element (49, then 63 after the edit) is the length, or number of bytes of storage required.

Now once I compile the routine in Studio, and ZW the global again, this is the output:

ZW ^rINDEX("QTEST")
^rINDEX("QTEST","INT")=$lb("2021-08-06 13:24:50.38743",63)
^rINDEX("QTEST","OBJ")=$lb("2021-08-06 13:26:30",152)
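If you want to pull those values out programmatically, $LIST should do it (a quick sketch, resting on my assumption above about what the second element means):

 Set info=^rINDEX("QTEST","INT")
 Write $List(info,1),!  ; the save timestamp
 Write $List(info,2),!  ; apparently the size in bytes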

Hope this helps!

Yes, you can!

Access your XML file as a binary stream, then use the $System.OBJ.LoadStream() method to load & compile the stream.

 Set STREAM=##class(%Stream.FileBinary).%New()
 Set STFILE=STREAM.LinkToFile("c:\where\is\your\file.xml")
 Do STREAM.Rewind()
 ; test loading the stream first - it doesn't actually create anything yet. The final '1' parameter means test & report.
 Do $System.OBJ.LoadStream(STREAM,"ckfsbry/lock=0",.ERR,.LOADED,1)
 ; You can check the ERR variable to see if it errors out, with no changes to the system. Assuming none, rewind & load it "for real":
 Do STREAM.Rewind()
 Do $System.OBJ.LoadStream(STREAM,"ckfsbry/lock=0",.ERR,.LOADED)

I haven't tested this in this manner, as I actually have the XML Base-64 encoded in a global... but hopefully this should get you started. PM me if you'd like to see my full code; it's too long to add here.

I'll leave the scheduling part as an exercise to the reader. :-)

Hope this helps!

I'm not sure I understand your first question. If you're asking whether there's a single "$command" that will just give you the length of a subscript - I don't think so, since a global can have multiple subscripts and I don't think there'd be a command that would let you pick which one you wanted output. However, if you're asking whether you can get the length of a subscript in a single line of code, then you certainly can - M is quite powerful in what you can do at the command prompt without utilizing Studio/Atelier. Imagine you have these globals:

 S ^ZTEST("ME")=1
 S ^ZTEST("YOU")=2
 S ^ZTEST("THEM")=3
 S ^ZTEST("OTHERS")=4

If you run this one line:

S I="" F  S I=$O(^ZTEST(I)) Q:I=""  W "SUB: "_I_" WITH LENGTH OF: "_$L(I),!

you'll get the output of each of the subscripts and the length of each:

SUB: ME WITH LENGTH OF: 2
SUB: OTHERS WITH LENGTH OF: 6
SUB: THEM WITH LENGTH OF: 4
SUB: YOU WITH LENGTH OF: 3

The F is a FOR loop, the $O(RDER) can retrieve the next or previous subscript, and the $L(ENGTH) function gives you the length.

For the 2nd half of your question, you surely can test the maximum length of a subscript using a Try/Catch test. Try this code:

ZSUBTEST ; TEST LENGTH OF SUBSCRIPT.
 N I,J,J2
 S I=1,J="B",J2=0
 DO {
     TRY {
         S ^ZTEST(J)="YUP"
         S I=I+1,J=J_"A"
         ; W I,*9,J,!
     } CATCH {
         W "A "_$ZERROR_" occurred, caused by a subscript of "_I_" characters.",!
         S J2=1
     }
 } WHILE 'J2
 Q

You should see the output:

A <SUBSCRIPT>ZSUBTEST+6^ZSUBTEST ^ZTEST("BAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAA occurred, caused by a subscript of 505 characters.

Or something similar. The program simply starts with a subscript one character long and keeps adding a character until the error occurs (caught by the CATCH part of the code) and displays the length of the subscript it couldn't handle.

Hope this helps!

This is for educational use only, so if you break your system, lose your cat, or your ice cream goes all melty by using my example below, please don't hold me accountable.

Also, I would like to mention that I borrowed the base structure from the ^%SS utility in the %SYS namespace.

I would also like to mention that in my *very* limited testing, you'll only have a CSP Session ID while there's actually "stuff" going on in the CSP Page. If you're executing a two-minute query in a CSP Page, whilst that query is running you can see (and clobber) that session. If the query is complete and you're just scrollin' round the results not generating new HTTP requests into the CSP Page, you may not be able to see the CSP Session ID.

That said, this did work for me to kill a CSP Session mid-query:

ZZPROCKILL ; kill CSP sessions mid-request - educational use only!
 Set OLDNS=$NAMESPACE
 Set $NAMESPACE="%SYS"
 Set query="",parm=1
 Set Rset=##class(%Library.ResultSet).%New("SYS.Process:SS")
 Do Rset.Execute(parm)
 Set POP=0
 While Rset.Next() {
     Set Username=Rset.Get("User")
     Continue:Username'="UnknownUser"
     Set NameSpace=Rset.Get("Namespace")
     If NameSpace="" Set NameSpace="%SYS"
     ; i (Dir'=""),(NameSpace'=Dir) continue
     Set Pid=Rset.Get("Process")
     Set Process=##CLASS(%SYS.ProcessQuery).Open(Pid)
     Set SID=Process.CSPSessionID
     Do Process.%Close()
     ; JobType comes from the ^%SS original I borrowed from; $Get keeps this from erroring if it was never set
     Write !,$J($s(($zversion(1)=1):$ZH(+Pid),1:Pid),8)_$Case($Get(JobType),1:"*",:""),?20,Username,?40,NameSpace,?50,SID,!
     If SID'="" Do $SYSTEM.Process.Terminate(Pid)  ; <-- this is the line that actually terminates the process
 }
 Do Rset.%Close()
 Set $NAMESPACE=OLDNS
 Q
 ;

I have marked, with a comment, the line of code that actually terminates the process, and I did make one further assumption in my code: that the CSP sessions would be run under the user UnknownUser. If your system uses a different user to execute CSP, you'll need to change the username in the Continue: line.

If you want to just see if you can find CSP sessions and not actually terminate them, comment out that final marked line in the While loop (the one that starts with If SID'="").

Hope this helps!

Mr. Petrole,

Per the documentation here: https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&PRIVATE=1&CLASSNAME=%25Library.DynamicAbstractObject#%25FromJSON

is it possible that at some point your data stream is not encoded as UTF-8? That seems to be a requirement of the %FromJSON method, and if your data string isn't UTF-8 you may need to employ a call to $ZCONVERT, or for a "non-compliant" stream you may need to set the TranslateTable attribute of the stream to "UTF8."
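In code, that might look something like this (a sketch I haven't run against your data; "stream" is assumed to be the character stream holding your JSON, and "raw" a string that arrived in some other encoding):

 ; tell the stream its bytes are UTF-8 before parsing
 Set stream.TranslateTable="UTF8"
 Set obj=##class(%DynamicAbstractObject).%FromJSON(stream)
 ; or, for a plain string, convert it to the internal encoding first
 Set json=$ZCONVERT(raw,"I","UTF8")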

Hope this helps!

@Jack Huser ,

Instead of using a regular pipe, have you tried a Command Pipe ( |CPIPE| )?

OPEN "|CPIPE|321":"/trak/FRXX/config/shells/transportIDE.sh ""login|password|NOMUSUEL|PRENOM|NOMDENAISSANCE|1234567891320|199999999999999999999999|09%2099%2099%2099%2099|31%2F12%2F1999|242%20IMPASSE%20DES%20MACHINCHOOOOOOOOSE%20||99999|SAINT%20BIDULETRUC%20DE%20MACHINCHOSE|isc%24jhu|123456|123456798"" 2>&1":100

Per this documentation: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GIOD_interproccomm a CPIPE doesn't suffer from the 256(ish)-character limit, because the command to run is passed as the second parameter of the OPEN rather than as part of the device name.
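Once the pipe is open, reading the script's output back is straightforward - a rough sketch (untested; it assumes the OPEN above succeeded, and relies on the <ENDOFFILE> error to end the loop):

 Set dev="|CPIPE|321"
 Try {
     For {
         Use dev Read line
         Use 0 Write !,line
     }
 } Catch {
     Use 0  ; <ENDOFFILE> lands here once the command finishes
 }
 Close dev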

Hope this helps!

There is a way to "re-route" telnet over a secure SSL/TLS tunnel with a product called 'stunnel.' There are free stunnel implementations for Windows, Linux and AIX, and I have tested it across architectures (Windows client to an AIX server, Linux client to Windows server, etc.), so it's basically architecture agnostic.

You will have to install stunnel on the server and every client so there is some work, but it can be done "over time" so there shouldn't be much downtime.

First, install stunnel on your server and configure it for a high port - for this example, let's use 6707. Configure stunnel to accept encrypted TLS traffic on port 6707 and 're-route' the decrypted traffic to port 23, but don't disable port 23 just yet; you'll be able to use both ports temporarily until you get all the workstations switched. On each workstation, install stunnel and re-route unencrypted port 23 traffic to be encrypted and sent out on port 6707. Once you get all of the workstations converted (which means you're no longer sending unencrypted traffic), reconfigure the server to disable any input on port 23 on your network card(s) and rely solely on the TLS traffic on port 6707. Keep in mind, you can't disable port 23 on 127.0.0.1 - stunnel will need that, but as that's wholly internal to the OS it should satisfy your network scanner.
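To make that concrete, here's roughly what the two configuration files look like (a sketch only - the certificate path, service names, and hostname are placeholders, not values from any real setup):

; server-side stunnel.conf: decrypt port 6707 and hand it to the local telnet service
cert = /etc/stunnel/stunnel.pem
[telnet-in]
accept = 6707
connect = 127.0.0.1:23

; client-side stunnel.conf: catch local telnet and encrypt it out to the server
client = yes
[telnet-out]
accept = 127.0.0.1:23
connect = your.server.example:6707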

If stunnel & your firewall are set up correctly, it works invisibly with Cache / HealthShare / Iris and every telnet client I've tested (NetTerm, putty, a couple of others that don't immediately come to mind), because they still talk on port 23 locally while stunnel does all the encryption and rerouting automatically. The only issue I've seen is that if you _already_ have network issues and timeouts, stunnel can experience disconnects more often than straight telnet due to the added overhead of the encryption.

Hope this helps!

ED Coder,

Your post is a bit confusing (to me, anyway), as I don't work with HL7 on a daily basis and I'm not sure what you mean by $GLOBAL, so I am going to try to clarify -- but I'll warn you, my crystal ball is usually pretty cloudy, so I might be way off. If I am, I apologize in advance. But I hope this helps anyway.

Now, I'm going to assume that instead of $GLOBAL you really meant ^GLOBAL for where your info is stored -- as in, you've set up this info beforehand:

Set ^GLOBAL("123","bone issue")="Spur"   ; Sorry, but I added some data. You'll see why later.
Set ^GLOBAL("234","joint issue")="Ache"  ; Also on this one.

I'm going to assume one other thing for this post: that the local variable name that contains your data is called HLSEG. We can simulate this with this command:

Set HLSEG="$GLOBAL(""123"")"

Now, I realize that you may not be able to use carets (where I work we call 'em "uphats" -- don't ask me why...) in HL7 data, so that's why you may have substituted the '$' symbol to indicate a global.

So, with those assumptions in place, here's a possible process on how you could get the data from a global into your segment.

First, if we assume that if the data doesn't start with a '$' symbol (indicating the need to reference a global) then just exit the process:

If $Extract(HLSEG,1)'="$" Quit

$Extract (in this case) gets just the first character of the HL Segment and if it isn't a '$' character then just quit. Here's documentation on $Extract: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_FEXTRACT

If it does start with a '$' then we need to turn that into a carat. If your HL7 data can't contain '$' symbols beyond the first one, we can use the Translate function (documentation here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=RCOS_ftranslate )

 Set HLSEG=$Translate(HLSEG,"$","^")

That turns *all* '$' symbols to '^' in your variable, so if your HL7 data _could_ contain more '$' characters, this won't work for you. In that case, we'll just strip off the first character (no matter what it is), save the rest and prefix that with a '^':

 Set HLSEG="^"_$Extract(HLSEG,2,$Length(HLSEG))

Now that we have the actual global name in HLSEG, we can use the indirection operator to access the global through HLSEG, and use the $Data function (documentation here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=RCOS_fdata ) to see if that global exists:

 If $Data(@HLSEG)=0 Quit ; no data here.

$Data will return a '10' or '11' if there's a subnode (for ^GLOBAL("123") the subnode is "bone issue") so if there is, let's put that in the local variable SUBNODE - for that we need the $Order (documentation here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=RCOS_forder ) function:

 If $Data(@HLSEG)>9 Set SUBNODE=$Order(@HLSEG@(""))

Now SUBNODE will contain "bone issue". We can use the $Data function to see if that subnode actually contains data [[ this is why I changed your example a bit, to make it easier to demonstrate this ability ]], and if it does we'll put that in the ISSUE variable:

 If $Data(@HLSEG@(SUBNODE))#2=1 Set ISSUE=@HLSEG@(SUBNODE)

Now the variable ISSUE will contain "Spur".

Now that you have the SUBNODE and ISSUE local variables populated, you should be able to use them to assign what you need in your segments - the sketch below pulls these steps together.
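If it helps, here's a sketch combining those steps into one extrinsic function - GETISSUE is my own (hypothetical) label name, and it assumes the single-subnode layout from your example:

GETISSUE(HLSEG) ; returns the stored value, or "" if anything is missing
 New SUBNODE
 If $Extract(HLSEG,1)'="$" Quit ""
 Set HLSEG="^"_$Extract(HLSEG,2,$Length(HLSEG))
 If $Data(@HLSEG)<10 Quit ""             ; no subnode to walk into
 Set SUBNODE=$Order(@HLSEG@(""))
 If $Data(@HLSEG@(SUBNODE))#2=0 Quit ""  ; the subnode holds no value
 Quit @HLSEG@(SUBNODE)

With your first example, $$GETISSUE^YOURRTN(HLSEG) would return "Spur".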

Again, it's possible I'm way off base and this didn't help you - if that's the case, I apologize, but you might want to provide a more descriptive example of what you're trying to accomplish. I also apologize in advance if any of my code examples contain bugs. But I really do hope it helps you!

Not sure if this will help, but Ensemble does (or at least did) support SNMP, so if your organization already runs a network-monitoring platform like SolarWinds, OpenNMS or the like, you may be able to import a custom OID into that tool and let it do the monitoring for you.

That said, I do _not_ know if Ensemble's SNMP support is granular enough to be able to report on HL7 statistics, but at least it might be a place to start.

Full disclosure: I've never worked with Ensemble's SNMP support before, although I've read a bit about it. I was "more than passable" working with SolarWinds about a decade ago, enough that I could create custom OIDs and monitor specific things through custom Python scripts.

Hope this helps!

To add to what Peter said, there is a clue in the Read method documentation, snippet follows:

If no len is passed in, ie. 'Read()' then it is up to the Read implementation as to how much data to return. Some stream classes use this to optimize the amount of data returned to align this with the underlying storage of the stream.  

[[ Extra emphasis mine. ]]
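In practice, that means passing an explicit length is the predictable way to walk a large stream - a quick sketch, with 32000 as an arbitrary chunk size:

 Do stream.Rewind()
 While 'stream.AtEnd {
     Set chunk=stream.Read(32000)  ; explicit len, rather than trusting the default
     ; ... process chunk ...
 }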

And, if for some reason the FindAt method was called, it reads the stream in chunks of at most 20000 characters, snippet follows:

Method FindAt(position As %Integer, target As %CacheString, ByRef tmpstr As %CacheString = "", caseinsensitive As %Boolean = 0) As %Integer
{
If caseinsensitive Set target=$zconvert(target,"l")
; ... truncated for brevity ...
While '..AtEnd {
Set tmp=..Read(20000)
If caseinsensitive Set tmp=$zconvert(tmp,"l")
Set tmpstr=$extract(tmpstr,*-targetlen+2,*)_tmp

[[ extra emphasis also mine. ]]

Hope this helps!

Vitaliy,

Thanks for the code, but how would I integrate changing the TranslateTable in the DTL of my Production Process? That's where the hash is getting created; I tried several permutations of ' set source.Stream.TranslateTable="" ' but all gave me a <PROPERTY DOES NOT EXIST> error. The CharEncodingTable is a read-only internal value, and trying to change that gives me a <CANNOT SET THIS PROPERTY> error.

Although, looking at the source code of the  Ens.StreamContainer class, that might give me an idea how to rewrite the class _easily_ to accomplish what I need...

Thanks for the input!

Yes, it recreates the file in the same folder, and the production then re-sends the file (with the randomized filename) via SFTP to the remote server.

If it makes a difference, the production is on a Windows 10 system, although I have access to a Linux server for testing as well.

Settings:

File Path: C:\Export\

Archive Path: C:\Export\CSV_Complete

[[ I have two different business services looking for two different extensions ]]

Work Path: [[ Null ]]

I don't see a 'DeleteFromServer' setting... will continue digging.

Also, Confirm Complete: Readable (default)

File Access Timeout: 2

Hope this helps, and thanks!