That's very true, rawContent is limited to 10,000 characters.

If you have messages that are larger than that, then you could do it this way...

ClassMethod DisplaySegmentStats()
{
  write !!,"Segment Statistics...",!!
  // loop over every HL7 message ID using an embedded SQL cursor
  &sql(declare hl7 cursor for select id into :id from EnsLib_HL7.Message)
  &sql(open hl7)
  &sql(fetch hl7)
  while SQLCODE=0
  {
    // open the message and rebuild its full raw content,
    // avoiding the 10,000 character limit on rawContent
    set msg=##class(EnsLib.HL7.Message).%OpenId(id)
    set raw=msg.getSegsAsString(id)
    // segments are delimited by carriage returns; the trailing
    // delimiter leaves an empty last piece, hence the -1
    for i=1:1:$l(raw,$c(13))-1
    {
      // the segment name is the first pipe-delimited field
      set seg=$p($p(raw,$c(13),i),"|")
      set stats(seg)=$g(stats(seg))+1
    }
    &sql(fetch hl7)
  }
  &sql(close hl7)
  zw stats
}

Hi James,

Nothing simple that I can think of (perhaps DeepSee?).

Alternatively, I normally bash out a few lines of code, something like this... 

ClassMethod DisplaySegmentStats()
{
  write !!,"Segment Statistics...",!!
  // loop over every message's rawContent using an embedded SQL cursor
  // (note: rawContent is truncated at 10,000 characters)
  &sql(declare hl7 cursor for select rawContent into :raw from EnsLib_HL7.Message)
  &sql(open hl7)
  &sql(fetch hl7)
  while SQLCODE=0
  {
    // count each segment name, taking the first pipe-delimited
    // field of each carriage-return-delimited segment
    for i=1:1:$l(raw,$c(13))-1
    {
      set seg=$p($p(raw,$c(13),i),"|")
      set stats(seg)=$g(stats(seg))+1
    }
    &sql(fetch hl7)
  }
  &sql(close hl7)
  zw stats
}

Sounds like you might be adding unnecessary complexity.

> What is the most efficient way to process this large file?

That really depends on your definition of efficiency.

If you want to solve the problem with the least amount of watts, then solving it with a single process would be the most efficient.

If you add more processes then you will be executing additional code to co-ordinate responsibilities. There is also the danger that competing processes will flush data blocks out of memory in a less efficient way.

If you want to solve the problem with speed then it's important to understand where the bottlenecks are before trying to optimise anything (avoid premature optimisation).

If your process is taking a long time (hours not minutes) then you will most likely have data queries that have a high relative cost. It's not uncommon to have a large job like this run 1000x quicker just by adding the right index in the right place.
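
For example, a minimal sketch of adding an index (the class and property names here are hypothetical):

Class Demo.LargeImport Extends %Persistent
{

Property AccountNumber As %String;

/// Index to support the frequent lookups on AccountNumber
Index AccountNumberIdx On AccountNumber;

}

After compiling, build the index for the rows that already exist with: do ##class(Demo.LargeImport).%BuildIndices()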

Normally I would write a large (single) process job like this and then observe it in the management portal (System > Processes > Process Details). If I see it's labouring over a specific global then I can track back to where the index might be needed.

You will then get further efficiency / speed gains by making sure the tables are tuned and that Caché has as much memory configured for global buffers as you can afford.
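
For the tuning part, a one-off call along these lines regenerates the statistics the query optimiser relies on (the table name is hypothetical):

do $SYSTEM.SQL.TuneTable("Demo_Data.LargeImport")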

If you are writing lots of data during this process then also consider using a temporary global that won't hit the journal files. If the process is repeatable from the source file then there is no danger of losing these temp globals during a crash, as you can just restart the job after the restore.
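
A minimal sketch of the temp global idea; globals whose names start with CacheTemp are mapped to the CACHETEMP database by default, which is never journaled (the global and subscript names here are hypothetical):

// staged rows land in CACHETEMP, so these writes never touch the journals
set ^CacheTempMyLoad(jobId,rowNum)=$lb(field1,field2)

// clear the staging area once the job completes (or before a restart)
kill ^CacheTempMyLoad(jobId)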

Lastly, I would avoid using Ensemble for this. The last thing you want to do is generate 500,000 Ensemble messages if there is no need to integrate the rows of data with anything other than internal data tables.

Correction. It's perfectly fine (for Ensemble) to ingest your file and process it as a single message stream. What I wouldn't do is split the file into 500,000 messages when there is no need to do this. Doing so would obviously cause additional IO. 

Probably not, looking at the underlying code.

I would say it's being raised by cspxmlhttp.js when it gets a non-200 status code.

If there were a server-side option then we would probably see some kind of conditional around either of these two functions...

function cspProcessResponse(req) {
  if(req.status != 200) {
    var errText='Unexpected status code, unable to process HyperEvent: ' + req.statusText + ' (' + req.status + ')';
    var err = new cspHyperEventError(req.status,errText);
    return cspHyperEventErrorHandler(err);
  }

...

}

function cspHyperEventErrorHandler(error)
{
  if (typeof cspRunServerMethodError == 'function') return cspRunServerMethodError(error.text,error);
  alert(error.text);
  return null;
}

Hi Bapu,

There is a really simple solution, no Zen required.

Put some pre tags on your web page...

<pre id="json-preview-panel"></pre>

If your JSON is an object then...

document.getElementById("json-preview-panel").innerHTML=JSON.stringify(json, undefined, 2);

Note that the third argument in stringify() is the number of spaces to insert for prettifying.

If your JSON is a string already then you will need to convert it to an object and then back again...

document.getElementById("json-preview-panel").innerHTML=JSON.stringify(JSON.parse(json),undefined,2);

Sean.

Hi Paul,

Quotes inside quotes need to be escaped. Your condition is only looking for one double quote; you will need to try this...

source.{PV1:DischargeDateTime()}=""""""
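
As a quick sanity check in a terminal — ObjectScript escapes a quote inside a string literal by doubling it, so the six-quote literal above is a two-character string containing two double quotes:

// the outer pair delimits the string, each inner doubled pair escapes to one quote
write $length("""""")  // prints 2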

On a side note, quotes sent in HL7 can be used to nullify a value, e.g. if a previous message had sent a discharge date and time by mistake then "" would be a request to delete that value (as opposed to an empty value).

Sean.

Try this...

ClassMethod Transform(source As EnsLib.HL7.Message, Output target As EnsLib.HL7.Message) As %Status
{
    // work on a full clone so the source message is left untouched
    set target=source.%ConstructClone(1)
    // find the first OBX segment (idx is its position, sc the status)
    set seg=target.FindSegment("OBX",.idx,.sc)
    while idx'="",$$$ISOK(sc)
    {
        // build an NTE from the OBX observation value (OBX-5),
        // numbering each NTE with an incrementing set id
        set ntestr = "NTE|"_$I(ident)_"|"_seg.GetValueAt(5)
        set nte = ##class(EnsLib.HL7.Segment).ImportFromString(ntestr,.sc,source.Separators) if $$$ISERR(sc) goto ERROR
        // replace the OBX in place with the new NTE
        set sc=target.SetSegmentAt(nte,idx) if $$$ISERR(sc) goto ERROR
        // continue the search for the next OBX from the current position
        set seg=target.FindSegment("OBX",.idx,.sc)
    }
ERROR
    quit sc
}
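
For example (a hypothetical message), an input segment of OBX|1|TX|CODE^Text||Some comment text would come out as NTE|1|Some comment text.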

Hi Evgeny,

Not exactly one command, but it can be done on one line...

set file="foo.zip" do $System.OBJ.ExportToStream("foo*.GBL",.s) open file:("WNS":/GZIP=1) use file do s.OutputToDevice() close file do s.Clear()

This should work in reverse: open the file with the GZIP flag, read the contents into a temporary binary stream, and then use $System.OBJ.LoadStream() on that temp binary stream.
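
Something like this for the import side (an untested sketch; it assumes the chunked read loop ends with an <ENDOFFILE> error, which the try/catch absorbs):

ClassMethod ImportGzipGlobals(file As %String = "foo.zip") As %Status
{
    set stream=##class(%Stream.TmpBinary).%New()
    open file:("RS":/GZIP=1):5
    if '$test quit $$$ERROR($$$GeneralError,"Unable to open "_file)
    try {
        use file
        // read fixed-size chunks until <ENDOFFILE> ends the loop
        for  { read chunk#32000 do stream.Write(chunk) }
    }
    catch {}
    close file
    do stream.Rewind()
    quit $System.OBJ.LoadStream(stream,"ck",.errorlog,.loaded)
}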

Sean.

Hi Scott,

Sounds like classic teapotism from the vendor.

Typically at this stage I would put Wireshark on the TCP port so that I have absolute truth as to what's going on at the TCP level.

If you see no evidence of these messages in Wireshark then you can bounce the problem back to the vendor with the Wireshark logs.

If you see evidence of messages, then you will have something more to go on.

One thing to look out for is whether the HL7 messages are correctly wrapped. If you don't see evidence of the terminating 1c 0d hex values then the message will get stuck in the buffer, and if the vendor then drops the connection that buffered data can get discarded. You might see warnings relating to this, something like "discarding TCP buffer".
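
For reference, the standard HL7 MLLP framing wraps each message as <VT> message <FS><CR>, so in ObjectScript terms (hl7 being a hypothetical message string):

// $c(11) is hex 0b, $c(28,13) is the trailing hex 1c 0d you should see in the capture
set framed = $c(11)_hl7_$c(28,13)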

The fact that they think they are getting HL7-level ACKs back is a bit odd. Again, with Wireshark you will be able to prove or disprove their observations. There is a scenario whereby a timed-out connection can collect the previous message's ACK; again, it would be obvious once you look at the Wireshark logs.

If you need help with Wireshark then I can dig out some notes that might help.

Sean.