Problem is this: at the point where I have access to do this, the data is contained in a stream. Further complicating things, the $c(26) is actually encoded as \u001a, because the contentType of the response from the REST service is application/json, which automatically escapes these characters using JSON notation.

I wanted to implement a more general solution for this by removing all non-printable characters; however, it seems that I need to do this stripping in the transform, where the data actually contains the non-encoded, non-printable characters.

I have a simple method that I have created to remove these characters:


ClassMethod stripNonPrintables(string As %String) As %String
{
    // Build the list of control characters to remove: $c(0)-$c(8), $c(11), $c(12), and $c(14)-$c(31).
    // Tab ($c(9)), line feed ($c(10)), and carriage return ($c(13)) are deliberately left alone.
    set chars=""
    for i=0:1:8,11,12,14:1:31 set chars=chars_$c(i)
    // With only two arguments, $translate removes every character in chars from string
    quit $tr(string,chars)
}

So, in places where we are seeing these characters, we can simply call this strip method. It's not the solution I wanted to implement, but it will work.
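For illustration, a call inside a DTL code action might look something like this (the utility class name My.Util.Strings and the source/target property names are hypothetical, not from the actual transform):

// hypothetical DTL "code" action: clean one field before it lands in the SDA3 target
set target.NoteText = ##class(My.Util.Strings).stripNonPrintables(source.NoteText)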


Here's what is happening. The REST service is returning data that appears to have been encoded using escape sequences. For example, line feeds are changed to \n, carriage returns to \r, and other non-printable characters are changed to character codes, such as $c(26) becoming \u001a.

These sequences are then (apparently) automatically translated back to their regular ASCII characters when I use the %DynamicObject %FromJSON method to convert the JSON stream to an object.
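As a rough sketch of that step (jsonStream and noteText are placeholder names, not the real stream or field, and My.Util.Strings is the same hypothetical utility class as above):

// %FromJSON parses the stream; any \u001a escapes come back as raw $c(26) characters in the string values
set obj = ##class(%DynamicObject).%FromJSON(jsonStream)
// so the cleanup has to happen here, on the decoded value
set cleaned = ##class(My.Util.Strings).stripNonPrintables(obj.noteText)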

We then take that object and pass it to a DTL transform that converts it to another object (in this case an SDA3 set of containers), and the non-printable characters (specifically $c(26)) are throwing off the generation of XML, which in itself seems correct, because I don't think $c(26) is allowed in XML.

What I want to do is get rid of these non-printable characters (other than things like CR and LF) so that the JSON can be converted to XML properly.

Peter-

I can think of two ways: one using the plugin, and a second by just adding a new dimension to the cube that compares these two values. Would there be any benefit to doing it either way? I would think there is less coding in the case of adding a dimension to the cube.

I found what I was looking for. I was looking at EnsLib.HL7.Message for a way to correlate it to Ens.MessageHeader, when I should have been looking in the reverse direction.

In Ens.MessageHeader there is a field, MessageBodyClassName, which contains the name of the class that contains the message body; in this case, EnsLib.HL7.Message.

There is also a field in Ens.MessageHeader called MessageBodyId which contains the ID of the corresponding message body.
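So, as a sketch, correlating the two tables could look something like this (the SourceConfigName value is just an example interface name, not one from your production):

// join the header table to the HL7 body table on MessageBodyId,
// filtered to headers whose body class is EnsLib.HL7.Message
set sql = "SELECT hdr.ID, hdr.TimeCreated, hdr.SourceConfigName, hdr.TargetConfigName, msg.DocType"
set sql = sql_" FROM Ens.MessageHeader hdr"
set sql = sql_" JOIN EnsLib_HL7.Message msg ON msg.ID = hdr.MessageBodyId"
set sql = sql_" WHERE hdr.MessageBodyClassName = 'EnsLib.HL7.Message' AND hdr.SourceConfigName = ?"
set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "HL7.In.FromRegistration")
do rs.%Display()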


Thanks Bachhar, 

Expanding on this a bit, we also need to be able to query the message store for messages we have sent to or received from a given interface. These are all HL7 messages, which means they are stored in EnsLib.HL7.Message.

I'm assuming there is a way to correlate a message in EnsLib.HL7.Message with either Ens.MessageBody or Ens.MessageHeader where regular Ensemble messages are stored.

What would be the proper way to correlate these? I looked at the fields in EnsLib.HL7.Message but don't see anything that stands out as an obvious connection between the two tables.

So this raises the question: how could one invalidate the result cache for the entire cube without running %KillCache, since that purges the underlying globals and would interfere with actively running queries, correct?


So if we were to use the APIs to update the dimension table directly, for example, this would not be adequate to ensure that future queries showed the proper data?

As mentioned already, running ##class(HoleFoods.Cube).%KillCache() would actually go in and purge the internal globals associated with the result cache for that particular cube. That's not really the best thing to do on a production system.

A safer way to invalidate the cache for a given cube would be to run %ProcessFact for an individual record within the cube's source table.

For example:

&sql(select top 1 id into :id from <sourcetable>)
do ##class(%DeepSee.Utils).%ProcessFact("<cubename>", id)

replacing <sourcetable> with the source table for your cube and <cubename> with the name of the cube.

This causes the result cache for the cube to be invalidated without completely purging the internal globals that store the cache, which is much safer to run during production hours than the %KillCache() method.
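If you need to do this regularly, one way to package it up, purely as a sketch (the class method here is made up, and it assumes the source table has at least one row), might be:

/// Invalidate the result cache for a cube by reprocessing one existing fact
/// rather than purging the cache globals with %KillCache().
ClassMethod InvalidateCubeCache(cubeName As %String, sourceTable As %String) As %Status
{
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT TOP 1 ID FROM "_sourceTable)
    if rs.%SQLCODE < 0 quit $$$ERROR($$$GeneralError, "SQLCODE "_rs.%SQLCODE_": "_rs.%Message)
    if 'rs.%Next() quit $$$ERROR($$$GeneralError, "No rows found in "_sourceTable)
    quit ##class(%DeepSee.Utils).%ProcessFact(cubeName, rs.%Get("ID"))
}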