Interestingly, I just tried to use the List query to gather global mappings for a namespace in the active configuration, and it didn't return anything.

%SYS>s rs=##class(%ResultSet).%New("Config.MapGlobals:List")

%SYS>s sc=rs.Prepare("HSEDGEREST","*")

%SYS>w sc


%SYS>w rs.Next()




Am I doing something not quite right here?

I was able to use the Config.Namespaces (Exists) method to determine if a given namespace was defined and also to gather attributes for it.

I am attempting to use the List query of the Config.MapGlobals class to gather global mappings for the namespace and it's not returning anything.  
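For reference, the %Library.ResultSet class documentation shows the query's runtime parameters being passed to Execute() rather than Prepare() (Prepare takes query text). A minimal sketch of the same lookup done that way:

    %SYS>s rs=##class(%ResultSet).%New("Config.MapGlobals:List")

    %SYS>s sc=rs.Execute("HSEDGEREST","*")

    %SYS>while rs.Next() { w rs.Get("Name"),! }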



The only issue I have with this is that in the case of a stream containing a very large amount of data, as I read through it there is no guarantee that a single read will return an entire escape sequence intact.  For example, given the following block of text:

{freeText: This is some free text \u001a}

As I read through the stream using .Read(), the first read could return "{freeText: This is some free text \u" and then the second call to .Read() could return "001a}".
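One way around that boundary problem is to hold back a short tail from each Read() and prepend it to the next chunk, so a sequence split across two reads is reassembled before it is inspected. A sketch, assuming the stream holds the encoded JSON text; the class and method names here are hypothetical:

    ClassMethod RemoveEscape(in As %Stream.Object, out As %Stream.Object, esc As %String = "\u001a") As %Status
    {
        // hold back $l(esc)-1 characters from each chunk so a sequence that
        // starts at the very end of one Read() finishes in the next one
        s keep=$l(esc)-1, carry=""
        while 'in.AtEnd {
            s chunk=carry_in.Read(32000)
            if $l(chunk)>keep {
                s carry=$e(chunk,*-keep+1,*)
                s chunk=$e(chunk,1,*-keep)
            } else {
                s carry=chunk, chunk=""
            }
            d out.Write($replace(chunk,esc,""))
        }
        d out.Write($replace(carry,esc,""))
        q $$$OK
    }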

Can $zcvt work on stream data?

In order to implement this I think I would have to write some fairly careful code to handle these cases.  The documents I am trying to remove characters from are very large and, I think, couldn't be stored in a single string for use with $zcvt.

At the point that the data is accessible as distinct strings, the decoding has already been done, so I can simply do a $tr to get rid of the non-printable characters, which is what I have done.

The problem is this: at the time that I have access to do this, the data is contained in a stream.  Further complicating things, the $c(26) is actually encoded as \u001a, because the contentType of the response from the REST service is application/json, which automatically encodes these characters using JSON notation.

I wanted to implement a more general solution, removing all non-printable characters.  However, it seems that I need to implement this stripping in the transform, where the contents of the data actually contain the non-encoded, non-printable characters.

I have a simple method that I have created to remove these characters:


ClassMethod stripNonPrintables(string As %String) As %String
{
    // remove control characters, but keep $c(9), $c(10), and $c(13)
    f i=0:1:8,11,12,14:1:31 set chars=$g(chars)_$c(i)
    quit $tr(string, chars)
}

So, in places where we are seeing these characters, we can simply call this strip method.  Not the solution I wanted to implement, but it will work.
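For example (the class name My.Util is hypothetical; substitute wherever the method actually lives):

    %SYS>w ##class(My.Util).stripNonPrintables("abc"_$c(26)_"def")
    abcdef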


Here's what is happening: the REST service is returning data that has been encoded using escape sequences.  For example, line feeds are changed to \n, carriage returns are changed to \r, and other non-printable characters are changed to character codes, such as $c(26) being changed to \u001a.

These sequences are then (apparently) automatically translated back to their regular ASCII characters when I use the %DynamicObject %FromJSON method to convert the JSON stream to an object.
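That decoding step can be seen directly in Terminal; a small sketch, on a version that supports %DynamicObject:

    %SYS>s obj=##class(%DynamicObject).%FromJSON("{""freeText"":""abc\u001a""}")

    %SYS>w $ascii(obj.freeText,4)
    26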

We then take that object and pass it to a DTL transform that converts it to another object (in this case an SDA3 set of containers), and the non-printable characters (specifically $c(26)) are throwing off the generation of XML.  That behavior itself seems correct, because $c(26) is not a legal character in XML.

What I want to do is get rid of these non-printable characters (other than things like CR and LF) so that the JSON can be converted to XML properly.

I would imagine that the same code you wrote to convert XML to JSON in Studio could be used from an Ensemble production to do the conversion from within the context of a production.  You could implement this code as a business operation method: pass a message containing the XML as the request message, and the response message would contain the converted JSON.

Can you share your code that converts XML to JSON from Terminal?

Perhaps you are referring to the ContentType response header, which is used by many applications to know what format the response is in.  In one of my RESTful services that returns a JSON response, I use the following code in my REST handler.

    Set %response.ContentType="application/json"

You can find a list of all of the various MIME types that could be used as the %response.ContentType in the IANA Media Types registry.
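In context, that line goes near the top of the dispatch method, before anything is written to the response body; a minimal sketch (the class and method names are hypothetical):

    ClassMethod GetStatus() As %Status
    {
        // set the header before writing the body
        Set %response.ContentType="application/json"
        Write "{""status"":""ok""}"
        Quit $$$OK
    }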

Hi Kishan

I think it would help to have a little bit more information on your particular use case for FHIR.  

FHIR is the latest standard to be developed under the HL7 organization.  Pronounced 'fire', FHIR stands for Fast Healthcare Interoperability Resources, and it is a standard for exchanging healthcare information electronically.  The FHIR standard is just that, a standard, and as is the case for all standards, it requires implementation.  Complete details about the FHIR standard are publicly available on the internet.

There is no specific functionality built into the InterSystems Cache product to support the FHIR standard, although InterSystems Cache could be used to develop an implementation of it.  Such an implementation, done with Cache, would require a significant amount of development.

On the other hand, InterSystems HealthShare does include specific libraries that can make working with the FHIR standard much easier, but this obviously depends on what your exact use case for FHIR is.  If you could provide additional information as to what you want to do with FHIR, it would make answering your question much easier.


It seems to me that your real issue is that your database is taking up too much space on disk and you want to shrink it.  To do this you really don't need to create a whole new database and namespace.  Even on non-VMS systems, before we had the compact and truncate functions, I used to compact databases using GBLOCKCOPY, which is pretty simple:

1. Create a new database to hold the compacted globals.

2. Use GBLOCKCOPY to copy all globals from the large database to the new database you created in step 1.  Because routines are stored in the database, they will be copied as well.

3. Shut down (or dismount) the original database and the new compacted database.

4. Replace the huge CACHE.DAT file with the new compacted one.

5. Remount the new compacted database.

Global and Routine mappings are stored in the cache.cpf file and are related to the namespace configuration and not the database configuration.  After completing this process the database will be compacted and global/routine mappings will be preserved.
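For step 2, GBLOCKCOPY is an interactive utility run from the %SYS namespace; it prompts for the source and destination and then copies the globals:

    %SYS>d ^GBLOCKCOPY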

And of course, you shouldn't do any of this without a good backup of your original database.