Jenna Makin · Sep 29, 2017

Interestingly, I just tried to use the List query to gather global mappings for a namespace in the active configuration, and it didn't return anything:

%SYS>s rs=##class(%ResultSet).%New("Config.MapGlobals:List")
%SYS>s sc=rs.Prepare("HSEDGEREST","*")
%SYS>w sc
1
%SYS>w rs.Next()
0
%SYS>

Am I doing something not quite right here?
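For what it's worth, class queries wrapped in %ResultSet are normally run with Execute() rather than Prepare(); Prepare() is meant for dynamic SQL text, so the query here may never actually be opened. A minimal sketch, reusing the arguments from the transcript above (whether the List query accepts them in this order is an assumption worth checking against its published signature):

%SYS>s rs=##class(%ResultSet).%New("Config.MapGlobals:List")
%SYS>d rs.Execute("HSEDGEREST","*")
%SYS>w rs.Next()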

Jenna Makin · Sep 29, 2017

I was able to use the Config.Namespaces (Exists) method to determine if a given namespace was defined and also to gather attributes for it.
I am attempting to use the List query of the Config.MapGlobals class to gather global mappings for the namespace and it's not returning anything.  

Jenna Makin · Sep 6, 2017

The only issue I have with this is that, in the case of a stream containing a very large amount of data, as I read through the data there is no guarantee that I'll get the entire coded entity. For example, given the following block of text:

{freeText: This is some free text \u001a}

As I read through the stream using .Read(), the first read could return "{freeText: This is some free text \u" and the second call to .Read() could return "001a}".

Can $zcvt work on stream data?

In order to implement this I think I would have to put together some fairly robust code to handle these cases. The documents I am trying to remove characters from are very large and couldn't be stored in a single string for use with $zcvt, I think.
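One way to handle the split-escape problem described above is a small carry buffer: hold back the last few characters of each read and prepend them to the next. A rough, untested sketch in ObjectScript, assuming stream is a %Stream object:

    set carry=""
    while 'stream.AtEnd {
        set chunk=carry_stream.Read(32000)
        if 'stream.AtEnd {
            // An escape such as \u001a is 6 characters, so up to 5 of them
            // could be left dangling at the end of this chunk
            set carry=$extract(chunk,*-4,*)
            set chunk=$extract(chunk,1,*-5)
        } else {
            set carry=""
        }
        // ... scan chunk for complete escape sequences here ...
    }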

At the point that the data is accessible as distinct strings the decoding has already been done.  At this point I can just do a $tr to get rid of the non printable characters, which is what I have done.

Jenna Makin · Sep 5, 2017

The problem is this: at the time I have access to the data, it is contained in a stream. Further complicating things, the $C(26) is actually encoded as \u001a, because the ContentType of the response from the REST service is application/json, which automatically encodes these characters using JSON notation.

I wanted to implement a more general solution, removing all non-printable characters; however, it seems that I need to implement this stripping in the transform, where the contents of the data actually contain the non-encoded, non-printable characters.

I have a simple method that I have created to remove these characters:

 

ClassMethod stripNonPrintables(string As %String) As %String
{
    // Build the list of control characters to remove (tab, LF, and CR are kept)
    for i=0:1:8,11,12,14:1:31 set chars=$get(chars)_$char(i)
    quit $translate(string, chars)
}

So, in places where we are seeing these characters, we can simply call this strip method.  Not the solution I wanted to implement, but it will work.
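For illustration, calling the method above (the class name here is hypothetical):

    set clean=##class(MyPkg.Utils).stripNonPrintables("abc"_$c(26)_"def")

Note that ObjectScript also has a built-in function for this: $zstrip(string,"*C") removes all control characters, though unlike the method above it would strip CR and LF as well.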

Jenna Makin · Sep 5, 2017

Here's what is happening. The REST service is returning data that appears to have been encoded using escape sequences. For example, line feeds are changed to \n, carriage returns to \r, and other non-printable characters to Unicode escapes; for example, $c(26) becomes \u001a.

These sequences are then (apparently) automatically translated back to their regular ASCII characters when I use %DynamicObject's %FromJSON method to convert the JSON stream to an object.

We then take that object and pass it to a DTL transform that converts it to another object (in this case, an SDA3 set of containers), and the non-printable characters (specifically $c(26)) throw off the generation of XML. That in itself seems correct, because I don't think a $c(26) is allowed in XML.

What I want to do is get rid of these non-printable characters (other than things like CR and LF) so that the JSON can be converted to XML properly.

Jenna Makin · May 18, 2017

I would imagine that the same code you wrote to convert XML to JSON in Studio could be used from an Ensemble production to do the conversion within the context of a production. You could implement this code as a business operation method, pass a message containing the XML as the request message, and have the response message contain the JSON-converted version.

Can you share your code that converts XML to JSON from Terminal?

Jenna Makin · May 18, 2017

Perhaps you are referring to the Content-Type response header, which is used by many applications to determine what format the response is in. In one of my RESTful services that returns a JSON response, I use the following code in my REST handler:

    Set %response.ContentType="application/json"

You can find a list of all of the various MIME types that can be used as the %response.ContentType here:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Complete_list_of_MIME_types

Jenna Makin · May 16, 2017

Hi Kishan

I think it would help to have a little bit more information on your particular use case for FHIR.  

FHIR is the latest standard to be developed under the HL7 organization. Pronounced 'fire', FHIR stands for Fast Healthcare Interoperability Resources. FHIR is a standard for exchanging healthcare information electronically. The FHIR standard is just that, a standard, and as is the case for all standards, it requires implementation. Complete details about the FHIR standard are publicly available at https://www.hl7.org/fhir/

There is no specific functionality built into the InterSystems Cache product to support the FHIR standard, although InterSystems Cache could be used to develop an implementation of it. Such an implementation, done with Cache, would require a significant amount of development.

On the other hand, InterSystems HealthShare does include specific libraries that can make working with the FHIR standard much easier, but this obviously depends on your exact use case. If you could provide additional information as to what you want to do with FHIR, it would make answering your question much easier.

Jenna Makin · Mar 23, 2017

Mack

It seems to me that your real issue is that your database is taking up too much space on disk and you want to shrink it. To do this you don't really need to create a whole new database and namespace. Even on non-VMS systems, before we had the compact and truncate functions, I used to compact databases using GBLOCKCOPY, which is pretty simple:

1. Create a new database to hold the compacted globals.

2. Use GBLOCKCOPY to copy all globals from the large database to the new database created in step 1. Because routines are stored in the database, they will be copied as well.

3. Shut down (or unmount) the original database and the new compacted database.

4. Replace the huge CACHE.DAT file with the new compacted one.

5. Remount the new compacted database.

Global and routine mappings are stored in the cache.cpf file and are tied to the namespace configuration, not the database configuration. After completing this process the database will be compacted and global/routine mappings will be preserved.

And of course, you shouldn't do any of this without a good backup of your original database.

Jenna Makin · Dec 7, 2016

Peter-

I can think of two ways, one using the plugin and a second by just adding a new dimension to the cube that compared these two values.  Would there be any benefit to doing it either way?  I would think less coding in the case of adding a dimension to the cube.

Jenna Makin · Oct 27, 2016

I found what I was looking for. I had been examining EnsLib.HL7.Message for a way to correlate it to Ens.MessageHeader, when I should have been looking in the reverse direction.

In Ens.MessageHeader there is a field, MessageBodyClassName, which contains the name of the class containing the message body (in this case, EnsLib.HL7.Message).

There is also a field in Ens.MessageHeader called MessageBodyId which contains the ID of the corresponding message body.
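Putting those two fields together, a query along these lines (an untested sketch) would list the message headers whose bodies are HL7 messages:

SELECT head.ID, head.MessageBodyId, head.SourceConfigName
FROM Ens.MessageHeader head
WHERE head.MessageBodyClassName = 'EnsLib.HL7.Message'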

Jenna Makin · Oct 27, 2016

Thanks Bachhar, 

Expanding on this a bit, we also have the need to be able to query the message store for messages we have sent/received from a given interface.  These are all HL7 messages, which means they are stored in EnsLib.HL7.Message. 

I'm assuming there is a way to correlate a message in EnsLib.HL7.Message with either Ens.MessageBody or Ens.MessageHeader where regular Ensemble messages are stored.

What would be the proper way to correlate these? I looked at the fields in EnsLib.HL7.Message but don't see anything that stands out as an obvious connection between the two tables.

Jenna Makin · Oct 19, 2016

This raises the question: how could one invalidate the result cache for the entire cube without running %KillCache, since that purges the underlying globals and would interfere with actively running queries, correct?

Jenna Makin · Oct 19, 2016

So if we were to use the APIs to update the dimension table directly, for example, this would not be adequate to ensure that future queries showed the proper data?

Jenna Makin · Oct 19, 2016

As mentioned already, running ##class(HoleFoods.Cube).%KillCache() would actually go in and purge the internal globals associated with the result cache for that particular cube.  Not really the best thing to do on a production system.

A safer way to invalidate cache for a given cube would be to run %ProcessFact for an individual record within the source table for the cube.

For example

&sql(select top 1 id into :id from <sourcetable>)
do ##class(%DeepSee.Utils).%ProcessFact("<cubename>", id)

replacing <sourcetable> with the source table for your cube and <cubename> with the name of the cube.

This has the effect of causing the result cache for the cube to be invalidated without completely purging the internal globals that store the cache.  This is much safer to run during production times than the %KillCache() method.

Jenna Makin · Oct 6, 2016

One concern with using this method is that it actually kills all of the cache for the cube by killing the globals containing it. What effect will this have if it is done while users are using the Analyzer, for example?

Jenna Makin · Oct 5, 2016

After further review and advice from others I have discovered that there are two problems.

1. The order of my inheritance needs to be switched to Extends (%Persistent, %Populate).

2. The POPSPEC attribute should actually point to a method name, i.e., POPSPEC="Name()" and not POPSPEC="NAME". This was a problem with me misreading the documentation on POPSPEC.
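For reference, a minimal class illustrating both fixes (the class and property names here are made up; POPSPEC is a property parameter, so it goes in parentheses after the type):

Class Demo.Person Extends (%Persistent, %Populate)
{

Property Name As %String(POPSPEC = "Name()");

}

Running do ##class(Demo.Person).Populate(100) should then generate 100 test rows.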

Jenna Makin · Jun 23, 2016

Here's an example of how one might use the FileSet query in the %File class and the Delete class method in %File to purge backup files in a given directory before a given date.

/// Purge backups older than <var>DaysToKeep</var> days.
/// <var>Directory</var> is the directory containing the backups.
/// Only *.cbk files will be purged.
/// Returns the number of files deleted.
ClassMethod PurgeBackups(Directory As %String, DaysToKeep As %Integer = 14) As %Integer
{
	// Calculate the cutoff date; files modified before this date will be deleted
	set BeforeThisDate=$zdt($h-DaysToKeep_",0",3)

	// Gather the list of files in the specified directory
	set deleted=0
	set rs=##class(%ResultSet).%New("%File:FileSet")
	do rs.Execute(Directory,"*.cbk","DateModified")

	// Step through the files in DateModified order
	while rs.Next() {
		set DateModified=rs.Get("DateModified")
		// Stop when we reach files modified on or after the cutoff
		quit:DateModified]BeforeThisDate
		if BeforeThisDate]DateModified {
			// Delete the file
			do ##class(%File).Delete(rs.Get("Name"))
			set deleted=deleted+1
		}
	}
	quit deleted
}
Jenna Makin · Jun 21, 2016

Funny you should ask this as I was just looking at how to do this today.  

Most operating systems offer a way to search for files given a certain filter, such as being older than a certain date, and then piping that list to another command, such as delete.

Here is a class method I wrote to do this on a Windows 2012 R2 server running Cache:

ClassMethod PurgeFiles(Path As %String, OlderThan As %Integer)
{
    set Date=$zd($h-OlderThan)
    set cmd="forfiles /P "_Path_" /D -"_Date_" /C ""cmd /c del @path"""
    set sc=$zf(-1,cmd)
}

This method accepts a path and an integer indicating the number of days to keep files. It then constructs a command line using the "forfiles" command, passing the path and a calculated cutoff date. For each file it finds, forfiles executes cmd /c del @path, which deletes the file.

There are probably more elegant, cross-platform ways to do this, but this is one solution that I had.

Jenna Makin · Jun 3, 2016

And yes, there are many ways to accomplish this task from the Cache server side. 

The document data model doesn't really apply here as this is not a new application.

Once 2016.2 is released, we could just map the globals as objects using CacheSQLStorage and then use the .$toJSON() method to export to JSON at that point.

That's one of the great things about the Cache technology, there are so many different options and choices for doing the same thing.

This post is really just a discussion of the best way to represent a global structure in JSON, not the specifics of how to accomplish that in code.

Jenna Makin · Jun 3, 2016

Stefan-

My use case is that I have a Cache application (not object based) and I want to provide a set of REST services for accessing the raw global structures, supporting all access methods. I want this REST interface to be generic in nature, such that it can be used to read, set, and kill any node of any global. In order to do this, I need an encoding method for sending and receiving the global data. JSON seems the best way to package the data, so I was looking for a good JSON structure that could represent the global structure.
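One illustrative possibility (purely a sketch; the global name and nodes are made up) is to carry the full subscript list with each node, which maps naturally onto sparse global structures:

{
  "global": "^Person",
  "nodes": [
    { "subscripts": [1], "value": "Smith^John" },
    { "subscripts": [1, "address", "city"], "value": "Boston" }
  ]
}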

Jenna Makin · Jun 2, 2016

You could create an SQL stored procedure that returns ##class(%Library.Function).HostName(), such as:

Class Utils.Procedures Extends %RegisteredObject
{

ClassMethod HostName() As %String [ SqlProc ]
{
    Quit ##class(%Library.Function).HostName()
}

}

Once that is done, you can use the stored procedure from an SQL query, such as:

SELECT Utils.Procedures_HostName()

which on my system returns

poindextwin10vm

which is the hostname of my Windows system.

Jenna Makin · Jun 1, 2016

Continuing my testing with Ens.Util.HTML.Parser, I am trying to parse information contained within paragraphs.

For example:

<p>This is paragraph one</p>

<p>This is paragraph two with some <i>italics</i>.</p>

<p>This is paragraph three</p>

I want to parse the contents of each of these paragraphs, including the italics content contained within paragraph two.

I setup my template to:

+<p>{paragraph}</p>+

This mostly works, except in the case of the second paragraph: parsing stops when it hits the <i> tag, even though continuing would eventually reach the closing </p> tag. So what ends up being parsed is:

This is paragraph one

This is paragraph two with some

This is paragraph three

What I am expecting is:

This is paragraph one

This is paragraph two with some <i>italics</i>

This is paragraph three

Does this seem to be a limitation of the parser?  Is there any way to get what I'm trying to get from this document using the parser?

Jenna Makin · Jun 1, 2016

I was able to use this method (Ens.Util.HTML.Parser) to successfully parse disease information from the CDC's website. Basically, I created a persistent class to store disease names along with the source URLs from the CDC's A-Z web pages.

The template looks like this:

<div,class=span16><ul>+<li><a,class=noLinking,href={pageurl}>{pagetitle}</a></li>+</ul>

and the Class method that actually does the parsing of all of the pages:


ClassMethod getDiseasesOrCondition(Output tCount As %Integer) As %Status
{
    set tCount=0
    set template="<div,class=span16><ul>+<li><a,class=noLinking,href={pageurl}>{pagetitle}</a></li>+</ul>"
    // Walk the A-Z index pages
    for alpha=1:1:26 {
        kill tOut
        set url="http://www.cdc.gov/DiseasesConditions/az/"_$c(96+alpha)_".html"
        do ##class(Ens.Util.HTML.Parser).test(url, template, .tOut)
        for i=1:1 {
            quit:'$d(tOut("pageurl",i))
            // Only keep results that actually point back to the CDC site
            if tOut("pageurl",i)?1"http://www.cdc.gov/".e {
                set iCDC=##class(iCDC.DiseaseOrCondition).%New()
                set iCDC.title=tOut("pagetitle",i)
                set iCDC.sourceUrl=tOut("pageurl",i)
                set tSC=iCDC.%Save()
                set tCount=$i(tCount)
            }
        }
    }
    quit $$$OK
}

A little checking was needed to verify that the URLs returned for the sources actually pointed to the CDC's website, but other than that, Ens.Util.HTML.Parser worked exactly as I had hoped it would.

Very clean and straightforward for my needs.