Here's an example of how one might use the FileSet query and the Delete class method in the %File class to purge backup files in a given directory that are older than a given date.

/// Purge backups older than <var>DaysToKeep</var> days.
/// <var>Directory</var> is the directory path containing the backups.
/// Only *.cbk files are purged. Returns the number of files deleted.
ClassMethod PurgeBackups(Directory As %String, DaysToKeep As %Integer = 14) As %Integer
{
	// Calculate the cutoff date: files modified before this date are deleted
	set BeforeThisDate = $zdt($h-DaysToKeep_",0",3)

	// Gather the list of *.cbk files in the specified directory,
	// ordered by last-modified date
	set rs=##class(%ResultSet).%New("%File:FileSet")
	do rs.Execute(Directory,"*.cbk","DateModified")

	// Step through the files in DateModified order
	set deleted=0
	while rs.Next() {
		set DateModified=rs.Get("DateModified")
		if BeforeThisDate]DateModified {
			// Delete the file
			set Name=rs.Get("Name")
			if ##class(%File).Delete(Name) set deleted=deleted+1
		}
		// Stop once we reach files modified on or after the cutoff date
		if DateModified]BeforeThisDate quit
	}
	quit deleted
}
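The cutoff comparison works because $zdatetime format 3 (ODBC format, the same format the FileSet query uses for DateModified) sorts lexically in date order. An illustrative one-liner, with a hedged example output:

```
// Illustrative only: show the cutoff string for a 14-day window
set days = 14
write $zdt($h-days_",0",3)  // e.g. 2016-05-18 00:00:00 if today is 2016-06-01
```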

Funny you should ask this, as I was just looking at how to do this today.

Most operating systems offer a way to search for files matching a certain filter, such as being older than a given date, and then pipe that list to another command, such as delete.

Here is a class method I wrote to do this on a Windows Server 2012 R2 server running Cache:

ClassMethod PurgeFiles(Path As %String, OlderThan As %Integer)
{
    // Calculate the cutoff date in MM/DD/YYYY format, as expected by forfiles /D
    set Date=$zd($h-OlderThan)
    // Build a forfiles command that deletes each matching file;
    // quote the path in case it contains spaces
    set cmd="forfiles /P """_Path_""" /D -"_Date_" /C ""cmd /c del @path"""
    // Run the command through the OS shell; $zf(-1) returns the exit status
    set sc=$zf(-1,cmd)
}

This method accepts a path and an integer indicating the number of days to keep files for. It then constructs a command line using the "forfiles" command, passing the path and a calculated cutoff date. For each file it finds, forfiles executes cmd /c del <path>, which deletes the file.
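For example, with a path of C:\Backups, an OlderThan of 30, and a (hypothetical) calculated cutoff date of 05/01/2016, the generated command line would look like:

```
forfiles /P C:\Backups /D -05/01/2016 /C "cmd /c del @path"
```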

There are probably more elegant, cross-platform ways to do this, but this is one solution that I had.

And yes, there are many ways to accomplish this task from the Cache server side. 

The document data model doesn't really apply here as this is not a new application.

Once 2016.2 is released, we could just map the globals as objects using CacheSQLStorage and then use the .$toJSON() method to export to JSON at that point.

That's one of the great things about the Cache technology, there are so many different options and choices for doing the same thing.

This post is really just about a discussion on what the best way to represent a global structure in JSON, not the specifics of how to accomplish that in code.

 

Stefan-

My use case is that I have a Cache application (not object based) and I want to provide a set of REST services for accessing the raw global structures. I want this REST interface to be generic in nature, such that it can be used to read, set, and kill any node of any global. To do this, I need an encoding method for sending and receiving the global data. JSON seems the best way to package the data, so I was looking for a good JSON structure that could represent the global structure.
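As a rough sketch of the kind of encoding I have in mind (the field names here are just placeholders, not a settled format), a single node such as ^Person(1,"name")="John" might be represented as:

```
{
  "global": "Person",
  "subscripts": [1, "name"],
  "value": "John"
}
```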

 

You could create an SQL stored procedure to return ##class(%Library.Function).HostName(), such as:

Class Utils.Procedures Extends %RegisteredObject
{

ClassMethod hostname() As %String [ SqlProc ]
{
    Quit ##class(%Library.Function).HostName()
}

}

And once that is compiled, you can call the stored procedure from an SQL query, such as:

SELECT Utils.Procedures_HostName()

which on my system returns

poindextwin10vm

which is the hostname of my Windows system.
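The same procedure can also be called from ObjectScript via embedded SQL; here is a minimal sketch (assuming the Utils.Procedures class above is compiled in the current namespace):

```
ClassMethod ShowHostName()
{
    // Call the SQL stored procedure through embedded SQL
    &sql(SELECT Utils.Procedures_hostname() INTO :host)
    if SQLCODE=0 write host,!
}
```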

Continuing my testing with Ens.Util.HTML.Parser, I am trying to parse information contained within paragraphs.

For example:

<p>This is paragraph one</p>

<p>This is paragraph two with some <i>italics</i>.</p>

<p>This is paragraph three</p>

I want to parse the contents of each of these paragraphs, including the italics content contained within paragraph two.

I setup my template to:

+<p>{paragraph}</p>+

This kind of works, except in the case of the second paragraph: parsing stops when it hits the <i> tag, even though continuing would eventually reach the closing </p> tag. So what ends up being parsed is:

This is paragraph one

This is paragraph two with some

This is paragraph three

What I am expecting is:

This is paragraph one

This is paragraph two with some <i>italics</i>

This is paragraph three

Does this seem to be a limitation of the parser?  Is there any way to get what I'm trying to get from this document using the parser?

 

I was able to use this method (Ens.Util.HTML.Parser) to successfully parse disease information from the CDC's website. Basically, I created a persistent class to store disease names along with the source URLs from the CDC's A-Z web pages.

The template looks like this:

<div,class=span16><ul>+<li><a,class=noLinking,href={pageurl}>{pagetitle}</a></li>+</ul>

and the class method that actually does the parsing of all of the pages:

ClassMethod getDiseasesOrCondition(Output tCount As %Integer) As %Status
{
	set tCount=0
	set template="<div,class=span16><ul>+<li><a,class=noLinking,href={pageurl}>{pagetitle}</a></li>+</ul>"
	// Loop over the 26 A-Z index pages
	for alpha=1:1:26 {
		kill tOut
		set url="http://www.cdc.gov/DiseasesConditions/az/"_$c(96+alpha)_".html"
		do ##class(Ens.Util.HTML.Parser).test(url, template, .tOut)
		// Walk the parsed results in order until no more entries exist
		for i=1:1 {
			quit:'$d(tOut("pageurl",i))
			// Only keep links that point back to the CDC's site
			if tOut("pageurl",i)?1"http://www.cdc.gov/".e {
				set iCDC=##class(iCDC.DiseaseOrCondition).%New()
				set iCDC.title=tOut("pagetitle",i)
				set iCDC.sourceUrl=tOut("pageurl",i)
				set tSC=iCDC.%Save()
				set tCount=$i(tCount)
			}
		}
	}
	quit $$$OK
}

A little checking was needed to verify that the returned source URLs actually pointed to the CDC's website, but other than that, Ens.Util.HTML.Parser worked exactly as I had hoped it would.

Very clean and straightforward for my needs.