It really depends on what the acceptance criteria are, in my opinion. You don't have to give the file an Excel extension; a .csv will open just fine in Excel, although you do lose some formatting abilities.

One of the things we recently implemented is a simple Node server that accepts a JSON payload via REST and builds the spreadsheet with the exceljs library. Granted, this is outside of Caché, but it does the job.

Depending on your memory settings, sending your reporting result out to another service can hit the maximum string length unless the receiving service accepts a pure stream.
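As a rough illustration of the streaming idea (plain Node and CSV output for brevity; the function names and row data here are hypothetical, and our real service used exceljs for .xlsx output):

```javascript
// Hypothetical sketch: stream report rows out as CSV one row at a time,
// so the full report never has to exist as one (max-length) string.
// `out` can be anything with a write(chunk) method: an HTTP response,
// a file stream, etc.
function escapeCsv(value) {
  const s = String(value == null ? '' : value);
  // Quote fields containing commas, quotes, or newlines (RFC 4180 style).
  return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
}

function streamRowsAsCsv(rows, columns, out) {
  out.write(columns.map(escapeCsv).join(',') + '\n'); // header row
  for (const row of rows) {
    // One small write per row keeps memory flat regardless of report size.
    out.write(columns.map((c) => escapeCsv(row[c])).join(',') + '\n');
  }
}
```

The same shape works with exceljs's streaming writers; the point is that each row is written and released immediately instead of being concatenated into one giant string.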

There is another post out here that tried to solve this problem by actually creating HTML tables in Excel (this may be what Charles was talking about, but his link breaks for me and I just get a 404). If it's the same approach I am thinking of, you run into formatting issues again.

One approach I am currently investigating is using an ADO.NET connection to pull the data from the Caché database and presenting it to the front end using an Excel NuGet package.

Hope this information helps, and good luck. Please let us know what you ended up with!

- Chris

Class User.Person Extends %Persistent
{

Property Name As %String;

Property Gender As %String;

Property JobTitle As %String;

}

Then, in a terminal session:
set Person = ##class(User.Person).%New()

set Person.Name = "Mike"

set Person.Gender = "Male"

set Person.JobTitle = "Developer"

set status = Person.%Save()

This creates a new record in the Person table; the same data is also stored in the ^User.PersonD global. Assuming this is the first record, it will have an %Id() of 1.

If you want to open the Person again:

set Person = ##class(User.Person).%OpenId(1)

If you want to delete it:

set status = ##class(User.Person).%DeleteId(1)

Hope that helps.

P.S. - I wrote this in only a couple of minutes, so there may be errors. The InterSystems documentation can tell you more, and I think there are examples in the SAMPLES namespace.

Look at the production XML and see if that value is defined there. Sometimes, if you happen to save an empty value from the production page, it will put the XML tag in there. Once that tag is present, the System Default Settings will not override that value.

I don't know of a way to remove it other than removing it from the production XML itself.

You implement DSTIME on a class by doing the following in your persistent class:

Parameter DSTIME As STRING [ Constraint = ",AUTO,MANUAL", Flags = ENUM ] = "AUTO";
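You can also set the parameter with just its value; the constraint metadata in the declaration above is optional. A minimal sketch, with illustrative class and property names (if I remember right, the changes land in the ^OBJ.DSTIME global):

```objectscript
Class User.Person Extends %Persistent
{

/// With DSTIME = "AUTO", every %Save() or %Delete() records the change
/// under ^OBJ.DSTIME, keyed by class name, DSTIME batch, and object id,
/// so an external consumer can pick up deltas since its last sync.
Parameter DSTIME = "AUTO";

Property Name As %String;

}
```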

You will need to implement batching as well so you only pick up changes to a class from the last time you queried against DSTIME. I believe there are a few posts in the community that further discuss implementing DSTIME. 

Best of luck. 


Thank you for clearing up some things I left out; I actually learned a little, which I greatly appreciate! I would like to take this time to somewhat hijack the thread (sorry): when it comes to using the Ensemble scheduler, don't you need to consider how often you are scheduling a task? The more frequently you need it to run, doesn't CallInterval become more beneficial? We have quite a few Ensemble services that need to kick off frequently (~every 30 seconds), and we have debated for some time when to invoke the Ensemble scheduler.


- Chris

You would override the OnInit() method in your business service; it is what gets called when a business host starts up as the production comes up (assuming the host was enabled when the production went down).

You can even set a call interval so that it runs every X seconds / minutes / hours.
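A minimal sketch of such a service (the class name is illustrative; the standard inbound adapter's CallInterval setting drives how often OnProcessInput fires):

```objectscript
Class App.HeartbeatService Extends Ens.BusinessService
{

/// The standard inbound adapter exposes a CallInterval setting.
Parameter ADAPTER = "Ens.InboundAdapter";

/// Runs once when this business host starts up with the production.
Method OnInit() As %Status
{
    $$$TRACE("Service starting; do one-time startup work here")
    Quit $$$OK
}

/// Called by the adapter every CallInterval seconds while enabled.
Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    Quit $$$OK
}

}
```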

Also, Ens.Director is not a business host class that you can use; it's a class that allows you to control various features of a production as well as monitor its state.

I would recommend the DSTIME approach as well. We use it for syncing information from our Caché ERP to our BI tool. DeepSee actually uses DSTIME to track changes as well.

With DSTIME you can write code that determines which DSTIME batches you want to synchronize, giving you full control over how often you sync up data.

This is probably an aggressive approach but you could try and do the following:

  • Create a code database containing your class marked as final
  • Deploy said code and mount the database as read-only
  • Map the read-only code database to the namespace that needs to access it

This should prevent a developer from inheriting from the class while still being able to access it. This also assumes that the metadata of the class definition is included in the read-only database; if it isn't, that's a very interesting flaw in what a read-only code database means.
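For reference, marking the class final is just a class keyword; the compiler then rejects any subclass regardless of which namespace it is compiled in (class and method names here are illustrative):

```objectscript
/// [ Final ] makes the compiler reject any class that tries to extend
/// this one, independent of whether its database is read-only.
Class App.LockedApi Extends %RegisteredObject [ Final ]
{

ClassMethod Ping() As %String
{
    Quit "pong"
}

}
```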


We have the following solution for keeping the code and data databases (.DAT files) separated. We have a CODE1.DAT and a CODE2.DAT. Whichever code database isn't currently in use by the namespace, we overwrite that file and then switch the namespace to point to the new code database. This keeps our deployment downtime to only a few milliseconds.

The only issue we've run into is that this doesn't seem to work well for Ensemble without stopping the Ensemble production.

Maybe I'm missing something, but with the way we have our code and data split out, we're only mirroring our data. So an advantage of using the System Default Settings is that a change made on the primary is already present on the secondary. Is there a best practice for this type of setup when failing over to a secondary and needing the existing business host values to stay the same? I do agree that it would be nice if System Default Settings could be made available to multiple namespaces.