Yep, this is exactly what I did. I added a JSON property that was a string, and also added one that was a %Library.DynamicObject. Both of them, when I went to assign the JSON array as in:

Set MyObj.JSON=JSONString.%ToJSON()

gave me the INVALID OREF error.

I'm going to try what some others have mentioned: piping the output of %ToJSON() into a stream property on an object that extends Ens.Request, and giving that a shot.
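For what it's worth, a minimal sketch of that stream approach. The class and property names here are made up for illustration, and `tDynObj` stands for the %Library.DynamicObject being serialized:

```
/// Hypothetical request class -- names are illustrative only
Class My.JSONRequest Extends Ens.Request
{

/// Holds the serialized JSON payload
Property JSONPayload As %Stream.GlobalCharacter;

}
```

And the assignment side:

```
    // Serialize the dynamic object into the stream property
    Set tRequest = ##class(My.JSONRequest).%New()
    Do tRequest.JSONPayload.Write(tDynObj.%ToJSON())
```

For very large payloads, I believe newer versions let you pass a stream directly to %ToJSON() so the whole string never has to materialize in memory; check the %DynamicAbstractObject class reference for your version.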

Thanks!

Here's another question about the Activity Monitor, in this case for a site with so much activity that the page takes a few minutes to render.

I checked the documentation but didn't really find a way to define something like a "SiteDimension" that would, for example, list messages consumed by services (or by a specific category), or messages sent by all operations in the running production, in the hope that a narrower view like that might render faster.

Please keep in mind this isn't a site that has a DeepSee license, so creating cubes, etc., doesn't seem to be something that can be done with this system.

Would a better approach be to expose the collected data via some sort of RESTful service and build an external dashboard with something like Grafana? I've seen some interesting write-ups on this subject, so I'm trying to determine whether staying with the built-in Activity Monitor is a viable option for a high-volume site.
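If you do go the external-dashboard route, a bare-bones sketch of the REST side might look like this. The dispatch class and route are my own invention, and I'm reading from what I believe is the Activity Monitor's hourly rollup table (`Ens_Activity_Data.Hours`); verify the table and column names against the class reference on your version:

```
/// Hypothetical REST dispatch class -- a sketch, not a drop-in implementation
Class My.ActivityAPI Extends %CSP.REST
{

XData UrlMap
{
<Routes>
<Route Url="/activity" Method="GET" Call="GetActivity"/>
</Routes>
}

/// Emit the hourly activity rollups as a JSON array
ClassMethod GetActivity() As %Status
{
    Set tRS = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT TimeSlot, SiteDimension, TotalCount FROM Ens_Activity_Data.Hours")
    Set tArray = []
    While tRS.%Next() {
        Do tArray.%Push({"slot": (tRS.TimeSlot),
                         "dimension": (tRS.SiteDimension),
                         "count": (tRS.TotalCount)})
    }
    Write tArray.%ToJSON()
    Quit $$$OK
}

}
```

Grafana can then poll that endpoint with one of its JSON datasource plugins, and the rendering cost moves entirely off the Ensemble instance.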

Thanks!

Any word on this? I too would be very curious to see how batch processing via JDBC could be implemented in an Ensemble operation. I have one that calls a number of stored procedures and it crawls. Having a way to push this work off onto the database side so it runs faster would be awesome.

Thanks!

A good list of the various shared libraries and their descriptions can be found on the Docs site.

So are you setting up ODBC on Red Hat so that external clients can query the Caché database, or are you setting up a data source in cacheodbc.ini to link tables or use the SQL Gateway to pull data from external systems?

If you're trying to push data to and query external databases (at least MS-SQL), I've had really good luck building Microsoft's MS-SQL ODBC client on Red Hat and calling that.
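For reference, the DSN entry for the Microsoft driver ends up looking something like this. The DSN name, server, and database below are placeholders, and the exact driver name depends on which msodbcsql version you installed:

```ini
; Hypothetical DSN entry in /etc/odbc.ini -- names are placeholders
[MSSQL_PROD]
Driver   = ODBC Driver 17 for SQL Server
Server   = sqlhost.example.com,1433
Database = MyExternalDB
```

You can sanity-check the DSN with unixODBC's `isql -v MSSQL_PROD <user> <password>` before pointing the SQL Gateway at it.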

Ooo, I like this. In this context, someone compiling a set of package classes wouldn't have to count errors or otherwise depend on whatever counters the build reports: as long as all the classes compile without error, the build is good to go. Searching for the error text would be easy for any logger, and if the exception text contained the filename being loaded, abnormalities in the build would be spotted ASAP.

The terminate could be conditional as well, since you might want to see all of the errors in a build cycle so they can be addressed simultaneously by the different groups involved (if there is more than one).

I appreciate you letting us know. When I wasn't able to save the schema, I kind of figured the wizards and such weren't fully fleshed out yet. I look forward to 2.0!

Thanks!

Okay, when reviewing the information here and at the link https://community.intersystems.com/post/why-keepintegrity-important-when-purging-healthshareensemble-data, there appears to be a conflict.

If the message session is persistent while the production runs, then in theory the same thing occurs if you neither modify any of the schedules nor recycle the production before a 30-day purge kicks off (perhaps with "Keep Integrity" off): your schedule stops running.

I would recommend that the core purge code be updated so that one of the first things it does is set the created date/time of any "in flight" scheduler message to the current date/time before the purge does its thing. This would prevent a standard purge from killing the scheduler. Just a thought.

Edit: This was apparently recognized as an issue and is corrected in the 2017.1 releases of HealthShare and Ensemble.

Are you talking about somehow having a rule in a message router fire, or perhaps some trigger to "tickle" a business process class?

Look at the documentation on adapterless business services and on building custom tasks in the task scheduler.

In essence, you create a custom request message in the OnTask() method of a custom scheduler class, populate it with whatever data you want, and then pass it to an adapterless business service that sends it on to one or more targets. The target can then act on that message accordingly.
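As a rough sketch of those two pieces (the class names, config names, and target name here are all made up; treat this as an outline, not production code):

```
/// Hypothetical custom task that tickles a business service on a schedule
Class My.TickleTask Extends %SYS.Task.Definition
{

/// Config name of the adapterless business service to invoke
Property TargetService As %String [ InitialExpression = "My.TickleService" ];

Method OnTask() As %Status
{
    // Get a handle on the running business service by config name
    Set tSC = ##class(Ens.Director).CreateBusinessService(..TargetService, .tService)
    If $$$ISERR(tSC) Quit tSC
    // Build the custom request message
    Set tInput = ##class(Ens.StringRequest).%New()
    Set tInput.StringValue = "go check and do things"
    Quit tService.ProcessInput(tInput)
}

}

/// Hypothetical adapterless service (no ADAPTER parameter defined)
Class My.TickleService Extends Ens.BusinessService
{

Method OnProcessInput(pInput As Ens.StringRequest, Output pOutput As %RegisteredObject) As %Status
{
    // Forward asynchronously to one or more configured targets
    Quit ..SendRequestAsync("SomeTargetProcess", pInput)
}

}
```

The request payload is just a string here, but any Ens.Request subclass works the same way.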

 

It works really well: on odd schedules you can create a message that says "go check and do things" and the service passes it on, while at other times the message can say something like "issue a report", all through the same mechanism. And the best thing is that nothing is so custom as to be outside the normal Ensemble messaging framework.

Both, inside and outside (using "kill -9" in Linux). So just going out and whacking whatever temp globals/tables tell Ensemble the job is there should be safe to do.

$ZV is "Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2014.1.2 (Build 753U) Tue Jul 22 2014 11:25:14 EDT"

Thanks!