Yep, this is exactly what I did. I added a JSON property that was a string, and also one that was a %Library.DynamicObject. Both of them, when I went to assign the JSON array as in:

Set MyObj.JSON=JSONString.%ToJSON()

gave me the INVALID OREF error.

I'm going to try what some others have mentioned: pipe the data from %ToJSON() into a stream property on an object that extends Ens.Request and give that a shot.
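Something like this minimal sketch is what I'm picturing (class and property names are just illustrative; %ToJSON() can take an output stream argument, which avoids building the whole string in memory):

Class My.JSONRequest Extends Ens.Request
{

/// holds the serialized JSON payload
Property JSONPayload As %Stream.GlobalCharacter;

}

Then to populate it:

Set tRequest = ##class(My.JSONRequest).%New()
Do JSONString.%ToJSON(tRequest.JSONPayload)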

Thanks!

Here's another question about the Activity Monitor, with respect to having so much activity that the page takes a few minutes to render.

I checked the documentation but didn't really find a way to define something like a "SiteDimension" that would, say, list messages consumed by services (or maybe a specific category?), or messages sent by all operations in the running production, in the hopes that something like that might render faster when viewing.

Please keep in mind this isn't a site with a DeepSee license, so creating cubes, etc., doesn't seem to be an option on this system.

Would a better approach be to expose the collected data via some sort of RESTful service and have something like Grafana build an external dashboard? I've seen some interesting writeups on this subject, so I'm trying to determine whether staying with the built-in Activity Monitor is a viable option for a high-volume site.
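For what it's worth, here's the rough shape of the REST piece I'm imagining (the Activity Monitor stores its rollups in the Ens_Activity_Data tables; the exact column names here are assumptions, so check the schema on your version):

Class My.ActivityREST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/activity/daily" Method="GET" Call="GetDaily"/>
</Routes>
}

/// Return the daily activity rollups as JSON for an external dashboard like Grafana.
ClassMethod GetDaily() As %Status
{
    Set %response.ContentType = "application/json"
    Set tResult = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT TimeSlot, HostName, TotalCount FROM Ens_Activity_Data.Days")
    Set tArray = []
    While tResult.%Next() {
        Do tArray.%Push({"time": (tResult.TimeSlot), "host": (tResult.HostName), "count": (tResult.TotalCount)})
    }
    Write tArray.%ToJSON()
    Quit $$$OK
}

}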

Thanks!

A good list of the various shared libraries and their descriptions can be found on the Docs site.

So are you setting up ODBC on Red Hat so external clients can query the Caché database, or are you setting up a data source in cacheodbc.ini to link tables or use the SQL Gateway to get data from external systems?

If you're trying to push data to and query external databases (at least MS-SQL), I've had really good luck building Microsoft's MS-SQL ODBC client on Red Hat and calling that.
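For reference, the data source entry in the ini file ends up looking something like this (the driver path and names here are assumptions based on a typical Microsoft ODBC driver install on Red Hat, so adjust for your versions):

[MSSQLSource]
Driver = /opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0
Server = tcp:mssqlhost,1433
Database = MyExternalDB

You then point an SQL Gateway connection at that DSN and link tables or run queries through it.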

Ooo, I like this. In this context, someone compiling a set of package classes won't have to count or otherwise depend on any counters that a build responds to: as long as all the classes compile without error, it's good to go. Searching for the error text would be easy for any logger, and if the exception text contained the filename being loaded, abnormalities in the build would be spotted ASAP.

The terminate could be conditional as well, since you might want to see all of the errors in that build cycle so they can all be addressed simultaneously by the different groups (if more than one is involved).
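A minimal sketch of the idea (the file variable and the conditional-terminate flag are just illustrative):

// load and compile one file; optionally throw so the build halts on the first failure
Set tSC = $System.OBJ.Load(tFileName, "ck")
If $$$ISERR(tSC) {
    Set tText = "Build error loading "_tFileName_": "_$System.Status.GetErrorText(tSC)
    If tThrowOnError { Throw ##class(%Exception.General).%New(tText) }
    Write !, tText  ; otherwise log it and keep going so all errors surface in one cycle
}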

Okay, when reviewing the information here and at the link https://community.intersystems.com/post/why-keepintegrity-important-when-purging-healthshareensemble-data, there appears to be a conflict.

If the message session persists while the production runs, then in theory, if you do not modify any of the schedules or recycle the production before a 30-day purge kicks off (when you might have "Keep Integrity" off?), the same thing occurs: your schedule stops running.

I would recommend that the core purge code be updated so that one of the first things it does is set the created date/time of any current "in flight" scheduler message to the current date/time before the purge does its thing. This would prevent a standard purge from killing the scheduler. Just a thought.

Edit: This apparently was recognized as an issue and is corrected in 2017.1 releases of HealthShare and Ensemble. 

Are you talking about somehow having a rule in a message router get fired, or perhaps some trigger to "tickle" a business process class?

Look at the documentation regarding "Adapterless Service" and building custom tasks in the task scheduler.

In essence, you would create a custom request message in the OnTask() method of a custom task class, populate it with whatever data you want, and then pass it to an adapterless business service that would send it to one or more targets. The target could then act on that message accordingly.
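A minimal sketch of the two pieces (all class names and the configured service name are illustrative):

Class My.SchedulerTask Extends %SYS.Task.Definition
{

Parameter TaskName = "Scheduler Message Task";

/// Runs on the task's schedule; builds a request and hands it to the adapterless service.
Method OnTask() As %Status
{
    Set tSC = ##class(Ens.Director).CreateBusinessService("My.Scheduler.Service", .tService)
    If $$$ISERR(tSC) Quit tSC
    Set tRequest = ##class(My.SchedulerRequest).%New()
    Set tRequest.Action = "go check and do things"
    Quit tService.ProcessInput(tRequest)
}

}

Class My.Scheduler.Service Extends Ens.BusinessService
{

/// Adapterless: no ADAPTER parameter, so it only runs when ProcessInput() is called.
Method OnProcessInput(pInput As My.SchedulerRequest, Output pOutput As %RegisteredObject) As %Status
{
    ; forward the request to one or more configured targets
    Quit ..SendRequestAsync("My.TargetProcess", pInput)
}

}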


It works really well: at odd schedules you can create a message that says "go check and do things" and the service passes it on, and at other times the message can say something like "issue a report", all through the same mechanism. And the best thing is nothing is so custom as to be outside the normal Ensemble messaging framework.

Thank you! 

I just added another property to the request object class holding the HL7 data, to contain an instance of the Header object populated with the auth value I received in the first step.

Then a quick edit to the operation class method I needed to call, adding the line:

Do ..Adapter.%Client.HeadersOut.SetAt(pRequest.AccessAuth,1)

where AccessAuth is the Header property added to the initial request message. Worked like a champ!
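Put together, it looks something like this sketch (class names are illustrative, and I'm assuming the Header object's class extends %SOAP.Header the way SOAP headers normally do):

Class My.HL7Request Extends Ens.Request
{

/// the HL7 payload, however your existing request class carries it
Property RawHL7 As %String(MAXLEN = "");

/// instance of the Header object, populated with the auth value from the first call
Property AccessAuth As My.AuthHeader;

}

Then in the operation method, before invoking the web method:

Do ..Adapter.%Client.HeadersOut.SetAt(pRequest.AccessAuth, 1)
Set tSC = ..Adapter.InvokeMethod("MyWebMethod", .tResult, pRequest.RawHL7)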

I really wish there were an explanation like that in the docs! :-) My guess is that not many custom web services require separate headers, and most application-level auth token strings are carried in the body of the message (most of the time).