Peter-

I can think of two ways: one using the plugin, and a second that just adds a new dimension to the cube comparing these two values.  Would there be any benefit to doing it one way over the other?  I would think there is less coding involved in adding a dimension to the cube.

I found what I was looking for.  I was looking at EnsLib.HL7.Message for a way to correlate it to Ens.MessageHeader, when I should have been looking in the reverse direction.

In Ens.MessageHeader there is a field, MessageBodyClassName, which contains the name of the class containing the message body; in this case, EnsLib.HL7.Message.

There is also a field in Ens.MessageHeader called MessageBodyId which contains the ID of the corresponding message body.
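For anyone following along, here is a minimal SQL sketch of that join. The column selection and the <interfacename> filter are just illustrative, and MessageBodyId is stored as a string, so the comparison to the HL7 message ID relies on implicit conversion:

SELECT h.ID, h.TimeCreated, h.SourceConfigName, h.TargetConfigName, m.Name
FROM Ens.MessageHeader h
JOIN EnsLib_HL7.Message m ON m.ID = h.MessageBodyId
WHERE h.MessageBodyClassName = 'EnsLib.HL7.Message'
AND h.SourceConfigName = '<interfacename>'

Filter on TargetConfigName instead of SourceConfigName when you want the messages sent to a given interface rather than received from it.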


Thanks Bachhar, 

Expanding on this a bit, we also need to be able to query the message store for the messages we have sent to or received from a given interface.  These are all HL7 messages, which means they are stored in EnsLib.HL7.Message.

I'm assuming there is a way to correlate a message in EnsLib.HL7.Message with either Ens.MessageBody or Ens.MessageHeader, where regular Ensemble messages are stored.

What would be the proper way to correlate these?  I looked at the fields in EnsLib.HL7.Message but don't see anything that stands out as an obvious connection between the two tables.

So this raises the question: how could one invalidate the result cache for the entire cube without running %KillCache, since that purges the underlying globals and would interfere with actively running queries, correct?


So if we were to use the APIs to update the dimension table directly, for example, that would not be adequate to ensure that future queries show the proper data?

As mentioned already, running ##class(HoleFoods.Cube).%KillCache() actually goes in and purges the internal globals associated with the result cache for that particular cube, which is not really the best thing to do on a production system.

A safer way to invalidate the cache for a given cube is to run %ProcessFact for an individual record within the cube's source table.

For example:

&sql(SELECT TOP 1 ID INTO :id FROM <sourcetable>)

do ##class(%DeepSee.Utils).%ProcessFact("<cubename>", id)


replacing <sourcetable> with the source table for your cube and <cubename> with the name of your cube (as a quoted string).

This causes the result cache for the cube to be invalidated without completely purging the internal globals that store the cache, which makes it much safer to run during production hours than the %KillCache() method.
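To make that a little more concrete, here is a minimal sketch that wraps those two lines in a class method. The cube name "HOLEFOODS" and the source table HoleFoods.Transaction are assumptions based on the HoleFoods sample; substitute your own cube name and source table:

ClassMethod InvalidateCubeCache() As %Status
{
	// Assumes the HoleFoods sample: cube "HOLEFOODS" built from HoleFoods.Transaction.
	set id = ""
	&sql(SELECT TOP 1 ID INTO :id FROM HoleFoods.Transaction)
	if SQLCODE'=0 quit $$$ERROR($$$GeneralError,"no rows found in the cube's source table")
	// Reprocessing a single existing fact invalidates the cube's result cache
	// without purging the globals that store it.
	quit ##class(%DeepSee.Utils).%ProcessFact("HOLEFOODS",id)
}

Running something like this after updating the dimension tables directly keeps the result cache honest without the wholesale purge that %KillCache() performs.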


One concern with using this method is that it actually goes in and kills all of the cache for the cube by killing the globals that contain it.  What effect will this have if it is done while users are in the Analyzer, for example?

After further review and advice from others, I have discovered that there are two problems:

1. The order of my inheritance needs to be switched to Extends (%Persistent, %Populate).

2. The POPSPEC parameter should actually point to a method name, i.e. POPSPEC="Name()" and not POPSPEC="NAME".  This was a problem with me misreading the documentation on POPSPEC; a corrected sketch follows below.
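For reference, here is a minimal sketch of the corrected class definition; the class and property names are hypothetical:

Class Demo.Person Extends (%Persistent, %Populate)
{

/// POPSPEC names a generator method (here %PopulateUtils.Name()), not a literal string such as "NAME".
Property Name As %String(POPSPEC = "Name()");

}

With that in place, do ##class(Demo.Person).Populate(100) generates 100 test rows.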