Was there a question?  Yes, of course: Caché supports methods to serialize and deserialize XML-enabled objects.  However, those methods have to handle all sorts of general XML structures, and as a result are much less efficient than code written against a known, constrained data model.  One specific case involves embedded serial objects: serializing these to XML causes them to be instantiated in memory even when null.  In addition, SDA containers can be arbitrarily large; we have examples of 50-100 MB containers.  Depending on your partition size, attempting to deserialize an entire container can cause a <STORE> error.

There are a couple of API methods that can be helpful here, in class HS.SDA3.Container:

1. ListStreamletIds - this can return a list of streamlet IDs (in the aggregation cache) corresponding to a patient in the viewer.

2. LoadSDAObject - this takes a streamlet ID and returns an instantiated SDA object.

These can be called from custom code within the viewer to populate UI elements on the fly, along the lines of the sketch below.
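For illustration, here is a minimal sketch of that pattern inside a method body.  The exact signatures vary by HealthShare version, so treat these as assumptions to verify against the class reference: it assumes both are class methods that return a %Status and pass results by reference, and the names pPatientMRN, tIdList, and tSDAObject are placeholders.

    // Sketch only: verify the signatures for your HealthShare version.
    Set tSC = ##class(HS.SDA3.Container).ListStreamletIds(pPatientMRN, .tIdList)
    If $System.Status.IsError(tSC) Quit tSC
    For i=1:1:$ListLength(tIdList) {
        Set tStreamletId = $List(tIdList, i)
        // Loading one SDA object at a time keeps memory bounded,
        // even when the full container would be 50-100 MB.
        Set tSC = ##class(HS.SDA3.Container).LoadSDAObject(tStreamletId, .tSDAObject)
        If $System.Status.IsError(tSC) Continue
        // ... populate the UI element from tSDAObject here ...
    }

The point of the loop is that only one streamlet is ever instantiated at a time, which avoids the <STORE> risk described above.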

Alternatively, in the access gateway production, HS.Gateway.Access.Manager has a NotificationOperation that gets called during key events as data is loaded into the viewer cache.  The events are Initialize, Abort, WaitTimeout, FetchNotification, and FetchComplete.  A hedged sketch of such an operation follows.
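This is a sketch under stated assumptions, not the real API: the message class (typed loosely as %Persistent here) and its EventType property are invented for illustration, so check the class reference and the Manager's settings for the actual contract.

    Class Demo.ViewerNotification Extends Ens.BusinessOperation
    {

    Method OnMessage(pRequest As %Persistent, Output pResponse As %Persistent) As %Status
    {
        // Assumption: the notification request exposes the event name
        // via a property; the real class and property names differ.
        Set tEvent = pRequest.EventType   // e.g. "FetchComplete"
        If tEvent = "FetchComplete" {
            // All data for the patient has landed in the viewer cache;
            // safe to kick off custom post-processing here.
            $$$LOGINFO("Viewer fetch complete")
        }
        Quit $$$OK
    }

    }

Presumably you point the Manager's NotificationOperation setting at a component like this; verify that in your production configuration.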

There are no standard supported ways of loading data into a chart, however.

Not the least bit inefficient:

  • It's the minimum required to represent the data.
  • Any other attempts to represent the data with this cardinality would heap on complexities and inefficiencies (nasty denormalizations).
  • When considering efficiency, the general parameters are execution speed (and corresponding resource impact) and storage utilization.  In Caché, even if we assumed a 1-1 cardinality (and we can't, so this is all hypothetical), the space taken by having this in a child table is only slightly greater than embedding it in the parent row.  Storing the data only costs a single extra global set in an adjacent spot in the disk block, which is a negligible impact.  Because of the way the data is stored, the cost to query and fetch the data is virtually the same.  One could argue that having this in a child table is actually a bit more efficient for events where there are no documents (which may represent the vast majority of events).  The sketch after this list shows why.
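To make the storage argument concrete, here is an entirely hypothetical parent-child pair; the class and property names are made up for illustration:

    Class Demo.Event Extends %Persistent
    {

    Property EventType As %String;

    Relationship Documents As Demo.Document [ Cardinality = children, Inverse = Event ];

    }

    Class Demo.Document Extends %Persistent
    {

    Relationship Event As Demo.Event [ Cardinality = parent, Inverse = Documents ];

    Property DocumentID As %String;

    }

Conceptually, the default storage nests the child rows under the parent node - something like ^Demo.EventD(eventId) for the event and ^Demo.EventD(eventId, "Documents", docId) for its documents - so they land in the same (or an adjacent) disk block, and an event with no documents costs nothing beyond the parent node itself.  (The exact subscript used for the child extent may differ; check the generated storage definition.)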

Perhaps you are thinking about other SQL databases, with different performance characteristics?  Even with those other databases, denormalizing this sort of cardinality is usually something to avoid (although there are cases where denormalization can help efficiency).

The "documents" that you are finding with your query are not documents, but are SDA record requests.  If you are looking for retrievals of IHE documents from a document repository, the event type to use is "Retrieve Document Set".  You didn't mention what version of HealthShare you are running, but in recent versions, the ATNA repository has a relationship to a child table, HS_IHE_ATNA_Repository.Document, and you can join to that.  It has a RepositoryID and DocumentID property.

Storage depends on where you store the data (objects, tables), not where you store classes.  The classes that manage all of this reside in the HSLIB database.

Don't know if you are asking about Health Connect or Information Exchange.  If it's the latter, this is managed by where you set up an edge gateway namespace & database to handle the HL7v2 inputs, vs. where you set up the IHE XDS.b repository namespace and database, which stores documents that have been provided and registered.  They would all be managed by the single common registry namespace and database.  If it's the former, it's completely up to you how to store the data: you would store it in different globals, and your namespace can map the globals to different databases, as sketched below.
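For the Health Connect case, here is a minimal sketch of mapping a global to its own database.  The namespace MYAPP, global ^DocData, and database DOCDB are placeholders; this runs in the %SYS namespace against the standard Config.MapGlobals API.

    // Placeholders throughout: MYAPP, DocData, DOCDB are examples only.
    New $Namespace
    Set $Namespace = "%SYS"
    Kill tProps
    Set tProps("Database") = "DOCDB"    // database that will hold ^DocData
    Set tSC = ##class(Config.MapGlobals).Create("MYAPP", "DocData", .tProps)
    If $System.Status.IsError(tSC) Write $System.Status.GetErrorText(tSC), !

The same mapping can be set up interactively in the Management Portal under the namespace's global mappings.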

Most of what you are looking for is already captured in the ATNA audit repository, which in current versions lives in the Registry namespace (in future versions we'll allow that to move to its own namespace/instance).  There is a DeepSee cube associated with this, as well as a few dashboards.  You can create your own SQL queries.  We also provide a framework for creating your own reports, including a set of "data volume" reports which can run against this as well as other data, and can incrementally capture dated subsets of it for comparisons.  The incremental capture is also useful if the repository grows very large, where direct SQL queries become less practical.