Discussion
· Aug 14

Best practices for Streams in Interoperability

I would like to know which are the best practices of using Streams in Interoperability messages.

I have always used %Stream.GlobalCharacter properties to hold JSON or a base64 document when creating messages. This works fine: I can see the content in Visual Trace without doing anything extra, so I can check what is happening, resolve issues if I have any, or reprocess messages if something went wrong, because I still have the content.

But I think this is not the best way of using Streams. After a couple of years, I usually run into space problems: messages seem not to purge correctly (I don't know yet whether it's because of the use of %Stream.GlobalCharacter, or just a coincidence).

I have asked InterSystems, and they recommended using QuickStream instead of %Stream.GlobalCharacter. But if I go that way, I lose visibility in Visual Trace (unless I write the content with a $$$LOGINFO or something like that, which doesn't convince me), and I think I have read somewhere that QuickStream content disappears after the process ends (i.e., it's not persistent, which is fine for avoiding space problems, but not for resolving issues).

So I want to ask the community: what are the best practices? What do you recommend?

Thank you in advance :) 


In general (and messages and streams are just one example) there is a trade-off between traceability and visibility on one side, and storage space (and performance) on the other.

QuickStream is indeed a mechanism used internally to address the performance and storage concerns. To complement it with a traceability option, there is also a dedicated Business Operation that can record the desired data; see Enhanced Debugging and the introduction of HS.Util.Trace.Operations. This simply adds more calls in the session to this Operation, which can include the stream data. The advantage is that you can turn it on or off, and you can also control the "level" of tracing. Take into account, of course, that this needs to be set up ahead of what you want to trace/visualize; you can't apply it retroactively.

One important note about QuickStream: if you decide to go down this path, you must make certain to Clear the QuickStream as part of your pipeline. The Message Purge task purges the message header and the associated message body, but when it looks at the message body, all it sees is a property called QuickStreamId, and it doesn't know it should also Clear the associated QuickStream object.
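A minimal sketch of that cleanup step (assuming, as in HealthShare, that the body's QuickStreamId property points at an HS.SDA3.QuickStream; adjust the class and property names to your actual message class):

```objectscript
    // Sketch: clear the QuickStream referenced by a message body once the
    // pipeline is done with it, so the purge task doesn't leave it orphaned.
    // Assumes pRequest has a QuickStreamId property (HealthShare convention).
    If $IsObject(pRequest) && (pRequest.QuickStreamId '= "") {
        Set tQuickStream = ##class(HS.SDA3.QuickStream).%OpenId(pRequest.QuickStreamId)
        If $IsObject(tQuickStream) {
            Do tQuickStream.Clear()  // removes the stream's stored data
        }
    }
```

Where exactly this belongs depends on your design: typically at the end of the Business Operation or Process that is the last consumer of the stream.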

> messages seem not to purge correctly

That might happen if you declare the property as:

Property Text As %Stream.GlobalCharacter (or another global-based stream class)

and then in code do: Set Object.Text = PointerToAnotherStream

The original global node for the property's stream gets "orphaned": it is no longer referenced by any object, and it won't be purged (or garbage collected) when the object is deleted or goes out of scope.

The correct way is to use Do Object.Text.CopyFrom(PointerToAnotherStream)
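To make the difference concrete, a short sketch (assuming a message class with Property Text As %Stream.GlobalCharacter; the variable names are just examples):

```objectscript
    // WRONG: replaces the object reference; the stream originally created
    // for Object.Text is left orphaned in the global and never purged.
    Set Object.Text = PointerToAnotherStream

    // RIGHT: copy the data into the property's own stream, which stays
    // owned by the message and is deleted along with it on purge.
    Do Object.Text.Clear()
    Do Object.Text.CopyFrom(PointerToAnotherStream)
```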

On any class that extends Ens.Request or Ens.Response you may use:

Parameter COMPRESS = 2;
Property JSON As %Stream.GblChrCompress;

On large messages (or in a system with billions of messages per day) the savings on storage are huge!
- Visual Trace works fine
- You may search within the compressed property (but it will take time)
- The property is visible in SQL queries against Ens.MessageHeader / Ens.MessageBody
- The "out of the box" purge works fine
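Put together, such a message class could look like this (a sketch; the class and property names are just examples):

```objectscript
Class Demo.Msg.JSONRequest Extends Ens.Request
{

/// Enable compression for this message class, as suggested above
Parameter COMPRESS = 2;

/// Large payload stored as a compressed global character stream
Property JSON As %Stream.GblChrCompress;

}
```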

Stream compression is enabled by default since version 2021.2, see release notes:

Saving on Storage

Stream compression is now on by default for all global-based stream classes, with no application change required. Existing data remains readable and will be compressed upon the next write. Experiments with real-world data have indicated compression ratios ranging from 30% for short texts to 80% and more for XML and other document types.