In line with what @Yaron Munz was saying - when you purge your messages as part of your interoperability data and choose to include Message Bodies, your body class's data (whether your message body class extends Ens.Request or Ens.Response, or is simply a class extending %Persistent) will get deleted together with the Message Header.

The purge code [in Ens.MessageHeader:Purge()] looks at the MessageBodyClassName and MessageBodyId fields of the MessageHeader record and then calls the %DeleteId() method for that class, for the given Id. 

That being said, as @Cristiano Silva pointed out, if the class you use as your message body includes references to other persistent classes, these will not get deleted/purged along with the referencing object unless you have an %OnDelete callback method or a Trigger taking care of this.
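If it helps, here is a minimal sketch of what such an %OnDelete callback could look like; the Demo.MyMessageBody and Demo.MyAttachment class names and the Attachment property are purely hypothetical:

Class Demo.MyMessageBody Extends Ens.Request
{

/// Hypothetical reference to another persistent class
Property Attachment As Demo.MyAttachment;

/// Called by %DeleteId() when this message body is purged;
/// it also deletes the referenced Demo.MyAttachment object
ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status [ Private, ServerOnly = 1 ]
{
    Set tSC = $$$OK
    Set tBody = ..%Open(oid)
    If $IsObject(tBody) && $IsObject(tBody.Attachment) {
        Set tSC = ##class(Demo.MyAttachment).%DeleteId(tBody.Attachment.%Id())
    }
    Quit tSC
}

}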

You can see these related posts I shared in the past -

Ran,

Thanks to @Tom Woodfin for finding this - there is a documented limitation on using || within properties that are part of an IDKEY; see here:

   IMPORTANT:

   There must not be a sequential pair of vertical bars (||) within the values of any property used by an IDKEY index, unless that property is a valid reference to an instance of a persistent class. This restriction is imposed by the way in which the InterSystems SQL mechanism works. The use of || in IDKey properties can result in unpredictable behavior.

And, after some internal discussion - there is no way around this limitation.
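Just to illustrate the kind of definition this applies to (the class and property names below are hypothetical), if a property is used by an IDKEY index its values must never contain the || sequence:

Class Demo.IdKeySample Extends %Persistent
{

Property Code As %String;

/// Code is part of the IDKEY, so storing a value such as "A||B" in it
/// falls under the documented "unpredictable behavior"
Index CodeIdx On Code [ IdKey, Unique ];

}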

In the ENSDEMO "package" (for IRIS, available via Open Exchange here) you can find various sample Productions, including ones for SAP.

Note that I believe the sample @Eduard Lebedyuk referred to uses custom Java classes via the PEX approach, while the samples in ENSDEMO use the built-in SAP Java Connector. In both cases SAP JCo is used behind the scenes, and in the 'iris-sap' case the IDoc support as well.

If you are creating a FHIR Bundle manually (programmatically, as opposed, for example, to just getting a Bundle back as a search result response from our built-in FHIR Resource Repository), I think you should be using HS.FHIRServer.Util.Bundle:CreateBundle().

Indeed, behind the scenes you can see it calls the CreateGUID() method @Marc Mundt pointed to in order to populate the id -

 Set bundle.id = $ZConvert($SYSTEM.Util.CreateGUID(),"L")

[The $ZConvert changes it to lower-case, e.g.:

USER>set guid = $SYSTEM.Util.CreateGUID()
 
USER>write guid
B990B74D-C008-4F4D-BA3B-4247A740250A
USER>write $ZConvert(guid,"L")
b990b74d-c008-4f4d-ba3b-4247a740250a

]
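For example, if you were assembling a Bundle yourself as a dynamic object (a simplified sketch only - not a complete Bundle resource, and not the CreateBundle() code itself), you could populate the id the same way:

 ; "collection" here is just an example bundle type
 Set bundle = {}
 Set bundle.resourceType = "Bundle"
 Set bundle.type = "collection"
 Set bundle.id = $ZConvert($SYSTEM.Util.CreateGUID(),"L")
 Write bundle.%ToJSON()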

I have actually now seen that the Wizard works (as per above) also in Ensemble (I tested on v2018.1.x).

The fact that an SQL query just shows the OID doesn't prevent the Wizard from doing its job.

[If you take a peek at the code you can see it generates special stream handling -

 ; STREAMOUT()
 Do rtn.WriteLine("STREAMOUT(oref) {")
  ...
 do rtn.WriteLine(" while (oref.AtEnd = 0) {")
 do rtn.WriteLine(" set len = 32000")
 do rtn.WriteLine(" set val=oref.Read(.len)")
  ...
 do rtn.WriteLine(" write val")
 do rtn.WriteLine(" }")
  ...

]

Of course you could do this by writing your own code in a routine (or method) and accessing the data via objects.
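As a rough sketch of that direction (assuming a persistent class with a stream property, like the Test.StreamExport example further down, and chunked reads similar to what the Wizard generates):

 ; open a stored object (Id 1 here is just an example) and stream out its stream property in chunks
 Set obj = ##class(Test.StreamExport).%OpenId(1)
 Set oref = obj.SomeStream
 Do oref.Rewind()
 While (oref.AtEnd = 0) {
     Set len = 32000
     Set val = oref.Read(.len)
     Write val
 }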

But the Wizard is flexible in the sense that it allows you to choose the table you are exporting from and the fields you want to include in the export (and some other related settings), and, on the import side, the table you want to import into. Of course the classes can be different, and you don't need to include all of the fields.

As for the product/version, that's apparently an issue. I don't know if it is worth the hassle (vs. implementing your own mechanism), but you could in theory consider exporting the relevant classes and globals from Ensemble into InterSystems IRIS, performing the SQL Data Export and Import there, and then going the other way back - exporting and importing the classes and globals from InterSystems IRIS back into Ensemble. Again, not sure this makes sense for you... You could run a small test to (a) check whether it indeed works as you desire, and (b) estimate how much work is entailed in this approach.
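If you do want to try that route, a very rough sketch of moving the definitions and data between instances could look like this (the file path and global names are hypothetical; for a class with default storage the data and stream globals are typically ^<package>.<class>D and ^<package>.<class>S, but check the class's Storage definition to be sure):

 ; on the source (Ensemble) instance - export the class definition and its globals
 Do $SYSTEM.OBJ.Export("Test.StreamExport.cls,Test.StreamExportD.GBL,Test.StreamExportS.GBL","/tmp/export.xml")

 ; on the target (InterSystems IRIS) instance - load (and compile) what was exported
 Do $SYSTEM.OBJ.Load("/tmp/export.xml","ck")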

Indeed I see now I misunderstood your question...

Note that with InterSystems IRIS I could use the SQL Data Export and Import Wizard.

I performed a quick test -

One class:

Class Test.StreamExport Extends %Persistent
{

Property SomeString As %String;

Property SomeStream As %GlobalCharacterStream;  

}

And a 2nd class:

Class Test.StreamImport Extends %Persistent
{

Property SomeString As %String;

Property SomeStream As %Stream.GlobalCharacter;

}

I had a stream with 1,000 lines in SomeStream in the 1st class, exported it to a file, and imported it into the 2nd, and it worked OK.
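For reference, here is a minimal sketch of how such test data could be populated (the content of the lines is of course arbitrary):

 ; create an object of the 1st class and fill its stream with 1,000 lines
 Set obj = ##class(Test.StreamExport).%New()
 Set obj.SomeString = "Stream export test"
 For i=1:1:1000 {
     Do obj.SomeStream.WriteLine("This is line number "_i)
 }
 Write obj.%Save()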

Note an SQL query would display the text in the stream and not just the OID.

This is what the data looked like in the table to be exported -

And this is the data that was imported to the 2nd class -

Checking the size of the Stream property, it is identical.
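A quick way to verify that, assuming for example that the row of interest has Id 1 in both classes, is comparing the Size of the two stream properties:

 Write ##class(Test.StreamExport).%OpenId(1).SomeStream.Size,!
 Write ##class(Test.StreamImport).%OpenId(1).SomeStream.Size,!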

Note this was tested on InterSystems IRIS v2021.1.

In Ensemble v2018.1.x you would see something like this (per Eduard's example):

You can try this query -

SELECT parent, Name
FROM %Dictionary.PropertyDefinition
WHERE Type = '%GlobalCharacterStream'

Here's a sample run (in %SYS) -

You can find more info about %Dictionary.PropertyDefinition here:

https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic....

And in general about using the %Dictionary classes:

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
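And in case it's useful, here is a small sketch of running the same lookup programmatically (rather than via the SQL shell or the Management Portal), using %SQL.Statement:

 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT parent, Name FROM %Dictionary.PropertyDefinition WHERE Type = ?")
 If sc {
     Set rs = stmt.%Execute("%GlobalCharacterStream")
     While rs.%Next() {
         ; column 1 is the class (parent), column 2 is the property name
         Write rs.%GetData(1)," : ",rs.%GetData(2),!
     }
 }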

As for the general "Business Process Scope Designing" topic, I think we are in agreement that this would depend on the needs of the particular process and on the preferences of the implementer.

I do want to respond, though, on the topic of what information is provided as part of our Documentation, and on your thoughts about that.

First, I will let my colleagues from Documentation comment on whether they feel there is room to improve and provide clearer details about these "conditions" (and my colleagues in Product Management and/or Development comment on the ideas you raised regarding how the Compiler could behave differently and how the Stack Handler should/could handle the Continue in these circumstances). In general we always strive to improve, and I'm sure people who have been with us over the years have seen (and continue to see) serious leaps of improvement in our Documentation. Firsthand feedback from the field, like yours, is extremely valuable to us for this kind of improvement.

And from my perspective I would like to mention just two points -

1. "The Documentation Balance" 

When I said "vague - on purpose" I did not, of course, mean this in any sinister way, as "withholding information", or in a negligent way either. The whole purpose of Documentation is to be helpful, to provide information, and to empower the readers. Not providing information would naturally be counterproductive, but there is more than one way to be counterproductive... In providing information there is always a balance to keep between delivering it in a way that enables the "Users" to become more efficient and successful in their work, vs. spilling out so much detail that it only ends up confusing the "User", which would be contrary to the purpose of what we set out to do.

So you might say "certain conditions" is not clear enough, and that it would have been better to say - "a loop with a Scope inside, with a Call inside (or a Delay, or an Alert, or maybe some other such activity), and a Continue (or maybe some other activity that has the same effect today, or might be added in the future with a similar effect), and more than 13 iterations (or maybe fewer, or maybe more, if things change a little in the future), etc., etc."

I think this again might come back to "personal preferences" - some might say: just give me all of the hard facts, I'll make sense of them and know how to handle them, while others will just get baffled. At the end of the day we need to keep this balance and probably try to keep in mind the "common" or "typical" case of "Users".

2. "The Knowledge/Learning Eco-System"

Today our "offering" or "platform" for enabling people to gain knowledge and learn and experience our technology is made up of much more than just our Documentation. In fact, if you look at the top banner of our Documentation, or of any one of the "members of the family" I'll mention right away, you'll see they are always "grouped" together: Online Learning, Documentation, Developer Community, Open Exchange, Global Masters and Certification.

Each has its own characteristics, vibe, nature, audiences and goals, but they all complement each other and create, together, an "eco-system" covering the "whole prism" of how one can learn about and interact with our technology. Our Documentation nowadays includes links to Posts on the Community, to Videos and Courses in the Online Learning, etc. They are all intertwined. For example, the Documentation includes code samples, but if you're looking for more comprehensive, fuller, perhaps closer-to-real-life, and vibrant samples - you'll probably go to Open Exchange. If you want not just a "straight answer" to some question or dilemma, but rather an opinionated one, you'd probably go to the Community. If you want an in-depth expert's overview of some topic, again a Community Post would probably be your choice. Etc.

So in this case, in fact, by providing your comments to this Post (and our back-and-forth discussion), you have actually created, Gertjan, a valuable resource that fills in the exact gap you were talking about... The Documentation contains the basic/general info, and the more detailed information can be found right here in this thread. This is indeed an example of how these resources complement each other and create the wider picture.

So, again, thank you for this.

Thanks again, Gertjan; your feedback continues to help refine the correct messaging on this.

Indeed, I believe the wording in the documentation is, as you put it, "vague" on purpose - "Under certain conditions". The Continue I used to "trigger" the error (as you noticed, and reproduced) is "part of the picture" but not the whole picture; therefore, instead of diving too deep into the exact sequence of events that might cause this (I tried not to either, and just provided an example...), the reference is kept more "general".

I think you also arrived yourself at the understanding of what I was referring to when I said "not the best practice", when you mentioned - "A Continue inside a Scope, however, is never needed ... Perhaps I also did a Continue inside a Scope. In that case, it was a bug on my part."

So are Scopes inside loops "totally forbidden"? No... the Documentation did not state that, just "Under certain conditions".

But - if someone encounters the error I pointed out - I hope they come to realize its root cause is this structure, and that they have a direction for avoiding it (and probably arriving at a better flow as a consequence).

Now perhaps you are saying - well, instead of taking the Scope out of the loop, just get rid of the Continue; that would really depend on the use case and on the specifics of the process being implemented. Also, different people have different opinions on the "best way" to handle errors (in general in code - I've seen in another discussion you had some strong opinions about Try/Catch in code - and specifically using this mechanism within a BP), and some might argue that one general Scope for the whole BP is enough (or better) rather than having multiple ones in separate parts of the BP. But I think that's a topic for a different discussion... wink

I'm happy to hear this has helped you, Gertjan, shed some light on the behavior you were experiencing.

Please note that this is not considered a "bug" though, but rather a (documented) "limitation".

And, as documented, and as you seem to indicate about your case, it is not too difficult to avoid or work around.

Since I have just come across the 3rd customer of mine who has encountered this phenomenon, I figured I'd share what the manifestation of this limitation looks like, to save some time for the next person who runs into it and "scratches their head" at that error, probably not immediately correlating it with the known issue I pointed out.

Note also (as this might have been your train of thought, or that of others reading through this) that the issue here is not the simple technicality of the 50-character length limit of the related Property (which happens to be the call stack of fault handlers of a BP, as indicated in the error message, and which I assume you could simply suggest extending to a larger capacity), but rather a "pointer" or an "opportunity" to raise the fact that this "design" or "structure" - having a Scope inside a loop - is not the "recommended" or "best practice" construct for a BP.

So thanks also for the chance of clarifying this side of the topic.