As for the general "Business Process Scope Designing" topic, I think we are in agreement that this depends on the needs of the particular process and on the preferences of the implementer.

I do want to respond, though, on the topic of what information is provided in our Documentation, and on your thoughts about that.

First, I will let my colleagues from Documentation comment on whether they feel there is room to improve and provide clearer details about these "conditions" (and my colleagues in Product Management and/or Development comment on the ideas you raised regarding how the Compiler could behave differently and how the Stack Handler should/could handle the Continue in these circumstances). In general we always strive to improve, and I'm sure people who have been with us over the years have seen (and continue to see) serious leaps of improvement in our Documentation. Firsthand feedback from the field, like yours, is extremely valuable to us for this kind of improvement.

And from my perspective I would like to mention just two points -

1. "The Documentation Balance" 

When I said "vague - on purpose" I did not, of course, mean this in any sinister way, as "withholding information", nor in a negligent way. The whole purpose of Documentation is to be helpful, to provide information, and to empower the readers. Not providing information would naturally be counterproductive, but there is more than one way to be counterproductive... In providing information there is always a balance to keep between delivering information in a way that enables "Users" to become more efficient and successful in their work, and spilling out so much data that it only ends up confusing the "User", which would be contrary to the purpose we set out with.

So you might say "certain conditions" is not clear enough - better to have said "a loop with a Scope inside, with a Call inside (or a Delay, or an Alert, or maybe some other such activity), and a Continue (or maybe some other activity that has the same effect today, or might be added in the future with a similar effect), and above 13 iterations (or maybe fewer or more, if things change a little in the future), etc., etc."

I think this again comes back to "personal preferences" - some might say "just give me all of the hard facts, I'll make sense of them and know how to handle them", while others would just get baffled. At the end of the day we need to keep this balance, and probably try to keep in mind the "common" or "typical" "User".

2. "The Knowledge/Learning Eco-System"

Today our "offering" or "platform" for enabling people to gain knowledge and learn and experience our technology, is made up of much more than just our Documentation. In fact if you look at the top banner of our Documentation, or any one of the "members of the family" I'll mention right away, you'll see they are always "grouped" together: Online Learning, Documentation, Developer Community, Open Exchange, Global Masters and Certification.

Each has its own characteristics, vibe, nature, audiences and goals, but they all complement each other and create, together, an "eco-system" for the "whole prism" of how one can learn and interact with our technology. Our Documentation nowadays includes links to Posts on the Community, to Videos and Courses in the Online Learning, etc. They are all intertwined. For example, the Documentation includes code samples, but if you're looking for more comprehensive, fuller, perhaps closer to real-life, and vibrant samples - you'll probably go to Open Exchange. If you'd want not just a "straight answer" to some question or dilemma, but rather an opinionated one, you'd probably go to the Community. If you'd want an in-depth expert's overview of some topic, again a Community Post would probably be your choice. Etc.

So in this case, in fact, by providing your comments on this Post (and our back-and-forth discussion), you, Gertjan, have actually created a valuable resource filling in the exact gap you were talking about... The Documentation contains the basic/general info, and the more detailed information can be found right here in this thread. This is indeed an example of how these resources complement each other and create the wider picture.

So, again, thank you for this.

Thanks again, Gertjan; your feedback continues to help refine the messaging on this.

Indeed, I believe the wording in the documentation is, as you put it, "vague" on purpose - "Under certain conditions". The Continue I used to "trigger" the error (as you noticed, and reproduced) is "part of the picture" but not the whole picture. Therefore, instead of diving too deep into the exact sequence of events that might cause this (I tried not to either, and just provided an example...), the reference is more "general".

I think you also arrived yourself at what I was referring to when I said "not the best practice", when you mentioned - "A Continue inside a Scope, however, is never needed ... Perhaps I also did a Continue inside a Scope. In that case, it was a bug on my part."

So are Scopes inside loops "totally forbidden"? No... the Documentation does not state that, just "Under certain conditions".

But if someone encounters the error I pointed out, I hope they come to realize its root cause is this structure, and that they have a direction for avoiding it (probably arriving at a better flow as a consequence).

Now perhaps you are saying - well, instead of taking the Scope out of the loop, just get rid of the Continue. That would really depend on the use-case and the specifics of the process being implemented. Also, different people have different opinions on the "best way" to handle errors (in code in general - I've seen in another discussion that you have some strong opinions about Try/Catch - and specifically using this mechanism within a BP), and some might argue that one general Scope for the whole BP is enough (or better), rather than having multiple ones in separate parts of the BP. But I think that's a topic for a different discussion...

I'm happy to hear this has helped you, Gertjan, shed some light on the behavior you were experiencing.

Please note that this is not considered a "bug" though, but rather a (documented) "limitation".

And, as documented, and as you seem to indicate about your case, it is not too difficult to avoid or work around.

Since I just came across the 3rd customer of mine that has encountered this phenomenon, I figured I'd share what the manifestation of this limitation looks like, to save some time for the next person who runs into it and "scratches their head" at that error, probably not immediately correlating it to the known issue I pointed out.

Note also (as this might have been your train of thought, or that of others reading through this) that the issue here is not the simple technicality of the 50-character length limit of the related Property (which happens to hold the call-stack of fault-handlers of a BP, as indicated in the error message, and which one could simply suggest extending to a larger capacity). Rather, it is a "pointer" or an "opportunity" to raise the fact that this "design" or "structure" - a Scope inside a loop - is not the "recommended" or "best practice" construct for a BP.

So thanks also for the chance to clarify this side of the topic.

Indeed Nicky, the size of the Audit database is a serious concern.

One option you might want to consider (it entails some coding, but might be worth your time given your sizing concerns) is to have a task that does some "housekeeping" for your events.

Say you have a lot of various SQL events that you don't have an interest in (you mention INSERTs, and probably also SELECTs), and one or two that you do (like DELETE). You could have a task that runs periodically (per whatever timing makes sense for you - it could even be every hour if that's relevant). This task would go through the SQL Audit events (specifically the SQL ones you just turned on), looking at the Description field returned, which should include the type of statement (e.g. SELECT or DELETE). Then every event that is not a DELETE, for example, you could delete (remove from the Audit Log; or export and then delete, if you want to save them aside).

You could query the Audit Log using the relevant %SYS.Audit class queries ("List", for example), filtering on the related event; note though that Description is only returned, not filterable in advance.
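
For example, a rough sketch of iterating over the "List" query's results (a sketch only - I'm omitting the query's filter arguments, so check the %SYS.Audit class reference for the exact parameter list; this also assumes you are running in the %SYS namespace):

    // A sketch only - run in the %SYS namespace
    Set tStmt = ##class(%SQL.Statement).%New()
    Set tSC = tStmt.%PrepareClassQuery("%SYS.Audit", "List")
    If 'tSC { Do $system.Status.DisplayError(tSC) Quit }
    // Filter arguments omitted here - see the class reference for how to
    // restrict by event source/type/event and time range
    Set tRS = tStmt.%Execute()
    While tRS.%Next() {
        // Description is only returned (not filterable in advance),
        // so examine it here, per event
        If tRS.%Get("Description") '[ "DELETE" {
            // Not a DELETE statement - a candidate for removal (or export first)
            Write tRS.%Get("Description"), !
        }
    }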

Or you could use direct SQL, for example (this time looking for the DELETEs) -

SELECT ID, EventSource, EventType, Event, Description FROM %SYS.Audit 
WHERE (Event='XDBCStatement') AND (Description %STARTSWITH 'SQL DELETE')

Which would return for example:

ID EventSource EventType Event Description
2021-01-10 08:23:24.420||MYMACHINE:MYINSTANCE||2386 %System %SQL XDBCStatement SQL DELETE Statement

Then you could of course also look for what is not a DELETE...

SELECT ID, EventSource, EventType, Event, Description FROM %SYS.Audit 
WHERE (Event='XDBCStatement') AND NOT (Description %STARTSWITH 'SQL DELETE')

which could return for example:

ID EventSource EventType Event Description
2021-01-10 08:32:53.014||MYMACHINE:MYINSTANCE||2388 %System %SQL XDBCStatement SQL SELECT Statement

Then you could delete it, for example:

DELETE FROM %SYS.Audit 
WHERE ID='2021-01-10 08:32:53.014||MYMACHINE:MYINSTANCE||2388'

[Of course you could combine these last two statements into one - e.g. DELETE FROM ... WHERE ID IN (...)]
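
To make this more concrete, here is a minimal sketch of what such a periodic "housekeeping" task could look like, using the combined form of the statement (the class name here is hypothetical, and it assumes the task is scheduled to run in the %SYS namespace, where the %SYS.Audit data lives):

Class Demo.Task.AuditHousekeeping Extends %SYS.Task.Definition
{

/// Periodically remove remote SQL audit events that are not DELETE statements
Method OnTask() As %Status
{
    // Export the events first if you want to save them aside
    Set tSQL = "DELETE FROM %SYS.Audit "_
               "WHERE (Event='XDBCStatement') "_
               "AND NOT (Description %STARTSWITH 'SQL DELETE')"
    Set tResult = ##class(%SQL.Statement).%ExecDirect(, tSQL)
    If tResult.%SQLCODE < 0 {
        Quit $$$ERROR($$$GeneralError, tResult.%Message)
    }
    Quit $$$OK
}

}

You would then schedule this via the Task Manager, at whatever interval makes sense for you.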

Hope this helps...

Nicky,

You can use the %System->%SQL family of audit events.

From the docs:

%System / %SQL / DynamicStatement
    Occurs when: A dynamic SQL call is executed.
    Event data contents: The statement text and the values of any host-variable arguments passed to it. If the total length of the statement and its parameters exceeds 3,632,952 characters, the event data is truncated.
    Default status: Off

%System / %SQL / EmbeddedStatement
    Occurs when: An embedded SQL call is executed (see the documentation for usage details).
    Event data contents: The statement text and the values of any host-variable arguments passed to it. If the total length of the statement and its parameters exceeds 3,632,952 characters, the event data is truncated.
    Default status: Off

%System / %SQL / XDBCStatement
    Occurs when: A remote SQL call is executed using ODBC or JDBC.
    Event data contents: The statement text and the values of any host-variable arguments passed to it. If the total length of the statement and its parameters exceeds 3,632,952 characters, the event data is truncated.
    Default status: Off

If you're interested only in JDBC, you can stick with the last event above (XDBCStatement).

Note, while we're on the subject: if you're looking for a "fancier" kind of auditing, for example to log changed fields (before and after values), you can check this thread by @Fabio Goncalves.

For differences between InterSystems CACHE and InterSystems IRIS Data Platform, I suggest you have a look here.

Specifically, you can find there a table comparing the products (including InterSystems Ensemble), as well as a document going into detail about various new features and capabilities.

If you want to perform a PoC for a new system, definitely use InterSystems IRIS.

Important Note -

Some classes - for example the built-in Business Process class (Ens.BusinessProcessBPL) - already have a specific implementation of an %OnDelete method, and hence adding the DeleteHelper as a superclass could override the "default" %OnDelete that class might have (depending on the order of inheritance defined in the class).

So for example a BPL-based Business Process would have this %OnDelete -

ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status
{
    // Extract the object's ID from the OID
    Set tId=$$$oidPrimary(oid)
    // Look up the Context object referenced by this Business Process instance
    &sql(SELECT %Context INTO :tContext FROM Ens.BusinessProcessBPL WHERE %ID = :tId)
    If 'SQLCODE {
        // Delete the related Context and Thread data along with the instance
        &sql(DELETE from Ens_BP.Context where %ID = :tContext)
        &sql(DELETE from Ens_BP.Thread where %Process = :tId)
    }
    Quit $$$OK
}

Note this takes care of deleting the related Context and Thread data when deleting the Business Process instance. This is the desired behavior. Without this, lots of "left-over" data could remain.

Note also that the "AddHelper" class, which assists in adding the DeleteHelper as a superclass to various classes, in fact does not add the DeleteHelper to Business Process classes -

// Will ignore classes that are Business Hosts (i.e. Business Service, Business Process or Business Operation)
// As they do not require this handling
If $ClassMethod(className,"%IsA","Ens.Host")||$ClassMethod(className,"%IsA","Ens.Rule.Definition") {
    Continue
}


But if you add the DeleteHelper manually to your classes, please check carefully whether the class already has an %OnDelete() method, and decide how you'd like to handle that.

A good option is to simply have the newly generated %OnDelete() method also call the original %OnDelete, via ##super().
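
For example, a minimal sketch of what that could look like (the actual deletion logic generated by the DeleteHelper is elided here; this just illustrates chaining to the superclass implementation):

ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status
{
    // ... the DeleteHelper-generated deletions of referenced objects go here ...

    // Then also run the original %OnDelete inherited from the superclass,
    // so its cleanup (e.g. the Context and Thread data above) is not lost
    Quit ##super(oid)
}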

Thanks to @Suriya Narayanan Suriya Narayanan Vadivel Murugan for helping a customer diagnose this situation!

Hi Murillo,

I replied to your similar comment on this discussion thread; in fact I referenced this discussion there as well...

So you'll find a more detailed answer there.

In any case, using both utilities could be helpful for your scenario going forward - the DeleteHelper to hopefully avoid these kinds of cases, and this utility to validate that you indeed don't have any "Purge leaks".

Hi Murillo,

Happy you found interest in this utility.

The situation you described is in fact the classic scenario I had in mind when I built this utility, so it could definitely help.

Here are a few clarifications though –

  1. Ensemble Version

As I mentioned in my original post above, in "relatively" newer versions of Ensemble (since 2017.1), and definitely in InterSystems IRIS, the SOAP Wizard has a checkbox which, if checked, makes the auto-generated classes include an %OnDelete method similar to the one generated by my utility. The built-in Purge Task will therefore take care of deleting this data (going forward, that is... see more about this in the next point).

So in these versions, for this specific scenario (SOAP Wizard generated classes, as well as the XML Schema Wizard), you don't have to use my utility. For other cases, where you have custom Persistent classes with inter-relations, my utility would still be helpful.

 

  2. Looking Ahead vs. Looking Back

This %OnDelete method (generated by my utility, or by checking the checkbox in the Wizard mentioned above) takes care of deleting these objects when the Message Body is deleted. But if the Message Body has already been deleted (which seems to be your case), this will not help (directly) retroactively.

It would still be recommended to add it, for Purges happening going forward (and for another consideration mentioned soon), but just adding it will not make the old accumulated data get "auto-magically" deleted.

If you indeed have this old data accumulated (objects pointed to by older, previously purged Message Bodies), then you'll have to take care of deleting it programmatically.

I won’t get into too many details about how to do this, but here are a few general words of background –

The general structure of Ensemble Messages (in this context) is:

Message Header -> Message Body [ -> possibly other Persistent objects being referenced; and potentially further levels of object hierarchy]

The built-in Purge Task will delete the Message Header along with the Message Bodies (assuming you checked the "Include Bodies" checkbox in the Purge Task definition, which in 99.99% of cases I've seen should be checked). But it will not delete other Persistent objects the Message Body was referencing, at least not "out-of-the-box" (that's where the %OnDelete method comes into play).

So if the Message Headers and Bodies referring to the "other objects" were already deleted, then in order to delete the "no longer referenced" objects you'd need to find the last ID of these objects (at the top-most level of the object hierarchy) that still has a reference, and delete those under that ID (assuming an accumulating running integer ID for these objects).
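
Just to make the idea concrete, a hypothetical sketch (all table and column names here are made up for illustration, assuming OtherObject is the Message Body's reference property):

-- Find the lowest "other object" ID still referenced by a surviving Message Body
SELECT MIN(OtherObject) FROM MyPkg.MyMessageBody

-- The orphaned objects are then those accumulated below that ID
DELETE FROM MyPkg.MyOtherObject WHERE ID < :minStillReferenced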

I believe the WRC (InterSystems Support – Worldwide Response Center) has a utility built for these kinds of scenarios, so I recommend reaching out to the WRC for assistance if this is your situation.

Note that if you added my %OnDelete to your classes, then finding just the top-level object in the hierarchy - the one(s) that the Message Body referenced - would be enough to delete, since the %OnDelete will take care of deleting the rest of "the tree" of objects. That's why I said above that adding it won't fix this "lingering" data situation all by itself, but it can help.

 

  3. Future Proofing

As you can see, finding yourself with accumulated old data that is not straightforward to delete is something you'd very much want to avoid. That is why I also built another utility that helps validate, during development/testing stages, that there are no such "Purge leaks" in your interfaces.

In addition, this tool also helps provide an estimate of how much diskspace (for database growth and journaling) your interface will require.

You can check this out here.

 

Hope this helps.

Thanks for this tip Evgeny.

Indeed this /_spec convention pointing to the Swagger spec does seem convenient; I wasn't aware it was part of the OpenAPI Specification (in fact I didn't find it mentioned in the Spec...).

I actually saw the method you mentioned when I was looking at the REST template provided as part of the previous contest, and thought of using it. But eventually, since I used the spec-first approach, and the method you mentioned is currently (at least as-is) implemented assuming a code-first approach, I opted not to, and instead provided the same functionality (a REST API end-point serving the Swagger spec) via our already built-in API Mgmt. REST API (which I mentioned above).
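
For reference, a minimal sketch of what exposing such an end-point in a %CSP.REST dispatch class could look like (the class name, route and spec content here are illustrative stubs, not the template's actual implementation):

Class Demo.API Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/_spec" Method="GET" Call="GetSpec"/>
</Routes>
}

/// Serve the Swagger (OpenAPI) spec for this API
ClassMethod GetSpec() As %Status
{
    Set %response.ContentType = "application/json"
    // Stub only - in practice you would return your real stored spec document
    Write {"openapi":"3.0.0","info":{"title":"Demo API","version":"1.0.0"}}.%ToJSON()
    Quit $$$OK
}

}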

I could of course consider adding something similar in the future.