%JSON.Adaptor does not construct an intermediate dynamic object. You can use %ObjectToAET to convert a normal object into a dynamic object:

set dynObj = ##class(%ZEN.Auxiliary.altJSONProvider).%ObjectToAET(obj)

There are two approaches:

1. Create a "result set" class to hold the results (interestingly, InterSystems provides %XML.DataSet and other tools for this specific use case with XML/SOAP; see the docs):

Class Test.JSONRS Extends (%RegisteredObject, %JSON.Adaptor)
{
Property count As %Integer;
Property results As list Of Book;
}
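
A usage sketch (Test.Book and the SQL query are assumptions; %JSONExport writes the whole structure to the current device):

set rs = ##class(Test.JSONRS).%New()
set rs.count = 0
set result = ##class(%SQL.Statement).%ExecDirect(, "SELECT ID FROM Test.Book")  // hypothetical table
while result.%Next() {
    do rs.results.Insert(##class(Test.Book).%OpenId(result.ID))
    set rs.count = rs.count + 1
}
do rs.%JSONExport()  // emits {"count":N,"results":[...]}

Note that every book is held in memory until %JSONExport is called.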

2. Simple approach:

  • Output header {"results": 3, "items": [
  • Call %JSONExport on each book (don't forget the comma between items)
  • Output footer ]}
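
The steps above can be sketched as follows (Test.Book is a hypothetical class; each book is opened and exported one at a time):

set result = ##class(%SQL.Statement).%ExecDirect(, "SELECT COUNT(*) AS cnt FROM Test.Book")
do result.%Next()
write "{""results"": ", result.cnt, ", ""items"": ["
set result = ##class(%SQL.Statement).%ExecDirect(, "SELECT ID FROM Test.Book")
set first = 1
while result.%Next() {
    write:'first ","  // comma before every item except the first
    set first = 0
    do ##class(Test.Book).%OpenId(result.ID).%JSONExport()
}
write "]}"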

Despite being hacky, the second approach is better:

  • If JSON export succeeds for each individual object, it works every time; and if some object fails, we wouldn't get valid JSON anyway
  • It can easily be generalized to any type of result
  • It does not hold a large structure in memory, as each object is loaded and exported individually

That said, I generally recommend against supplying count in query results, for several reasons:

So %Dictionary.CacheClassname would be my table name then?

Well, class name. If you want, you can convert a table name to a class name with:

set:table'["." table=$$$DefaultSchema_"."_table	// support unqualified names
set class = $$$GetClassNameFromIQN(table)
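
Wrapped as a method, this might look like the sketch below ($$$DefaultSchema and $$$GetClassNameFromIQN are system macros, so the class must be compiled with the corresponding system include files):

ClassMethod TableToClass(table As %String) As %String
{
    set:table'["." table = $$$DefaultSchema _ "." _ table  // support unqualified names
    quit $$$GetClassNameFromIQN(table)
}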

I don't have to use any :sql.... type code for it to do the lookup? 

You can but you don't have to.

What about the existing EXISTS function that is used for Data Lookup Tables (LUTs)?

It's a separate function that works only with Data Lookup Tables (which are actually not tables at all).

Interesting article!

I encountered the Business Process variation of this issue recently and would like to add that setting the SKIPMESSAGEHISTORY parameter is not always a complete solution. It disables the sent/received reference history, but pending messages are still stored in a collection (there is no way around that).

In cases where one BP instance waits on more than ~10 000 messages simultaneously, the same issue occurs (I was getting a journal file per minute with 50 000 pending messages).

The recommended approach would be to change the architecture so that each process waits on a more reasonable number of messages.