A couple of thoughts on this:

1. Clearly all of the metrics depend on the presence of any indices. For Query #1 it may have done a table scan vs. a (bitmap) index scan. However, you are comparing the same query, so it should be an apples-to-apples comparison.

2. Did you perform the test and ensure the global cache was flushed? For Query #1, if you ran the embedded query first, the side effect is that much of the data ended up in the global cache, which means the dynamic SQL query was able to do logical reads instead of physical reads. This could explain why it is faster. I would expect that both techniques used the exact same query plan and likely the same generated query routine (%sqlcq*), so I wouldn't expect the times to be so different. BTW, I don't know of a way to flush the global cache other than to restart IRIS and run the statement again. Another way of looking at this: if you run a SQL statement (whether embedded or dynamic) twice, the second time will always be faster because the data is now in the global cache (see the timing sketch after this list).

3. I don't agree with "In general, we can see that parent-children relationship is a lot slower from the side of the children property, even though all the data of Patient and Visit is stored in the Patient global.". The devil is in the details: the indices and the WHERE clause.
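To illustrate the cache effect from point 2, here is a minimal timing sketch (the table name Demo.Patient is purely illustrative). Running the same dynamic statement twice, the second pass is usually faster simply because the blocks are already in the global buffer cache, not because of the query technique:

 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT COUNT(*) FROM Demo.Patient")
 If $SYSTEM.Status.IsError(sc) { Do $SYSTEM.Status.DisplayError(sc) Quit }
 For pass=1:1:2 {
     Set start = $ZHOROLOG
     Set rs = stmt.%Execute()
     Do rs.%Next()
     // second pass: logical reads from the buffer cache instead of physical reads from disk
     Write "Pass ",pass,": count=",rs.%GetData(1),", elapsed=",$ZHOROLOG-start," s",!
 }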

For any of these queries, without seeing the query plan we don't have the full story. We also don't know:

1. the value returned by the COUNT(*)

2. were the queries purely index satisfiable, or did they have to go back to the table (master map)? If they had to go back to the table, how "fat" was the table (how many total columns, and the overall length of a row) and how tall was it (how many total rows)? See the sketch after this list.
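For illustration, a hedged example with a hypothetical Demo.Visit class: an index that carries PatientID as data lets a query on that column be satisfied entirely from the index global, while a query that needs an unindexed column has to go back to the master map.

Class Demo.Visit Extends %Persistent
{
Property PatientID As %Integer;
Property AdmitDate As %Date;
Property Notes As %String(MAXLEN = 4000);
// storing PatientID as index data makes AdmitDate lookups index-only for that column
Index AdmitIdx On AdmitDate [ Data = PatientID ];
}

With that definition, SELECT PatientID FROM Demo.Visit WHERE AdmitDate = '2024-01-15' is index satisfiable, while SELECT Notes FROM Demo.Visit WHERE AdmitDate = '2024-01-15' must read the master map, and the width and row count of the table start to matter.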

Not enough detail here to see exactly what you are encountering, but %Library.DynamicObject supports a type argument in %Set:

 method %Set(key, value, type) as %DynamicAbstractObject

Create a new value or update an existing value.

key The key name of the value you wish to create or update.

value The new value with which to update the previous value or create a new value.

type OPTIONAL, the 'type' of the value being assigned.

If the 'type' argument is present, then its value must be one of the following strings:

""              - %Set(key,value) without conversions
"null"          - "" becomes null; else assign with no conversion
"boolean"       - 0/nonzero becomes JSON false/true
"number"        - convert to numeric value
"string"        - convert to text string
"string>base64" - convert to text string, then encode into base64
"string<base64" - convert to text string, then decode from base64
"stream"        - %Stream contents converted to text string
"stream>base64" - %Stream contents are encoded into a base64 string
"stream<base64" - %Stream is decoded from base64 into a byte string

So if one of your fields is larger than the maximum string length, using the type argument properly would place the data in a stream object.
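A minimal sketch of that idea, assuming a dynamic object and an illustrative field name:

 Set obj = {}
 Set stream = ##class(%Stream.GlobalCharacter).%New()
 Do stream.Write(bigText)                             // content that may exceed the string limit
 Do obj.%Set("reportText", stream, "stream")          // "stream" tells %Set the value is a %Stream
 Set outStream = obj.%Get("reportText", , "stream")   // read it back as a stream, not a plain string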

Another alternative is not to use %Library.DynamicObject at all, but rather to define classes that extend %JSON.Adaptor and declare your properties with the correct data types; again, for the field larger than the maximum string length, use %Stream.GlobalCharacter.
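A hedged sketch of that approach (class and property names are illustrative):

Class Demo.Report Extends (%RegisteredObject, %JSON.Adaptor)
{
Property PatientID As %Integer;
// a field that can exceed the maximum string length
Property ReportText As %Stream.GlobalCharacter;
}

 // import/export the JSON payload through the adaptor
 Set report = ##class(Demo.Report).%New()
 Set sc = report.%JSONImport(jsonPayload)
 Set sc = report.%JSONExportToString(.jsonOut)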

While %Library.DynamicObject is quick (and not dirty), I prefer the %JSON.Adaptor approach except for the cases where the payload can change, for one reason or another, over time. The approach of defining your classes is purposeful, as it defines the schema concretely (whether you think this is good or bad is up to you).

You stated:

"We use a LOT of embedded SQL in our application and I have had to move to dynamic to make this work, however since dynamic queries aren't cached,  "

Dynamic queries have always been cached, i.e. the query code but not the results.

Embedded SQL, prior to the Universal Query Cache, produced code inline inside your class/MAC routines. With the advent of the Universal Query Cache, all SQL statements, whether embedded or otherwise, are cached (again, the query code, not the results). The Universal Query Cache addressed several issues with embedded SQL. For embedded SQL the query plan/code was produced at compilation time of your class/MAC routine. If the table statistics changed, indices were added or removed, or other changes were made to the table definition, your embedded query would continue to use the old code unless recompiled (for better or worse).
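As a point of reference, a minimal dynamic SQL sketch (the table name is illustrative). The first %Prepare of a statement generates a cached query; subsequent prepares of the same statement text reuse it, and the result set object is an instance of that generated class:

 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT Name FROM Demo.Patient WHERE Name %STARTSWITH ?")
 Set rs = stmt.%Execute("Smith")
 Write rs.%ClassName(1),!   // should show a generated %sqlcq* class, i.e. the cached query code
 While rs.%Next() { Write rs.%Get("Name"),! }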

You might also find that running a query more than once is faster, but that is because the data is in the cache and results in logical reads vs. physical reads to disk.

Analytics/MDX queries are different from SQL in that there is a caching mechanism for query results.

Yes, this is true, and a formal way to do this is to have the thing you queue, whether a function or a class method, return a %Status to signal to the work queue manager that an error occurred. In practice the call to wqm.Queue should not only name a function or class method but also pass some parameters. I often have my function/class method act on a collection/list of row IDs, so my call to .Queue includes the function/class method as well as the list of row IDs to process. A sketch of that pattern follows.
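A minimal sketch, assuming a hypothetical class method Demo.Loader.ProcessBatch that returns a %Status and a $LIST of batches prepared earlier:

 Set wqm = $SYSTEM.WorkMgr.%New()
 For i=1:1:$LISTLENGTH(batches) {
     // each work item gets its own list of row IDs to process
     Set sc = wqm.Queue("##class(Demo.Loader).ProcessBatch", $LIST(batches, i))
     If $SYSTEM.Status.IsError(sc) Do $SYSTEM.Status.DisplayError(sc)
 }
 Set sc = wqm.WaitForComplete()   // any %Status error returned by a work item surfaces here

ClassMethod ProcessBatch(rowIdList As %List) As %Status
{
    Set sc = $$$OK
    For i=1:1:$LISTLENGTH(rowIdList) {
        // act on $LIST(rowIdList, i); set sc to a failure %Status on error
    }
    Quit sc
}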

Maybe someone is looking for job security.  

Regarding 

You will miss out on DTL specific functions (example lookups), albeit you could call the look up code in COS

Some might think that using classes with COS code is better as it allows you to call any function you like; I think that demonstrates a lack of knowledge of what is available. If you create a class that extends Ens.Rule.FunctionSet, the functions then become available to anyone using the DTL or Rule Editor.
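A minimal sketch, with a hypothetical helper; once compiled, NormalizeMRN appears in the function list of the DTL and Rule editors:

Class Demo.Util.Functions Extends Ens.Rule.FunctionSet
{
/// Pad an MRN with leading zeros to a fixed width of 10 so comparisons are consistent
ClassMethod NormalizeMRN(pMRN As %String) As %String
{
    Quit $TRANSLATE($JUSTIFY(pMRN, 10), " ", "0")
}
}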

Additionally, the new Rule Editor allows for ForEach, which otherwise might have been difficult to implement in the past.

I'd much rather be spending my time working on more advanced parts of an implementation instead of DTLs and Rules.

The only odd thing about DTLs and Rules is that when you ask a business analyst to be involved, they might not feel as strongly about following a rigorous CI/CD process. In my experience, some folks like to make changes to DTLs and Rules without committing to a feature branch and promoting up through the different environments (TEST, STAGE, PROD), as would normally be done if one were strictly working in a class, even though DTLs and Rules are in fact classes.

In the past what I have done is define a Web Application with a REST dispatch class, and in that class serialize the data from the JSON payload into a real message class with properties. In this way the business service receives a request object with the properties defined, and the Message Viewer has the expected functionality. This also means that the business service is not purely tied to REST: anyone that sends a proper pRequest object can use it. A sketch of that pattern follows.
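This is only a hedged sketch; the class names, URL, and business service name are illustrative:

Class Demo.Msg.PatientRequest Extends (Ens.Request, %JSON.Adaptor)
{
Property MRN As %String;
Property Name As %String;
}

Class Demo.REST.Dispatch Extends %CSP.REST
{
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/patient" Method="POST" Call="NewPatient"/>
</Routes>
}

ClassMethod NewPatient() As %Status
{
    // deserialize the JSON body into a real message class
    Set request = ##class(Demo.Msg.PatientRequest).%New()
    Set sc = request.%JSONImport(%request.Content)
    If $SYSTEM.Status.IsError(sc) Quit sc
    // hand the typed request object to the business service
    Set sc = ##class(Ens.Director).CreateBusinessService("Demo.PatientService", .service)
    If $SYSTEM.Status.IsError(sc) Quit sc
    Quit service.ProcessInput(request, .response)
}
}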

The documentation link reports

https://docs.intersystems.com/hs20241/csp/docbook/DocBook.UI.Page.cls?KE...

When version checking is implemented, the property specified by VERSIONPROPERTY is automatically incremented each time an instance of the class is updated (either by objects or SQL). Prior to incrementing the property, InterSystems IRIS compares its in-memory value to its stored value. If they are different, then a concurrency conflict is indicated and an error is returned; if they are the same, then the property is incremented and saved.

I've used the VERSIONPROPERTY  parameter before and have observed that it is updated each time the object is saved.
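For reference, a minimal sketch of how that is wired up (class and property names are illustrative):

Class Demo.Account Extends %Persistent
{
/// the property named here is compared against its stored value and incremented on every save
Parameter VERSIONPROPERTY = "RowVersion";

Property Name As %String;

Property RowVersion As %Integer;
}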

The video talks about using the published APIs. I assume this specifically refers to the APIs in the Security package. My specific question is: will we still be able to write SELECT statements against %SYS.Audit, and will anything special need to be done to do so? Will we have to update users to add resources or grant permissions?

Might you be able to share the 2 sql statements and the query plans?

If you are running the queries in the System Management Portal there is a limit that you could set, but I don't imagine that is the issue.

Where possible, a COUNT(*) query could choose to use your indices. In theory your indices might disagree with your table data, but that would be highly suspicious under normal circumstances, as the writes to the table and to the indices happen in a database transaction, so either all of it was written or none of it. See the example below.
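For illustration (hypothetical table and column names), a bitmap index often lets the optimizer answer a COUNT(*) from the index alone; the query plan should confirm whether it reads only the index:

CREATE BITMAP INDEX StatusIdx ON TABLE Demo.Account (Status)

SELECT COUNT(*) FROM Demo.Account WHERE Status = 'Open'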

To your specific question

Is there something inherent with the queries that limits the result set size, or anything I can do to make sure I'm getting back all the results? Rather, than trying to run it about 100 times limiting all previously gotten accounts.

Again, if you are running via the System Management Portal: there are times I run the query with a limit of 100, for example, and once the results are presented I select the Print hyperlink to write the entire query result set to a file.

I believe we can determine this by using

SELECT distinct SystemVersionNumber
FROM ZPM_Analytics.Event
where Action='install'
order by SystemVersionNumber

On our private registry we see the following


SystemVersionNumber
2020.1
2021.1
2021.1.2
2021.1.3

2022.1
2022.1.2
2022.1.4
2022.2
2022.3
2023.1.1
2023.1.2
2023.3
2024.1
2024.1.1

This is why we have been advocating to have Username in the table ZPM_Analytics.Event, as we create a Username for each one of our "clients" and can trace back who these install records are specifically for.