Not enough detail here to see exactly what you are encountering, but %Library.DynamicObject supports, in its %Set method:
method %Set(key, value, type) as %DynamicAbstractObject
Create a new value or update an existing value.
key - The key name of the value you wish to create or update.
value - The new value with which to update the previous value or create a new value.
type - OPTIONAL, the 'type' of the value being assigned.
If the 'type' argument is present then its value must be one of the following string values:
"" - %Set(key,value) without conversions
"null" - "" becomes null; else assign with no conversion
"boolean" - 0/nonzero becomes JSON false/true
"number" - Convert to numeric value
"string" - Convert to text string
"string>base64" - Convert to text string then encode into base64
"string<base64" - Convert to text string then decode from base64
"stream" - %Stream contents converted to text string
"stream>base64" - %Stream contents are encoded into base64 string
"stream<base64" - %Stream is decoded from base64 into byte string
So if one of your fields is greater than the maximum string length, proper use of the type argument would place the data in a stream object.
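As a rough sketch (the key name "bigField" and the variables are just illustrative), storing and retrieving an oversized value via a stream might look like:

```objectscript
 // Sketch: write an oversized value into a DynamicObject through a stream,
 // then read it back as a stream instead of a string.
 Set stream = ##class(%Stream.GlobalCharacter).%New()
 Do stream.Write(bigData)                     // bigData may exceed the string limit
 Set obj = ##class(%Library.DynamicObject).%New()
 Do obj.%Set("bigField", stream, "stream")    // stream contents become the value
 // %Get also accepts a type argument, so the value comes back
 // as a stream rather than triggering a MAXSTRING error:
 Set readBack = obj.%Get("bigField", , "stream")
```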
Another alternative is not to use %Library.DynamicObject at all, but rather to define classes that extend %JSON.Adaptor and declare your properties with the correct data types; for the field larger than the maximum string length, use %Stream.GlobalCharacter.
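A minimal sketch of that class-based approach (the class and property names are just examples):

```objectscript
Class Demo.Patient Extends (%Persistent, %JSON.Adaptor)
{

// Ordinary fields map to ordinary datatypes
Property Name As %String(MAXLEN = 100);

// The field that can exceed the maximum string length
// is declared as a character stream instead
Property Notes As %Stream.GlobalCharacter;

}
```

The %JSONImport()/%JSONExport() methods provided by %JSON.Adaptor then handle the stream property without running into the string-length limit.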
While %Library.DynamicObject is quick (and not dirty), I prefer the %JSON.Adaptor approach except for cases where the payload can change, for one reason or another, or over time. Defining your classes is purposeful in that it defines the schema concretely (whether you think this is good or bad is up to you).
you stated
"We use a LOT of embedded SQL in our application and I have had to move to dynamic to make this work, however since dynamic queries aren't cached, "
Dynamic queries have always been cached, i.e., the query code, but not the results.
Embedded SQL, prior to the Universal Query Cache, produced code inline inside your class/MAC routines. With the advent of the Universal Query Cache, all SQL statements, whether embedded SQL or otherwise, are cached (again, the query code, not the results). The Universal Query Cache addressed several issues with embedded SQL: the query plan/code used to be produced at compilation time of your class/MAC routine, so if the table statistics changed, indices were added or removed, or other changes occurred to the table definition, your embedded query would continue to use the old code unless recompiled (for better or worse).
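For example, with Dynamic SQL a prepared statement compiles into a cached query class (the %sqlcq* routines), so repeated prepares of the same statement text reuse the cached code; only the results are recomputed. A sketch (table and column names are illustrative):

```objectscript
 // Sketch: the second %Prepare of the same statement text reuses
 // the cached query code rather than recompiling it.
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT Name FROM Demo.Patient WHERE ID = ?")
 If $$$ISOK(sc) {
     Set rs = stmt.%Execute(42)
     While rs.%Next() { Write rs.%Get("Name"), ! }
 }
```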
Global reads might make a query seem faster when run more than once, but that is because the data is already in the cache, resulting in a logical read vs. a physical read from disk.
Analytics/MDX queries are different from SQL in that there is a caching mechanism for query results.

A couple of thoughts on this:
1. Clearly all of the metrics depend on the presence of any indices. For Query #1 it may have done a table scan vs. a (bitmap) index scan. However, you are comparing the same query, so it should be an apples-to-apples comparison.
2. Did you ensure the global cache was flushed between tests? For Query #1, if you ran the embedded query first, the side effect is that much of the data was already in the global cache, which means the Dynamic SQL query was able to do logical reads vs. physical reads. This could explain why it is faster. I would expect both techniques to use the same exact query plan, and likely the same exact generated query routine (%sqlcq*); for that reason I wouldn't expect the times to be so very different. BTW, I don't know of a way to flush the global cache other than to restart IRIS and perform the statement again. Another way of looking at this: if you run a SQL statement (whether embedded or dynamic) twice, the second time will always be faster, as the data is now in the logical global cache.
3. I don't agree with "In general, we can see that parent-children relationship is a lot slower from the side of the children property, even though all the data of Patient and Visit is stored in the Patient global." The devil is in the details: the indices and the WHERE clause.
For any of these queries, without seeing the query plan we don't have the full story. We also don't know:
1. the value returned by the COUNT(*)
2. whether the queries were purely index-satisfiable or had to go back to the table (master map). If they had to go to the table, how "fat" was the table (how many total columns, and the overall length of a row) and how tall (how many total rows)?
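One way to inspect the plan rather than guess (exact tooling varies by version; the query text here is illustrative) is the EXPLAIN statement or $SYSTEM.SQL.Explain():

```objectscript
 // Sketch: look at the query plan instead of inferring it from timings.
 // In recent IRIS versions you can run, from a SQL shell:
 //   EXPLAIN SELECT COUNT(*) FROM Demo.Patient
 // Programmatically, $SYSTEM.SQL.Explain() returns the plan in an array:
 Do $SYSTEM.SQL.Explain("SELECT COUNT(*) FROM Demo.Patient", .plan)
 ZWrite plan
```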