The temp file often uses a process-private global.

Process-private globals are written to the IRISTEMP database. In contrast to global variables, InterSystems IRIS does not treat a SET or KILL of a local variable or a process-private global as a journaled transaction event; rolling back the transaction has no effect on these operations.

That being said, as Peter mentioned, if you had an index on code_1_text you could greatly improve performance.  In fact I suspect this type of query would be completely index satisfiable, i.e., the query would only examine the index and would not have to go to the master map, for even better performance.

Depending on the length of the values for code_1_text, if you do choose to add an index you might want to define the property as

Property code_1_text As %String(COLLATION = "SQLUPPER(113)", MAXLEN = 32000);

by setting the collation and length.  If the values exceed the maximum global subscript length and you do not do this, you could encounter <SUBSCRIPT> errors.

A subscript has an illegal value or a global reference is too long. For further details, refer to $ZERROR. For more information on maximum length of global references, see “Determining the Maximum Length of a Subscript”.
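As a rough sketch of how the property and index might look together in a class definition (the class and index names here are hypothetical and the property name is simplified):

Class Demo.CodeTable Extends %Persistent
{

/// SQLUPPER(113) truncates the collated value so the index subscript stays within the limit
Property CodeText As %String(COLLATION = "SQLUPPER(113)", MAXLEN = 32000);

/// queries filtering on CodeText can be satisfied from this index
Index CodeTextIdx On CodeText;

}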

The REST API is for

SQL Search

The InterSystems IRIS® SQL Search tool integrates with the InterSystems IRIS Natural Language Processor (NLP) to perform context-aware text search operations on unstructured data, with capabilities such as fuzzy search, regular expressions, stemming, and decompounding. Searching is performed using a standard SQL query with a WHERE clause containing special InterSystems SQL Search syntax.

Is your table using the NLP features?

While it wouldn't be difficult to build a REST API to accept an arbitrary SQL statement and return results (a rough sketch follows the list below), there are several things to consider.

1. A single SQL SELECT statement might return tens of thousands of rows, making the REST service problematic with respect to timeouts and payload/response size.

2. You will want to enforce SQL security so that someone isn't allowed to

- perform a DELETE if they don't have access to do so

- SELECT data from tables they don't have access to

etc.
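For illustration only, here is a minimal sketch of such a service (the class, route, and field names are hypothetical; it caps the number of rows returned, but it does not address the security points above):

Class Demo.SQLRest Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/query" Method="POST" Call="RunQuery"/>
</Routes>
}

/// expects a JSON body such as {"sql":"SELECT Name FROM Sample.Person"}
ClassMethod RunQuery() As %Status
{
    Set payload = {}.%FromJSON(%request.Content)
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare(payload.sql)
    If $$$ISERR(sc) Quit sc
    Set rs = stmt.%Execute()
    If rs.%SQLCODE < 0 Quit $$$ERROR($$$GeneralError, "SQLCODE "_rs.%SQLCODE_": "_rs.%Message)
    // cap the response size so a huge SELECT does not blow up the payload
    Set rows = [], max = 1000, cols = stmt.%Metadata.columns
    While rs.%Next() && (rows.%Size() < max) {
        Set row = {}
        For i=1:1:cols.Count() {
            Do row.%Set(cols.GetAt(i).colName, rs.%GetData(i))
        }
        Do rows.%Push(row)
    }
    Write rows.%ToJSON()
    Quit $$$OK
}

}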

The Language = tsql feature allows you to write SQL statements that are supported by Transact-SQL.

https://docs.intersystems.com/iris20252/csp/docbook/Doc.View.cls?KEY=GTS...

InterSystems TSQL is an implementation of the Transact-SQL procedural language which supports many of the features of both the Sybase and Microsoft implementations. Transact-SQL is used with Sybase Adaptive Server, Microsoft SQL Server (MSSQL), and other platforms.

On the one hand it means that if you are familiar with TSQL you can continue to write using its supported syntax, and:

When you run TSQL code on the InterSystems IRIS platform, the code is compiled into the corresponding InterSystems SQL and ObjectScript code. ObjectScript is InterSystems object-oriented programming language. The compiled code is run on InterSystems IRIS and available for debugging purposes, if needed. This compilation step is fully automated and not a one-time migration task. The migration effort consists of loading the schema, data and TSQL code from the old environment to InterSystems IRIS. From then on you can continue to use and modify the TSQL application code, simply recompiling it after making any changes.

On the other hand, if you use Language = tsql you may not be able to use the extensions that InterSystems SQL supports, such as implicit join (arrow) syntax.

InterSystems SQL provides a special -> operator as a shorthand for getting values from a related table without the complexity of specifying explicit JOINs in certain common cases. This arrow syntax can be used instead of explicit join syntax, or in combination with explicit join syntax. Arrow syntax performs a left outer join.
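For example, assuming the classic Sample.Employee/Sample.Company schema, the arrow syntax pulls the company name without an explicit JOIN:

Set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT Name, Company->Name AS CompanyName FROM Sample.Employee")
While rs.%Next() {
    Write rs.%Get("Name"), " works at ", rs.%Get("CompanyName"), !
}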

While you can write a method or trigger using Language = tsql, I do not believe you can write a class query with Language = tsql.  In this case you are kind of mixing apples and oranges.

A class Query is a defined/callable query statement that can optionally be exposed as a SqlProc.  A Query can be defined as type %Query, in which case you write the Execute/Fetch/Close methods yourself, and within those I imagine you could use Language = tsql, but this is the less common approach and is a special case.  If the Query is based on %SQLQuery, you simply provide an InterSystems SQL statement, not a TSQL statement.
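For example, a %SQLQuery-based class query is just an InterSystems SQL statement (the Sample.Person table here is assumed):

/// callable as a class query and, because of SqlProc, as a stored procedure
Query ByName(name As %String) As %SQLQuery [ SqlProc ]
{
SELECT ID, Name, DOB FROM Sample.Person
WHERE Name %STARTSWITH :name
ORDER BY Name
}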

A class method can specify Language = tsql, which means the SQL statements in the method use the TSQL dialect.  A class method can also be defined as a SqlProc, which can either act as a function returning a value or can use ReturnResultSets to

Specifies whether this method returns result sets (so that ODBC and JDBC clients can retrieve them).
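As a rough sketch of that pattern, written here in ObjectScript rather than TSQL to keep it short (the Demo.Orders table and column names are hypothetical):

ClassMethod BigOrders(minTotal As %Numeric = 100) [ ReturnResultSets, SqlProc ]
{
    // run the query and hand the result set back so ODBC/JDBC callers can fetch it
    Set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT OrderID, CustomerName, Total FROM Demo.Orders WHERE Total >= ?", minTotal)
    Do %sqlcontext.AddResultSet(rs)
    Quit
}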

The Data keyword means to capture/store the values of City and State in the index, but not as keys.  If your SQL query needs ZipCode, City, and State, then with Data = (City, State) the query can be index satisfiable, meaning it will read the City and State from the index instead of having to go to the master map/table to get them.
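For example, the index definition would look something like this (using the ZipCode, City, and State properties discussed above):

/// ZipCode is the index key; City and State are stored in the index as data, not as subscripts
Index ZipIdx On ZipCode [ Data = (City, State) ];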

A couple of thoughts on this:

1. Clearly all of the metrics depend on the presence of any indices.  For Query #1 it may have done a table scan vs. a (bitmap) index scan.  However, you are comparing the same query, so it should be an apples-to-apples comparison.

2. Did you perform the test and ensure the global cache was flushed?  For Query #1, if you ran the embedded query first, the side effect is that much of the data was already in the global cache, which means the dynamic SQL query was able to do logical reads vs. physical reads.  This could explain why it is faster.  I would expect that both techniques used the exact same query plan and likely the exact same generated query routine (%sqlcq*).  For that reason I wouldn't expect the times to be so different.  BTW, I don't know of a way to flush the global cache other than to restart IRIS and run the statement again.  Another way of looking at this: if you run a SQL statement (whether embedded or dynamic) twice, the second run will always be faster because the data is now in the global cache.

3. I don't agree with "In general, we can see that parent-children relationship is a lot slower from the side of the children property, even though all the data of Patient and Visit is stored in the Patient global.".  The devil is in the details: the indices and the WHERE clause.

For any of these queries, without seeing the query plan we don't have the full story.  We also don't know:

1. what the COUNT(*) returned

2. whether the queries were purely index satisfiable or had to go back to the table (master map).  If they had to go to the table, how "fat" the table is (how many total columns and the overall length of a row) and how tall it is (how many total rows).

Not enough detail here to see exactly what you are encountering, but %Library.DynamicObject supports the following in its %Set method:

 method %Set(key, value, type) as %DynamicAbstractObject

Create a new value or update an existing value.

key The key name of the value you wish to create or update.

value The new value with which to update the previous value or create a new value.

type OPTIONAL, the 'type' of the value being assigned.

If the 'type' argument is present then its value must be one of the following string values:

""               - %Set(key,value) without conversions
"null"           - "" becomes null; else assign with no conversion
"boolean"        - 0/nonzero becomes JSON false/true
"number"         - Convert to numeric value
"string"         - Convert to text string
"string>base64"  - Convert to text string then encode into base64
"string<base64"  - Convert to text string then decode from base64
"stream"         - %Stream contents converted to text string
"stream>base64"  - %Stream contents are encoded into base64 string
"stream<base64"  - %Stream is decoded from base64 into byte string

So if one of your fields is greater than the maximum string length, proper use of the type argument would place the data in a stream object.
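For instance, a rough sketch of using the type argument so an oversized value stays in a stream (the key name and variables are hypothetical):

// store the contents of a %Stream under the "document" key
Do obj.%Set("document", bigStream, "stream")

// read a value back as a %Stream.DynamicCharacter instead of a (possibly too long) string
Set docStream = obj.%Get("document", , "stream")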

Another alternative is to not use %Library.DynamicObject, but rather define classes that extend %JSON.Adaptor and declare your properties with the correct data types; for the field larger than the maximum string length, use %Stream.GlobalCharacter.
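A minimal sketch of that approach (the class and property names are hypothetical):

Class Demo.Document Extends (%Persistent, %JSON.Adaptor)
{

Property Title As %String(MAXLEN = 200);

/// holds values larger than the maximum string length
Property Body As %Stream.GlobalCharacter;

}

An instance can then be populated from JSON with %JSONImport() and serialized with %JSONExport().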

While %Library.DynamicObject is quick (and not dirty), I prefer the %JSON.Adaptor approach except for cases where the payload can change for one reason or another, or over time.  Defining your classes is purposeful, as it defines the schema concretely (whether you think that is good or bad is up to you).

You stated:

"We use a LOT of embedded SQL in our application and I have had to move to dynamic to make this work, however since dynamic queries aren't cached,  "

Dynamic queries have always been cached, i.e., the query code, but not the results.

Prior to the Universal Query Cache, embedded SQL produced code inline inside your class/MAC routines.  With the advent of the Universal Query Cache, all SQL statements, whether embedded or otherwise, are cached (again, the query code, not the results).  The Universal Query Cache addressed several issues with embedded SQL.  Previously, the query plan/code for embedded SQL was produced at compilation time of your class/MAC routine.  If the table statistics changed, indices were added or removed, or other changes occurred to the table definition, your embedded query would continue to use the old code unless recompiled (for better or worse).

Regarding global reads: you might find that running a query more than once is faster, but that is because the data is in the cache, resulting in logical reads vs. physical reads to disk.

Analytics/MDX queries are different from SQL in that there is a caching mechanism for query results.

Yes, this is true, and a formal way to do this is to have the thing you queue, whether a function or a class method, return a %Status to signal to the WQM that an error occurred.  In practice, a call to wqm.Queue should not only name a function or class method but also pass some parameters.  I often have my function/class method act on a collection/list of row IDs, so my call to .Queue includes the function/class method as well as the list of row IDs to process.
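A rough sketch of that pattern (the Demo.Worker class and method names are hypothetical):

/// worker in a hypothetical Demo.Worker class: processes a $LIST of row IDs and returns a %Status so the WQM can surface errors
ClassMethod ProcessBatch(rowIds As %List) As %Status
{
    Set sc = $$$OK
    For i=1:1:$LISTLENGTH(rowIds) {
        Set id = $LIST(rowIds, i)
        // ... per-row work here; set sc to an error %Status on failure ...
    }
    Quit sc
}

// caller: queue batches of row IDs and wait; WaitForComplete() rolls up any failed %Status
Set wqm = $SYSTEM.WorkMgr.%New()
Set sc = wqm.Queue("##class(Demo.Worker).ProcessBatch", $LISTBUILD(101, 102, 103))
If $$$ISOK(sc) { Set sc = wqm.WaitForComplete() }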

Maybe someone is looking for job security.  

Regarding 

You will miss out on DTL specific functions (example lookups), albeit you could call the look up code in COS

Some might think that using classes with COS code is better because it allows you to call any function you like; I think that demonstrates a lack of knowledge.  If you create a class that extends Ens.Rule.FunctionSet, the functions become available to anyone using the DTL or Rule Editor.
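For example (the class and function names are hypothetical):

Class Demo.Util.Functions Extends Ens.Rule.FunctionSet
{

/// shows up in the DTL and Rule editors as a callable function
ClassMethod NormalizeMRN(mrn As %String) As %String
{
    Quit $ZCONVERT($ZSTRIP(mrn, "<>W"), "U")
}

}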

Additionally, the new Rule Editor allows for ForEach, which otherwise might have been difficult to implement in the past.

I'd much rather be spending my time working on more advanced parts of an implementation instead of DTLs and Rules.

The only odd thing about DTLs and Rules is that when you ask a business analyst to be involved, they might not feel as strongly about following a rigorous CI/CD process. In my experience, some folks like to make changes to the DTLs and Rules without committing to a feature branch and promoting up through the different environments (TEST, STAGE, PROD), as would normally be done if one were working strictly in a class, even though DTLs and Rules are in fact classes.

In the past what I have done is define a Web Application with a REST dispatch class, and in that class serialize the data from the JSON payload into a real message class with properties.  In this way the Business Service receives a request object with the properties defined, and the Message Viewer has the expected functionality.  This also means that the business service is not purely tied to REST; anyone that sends a proper pRequest object can use the business service.
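A rough sketch of that pattern (class, property, and service names are hypothetical):

/// a real message class with concrete properties, visible in the Message Viewer
Class Demo.Msg.PatientRequest Extends Ens.Request
{

Property MRN As %String;

Property LastName As %String;

}

/// in the REST dispatch class: deserialize the JSON payload into the message and hand it to the service
ClassMethod Patient() As %Status
{
    Set json = {}.%FromJSON(%request.Content)
    Set req = ##class(Demo.Msg.PatientRequest).%New()
    Set req.MRN = json.mrn, req.LastName = json.lastName
    Set sc = ##class(Ens.Director).CreateBusinessService("Demo.PatientService", .service)
    If $$$ISOK(sc) { Set sc = service.ProcessInput(req, .resp) }
    Quit sc
}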

The documentation link reports

https://docs.intersystems.com/hs20241/csp/docbook/DocBook.UI.Page.cls?KE...

When version checking is implemented, the property specified by VERSIONPROPERTY is automatically incremented each time an instance of the class is updated (either by objects or SQL). Prior to incrementing the property, InterSystems IRIS compares its in-memory value to its stored value. If they are different, then a concurrency conflict is indicated and an error is returned; if they are the same, then the property is incremented and saved.

I've used the VERSIONPROPERTY  parameter before and have observed that it is updated each time the object is saved.
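For reference, a minimal sketch of how it is wired up (the class and property names are hypothetical):

Class Demo.Account Extends %Persistent
{

/// names the property used for version checking
Parameter VERSIONPROPERTY = "Version";

Property Version As %Integer [ InitialExpression = 0 ];

Property Balance As %Numeric;

}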