Still not fully able to grasp what you are doing, but if you are trying to address

log.cls_id IN (61) AND log.q_id IN (19, 25, 27)

I often find myself doing something like

log.cls_id %INLIST $LISTFROMSTRING(:Param1, ',')

AND log.q_id %INLIST $LISTFROMSTRING(:Param2, ',')

and then set Param1 = "61" and Param2 = "19,25,27" (no spaces, since the delimiter passed to $LISTFROMSTRING is just a comma)

so long as you know the values of Param1 and Param2. 

%INLIST is documented here https://docs.intersystems.com/iris20241/csp/docbook/Doc.View.cls?KEY=RSQ...

and

$LISTFROMSTRING is documented here https://docs.intersystems.com/iris20241/csp/docbook/Doc.View.cls?KEY=RSQ...
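If you don't know the values until runtime, the same pattern works with Dynamic SQL. Here's a minimal sketch; App.Log and its columns are hypothetical stand-ins for whatever your actual table is:

```objectscript
// Sketch only: App.Log is a hypothetical table standing in for yours
Set tSql = "SELECT * FROM App.Log log "_
           "WHERE log.cls_id %INLIST $LISTFROMSTRING(?, ',') "_
           "AND log.q_id %INLIST $LISTFROMSTRING(?, ',')"
Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare(tSql)
If $$$ISERR(tSC) { Do $SYSTEM.Status.DisplayError(tSC) Quit }
// Pass the comma-delimited value lists as the two ? parameters
Set tResult = tStatement.%Execute("61", "19,25,27")
While tResult.%Next() { /* process each row */ }
```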

Regarding

If the first bullet item in the Query Plan is “Read master map”, or the Query Plan calls a module whose first bullet item is “Read master map”, the query first map is the master map rather than an index map. Because the master map reads the data itself, rather than an index to the data, this almost always indicates an inefficient Query Plan. Unless the table is relatively small, we should create an index so that when we rerun this query the Query Plan first map says “Read index map.”

I think it's more subtle than that.  My general plan of attack is to review the results of Show Plan and then search for Looping.  If the first bullet item in Show Plan is one of the following:

  • Read master map Ens.MessageHeader.IDKEY, using the given idkey value.
  • Read master map Ens.MessageHeader.IDKEY, looping on ID (with a range condition).

I'm not immediately concerned.  In the first case the query is going directly to the row, which is perfectly fine.

In the second case, so long as the range condition is not going to read the entire extent, I can accept that and not look for a better query plan.

Honestly, whether it's

Read master map Ens.MessageHeader.IDKEY, looping on ID (with a range condition).

or

Read index map Ens.MessageHeader.TimeCreated, looping on TimeCreated (with a range condition) and ID.

I don't care if it's the master map or the index map; what I'm interested in is the Looping, and whether that looping causes the engine to look at the entire extent or index.
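If you'd rather not right-click in Studio, you can also dump the plan programmatically with $SYSTEM.SQL.Explain; a quick sketch (the WHERE clause is just an illustrative range condition):

```objectscript
// Sketch: print the query plan for a range query on Ens.MessageHeader
Do $SYSTEM.SQL.Explain("SELECT ID FROM Ens.MessageHeader WHERE TimeCreated >= '2024-01-01 00:00:00'")
```

Then apply the same reading as above: look for the Looping lines and ask whether the range condition bounds them.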

You wrote

The module.xml has the one line which describes unit tests:

<UnitTest Name="src" Package="UnitTests" Phase="test"/>

This line means that ZPM will look in the /src folder of the repo and will run all the tests of all the classes which are included in the UnitTests package.

Is this to say that Name="src" means it will look in the /src folder on the machine where the module has been installed?  If so, I believe this is problematic for us.  We are trying to protect our IP, and as part of the Module.xml definition we have 2 Invokes that help with this effort:

1. <Invoke Class="RCIPM.Utils" Method="MakeModuleDeployed" Phase="Compile" When="After" CheckStatus="false">  We do this because there are some classes which cannot be marked as Deployed, as they need to be compiled additional times in the application namespaces after we have run the install command in HSCUSTOM.

2.  <Invoke Class="RCIPM.Utils" Method="DeleteModulesFolder" Phase="Activate" When="After" CheckStatus="true">

If we don't delete the Module folder then our classes are there in plain text and could be lifted off of the server.

so if Name="src" means look in the /src folder on the target machine (not the registry), then this wouldn't work for us.

That's not to say that DeleteModulesFolder doesn't cause other issues, but it's currently what we feel we need to do to protect our code.
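For the curious, a hypothetical sketch of what a method like MakeModuleDeployed could look like (this is not the actual RCIPM.Utils code; the package name and the SkipDeploy helper are invented for illustration, and $SYSTEM.OBJ.MakeClassDeployed does the real work):

```objectscript
/// Hypothetical sketch; the real RCIPM.Utils implementation differs
ClassMethod MakeModuleDeployed(pPackage As %String = "RCIPM") As %Status
{
    Set tSC = $$$OK
    Set tRS = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT Name FROM %Dictionary.ClassDefinition WHERE Name %STARTSWITH ?",
        pPackage_".")
    While tRS.%Next() {
        // SkipDeploy is a hypothetical helper that excludes the classes
        // which must be recompiled later in the application namespaces
        Continue:..SkipDeploy(tRS.Name)
        // Strip the source so only deployed object code remains
        Set tSC = $SYSTEM.OBJ.MakeClassDeployed(tRS.Name)
        Quit:$$$ISERR(tSC)
    }
    Quit tSC
}
```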

Just one person's opinion, but almost all of my SQL development for SELECT statements is done using class queries. Several compelling features of class queries include:

1. When using IRIS Studio, at least, you can right-click on the text and call Show Plan.  I believe that the first thing that matters for a SQL statement is the actual plan; the relative cost is just that, relative, and not so interesting to me.  What the Query Plan says it is doing tells me all that I need to know.  If you are using Dynamic SQL and building a SQL string, this is much harder to do.

2. In IRIS Studio and VS Code there is a modest amount of syntax checking and coloring in class queries, whereas approaches that build the statement from strings do not have this.  If I were to incorrectly write

set sql = "select Product->Name, Outlet->City, AmountOfSale, UnitsSold "_
        "from HoleFoods.SalesTransaction wher Actual = 1 "

i.e., I've misspelled WHERE, and I won't see this error until after I've run the code.

This kind of compile-time checking can't easily be done with Dynamic SQL.

3. If I choose to mark the class query as a SQLProc then I can perform a form of test using System Explorer->SQL by selecting the stored procedure and then selecting RUN.

4. I'm really liking Table-Valued Functions (TVF).  Some documentation is here https://docs.intersystems.com/iris20233/csp/docbook/DocBook.UI.Page.cls?... .  I couldn't find better info in DOCBOOK, but from the interweb:

A simple definition of the table-valued function (TVF) is: a user-defined function that returns a table data type and can also accept parameters. TVFs can be used after the FROM clause in SELECT statements, so we can use them just like a table in queries. The first question that comes to mind is: what is the main difference between a view (views are virtual database objects that retrieve data from one or more tables) and a TVF? Views do not allow parameterized usage; this is the essential difference between views and TVFs.

so if I have

Class Sample.Person Extends (%Persistent, %Populate)
{
Property Name As %String(POPSPEC = "LastName()");
Property Age As %String [ Calculated, SqlComputeCode = { w:{Name}="Uhles" 1/0 Set {*}=""}, SqlComputed ];
Query ByName(LastName As %String) As %SQLQuery(CONTAINID = 1) [ SqlProc ]
{
SELECT 		%ID,Age,Name 
FROM 		Sample.Person
WHERE  		Name %STARTSWITH :LastName
ORDER By    Name
}

}

I can run in SQL

SELECT * FROM Sample.Person_ByName('Test')

and I can also include columns that the TVF provides in other portions of the query, so I could

SELECT   Age,Name
FROM     Sample.Person_ByName('Test')
ORDER BY Age

as well as use the TVF as part of a JOIN with other tables.
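For example, joining the TVF back to the table itself (just a sketch to show the shape; any table with a joinable key would do):

```sql
SELECT p.Name, p.Age
FROM Sample.Person_ByName('Test') t
JOIN Sample.Person p ON p.ID = t.ID
ORDER BY p.Name
```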

5. Class query SQL statements are visible in DOCUMATIC.  Less experienced SQL authors might have more success if they can see a class query with a best-case example of how to write the JOINs.  Dynamic SQL doesn't provide any UI for folks to view, other than being able to view the class.

6. Class queries afford re-use.  I often see folks embedding SQL statements midway through a method.  A class query allows you to separate the SQL statement from your method code, and it can be utilized in so many things:

  • Class queries as SQL Procs in any xDBC client/reporting tool
  • REST APIs
  • ZEN Reports.. yes I know deprecated
  • ZEN UI pages .. yes I know deprecated
  • ObjectScript code
  • etc.
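For the ObjectScript case, reusing a class query is one %PrepareClassQuery call; a minimal sketch using the ByName query from the Sample.Person example above:

```objectscript
// Reuse the Sample.Person ByName class query from ObjectScript
Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%PrepareClassQuery("Sample.Person", "ByName")
If $$$ISERR(tSC) { Do $SYSTEM.Status.DisplayError(tSC) Quit }
Set tResult = tStatement.%Execute("Test")
While tResult.%Next() {
    Write !, tResult.%Get("Name"), " ", tResult.%Get("Age")
}
```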

Maybe I've sold you on the idea, maybe not.

I've used ChatGPT for some IRIS-related things.  Sometimes it's been useful, and sometimes it's simply made up stuff; it invented the names of classes in IRIS that have never existed and do not exist in the most recent version.  Other times it's provided me information that wasn't boilerplate to copy and paste, but led me down a path that allowed me to find the solution.

There is in fact a global on the Registry that has a record of the Patient data each time it is changed; the global is ^HS.Registry.Patient.OldHistory

The way to understand this is to determine what resource is required to access the database associated with /database/db4/syb/.  You can find this out by going to System Administration->Configuration->System Configuration->Local Databases and viewing the row associated with /database/db4/syb/.

Find your database in that list, note its Resource, and then add that Resource to the Role you are connecting with.  While adding %All as a role will prevent the error, it will also grant a whole lot more permissions than are needed to resolve the <protect> error.

I almost never use the %On... methods because

  1. they are only called via object interaction
  2. if I want to handle the SQL interaction I have to write the code a second time

As Wolf suggests, a properly defined trigger with Foreach = row/object will be called during both Object and SQL interaction, which means I can write one set of code to handle both.  For this scenario I find it's best to think "SQL" instead of objects.
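A minimal sketch of such a trigger, placed inside your persistent class (the Name property and the ^DemoAudit global are hypothetical; {Field*O}/{Field*N} give the old and new field values):

```objectscript
/// Fires for BOTH SQL and object filing because of Foreach = row/object
Trigger NameAudit [ Event = UPDATE, Foreach = row/object ]
{
    // {Name*O} is the old value, {Name*N} the new value
    If {Name*O} '= {Name*N} {
        Set ^DemoAudit($Increment(^DemoAudit)) = {Name*O}_" -> "_{Name*N}
    }
}
```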

The attached class should provide what you are looking for based on my testing.  This follows work I did while at a prior position.

Some things to note about this class

  • I'm declaring  SqlRowIdName = Id so that I don't end up with an Id1 column in the SQL view of the table
  • I've set Parameter STORAGEDELIMITERCHECK = 1; so that when a Save/Insert/Update is done it ensures the data fields do not contain the delimiter, which would "throw off" the piecing strategy.  If you aren't using delimited data but are instead using $LIST data, you wouldn't need this.
  • I've overridden %OnNew so as to establish an array of namespaces where the data lives
  • You would also want to call SetServerInitCode in %SYSTEM.SQL, or the corresponding SetOption in %SYSTEM.SQL.Util, so as to build up the namespaces array.
  • My usage of the namespaces array is for demonstration purposes only, your application probably already has some array that defines where the CB global is defined.
  • If you want to only provide read access thru objects and SQL you could set the class Parameter READONLY=1;
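The SetServerInitCode piece might look something like the following sketch (the routine name MyAppInit and its BuildNamespaces entry point are hypothetical; the init code runs once when a connection is made and would populate the public namespaces array):

```objectscript
// Hypothetical: register init code that runs once per server connection
Do $SYSTEM.SQL.Util.SetOption("ServerInitCode", "Do BuildNamespaces^MyAppInit()", .tOldValue)
```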

Seems like uploading a file doesn't work.. here's the class definition.  This will work on Caché or IRIS, although on a Caché-based system you would have to change <Type>%Storage.SQL</Type> to the proper Caché type.


Class Mapping.Test1 Extends %Persistent [ SqlRowIdName = Id, StorageStrategy = SQLStorage ]
{

/// Namespace
Property NS As %String [ Required ];

/// Contact Row ID
Property RowID As %Library.String(SELECTIVITY = 1, TRUNCATE = 1) [ Required ];

Property Field1 As %String;

Property Field2 As %String;

Index RowIDBasedIDKeyIndex On (NS, RowID) [ IdKey ];

/// <Example>
/// Set tSC=##class(Mapping.Test1).Populate()
/// </Example>
ClassMethod Populate() As %Status
{
    #dim tSC 			As %Status=$$$OK
    #dim eException  	As %Exception.AbstractException
    try {
        Set tObject=..%New()
        Set tObject.NS="LAB"
        Set tObject.RowID=1
        $$$THROWONERROR(tSC,tObject.%Save())
        Write !,tObject.NS,": ",tObject.RowID
        
        Set tObject=..%New()
        Set tObject.NS="ARK"
        Set tObject.RowID=1
        $$$THROWONERROR(tSC,tObject.%Save())
        Write !,tObject.NS,": ",tObject.RowID
        
        &SQL(insert into Mapping.Test1 (NS,Rowid) values ('LAB',2))
        If SQLCODE $$$ThrowStatus(##class(%Exception.SQL).CreateFromSQLCODE(SQLCODE,%msg))
        
    }
    catch eException {
        Set tSC=eException.AsStatus()
    }
    Quit tSC
}

Storage SQLStorage
{
<SQLMap name="Map1">
<Data name="Field1">
<Delimiter>"^"</Delimiter>
<Piece>1</Piece>
</Data>
<Data name="Field2">
<Delimiter>"^"</Delimiter>
<Piece>2</Piece>
</Data>
<Global>namespaces</Global>
<Subscript name="1">
<Expression>{NS}</Expression>
</Subscript>
<Subscript name="2">
<Expression>1</Expression>
</Subscript>
<Subscript name="3">
<DataAccess>^[{L1}]CB({L2})</DataAccess>
<Expression>{RowID}</Expression>
</Subscript>
<Type>data</Type>
</SQLMap>
<StreamLocation>^Mapping.Test1S</StreamLocation>
<Type>%Storage.SQL</Type>
}

/// This callback method is invoked by the <METHOD>%New</METHOD> method to 
/// provide notification that a new instance of an object is being created.
/// 
/// <P>If this method returns an error then the object will not be created.
/// <p>It is passed the arguments provided in the %New call.
/// When customizing this method, override the arguments with whatever variables and types you expect to receive from %New().
/// For example, if you're going to call %New, passing 2 arguments, %OnNew's signature could be:
/// <p>Method %OnNew(dob as %Date = "", name as %Name = "") as %Status
/// If instead of returning a %Status code this returns an oref and this oref is a subclass of the current
/// class then this oref will be the one returned to the caller of %New method.
Method %OnNew() As %Status [ Private, PublicList = namespaces, ServerOnly = 1 ]
{
    
    #dim tSC		As %Status=$$$OK
    #dim eException	As %Exception.AbstractException
    Try {
        Set namespaces("LAB")=""
        Set namespaces("ARK")=""
    }
    Catch eException {
        Set tSC=eException.AsStatus()
    }
    Quit tSC
}

Parameter STORAGEDELIMITERCHECK = 1;

}

If you are describing your globals in a class and they do not use default storage, but rather SQL Storage (%Storage.SQL), you can define your storage to include extended references.  This is a technique that I have used in the past; how you loop thru the extended references is up to you.  In my case we took advantage of the ability to SetServerInitCode, which is called once when the connection is made to the server (there is an equivalent capability in DeepSee/Analytics).  In our ServerInitCode we populated an array of the different extended references we needed to access, and our storage map first looped thru this structure and then the actual global.

and I could be incorrect, as I don't speak for InterSystems, but I read this message as

  1. %ZEN.Report* won't be included in standard ISC kits in the future, i.e., the kits will be slightly smaller.
  2. You can always get this code using IPM
  3. As reported in the past, ZEN Reports are deprecated, which means no more enhancements, and maybe even no bug fixes.  Maybe (my guess) this means at some point %ZEN.Report* will become open source and we can contribute where needed... but again, that's purely a guess.

I'm a big user/developer of ZEN Reports, and it is important that I still have the ability to utilize them.  At the same time, it just might be the time to look at what LogiReport offers and see if it is, on the whole, a better reporting environment.

and just today I spotted https://blog.postman.com/introducing-the-postman-vs-code-extension/ .  This isn't fully formed, as they report

 We will continuously ship features and improvements—such as support for Postman Collections and environments—so be on the lookout. We value your input, so if you have any feedback on this beta version, please feel free to leave it in the comments. 

but it's just one more example of why, measured on the whole, VS Code really is a better environment IMHO.