Hi,

Using the new(ish) External Java Server and Gateway connection, this worked for me. I copied your class, refactored it for Apache PDFBox version 3.0.0, and then called it from the command line. I did receive some warnings that have nothing to do with PDFBox; they are related to the larger project that I placed this code into. The call worked just fine. I don't know exactly when the new external language server/gateway support was added, but it is documented in 2021.1: https://docs.intersystems.com/iris20211/csp/docbook/DocBook.UI.Page.cls?KEY=BEXTSERV_intro. I'm pasting the IRIS Session commands I used:

USER>set java = $system.external.getJavaGateway()

USER>do java.addToPath("<path to my project>/target/intersystems-demo-1.0-SNAPSHOT.jar")

USER>set text = java.invoke("demo.intersystems.utils.PdfToText","getText","/home/danp/Downloads/DM32.2-2017-00157-ambiguity-in-JSON-array-constructor.pdf")

USER>zw text
text="ISO/IEC JTC1/SC32 WG3:CMH-023"_$c(10)_"ANSI INCITS DM32.2-2017-00157"_$c(10)_"- 1 of 7 -"_$c(10)_"Title: Ambiguity in <JSON array constructor>"_$c(10)_"Author: <<snipped>>

I even have a nice way to return the text as an instance of %Stream.Object if that is interesting to you.
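If you're curious, that stream version is just a thin wrapper. A sketch, with hypothetical names, assuming the same gateway call as above:

ClassMethod GetTextAsStream(file As %String) As %Stream.Object
{
    set java = $system.external.getJavaGateway()
    set text = java.invoke("demo.intersystems.utils.PdfToText","getText",file)
    // copy the extracted text into a character stream
    set stream = ##class(%Stream.GlobalCharacter).%New()
    do stream.Write(text)
    return stream
}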

HTH,

Dan

Interesting question. There is no good option to manipulate the ObjectRegistry but you do have options to access stored data. My recommended option you've already taken off the table.

The FOREACH=ROW/OBJECT trigger is by far the best and easiest solution if you require access to the version of an object that is currently stored. There are several reasons why this is the best option, perhaps the most important being that ROW/OBJECT triggers are consistently applied between Objects and SQL. Of course, the same restrictions that apply to %OnBeforeSave also apply to triggers - we don't recommend modifying objects/rows from within them.
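If it helps, a minimal sketch of such a trigger (the names here are illustrative):

Trigger LogName [ Event = INSERT/UPDATE, Foreach = row/object, Time = AFTER ]
{
    // {name} is the value of the name column as it is being stored
    set ^demo.triggerlog($increment(^demo.triggerlog)) = {name}
}

Because Foreach = row/object, this fires whether the row is filed through SQL or through %Save().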

Another option is to use <property>GetStored. This may not work in every situation, but it does allow code to retrieve the value of a property directly from storage. I believe this is restricted to what we call "default storage" - that isn't entirely true, but it does hold for the common storage types (SQL Mapped storage being the other common type).

USER>zw ^demo.person.1(1)
^demo.person.1(1)=$lb("Doe, John","123 Main Street","Cambridge","MA","02142","[{""type"":""mobile"",""country_code"":""1"",""number"":""999-999-9999""},{""type"":""work"",""country_code"":""1"",""number"":""888-888-8888""}]")

USER>set person = ##class(demo.intersystems.Person).%OpenId(1)

USER>write person.name
Doe, John
USER>set person.name = "Richard, Maurice"

USER>write person.nameGetStored(person.%Id())
Doe, John
USER>write person.name
Richard, Maurice

Of course, you can always use SQL to retrieve stored values.

For the intrepid explorer, there is another option. This code isn't supported, as the %Load* APIs aren't supposed to be public. I'm sure most people have examined generated code and discovered these methods, so I'm not divulging any big secrets here. First the results, and then the code.

USER>set person = ##class(demo.intersystems.Person).%OpenId(1)

USER>set person.name = "Enono, William"

USER>set sperson = ##class(demo.intersystems.Person).GetStored(person.%Id())

USER>write sperson.name
Leavitt, Timothy
USER>write person.name
Enono, William

Fair warning - this code isn't supported, it is considered to be user-implemented code. I also didn't rigorously test this code. I added this class method to my demo.intersystems.Person class:

ClassMethod GetStored(id As %Integer) As demo.intersystems.Person
{
    try {
        set obj = ..%New()
        set cur = obj.%Concurrency
        set obj.%Concurrency = 0
        $$$THROWONERROR(sc,obj.%LoadInit(,,1))
        $$$THROWONERROR(sc, obj.%LoadData(id))
        set obj.%Concurrency = cur
        do obj.%SetModified(0)
    } catch e {
        set obj = $$$NULLOREF
    }
    return obj
}

The object referenced by the oref returned by this code is not assigned an id. If you were to save this object it would create a new stored object and it would be assigned its own ID. Exercise caution.

Property methods, or member methods in general, originate from three places: the property's memberType class, the declared type class (sometimes called the "datatype" class), and the containing class definition.

The memberType is simply the type of class member - property, index, query, method, parameter - all are types of members that can be defined in a class. What isn't well known is that there is a memberType class for several memberTypes (in the past, this was referred to as the property class). We'll focus on the property memberType here, but keep in mind that the concepts extend to indexes, queries, triggers, etc.

There are two categories of property methods. Methods inherited from the datatype class collectively provide the runtime for the datatype generated specifically for a property. These methods are classmethods, but they behave as if they are operating on an instance of the datatype class. That instance is passed as an argument when invoking the method or returned by the method. We think of these methods as running in a provided context.

The second category of property methods is those inherited from the memberType classes. Yes, I wrote classes - plural. That is because in addition to the memberType classes that are automatically inherited by each property, the class definition can specify additional classes as the value of the PropertyClass class keyword. To specify multiple classes simply enclose the list of classes in parentheses.

Class User.Special Extends %RegisteredObject [ PropertyClass = User.MyProperty ]
{

Property MyProperty As %String;
}

In this example, each property defined in User.Special will inherit methods implemented by User.MyProperty. Before we look at User.MyProperty we need to understand another class keyword - NoContext. There is also a NoContext method keyword. When used as a class keyword it applies to all methods that do not also specify NoContext.

The documentation for NoContext describes the code generation implications of using NoContext but does not explain what it means. Perhaps that is because the keyword name implies something other than its compile-time behavior. NoContext simply means that the method has access to the current instance. In other words, the member method does not provide its own context. What member methods do provide their own context? Datatype methods.

Why do we care about context? Simply put, datatype methods should not have the ability to mutate the object since they are intended to implement only the datatype behavior and can access only the provided value.

Class User.MyProperty [ NoContext ]
{

Method IsSpecial() As %Boolean [ CodeMode = generator ]
{
    set ivar = "i%"_$$$QN(%property)
    $$$GENERATE(" return $select(+"_ivar_"#2>0:1,1:0)")
}

}

This is a simple little example of a property method generator. When the Special class is compiled, it will add an IsSpecial property method to each property defined by the class.

MyPropertyIsSpecial() methodimpl {
 return $select(+i%MyProperty#2>0:1,1:0)
}

And when you instantiate Special you can invoke this method.

USER>set special = ##class(Special).%New()

USER>set special.MyProperty = "200"

USER>write special.MyPropertyIsSpecial()
0
USER>set special.MyProperty = 201

USER>write special.MyPropertyIsSpecial()
1

I submitted some feedback to documentation regarding the content you cited.

The line in question originated from something I wrote while implementing the %On*Finally set of methods. The "calling method" in this context refers to the method that invokes the %On*Finally() method. In this case, %Save() is the calling method.

%Save may or may not initiate a transaction, depending on a few factors. If %Save does initiate a transaction then that transaction is complete prior to the %OnSaveFinally() call.
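A sketch of how you might use it (the signature is from memory - check the %Library.Persistent class reference - and the audit global is made up):

ClassMethod %OnSaveFinally(oref As %Library.Persistent, status As %Status) [ ServerOnly = 1 ]
{
    // Runs after %Save() completes - any transaction %Save() started
    // has already been committed or rolled back at this point
    if $$$ISOK(status) {
        set ^demo.audit($increment(^demo.audit)) = oref.%Id()
    }
}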

%Library.ResultSet remains in the product for backward compatibility reasons, but there are better ways to execute class queries. Any class query can be projected as a table-valued function (TVF). A TVF can be executed as such if the class query also declares SQLPROC. A TVF can be included in the FROM item list, joined with other FROM items, ordered, and restricted, and a subset of its available columns can be selected. Here is a simple example from the Sample.Person class:

select id,name,dob from sample.SP_Sample_By_Name('Ad') order by dob desc

I populated Sample.Person with some generated data and ran the above statement:

ID Name DOB
855 Adams,Elvira X. 03/16/2021
1378 Adams,Ed L. 01/15/2018
477 Adams,Debra S. 10/01/2015
1341 Adam,Chad U. 10/20/2013
32 Adam,Dmitry N. 10/28/2010
1099 Adams,Pam Z. 10/20/1993
897 Adam,Joe Y. 02/23/1984
1469 Adam,Phyllis N. 04/20/1982
358 Adam,Liza H. 12/13/1980
1096 Adam,Belinda Z. 08/02/1975
1269 Adam,Charlotte P. 03/03/1974
1396 Adams,Robert E. 03/14/1973
1109 Adams,Quigley H. 01/01/1968
454 Adam,Amanda A. 01/22/1964
856 Adams,Lawrence A. 03/23/1961
1104 Adam,Stavros O. 02/24/1948
1179 Adam,Pam A. 05/16/1941
426 Adams,Brian M. 01/15/1928

18 row(s) affected

And you can also execute this using a dynamic statement:

USER>set result = $system.SQL.Execute("select id,name,dob from sample.SP_Sample_By_Name('Ad') order by dob desc")

USER>write result.%Next()
1
USER>write result.Name
Adams,Elvira X.
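For reference, the class query behind that TVF is declared along these lines (reconstructed from memory, so the actual Sample.Person source may differ slightly) - the SqlProc keyword is what makes it callable as SP_Sample_By_Name:

Query ByName(name As %String = "") As %SQLQuery [ SqlName = SP_Sample_By_Name, SqlProc ]
{
SELECT ID, Name, DOB FROM Sample.Person
WHERE (Name %STARTSWITH :name)
ORDER BY Name
}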

There is a project that will be available soon (don't ask me to define "soon") that will allow the Java programmer to load and compile sources from the local system into an IRIS Server. The IRIS Server does not need to be running on the same system where the files are located.

This example is slightly old as the current implementation of load() returns a list of items loaded and compile() returns a list of items compiled. There are load implementations that accept directories, individual files, any Java streamable, and also JAR files.

SourceLoader sourceLoader = new SourceLoader(connection);
Path path = Paths.get("path/to/samples/cls/Sample");
sourceLoader.load(path, null);
sourceLoader.compile(true);

Yes, of course "inverse" - sorry.

Persistent vs RegisteredObject - not a problem but you are calling a simple class method so we don't need any super class. I used this implementation for the IRIS Class:

Class Utils.CSW1JavaFunctions
{
    ClassMethod IrisReturn(user = "user", pass = "pass") As %Stream.GlobalBinary
    {
         try {
             set cswStream=##class(%Stream.GlobalBinary).%New()
             set cswReturn = {"user":(user), "pass":(pass) }
             do cswReturn.%ToJSON(cswStream)
             return cswStream
         } catch exc {
             write !,"Caught Exception on server: ", exc.AsSQLMessage()
         }
    }
}


And this is a crude hack at the Java code - the anonymous InputStream class could use more work but it does run for this simple example. I'll leave the rest of the InputStream coding to you.

package utils;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.intersystems.jdbc.*;

import java.io.*;
import java.sql.SQLException;


public class Reader {
    public static final String CACHE_CLASS_NAME = "Utils.CSW1JavaFunctions";

    public IRISConnection connection;
    public IRIS iris;

    public Reader(IRISConnection connection) throws SQLException {
        this.connection = connection;
        this.iris = IRIS.createIRIS(connection);
    }
    public static void main(String[] args) throws SQLException {
        IRISDataSource dataSource = new IRISDataSource();
        dataSource.setServerName("localhost");
        dataSource.setPortNumber(51776);
        dataSource.setDatabaseName("USER");
        dataSource.setUser("_SYSTEM");
        dataSource.setPassword("SYS");
        IRISConnection connection = (IRISConnection) dataSource.getConnection();
        Reader reader = new Reader(connection);
        try {
            JsonNode jsonNode = reader.execute("IrisReturn", "java", "jpass");
            System.out.println(jsonNode.toString());
        } catch (Exception exc) {
            exc.printStackTrace();
        }
    }

    public JsonNode execute(String method, Object... args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode jsonNode = null;
        try {
            IRISObject data  = (IRISObject) iris.classMethodObject(CACHE_CLASS_NAME, method, args[0], args[1]);
            InputStream is = new InputStream() {
                byte[] buffer;
                int pos = 0;
                int len = -1;

                @Override
                public int read() throws IOException {
                    if (pos >= len) {
                        getBuffer();
                    }
                    if (len == -1) {
                      return -1;
                    }
                    return buffer[pos++];
                }

                void getBuffer() {
                    pos = 0;
                    IRISReference readLen = new IRISReference(3200);
                    String string = (String) data.invoke("Read", readLen);
                    if (readLen.getLong() == -1) {
                        buffer = null;
                        len = -1;
                    } else {
                        buffer = string.getBytes();
                        len = buffer.length;
                    }
                }
            };

            jsonNode = (JsonNode) mapper.readTree(is);
            return jsonNode;
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }
}

Running this produces this output:

/usr/lib/jvm/adoptopenjdk-11-hotspot-amd64/bin/java -javaagent:/home/...
{"user":"java","pass":"jpass"}

Process finished with exit code 0

Let me focus on the last two items in your list. IRIS Native for Java, Node.js, DotNet, Python - these are all consistent implementations of the IRIS Native API and the communication is over TCP or shared memory. IRIS Native for ObjectScript is just another - consistent - implementation of the IRIS Native API.

To get a connection to an IRIS server, the command is similar across all implementations of IRIS Native API:

set connection = ##class(%Net.DB.DataSource).CreateConnection(host, port, namespace, user, pwd)

Once you have a connection, you can get an IRIS object.

set iris = connection.CreateIris()

and from an iris object, you can invoke class methods, code implemented in routines, set/get globals, and so on.
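Putting it together, a minimal sketch (the host, port, and credentials are placeholders, and the method names are as I recall them from the Native API reference, so verify against your version):

set connection = ##class(%Net.DB.DataSource).CreateConnection("localhost", 1972, "USER", "_SYSTEM", "SYS")
set iris = connection.CreateIris()
// invoke a class method on the connected server
write iris.ClassMethodValue("%SYSTEM.Version", "GetVersion")
// set and read a global on the connected server
do iris.Set("hello world", "demo", 1)
write iris.Get("demo", 1)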

Timothy Leavitt's excellent response notwithstanding, this is supported. I do fully embrace the option presented by Timothy Leavitt. The structures I demonstrate here actually produce a model very close to his, and the index, since it includes both KEYS and ELEMENTS, is projected to the child table projected from the addresses array. Of course, reversing KEYS and ELEMENTS in the index key specification would make the index more useful for searching on city name.

This definition:

Property addresses As array Of Sample.Address;
Index xA On (addresses(KEYS), addresses(ELEMENTS).City)

Not only works but the filing code also recognizes the ability to fold both properties in the index into the same iterator:

    If ('pIndexHandle)||($Ascii($Get(pIndexHandle("Sample.Person")),5)=1) {
        set bsv26N1 = $Order(^Sample.PersonD(id,"addresses",""))
        While bsv26N1 '= "" {
            Set bsv0N8=$zu(28,##class(Sample.Address).%Open($select(^Sample.PersonD(id,"addresses",bsv26N1)="":"",1:$listbuild(^Sample.PersonD(id,"addresses",bsv26N1)_""))).City,7,32768)
            Set ^Sample.PersonI("xA",bsv26N1,bsv0N8,id)=$listget(bsv0N2,1)
            set bsv26N1 = $Order(^Sample.PersonD(id,"addresses",bsv26N1))
        }
    }

And a quick test shows this structure is produced:

panther.local:XDBC:USER>d ##class(Sample.Person).Populate(10) 

panther.local:XDBC:USER>zw ^Sample.PersonI("xA")
^Sample.PersonI("xA","A886"," GANSEVOORT",3)=""
^Sample.PersonI("xA","B350"," MIAMI",6)=""
^Sample.PersonI("xA","B748"," NEWTON",3)=""
^Sample.PersonI("xA","C135"," UKIAH",9)=""
^Sample.PersonI("xA","C261"," ALBANY",1)=""
^Sample.PersonI("xA","C883"," DENVER",2)=""
^Sample.PersonI("xA","D162"," ST LOUIS",4)=""

And this has been in the product since maybe 2010. I couldn't find the original release note for this but I did find a change that fixed a problem when consolidating the iterators and that fix is present in 2010.2.

You didn't specify a version so I'll use IRIS 2020.1. This should also work in 2019 versions. My example uses a Table-Valued Function. Any query can potentially be used as a table valued function. %SQL.CustomQuery simply generates all the infrastructure for you. Custom query works quite simply and is very similar to an embedded SQL cursor - you implement Open, Fetch and Close yourself (%OpenCursor, %FetchCursor, %CloseCursor). You define properties that correspond to columns in your row. You define private properties to hold the cursor state. Open typically binds the source data to the instance of your custom query and close releases any resources. Fetch simply advances the cursor to the next row, populating the properties that represent columns in your table. Look at the class doc for %SQL.CustomQuery for more details.

select * from TVF_TEST('[["one",2,"three"],["first",0,"third"]]')

col1   col2  col3
one    2     three
first  0     third

The class is simple:

Class User.TVF Extends %SQL.CustomQuery [ Language = objectscript ]
{

Parameter SQLNAME As String = "TVF_TEST";

Property data As %Library.DynamicArray [ Private ];

Property iterator As %Iterator.Array [ Private ];

Property col1 As %String;

Property col2 As %Integer;

Property col3 As %String;

Method %OpenCursor(data As %Library.DynamicArray) [ Private ]
{
    if $isobject(data) {
        set ..data = data
    } else {
        set ..data = [].%FromJSON(data)
    }
    set ..iterator = ..data.%GetIterator()
}

Method %FetchCursor() As %Library.Integer
{
    if ..iterator.%GetNext(.key,.value) {
        set ..col1 = value.%Get(0)
        set ..col2 = value.%Get(1)
        set ..col3 = value.%Get(2)
        return 1
    } else {
        set ..col1 = ""
        set ..col2 = ""
        set ..col3 = ""
    }
    return 0
}

}

In reviewing the storage definition you present above, I realize that the list of properties has little to do with the SQL Map definitions it includes. So I will continue with my example, using the properties your example defines (they are present in the storage definition only for the purpose of computing statistics and other storage specific metadata).

Without altering the class definition - other than adding a couple of keywords to the class header to make life a bit simpler, converting the storage to SQL Mapped Storage (%CacheSQLStorage in your version), and adding %Populate, our test data generator - I came up with this example data storage:

USER>d ##class(Patient).Populate(10)

USER>zw ^User.PatientD
^User.PatientD=10
^User.PatientD(1)=":J5201:Z5211:58985:Isaacs,Michael A.:501759566:H2536:A8788:377-96-6394:J7857:G3137:R4692:42233"
^User.PatientD(2)=":S4498:Z4308:62077:Cooke,Mario C.:832248338:Z9867:V1891:859-50-1555:I9221:H3938:W7632:25909"
^User.PatientD(3)=":J8016:S3895:53889:Quilty,Nellie J.:150706592:J3845:B6691:998-18-8294:B482:D5191:R7961:59453"
^User.PatientD(4)=":H8837:T289:43380:Quince,Samantha U.:115923507:F6623:S6188:407-88-9788:R8672:Y1441:A9501:60822"
^User.PatientD(5)=":N1854:W4497:55465:North,Susan A.:784860058:Z8257:E2831:187-93-1523:T3167:U4316:A9955:34073"
^User.PatientD(6)=":A4324:Z7427:61318:Lopez,Will F.:133068033:K7869:R5254:302-73-2490:B2970:F1889:P8421:26050"
^User.PatientD(7)=":N4346:Y671:65137:Moon,Milhouse Z.:459189579:E6160:R3362:507-37-8752:L8401:R7909:F4245:60716"
^User.PatientD(8)=":N4328:V1682:47890:Zweifelhofer,Terry V.:360767849:B8856:E145:466-23-4632:K4269:X2839:S1937:49318"
^User.PatientD(9)=":M7370:B6086:49261:Rotterman,Lawrence S.:353537548:S6329:B9164:874-34-2035:D4140:U7504:N1456:66241"
^User.PatientD(10)=":W995:B5004:50613:Ironhorse,Barb I.:809117324:S6518:V1966:873-92-8543:Z9470:H6976:G2259:40210"

USER>zw ^User.PatientI
^User.PatientI("A"," A4324",6)=""
^User.PatientI("A"," H8837",4)=""
^User.PatientI("A"," J5201",1)=""
^User.PatientI("A"," J8016",3)=""
^User.PatientI("A"," M7370",9)=""
^User.PatientI("A"," N1854",5)=""
^User.PatientI("A"," N4328",8)=""
^User.PatientI("A"," N4346",7)=""
^User.PatientI("A"," S4498",2)=""
^User.PatientI("A"," W995",10)=""

And this is the full definition of the class - with the index map defined as a new SQL Map Definition in the storage. I highlighted a few things to show the connections between the logical class definition and the corresponding items in the storage definition.

Class User.Patient Extends (%Persistent, %Populate) [ SqlRowIdName = Patient, StorageStrategy = SQLStorage ]
{

Index AcctIndex On accountNo;

Property accountNo As %String;

Property citySt As %String;

Property dob As %Date;

Property name As %String;

Property patientNo As %Integer;

Property rel2Guar As %String;

Property sex As %String;

Property ssn As %String;

Property street1 As %String;

Property street2 As %String;

Property telephone As %String;

Property zip As %String;

Storage SQLStorage
{
<SqlIdExpression>$i(^User.PatientD)</SqlIdExpression>
<SQLMap name="DataMasterMap">
<Data name="accountNo">
<Delimiter>":"</Delimiter>
<Piece>2</Piece>
</Data>
<Data name="citySt">
<Delimiter>":"</Delimiter>
<Piece>3</Piece>
</Data>
<Data name="dob">
<Delimiter>":"</Delimiter>
<Piece>4</Piece>
</Data>
<Data name="name">
<Delimiter>":"</Delimiter>
<Piece>5</Piece>
</Data>
<Data name="patientNo">
<Delimiter>":"</Delimiter>
<Piece>6</Piece>
</Data>
<Data name="rel2Guar">
<Delimiter>":"</Delimiter>
<Piece>7</Piece>
</Data>
<Data name="sex">
<Delimiter>":"</Delimiter>
<Piece>8</Piece>
</Data>
<Data name="ssn">
<Delimiter>":"</Delimiter>
<Piece>9</Piece>
</Data>
<Data name="street1">
<Delimiter>":"</Delimiter>
<Piece>10</Piece>
</Data>
<Data name="street2">
<Delimiter>":"</Delimiter>
<Piece>11</Piece>
</Data>
<Data name="telephone">
<Delimiter>":"</Delimiter>
<Piece>12</Piece>
</Data>
<Data name="zip">
<Delimiter>":"</Delimiter>
<Piece>13</Piece>
</Data>
<Global>^User.PatientD</Global>
<Structure>list</Structure>
<Subscript name="1">
<Expression>{Patient}</Expression>
</Subscript>
<Type>data</Type>
</SQLMap>
<SQLMap name="IndexNName">
<BlockCount>-4</BlockCount>
<Global>^User.PatientI</Global>
<Structure>list</Structure>
<Subscript name="1">
<Expression>"A"</Expression>
</Subscript>
<Subscript name="2">
<Expression>$$SQLUPPER({accountNo})</Expression>
</Subscript>
<Subscript name="3">
<Expression>{Patient}</Expression>
</Subscript>
<Type>index</Type>
</SQLMap>
<StreamLocation>^User.PatientS</StreamLocation>
<Type>%Storage.SQL</Type>
}

}

and this is how it appears in the SQL Manager display of the indexes:

Fair enough. Let's make this more fun! First of all, we need to understand the IDKEY index. InterSystems is somewhat unique in the market with this one. Most SQL vendors support indexes and most support "clustering" data with indexes to further improve the performance of certain queries. Think of our IDKEY index as the "Master Data Index" (sometimes referred to as the "Master Map" or "Data Map"). The "Master Data Index" (MDI) is simply the index that has every column "clustered" with it - all data values are present in the IDKEY index structure. Every table (persistent class) has an IDKEY index and every column (property in Object terms) is stored with it.

Any other index - not the MDI - is simply there to improve query performance. Each non-MDI index has an index key and perhaps some data clustered with it. InterSystems is not different from most other SQL implementations. You can create a new table and add indexes to it using standard DDL statements. We, like most vendors, have some non-standard syntax to support features unique to our product but it is mostly familiar syntax. You can also define a class directly using one of our IDE's or your favorite editor. And all of this will just work - no special effort required and your experience will be similar to that of using any other SQL. 
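For example, creating a table and an index entirely through standard DDL (the table and column names here are made up), issued as dynamic SQL from a terminal session:

USER>do $system.SQL.Execute("CREATE TABLE Demo.Patient (accountNo VARCHAR(50), name VARCHAR(100))")

USER>do $system.SQL.Execute("CREATE INDEX AcctIndex ON Demo.Patient (accountNo)")

The projected class gets default storage, the index is maintained automatically, and nothing further is required.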

Your example is different. We support low-level storage mapping to legacy data structures. There are two additional storage classes that support this, one of which is used by your example - %CacheSQLStorage. This storage class was introduced specifically to provide a bridge from legacy applications to Caché/InterSystems IRIS. Mapping legacy data structures can be a complicated task and it can be confusing. And this is where you are - you have an application that is using our support for bridging existing data structures to our DBMS.

And none of that helps you with your problem! So let's try this. Let's define your class without using the legacy data mapping. I'll choose some simple data types just to illustrate.

Class User.Patient Extends %Persistent
{

Index AcctIndex On accountNo;

Property accountNo As %String;

Property citySt As %String;

Property dob As %Date;

Property name As %String;

Property patientNo As %Integer;

Property rel2Guar As %String;

Property sex As %String;

Property ssn As %String;

Property street1 As %String;

Property street2 As %String;

Property telephone As %String;

Property zip As %String;

}

That's it - no special work involved, just a quick and easy class definition with an index on accountNo defined. This is what it looks like in the SQL Manager:

This is an example without the legacy global mapping. %BuildIndices is automatically generated and there is nothing else you would need to do to define, populate and maintain this index - other than to build it initially if it was added to a previously populated extent.
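If you do add an index to a previously populated extent, building it is a single call - a sketch, assuming the class defined above:

USER>do ##class(User.Patient).%BuildIndices($listbuild("AcctIndex"))

Passing the list of index names restricts the build to just the new index; omit the argument to rebuild all indexes.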

In my next comment (this one is getting long), I'll convert the default storage structure into a mapped storage structure using %CacheSQLStorage.

Hi Keith,

%BuildIndices/%PurgeIndices are generated for any class that primarily extends %Library.Persistent. When such a class defines one or more indexes and that class uses SQL Storage then the class must also define an index map in the storage definition. Without that the index simply won't be maintained - it doesn't exist as far as SQL is concerned. Your display in the original posting is a view of the SQL table definition. The SQL table definition is a projection of the class definition to SQL.

Your first example shows an index that is defined as the IDKEY index - that is the index definition that corresponds to the SQL Master Map (data map). I'm not surprised that it doesn't show up in the indexes display of the SQL Manager.

If WRC isn't an option then a good place to start is to work through just one of your classes. If you can choose one simple example perhaps we can suggest corrections for you.

-Dan

There is a more general way to describe this situation. First consider what is actually happening. When you open an Object (%Open/%OpenId) you are copying data from persistent storage into memory. Then some other actor is modifying the data in persistent storage so the in-memory data is no longer the same as the data in persistent storage. If you were to then copy the in-memory data back to persistent storage (%Save), assuming the same identifier is used, then the modifications from the other actor would be overwritten.

There are two ways that Objects/SQL Persistence will allow this to happen (I call the action performed by the other actor an underwrite). First is the case you present with your example (same process performs underwrite and overwrite). Second is when there are no or insufficient concurrency controls in place. The second case is easily resolved simply by employing proper concurrency control mechanisms to prevent under/over writes. The first case is more sinister because normal pessimistic concurrency mechanisms won't guard against this case. This case occurs when a process holds data in memory and then updates the data in persistent storage either using direct global set/kill or by using SQL. Optimistic concurrency controls will guard against only the SQL update case, direct global set/kill will go undetected unless those set/kill actions also update the version number of the data.

%Reload() was not intended to solve this problem. %Reload() is automatically invoked when the concurrency setting of an object in memory is updated from no-retained lock (value less than 3) to a retained lock (value 3 or 4). In this case, data is automatically sync'ed with data on disk to ensure that the now-lock protected data is the same in-memory and in persistent storage. However, %Reload() can be invoked directly to perform this same action should user code somehow detect it is necessary. 

If the reason you want to %Reload() the data is because the persistent storage was modified by SQL then I have the same question as Fab asked above. Why would you use SQL to update the data? I can think of a few reasons why you might want to do that but there are other ways to accomplish the same task without having to worry about the state of data in memory vs. the state of data in persistent storage.

If you were to use optimistic concurrency controls (VERSIONPROPERTY) then %Save()/UPDATE will fail if the version numbers don't match and then the user code can go into a state where conflicts can be resolved.
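A minimal sketch of enabling optimistic concurrency (the class and property names are arbitrary):

Class Demo.Versioned Extends %Persistent
{

Parameter VERSIONPROPERTY = "ver";

Property ver As %Integer [ InitialExpression = 0 ];

Property name As %String;

}

With this in place, each update increments ver, and a %Save() or UPDATE based on a stale version number fails with a concurrency error rather than silently overwriting the stored row.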

There is an interesting dynamic SQL feature that will guard against the case where some objects may be in memory and the user code needs to SQL to update data. I am happy to provide an example of this technique if someone is interested.

-Dan

Another option that abstracts the caller completely from the quoting requirements is to use a parameter. Parameters in Dynamic SQL are positional. Also, keep in mind that literals are replaced automatically in dynamic SQL statements so using a parameter for this will not add any overhead to the processing.

  set statement = ##class(%SQL.Statement).%New()
  do statement.prepare("insert into Stats(Country) VALUES (?)")
  set result = statement.execute(CountryId)

Keep in mind that the lowercase prepare and execute methods work like their %Prepare/%Execute counterparts, but they throw exceptions instead of reporting errors through the %Status interface.
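So a typical calling pattern with the lowercase methods is simply:

  try {
      set statement = ##class(%SQL.Statement).%New()
      do statement.prepare("insert into Stats(Country) VALUES (?)")
      set result = statement.execute(CountryId)
  } catch exc {
      // the thrown exception carries the SQL error details
      write !,"SQL error: ",exc.AsSQLMessage()
  }

Depending on the kind of error, you may still want to check result.%SQLCODE after execute as well.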