System-assigned IDs are projected to SQL as integers, as previously mentioned. The name of the column projected to SQL from a system-assigned ID is 'ID' by default, but it isn't guaranteed to always be 'ID'.

As for controlling how system-assigned IDs are defined - both name and type - I suggest using IDENTITY. I think your code will be happier. You can even name it 'ID' if you wish.

Class User.Person Extends (%Persistent, %Populate)
{

Property ID As %BigInt [ Identity ];

Property Name As %String;

}

And a sample run:

USER>set ^User.PersonD = 5983658923640                                                       

USER>d ##class(Person).Populate(50)                                                          

USER>do $system.SQL.Execute("select id,name from person where id = 5983658923646").%Display()

ID Name

5983658923646 Young,Imelda B.


1 Rows(s) Affected

The reason it works when you specify a different dialect is that the type mappings differ between dialects.

Amir,

If you want the results of your query to be usable by the outside world then you can easily do this by generating a custom query (subclass %SQL.CustomQuery), then compiling and executing it. I probably have some examples of doing this somewhere. %SQL.CustomQuery is a bit more convenient than defining a class query - it accomplishes basically the same thing. %SQL.CustomQuery will generate all of the relevant metadata from your property definitions and you do not have to be concerned with ROWSPEC, etc. It is fully compatible with SQL and can even be used as a table-valued function (TVF) in other queries.
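Here is a minimal sketch of the pattern - the names (User.NumbersQuery, numbers_up_to, Value, Square) are made up for illustration. The essentials are: public properties become the result columns, private properties hold cursor state, and you implement %OpenCursor/%FetchCursor:

Class User.NumbersQuery Extends %SQL.CustomQuery
{

/// SQL name used when invoking this query as a table-valued function
Parameter SQLNAME = "numbers_up_to";

/// Public properties become the result set columns
Property Value As %Integer;

Property Square As %Integer;

/// Private properties hold cursor state between fetches
Property maxValue As %Integer [ Private ];

Property current As %Integer [ Private ];

Method %OpenCursor(max As %Integer) [ Private ]
{
    set ..maxValue = max
    set ..current = 0
}

Method %FetchCursor() As %Library.Integer
{
    if ..current < ..maxValue {
        set ..current = ..current + 1
        set ..Value = ..current
        set ..Square = ..current * ..current
        return 1
    }
    return 0
}

}

Once compiled, something like select Value, Square from numbers_up_to(3) should work as a TVF, with the exact schema qualification depending on your package-to-schema mapping.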

-Dan

Be careful. %Extends returns 1 or 0, but it checks the entire SUPER value of your class, not just the primary superclass. Take a simple class that extends both MyLibrary.ParentClass and %Library.Populate - %Extends will return 1 (true) for both MyLibrary.ParentClass and %Library.Populate.

There is another method - %IsA(superclass) - that returns 1 (true) only if the superclass is the primary superclass (forms a %IsA() relationship with your current class).
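A quick illustration (a sketch - assume MyLibrary.ChildClass is defined as Class MyLibrary.ChildClass Extends (MyLibrary.ParentClass, %Library.Populate)):

USER>write ##class(MyLibrary.ChildClass).%Extends("MyLibrary.ParentClass")

1

USER>write ##class(MyLibrary.ChildClass).%Extends("%Library.Populate")

1

USER>write ##class(MyLibrary.ChildClass).%IsA("MyLibrary.ParentClass")

1

USER>write ##class(MyLibrary.ChildClass).%IsA("%Library.Populate")

0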

Take a look at the %SQL.Util.Procedures class - specifically the CSVTOCLASS method. The interface isn't very nice and it isn't intuitive but it will do what you want - and more. You can invoke this directly or you can execute it as an SQL procedure. I worked up a very quick example using the data you included - I just created a simple file. In the file you can replace the column headers with a ROWTYPE (SQL standard defines this thing) or you can pass the ROWTYPE as an argument (that is what I did here).

USER>do ##class(%SQL.Util.Procedures).CSVTOCLASS(,"Name VARCHAR(50),Acc INT,Div INT","/Users/danp/projects/csvdemo/data.csv",,,1,"User.Customers")

USER>zw ^User.CustomersD

^User.CustomersD=3

^User.CustomersD(1)=$lb("","Eric",1234,567)

^User.CustomersD(2)=$lb("","John",1235,987)

^User.CustomersD(3)=$lb("","Peter",3214,879)

Alternatively, you can use the CSV() method to simply return a result set.

USER>do ##class(%SQL.Util.Procedures).CSV(,"Name VARCHAR(50),Acc INT,Div INT","/Users/danp/projects/csvdemo/data.csv")    



USER>set result = %sqlcontext.%NextResult()



USER>w result.%Next()

1

USER>do result.%Print()

Name Acc division



USER>write result.%Next()

1

USER>write result.Name

Eric

USER>write result.Acc

1234

USER>write result.Div

567


-Dan

Sorry! I didn't mean the "we should never" comment to refer to how you reported the problem!! Not at all!

I wrote the code that is reporting the <FUNCTION> error. What I meant was that my code is wrong - there is a bug in my code and this is a problem that I need to address! (I think I have already addressed it in a later version.)

I am sorry that I wasn't more clear about this. I do recommend that you open an issue through the WRC however. That way you can be updated as to the resolution of the issue and we can use it internally to track the resolution.

We should never report an error in this way. The Extent Manager can hold stale extent definitions that, over time, can become a bit unwieldy but <FUNCTION> errors are certainly never normal. All registered extents can be rebuilt from compiled class information. I agree with Kyle in that this should be brought to the attention of support so we can do a proper analysis of why this is happening.

As for the "unwieldy" comment, look into the /deleteextent qualifier. This can be optionally specified when deleting a class definition. It causes the Extent Manager information (the registered extent) to be deleted along with the class definition. It helps keep the Extent Manager a bit cleaner. It also causes the compiled class's extent (the data!!) to be deleted so be careful.

There is a more general way to describe this situation. First consider what is actually happening. When you open an Object (%Open/%OpenId) you are copying data from persistent storage into memory. Then some other actor is modifying the data in persistent storage so the in-memory data is no longer the same as the data in persistent storage. If you were to then copy the in-memory data back to persistent storage (%Save), assuming the same identifier is used, then the modifications from the other actor would be overwritten.

There are two ways that Objects/SQL Persistence will allow this to happen (I call the action performed by the other actor an underwrite). First is the case you present with your example (the same process performs both the underwrite and the overwrite). Second is when there are no, or insufficient, concurrency controls in place. The second case is easily resolved simply by employing proper concurrency control mechanisms to prevent under/over writes. The first case is more sinister because normal pessimistic concurrency mechanisms won't guard against it. This case occurs when a process holds data in memory and then updates the data in persistent storage, either using direct global set/kill or by using SQL. Optimistic concurrency controls will guard against only the SQL update case; direct global set/kill will go undetected unless those set/kill actions also update the version number of the data.
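To make the first case concrete, here is a sketch reusing the User.Person class from the earlier example (id is a placeholder for the id of an existing row):

  // copy the stored row into memory (default concurrency, no retained lock)
  set person = ##class(User.Person).%OpenId(id)

  // the same process then modifies persistent storage, here via dynamic SQL
  do $system.SQL.Execute("update Person set Name = 'Changed,Name' where ID = "_id)

  // saving the stale in-memory object writes the old values back, silently discarding the UPDATE
  do person.%Save()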

%Reload() was not intended to solve this problem. %Reload() is automatically invoked when the concurrency setting of an object in memory is updated from no-retained lock (value less than 3) to a retained lock (value 3 or 4). In this case, data is automatically sync'ed with data on disk to ensure that the now-lock protected data is the same in-memory and in persistent storage. However, %Reload() can be invoked directly to perform this same action should user code somehow detect it is necessary. 

If the reason you want to %Reload() the data is that the persistent storage was modified by SQL, then I have the same question as Fab asked above. Why would you use SQL to update the data? I can think of a few reasons why you might want to do that but there are other ways to accomplish the same task without having to worry about the state of data in memory vs. the state of data in persistent storage.

If you were to use optimistic concurrency controls (VERSIONPROPERTY) then %Save()/UPDATE will fail if the version numbers don't match and then the user code can go into a state where conflicts can be resolved.
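A minimal sketch of what that looks like (the class and property names here - User.Account, RowVersion, Balance - are just placeholders):

Class User.Account Extends %Persistent
{

/// VERSIONPROPERTY names the integer property the system maintains for optimistic concurrency checks
Parameter VERSIONPROPERTY = "RowVersion";

Property RowVersion As %Integer [ InitialExpression = 1 ];

Property Balance As %Numeric;

}

With this in place, if another actor updates the row between your %OpenId() and your %Save(), the %Save() reports a concurrency failure instead of silently overwriting the newer data, and your code can then decide how to resolve the conflict.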

There is an interesting dynamic SQL feature that will guard against the case where some objects may be in memory and the user code needs to use SQL to update data. I am happy to provide an example of this technique if someone is interested.

-Dan

Another option that abstracts the caller completely from the quoting requirements is to use a parameter. Parameters in Dynamic SQL are positional. Also, keep in mind that literals are replaced automatically in dynamic SQL statements so using a parameter for this will not add any overhead to the processing.

  set statement = ##class(%SQL.Statement).%New()

  do statement.prepare("insert into Stats(Country) VALUES (?)")

  set result = statement.execute(CountryId)

Keep in mind the lowercase form of prepare and execute work as their %Prepare/%Execute counterparts but they throw exceptions instead of using %Status interfaces.
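So a typical call site wraps them in try/catch - a sketch:

  try {
      set statement = ##class(%SQL.Statement).%New()
      do statement.prepare("insert into Stats(Country) VALUES (?)")
      set result = statement.execute(CountryId)
  } catch exception {
      // exception is a %Exception.AbstractException; log it, report it, or rethrow as appropriate
      write exception.DisplayString(), !
  }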

If you have flexibility with the schema definition then there are certainly other models that would make a query such as this simpler.

With an array collection that is stored as a serialized list you will have to either write a procedure, define some computed properties, or use some of the InterSystems collection extensions that may not work very well with collections of objects.

Our projection of serial types (classes that extend %SerialObject) uses a "flattening model" and we don't use that for serial collections stored as "listnode" structures. We do use it in the specific case of an array of serial objects stored using a "subnode" structure - the default.

I know this all sounds a bit scary but it doesn't need to be. When you get back to this we can walk through the options.

If the default array projection for PET.ImageStudy.Injection is used then there will be an SQL table projected from the array of objects. By default, the name of this table will be PET.ImageStudy_Injection. This projected table will define a column referencing the PET.ImageStudy table. I believe this column will be named ImageStudy. There will also be a column named Injection_RadioTracer in this table and that column will be a reference to the RadioTracer table.

If the array, Injection, in the PET.ImageStudy class is not stored using defaults then this becomes a more complex problem - still solvable but more difficult. 

Let me know if array storage defaults are used and then I can help compose a query - or perhaps you can do it with the information I provided above. You can verify this information using the System Explorer->SQL to view the table definitions.
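Assuming the default projection described above (and the column names I guessed at - ImageStudy and Injection_RadioTracer - which you should verify in System Explorer->SQL), a query along these lines should work; radioTracerId is a placeholder for the id of the RadioTracer row you are interested in:

  set sql = "select study.%ID as StudyId, injection.Injection_RadioTracer"
  set sql = sql_" from PET.ImageStudy study"
  set sql = sql_" join PET.ImageStudy_Injection injection on injection.ImageStudy = study.%ID"
  set sql = sql_" where injection.Injection_RadioTracer = ?"
  set statement = ##class(%SQL.Statement).%New()
  do statement.prepare(sql)
  set result = statement.execute(radioTracerId)
  do result.%Display()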

-Dan

Have you considered using $xecute?

Class User.Exec [ Abstract ]
{

ClassMethod test(STATUS = 1) As %String
{
set greeting = $xecute("(STATUS) { return $S(STATUS:""HELLO"", 1:""GOODBYE"") }",STATUS)
return greeting
}

}

and here is this code in action:

PANTHER:XEP:DEV>w ##class(Exec).test()

HELLO

PANTHER:XEP:DEV>w ##class(Exec).test(0)

GOODBYE

As you have already discovered, there is no opportunity for a user to change the collection type class. That class is determined by the compiler when processing the LIST and ARRAY keywords. This assignment, indicated by the compile-only keyword RUNTIMETYPE, occurs during inheritance resolution and it cannot be overridden by the user. Your solution of coercing the RUNTIMETYPE using a method generator is not completely correct even though the runtime behavior seems correct. The problem with using the method generator is that it runs late in the compile cycle, so the modification you make will not be processed by compiler actions that have already completed by that time. Your changes - simple addition of new methods - will probably work as you expect as long as those methods don't require any additional compiler processing.

A user has only two ways to add methods to a class member - property in this case. The first and easiest is through the use of the declared type class. Internally this is cPROPtype (you have already discovered this I'm sure ;) ). Methods inherited from the datatype class (actually these are "datatype generator" classes) are combined with the property to produce the datatype implementation. These methods are typically not instance methods of the containing class but one could say they are instance methods of the generated datatype.

The second way for a user to inject methods into a class member runtime is by overriding the member super class. In the old days this was referred to as the property class but that definition is expanded to include other member types such as query, index, and so on. Methods inherited by a member from the member supertype class are typically instance methods of the containing class. For a property these methods include Get and Set. There are two keywords that a user can define - MEMBERSUPER and PROPERTYCLASS.

Both mechanisms for adding methods to a member's runtime produce what we call "composite methods". That doesn't really mean anything other than that the name of the method is composed of the member name and the inherited method name. There is an internal difference but it has little impact on runtime behavior. Composite methods look very much like normal methods but sometimes confuse users because it seems there should be a dot in the middle! For example, direct invocation of the Name property's Set method is

    do oref.NameSet(value)

This could be confusing as it seems logical to use

    do oref.Name.Set(value)

It is possible for a user to define a property super class containing method generators that will produce composite member methods.  This is not a simple process but it can be done. If this interests you then I can provide more details.

What I would recommend is an instantiable helper class. You can construct an instance of the helper class, passing in the collection instance. Methods in the helper class then operate on that instance. This is the model we use for iterators operating on instances of %Library.DynamicAbstractObject. 
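A rough sketch of that shape (the class name, property, and CountOf method here are all made up - the point is only that the helper is constructed around a collection instance and its methods then operate on that instance):

Class User.ListHelper Extends %RegisteredObject
{

/// The collection instance this helper operates on
Property source As %Collection.AbstractList [ Private ];

Method %OnNew(source As %Collection.AbstractList) As %Status
{
    set ..source = source
    quit $$$OK
}

/// Example operation: count the elements equal to a given value
Method CountOf(value As %String) As %Integer
{
    set total = 0
    for i = 1:1:..source.Count() {
        if ..source.GetAt(i) = value { set total = total + 1 }
    }
    quit total
}

}

Usage would then look something like set helper = ##class(User.ListHelper).%New(oref.SomeListProperty), after which the helper's methods do the work without the collection class itself having to carry them.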

I often see user classes that extend either %RegisteredObject or %Persistent but implement only class methods, with no properties at all. This produces larger-than-needed runtimes.

For classes that implement only classmethods (static) there is no reason to inherit an instantiable interface such as that provided by %RegisteredObject.

It isn't necessary but it is good practice to define such classes to be abstract.

Using a simple helper class such as the one used by the various $system.<class> classes implemented by ISC, it is possible to provide help at the command prompt for all methods implemented in the class.
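A small sketch of what that looks like in practice - no instantiable superclass, marked Abstract, class methods only (the names are made up):

/// Utility methods only - nothing instantiable here, so no %RegisteredObject and the class is Abstract
Class User.StringUtils [ Abstract ]
{

ClassMethod Trim(value As %String) As %String
{
    quit $zstrip(value, "<>W")
}

ClassMethod IsBlank(value As %String) As %Boolean
{
    quit $zstrip(value, "<>W") = ""
}

}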

I tend to lean toward classes-only but I do have requirements that can only be met by using routines. I'm not a fan of the ##class() syntax and, outside of instantiation, there aren't very good alternatives to that syntax. 

At one time - in the pre-2.0 days of Caché Objects - we did discuss this but decided to settle on the implementation we currently have. The landscape is much more complex these days and there is a greater chance of a composite method name collision or parameter name collision. Not too long ago, I had to redesign the index constraint projection (primary key, unique) and referential constraint projection (foreign key) because we had name collisions.