This should work. I assume the code you included resides in a routine named 'XJSON' and in a function named 'fromFile'? I can guess that the class referenced is %DynamicAbstractObject and that class should certainly exist in versions 2016.2 and later. In version 2016.1, this class was known as %Library.AbstractObject.

In 2016.1, the JSON syntax was much different. The correct expression for that version is:

 set obj = [].$fromJSON(stream)

Errors can be encountered during the normal execution of any code, and the cause of those errors is not always known or predictable. Caché has three primary error reporting mechanisms: two are passive, meaning the user code is responsible for checking some error indicator, and one is active, meaning the error triggers a change in the code path. The two passive mechanisms, %Status and SQLCODE, both require the user code to check for error conditions after executing some code. The code author must know beforehand that these error indicators can be changed by the code being executed.

The third mechanism is exceptions ($ZE with ZTRAP, etc. still exist, of course). With exceptions, the user code only needs to know that an exception could be thrown and to properly structure the code to deal with that possibility. If Caché user code is not already properly structured, then $ZTRAP errors can cause problems, so structuring code to deal with exceptions should not be considered a hardship.

The advantage of exceptions over the two passive types is that all three types of errors - %Status, SQLCODE, and exception - can be integrated into a single error handler. The passive checks can be used to construct exceptions using the exception class constructors - CreateFromStatus and CreateFromSQLCODE - which can then be thrown. The error handling code lives in the CATCH block and can use the exception instance without further need to parse or otherwise process the exception in order to identify the type of exception, the exact error code and name, error context values, and so on. There is no need to litter your code with error handling code that detects different types of errors, formats, parses, and so on. Just do it all in one place.
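As a rough sketch of that consolidation (the method name and query are placeholders, not from any real class), both passive indicators can be promoted to exceptions and handled in a single CATCH block:

```objectscript
    try {
        // %Status check promoted to an exception
        set status = ..SomeStatusMethod()
        if $$$ISERR(status) { throw ##class(%Exception.StatusException).CreateFromStatus(status) }
        // SQLCODE check promoted to an exception
        set result = statement.%Execute()
        if result.%SQLCODE < 0 { throw ##class(%Exception.SQL).CreateFromSQLCODE(result.%SQLCODE,result.%Message) }
    } catch exception {
        // one handler for %Status errors, SQLCODE errors, and native exceptions
        do exception.Log()
    }
```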

There are also exception methods to cast an exception as %Status or SQLCODE error values should there be a contractual obligation to report errors using one of the passive methods.

Another advantage to using exceptions is that there is virtually no runtime overhead. Also, unless you must return a %Status value (it happens), your code is free to return a useful value from a function, improving code readability and efficiency.

Please keep in mind that you are not supposed to be checking the local variable, SQLCODE. You should be checking the %SQLCODE property of either the prepared statement or the %SQL.StatementResult instance returned by %Execute().
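For example (the query text here is illustrative), the check belongs on the statement or result instance, not on the SQLCODE local variable:

```objectscript
    set statement = ##class(%SQL.Statement).%New()
    set sc = statement.%Prepare("select Name from Sample.Person where Name %STARTSWITH ?")
    set result = statement.%Execute("A")
    if result.%SQLCODE < 0 {
        // use result.%SQLCODE and result.%Message, not the SQLCODE local variable
        write "error ",result.%SQLCODE,": ",result.%Message,!
    }
```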

And, the undocumented but fully supported methods of prepare() and execute() do throw exceptions now - no status code unless you want a status code. Just catch any thrown exception and process it as you wish.

I see two primary advantages of using exceptions over other error handling mechanisms. First, exceptions can integrate all of the other error reporting mechanisms, allowing error handling code to be consolidated, as well as allowing errors reported by one mechanism to be re-reported using a different mechanism - %Status reported as SQLCODE and so on. The second is performance. Exceptions, using try/catch, are essentially zero-cost on success; overhead is encountered only when an exception is actually thrown.

An additional, more subjective advantage is code "cleanliness". Code written using try/catch/throw doesn't have to continually check for errors unless it calls functions that use some other error protocol.

To your question, I do not always add try/catch to a method if the catch {} block simply re-throws the exception. I only catch an exception if I need to do something to the exception before re-throwing it (or not throwing it at all) to the caller.

-Dan

 

PS: 

There is a macro, $$$THROWONERROR, that helps clean up calls to functions that return %Status. This macro is a convenient way to replace this pattern:

    set status = ..StatusReturningMethod()
    if $$$ISERR(status) { throw ##class(%Exception.StatusException).CreateFromStatus(status) }

with

    $$$THROWONERROR(status,..StatusReturningMethod())

If the status is not the return value of the code - perhaps it is returned as a by-reference parameter value instead - then there is another macro that can help with the throw:

     $$$ThrowStatus(status)

Here are two snippets. The first is as close as I can come to Amir's original example and the second is a more radical version that embraces try/catch and exceptions.

 

 /// Always return a %Status
ClassMethod SomeMethod2() As %Status
{
    Set tSC = $System.Status.OK()
    Try {
        Set oRS = ##class(%SQL.Statement).%New()
        Set tSC = oRS.%Prepare("Select Name,DOB,Home_City from Sample.Person where Name %STARTSWITH ?")
        Quit:$System.Status.IsError(tSC)
        Set result = oRS.%Execute("A")
        If result.%SQLCODE '< 0 {
            While result.%Next() {
                // Do something...
            }
        }
        Else {
            Throw ##class(%Exception.SQL).CreateFromSQLCODE(result.%SQLCODE,result.%Message)
        }
    }
    Catch (oException) {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

Snippet #2 - embrace exceptions

 /// Always throw an exception, return something useful to the caller
ClassMethod SomeMethod3() As %String
{
    try {
        set oRS = ##class(%SQL.Statement).%New()
        do oRS.prepare("Select Name,DOB,Home_City from Sample.Person where Name %STARTSWITH ?")
        set result = oRS.execute("A")
        while result.%Next() {
            // Do something...
        }
        if result.%SQLCODE < 0 {
            throw ##class(%Exception.SQL).CreateFromSQLCODE(result.%SQLCODE,result.%Message)
        }
        set response = "something useful"
    } catch (oException) {
        // process the exception - perhaps eat it and return, or re-throw
        // often, error/exception logging code can be placed here
        throw oException
    }
    return response
}

Of course, it would be nice if there were a "next()" method that throws an exception. Even the prepare() and execute() are not "officially" documented.

 

HTH,

Dan

%Library.ResultSet combines the statement (prepared) and the result into a single instance. You must consume the entire result - or discard it - before you can execute the prepared statement again. %Library.ResultSet also copies the data from the underlying embedded SQL memory into the result set object - perhaps more than once. %SQL.Statement is the prepared statement and it can be executed as many times as you wish, each producing an independent result. The SQL statement result shares memory with SQL SELECT statements so data does not have to be copied in most cases.
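A quick sketch of the prepare-once/execute-many pattern described above (the query is illustrative, using the exception-throwing prepare()/execute() mentioned earlier); each execution produces an independent result:

```objectscript
    set statement = ##class(%SQL.Statement).%New()
    do statement.prepare("select Name from Sample.Person where Name %STARTSWITH ?")
    // one prepared statement, executed twice - each result is independent
    set resultA = statement.execute("A")
    set resultB = statement.execute("B")
    while resultA.%Next() { write resultA.Name,! }
    while resultB.%Next() { write resultB.Name,! }
```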

%SQL.Statement and %SQL.StatementResult do provide status values where appropriate. However, the convention of "always return a %Status value" destroys our ability to implement true functions that return a usable return value. With the convention of always returning a %Status value, the error case is the primary check. With exceptions, errors become, well, the exception and code can be written in a more direct manner. Most modern programming languages use try/catch.

Since you like macros, you might investigate $$$THROWONERROR. This macro allows the COS programmer to combine %Status values and exceptions using try/catch, writing the error handling only once - in a CATCH block.

For many simple statements and common operations, the two will perform nearly the same. Both employ a generated code container with an embedded SQL statement that is compiled using the same SQL query processor. It is the interface to the compiled embedded SQL query where we see differences. The newer dynamic SQL (implemented several years ago) uses objects to scope versions: you can have multiple instances of the same query open at the same time, you can return multiple result sets from a stored procedure using CALL, you can retrieve output-directed parameters from a CALL-ed procedure, you can execute DDL statements, and so on. With %ObjectSelectMode active, you can retrieve columns that are directly swizzled, enabling access to in-memory versions of objects that may not have been saved to disk. There are many reasons to use %SQL.Statement.
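A brief sketch of that last point (this assumes the Sample.Person class with its object-valued Spouse property; adjust for your own schema): with %ObjectSelectMode set, object-valued columns come back as swizzled orefs:

```objectscript
    set statement = ##class(%SQL.Statement).%New()
    set statement.%ObjectSelectMode = 1
    do statement.prepare("select Name,Spouse from Sample.Person where Spouse is not null")
    set result = statement.execute()
    while result.%Next() {
        // Spouse is object-valued; with %ObjectSelectMode the column value is an oref
        if $isobject(result.Spouse) { write result.Name," -> ",result.Spouse.Name,! }
    }
```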

 Class User.DQ Extends %SQL.CustomQuery
{

Property id As %Integer;

Property lastName As %String;

Property firstName As %String;

Property age As %Integer;

Property ptr As %Integer [ Private ];

Property atEnd As %Boolean [ Private ];

Parameter SQLNAME As String = "DQ";

Method %OpenCursor() [ Private ]
{
    try {
        // Let's set up some temporary data
        set ^CacheTempDQ(1) = $listBuild("Smith","John",42)
        set ^CacheTempDQ(2) = $listBuild("Jones","Quincy",80)
        set ^CacheTempDQ(3) = $listBuild("Wilson","George",71)
        set ^CacheTempDQ(4) = $listBuild("Garcia","Andrew",32)
        set ^CacheTempDQ(5) = $listBuild("Smoak","Felicity",24)
        set ^CacheTempDQ(6) = $listBuild("Partridge","Audrey",32)
        set ^CacheTempDQ(9) = $listBuild("Williams","Andrie",92)
        set ^CacheTempDQ(10) = $listBuild("Orr","Robert",62)
        set ^CacheTempDQ(11) = $listBuild("Moon","Lila",21)
        set ^CacheTempDQ(120) = $listBuild("Lola","Elleoh",67)
        set ..ptr = ""
        set ..atEnd = 0
        set ..%SQLCODE = 0
        set ..%Message = ""
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
}

Method %CloseCursor() [ PlaceAfter = %Next, Private ]
{
    try {
        set ..ptr = ""
        set ..atEnd = 0
        kill ^CacheTempDQ
        set ..%SQLCODE = 0
        set ..%Message = ""
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
}

Method %FetchCursor(ByRef sc As %Library.Status = {$$$OK}) As %Library.Integer
{
    try {
        if '..atEnd {
            set ..ptr = $order(^CacheTempDQ(..ptr),1,data)
            if ..ptr {
                set ..lastName = $list(data,1)
                set ..firstName = $list(data,2)
                set ..age = $list(data,3)
            }
            else {
                set ..atEnd = 1
                set ..%SQLCODE = 100
            }
        }
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
    return '..atEnd
}

}

Then execute this statement:

select * from DQ()

or

select * from DQ() where lastName %STARTSWITH 'S' order by age DESC

The second statement returns:

id   lastName   firstName   age
     Smith      John        42
     Smoak      Felicity    24

2 row(s) affected

 

Hi Chris,

Good question! This is a poorly understood area and there is much to say about it. I'll try to be brief. Feel free to follow up.

Your new property falls into the category of properties commonly referred to as 'derived columns' in other products. We support two different flavors - 'always computed' and 'triggered computed'. An 'always computed' property is indicated by the CALCULATED keyword. A 'triggered computed' property is NOT CALCULATED, although this is the default if you don't specify it. A triggered computed property can be TRANSIENT or stored (not transient - again, the default), and its value is triggered to recompute if the value of any property specified in the SQLCOMPUTEONCHANGE list of properties changes. In an object context, you need to be aware that the value of a TRANSIENT property is directly settable unless you take precautions to prevent that.

The advantage to using SQLCOMPUTED/SQLCOMPUTECODE to derive a property value is that SQL and Objects seamlessly (almost) recognize the derived value and invoke the computation at the proper time. Overriding Get/Set methods can be used to derive property values but SQL will not recognize this as you have already discovered. 

SQL does not operate in an instance context. In other words, there is no oref available to SQL so SQLCOMPUTECODE must run in a class context and not in an instance (object) context. No oref. The way you have implemented your code is correct and, sorry to say, there isn't a cleaner way that I know about. 

There is one other thing you should know. When you open the object you may not actually get a new oref if that object was previously open in your process. That goes for any call to %Open()/%OpenId() - we cache the OIDs with their active orefs and always return the existing oref from %Open if one is present in process memory. What does that mean? It means that you may see data in memory that has not been committed to storage - Objects sees the data because of the in-memory context, but SQL won't, because it runs in an on-disk (storage) context. By opening objects you may produce a computed value that is different from the value computed from the same data as stored. There are ways you can avoid this problem, one being to store the derived value and trigger it to recompute on INSERT and UPDATE. This is easily defined by adding %%INSERT and %%UPDATE to the SQLCOMPUTEONCHANGE list of trigger properties.
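A minimal sketch of that last suggestion (the class and property names are invented for illustration): store the derived value and recompute it both when the source properties change and on SQL INSERT/UPDATE:

```objectscript
Class Demo.Person Extends %Persistent
{

Property FirstName As %String;

Property LastName As %String;

/// Stored derived value, recomputed whenever FirstName or LastName changes,
/// and also on SQL INSERT and UPDATE via the %%INSERT/%%UPDATE pseudo-properties
Property FullName As %String [ SqlComputed, SqlComputeCode = { set {*} = {FirstName}_" "_{LastName} }, SqlComputeOnChange = (FirstName, LastName, %%INSERT, %%UPDATE) ];

}
```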

Not so brief.

-Dan

Hi Peter!!

Your comment is correct of course. But think about the behavior and not the physical model. Everything that PARENT/CHILDREN relationships offer can be easily replicated using ONE/MANY, with the only differences being the default storage model and the composite IDKEY that is required by PARENT/CHILDREN. With ONE/MANY you have the advantage of using system-assigned integer IDs (freeing you up to use bitmap indices - think product/invoice line perhaps) and you have more flexibility with referential actions. PARENT/CHILDREN requires ONDELETE=CASCADE. With ONE/MANY you have the option of ONDELETE=NOACTION/CASCADE... And, of course, if you wish to establish an existence dependency between the parent class and the child, then simply add a REQUIRED constraint to the relationship whose cardinality = ONE.

Hope to see you at Global Summit!

 

Dan

I am not a fan of parent/children relationships. Think about what this really means. Kyle's comment about the data being nested beneath the parent object in the global is just the default physical storage model. The user has control over storage and can change that structure. But consider the logical behavior that PARENT cardinality brings. First, a relationship is like a foreign key - it is constrained by a referential constraint and it is subject to referential actions. Since this is a foreign key referencing the IDKEY of the parent class there is no need to define an ONUPDATE action (IDKEY values cannot be changed) but ONDELETE must be CASCADE. Why? Because PARENT cardinality establishes an EXISTENCE dependency on the parent. You can't change that unless you change the cardinality to ONE. Secondly, the IDKEY of the child class implicitly includes the parent relationship. This establishes an IDENTIFICATION dependency on the child class. You can't identify (the ID/OID) an instance of the child class unless you know the ID of the parent class. Furthermore, once a parent value is assigned and saved, you cannot change the value of the parent property. To do so would identify a different object and it isn't allowed. So - PARENT defines existence and identification dependencies.

To summarize PARENT/CHILDREN behavior:

1. Existence dependency;

2. Identification dependency;

3. Physical storage default is nested (artificially creating an implicit ONDELETE CASCADE behavior at the physical level - IMO this is not a good thing!)

 

And think about other ways to achieve the same thing using ONE/MANY or even a simple foreign key:

1. Existence dependency - make the relationship in the child class (or property if using fkey) REQUIRED;

2. Identification dependency - define the IDKEY and add the relationship with cardinality = ONE to the IDKEY;

3. Physical storage model nested - I don't recommend this model for ONE/MANY. 
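The first two points above can be sketched like this (class names invented for illustration): an Invoice/LineItem pair using ONE/MANY, with REQUIRED on the one-side for the existence dependency and, if identification dependency is also wanted, an IDKEY that includes the relationship:

```objectscript
Class Demo.Invoice Extends %Persistent
{

Relationship Items As Demo.LineItem [ Cardinality = many, Inverse = Invoice ];

}

Class Demo.LineItem Extends %Persistent
{

/// REQUIRED establishes the existence dependency that PARENT gives you implicitly
Relationship Invoice As Demo.Invoice [ Cardinality = one, Inverse = Items, Required ];

Property LineNumber As %Integer;

/// Optional: including the one-side relationship in the IDKEY replicates
/// PARENT/CHILDREN's identification dependency (at the cost of integer IDs)
Index IDKEY On (Invoice, LineNumber) [ IdKey ];

}
```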

The advantages of ONE/MANY:

1. The user has control over dependencies;

2. More ONDELETE/ONUPDATE options;

3. The option to base the IDKEY on a positive integer value, allowing the use of bitmap indices;

What does a relationship do for you that a foreign key does not? One thing. It maintains the relationship in memory when either side of the relationship is in memory (swizzled). This can be a good thing. It can also be a bad thing if there are a large number of related objects in the n-cardinality side.

Caché Objects persistent classes project to SQL as tables. Caché Objects relationships are binary - a relationship must have a defined inverse relationship in the related class. We only support relationship/inverse relationship cardinality pairs of one:many and parent:children (a special case of one:many where the parent is implicitly required and is also implicitly part of the IDKEY). A Caché Objects n-cardinality relationship (children, many) is transient in the container and there is no projection to SQL. A 1-cardinality relationship is stored and projects to SQL as a column that is constrained by a foreign key constraint. A simple query can be used to retrieve the contents of the unprojected n-cardinality relationship. Consider two tables, A and B, related by properties b and a. A.b is of type B and specifies cardinality = many.  B.a is of type A and specifies cardinality = one.

The value of A.b can be determined by this query:

select %ID from B where B.a = ?

and specify '?' as the A.%ID value. In fact, the internal relationship implementation uses a very similar query to populate A.b on %Open.
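Using the naming from the paragraph above, the pair could be declared like this (a sketch of the declarations, not of the internal implementation):

```objectscript
Class User.A Extends %Persistent
{

/// n-cardinality side: transient in the container, not projected to SQL
Relationship b As User.B [ Cardinality = many, Inverse = a ];

}

Class User.B Extends %Persistent
{

/// 1-cardinality side: stored, projects to SQL as a column
/// constrained by a foreign key referencing table A
Relationship a As User.A [ Cardinality = one, Inverse = b ];

}
```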

Yes, of course. The SQLCOMPUTECODE can invoke a classmethod but not an instance method unless you provide the instance. If you wish to pass arguments that correspond to other properties then just use curly-brace syntax to refer to them:

sqlcomputecode = { set {*} = ..MyComputeMethod({property1},{property2}) }

and the compiler will work out the details.
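Put together as a full property definition (the property, parameter, and method names here are invented for illustration):

```objectscript
/// Derived value computed from two other properties of the same row
Property Total As %Numeric [ SqlComputed, SqlComputeCode = { set {*} = ..MyComputeMethod({property1},{property2}) } ];

/// Compute code runs in a class context - no instance (oref) is available,
/// so this must be a classmethod taking the needed values as arguments
ClassMethod MyComputeMethod(p1 As %Numeric, p2 As %Numeric) As %Numeric
{
    quit p1 + p2  // placeholder computation
}
```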

Also, we have proposed syntactical changes that would make this much nicer. The idea is that a class definition is composed of some set of members, members being things like properties, methods, indices, and so on. Each member type, excepting methods, can also have methods. The idea for the syntax is to embed the method definition with the member. This was just an idea and is not an active development project. But - something like this is what we envision:

property foo as %String [ sqlcomputed ] {

    method Compute(someParameter as %String) as %String {

        your code goes here, set the return value and return it

    }

}

 

Just an idea...

Triggers and computed properties (compute triggers) are certainly better than callbacks. Now that we have unified triggers, callbacks should be avoided - IMO - in favor of using triggers. However, INITIALEXPRESSION can be a bad idea in some cases: when there is some outside influence such as a clock or other value source involved, when the property is not read-only protected, and so on. In cases where there is no opportunity for the user to interact with a new instance prior to it being saved, INITIALEXPRESSION is acceptable. If there is opportunity for interaction between instantiation and save (serialization), then INITIALEXPRESSION can be unreliable. The %%INSERT/%%UPDATE compute triggers avoid that.