go to post Dan Pasco · Aug 7, 2017 Please keep in mind that you are not supposed to check the local variable SQLCODE. You should check the %SQLCODE property of either the prepared statement or the %SQL.StatementResult instance returned by %Execute(). And the undocumented but fully supported prepare() and execute() methods do throw exceptions now - no status code unless you want one. Just catch any thrown exception and process it as you wish.
go to post Dan Pasco · Aug 7, 2017 I see two primary advantages of using exceptions over other error handling mechanisms. The first is that exceptions can integrate all of the other error reporting mechanisms, allowing error handling code to be consolidated and allowing an error reported by one mechanism to be re-reported through another - a %Status reported as an SQLCODE, and so on. The second is performance. Exceptions, using try/catch, are essentially zero-cost on the success path; overhead is incurred only when an exception is actually thrown. An additional, more subjective, advantage is code "cleanliness". Code written using try/catch/throw doesn't have to check for errors after every call, except where it invokes functions that use some other error protocol.

To your question: I do not always add try/catch to a method if the catch {} block would simply re-throw the exception. I only catch an exception if I need to do something with it before re-throwing it (or not throwing it at all) to the caller.

-Dan

PS: There is a macro, $$$THROWONERROR, that helps clean up calls to functions that return a %Status. This macro is a convenient replacement for this pattern:

set status = ..StatusReturningMethod()
if $$$ISERR(status) { throw ##class(%Exception.StatusException).CreateFromStatus(status) }

with:

$$$THROWONERROR(status,..StatusReturningMethod())

If the status is not returned by the function - perhaps it comes back as a by-reference parameter value - then another macro can help with the throw: $$$ThrowStatus(status)
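Putting the macro and try/catch together, the consolidated error handling might look like this (a minimal sketch; SomeStatusMethod is a hypothetical %Status-returning method):

```objectscript
ClassMethod DoWork() As %String
{
    try {
        // $$$THROWONERROR throws a %Exception.StatusException when the %Status is an error,
        // so the success path contains no explicit error checks
        $$$THROWONERROR(status,..SomeStatusMethod())
        set result = "something useful"
    } catch exception {
        // one consolidated handler, no matter which mechanism reported the error
        do exception.Log()
        throw exception
    }
    return result
}
```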
go to post Dan Pasco · Aug 7, 2017 Yes, that was placed there for consistency with %Library.ResultSet. Thanks for pointing that out!
go to post Dan Pasco · Aug 7, 2017 Here are two snippets. The first is as close as I can come to Amir's original example and the second is a more radical version that embraces try/catch and exceptions.

/// Always return a %Status
ClassMethod SomeMethod2() As %Status
{
    Set tSC = $System.Status.OK()
    Try {
        Set oRS = ##class(%SQL.Statement).%New()
        Set tSC = oRS.%Prepare("Select Name,DOB,Home_City from Sample.Person where Name %STARTSWITH ?")
        Quit:$System.Status.IsError(tSC)
        Set result = oRS.%Execute("A")
        if result.%SQLCODE '< 0 {
            While result.%Next() {
                //Do something...
            }
        } else {
            throw ##class(%Exception.SQL).CreateFromSQLCODE(result.%SQLCODE,result.%Message)
        }
    } Catch (oException) {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

Snippet #2 - embrace exceptions

/// Always throw an exception, return something useful to the caller
ClassMethod SomeMethod3() As %String
{
    try {
        set oRS = ##class(%SQL.Statement).%New()
        do oRS.prepare("Select Name,DOB,Home_City from Sample.Person where Name %STARTSWITH ?")
        set result = oRS.execute("A")
        while result.%Next() {
            //Do something...
        }
        if result.%SQLCODE < 0 {
            throw ##class(%Exception.SQL).CreateFromSQLCODE(result.%SQLCODE,result.%Message)
        }
        set response = "something useful"
    } catch (oException) {
        // process the exception - perhaps eat it and return, or re-throw
        // often, error/exception logging code can be placed here
        throw oException
    }
    return response
}

Of course, it would be nice if there were a "next()" method that throws an exception. Even prepare() and execute() are not "officially" documented. HTH, Dan
go to post Dan Pasco · Aug 7, 2017 %Library.ResultSet combines the statement (prepared) and the result into a single instance. You must consume the entire result - or discard it - before you can execute the prepared statement again. %Library.ResultSet also copies the data from the underlying embedded SQL memory into the result set object - perhaps more than once. %SQL.Statement is the prepared statement, and it can be executed as many times as you wish, each execution producing an independent result. The SQL statement result shares memory with SQL SELECT statements, so data does not have to be copied in most cases.

%SQL.Statement and %SQL.StatementResult do provide status values where appropriate. However, the convention of "always return a %Status value" destroys our ability to implement true functions that return a usable return value. With that convention, the error case becomes the primary check. With exceptions, errors become, well, the exception, and code can be written in a more direct manner. Most modern programming languages use try/catch.

Since you like macros, you might investigate $$$THROWONERROR. This macro allows the COS programmer to combine %Status values and exceptions using try/catch, writing the error handling only once - in a CATCH block.
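The prepare-once/execute-many behavior described above can be sketched like this (a minimal example against the SAMPLES Sample.Person class; error checking omitted for brevity):

```objectscript
    // prepare once...
    set statement = ##class(%SQL.Statement).%New()
    do statement.%Prepare("select Name from Sample.Person where Name %STARTSWITH ?")
    // ...execute many times; each %Execute() returns an independent %SQL.StatementResult
    for prefix = "A","B","C" {
        set result = statement.%Execute(prefix)
        while result.%Next() {
            write result.%Get("Name"),!
        }
    }
```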
go to post Dan Pasco · Aug 3, 2017 Can you elaborate on your comment regarding status and exception mixtures with %SQL.Statement? I think the interface is completely normal - and consistent with the SQL/CLI Standard. We do report %Status from some calls but the primary error handling is through the very normal SQLCODE/%Message properties.
go to post Dan Pasco · Aug 1, 2017 For many simple statements and common operations, they will perform nearly the same. Both employ a generated code container with an embedded SQL statement that is compiled using the same SQL query processor. It is in the interface to the compiled embedded SQL query where we see differences. The newer dynamic SQL (implemented several years ago) uses objects to scope versions: you can have multiple instances of the same query open at the same time, you can return multiple result sets from a stored procedure using CALL, you can retrieve output-directed parameters from a CALL-ed procedure, you can execute DDL statements, and so on. With %ObjectSelectMode active, you can retrieve columns that are directly swizzled, enabling access to in-memory versions of objects that may not have been saved to disk. There are many reasons to use %SQL.Statement.
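A minimal sketch of %ObjectSelectMode (assuming the Sample.Employee/Sample.Company classes from the SAMPLES namespace):

```objectscript
    set statement = ##class(%SQL.Statement).%New()
    // 1 = object-valued columns are returned as swizzled orefs instead of ID values
    set statement.%ObjectSelectMode = 1
    do statement.%Prepare("select Name,Company from Sample.Employee")
    set result = statement.%Execute()
    while result.%Next() {
        // Company is an in-memory Sample.Company oref here, not just an ID
        write result.Name," works at ",result.Company.Name,!
    }
```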
go to post Dan Pasco · Aug 1, 2017 Are you saying that %Library.ResultSet performs better than %SQL.Statement when executing an equivalent query and retrieving the result? If so, please provide timing tests showing the difference.
go to post Dan Pasco · Aug 1, 2017 It really depends on what you are doing. For retrieving all rows from a result set, our initial testing measured a 4-7x improvement. Dynamic SQL is a more fully featured implementation. It was designed to be consistent with the SQL/CLI standard. It can do so much more than our other dynamic mechanisms.
go to post Dan Pasco · Jun 28, 2017

Class User.DQ Extends %SQL.CustomQuery
{
Property id As %Integer;
Property lastName As %String;
Property firstName As %String;
Property age As %Integer;
Property ptr As %Integer [ Private ];
Property atEnd As %Boolean [ Private ];

Parameter SQLNAME As String = "DQ";

Method %OpenCursor() [ Private ]
{
    try {
        // Let's set up some temporary data
        set ^CacheTempDQ(1) = $listBuild("Smith","John",42)
        set ^CacheTempDQ(2) = $listBuild("Jones","Quincy",80)
        set ^CacheTempDQ(3) = $listBuild("Wilson","George",71)
        set ^CacheTempDQ(4) = $listBuild("Garcia","Andrew",32)
        set ^CacheTempDQ(5) = $listBuild("Smoak","Felicity",24)
        set ^CacheTempDQ(6) = $listBuild("Partridge","Audrey",32)
        set ^CacheTempDQ(9) = $listBuild("Williams","Andrie",92)
        set ^CacheTempDQ(10) = $listBuild("Orr","Robert",62)
        set ^CacheTempDQ(11) = $listBuild("Moon","Lila",21)
        set ^CacheTempDQ(120) = $listBuild("Lola","Elleoh",67)
        set ..ptr = ""
        set ..atEnd = 0
        set ..%SQLCODE = 0
        set ..%Message = ""
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
}

Method %CloseCursor() [ PlaceAfter = %Next, Private ]
{
    try {
        set ..ptr = ""
        set ..atEnd = 0
        kill ^CacheTempDQ
        set ..%SQLCODE = 0
        set ..%Message = ""
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
}

Method %FetchCursor(ByRef sc As %Library.Status = {$$$OK}) As %Library.Integer
{
    try {
        if '..atEnd {
            set ..ptr = $order(^CacheTempDQ(..ptr),1,data)
            if ..ptr {
                set ..id = ..ptr
                set ..lastName = $list(data,1)
                set ..firstName = $list(data,2)
                set ..age = $list(data,3)
            } else {
                set ..atEnd = 1
                set ..%SQLCODE = 100
            }
        }
    } catch exception {
        set ..%SQLCODE = exception.AsSQLCODE()
        set ..%Message = exception.AsSQLMessage()
    }
    return '..atEnd
}
}

Then execute this statement:

select * from DQ()

or

select * from DQ() where lastName %STARTSWITH 'S' order by age DESC

id | lastName | firstName | age
1  | Smith    | John      | 42
5  | Smoak    | Felicity  | 24

2 row(s) affected
go to post Dan Pasco · Jun 28, 2017 Hi Chris,

Good question! This is a poorly understood area and there is much to say about it. I'll try to be brief. Feel free to follow up.

Your new property falls into the category of properties commonly referred to as 'derived columns' in other products. We support two different flavors - 'always computed' and 'triggered computed'. An 'always computed' property is indicated by the CALCULATED keyword. A 'triggered computed' property is NOT CALCULATED, which is the default if you don't specify it. A triggered computed property can be TRANSIENT or stored (not transient - again, the default), and its value is recomputed when the value of any property listed in its SQLCOMPUTEONCHANGE changes. In an object context, be aware that the value of a TRANSIENT property is directly settable unless you take precautions to prevent that.

The advantage of using SQLCOMPUTED/SQLCOMPUTECODE to derive a property value is that SQL and Objects (almost) seamlessly recognize the derived value and invoke the computation at the proper time. Overriding Get/Set methods can also be used to derive property values, but SQL will not recognize this, as you have already discovered. SQL does not operate in an instance context. In other words, there is no oref available to SQL, so SQLCOMPUTECODE must run in a class context and not in an instance (object) context. The way you have implemented your code is correct and, sorry to say, there isn't a cleaner way that I know about.

There is one other thing you should know. When you open an object, you may not actually get a new oref if that object was previously opened in your process. That goes for any call to %Open (or %OpenId) - we cache the OIDs with their active orefs and always return the existing oref from %Open if one is present in process memory. What does that mean? It means that you may see data in memory that has not been committed to storage - Objects sees the data because of the in-memory context, but SQL won't, because it runs in an on-disk (storage) context. By opening objects you may produce a computed value that is different from the one computed from the same data as stored. There are ways to avoid this problem, one being to store the derived value and trigger it to recompute on INSERT and UPDATE. This is easily done by adding %%INSERT and %%UPDATE to the SQLCOMPUTEONCHANGE list of trigger properties.

Not so brief.

-Dan
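A minimal sketch of that last suggestion - a stored derived column recomputed on every filing event (the class and property names are illustrative):

```objectscript
Class User.Demo Extends %Persistent
{
Property FirstName As %String;

Property LastName As %String;

// stored derived column, recomputed by both SQL and Objects on INSERT and UPDATE,
// so the stored value always agrees with the stored source properties
Property FullName As %String [ SqlComputeCode = { set {*} = {FirstName}_" "_{LastName} }, SqlComputed, SqlComputeOnChange = (%%INSERT, %%UPDATE) ];
}
```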
go to post Dan Pasco · Jun 20, 2017 Hi Peter!!

Your comment is correct, of course. But think about the behavior and not the physical model. Everything that PARENT/CHILDREN relationships offer can be easily replicated using ONE/MANY, the only differences being the default storage model and the composite IDKEY that PARENT/CHILDREN requires. With ONE/MANY you have the advantage of using system-assigned integer IDs (which frees you up to use bitmap indices - think product/invoice line, perhaps) and you have more flexibility with referential actions. PARENT/CHILDREN requires ONDELETE=CASCADE; with ONE/MANY you have the option of ONDELETE=NOACTION/CASCADE... And, of course, if you wish to establish an existence dependency between the parent class and the child, simply add a REQUIRED constraint to the relationship whose cardinality = ONE.

Hope to see you at Global Summit! Dan
go to post Dan Pasco · Jun 16, 2017 I am not a fan of parent/children relationships. Think about what this really means. Kyle's comment about the data being nested beneath the parent object in the global is just the default physical storage model. The user has control over storage and can change that structure. But consider the logical behavior that PARENT cardinality brings.

First, a relationship is like a foreign key - it is constrained by a referential constraint and it is subject to referential actions. Since this is a foreign key referencing the IDKEY of the parent class, there is no need to define an ONUPDATE action (IDKEY values cannot be changed), but ONDELETE must be CASCADE. Why? Because PARENT cardinality establishes an EXISTENCE dependency on the parent. You can't change that unless you change the cardinality to ONE.

Secondly, the IDKEY of the child class implicitly includes the parent relationship. This establishes an IDENTIFICATION dependency on the child class. You can't identify (the ID/OID) an instance of the child class unless you know the ID of the parent class. Furthermore, once a parent value is assigned and saved, you cannot change the value of the parent property. To do so would identify a different object, and it isn't allowed. So - PARENT defines existence and identification dependencies.

To summarize PARENT/CHILDREN behavior:
1. Existence dependency;
2. Identification dependency;
3. Physical storage default is nested (artificially creating an implicit ONDELETE CASCADE behavior at the physical level - IMO this is not a good thing!)

And think about other ways to achieve the same thing using ONE/MANY or even a simple foreign key:
1. Existence dependency - make the relationship in the child class (or the property, if using a foreign key) REQUIRED;
2. Identification dependency - define the IDKEY and add the relationship with cardinality = ONE to the IDKEY;
3. Physical storage model nested - I don't recommend this model for ONE/MANY.

The advantages of ONE/MANY:
1. The user has control over dependencies;
2. More ONDELETE/ONUPDATE options;
3. The option to base the IDKEY on a positive integer value, allowing the use of bitmap indices.

What does a relationship do for you that a foreign key does not? One thing. It maintains the relationship in memory when either side of the relationship is in memory (swizzled). This can be a good thing. It can also be a bad thing if there are a large number of related objects on the n-cardinality side.
go to post Dan Pasco · Jun 15, 2017 Caché Objects persistent classes project to SQL as tables. Caché Objects relationships are binary - a relationship must have a defined inverse relationship in the related class. We only support relationship/inverse-relationship cardinality pairs of one:many and parent:children (a special case of one:many where the parent is implicitly required and is also implicitly part of the IDKEY). A Caché Objects n-cardinality relationship (children, many) is transient in the container and there is no projection to SQL. A 1-cardinality relationship is stored and projects to SQL as a column that is constrained by a foreign key constraint.

A simple query can be used to retrieve the contents of the unprojected n-cardinality relationship. Consider two tables, A and B, related by properties b and a. A.b is of type B and specifies cardinality = many; B.a is of type A and specifies cardinality = one. The value of A.b can be determined by this query:

select %ID from B where B.a = ?

specifying '?' as the A.%ID value. In fact, the internal relationship implementation uses a very similar query to populate A.b on %Open.
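A sketch of the two classes described above (shown together for illustration; each class would be defined separately):

```objectscript
Class User.A Extends %Persistent
{
// the many side: transient in A, not projected to SQL
Relationship b As User.B [ Cardinality = many, Inverse = a ];
}

Class User.B Extends %Persistent
{
// the one side: stored, projected to SQL as a column with a foreign key constraint
Relationship a As User.A [ Cardinality = one, Inverse = b ];
}
```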
go to post Dan Pasco · May 26, 2017 Yes, of course. The SQLCOMPUTECODE can invoke a classmethod, but not an instance method unless you provide the instance. If you wish to pass arguments that correspond to other properties, just use curly-brace syntax to refer to them:

SqlComputeCode = { set {*} = ..MyComputeMethod({property1},{property2}) }

and the compiler will work out the details.

Also, we have proposed syntactical changes that would make this much nicer. The idea is that a class definition is composed of some set of members, members being things like properties, methods, indices, and so on. Each member type, excepting methods, can also have methods. The idea for the syntax is to embed the method definition with the member. This is just an idea and is not an active development project, but something like this is what we envision:

property foo as %String [ sqlcomputed ]
{
    method Compute(someParameter as %String) as %String
    {
        // your code goes here; set the return value and return it
    }
}

Just an idea...
go to post Dan Pasco · May 26, 2017 Triggers and computed properties (compute triggers) are certainly better than callbacks. Now that we have unified triggers, callbacks should be avoided - IMO - in favor of triggers. As for INITIALEXPRESSION: when some outside influence is involved, such as a clock or another value source, or when the property is not protected as read-only, using INITIALEXPRESSION can be a bad idea. In cases where there is no opportunity for the user to interact with a new instance prior to it being saved, INITIALEXPRESSION is acceptable. If there is opportunity for interaction between instantiation and save (serialization), then INITIALEXPRESSION can be unreliable. The %%INSERT/%%UPDATE compute triggers avoid that.
go to post Dan Pasco · May 26, 2017 As a follow-up, I should have provided an example!

Class User.TriggerCompute Extends (%Persistent, %Populate)
{
Property foo As %String;

Property CreationDate As %TimeStamp [ SqlComputeCode = { set {*}=$zdt($h,3) }, SqlComputed, SqlComputeOnChange = %%INSERT ];
}

and I populated this class at random intervals. Here are five sample rows:

select top 5 * from triggercompute order by creationdate desc

ID | CreationDate        | foo
11 | 2017-05-26 08:40:15 | Y1407
10 | 2017-05-26 08:40:10 | J9793
9  | 2017-05-26 08:40:03 | Y5011
8  | 2017-05-26 08:39:57 | E51
7  | 2017-05-26 08:39:55 | L3468

5 row(s) affected
go to post Dan Pasco · May 26, 2017 Very interesting question. Rubens Silva's answer is accurate - %OnAddToSaveSet provides more flexibility when you need to modify the object, or the graph of objects, to be serialized. The callbacks we provide as part of Caché Objects are limited to Objects and are often quite useful. However, Caché persistent classes also project to SQL as tables, and users can perform CRUD operations using either Objects or SQL. Callbacks are not processed by SQL, so any work you do in a callback must either be reproduced in some form in SQL, or the user must recognize that accessing persistent data through SQL might produce different results from an equivalent access using Caché Objects.

But we have better choices these days. Caché Objects provides two mechanisms for triggering behavior, and SQL recognizes both of them.

The first is a compute trigger. You can define a property as SQLCOMPUTED. The value of the property can be defined to be computed whenever the value of any property in the SQLCOMPUTEONCHANGE list changes. We support two meta-properties to trigger a compute on insert and update - %%INSERT and %%UPDATE. If SQLCOMPUTEONCHANGE includes %%INSERT, then the value of the property is computed when the object is about to be inserted into the database; this computation will override the current value of the property. Similarly, if %%UPDATE is specified in SQLCOMPUTEONCHANGE, then the property's value is computed just prior to saving an updated version of the object. Compute triggers have been present in Caché for a very long time, and %%INSERT/%%UPDATE were available in 2008.1.

The second mechanism is a TRIGGER. TRIGGERs were once the exclusive domain of SQL, but we have since added a new FOREACH keyword value of ROW/OBJECT. When FOREACH = ROW/OBJECT, the trigger will be pulled at the defined time for each filing event, whether using SQL or Objects. FOREACH = ROW indicates that the trigger is invoked only when SQL is used. There is no FOREACH value for invoking a trigger only when Objects is used. We refer to this feature as "unified triggers". It first appeared in Caché 2014.1.

-Dan
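A minimal sketch of such a unified trigger (the class name and ^AuditLog global are illustrative):

```objectscript
Class User.TriggerDemo Extends %Persistent
{
Property Name As %String;

// Foreach = row/object makes the trigger fire for both SQL and Objects filing events
Trigger LogInsert [ Event = INSERT, Foreach = row/object ]
{
    // {Name} refers to the field value in the row/object being inserted
    set ^AuditLog($increment(^AuditLog)) = {Name}_" inserted at "_$zdatetime($horolog,3)
}
}
```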