This should all work fine. Can you make a standalone example (as small as possible) that demonstrates the method signature problem and attach it to this thread?

The property question, while also related to inheritance, is probably a different discussion. I'd recommend working up a small example about that and starting a new discussion, but I'll leave it to your judgment if you think it would fit better here.

In general, classic single inheritance in Zen should work exactly as expected -- and in my experience it has, pretty much from day one.

If you copy the client method to a client method with a different name, do you still have the duplicate parameter in the new client method code?

What do you get for the output from the following commands? Replace "YourApp.Page" with the name of your page or component class (without any .cls or .zen extension).
Set sig=##class(%Dictionary.CompiledMethod).IDKEYOpen("YourApp.Page","doReturn").FormalSpecParsed
Write $LL(sig),!
zzdump sig
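
To look at each argument individually, you can also walk the parsed spec (reusing sig from above). A minimal sketch, assuming each element of FormalSpecParsed is itself a $List whose first two items are the argument's name and type:

  For i=1:1:$ListLength(sig) Write "arg ",i,": name=",$List($List(sig,i),1),", type=",$List($List(sig,i),2),!

A duplicated parameter should show up here as two elements with the same name.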

In a database, your most important asset is your data -- business logic and code come a close second. Someone should at least be paying attention to the data slots for sanity, even though in most cases there is nothing to do. At the very least, any developer adding or modifying properties should "diff" the storage definition when submitting code to the source repository. The class compiler by default will handle everything correctly.

1. The simplest and best way to move from one version of a class to the next is to keep the same storage definition. The class compiler will create slots for any new properties; it is completely non-destructive and conservative. When I retire properties, I generally rename the storage slot manually, giving it a prefix such as zzz<retiredPropertyName>. This lets me explicitly clean up and reuse those slots later if I so choose.
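
For example, in a hypothetical MyApp.Person class where an OldCode property was retired, the storage definition might end up looking like this (slot numbers, global names, and storage type will differ per class):

  Storage Default
  {
  <Data name="PersonDefaultData">
  <Value name="1">
  <Value>%%CLASSNAME</Value>
  </Value>
  <Value name="2">
  <Value>Name</Value>
  </Value>
  <Value name="3">
  <Value>zzzOldCode</Value>
  </Value>
  </Data>
  <DataLocation>^MyApp.PersonD</DataLocation>
  <DefaultData>PersonDefaultData</DefaultData>
  <IdLocation>^MyApp.PersonD</IdLocation>
  <IndexLocation>^MyApp.PersonI</IndexLocation>
  <StreamLocation>^MyApp.PersonS</StreamLocation>
  <Type>%Library.CacheStorage</Type>
  }

Slot 3 held the retired OldCode value; renaming it to zzzOldCode keeps the compiler from handing that slot to the next new property.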

2. For the type of before/after compile triggers you are looking for, I would use the RemoveProjection/CreateProjection methods of a projection class, which provide "a way to customize what happens when a class is compiled or removed".
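
A minimal sketch (package and class names are hypothetical): the projection class overrides the two callbacks, and the class you want to watch declares a Projection member to wire it up.

  Class MyApp.CompileHook Extends %Projection.AbstractProjection
  {

  ClassMethod CreateProjection(classname As %String, ByRef parameters As %String) As %Status
  {
      // runs after <classname> has been compiled
      Write "after compile: ",classname,!
      Quit $$$OK
  }

  ClassMethod RemoveProjection(classname As %String, ByRef parameters As %String, recompile As %Boolean) As %Status
  {
      // runs before recompilation, and when the class is removed
      Write "before recompile/remove: ",classname,!
      Quit $$$OK
  }

  }

Then, in the target class:

  Projection Hook As MyApp.CompileHook;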

3. You can use %Dictionary.CompiledStorage and its related classes (e.g. %Dictionary.CompiledStorageData) to get full access to the compiled storage definition of a class.
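
For example (class name hypothetical; "Default" is the usual storage definition name):

  Set storage=##class(%Dictionary.CompiledStorage).IDKEYOpen("MyApp.Person","Default")
  Write storage.DataLocation,!
  Set key="" For { Set data=storage.Data.GetNext(.key) Quit:key=""  Write data.Name,! }

The Data relationship returns %Dictionary.CompiledStorageData objects, one per <Data> node in the storage definition.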

3a. An alternate approach is to use XSLT or XML parsing to read the storage definition from the exported class definition. I would only use this alternative if you need to capture details for separate source control purposes.
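
A sketch of that route (file path hypothetical): export the class, then point your XSLT/XPath tooling at the <Storage> element inside the exported <Class> element.

  Do $System.OBJ.Export("MyApp.Person.cls","/tmp/MyApp.Person.xml")
  Set sc=##class(%XML.XPATH.Document).CreateFromFile("/tmp/MyApp.Person.xml",.doc)
  Do doc.EvaluateExpression("/Export/Class/Storage","name()",.results)

The results come back as a list of %XML.XPATH.Result objects you can walk or serialize for source control.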

4. The simplest storage definitions use $ListBuild slots and a single global node. When subclasses and collection properties are used, the default storage definition gets more complicated -- that is where you will really want to stick to the simple approach (item 1).
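
To see the simple case concretely (a hypothetical MyApp.Person class with a single Name property and default storage):

  Set p=##class(MyApp.Person).%New()
  Set p.Name="Alice"
  Do p.%Save()
  ZWrite ^MyApp.PersonD

which shows one $ListBuild node per object, e.g.:

  ^MyApp.PersonD=1
  ^MyApp.PersonD(1)=$lb("","Alice")

(the first slot is the %%CLASSNAME slot, which stays empty until subclasses enter the picture).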

I lean toward the manifest solution: by expressing configuration changes explicitly in code, you avoid having to import and export them, which carries both the risk of missing something and the risk of propagating a change unintentionally.
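
In Caché, the usual vehicle for configuration-as-code is a %Installer manifest. A minimal sketch (namespace, database, directory, and class names are all hypothetical):

  Class MyApp.Installer
  {

  XData Install [ XMLNamespace = INSTALLER ]
  {
  <Manifest>
  <Log Level="2" Text="Configuring MYAPP"/>
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP" Data="MYAPP">
  <Configuration>
  <Database Name="MYAPP" Dir="${MGRDIR}myapp" Create="yes"/>
  </Configuration>
  </Namespace>
  </Manifest>
  }

  ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
  {
      Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
  }

  }

Running Do ##class(MyApp.Installer).setup() applies the manifest, and the class itself lives in source control like any other code.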


I prefer using configuration exports for detecting out-of-band changes to configuration and triggering a corrective workflow where needed; this allows for the [hopefully] rare cases where not all changes should be propagated between environments.
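
As one concrete example of the export half (run in the %SYS namespace; file paths hypothetical), the Security.* classes can dump their settings to XML files that diff cleanly between environments:

  Do ##class(Security.Users).Export("/tmp/users.xml")
  Do ##class(Security.Applications).Export("/tmp/applications.xml")

Comparing those files against the previous export is a cheap way to spot out-of-band changes before deciding what to propagate.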

I am interested in hearing from other application developers who are using $System.Security.Audit() to audit data change events.

I agree that rollback handling would be difficult to make general purpose.

To summarize:
1. A generic rollback handler wouldn't be practical to implement or to use.

2. We would need a mechanism like $System.Security.Audit() or ^%ETN, or we'd need a log daemon process to do auditing for us so that the logging would occur outside of the application's normal transactions.

3. We might as well use $System.Security.Audit(), since it is documented as a mechanism available to applications, including the ability to create user-defined audit events.
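
A minimal sketch of item 3 (the source/type/name and event data are hypothetical; the user-defined event has to be created and enabled first, e.g. via Security.Events in the %SYS namespace or the Management Portal):

  ; one-time setup, in %SYS:
  ; Do ##class(Security.Events).Create("MyApp","DataChange","Update","application data changes",1)

  ; in the application:
  Set eventData="order 1234: status changed to shipped"
  Set ok=$SYSTEM.Security.Audit("MyApp","DataChange","Update",eventData,"order 1234 updated")

Audit() returns true on success, and the point of choosing it (item 2) is that the audit write happens outside the application's transaction.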

I am actually talking about application level auditing, but your response is still relevant to the discussion.

It sounds as though I'll need to find a way to do some of the auditing outside a transaction (perhaps in a separate job).

There is an object-specific %OnRollback() callback (mostly used for stream optimization and integrity), but I don't think it is exposed to SQL, so it is not available as a general-purpose mechanism. So I think I still need a two-step audit process (sketched below):

  1. Do the main auditing work somehow outside the transaction (maybe in a separate job?), with a commit flag set to false.
  2. Inside the transaction, update the audit entry (or create a second one), setting the commit flag to true.
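
A sketch of that flow, where MyApp.AuditEntry and its What/Committed properties are hypothetical (step 1 could equally be handed to a separate job via a queue):

  ; step 1: before/outside the transaction -- entry lands with Committed=0
  Set entry=##class(MyApp.AuditEntry).%New()
  Set entry.What="order 1234 updated",entry.Committed=0
  Do entry.%Save()
  Set auditId=entry.%Id()

  TSTART
  ; ... the application's real updates ...
  ; step 2: inside the transaction -- flip the flag
  Set entry=##class(MyApp.AuditEntry).%OpenId(auditId)
  Set entry.Committed=1
  Do entry.%Save()
  TCOMMIT

If the transaction is rolled back, the flag reverts to 0 and the entry records the attempted, rolled-back change.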

Since rollbacks are exceptions, it would be much more elegant if the application could register a rollback handler. Would an enhancement request here make sense?

Since you work on the client with Atelier and only use the server for deployment/compilation/debugging, it does not matter if you need to restart the server every two days, and it allows bug fixes to be deployed automatically. If you export to XML from Atelier and use source control on a system using Studio, you should see the code automatically refresh in Studio when you open the file.

Aaron makes important points -- so it is absolutely necessary to journal updates to audit tables, unless the recovery procedure reads the system journals (which might work for an Object Synchronization based auditing mechanism, but does not work for simpler auditing frameworks).

How would one audit an attempted transaction that is then rolled back? It would seem that the audit entry would need to be atomically committed prior to the real transaction (or in a separate transaction/process), and then the audit data could be amended by the main transaction to show that it was committed. If the transaction gets rolled back, the "committed" amendment would be rolled back with it, and the audit log would tell the whole story.

From Aaron Wassall:

Hi Derek, 


1. I've noticed that there is little point in journaling updates to an audit database, as the audit log is essentially an additional journal. Is there any reason not to disable journaling for audit log updates?


I think the reasons not to disable journaling for the audit database are the same reasons that apply to all databases.


Startup recovery after a crash restores journals in order to apply journaled data that hadn't yet been written to the databases because Caché crashed before the write daemon could write it. If your audit database has journaling turned off, this is a situation where you can lose data. For example:


-  journaling is turned off for the audit database

-  there is a login failure and then the system immediately crashes --> this data didn't make it to the database

-  startup recovery would apply this data during journal restore, but it wasn't journaled, so it doesn't get applied

-  this data is lost forever


Another reason is disaster recovery.  The best, supported way to fully recover from a disaster is to restore from a backup and apply journal files.  But if the audit database isn't journaled, then applying journal files does nothing.  You have lost all of the audit data between the time of the most recent backup and now.


Best, 

Aaron


Aaron Wassall

Support Specialist