Transaction suspension

It’s often useful to make changes inside the current transaction that will survive even if the transaction is rolled back -- for example, to do some logging.

This can be achieved by using a global that is mapped to the temporary database -- IRISTEMP. All globals whose names start with ^IRIS.Temp are mapped to IRISTEMP by default. The problem with this approach is that IRISTEMP is cleared on InterSystems IRIS restart, so the log is lost.
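As a minimal sketch of this approach (the ^IRIS.Temp.MyLog name is just an illustration -- any global starting with ^IRIS.Temp is mapped to IRISTEMP by default):

TSTART
set ^IRIS.Temp.MyLog($Increment(^IRIS.Temp.MyLog)) = "about to set ^a"
set ^a = 1
TROLLBACK
// ^a is rolled back, but the ^IRIS.Temp.MyLog entries survive the rollback --
// until the next InterSystems IRIS restart, when IRISTEMP is wiped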

Another option is to suspend the transaction temporarily, do the logging, and then resume the same transaction.

The method that suspends/resumes transactions is $System.Process.TransactionsSuspended(switch).

If switch is 1, transactions are suspended; if switch is 0, they are resumed.

For example:

USER>tstart

TL1:USER>set ^a=1

TL1:USER>do $system.Process.TransactionsSuspended(1)

TL1:USER>set ^log($I(^log)) = "we set ^a to 1"

TL1:USER>do $system.Process.TransactionsSuspended(0)

TL1:USER>trollback

USER>zw ^a

USER>zw ^log
^log=1
^log(1)="we set ^a to 1"

Note that in this case changes to the ^log global are still journaled (if it is mapped to a journaled database).

Some details are in the “Suspending All Current Transactions” section of the “Transaction Processing” chapter of the documentation.

Comments

I would suggest that it makes much more sense to do the logging to globals that are mapped to a database that isn't being journaled in the first place. That way you don't have to add the extra complexity of suspending a transaction (which is a great source of bugs...)
Thanks,
Fab

Hi Fab!

Changes to globals mapped to non-journaled databases are still journaled inside transactions, unless the globals are mapped to IRISTEMP or journaling is stopped completely at the system level.

Quoting the docs: "While updates to non-journaled databases that are part of transactions are journaled, these journal records are used only to roll back transactions during normal operation. These journal records do not provide a means to roll back transactions at startup, nor can they be used to recover data at startup or following a backup restore."

https://irisdocs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page...

First, let me state for clarity that the life of a transaction should be short -- that is, a single logical transaction from the application's point of view. I have seen cases where an entire batch update was treated as a single transaction. This can create many problems that are not easy to diagnose.

I also agree with Fabian that suspending transactions is a path that can lead to complexity in the code and is best to avoid. The nature of a transaction is to track EVERYTHING that happens in the transaction. Messing with that will lead to issues with maintainability and debugging later in the life of the application.

My suggestion is to do your logging to an in-memory (local) structure while in the transaction. Then you can permanently write it out after the transaction commits or rolls back. Two things to be sure of with this method:

  1. Always explicitly commit or roll back the transaction. Don't rely on the implied rollback that occurs when the process ends.
  2. Enclose your transaction in a try/catch block. This ensures that you have the opportunity to commit your logs in the case of a system fault. This could be an existing try/catch in the program; however, for control and clarity I recommend a separate try/catch for the transaction.

How you commit your log updates will depend on your error handling. In this example the logs are committed in two places, after the TCOMMIT and in the catch block, since the catch block throws the exception up the stack. If it did not, a simpler approach would be to write the logs out once, after the catch block.

Example with the exception passed up the stack (just typed here for illustration -- not a running program):

Try {
    set ^LOG($Increment(^LOG)) = "starting a transaction"
    TSTART
    // do some program logic, producing a %Status in tSC
    // do some logging
    set tLog($Increment(^LOG)) = "some application trace" // note: still using the ^LOG increment

    // throw if there was an application error (otherwise TCOMMIT would be unreachable)
    if $$$ISERR(tSC) { throw ##class(%Exception.StatusException).CreateFromStatus(tSC) }

    TCOMMIT
    Merge ^LOG = tLog
} catch except {
    TROLLBACK
    Merge ^LOG = tLog
    Throw except
}
 

Example with internal error handling only:

Try {
    set ^LOG($Increment(^LOG)) = "starting a transaction"
    TSTART
    // do some program logic, producing a %Status in tSC
    // do some logging
    set tLog($Increment(^LOG)) = "some application trace" // note: still using the ^LOG increment

    // throw if there was an application error (otherwise TCOMMIT would be unreachable)
    if $$$ISERR(tSC) { throw ##class(%Exception.StatusException).CreateFromStatus(tSC) }

    TCOMMIT
} catch except {
    TROLLBACK
}
Merge ^LOG = tLog
 

I would not agree with using an "in-memory global" for logging. It would be easier to have one ClassMethod Log that logs everything that needs to be logged; it can do it with objects, which would have indexes for faster access later. It can temporarily switch off journaling altogether, or just suspend the transaction. In any normal application, logging should already be centralized, so this would not add any complexity to the application. And in some cases it is quite difficult to debug issues when you have lost some logging because it was rolled back.
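For illustration, a centralized logger along these lines might look like this (just a sketch -- the App.Logger class name and the ^AppLog global are hypothetical, not from any product):

Class App.Logger [ Abstract ]
{

/// Hypothetical central logging entry point: suspends the current process's
/// transaction, writes the entry, then resumes, so the entry survives a
/// later TROLLBACK by the caller.
ClassMethod Log(message As %String)
{
    do $system.Process.TransactionsSuspended(1)
    set ^AppLog($Increment(^AppLog)) = $listbuild($zdatetime($ztimestamp, 3, 1), $job, message)
    do $system.Process.TransactionsSuspended(0)
}

}

A persistent class with indexes, as suggested above, could work the same way, with the %Save() call placed between the suspend and the resume.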

Dmitry,

The method of logging to a global was chosen primarily to match the original use case. Having a full class for logging, including indexes, would be a better option.

However, my personal belief is that suspending transactions, or even worse, stopping and starting journaling, is not really a good option. Even if the actual coding does not appear complex, there is the potential to reach beyond the current transaction if you don't cover all the bases. Holding the logs in a temporary structure that then gets written out after the transaction ensures that the application logic stays intact. You could even encapsulate this functionality in class methods. If there is any failure, the impact would be on the logging and not on the actual logic of the transaction.

Now, if we could perform an update/insert/save that could be intentionally and deliberately excluded from the transaction, OR if the suspension applied only to the current transaction so that no lasting impact was even possible, then I would feel more comfortable with that approach.

I will add that this is a matter of design and the programmer's approach. Either method works; the decision is where you wish to take the risk, no matter how small that risk is. My preference is to be absolutely sure the integrity of the transaction is maintained, over the logging.

"It can temporarily switch off journaling altogether, or just suspend the transaction"

Switching journaling off [for the current process] is unreliable, as it's ignored in a mirrored environment. Not sure about suspending transactions.

Some details are in “Suspending All Current Transactions” section of “Transaction Processing” chapter of documentation.

Looking through the docs, I'm curious whether there is a discrepancy between their different parts:

1) Suspending All Current Transactions
You can use the TransactionsSuspended() method of the %SYSTEM.Process class to suspend all current transactions system-wide...

2) IRIS 2019.2 Class Reference. %SYSTEM.Process
The TransactionsSuspended(switch) class method controls a switch that will allow a process to temporarily suspend transactions.

I hope that only the second statement is true, isn't it?

Thank you, Alexey. Yes, you are correct: the second statement is the correct one. I've asked for the first statement to be modified.