Dmitry Maslennikov · Jun 1, 2021

Mirroring Ensemble service information

I have a system with a heavy production workload that generates about 500GB of journal files daily, and I'm looking for ways to decrease the amount of data that ends up there.

I found no way to keep a separate journal for mirrored databases versus the others. So, I'm thinking about moving some of the globals to CACHETEMP so that they disappear from the journals. Looking at the Ens.* globals, about 30% of the data in my journals is just that. Which production data can be safely moved to CACHETEMP, with no issues for mirroring?

Or maybe you have some other suggestions on how to focus journaling on the more important data.

Product version: Caché 2017.1
Discussion (6)

Alternatively, you can create your own separate database that you manage yourself, set its Journal setting to 'No', and then map all the not-so-important globals to that DB.
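As a rough sketch, that mapping can be scripted from the %SYS namespace (the database name and global name here are illustrative, and you should verify the Config API against your version):

    ZN "%SYS"
    // Assumes a database named NOJOURNAL already exists with its
    // "Global Journal State" set to No.
    // Map the global ^MyScratch from the USER namespace into it:
    Kill props
    Set props("Database") = "NOJOURNAL"
    Set sc = ##class(Config.MapGlobals).Create("USER", "MyScratch", .props)
    If 'sc Write "Mapping failed: ", $System.Status.GetErrorText(sc)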

You should know that even in such a database with journalling switched off, changes will still appear in the journals if the data is modified inside transactions.
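A quick illustration (the global name is made up; assume ^Scratch is mapped to a non-journaled database):

    TSTART
    Set ^Scratch(1) = "inside a transaction"   ; still journaled, so the transaction can be rolled back
    TCOMMIT
    Set ^Scratch(2) = "outside a transaction"  ; not journaled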

My question was exactly about that data stored for Ensemble: should I keep it in the mirror, or may I omit it?

There are a few considerations here:

1. If you are using a failover (sync) mirror, you will need that Ens.* data on the backup so a switchover can be done seamlessly, and the Ensemble production can start from the exact point where the failure occurred.

2. Mapping any globals into a non-journaled database will still put them in the journal when the process is "in transaction", and an Ensemble production writes everything in-transaction.

3. Mapping Ens.* to another DB is a good idea if you have several disks and you want to improve I/O on your system (specifically within cloud VMs, which usually have somewhat slower disks than on-premises hardware). Another option is to let the system purge Ensemble data regularly and perhaps keep fewer days of history.

4. Mirror journals are regularly purged after all sync/async mirror members have received them.
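For option 3, the regular purge can be scheduled with the built-in Ens.Util.Tasks.Purge task. A sketch, with illustrative property values (this is normally registered through the Task Manager rather than run directly):

    // Run from the Ensemble production's namespace.
    Set task = ##class(Ens.Util.Tasks.Purge).%New()
    Set task.NumberOfDaysToKeep = 7     ; keep one week of history
    Set task.BodiesToo = 1              ; purge message bodies as well
    Set task.TypesToPurge = "message"   ; purge messages only
    Set sc = task.OnTask()
    If 'sc Write $System.Status.GetErrorText(sc)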

I think the WRC would be the only ones who could tell you what can safely be moved into CACHETEMP, unless you find someone else they have already told.

If you just don't want to store so many journals, you could back up multiple times a day and then purge the journals.
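The purge thresholds can also be tightened in the configuration. A sketch from %SYS, assuming the property names follow the [Journal] section of the CPF (verify them against your version):

    ZN "%SYS"
    Do ##class(Config.Journal).Get(.props)
    Set props("DaysBeforePurge") = 1      ; purge journal files older than one day...
    Set props("BackupsBeforePurge") = 2   ; ...once two backups have completed
    Set sc = ##class(Config.Journal).Modify(.props)
    If 'sc Write $System.Status.GetErrorText(sc)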

Congratulations on using Ensemble to the point you have this problem.


I had similar issues with journal size. As in your case, it grew a lot during the day, and Ensemble only cleaned up at the end of the day. It could reach over 300GB; the disk had that much space available, but depending on the day it consumed all the disk space, stopping the journal and requiring manual intervention.

And since there were numerous SQL transactions that depend on BeginTransaction/CommitTransaction, in addition to mirroring, removing this data from the journal was not an option.

The solution was to adjust the size of the journal file fractions in System > Configuration > Journal Settings [Start new journal file every (MB)] to something like [YourMaxJournalSizeInMB]/256.

Then create a scheduled task that calls, for example, ManualPurge.
It should run daily, every hour; in my case during business hours, from 06:00 to 23:59 (so as not to disturb the backup).
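Such a schedule can also be created in code. A sketch using %SYS.Task (check the property names against your version; the time values are seconds from midnight):

    Set task = ##class(%SYS.Task).%New()
    Set task.Name = "ManualPurge"
    Set task.TaskClass = "YourAplication.Utilities.JournalTask"
    Set task.NameSpace = "%SYS"
    Set task.TimePeriod = 0          ; daily
    Set task.DailyFrequency = 1      ; several times per day...
    Set task.DailyFrequencyTime = 0  ; ...at an hourly interval
    Set task.DailyIncrement = 1      ; every 1 hour
    Set task.DailyStartTime = 21600  ; 06:00
    Set task.DailyEndTime = 86340    ; 23:59
    Set sc = task.%Save()
    If 'sc Write $System.Status.GetErrorText(sc)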

The task class should contain something like:

/// [2016-12-29] Marcio Dias - Creation.
Class YourAplication.Utilities.JournalTask Extends %SYS.Task.Definition
{

Parameter TaskName = "ManualPurge";

Method OnTask() As %Status
{
    Set $ZTrap = "Error"
    Set tSC = $System.Status.OK()

    // Purge all journal files that are no longer required
    Set tSC = ##class(%SYS.Journal.File).PurgeAll()
    If ($System.Status.IsError(tSC)) {
        Do ##class(%SYS.System).WriteToConsoleLog("Error in ManualPurge: "_$System.Status.GetErrorText(tSC))
    }
    Quit tSC
Error
    // Trap any unexpected error and report it as a task failure
    Set $ZTrap = ""
    Do ##class(%SYS.System).WriteToConsoleLog("Trapped error in ManualPurge: "_$ZError)
    Quit $System.Status.Error(5001, "ManualPurge: "_$ZError)
}

}


The purpose of this solution is to physically erase, every hour, the individual journal files in which all open transactions are closed and which have already been transferred to the mirror system.
##class(%SYS.Journal.File).PurgeAll() checks for and guarantees these conditions.

In my case, after deploying this solution, the maximum journal size hardly ever exceeded 80GB in total.

Set in the BP/BPL classes:

[screenshot of the setting missing from the original post]

it would improve journaling.