Dec 2, 2017

Cache Synchronization of databases

Hello everyone,

I want automatic, unidirectional synchronization of multiple databases (some tables from a .dat file, not the whole .dat file).

So far I have tried everything in the %SYNC package, and the best-working class is SyncSet, using journals and GUIDs. The problem is the initial database transport, for example when I want to add another server. The easiest solution I have found is to transport the sync globals ^OBJ.SYNC.N to the other database and then call %SYNC.SyncSet.Import(); however, that does not seem to work with only the global structure, although it works fine when using export/import files.
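For reference, the file-based flow that does work looks roughly like this. This is only a sketch: the sync-set name, transaction pointers, and file paths are placeholders, and the exact method signatures may differ by version, so check the %SYNC.SyncSet class reference.

```objectscript
 // On the source system: build a sync set from the journal and export it.
 // "MySync" and the file paths are placeholders.
 Set syncSet = ##class(%SYNC.SyncSet).%New("MySync")
 // Add journaled transactions since the last sync
 // (firstTran/lastTran are journal pointers you track per target).
 Do syncSet.AddTransactions(firstTran, lastTran)
 Do syncSet.ExportFile("/data/sync/syncset.xml")

 // On the target system: import the exported sync set.
 Set targetSet = ##class(%SYNC.SyncSet).%New("MySync")
 Set sc = targetSet.Import("/data/sync/syncset.xml")
 If 'sc { Do $System.Status.DisplayError(sc) }
```

It is the transport of the exported file (rather than the raw ^OBJ.SYNC.N global) that Import() expects.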

What's the best way to sync multiple databases (specific tables, programmatically)?
How do I perform the initial database transport when using SyncSet?

Thanks in advance.



If it's unidirectional and the network is protected and under control, I would definitely avoid using %SYNC and would use Asynchronous Mirroring instead.

On the other hand, there is a limit on the number of async mirror members you can have; I think it's currently 16. If that is not enough for you, then you will have to think about another solution, such as %SYNC, or simply a process that periodically exports the globals and sends them everywhere through a secure channel (such as a SOAP web service).

I have implemented a %SYNC-over-SOAP toolkit that makes things easier to set up and monitor. I am still finishing some aspects of it. %SYNC is a very good toolkit, but it lacks a good communication and management infrastructure. I have the communication (through a protected SOAP channel) sorted out. I am now working on operational infrastructure such as purging the ^OBJ.SYNC global, protecting journal files from being purged if a node is lagging behind, some monitoring, etc.

If Asynchronous Mirroring doesn't work for you (it should be your first choice), and if you can build a simple task that periodically exports your globals and sends them through SOAP to your other nodes, I can share my code around %SYNC with you.
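As a rough sketch of that periodic export task: the namespace, global list, and file paths below are placeholders, and I believe recent Caché versions expose Export/Import classmethods on %Library.Global, but check the class reference for the exact signatures on your version.

```objectscript
 // Task on the source node: export selected globals to a file,
 // then ship the file through your secure channel (e.g. SOAP).
 Set sc = ##class(%Library.Global).Export("USER", "MyTable.GBL,MyIndex.GBL", "/data/out/globals.gof")
 If 'sc { Do $System.Status.DisplayError(sc) }

 // On each receiving node: load the file after it arrives.
 Set sc = ##class(%Library.Global).Import("USER", "MyTable.GBL,MyIndex.GBL", "/data/in/globals.gof")
 If 'sc { Do $System.Status.DisplayError(sc) }
```

The same pair of calls also covers the initial transport: do one full export of the relevant globals to seed a new node before switching it to incremental synchronization.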

Kind regards,

Amir Samary

If your connection is not stable enough, I'd suggest taking a closer look at good old Shadowing.

It's really jungle-proof.
I used it in the past to transfer data from an oil drilling platform somewhere out on the ocean, over a satellite link with just 64 kbit/s of bandwidth, without any data loss. And that link was far from whatever you would expect from a wired connection on the ground.
Shadowing did it with incredible stability and no loss.
Cascading shadows is also almost unlimited, with no issues.

As Robert indicates, Shadowing and Mirroring at the Caché level are the ways to go.

Shadowing creates a copy of the master by replaying journaled Sets, Kills, and $Bit operations. It's slower to sync and, I think, more prone to desync issues.

Mirroring syncs almost immediately and offers more flexibility and features, such as automatic failover. It's pretty awesome.

Tom Fitzgibbon | gototomAtG...l | 3474648531

I'm just thinking that maybe we need more information as to why this is needed before recommending anything. If the network connection is good enough for mirroring, then why not just map the classes to a central repository? Perhaps all that is needed are security settings to prevent updates from the "slave" systems. Perhaps there is no network connection, in which case mirroring or shadowing is not possible, and what is needed is a good way to automate export/import to OS files.