I guess it would not, as the ZRemove / ZLoad / ZInsert / ZSave commands are mostly used as tools for the legacy way of editing routines (using the routine buffer) rather than for code execution. Of course, one can execute code inside a previously loaded routine: `zload routine do sub(...)`, but this looks strange compared with the normal way of doing the same using `do sub^routine(...)` or `set sc=$$fun^routine(...)`. In short, virtually nobody runs code this way.
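To make the contrast concrete, here is a minimal terminal sketch (routine and label names are placeholders):

```
 ; legacy style: load the routine into the buffer, then call a label in it
 ZLOAD MyRoutine
 DO Sub(1,2)                  ; runs Sub in the currently loaded routine

 ; the usual, explicit way
 DO Sub^MyRoutine(1,2)
 SET sc=$$Fun^MyRoutine(1,2)
```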

Hello Stefan,

Thanks for the reference, although I'm still not sure about step #2, as ^JRNRESTO provides a (default) option to disable journaling of updates during the restore to make the operation faster; see step #10 of Restore Globals From Journal Files Using ^JRNRESTO. Besides, this is the only option compatible with parallel dejournaling. So the idea of switching off journaling system-wide looks excessive.
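For reference, a minimal sketch of invoking the restore utility (it is interactive, so the exact prompts are in the documentation):

```
 ; run from the %SYS namespace; ^JRNRESTO prompts interactively
 ZNSPACE "%SYS"
 DO ^JRNRESTO
 ; one of its prompts asks whether to disable journaling of updates
 ; during the restore -- the default, and the only mode compatible
 ; with parallel dejournaling
```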

Regards,
Alexey

The manual says I should (in short):

  1. Stop journaling with ^JRNSTOP

Which manual says so? Every manual I've read in the last 20 years says quite the opposite: never stop journaling unless you want to get your system into trouble.
It seems you should not worry about this at all: during the restore, the database can't be involved in any user activity, so no journal records of its changes would be written.
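And if one really wants to keep a restore's own updates out of the journal, there is a documented per-process alternative to stopping journaling system-wide; a minimal sketch using the %NOJRN entry points:

```
 ; disable journaling for the current process only,
 ; leaving system-wide journaling untouched
 DO DISABLE^%NOJRN
 ; ... perform the bulk updates ...
 DO ENABLE^%NOJRN             ; re-enable before normal work resumes
```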

Evgeniy, thank you for sharing your experience.

there is one table that contains 11,330,263 rows at the time of writing. Not critically many, but it creates delays. Even a query to count the number of rows takes almost 30 seconds

Judging by the number of rows alone, one can't really consider such a table a big one.
What was the size (in GB) of the underlying global?
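(In case it helps: the standard ^%GSIZE utility reports this; a minimal sketch:)

```
 ; run in the namespace that holds the table's data
 DO ^%GSIZE
 ; the utility prompts for the globals to examine and reports
 ; the allocated and used blocks for each of them
```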

If you are interested in moving globals without downtime, there is (live-global-mover).

Thank you, but this is not an issue for us anymore, as new deployments of our HIS use separate document storage from the very beginning.

Your solution is beautiful, as it allows placing the ECP-enabled "moved data" server on some less expensive disk storage. In the case I've briefly described, ECP & Mirror were already in use, so we couldn't place the document DB on a separate data server, as having several independent data servers would be a bad decision for many reasons.

To reduce the backup time, it could be interesting to move these data to a database dedicated to the archive and make a backup of this database only after an archive process

...

Copy data older than 30 days to the ARCHIVE database

Hi Lorenzo,

We had a similar problem at our largest customer's site, whose total database size had grown beyond 2 TB at the time (now they have more than 5 TB). Our solution was more complex than yours, as the data move process lasted several days and we couldn't stop the users for such a long time. In place of each (already moved) document we stored a reference to its new location. After the data move was finished, the namespace mapping was changed. The references were left in the original DB because they took much less space than the original documents.
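To illustrate the idea (the global and node names here are hypothetical, not our actual schema), each document move looked roughly like this; a real implementation would also need locking around concurrently updated documents:

```
 ; ^Doc(id) holds a document in the original DB,
 ; ^DocNew(id) is its new home in the separate database
 SET id=""
 FOR {
     SET id=$ORDER(^Doc(id))  QUIT:id=""
     MERGE ^DocNew(id)=^Doc(id)        ; copy the document subtree
     KILL ^Doc(id)
     SET ^Doc(id)=$NAME(^DocNew(id))   ; leave a small reference stub
 }
```

Once everything had been copied, the global mapping for the document global was switched to the new database (via the Management Portal, or programmatically with the Config.MapGlobals class in %SYS).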

This rather sophisticated approach allowed us to move the document data without stopping user activity. And what was the total win? Should the document data be backed up? Yes. Should it be journaled / mirrored? Definitely yes. The only advantage achieved with this data separation was nothing more or less than the ability to deploy testing and/or learning environments without the document database. That's all.