Cold backup is one of the options documented in the Data Integrity Guide.

But a lot depends on what you mean by "all data" -- don't forget installation files, license key, journals, any stream files you may have, and on Windows remember that installation puts some keys in the Registry.  How much of this you'll need to restore depends on the kind of trouble you encounter.  For example, if you want to restore a database, but the instance is otherwise healthy, the Registry is likely in working order, but if the entire server is destroyed, you have a more complicated recovery.

If you need to be able to bring the instance back to its latest state (roll-forward recovery), you'll need journal files that won't exist yet at the time you take the backup, so you may want to back up journal files separately during working hours.
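A minimal sketch of such a daytime journal copy is below.  Both directory names are assumptions (the /tmp paths just make the demo self-contained); point them at your instance's real journal directory and your backup target, and drop the demo-setup lines for real use:

```shell
#!/bin/sh
# Daytime journal copy -- a sketch, not a complete backup strategy.
# Both directory names are assumptions; substitute your instance's
# journal directory and your backup target.
JOURNAL_DIR=${JOURNAL_DIR:-/tmp/demo-journals}
BACKUP_DIR=${BACKUP_DIR:-/tmp/demo-journal-backup}

# Demo setup so the sketch runs as-is; remove these lines for real use.
mkdir -p "$JOURNAL_DIR"
touch "$JOURNAL_DIR/20240101.001"

mkdir -p "$BACKUP_DIR"
# Copy any journal file not already in the backup area, keeping timestamps.
for f in "$JOURNAL_DIR"/*; do
    [ -f "$f" ] || continue
    [ -e "$BACKUP_DIR/$(basename "$f")" ] || cp -p "$f" "$BACKUP_DIR/"
done
```

You could run something like this from cron every hour.  Note one caveat: the active journal file keeps growing, so a simple existence check like this under-copies the current file -- treat it as a starting point, not a finished solution.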

It would be a good idea to protect your installation .EXE file as well, so that you can install a fresh copy of the exact version you use.  Down the road, that version might no longer be available from the WRC.

Take care to preserve the correct ownership and permissions on files so that they come back correctly when restored from the NAS.
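For instance, tar's -p flag keeps file modes intact across the archive/restore round trip (run the restore as root if ownership, not just mode bits, must be re-applied).  This self-contained sketch uses /tmp placeholder paths in place of your real directories and NAS mount:

```shell
#!/bin/sh
# Sketch: preserve file modes across an archive/restore cycle.
# All paths are demo placeholders; substitute your real directories
# and the NAS mount point.
SRC=/tmp/perm-demo/src
DST=/tmp/perm-demo/restore
mkdir -p "$SRC" "$DST"
touch "$SRC/cache.key"
chmod 600 "$SRC/cache.key"              # deliberately restrictive mode

# -p preserves permissions when creating and when extracting.
tar -cpf /tmp/perm-demo/files.tar -C "$SRC" .
tar -xpf /tmp/perm-demo/files.tar -C "$DST"

ls -l "$DST/cache.key"                  # still mode 600 after restore
```

rsync -a does the same for a directory-to-directory copy.  Either way, ownership (as opposed to permission bits) can only be re-applied when the restore runs as root.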

Your account team can be a good resource.

Robert Cemper has a good idea for getting the container running so you can do work.  Or you can use iris-main to tell IRIS not to start when you bring the container up.

The username and password for EmergencyId are ones you define for that session, so they can be whatever you choose.  See https://docs.intersystems.com/iris20241/csp/docbook/DocBook.UI.Page.cls?... for the details.

Knowing more about why you want emergency mode can help us get you a better answer.

Echoing what Vic and Dmitry have mentioned.  GARCOL cleans up large KILLs, so it's a database operation.  Your post seems to ask a different question, akin to Java object garbage collection. 

I've been quite surprised to see how much work gets done by IRIS processes with scant process memory; I suppose because it's so easy to use globals.

Understanding what you are experiencing and trying to do would help a lot.

I ran a quick test, Luis, on my Windows 10 laptop with 32GB of memory.  Memory usage peaked at about 25GB and settled down to about 20GB.  When I deleted the container and stopped Docker, memory went down to about 10GB used.

I did not see obvious memory exhaustion in what is an informal test.  I don't have a good sense of what "good" memory usage should be like on this image.  If this is causing you trouble, reach out to WRC and they can get you more help.

Rochdi, if you haven't created and practiced your own steps for doing journal restore, you might be best served by contacting the WRC or your account team at InterSystems.  Restoring from journals can involve many decisions about where the journals come from, where you are using them now, and several other items -- it can get complicated depending on exactly what you need to do.

Also, you can't restore a complete database from journals unless you have all the journals from the time you created the database, which isn't very likely if you have been using the database for more than a few days.  You also reported a very old version of Caché (2014.1) which might affect how you recover.

Certainly if you are in a crisis situation, contact the WRC for immediate help.

Rochdi,
2.2TB is within the NTFS file size and partition size limits I could find.  Caché can grow a CACHE.DAT file to 32TB before reaching its software limit.

From what you've shared, I have a few ideas:

The NTFS partition is limited in size, causing the <FILEFULL>.

There is some Windows policy that is limiting the maximum file size to less than what NTFS allows.

You've encountered an incompatibility between Ensemble 2014.1 and Windows Server 2016 (which is not a supported platform for this version according to documentation -- it's too new).

Your best bet is probably contacting WRC for help in sorting this situation out.

^INTEGRIT is the simplest way to check integrity.  Direct the integrity check output to a file and, as others have said, contact the WRC if you have support.  The most direct way to resolve database errors is to restore from a good backup and replay journals.  If you can't do that, the other alternatives almost always involve loss of information.  The WRC has specialists who understand database internals, and the WRC always wants to investigate the root cause of any database problem.

Hi Paul, I think you might be better off talking with the WRC or your Linux distro's support folks, depending on whether this works at the Linux shell or not.  Is this trouble with Linux and Caché together (start with your OS vendor), or just with Caché (ask the WRC)?  If you have a contact at VMS Software, they might have a reliable solution too.

Something like

PROCAUTO  = /JRNDSK/ProcAuto_share

just sets up a variable, so you can replace the string '/JRNDSK/ProcAuto_share' with $PROCAUTO.  There's nothing fancier than that.  Linux doesn't have a notion of a "shortcut" the way Windows does (some Linux GUIs do offer shortcuts, but only at the application level), and certainly nothing like VMS logicals.  Soft links and hard links are just different ways of giving a file another name at another location, so there's nothing to pass in, really.
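In a shell script it's all plain string substitution.  A tiny sketch, using the path from your question (the directory itself needn't exist for the variable to work):

```shell
#!/bin/sh
# PROCAUTO is just a string-valued shell variable.
PROCAUTO=/JRNDSK/ProcAuto_share

echo "$PROCAUTO"                    # expands to /JRNDSK/ProcAuto_share
ls "$PROCAUTO" 2>/dev/null || true  # same as: ls /JRNDSK/ProcAuto_share

# Export it if child processes started from this shell should see it too:
export PROCAUTO
```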

I still might not be clear on what you are trying to achieve though, and the details matter a bit -- things are a little different depending on whether you are using Samba or another shared filesystem, and on what you're trying to pass the link into.  There may be another way to reach your goal...or maybe not.  VMS is very different from Linux, and you've bumped into one of those really nice VMS features that people miss when they move away.

If you are overall trying to migrate from Caché on VMS, I would certainly talk with your account team because there's a lot of learning gained from working with customers who have started that journey. 

Hi Paul! Nice to see you on the DC.  These questions are hard because there's a lot of "plumbing" to think about.

If I understand things right, you have an environment variable for root set via a script file in /etc/profile.d, and that's working at the Linux shell.  But you don't see it in a Caché terminal, even when you start the instance as root.

Caché daemons will run as the instance owner and user jobs would run with the privilege and environment of the user who logs in.  Otherwise, an ordinary user could open a Caché terminal and work with root privilege.  I'm writing this from memory, and Caché 2015.1 is pretty old now, so there might be some other details that are relevant (and which I'm forgetting).

Another issue is that when you use the sudo or su commands to "become root", you don't necessarily get root's environment unless you use the right options.  The manpages for sudo and su should help you figure this out.
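You can see the effect without being root.  In this sketch, env -i scrubs the environment much the way sudo's default env_reset does; PROCAUTO is the variable from your question:

```shell
#!/bin/sh
# Why a variable can vanish when you "become root": commands run with a
# scrubbed environment don't inherit it.
export PROCAUTO=/JRNDSK/ProcAuto_share

# An ordinary child process inherits the exported variable:
sh -c 'echo "child sees: $PROCAUTO"'

# A child started with a clean environment does not -- analogous to
# running a command under sudo with env_reset in effect:
env -i sh -c 'echo "scrubbed child sees: $PROCAUTO"'
```

By contrast, sudo -i and su - start a login shell with root's own environment, and sudo -E asks sudo to keep yours.  Which one you want depends on whose variables the command should see.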

If you can tell us some more about what you are trying to accomplish with the link, we might be able to help further.  If you want links that are created when the instance starts and then persist, that might be better addressed with actions in SYSTEM^%ZSTART or ^ZSTU.  For that, the WRC might also be able to help.

Good luck.  I hope this at least gives you a trailhead to solve the trouble.