Robert Cemper has a good idea for getting the container running so you can do work.  Or you can use iris-main to tell IRIS not to start when you bring the container up.

The user and password for EmergencyId are ones you define for that session, so they can be whatever you choose.  See the documentation for the details.
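If it helps, the command-line form I remember looks roughly like the sketch below.  Treat the instance name and credentials as placeholders, and double-check the syntax against the startup documentation for your version, since I'm writing this from memory:

```
# start the instance in emergency access mode; the EmergencyId user and
# password are whatever you choose for this one session
iris start IRIS EmergencyId=SomeUser,SomePassword
```

(On older Caché installs the equivalent control command is ccontrol rather than iris.)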

Knowing more about why you want emergency mode can help us get you a better answer.

Echoing what Vic and Dmitry have mentioned: GARCOL reclaims the database blocks freed by large KILLs, so it's a database-level operation.  Your post seems to ask a different question, akin to Java object garbage collection. 

I've been quite surprised to see how much work gets done by IRIS processes with scant process memory; I suppose because it's so easy to use globals.

Understanding what you are experiencing and trying to do would help a lot.

I ran a quick test, Luis, on my Windows 10 laptop with 32GB of memory.  Memory usage peaked at about 25GB and settled down to about 20GB.  When I deleted the container and stopped Docker, memory went down to about 10GB used.

I did not see obvious memory exhaustion in this informal test.  I don't have a good sense of what "good" memory usage looks like on this image.  If this is causing you trouble, reach out to the WRC and they can get you more help.

Rochdi, if you haven't created and practiced your own steps for doing journal restore, you might be best served by contacting the WRC or your account team at InterSystems.  Restoring from journals can involve many decisions about where the journals come from, where you are using them now, and several other items -- it can get complicated depending on exactly what you need to do.

Also, you can't restore a complete database from journals unless you have all the journals from the time you created the database, which isn't very likely if you have been using the database for more than a few days.  You also reported a very old version of Caché (2014.1), which might affect how you recover.

Certainly if you are in a crisis situation, contact the WRC for immediate help.

2.2TB is within the NTFS file size and partition size limits I could find.  Caché can grow a CACHE.DAT file to 32TB before reaching its software limit.
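As a rough sanity check on those ceilings -- assuming the commonly cited cap of about 2^32 clusters per NTFS volume, which is an assumption worth verifying against Microsoft's documentation for your Windows version -- the arithmetic looks like this:

```shell
# Approximate NTFS volume ceiling: ~2^32 clusters times the cluster size.
# 4096 bytes is the default cluster size; 65536 is the largest classic one.
for cs in 4096 65536; do
  tib=$(( 4294967296 * cs / 1099511627776 ))   # bytes -> TiB
  echo "cluster size ${cs} bytes -> ceiling ~${tib} TiB"
done
```

With default 4 KB clusters that works out to roughly 16 TiB, so a 2.2TB file is well inside the filesystem's own limit -- which is why I suspect the partition size or a policy rather than NTFS itself.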

From what you've shared, I have a few ideas:

The NTFS partition is limited in size, causing the <FILEFULL>.

There is some Windows policy that is limiting the maximum file size to less than what NTFS allows.

You've encountered an incompatibility between Ensemble 2014.1 and Windows Server 2016 (which is not a supported platform for this version according to documentation -- it's too new).

Your best bet is probably contacting WRC for help in sorting this situation out.

^INTEGRIT is the simplest way to check integrity.  Direct the integrity check's output to a file and, as others have said, contact the WRC if you have support.  The most direct way to resolve database errors is to restore from a good backup and replay journals.  If you can't do that, the other alternatives almost always involve loss of information.  The WRC has specialists who understand database internals, and the WRC always wants to investigate the root cause of any database problem.
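From memory (the prompts differ a little between versions), the interactive utility runs from the %SYS namespace and asks where to send its output; the file name below is just an example:

```
%SYS>do ^INTEGRIT
     ; at the prompts, direct output to a file such as /tmp/integrity.txt
     ; and select the databases to check
```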

Hi Paul, I think you might be better off talking with the WRC or your Linux distro's support folks, depending on whether this works at the Linux shell or not.  Is this trouble with both Linux and Caché (start with your Linux vendor), or just with Caché (ask the WRC)?  If you have a contact at VMS Software, they might have a reliable solution too.

Something like

PROCAUTO=/JRNDSK/ProcAuto_share

just sets up a shell variable (note there can be no spaces around the = in a shell assignment), so you can replace the string '/JRNDSK/ProcAuto_share' with $PROCAUTO.  There's nothing fancier than that.  Linux doesn't have a notion of a "shortcut" like Windows does (although some Linux GUIs do give you shortcuts, just at the application level), and certainly nothing like VMS logicals.  Symbolic links and hard links are just different ways of giving a file another name at a different location, so there's nothing to pass in, really.
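To make the distinction concrete, here's a minimal sketch using throwaway paths under /tmp (stand-ins for your real directories): the variable exists only inside the shell that sets it, while the symlink is a filesystem object every process can see:

```shell
# A shell variable: expands to the path only within this shell session.
PROCAUTO=/tmp/ProcAuto_share
mkdir -p "$PROCAUTO"

# A symbolic link: a real directory entry that any process can traverse.
ln -sfn "$PROCAUTO" /tmp/procauto
ls -ld /tmp/procauto    # /tmp/procauto -> /tmp/ProcAuto_share
```

Anything that accepts a path -- including a Caché process -- can open /tmp/procauto without knowing a variable was ever involved.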

I still might not be clear on what you are trying to achieve, though, and the details matter a bit -- things are a little different depending on whether you are using Samba or another shared filesystem, and on what you're trying to pass the link into.  There may be another way to reach your goal...or maybe not.  VMS is very different from Linux, and you've bumped into one of those really nice VMS features that people miss when they move away.

If you are overall trying to migrate from Caché on VMS, I would certainly talk with your account team because there's a lot of learning gained from working with customers who have started that journey. 

Hi Paul! Nice to see you on the DC.  These questions are hard because there's a lot of "plumbing" to think about.

If I understand things right, you have an environment variable for root set via a script file in /etc/profile.d, and that's working at the Linux shell.  But you don't see it in a Caché terminal, even when you start the instance as root.

Caché daemons will run as the instance owner and user jobs would run with the privilege and environment of the user who logs in.  Otherwise, an ordinary user could open a Caché terminal and work with root privilege.  I'm writing this from memory, and Caché 2015.1 is pretty old now, so there might be some other details that are relevant (and which I'm forgetting).

Another issue is that when you use the sudo or su commands to "become root", you don't necessarily get root's environment unless you use the right options for that.  The manpages for sudo and su should help you figure this out.
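You can see the same effect without needing root: an exported variable is inherited by ordinary child processes, but disappears when the environment is rebuilt from scratch, which is what the login forms (su -, sudo -i) do for root:

```shell
# Exported variables are inherited by child processes...
export DEMO_ROOT=/tmp/demo
sh -c 'echo "inherited: $DEMO_ROOT"'    # prints: inherited: /tmp/demo

# ...but are gone when the environment is scrubbed, as env -i does here
# (and as a fresh login environment does for su - / sudo -i).
env -i /bin/sh -c 'echo "reset: [$DEMO_ROOT]"'   # prints: reset: []
```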

If you can tell us some more about what you are trying to accomplish with the link, we might be able to help further.  If you are trying to create links when the instance starts and have them persist, that might be better handled with actions in SYSTEM^%ZSTART or ^ZSTU.  For that, the WRC might also be able to help.

Good luck.  I hope this at least gives you a trailhead to solve the trouble.

In the errpt output, Roger, it looks like something made your disks unavailable.


Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21297
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE


Probable Causes

Detail Data
8000 0028 0000 0000
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21296
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE

Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    Attempt to bring the Concurrent Volume Group back online

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
0028 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

Given the lost-quorum and lost-communication messages, make sure your hardware and your cluster daemons are healthy.

Nicki, I would start with your account manager and have a chat about how best to migrate.  Since Ensemble and IRIS for Health are different products, the specific versions involved matter a great deal.  A sales engineer on your account team probably has seen migrations such as the one you propose.

As for differences between IRIS and Ensemble, check the Migration Guide which you can download from the WRC.  There's a lot there, but it includes the result of a lot of testing and migration experience.

And of course, if you are finding something that doesn't work as it should, reach out to the WRC.