Hi Elisha,

I happened to work on a somewhat similar request recently, though not with the FTP adapter within a production: I used the %Net.FtpSession class directly.

The short answer is that we do support the "new" SSLUseSessionResumption property, starting with the IRIS 2022.1 release.
The matching documentation entry in the %Net.FtpSession class:

property SSLUseSessionResumption

Using the 2022.1 release and the %Net.FtpSession class, you could adapt your code to include an extra set on the matching property, like "set ftp.SSLUseSessionResumption = 1".

Using the same property within the FTP adapter is a bit more involved:
the matching change will reach the product with the 2022.2+ releases.

I would suggest a quick test using the %Net.FtpSession class on a 2022.1 installation, just to confirm that setting the SSLUseSessionResumption property allows your code to connect to your specific SFTP server.
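A minimal sketch of such a test; the host name, credentials and the "MySSLConfig" SSL configuration name below are placeholders for your environment:

```objectscript
 // Quick 2022.1 test of TLS session resumption with %Net.FtpSession.
 set ftp = ##class(%Net.FtpSession).%New()
 set ftp.SSLConfiguration = "MySSLConfig"   // an existing client SSL/TLS configuration
 set ftp.UseSTARTTLS = 1                    // explicit FTPS (AUTH TLS)
 set ftp.SSLUseSessionResumption = 1        // new in 2022.1
 if ftp.Connect("ftps.example.com","myuser","mypassword") {
     write "Connected: ",ftp.ReturnMessage,!
     do ftp.Logout()
 } else {
     write "Connect failed: ",ftp.ReturnMessage,!
 }
```

If the connect succeeds with the property set (and fails without it), you have confirmed the server requires session resumption.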

Once you have that confirmed, you will probably want to open a WRC Support request to ask whether the change could be made available in your current release.


To get a list of the built-in aliases, try ":?". This is from 2021.1, for example:

 :<number>    Recall command # <number>
 :?           Display help
 :alias       Create/display aliases
 :clear       Clear history buffer
 :history     Display command history
 :unalias     Remove aliases

Note how we indeed don't have :sql in the 2021.1 release; it came later.
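In the meantime you can emulate it on 2021.1 with an alias of your own via ":alias"; a small sketch (the alias name is arbitrary):

```objectscript
USER>:alias sql do $system.SQL.Shell()

USER>:sql
```

The alias simply expands to the ObjectScript command that follows the name, so ":sql" drops you into the interactive SQL shell.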

Good morning Robert,

Indeed "iris session" doesn't exist (yet) in the Windows world, but it will probably come at some point in the future.

The "InterSystems IRIS Adoption Guide" happens to have been updated recently to clarify this point, suggesting the use of the terminal/console/run options instead.

Your example could look as follows:

iris terminal IRIS JOBEXAM %SYS


My understanding is that while you can easily retrieve the environment variables visible to your process through the $system.Util.GetEnviron() interface, there is no equivalent $system.Util.SetEnviron(): setting environment variables can be tricky, risky, and very OS-specific.

If you absolutely need to change an environment variable for your running process, and you are dealing with one specific platform, you could create your own $ZF("SetEnviron",var,value) call-out code, based on OS-specific calls such as the Linux C library function setenv(3).

Again, the short answer is that out of the box you can read but not set environment variables.
If you really do need to set them, you would have to build the necessary $ZF call-out interface for your target OS.
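To illustrate the asymmetry: reading works directly, while writing would go through your own call-out. The "SetEnviron" $ZF entry below is hypothetical and would need to wrap an OS call such as setenv(3):

```objectscript
 // Reading an environment variable is supported out of the box:
 set home = $system.Util.GetEnviron("HOME")
 write "HOME=",home,!

 // Setting one is NOT built in; a hypothetical custom call-out
 // library wrapping the Linux setenv(3) call could then be used as:
 // do $ZF("SetEnviron","MYVAR","myvalue")
```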


With the assumption that running a full integrity check on the primary, live instance is not really an option here, my approach would be along these lines:

- backup of your primary instance using your preferred third party tool

- automate the restore of the backup into a separate environment

- run the integrity check in this second server

I see a couple of advantages in such a scenario:
you verify that your backups are indeed valid and can be restored, quite a good feeling should you ever actually need them.

As an integrity check is quite I/O intensive, you would move all that extra I/O away from your live storage and only load a separate set of disks.

Our documentation includes a section on how to script integrity checks:

"Checking Database Integrity Using the ^Integrity Utility"
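As a hedged sketch of what such a scripted check on the restore server could look like (the database directories are placeholders; please double-check the entry points against the referenced documentation):

```objectscript
 // Check a list of restored databases and display the results.
 set dirs = $listbuild("/restore/mgr/db1/","/restore/mgr/db2/")
 set status = $$CheckList^Integrity(,dirs)   // results collected in a temp global
 do Display^Integrity                        // print the collected results
 if 'status { write "Integrity errors found, keep this restore for analysis",! }
```

The whole sequence (restore, mount, check, report) can then be driven from your scheduler of choice.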

Hi Joyce,

I just tested the documentation steps on my Mac, to confirm that they work correctly with the current Docker release, and it seems I got everything through correctly.

The "permission denied" error points to a filesystem permission issue, and looking at your "docker run" command I have a strong suspicion that the problem comes from the --volume /Users/docker:/external mapping, as anything under /Users is handled quite strictly by macOS (and correctly so, I would add :-)

Try using a different path for your storage volume. This is what I ended up running for my current test (I used a /Temp/durable/ directory to store my key and password file):
docker run --name iris \
> --detach \
> --publish 52773:52773 \
> --volume /Temp/durable/:/external \
> --env ICM_SENTINEL_DIR=/external \
> acme/iris:stable \
> --key /external/iris.key \
> --before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /external/password.txt"

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
b23d2d4c8b1d        acme/iris:stable    "/iris-main --key /e…"   9 minutes ago       Up 9 minutes        0.0.0.0:52773->52773/tcp   iris

Let us know if the /Users/docker path was indeed the cause of your issue.

While possibly a bit out of scope for your specific request, as you seem to be working on a standalone server, I wanted to point out a recent Community article on "VM Backups and Caché freeze/thaw scripts" that contains some useful batch examples too:


The article also discusses the use of OS authentication for the script, etc.

Hi Anzelem,

if your question is "how can I allocate more than 2GB of memory for my shared memory segment", Dmitry already answered it, pointing out that this is an expected limitation of your 32-bit environment; a switch to 64-bit is required to go beyond 2GB.

On the other hand, if your question is "why am I only getting about 1.6GB instead of the full 2GB", the answer is that, starting from your current 1622MB of total shared memory, you can manually increase the configuration values by small amounts until you hit a "Failed to allocate" error. I would not expect you to get much beyond roughly 1.7GB in total:

there are internal structures, at both the Caché and Windows level, that also need to fit in the 32-bit addressable space, reducing the actual room for the shared memory segment.

Hopefully this covers your question; let us know if there are further points worth reviewing together.

Is there any specific reason for not wanting to use the RoutineList() query from the %Library.Routine class?

query RoutineList(spec As %String(MAXLEN=512), dir As %Integer, type As %Integer, nsp As %String)

    This query provides a list of all routines that match the pattern specified by spec.

    spec may contain both * and ? as wildcards, and may consist of more than one comma-delimited selection.
    dir specifies the direction to search in: 1 is forwards and -1 is backwards.
    type is 1 to include OBJ files in the search; the default, 0, includes just INT, MAC, INC, BAS.
    nsp is the namespace to list from. If omitted, the query returns the routines from the current namespace. nsp can be either an explicit or an implied namespace.

Your request could then be implemented with a simple call like:

 s tRS=##class(%ResultSet).%New("%Routine:RoutineList")
 d tRS.Execute("ABC*.INT")
 f  q:'tRS.Next()  s routName=tRS.GetData(1) w "Found ",routName,!
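On more recent versions, an equivalent sketch with %SQL.Statement (same class query, just a different result-set API) could look like:

```objectscript
 set stmt = ##class(%SQL.Statement).%New()
 set sc = stmt.%PrepareClassQuery("%Library.Routine","RoutineList")
 if 'sc { write "Prepare failed",! quit }
 set rs = stmt.%Execute("ABC*.INT")
 while rs.%Next() {
     write "Found ",rs.%GetData(1),!
 }
```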

I have to pass, as my example, expanded with (a simplified version of) your calculated/private Active property, still works as I would expect it to:

Property Name As %String(POPSPEC = "Name()") [ Required ];

Property NickName As %String(POPSPEC = "Name()") [ Private ];

Property Active As %Integer [ Calculated, Private, SqlComputeCode = { Set {Active}=($R(100))}, SqlComputed ];

select Name,NickName,Active from Test.PrivateTest
Name    NickName    Active
Hills,Lawrence B.    Wilson,Barb G.    0
Frost,Brendan H.    Smith,Zoe T.    98
Cooper,Tara C.    Edwards,Mo G.    19
Hanson,Howard H.    Cerri,Buzz X.    74
Hammel,Elmo P.    Klausner,Peter G.    77
Quincy,Robert Q.    Townsend,Alice M.    52