Hi Maik,

changePassword.sh is still in the container, but it has moved.  Its new location is on the container's PATH, so if you drop the absolute-path portion and run "changePassword.sh /durable/password_copied.txt", you should be up and running.
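
For example (the container name "iris" here is just a placeholder for yours):

docker exec iris changePassword.sh /durable/password_copied.txt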

changePasswordHash.sh is an internal tool and we don't recommend customers use it.

Hope this helps.

Hi Oliver,

Generally, each unique ISC_DATA_DIRECTORY should be bound to one persistent instance.  If you have two instances you wish to maintain, IRIS1 and IRIS2, they should each have their own unique persistent storage location.  There's nothing preventing you from putting both of them on the same EFS store, as long as they point to different, unique directories within that store.  So your idea of an individual directory tree for each container should work just fine.  (Exception: if you are using mirroring for availability, I would not recommend storing both/all parts of a given mirror set on the same EFS store.)  Two simultaneously running containers/instances should never point at the same mgr/IRIS.DAT; that will not result in a functional application.

I say persistent instance because you can freely stop a container, then start a new one with the same ISC_DATA_DIRECTORY - possibly with a newer version of InterSystems IRIS, or a newer version of your code - and have it run on the same persisted data.
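
As a sketch (paths, names, and $IMAGE here are placeholders to adapt), one EFS mount on the host with a unique subdirectory per instance could look like:

# /efs is the single EFS mount; each instance gets its own subtree
docker run -d --name iris1 -v /efs/iris1:/durable \
  -e ISC_DATA_DIRECTORY=/durable/iris $IMAGE
docker run -d --name iris2 -v /efs/iris2:/durable \
  -e ISC_DATA_DIRECTORY=/durable/iris $IMAGE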

Additionally: depending on your particular needs, you may not wish to use EFS for ISC_DATA_DIRECTORY locations.  EFS is an NFS implementation, which has been known to cause problems with container runtime engines, and EFS will generally have higher disk latency than block-storage alternatives; EBS might be a better choice.  Consult the AWS documentation to be sure!  https://docs.aws.amazon.com/efs/latest/ug/performance.html

This is an interesting hack, but I can't recommend using Docker for Windows or Mac.  There's some detail on this in Shakespeare on Linux Containers, and @Rich Taylor did a great job of laying out the basics.  If you have the ability to run your own Linux VM, I recommend it - and it looks like you're already working on this, which is great!

The file ownership problem on Windows bind mounts can't be fixed: Windows doesn't have user/group/world permissions the way Unix does, and it doesn't understand your Linux uids.  Even when we work around that, the CIFS mechanism used to expose Windows folders inside your container doesn't support all the fsync() options needed to guarantee journal durability.  I don't know for sure, but I suspect the fsync problem is why every write-to-disk DBMS that runs in containers gives the same advice: when running on Docker for Windows, use a named volume, not a bind mount.
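
If you must experiment on Docker for Windows anyway, a named-volume sketch (names here are placeholders) looks like:

docker volume create durable
docker run -d --name iris -v durable:/durable \
  -e ISC_DATA_DIRECTORY=/durable/iris $IMAGE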

This is a security feature.  The environment for things like LD_LIBRARY_PATH is strictly controlled to minimize the risk of unauthorized input.

There's an iris.cpf setting that will help you: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_LibPath
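
For reference, the change lands in the [config] section of iris.cpf; the end state should look roughly like this (/usr/mydir is the example path used in the Dockerfiles below):

[config]
LibPath=/usr/mydir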

Here are two Dockerfiles that would work.  The first edits iris.cpf through a temporary file, which preserves ownership:

FROM ${IMAGE}

ENV MYDIR /usr/mydir

    # Add MYDIR to the LD_LIBRARY_PATH of IRIS subprocesses
RUN sed -e "s%^LibPath=%LibPath=$MYDIR%" /usr/irissys/iris.cpf > /tmp/iris.cpf \
    # sed -i would alter the file's ownership; writing through a temp file preserves it
 && cat /tmp/iris.cpf > /usr/irissys/iris.cpf
The second uses sed -i as root, then resets ownership afterward:

FROM ${IMAGE}

ENV MYDIR /usr/mydir

USER root
    # Add MYDIR to the LD_LIBRARY_PATH of IRIS subprocesses
RUN sed -i -e "s%^LibPath=%LibPath=$MYDIR%" $ISC_PACKAGE_INSTALLDIR/iris.cpf \
    # sed -i alters ownership, so reset it
 && chown $ISC_PACKAGE_MGRUSER:$ISC_PACKAGE_IRISGROUP $ISC_PACKAGE_INSTALLDIR/iris.cpf

USER $ISC_PACKAGE_MGRUSER

That kind of whitespace transformation has serious tradeoffs in ObjectScript.  While they're not commonly used, ObjectScript does have cases where two spaces mean something different from one (argumentless commands), or where a newline and a space are significantly different (legacy versions of no-block FOR, DO, etc.).  The way Docker parses these lines in a Dockerfile, as part of the SHELL mechanism, mangles that whitespace.
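
A quick illustration of the two-space rule (legacy-style code, purely to show where whitespace is load-bearing):

    set x = 0
    ; argumentless FOR: the two spaces after "for" are mandatory syntax
    for  set x = x + 1 quit:x>3
    write x, !

Collapse those two spaces into one and the line no longer parses - exactly the kind of change a whitespace-mangling pipeline can make silently.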

"I would rather expect that some kind of irissession.sh would be implemented by InterSystems."

The reasons I've listed here are why we have chosen not to implement a feature like this.  It is not possible to embed exact ObjectScript syntax inside a Dockerfile.  I recommend, wherever possible, piping as few lines as you can into an irissession, and letting the routines or classmethods handle things from there.  This is part of why QuiesceForBundling exists.

See this done in imageBuildSteps.sh on GitHub: https://github.com/intersystems/container-tools/blob/master/2019.3/offic...

"I also see no credentials being passed?"

As of 2019.3, the IRIS images we provide have OS Authentication enabled.  The net effect is that login to irissession is automatic for the right Unix user.  This does not change anything else; in those images, that user can't log in any other way.

Customers who build their own images from our kits - available on the WRC! - can choose to use this feature or not.

Hi Evgeny,

As clever as this hack is, I strongly recommend against this approach.  This is a fun tool to write, but a dangerous tool to hand to someone who doesn't understand all the factors at play well enough to have written their own.  There are a few problems lurking here just beneath the surface, and taken together they mean that the SHELL directive in Dockerfiles is not our friend.

Error handling

irissession is a tool intended for human, interactive use.  In containers we often make use of it to "bootstrap" some code into IRIS, but there are caveats here.  The Docker build daemon interprets exit status 0 as success, and anything else as failure.  irissession reports errors by printing them:

USER>write foo
WRITE foo
^
<UNDEFINED> *foo


This error doesn't change irissession's exit status, so it isn't caught by an image build daemon.  Nor are bad status values, though I see that @Dmitry Maslennikov has caught those by checking "sc". :)  Additionally, while these errors end an ObjectScript routine or method when uncaught, they do not end execution in irissession, which increases the risk that they get silently swallowed.
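
One way to surface those failures at build time (a sketch, assuming a 2019.x "iris session" entrypoint, an instance named IRIS, and the App.Installer example from later in this thread) is to make a bad %Status terminate the session with a nonzero exit status:

iris session IRIS << 'EOF'
do $SYSTEM.OBJ.Load("Installer.cls", "ck")
set sc = ##class(App.Installer).setup()
if '$Get(sc) do ##class(%SYSTEM.Process).Terminate(, 1)
halt
EOF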

No multi-line blocks

The tool here is similar to pasting your text into irissession, which takes line-oriented input.  This means that someone who looks at the format and uses blocks can get caught off guard when a block like this is read as its component lines:

for i=1:1:10 {
  w !, "i: ", i
}

That gets executed like so:

USER>for i=1:1:10 {

FOR i=1:1:10 {
               ^
<SYNTAX>
USER>  w !, "i: ", i

i: 1
USER>}

}
^
<SYNTAX>
USER>

In simple cases, this is no problem to cram onto one line:

for i=1:1:10  w !, "i: ", i

In complex real-world cases, this paradigm breaks down quickly.

Backslashes

All this, and we still don't quite get the ability to paste in ObjectScript code from elsewhere.  Because of how Dockerfiles handle line endings, each line of ObjectScript we put in still needs a trailing backslash, except for the last one.
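
To make that concrete, the hack ends up looking something like this (assuming the /irissession.sh SHELL wrapper from the original post):

SHELL ["/irissession.sh"]
    # every ObjectScript line except the last needs a trailing backslash,
    # and Docker folds the continued lines together before the shell sees them
RUN do $SYSTEM.OBJ.Load("Installer.cls", "ck") \
    set sc = ##class(App.Installer).setup()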

It's okay, though, because there's a better way.

A better way

Every time you cross the ObjectScript/shell boundary, you have to be careful.  Since you have to be careful there, why not cross it as few times as possible?  Your existing payload is good:

  do $SYSTEM.OBJ.Load("Installer.cls", "ck")
  set sc = ##class(App.Installer).setup()

But it could be better.  If you're writing Installer.cls, why not add all the other calls there?  All of these:

if '$Get(sc) do ##class(%SYSTEM.Process).Terminate(, 1)
do ##class(SYS.Container).QuiesceForBundling()
do ##class(SYS.Container).SetMonitorStateOK("irisowner")
do ##class(Security.Users).UnExpireUserPasswords("*")

can be added into your Installer.cls.  There, you can engage in error handling as sophisticated as you want, with no need for backslashes and no need to keep a tight leash on your use of whitespace.
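
A rough sketch of that consolidation (the method body here is illustrative, not a drop-in):

Class App.Installer
{

ClassMethod setup() As %Status
{
    set sc = $$$OK
    // ... your actual installation steps, each one checked into sc ...
    // the container-prep calls from above, now inside real error handling
    do ##class(SYS.Container).QuiesceForBundling()
    do ##class(SYS.Container).SetMonitorStateOK("irisowner")
    do ##class(Security.Users).UnExpireUserPasswords("*")
    quit sc
}

}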

You'll still need at least one pipe into irissession until we add other features to streamline this process, but your code will be more readable, and your daily builds will be more robust.

InterSystems IRIS 2019.3 includes a lot of new features for containers, several of which are aimed at streamlining this kind of image building.  Take a look at what we've committed to https://github.com/intersystems/container-tools, and what the 2019.3 Class Reference has to say about the SYS.Container class.  These things are intended to make the ObjectScript-in-a-Dockerfile process less frustrating, and to demystify some of the things we do like "kill ^SYS("NODE")".

Hi Brendan,

For write-style operations, my recommendation is: do not allow this in production.  Part of the gain from containerization is that production becomes as automation-friendly as possible, and that humans manipulate it in potentially unpredictable ways as rarely as possible.

For read-style operations - i.e., logging in and running "do ^GetReports" - I don't expect this to be a common use case. Containers usually operate on service models that involve a minimum of command-line interactivity, and as few ways as possible to access the application running inside.  Most of those methods are usually network-based rather than terminal-based.  You're absolutely correct that giving folks membership in the "docker" group on a host is equivalent to giving them host-root, which means it may be unsuitable for your use case here.

Installing sshd inside the image is an option, but will come with some complexity.  You'll need to start sshd in your iris-main --before, and preferably also stop it gracefully on the other side. You'll also have to clear the hurdle where the /etc/passwd and /etc/group entries inside the container are relatively unrelated to the entries outside of it, though if you have the container connect to the same NIS server as the host does, that problem should become fairly moot.
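
As a sketch (the port mapping and sshd path are assumptions to adapt, and sshd must already be installed in your image):

docker run -d --name iris -p 2222:22 $IMAGE \
  --before "/usr/sbin/sshd"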

Side note on users/groups in the container vs. on the host: integer UIDs and GIDs are the same on both sides, which can be important when writing files to your Durable %SYS area or other host bind mounts, but the number-to-name mappings are totally independent; they're essentially two different interpretations of the same "primary key".  This can be a plus: it allows a custom approach to users/groups inside a container, and when combined with named Docker volumes (instead of bind mounts), you can create a whole distinct user/group ecosystem if you like.

I don't have a lot of experience with Web Terminal, but it would be my go-to option for letting users log in remotely via their IRIS credentials while bypassing the need for any management of Linux credentials inside the container.

If I'm wrong and this turns out to be a commonly-desired use case, we can look into other solutions or new features here.

Cheers,
Conor

Siva,

This message is something you can safely ignore:

(376) 2 System appears to have failed over from node bbce08f01ef1

This message is printed because the hostname in your docker build environment is different from the hostname in your container runtime, which causes the system to think this is a potential mirror-failover situation and to set a warning state.  You can suppress the message by killing the global ^SYS("NODE") during the docker build step.
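
For example, in your Dockerfile (a sketch assuming a 2019.x IRIS image and an instance named IRIS):

RUN iris start IRIS \
 && printf 'kill ^SYS("NODE")\nhalt\n' | iris session IRIS -U %SYS \
 && iris stop IRIS quietly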

I'm not sure what the problem is here, but one possibility is that Docker has changed storage drivers in the last year or two, and that the version of Caché you're using is only supported on the devicemapper storage driver.  You may wish to consider a version of IRIS that supports overlay2, which is the default storage driver on almost all modern versions of Docker.
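
You can check which storage driver your daemon is currently using with:

docker info --format '{{.Driver}}'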

Rich,

Docker's approach to the docker0 bridge does default to 172.17.x.x, but it can be configured.  See https://docs.docker.com/network/bridge/#use-the-default-bridge-network

Note that on Linux installations of Docker, while dockerd will look for /etc/docker/daemon.json and use what it finds there, the file is not created by default.  If you try to edit the file and don't find one, this is okay - just create it, like so:

{
  "bip": "192.168.3.1/24"
}

Cheers,
Conor

A small suggestion: you may not wish to use "--log /usr/irissys/mgr/messages.log" in conjunction with ISC_DATA_DIRECTORY.  As soon as the system has finished coming up, all ongoing logging will be written to ISC_DATA_DIRECTORY/mgr/messages.log.  The docker-compose file you have here will result in the output of "docker logs iris2019.1" being relatively short, ending in "Executing iris qlist", and omitting everything after the first few seconds of startup.

For better logging to container stdout, you might wish to use something like "--log /shared/iris_conf.d/mgr/messages.log".
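
For instance, a compose sketch along those lines (${IMAGE} and the volume layout are placeholders to adapt):

version: '3'
services:
  iris:
    image: ${IMAGE}
    container_name: iris2019.1
    command: --log /shared/iris_conf.d/mgr/messages.log
    environment:
      - ISC_DATA_DIRECTORY=/shared/iris_conf.d
    volumes:
      - ./shared:/shared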

Hope this helps!

As a general rule, any RUN command in a Dockerfile that involves a running Caché instance (like calling $system.OBJ.Load()) should be called in the form of:

RUN ccontrol start INSTANCE \
 && /loadWebTerminal.sh \
 && ccontrol stop INSTANCE quietly

Other commands can be mixed in, but best practice is to start Caché, use Caché, and stop Caché, in that order, all in a single RUN command.  More complex examples could look like this:

RUN /editCacheCPF.sh \
 && /usr/bin/rsyslogd \
 && ccontrol start INSTANCE \
 && /loadWebTerminal.sh \
 && /checkCacheHealth.sh \
 && ccontrol stop INSTANCE quietly \
 && /deleteJournalFiles.sh

Hope that helps.