Article
· Apr 26, 2021 3m read

SSH for IRIS container

Why SSH?

If you do not have direct access to the server that runs your IRIS Docker container,
you may still need access to the container outside of "iris session" or "WebTerminal".
With an SSH terminal (PuTTY, KiTTY, ...) you get access inside the container and can then,
depending on your needs, run "iris session iris" or display and manipulate files directly.

Note:
This is not meant to be the default access for the average application user,
but an emergency backdoor for System Management, Support, and Development.

This project is based on the templates from the InterSystems ObjectScript GitHub repository.
There are a few significant extensions:

  • docker-compose.yaml exposes port 22 for SSH
  • Dockerfile installs the SSH server and prepares its startup. You may observe quite a bunch of package updates, as the underlying Ubuntu image is not very fresh
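A minimal sketch of what such a Dockerfile extension could look like (the base image tag and the exact package handling are assumptions; the actual repository may differ):

```dockerfile
# Hypothetical sketch: add an OpenSSH server on top of an IRIS community image
FROM intersystemsdc/iris-community:latest

USER root

# Install the SSH server; sshd needs its privilege-separation directory
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-server && \
    mkdir -p /var/run/sshd && \
    rm -rf /var/lib/apt/lists/*

EXPOSE 22
```

docker-compose.yaml then publishes port 22, either as `- "22"` for a random host port or `- "41022:22"` for a fixed one.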

The rest is pretty default for InterSystems IRIS Community Edition in a docker container.

Prerequisites

Make sure you have git and Docker desktop installed.

Installation

Clone/git pull the repo into any local directory

$ git clone https://github.com/rcemper/SSH-for-IRIS-container.git   

Open the terminal in this directory and run:

$ docker-compose build

Run the IRIS container with

$ docker-compose up -d 

How to Test it:

If you didn't assign a fixed port to the published container port 22, you can run

$ docker ps
e37392a1c7c3   ssh-for-iris-container   "/bin/sh -c '/iris-m…"   2 hours ago   Up 2 hours (unhealthy)   
2188/tcp, 54773/tcp,    
0.0.0.0:41022->22/tcp, 0.0.0.0:41773->1972/tcp, 0.0.0.0:42773->52773/tcp, 0.0.0.0:49716->53773/tcp   

and find the host port assigned to container port 22 in the mapping 0.0.0.0:41022->22/tcp (here it is 41022).
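Instead of reading the full docker ps output, docker port prints the mapping for a single port directly (using the container ID from the example above; substitute your own):

```shell
# Which host port is mapped to container port 22?
$ docker port e37392a1c7c3 22
```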
Next, you connect with PuTTY over SSH to server:assigned_port.
Log in as irisowner with the password of your choice, and you are inside your container.
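If you prefer a command-line client over PuTTY or KiTTY, plain OpenSSH works the same way (host and port taken from the docker ps example above):

```shell
# Connect through the published port; 41022 is the assigned port from the example
$ ssh -p 41022 irisowner@localhost
```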

This is similar to running docker-compose exec iris sh against a local Docker instance.

Example:

login as: irisowner
irisowner@localhost's password:
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.4.72-microsoft-standard-WSL2 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.
Last login: Sat Apr 17 11:10:56 2021 from 172.18.0.1
$
$ iris view

Instance 'IRIS'   (default)
        directory:    /usr/irissys
        versionid:    2020.4.0.524.0com
        datadir:      /usr/irissys
        conf file:    iris.cpf  (SuperServer port = 1972, WebServer = 52773)
        status:       running, since Sat Apr 17 09:12:38 2021
        state:        ok
        product:      InterSystems IRIS

$ iris session iris -U "%SYS"
Node: e37392a1c7c3, Instance: IRIS
%SYS>zpm
zpm:%SYS>list
ssh-for-iris-container 0.0.1
webterminal 4.9.2
zpm 0.2.14
zpm:%SYS>q  
%SYS>d ^zSSH 
This is a placeholder for testing
 if you see it, then the installation was OK   
%SYS>h
$

Demo Video is here

GitHub


This is a very bad idea. There are several issues with your Dockerfile, for instance:

  • Hardcoded password for irisowner and even for root
  • IRIS starts as root, instead of as the irisowner user

Administrators of the running instance should always have access from the Docker host, or through Kubernetes.

A container should include only what is supposed to be in a production environment: no backdoors of any kind.

Pinging @Luca Ravazzolo for comments 

Dear @Dmitry Maslennikov !
Thanks for the compliment "bad idea"!
All my life I have been driven by cross-thinking: leaving the old tracks, doing the undoable, unchaining my mind.
And it was a 99% success.

My ISC colleagues inside and outside the US can confirm this.
@Evgeny Shvarov knows many more details about me than would fit here.

BUT I'm a little bit disappointed. You didn't read the disclaiming note at the top:
it is for developers, supporters, and system managers.
And in addition, my examples are never meant for production use,
but for training and learning. I don't make money with my software.

Just one minor detail:
despite multiple requests, I never got a root password for any IRIS container.
You might have access to this information, as you also have access to other non-public info.
So I had to set one for myself. Cross-thinking!

All the reasoning and other details are in my reply to @Evgeny Shvarov,
since he asked the more important question: WHY?

I read the note, but somebody else will not read it and will use this in production.

Again, if you need root access inside a running container, you are doing something wrong. If you need to install anything, you have to do it in the Dockerfile; if you need to change some system settings, you have to do that in the Dockerfile as well. A container exists just to run one particular application inside, and it gets the permissions that fit that application's needs.

Docker containers are completely different from virtual machines, and it is important to remember that. One container, one application; in our case that is IRIS. The state of the container does not matter at all; what matters is what is in the image. A container may die at any time for any reason, and its content has to be restorable from the image, so your image has to be prepared for this case. If deleting and restarting the container breaks your application, then something is wrong with it, and that has to be fixed in the Dockerfile.

Also, any Dockerfile can be configured with a HEALTHCHECK, which periodically checks the state of the application running inside. With control from the outside, for example with Kubernetes, a container that is not in the expected state will be sacrificed and restarted from the image. The basic IRIS image produced by InterSystems has this as well, so if you just stop IRIS inside, the container will be destroyed after a while.
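As a hedged illustration (not the official IRIS image's actual check), a HEALTHCHECK instruction could probe the internal web server port like this:

```dockerfile
# Mark the container unhealthy when the internal web server stops answering;
# the interval, retries, and probe URL are illustrative assumptions
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -sf http://localhost:52773/ || exit 1
```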

Again, a container is not a virtual machine, and in an organization of any size there should be no restrictions that make direct access inside the container the only option.

Hello Robert 

I can say it is not a bad idea. I have done this for a customer because they still use a terminal application. I also included Kerberos and used the same user ID as on the host (creating the Kerberos user in the Docker user database with the same uid) for each terminal user. So you have the same user ID on the host, in Docker, and in IRIS (cached Kerberos login and authorization over LDAPS).

The benefits of this configuration:

  • the cached Kerberos ticket exists only in the container
  • all file and system access is done with one user ID (security)
  • in case the user gets a shell (which should not be possible in my setup), the user is still in the container shell and not in the host shell

In my setup, IRIS still runs as irisowner and I start the SSH server from outside with docker exec (I haven't found a better solution yet).

Your suggestion is valid
IF there is access with sufficient privileges to the server that hosts Docker.
That is most likely an OS-level system manager or operator who runs all the containers.

BUT to run / check / restart IRIS there is no need to have rights outside the Docker container;
instead, you need direct access to the OS inside the container, without external rights.

The next level is SYSmgr access inside IRIS vs. Developer or User access.

Back to the original scenario:
Running Docker is, to me, from a security point of view the same as running Linux or Windows on ESX.
Would you suggest giving someone access to ESX with enough privileges just to do
Windows system management? I don't think so!
In any midsize or larger organization, there is a strict separation between
HW server, network, virtualization, OS, and application management & operation,
mainly to prevent mistakes and error fixing at the wrong end.

Of course, for me at home with a notebook and 2 desktops, I'm the godfather with all the rights you can think of.

Docker is claimed to replace VMware.
This is only correct if, after installation, you have the same privileges.
If I build my image, I have all access rights.
But with no access to root or similar, I feel cheated.
Sorry, it's like a car without a steering wheel.

Why do you need to restart IRIS in a working Docker container? If you need to change something in a container, the typical approach is to rebuild the container's image and deploy it again.

It looks like you are trying to reinvent some approach working with remote servers to containers.

Why not use the best practices for containers and leverage their power?

What is the scenario in which you need SSH access and constant IRIS restarts with containers?

@Alexey Maslov !
You are totally right.
It is not the final solution but the start of a different scenario.
A password was just the simplest approach to begin with.
I was much more puzzled by the fact that sshd only starts as root
and that it does a very detailed check of the access rights on the internally generated keys.
And I just found no way to start a service from within IRIS.
In the current soft version it is started with docker exec ... as described in README.md and on OEX,
and the password can be provided in a similar way.
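A sketch of that docker exec based start, assuming the service/container is named iris and openssh-server is already baked into the image:

```shell
# Start sshd as root inside the already-running container
$ docker exec --user=root iris service ssh start

# Provide the irisowner password at run time instead of hardcoding it in the image
# ("MySecretPW" is a placeholder)
$ docker exec --user=root iris sh -c "echo 'irisowner:MySecretPW' | chpasswd"
```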

 

@Alexey Maslov 
Following your suggestion, I investigated public-key-based authentication.
And it is of course available (no surprise, it's standard Linux):


$ cd /etc/ssh
$ ls -l
total 580
-rw-r--r-- 1 root root 553122 Mar  4  2019 moduli
-rw-r--r-- 1 root root   1580 Mar  4  2019 ssh_config
-rw------- 1 root root    227 Apr 20 20:32 ssh_host_ecdsa_key
-rw-r--r-- 1 root root    179 Apr 20 20:32 ssh_host_ecdsa_key.pub
-rw------- 1 root root    411 Apr 20 20:32 ssh_host_ed25519_key
-rw-r--r-- 1 root root     99 Apr 20 20:32 ssh_host_ed25519_key.pub
-rw------- 1 root root   1679 Apr 20 20:32 ssh_host_rsa_key
-rw-r--r-- 1 root root    399 Apr 20 20:32 ssh_host_rsa_key.pub

BUT:
- these keys change with every run of docker build
- the client side varies with the platform, client type, etc., and is rather tricky;
  it is for sure beyond the bounds of this demo

For production it makes sense, but not for a download-and-run-within-4-minutes demo.
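For completeness, here is roughly how public-key authentication could still be wired up; the key file name is arbitrary, and the chown/chmod steps reflect the strict permission checks sshd performs:

```shell
# On the client: generate a key pair once
$ ssh-keygen -t ed25519 -f ~/.ssh/iris_demo

# Install the public key for irisowner inside the container
$ docker exec --user=root iris sh -c "mkdir -p /home/irisowner/.ssh"
$ docker cp ~/.ssh/iris_demo.pub iris:/home/irisowner/.ssh/authorized_keys
$ docker exec --user=root iris sh -c \
    "chown -R irisowner /home/irisowner/.ssh && chmod 700 /home/irisowner/.ssh && chmod 600 /home/irisowner/.ssh/authorized_keys"

# Then log in without a password
$ ssh -p 41022 -i ~/.ssh/iris_demo irisowner@localhost
```

Persisting /etc/ssh as a volume would also keep the host keys stable across docker build runs, avoiding "host key changed" warnings on the client.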

The adoption of containers, just like the adoption of a new UI paradigm from CHUI to GUI (think Visual Basic and similar 1990s client-server UI tools) and then to web-based design (formatting, graphical display, screen-estate utilisation and URL links to other resources), forces us to think differently and adopt a new "modus operandi".

When I first started with containers, the first two things I had to figure out were:

  • how do I keep this thing alive? It's not an OS (that's where ccontainermain came from), and
  • how do I securely connect to this thing? :) and so off I was, installing SSHd

Those were Docker's early days, 2014/15.

There was a Spanish company called Tutum, subsequently acquired by Docker, that had a CentOS-based base OS layer with an SSH daemon already installed, and so life was good back then.

Needless to say, as one grows into understanding the technology, the new work paradigm, and the user methodologies, and as the ecosystem of tools grows (consider docker exec, Docker Swarm, Kubernetes, Nomad, AWS ECS/Fargate, etc., but also fully automated CI/CD pipelines with Dev-QA-Sec-Ops), one starts to appreciate that things are done differently.

Since then, I've had many a conversation with people trying to install SSH. It's no secret: we like what we know :-)

I've also had conversations with customers that want to move to a more modern provisioning pipeline, adopt containers, and have a portable and more homogeneous solution for their app. At times there are business constraints... I get that. However, when you start analysing the possible implementation solutions (you have a CHUI-based solution that you want to port: how do you handle individual users? A .profile at the OS level? You might as well have no containers, and you cannot adopt that approach with Kubernetes anyway. Do you handle it all in the containers? Then you must make /etc/passwd durable, and all the $HOME directories, etc.), it becomes super complicated straight away... at that point your container-based provisioning becomes a hindrance vs an enabler.

Bottom line: "The times they are a-changin'", as Bob Dylan used to sing, and if you are interested in this new "cloud-native" way of working, you're better off leaving things behind and adopting a new way of working that has many benefits... even if that means rewriting the CHUI interface.

Ultimately, just because it is doable does not mean it is the right thing to do. Personally, I don't feel I need to SSH into containers when I develop, nor do I see developers needing it. It is easy enough to jump into containers in any environment. OTOH, I do understand the need for better logging and stats (think utilities/sidecars like cAdvisor). I think those types of sidecars should be like leeches and attach themselves as soon as they see an interesting container starting... but that is another story, for automation, monitoring, and the next post :-)

Thank you, @Luca Ravazzolo, it's a great story!
And the CHUI interface is a dead horse, no doubt!
But the need is not an invention; it is a demand from existing customers that fear
losing control over their data and operations, especially if there is nothing
in the basement anymore that you can touch.
So I show that it is possible. I don't judge whether it makes sense.
Like in real life:
- some people climb the Aiguille du Midi with ropes and hooks,
even though there is a cable car installed to the top;
- others drive SUVs and HUMMERs but mostly run the highway
and almost never leave the well-paved roads.

Robert, what is the idea behind publishing the ZPM module "ssh-for-iris-container"?

I installed it and it does nothing:

zpm:IRISAPP>install ssh-for-iris-container

[ssh-for-iris-container]        Reload START
[ssh-for-iris-container]        Reload SUCCESS
[ssh-for-iris-container]        Module object refreshed.
[ssh-for-iris-container]        Validate START
[ssh-for-iris-container]        Validate SUCCESS
[ssh-for-iris-container]        Compile START
[ssh-for-iris-container]        Compile SUCCESS
[ssh-for-iris-container]        Activate START
[ssh-for-iris-container]        Configure START
[ssh-for-iris-container]        Configure SUCCESS
[ssh-for-iris-container]        Activate SUCCESS

zpm:IRISAPP>list

ssh-for-iris-container 0.0.1 
zpm:IRISAPP>q

IRISAPP>

I mean I'm having this:

Hi Robert,

Your demo is now available on InterSystems Developers YouTube:

⏯ SSH for IRIS container Demo

https://www.youtube.com/embed/fC61EPdTDQQ

Thanks for your contribution! 👏🏼

Hello Hannes!
Thanks for the hint! I'll check immediately.
- For the stop:
I've seen this in some cases but could imagine it is related to large global buffers.
The default timeout for docker stop iris is 10 sec, but docker stop -t 60 iris will give it a minute.
The totally safe approach could be:

$ docker exec iris iris stop iris quietly     (so IRIS is down)
$ docker stop iris                            (now stop the container)

I'm not sure I understand the use case for installing SSH when docker exec is already available:

  • Running an IRIS session:
    • docker exec -it iris iris session IRIS
  • Starting a shell:
    • docker exec -it iris bash
  • Starting a shell as root (in an IRIS container with "Normal" security):
    • docker exec -it --user=root iris bash

These commands can be run from a machine other than the host:

  • docker --host=tcp://35.231.179.51:2376 exec -it iris bash

They can be run securely from an outside machine by configuring docker to use TLS:

  • docker --host=tcp://35.231.179.51:2376 --tls --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem exec -it iris bash

In what situations would SSH be preferable?

All your examples are correct.
BUT:
- all your examples assume that Docker is available to the users;
- this prerequisite is not always given, and may even contradict customer-defined rules.
As I mentioned earlier: customers pay, so they define the rules.
Losing a bid just because someone deep in the background dislikes SSH access is not acceptable.

[I learned my lessons using VMware long before it became an "accepted platform"]