The adoption of containers, just like the adoption of a new UI paradigm from CHUI to GUI (think Visual Basic and similar 1990s client-server UI tools) to web-based design (formatting, graphical display, screen-estate utilisation and URL links to other resources), forces us to think differently and adopt a new "modus operandi".

When I first started with containers, the first two things I had to figure out were:

  • How do I keep this thing alive? It's not an OS (that's where ccontainermain came from), and
  • How do I securely connect to this thing? :) And so off I went installing SSHd.

Those were the early Docker days, 2014/15.

There was a Spanish company called Tutum (subsequently acquired by Docker) that offered a CentOS-based base OS layer with an SSH daemon already installed, and so life was good back then.

Needless to say, as one grows into understanding the technology, the new work paradigm and user methodologies mature along with the ecosystem of tools (consider docker exec, Docker Swarm, Kubernetes, Nomad, AWS ECS/Fargate, etc., but also fully automated CI/CD pipelines with Dev-QA-Sec-Ops), and one starts to appreciate that things are done differently.

Since then, I've had many a conversation with people trying to install SSH. It's no secret: we like what we know :-)

I've also had conversations with customers that want to move to a more modern provisioning pipeline, adopt containers, and have a portable, more homogeneous solution for their app. At times there are business constraints... I get that. However, when you start analysing the possible implementation solutions, the complications pile up quickly: you have a CHUI-based solution you want to port, so how do you handle individual users? With a .profile at the OS level? You might as well have no containers, and you cannot adopt that approach with Kubernetes anyway. Do you handle it all in the containers? Then you must make /etc/passwd durable, along with all the $HOME directories, etc. It becomes super complicated straight away, and at that point your container-based provisioning becomes a hindrance rather than an enabler.

Bottom line: "the times they are a-changin'", as Bob Dylan sang, and if you are interested in this new "cloud-native" way of working you're better off leaving old habits behind and adopting a new way of working that has many benefits... even if that means rewriting the CHUI interface.

Ultimately, just because it is doable does not mean it is the right thing to do. Personally, I don't feel I need to SSH into containers when I develop, nor do I see developers needing it. It is easy enough to jump into containers in any environment (see the sketch below). OTOH, I do understand the need to have better logging and stats (think utilities/sidecars like cAdvisor). I think those types of sidecars should be like leeches and attach themselves as soon as they see an interesting container starting... but that is another story for automation, monitoring and for the next story :-)
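
For example, getting a shell inside a running container needs no SSH daemon at all; the container and pod names below are placeholders:

```bash
# Docker: open an interactive shell in a running container named "myapp" (hypothetical name)
docker exec -it myapp /bin/bash

# Kubernetes: same idea for a pod named "myapp-pod" (hypothetical name)
kubectl exec -it myapp-pod -- /bin/bash
```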

Hi @Michael Jobe 

Yes, you can post here or open a WRC ticket.

The error usually denotes a privilege issue. I see that the file name is not the default we ship the product with, which is isc_prometheus.yml. Also, the file should have its privileges set to 764, i.e. rwxrw-r--.

Another important thing is to use the start.sh script, which checks for the correct privileges on directories and files (see the quick check below).
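
As a quick sanity check, something along these lines should do (run it from wherever your configuration file actually lives; the relative paths here are assumptions):

```bash
# verify the current owner/group and mode of the config file
ls -l isc_prometheus.yml

# set the expected 764 (rwxrw-r--) privileges
chmod 764 isc_prometheus.yml

# then start via the shipped script, which re-validates directory/file privileges
./start.sh
```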

Let us know how you get on.

Thanks

Hi Raj:

This article is not about AWS, but it clearly explains how to set things up with GitLab, so I hope you can find your answers there.

If you use containers, when you build them you'd be installing your solution code from a file or a series of files: you'd probably copy them into the container and then import and compile them. You could also do that beforehand and just copy into the container the IRIS.DAT that contains the code... it depends. A sketch of the first approach is below.
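
As an illustration only, a minimal Dockerfile for the copy-and-compile approach might look like this (the base image tag, paths and source directory are assumptions; adapt them to your project, or drive this step via ZPM instead):

```dockerfile
# Assumption: building on the InterSystems IRIS Community image; tag and paths are illustrative
FROM intersystems/iris-community:latest-em

# Copy the ObjectScript sources into the image
COPY --chown=irisowner:irisowner src/ /opt/myapp/src/

# Load and compile the code at build time, then shut the instance down cleanly
# (the classes are loaded into the default USER namespace here)
RUN iris start IRIS \
 && iris session IRIS -U USER '##class(%SYSTEM.OBJ).LoadDir("/opt/myapp/src/","ck",,1)' \
 && iris stop IRIS quietly
```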

You should also look at and consider the ZPM package manager for packaging and moving your code around.
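
For illustration, installing a published package from the ZPM shell inside an IRIS session looks roughly like this (the package name is a placeholder):

```
USER>zpm
zpm:USER>install my-app-package
```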

I hope this is of some help in your investigation.