@Lorenzo Scalese great way of exposing the IRIS internal API! I like it and I like the way the community brings innovation and supports the needs of users. Great effort, Lorenzo!

I also wanted to draw attention to a utility that InterSystems has supported for several versions now. We call this feature the CPF merge feature.

Q: What is the CPF merge feature?

A: It's the capability to configure an instance dynamically from the outside. It can be used with any configuration management tool (Chef, Puppet, Ansible, Salt), with a simple bash script, with any cloud provider provisioning tool (AWS CloudFormation, Terraform) or with an orchestrator like Kubernetes. A user defines the desired end state of an IRIS instance, the operation is executed idempotently, and all you need is an environment variable called ISC_CPF_MERGE_FILE=the_file_that_holds_my_desired_config
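As a minimal sketch of how that looks with plain Docker (the image tag, paths and file name below are purely illustrative, not prescriptive):

# Illustrative only: mount a host directory containing the merge file and
# point the instance at it via ISC_CPF_MERGE_FILE
docker run -d --name iris \
  -v /host/config:/external \
  -e ISC_CPF_MERGE_FILE=/external/merge.cpf \
  containers.intersystems.com/intersystems/iris:2021.1.0.215.0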

The CPF merge file could have been JSON, YAML, TOML or whatever, but we decided to go with the familiar CPF format we already know, for now. The CPF merge file provides a way to Create, Delete and Update instance resources.
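For instance, the merge.cpf referenced above could look something like this (the parameter values and action arguments are purely illustrative; the full list of supported actions is in the documentation):

# Illustrative merge file: tweak a couple of [config] parameters and
# declare resources that must exist; the merge is applied idempotently
cat > /host/config/merge.cpf <<'EOF'
[config]
globals=0,0,800,0,0,0
gmheap=256000

[Actions]
CreateDatabase:Name=APPDATA,Directory=/usr/irissys/mgr/appdata
CreateNamespace:Name=APP,Globals=APPDATA,Routines=APPDATA
EOF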

The doc.

Some Examples - Note how the CPF merge feature not only helps with single-instance configuration but also automates more complex cluster configurations like mirror pairs and sharded architecture topologies.

I hope this is useful to the reader who is seeking more elegant and easy ways to automate InterSystems IRIS clusters.

I see both of them:

{
  "RepositoryName": "intersystems/arbiter",
  "Tags": [
    "2019.1.1.615.1",
    "2020.1.0.215.0",
    "2020.1.1.408.0",
    "2020.2.0.211.0",
    "2020.3.0.221.0",
    "2020.4.0.547.0",
    "2021.1.0.215.0"
  ]
},
{
  "RepositoryName": "intersystems/arbiter-arm64",
  "Tags": [
    "2020.4.0.547.0"
  ]
},

--

Command used

docker run --rm carinadigital/docker-ls \
  docker-ls \
    -u luxabc \
    -p abcdefghijklmnopqrstuvxyz0987654321 \
    --registry https://containers.intersystems.com \
      repositories \
        --level 2 \
        --json

--

The adoption of containers, just like the adoption of a new UI paradigm from CHUI to GUI (think Visual Basic and similar 1990s client-server UI tools) and then to web-based design (formatting, graphical display, estate utilisation and URL links to other resources), forces us to think differently and adopt a new "modus operandi".

When I first started with containers, the first two things I had to figure out were:

  • how do I keep this thing alive? It's not an OS (that's where ccontainermain came from), and
  • how do I securely connect to this thing? :) and so off I went installing SSHd

Those were the early Docker days, 2014/15.

There was a Spanish company called Tutum, subsequently acquired by Docker, that had a CentOS-based base-OS layer with an SSH daemon already installed, and so life was good back then.

Needless to say, as one grows into understanding the technology, the new work paradigm and methodologies grow along with the ecosystem of tools (consider docker exec, Docker Swarm, Kubernetes, Nomad, AWS ECS/Fargate, etc., but also fully automated CI/CD pipelines with Dev-QA-Sec-Ops), and one starts to appreciate that things are done differently.

Since then, I've had many a conversation with people trying to install SSH. It's no secret: we like what we know :-)

I've also had conversations with customers that want to move to a more modern provisioning pipeline, adopt containers and have a portable and more homogeneous solution for their app. At times there are business constraints... I get that. However, when you start analysing the possible implementation solutions (say you have a CHUI-based solution you want to port: how do you handle individual users? .profile at the OS level? You might as well have no containers, and you cannot adopt that approach with Kubernetes anyway. Do you handle it all in the containers? Then you must make /etc/passwd durable, plus all the $HOME directories, etc. etc.), it becomes super complicated straight away... at that point your container-based provisioning becomes a hindrance vs an enabler.

Bottom line: "the times they are a-changin'", as Bob Dylan sang, and if you are interested in this new "cloud-native" way of working, you're better off leaving some things behind and adopting a new way of working that has many benefits... even if that means rewriting the CHUI interface.

Ultimately, just because it is doable does not mean it is the right thing to do. Personally, I don't feel I need to SSH into containers when I develop, nor do I see developers needing it. It is easy enough to jump into containers in any environment (see the one-liners below). OTOH I do understand the need to have better logging and stats (think utilities/side-cars like cAdvisor). I think those types of sidecars should be like leeches and attach themselves as soon as they see an interesting container starting... but that is another story about automation and monitoring, for the next post :-)
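For example (the container and pod names here are just placeholders), getting a shell or an IRIS terminal without any SSH daemon is a one-liner:

# Illustrative: exec into a running container instead of SSHing into it
docker exec -it my-iris-container bash
docker exec -it my-iris-container iris session IRIS

# same idea on Kubernetes
kubectl exec -it my-iris-pod -- bash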

Hi @Michael Jobe 

Yes, you can post here or open a WRC ticket.

The error usually denotes a privilege issue. I see that the file name is not the default we ship the product with, which is isc_prometheus.yml. Also, the file should have its permissions set to 764, i.e. rwxrw-r--.
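Something along these lines, run from the directory holding the file, should confirm and fix it:

# check and, if needed, set the expected permissions on the config file
ls -l isc_prometheus.yml
chmod 764 isc_prometheus.yml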

Another important thing would be to use the start.sh script, which checks for the correct permissions on directories and files.

Let us know how you get on.

Thanks