@David, yes, ICM could help and would be very useful if you had provisioned the cluster with it. Please note that it supports both container-based and containerless deployments.

I think that in the case of @Sylvie Greverend we have a tough case: we are missing a generic REST API for some of these operations, and then again you get into the issue of who is authorized to run these operations. How do you handle keys and/or certificates? Etc. So it's not an easily solvable issue.

It also appears that the environment is not homogeneous (most probably not a cloud cluster, where "things" are typically more "even"), so we have the issue of modularity and the need to be dynamic as we approach the different services (IRIS instances).

I'd like to hear more about the use case to get a fuller picture. I understand the frustration. We are looking at innovating in this area. 

For the time being, it sounds like you could do with a dictionary/table/XML/JSON of your nodes that defines the operations you want to run on each instance. Based on that and a %Installer XML template (don't be fooled by the class name; it can be used for configuring an instance post-installation), create a different config for each instance. Then architect a delivery mechanism for the installer/config (bash, Ansible, etc.) and run it (load the code; run a method).
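As a rough sketch of that idea (the file names, roles, and the MyApp.Installer class are all hypothetical), one node table can drive a per-instance config plus the delivery step:

```shell
#!/bin/sh
# Minimal sketch: drive per-instance %Installer runs from one node table.
# nodes.csv, the roles, and MyApp.Installer are hypothetical names.
cat > nodes.csv <<'EOF'
host,instance,role
app01,IRIS,web
db01,IRIS,data
EOF

while IFS=, read -r host instance role; do
  [ "$host" = "host" ] && continue   # skip the header row
  # One small config file per node, to be consumed by the installer manifest
  cat > "config-${host}.txt" <<EOF
Namespace=APP
Role=${role}
EOF
  # The delivery step (ssh, Ansible, etc.) would load the installer class
  # and run its setup method on each instance, e.g.:
  echo "ssh ${host} iris session ${instance} -U %SYS '##class(MyApp.Installer).setup()'"
done < nodes.csv
```

An Ansible playbook could replace the ssh loop; the point is that a single node table drives every instance-specific config.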

Doc on %Installer and an example.

The above idea might sound like overkill, but with a varied and wide range of instances, all with different configurations and needs, there is no other way to be dynamic, modular & efficient IMO.

I hope to see you at Global Summit 2019 for a chat on these types of issues.

We have a session on ICM, plus some interesting preliminary news in this area, and we'll be talking about Kubernetes too if this is an area of system management that interests you.

TTYL

Nice intro @Patrick Jamieson 

Anybody out there willing to post an intro like the above for Windows? And also covering Docker volumes for the Durable %SYS feature? They are nice, flexible and, just like containers, portable :) I think they are ideal for development, testing & CI/CD provisioning pipelines. Production, of course, is another matter, as you'd want to use bind-mounted volumes backed by enterprise-level storage services with appropriate resiliency and performance (SAN, SDS, storage plugins or CSI interfaces).
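For the volume part, here's a minimal sketch of running an IRIS container with Durable %SYS on a named Docker volume (the image tag, names, and paths are placeholders); it writes a helper script so you can review the docker commands before running them:

```shell
#!/bin/sh
# Minimal sketch, Durable %SYS on a named Docker volume.
# Image tag, volume name, and mount point are placeholders; this writes a
# helper script rather than invoking docker directly, so review it first.
cat > run-iris.sh <<'EOF'
#!/bin/sh
docker volume create iris-data
docker run -d --name iris \
  -p 52773:52773 \
  -v iris-data:/durable \
  -e ISC_DATA_DIRECTORY=/durable/iris \
  intersystems/iris:2019.1
EOF
chmod +x run-iris.sh
```

With ISC_DATA_DIRECTORY pointing into the mounted volume, the instance-specific data survives replacing the container.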

Thanks

@James Fitzpatrick: Caché and Ensemble containers are not and will not be available. However, our new InterSystems IRIS product is fully containerized. Please give it a try. You now have several options:

  • Pull the Community Edition, which you can find on the Docker Hub store,
  • Run it in the cloud marketplaces (AWS, Azure and Google), or
  • Download the tar.gz container from the WRC page.

Durable %SYS is available for your containers. However, we maintain the containers we create with security checks and improvements all the time, so the advice is to derive your image from ours, as in general it should make things easier: FROM intersystems/iris:
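A minimal sketch of such a derived image (the tag, paths, and app name are placeholders, not a recommendation for your layout):

```shell
#!/bin/sh
# Minimal sketch: derive your own image from the InterSystems base image.
# Tag, source path, and target path are placeholders.
cat > Dockerfile <<'EOF'
FROM intersystems/iris:2019.1
# Layer your own code/config on top of the maintained base image
COPY src/ /opt/myapp/src/
EOF
# Then build it with, e.g.:
# docker build -t myapp-iris .
```

That way you pick up our security fixes simply by rebuilding against a newer base tag.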

You can still customize the container the way you want it...

HTH

Hello Everybody,

I think it's worth clarifying that since 2016 (the date of the original post), InterSystems has been working on offering its main product, InterSystems IRIS, in an officially supported container. The officially supported container arrived with the launch of InterSystems IRIS 2018.1. Today this container is also Docker-certified.

You can find the Docker-certified Community Edition of InterSystems IRIS in the Docker Hub repository at this link. Other images will be posted as soon as Docker comes out with its new partner site supporting BYOL containers.

It is also important to note that, although many customers have leveraged the flexibility of containers with Caché and Ensemble, in general the experience is sub-optimal. We have made many improvements that remove much of that friction. Consider the Durable %SYS feature, for example, or the container's main process, iris-main. We would like to encourage you to test the InterSystems IRIS containers ASAP.

Furthermore, we keep trying to make the container as compact as possible, and we keep scanning and maintaining it against known security vulnerabilities. Finally, you'd be running a certified image that is supported by both InterSystems and Docker.

For more info on InterSystems IRIS container see our container documentation.

Also, don't forget to review the Docker EE Compatibility Matrix, which gives all the details on supported host OSs and storage drivers. Please note how the overlay2 container storage driver is becoming predominant as the default driver in many distros.

Details of supported OSs, container engines and exceptions of InterSystems IRIS 2019.1 container are found here.

Hi Dmitry, sure.

So, GKE, if anything, makes things easier in general. Also, you might be interested to know that Google has just announced GKE for on-prem (it's in alpha, and this is a Google alpha ;) so expect a beta by the fall). This is super-cool news IMO. You'll be able to seamlessly manage your K8s clusters on-prem and in the cloud. Furthermore, you'll also be able to grow or expand your on-prem cluster into the cloud as you need more resources.

Anyway, back to your question: you need to define PVs, or persistent volumes, in K8s. In GKE, go to the Kubernetes menu --> Storage, and you'll be able to define your cluster-wide storage there.
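If you prefer manifests over the console, a minimal sketch of a claim for durable IRIS data could look like this (the name and size are placeholders; GKE provisions the backing disk dynamically):

```shell
#!/bin/sh
# Minimal sketch: a PersistentVolumeClaim for durable IRIS data on GKE.
# Name and size are placeholders; GKE's default StorageClass provisions
# the underlying disk dynamically when the claim is created.
cat > iris-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iris-durable
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
# Then create it with:
# kubectl apply -f iris-pvc.yaml
```

Your pod spec would then mount the claim at the directory used for Durable %SYS.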

HTH

PS: If you are getting worried about Docker, you shouldn't.

There is already version 1.0.1 of the Open Container Initiative (OCI) specification, which was set up so that implementers could agree on and adhere to a standard for how containers are run and how images are handled. Docker and other prominent players like CoreOS (now owned by Red Hat), Red Hat itself, SUSE, Microsoft, Intel, VMware, AWS, IBM and many others are part of this initiative.

One would hope our investment is safeguarded.

NP Sebastian.

You could handle %SYS persistence with creativity :)

Given that you'd have a script or a container orchestration tool (Mesosphere, Kubernetes, Rancher, Nomad, etc.) to handle running a new version of your app, you'd have to factor in exporting those credentials and security settings before stopping your container. Then, as you spin up the new one, you'd import the same. It's a workaround if you like, but doable, IMO.
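A rough sketch of that workaround follows. The Security.System ExportAll/ImportAll calls, container names, and paths are all assumptions to check against your version; the script is only written out here, not executed, so you can adapt it first:

```shell
#!/bin/sh
# Minimal sketch of the export/import workaround. Class/method names,
# container names, and /external (a bind-mounted host path shared by both
# containers) are assumptions to verify against your version.
cat > migrate-security.sh <<'EOF'
#!/bin/sh
# 1. Before stopping the old container: export users, roles, SSL configs, etc.
echo 'do ##class(Security.System).ExportAll("/external/security.xml") halt' \
  | docker exec -i iris-old iris session IRIS -U %SYS
docker stop iris-old

# 2. Start the new version with /external bind-mounted (run command omitted),
#    then import the same settings into it:
echo 'do ##class(Security.System).ImportAll("/external/security.xml") halt' \
  | docker exec -i iris-new iris session IRIS -U %SYS
EOF
chmod +x migrate-security.sh
```

Your orchestrator's pre-stop and post-start hooks are a natural place to run the two halves.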

Having a DB you probably have to think about schema migration and other things anyway so... 

HTH