Luca Ravazzolo · Oct 7, 2019

Hi Brendan,

So you must have a CHUI-based app, or part of one, still on CHUI/SSH. On one hand I'd like to suggest it's time to move on to something more modern, as containers are generally seen as backend services (UDP, TCP, HTTP, etc.); on the other, I do understand that it's difficult to evolve quickly.

A few years ago I used Tutum containers, based on Ubuntu and CentOS, that had sshd in them. Docker bought Tutum and those containers disappeared. You can probably still find them on Docker Hub, but they'd be old and stagnating.

Since then you've probably seen this link in the Docker docs. One remaining issue, as mentioned, is user mapping. The WebTerminal or managing your own sshd are the two most realistic options right now.

Apparently InterSystems is working on something that may be useful to you, but their lips are sealed :)

Luca Ravazzolo · Sep 4, 2019

@David, yes, ICM could help and be very useful if you had provisioned the cluster with it. Please note that it supports both container-based and containerless deployments.

I think that in the case of @Sylvie Greverend we have a tough case, as we are missing a generic REST API for some of these operations. And then you get into the issues of who is authorized to run these ops, how you handle keys and/or certificates, etc. So it's not an easily solvable problem.

It also appears that the environment is not homogeneous (most probably not a cloud cluster, where "things" are typically more "even"), so we have the issue of modularity and the need to be dynamic as we approach the different services (IRIS instances).

I'd like to hear more about the use case to get a fuller picture. I understand the frustration. We are looking at innovating in this area. 

For the time being, it sounds like you could do with a dictionary/table/XML/JSON of your nodes that defines the operations you want to run on each instance. Based on that, and on a %Installer XML template (don't be fooled by the class name; it can be used for configuring an instance post-installation), create a different config for each instance. Subsequently, architect a delivery mechanism for the installer/config (bash, Ansible, etc.) and run it (load the code; run a method).

Doc on %Installer and an example.
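As a rough illustration of the idea, here is a hedged sketch; every name in it (the nodes.json layout, the App.Installer class, the instance name IRIS, the hosts and paths) is an assumption for illustration, not a prescribed layout. A small JSON node table drives which installer config is delivered to, and run on, each instance:

```shell
#!/bin/sh
# Hypothetical sketch only. Assumes a nodes.json like
#   [ {"host": "app1", "namespace": "APP1"},
#     {"host": "app2", "namespace": "APP2"} ]
# plus an exported App.Installer class containing a %Installer manifest,
# and jq/scp/ssh available on the control machine.

jq -r '.[] | "\(.host) \(.namespace)"' nodes.json |
while read -r host ns; do
  # deliver the installer class and a tiny driver script to the node
  scp App.Installer.xml "$host:/tmp/"
  printf 'Do $SYSTEM.OBJ.Load("/tmp/App.Installer.xml","ck")\nSet vars("NAMESPACE")="%s"\nDo ##class(App.Installer).setup(.vars)\nHalt\n' "$ns" > run.txt
  scp run.txt "$host:/tmp/run.txt"

  # run it inside the instance (assumes an instance named IRIS)
  ssh "$host" 'iris session IRIS -U %SYS < /tmp/run.txt'
done
```

The point is only the shape: one table of nodes, one installer template, one delivery loop.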

The above idea might sound like overkill but with a varied and wide range of instances all with different configuration and needs, there is no other way to be dynamic, modular & efficient IMO.

I hope to see you at Global Summit 2019 for a chat on these types of issues.

We have a session on ICM but also some interesting preliminary news in this area and we'll be talking about Kubernetes too if this is an area of system management that interests you.

TTYL

Luca Ravazzolo · Jun 14, 2019

Nice intro @Patrick Jamieson 

Anybody out there willing to post an intro like the above for Windows? And also covering Docker volumes for the Durable %SYS feature? They are nice, flexible and portable, just like containers :) I think they are ideal for development, testing & CI/CD provisioning pipelines. Production, of course, is another matter, as you'd want to use bind-mounted volumes backed by an enterprise-level storage service with appropriate resiliency and performance (SAN, SDS, pro-storage plugins or CSI interfaces).
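For completeness, a hedged sketch of the named-volume idea (the image tag, volume name and paths are illustrative; Durable %SYS and its ISC_DATA_DIRECTORY variable are documented, but check the current docs for exact conventions):

```shell
# create a named volume to hold the durable instance data
docker volume create iris-durable

# run IRIS with Durable %SYS pointed at the named volume,
# so instance data survives removal of the container
docker run -d --name iris \
  -p 52773:52773 \
  -v iris-durable:/durable \
  -e ISC_DATA_DIRECTORY=/durable/irissys \
  intersystems/iris:<tag>
```

Removing and re-running the container against the same named volume picks up the existing instance data.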

Thanks

Luca Ravazzolo · May 13, 2019

@Fitzpatrick. Caché and Ensemble containers are not and will not be available. However, our new InterSystems IRIS product is fully containerized. Please give it a try. You now have options to:

  • pull the Community Edition from the Docker Hub store,
  • play with it in the cloud marketplaces (AWS, Azure and Google), or
  • download the tar.gz container from the WRC page.

Durable %SYS is available for your containers. However, we maintain the containers we create, with security checks and improvements, all the time. The advice is to derive yours from ours, as in general it makes things easier: FROM intersystems/iris:<tag>

You can still customize the container the way you want it...
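For example, a derived image might look roughly like this hedged sketch (the tag, paths and script names are assumptions for illustration, not an official layout):

```dockerfile
# sketch only: tag, paths and script names are illustrative
FROM intersystems/iris:<tag>

# add your application code and a build-time setup script
COPY src/ /opt/app/src/
COPY iris.script /tmp/iris.script

# load and compile the application while building the image,
# starting and stopping the instance around the session
RUN iris start IRIS \
 && iris session IRIS < /tmp/iris.script \
 && iris stop IRIS quietly
```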

HTH

Luca Ravazzolo · Mar 29, 2019

Hello Everybody,

I think it's worth clarifying that since 2016 (the date of the original post), InterSystems has been working on offering its main product, InterSystems IRIS, in an officially supported container. The officially supported container arrived with the launch of InterSystems IRIS 2018.1. Today this container is also Docker certified.

You can find the Docker certified community edition of InterSystems IRIS in the Docker Hub repository at this link. Other images will be posted as soon as Docker comes out with their new partner site supporting BYOL containers and the rest.

It is also important to note that, although many customers have leveraged the flexibility of containers with Caché and Ensemble, in general the experience is sub-optimal. We have made many improvements that remove much of the friction and many of the issues. Consider the Durable %SYS feature, for example, or the container main process, iris-main. We would like to encourage you to test the InterSystems IRIS containers ASAP.

Furthermore, we keep working to make the container as compact as possible, and we keep scanning it and maintaining it against known security vulnerabilities. Finally, you know you'd be running with a certified image that is supported by both InterSystems and Docker.

For more info on InterSystems IRIS container see our container documentation.

Also, don't forget to review the Docker EE Compatibility Matrix, which gives all the details on the supported host OSs and storage drivers. Please note how the overlay2 container storage driver is becoming predominant as the default driver in many distros.

Details of the supported OSs, container engines and exceptions for the InterSystems IRIS 2019.1 container can be found here.

Luca Ravazzolo · Mar 18, 2019

Nice one Evgeny! I like it!

I'm sure it'll help all those that want to leverage the agility of containers and our quarterly container releases.

Luca Ravazzolo · Jul 26, 2018

Hi Dmitry, sure.

So, GKE, if anything, makes things easier in general. Also, you might be interested to know that Google has just announced GKE for on-prem (it's alpha (this is Google alpha ;), so expect a beta by the fall). This is super-cool news, IMO. You'll be able to seamlessly manage your K8s clusters on-prem and in the cloud. Furthermore, you'll also be able to expand your on-prem cluster into the cloud as and when you need more resources.

Anyway, back to your question: you need to define "PVs", or persistent volumes, in K8s. In GKE: go to the Kubernetes menu --> Storage, and you'll be able to define your cluster-wide storage there.
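You can also declare the storage in a manifest. A minimal hedged sketch (the name and size are illustrative) of a PersistentVolumeClaim, which GKE's default StorageClass can satisfy by dynamically provisioning a persistent disk:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iris-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce        # one node mounts it read-write
  resources:
    requests:
      storage: 20Gi        # illustrative size
```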

HTH

Luca Ravazzolo · Jul 25, 2018

Hi Dmitry,

It looks like they put up with us :-) for about a year (see reported issues)

We are addressing this. Stay tuned, and thanks for the prompt alert.

Luca Ravazzolo · May 23, 2018

Great job Murray on much-needed storage optimization that we should all implement out of the box.

Luca Ravazzolo · May 22, 2018

Thanks Michelle!

I'm happy to answer any questions on the webinar, where I presented InterSystems Cloud Manager, and generally on the improvements an organization can achieve in its software factory with the newly available technologies from InterSystems.

Luca Ravazzolo · Feb 8, 2018

Again this is where Durable %SYS helps, Sebastian, because we take care of upgrades.

InterSystems' ultimate goal is to have releases that can be installed with little or no effect on the applications they support. The upgrade checklist appears innocuous; however, make sure you put the upgrade through its paces.

Luca Ravazzolo · Feb 5, 2018

Sebastian:

I'm pretty sure journal restore allows you to find journal files without journal.log, and that's probably why the request was closed.

CACHELIB is a read-only DB, so, adhering to the separation of concerns (code vs. data) that containers allow us to have, it stays inside the container.

HTH

Luca Ravazzolo · Feb 3, 2018

PS: If you are getting worried about Docker, you shouldn't.

Version 1.0.1 of the Open Container Initiative (OCI) specification already exists; the OCI was set up so that implementers could agree on, and adhere to, a standard for how containers are run and how images are handled. Docker and other prominent players like CoreOS (now owned by Red Hat), Red Hat itself, SUSE, Microsoft, Intel, VMware, AWS, IBM and many others are part of this initiative.

So one would hope our investment is safeguarded.

Luca Ravazzolo · Feb 2, 2018

NP Sebastian.

You could handle %SYS persistence with creativity :)

Given that you'd have a script or a container-orchestration tool (Mesosphere, Kubernetes, Rancher, Nomad, etc.) to handle running a new version of your app, you'd have to factor in exporting those credentials and security settings before stopping your container. Then, as you spin up the new one, you'd import the same. It's a workaround, if you like, but doable, IMO.

Having a DB, you probably have to think about schema migration and other things anyway, so...

HTH

Luca Ravazzolo · Feb 2, 2018

Hi Dmitry,

I hear you; we all want a simple life. The direction we have taken, even with the standard distribution, is to make it more secure.

When creating your container, you'd have to define your password first, then proceed with loading your app dependencies.

Also, remember that InterSystems IRIS is not Ensemble :)

Luca Ravazzolo · Feb 2, 2018

Hi Sebastian,

Caché and Ensemble will be fully supported in a Docker container. However, for the reason expressed above, right now there is no plan to offer Durable %SYS in Caché and Ensemble.

InterSystems IRIS is a new product, so things are different. You will find that some features are deprecated in favour of others, etc. One of them is the license key. In general, in my comment above I was referring, at least in my mind, to a licensing plan that you will hopefully find more flexible and favorable.

Luca Ravazzolo · Feb 1, 2018

Hi Fabian Sebastian (fingers went astray)

Nice to hear from you.

Durable %SYS is indeed a powerful and needed feature. Right now we just don't have the bandwidth to do all that is necessary as per your request.

However, I hear that upgrades to IRIS should be convenient and easy, so I would look into that option first.

HTH

Luca Ravazzolo · Feb 1, 2018

Hi Dmitry,

Thank you for downloading our InterSystems IRIS data platform container.

Our images are carefully crafted, dependencies are checked and even pinned so we know exactly what we ship. We further test them regularly for security vulnerabilities. By the time our images are published they are a safe bet for you to use. In general, we expect our customers to derive theirs from the published one so you only have to worry about implementing your app-solution in it.

However, I understand that you might want to create your own custom image. We could make isc-main available if there is demand for it.

Password: we do not want to be in the news like a well-known database that was recently discovered with thousands of instances up and running in the cloud with default credentials. We are forcing you to do the right thing; otherwise, with containers, it's too easy to ignore this, and as you can appreciate, that's not a safe practice.

This is also true when you use InterSystems Cloud Manager to provision an InterSystems IRIS data platform cloud cluster. If you forget to define the password for the system users, you will be forced to create one before the services are run.

Luca Ravazzolo · Jan 22, 2018

Thanks @Rich. A year and a bit later things are indeed better and I think the way you're looking at it is the way to go :)

Luca Ravazzolo · Nov 15, 2017

Hi Rob,

I think the article is confusing. There are various technologies coming into play here, and simple terms like "volumes" are just not enough to comprehend the context they are used within. I wish IT could speak like Edward de Bono's "The De Bono Code Book". Things would be clearer, maybe... :)

This post could degenerate into a long essay, so I'm going to try to focus on explaining the basics. My hope is that future Q&A in this thread will have the opportunity to clarify things further and allow us all to learn more, as technologies are developed while we sleep...

The wider context that the marketing person from RH is writing about is that of containers & persistent data. That alone is enough to write a book :) In essence, a container does NOT exclude persistent data. The fact that we see containers described as ephemeral is only partially true, because:

  • They actually have a writeable layer
  • They force us to think about the separation of concerns between code and data (this is a good thing)

The other part of the article's context is that the writer refers to Kubernetes, a container orchestration engine that you can now find in the next Docker CE engine you'll download. Yes, Docker has Docker Swarm (its own container orchestrator) featured in the engine, but you can now play with Kubernetes (K8s) too. In the cloud era, everybody is friends with everybody :)

Now, K8s was selected last December as the main application orchestrator for OpenShift (a cloud-like, full application-provisioning and management platform owned by Red Hat). That was a major change: no more VMs to handle an application; just provide containers and let K8s manage them.

Now, we all know that having 100 Nginx containers running is one thing, but having 100 stateful, transactional & mirrored Caché, Ensemble or InterSystems IRIS containers is a completely different game. Well, you would not be surprised to learn that persistence is actually the toughest issue to solve and manage... with & without containers. Think of an EC2 instance that just died. What happened to that last transaction you committed? Hopefully, you had an AWS EBS volume mounted on it (i.e. you did not use the boot/OS disk), and the data is still in that volume, which has a life of its own. All you have to do is spin up another EC2 instance and mount that same volume. The point is that you must know that you have an EBS volume and take very good care of it and of its data: you snapshot it, you replicate the data, etc.

The exact same thing is true when you use K8s. You must provision those volumes. Now, in K8s, volumes have a slightly different definition: yes, a volume defines the storage and has a higher-level abstraction (see their docs for more info), but the neglected part is that a volume is linked to a POD. A POD is a logical grouping of containers that has its own definition and lifecycle; however, a K8s volume has a distinct life of its own that can survive the crash of a POD. Here we get into software-defined storage (SDS) territory and hyperconverged storage/infrastructure. This is a very interesting and evolving part of the technology world that I believe will enable us to be even more elastic in the future than we are now (both on-prem and in public clouds).
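To make that concrete, a hedged sketch (names, image tag and mount path are illustrative): a POD that mounts a PersistentVolumeClaim. The claim, and the storage behind it, outlives any individual POD that mounts it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iris
spec:
  containers:
    - name: iris
      image: intersystems/iris:<tag>   # illustrative tag
      volumeMounts:
        - name: data
          mountPath: /durable          # where the instance keeps its data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: iris-data           # a pre-existing claim
```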

K8s is just trying to make the most of what is available now, keeping a higher-level abstraction so that it can further tune its engine as new solutions come to market.

To conclude, and to clear up some misconceptions the article might have created:

There aren't any special data containers in K8s. There are with Docker (just to confuse things further :), but you should not use them for your database storage.

With K8s and Docker there are SDS drivers/provisioners (or plugins, in Docker lingo) that can leverage available storage. K8s allows you to mount them in your containers. It is supposed to do other things, like formatting the volumes, but not all drivers are equal, and as usual YMMV.

What those drivers and plugins can do for you is mount the same K8s PV from another spun-up container in another POD if your original container or POD dies. The usual caveats apply: if you are in another AZ or region, you won't be able to do that. IOW, we are still tied to what the low-level storage can offer us. And of course, we would still need to deal with the consistency of that DB that suddenly died. So, how do you deal with that?

With the InterSystems IRIS data platform you can use its Mirror technology, which can safeguard you against those container or POD disappearances. Furthermore, as our mirroring replicates the data, you could have a simpler (easier, lower-TCO, etc.) volume definition with lower downtime. Just make sure you tell K8s not to interfere with what it does not know: DB transactions ;)

HTH