Luca Ravazzolo · Nov 15, 2017

Hi Rob,

I think the article is confusing and not clear. There are various technologies coming into play here, and simple terms like "volumes" are just not enough to comprehend the context they are used within. I wish IT could speak like in Edward de Bono's "The De Bono Code Book". Things would be clearer, maybe... :)

This post could degenerate into a long essay, so I'm going to try to focus on explaining the basics. My hope is that future Q&A in this thread will have the opportunity to clarify things further and allow us all to learn more, as technologies are developed while we sleep...

The wider context that the marketing person from RH is writing about is that of containers & persistent data. That is enough to write a book :) In its essence, a container does NOT exclude persistent data. The fact that we see containers described as ephemeral is only partially true, because:

- they actually have a writable layer;
- they force us to think about the separation of concerns between code and data (this is a good thing).

The other part of the context of the article is that the writer refers to Kubernetes, a container orchestrator engine that you can now find in the next Docker CE engine you'll download. Yes, Docker has Docker Swarm (their container orchestrator) featured in the engine, but you can now play with Kubernetes (K8s) too. In the cloud era everybody is friends with everybody :)

Now, K8s was selected last December as the main application orchestrator for OpenShift (a cloud-like, full application-provisioning and management platform owned by Red Hat). That was a major change: no more VMs to handle an application; just provide containers and let K8s manage them.

Now, we all know that having 100 Nginx containers running is one thing, but having 100 stateful, transactional & mirrored Caché, Ensemble or InterSystems IRIS containers is a completely different game. Well, you would not be surprised to know that persistence is actually the toughest issue to solve and manage... with & without containers.

Think of an EC2 instance that just died. What happened to that last transaction you committed? Hopefully, you had an AWS EBS volume mounted on it (i.e. you did not use the boot/OS disk) and the data is still in that volume, which has a life of its own. All you have to do is spin up another EC2 instance and mount that same volume. The point being that you must know that you have an EBS volume, and that you take very good care of it and of its data: you snapshot it, you replicate the data, etc.

The same exact thing is true when you use K8s: you must provision those volumes. Now, in K8s volumes have a slightly different definition: yes, a volume defines the storage and has a higher-level abstraction (see their docs for more info), but the neglected part is that a volume is linked to a POD. A POD is a logical grouping of containers with its own definitions and lifecycle; however, a K8s volume has a distinct life of its own that can survive the crash of a POD. Here we get into software-defined storage (SDS) territory and hyperconverged storage/infrastructure.
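To make that concrete, here is a minimal sketch of a PersistentVolumeClaim and a POD that mounts it (the names, size and image are purely illustrative, and it assumes a cluster with a default storage class or a matching PV):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi        # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: db-pod             # illustrative name
spec:
  containers:
  - name: db
    image: my-db-image     # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data
EOF

If db-pod dies, the claim db-data (and the storage behind it) survives, and a replacement POD can mount it again.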
Software-defined storage is a very interesting and evolving part of the technology world, and one that I believe will enable us to be even more elastic in the future than we are now (both on-prem and in public clouds). K8s is just trying to make the most of what is available now, keeping a higher-level abstraction so that it can further tune its engine as new solutions come to market.

Concluding, in order to clear up some misconceptions the article might have created:

- There aren't any special data-containers in K8s. There are, however, data-containers with Docker (just to confuse things further :), but you should not use them for your database storage.
- With K8s and Docker there are SDS drivers/provisioners (or plugins, in Docker lingo) that can leverage available storage. K8s allows you to mount them in your containers. It is supposed to do other things, like formatting the volumes, but not all drivers are equal and, as usual, YMMV.
- What those drivers and plugins can do for you is mount the same K8s PV from another spun-up container in another POD if your original container or POD dies. The usual caveats apply: if you are in another AZ or region you won't be able to do that. IOW, we are still tied to what the low-level storage can offer us. And of course, we would still need to deal with the consistency of that DB that suddenly died. So, how do you deal with that?

With the InterSystems IRIS data platform you can use its Mirror technology, which can safeguard you against those container or POD disappearances. Furthermore, as our mirroring replicates the data, you could have a simpler (easier, lower-TCO, etc.) volume definition with lower downtime. Just make sure you tell K8s not to interfere with what it does not know: DB transactions ;)

HTH
Luca Ravazzolo · Sep 21, 2017

What problem did you have? Was it a Docker storage issue? I would advise using only devicemapper.
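A minimal sketch of how to check which storage driver your engine is using and how to pin it explicitly (note: this overwrites any existing daemon.json, so merge by hand if you already have one):

# show the storage driver currently in use
docker info --format '{{.Driver}}'

# pin the driver for the daemon, then restart it
echo '{ "storage-driver": "devicemapper" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker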
Luca Ravazzolo · Sep 14, 2017

In general, it is a good thing to keep these two concerns (code vs data) separate. You're de-coupling two items (concerns, concepts) that are effectively different. When using technologies like containers & container services this point becomes even more apparent, and it allows you to be much more flexible and agile in your deployments. Of course, schema migration is still an issue, but the separation still helps. This is one of the reasons why organisations can perform multiple deployments per day.

HTH
Luca Ravazzolo · Aug 7, 2017

Use the official and documented API. If you're after pure existence, i.e. the class is defined (but not necessarily compiled):

SAMPLES>w ##class(%Dictionary.ClassDefinition).%Exists($lb("Aviation.Aircraft"))
1

If you're after existing and compiled:

SAMPLES>w ##class(%Dictionary.CompiledClass).%Exists($lb("Aviation.Aircraft"))
1

HTH
Luca Ravazzolo · Jul 31, 2017

The article mentions Martin Fowler, Rob, so I'm OK with that :-) Talk to you soon, and thanks for sharing that.
Luca Ravazzolo · May 16, 2017

@Natasa: Are you using the ccontainermain from the GitHub account, as pointed out by @Dmitry? Or do you have your own solution?

Thanks
Luca Ravazzolo · May 5, 2017

Hi Natasa,

I am aware of several organisations that are working with Docker containers and InterSystems technology. Do you have any specific question?
Luca Ravazzolo · May 3, 2017

Hi John,

InterSystems has shifted gears into a more agile, cloud-oriented approach that is going to leverage & be better integrated with a DevOps modus operandi. InterSystems will unveil this new approach and what goes with it at this year's Global Summit in September.

HTH
Luca Ravazzolo · Apr 23, 2017

@Dmitry: Thanks for sharing it. Good work and helpful to the community.
Luca Ravazzolo · Apr 3, 2017

Starting containers with a different configuration... Say you're moving a container through your CI/CD provisioning process and you finally deploy it in production: you'll want to tune your system differently (more buffers, etc.). It's the same artefact you use across your software factory; you just tune it according to its needs and environment, and in so doing adhere to the 12-factor app principles... :)

HTH
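As a minimal sketch of that pattern (the image name and the environment variable are hypothetical; the point is that the artefact stays identical while the tuning travels with the environment):

# same image, different tuning per environment
docker run -d --name app-dev  -e BUFFERS_MB=256  myorg/app:1.0
docker run -d --name app-prod -e BUFFERS_MB=4096 myorg/app:1.0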
Luca Ravazzolo · Apr 3, 2017

@Herman: You have the option to elect license servers for cooperating instances, as per the documentation.

HTH
Luca Ravazzolo · Mar 29, 2017

Hi All,

I just wanted to highlight, for all the cloud- and Linux-savvy people out there, that we have added production support in our 2017.1 products for the most used Linux distribution out there: Ubuntu. Specifically, we will keep focusing on their LTS (Long Term Support) platform version, which at present is 16.04.x, named Xenial Xerus.

Enjoy it, together with Canonical's flexible support plans. More info can be found at Canonical's overview page and on the plans and pricing page.
Luca Ravazzolo · Mar 22, 2017

@Alexey: It sounds like having to deploy on-site, as you say, might not be the best use case, if I understand correctly. If they use virtualization, wouldn't it be easier for you guys to deploy a VM that you prepare at your site and just mount the FS/DB they have? That way you'd still have the privilege of running, and having guarantees on, your build process vs having to do it all on-site.

Just a thought. All the best
go to post Luca Ravazzolo · Mar 21, 2017 "we are working on a solution", Sebastian :)We are very pleased with further enhancements & improvements we have been making from the first internal version. The fundamental idea is that, while containers are ephemeral, databases are not and we want to assist in this area. While you can deal with your application database and make sure you mount your DBs on host volumes, there could be an issue as soon as you use %SYS. You use %SYS when you define your z* | Z* routines and/or globals or when you define user credentials, etc. In this case, right now, you'd have to create an export and an import process which would not be elegant, nor would it help to be agile. What we will offer is a very easy-to-use facility by which you can define where %SYS will reside on the host... and all using canonical Docker syntax. Stay tuned...
Luca Ravazzolo · Mar 21, 2017

Thanks, Dimitri.

Alexey: Docker does not encourage anything aside from using its container technology and its EE (Enterprise Edition, and cloud, to monetise their efforts) :-) However, containers in general help and fit very well into a microservices-type architecture. Please note how you can create a microservices architecture without containers, via VM-, jar- or war-based solutions with a different type of engine. Containers simply lend themselves to it more naturally.

It's worth pointing out that just because people talk about one process per container, that does not preclude you from using multiple processes in each container. You could naturally have three sockets open, for example 57772, 1972 and 8384, all serving different purposes, plus various background processes (think of our WD & GC), and still be within the boundaries of a microservice definition with a clear "bounded context" (see the sketch at the end of this post). For more info on microservices you might want to read Martin Fowler's microservices article and books like Building Microservices by Sam Newman or Production-Ready Microservices by Susan J. Fowler. You should also check out Domain-Driven Design by Eric Evans, where "bounded contexts" and similar concepts like context, distillation and large-scale structures are dealt with much more in depth.

On the 2GB database inside the container: I would advise against it. In general, one of the advantages of containers is the clear separation of concerns between code and data. Data should reside on a host volume, while you're free to swap containers at will to test & use the new version of your application. This should be an advantage if you use a CI/CD provisioning pipeline for your app. Having said all that, it depends on your specific use case. If I want my developers to all have a standard data set to work against, then I might decide that their container environments do indeed feature a CACHE.DAT. Our Learning Services department has been using containers for two years now, and every course you take online runs on Docker containers. As I said, it depends on the use case.

In reference to your last comment: right now (March 2017) I would not use Windows for a container solution. Although Microsoft has been working closely with Docker for over a year, containers are born out of a sandboxing effect just above the Linux kernel (see kernel namespaces and cgroups).

HTH for now.
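P.S. A minimal sketch of the multi-process point above (the image name is hypothetical; the three published ports are the ones mentioned):

# one container, three published ports, each served by a different process inside
docker run -d --name myapp \
  -p 57772:57772 -p 1972:1972 -p 8384:8384 \
  myorg/cache-app:1.0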
Luca Ravazzolo · Mar 14, 2017

Hi guys,

Thank you for the thread! Containers are here to stay; suffice to say that all public and most private cloud providers offer specific services just to support containers. However, we should look at them as a new system. There are many gotchas, but also many aspects of them that will help the way we work.

I have some comments on the Dockerfile above; we need to make sure we tell the full story and educate as appropriate, as we are all learning here:

- Container layer OS: Running OS updates on every single triggered build from your CI/CD pipeline might not be the best thing to do if you truly want to know what you have running in production. It's better to ask for the exact OS version you desire in the FROM statement above. In general, getting a Docker image tagged "latest" is not such a great idea. As a side effect, if you need a particular package installed, make sure it is what you know, and pin-point the exact package version you desire (isn't an automated delivery pipeline and infrastructure-as-code about knowing exactly what you have and having it all versioned?). Use:

$ apt-get install cowsay=3.03+dfsg1-6

- Provenance: We need to make sure we know we are getting the image we think we are getting. Man-in-the-middle attacks do happen, and organisations should make sure they are covered. Please investigate this step and see measures like asking for an image by hash (docker pull debian@sha256:cabcde9b6166fcd287c1336f5....) or, even better, Docker Notary if your image publisher has signed images.

- On security: Watch those passwords in either Dockerfile definitions or env vars...

- Container image size: Copying a tarball will expand the storage layer Docker creates as you make the "ADD file" statement, and that layer cannot be shrunk afterwards. One should instead use an FTP or HTTP server to download AND run all commands in a single statement (fetching the distribution, installing it and removing unwanted files). That way you keep that single layer you're working on small. Your code then should read something like:

RUN curl http://mysourcecodeRepo.com/file.tgz -o /file.tgz \
 && tar xzf /file.tgz \
 && ./myInstall.sh \
 && rm /file.tgz

- On persistence: Obviously your data must be mounted on a volume. One of the nice things about containers is the clear demarcation between code and data. As far as Caché system data is concerned, like the cache.cpf file and %SYS, we are working on a solution that will make Caché a first-class citizen of the container world; it will be very easy to use, and upgrades will just work.

A consolidated sketch pulling these points together follows below. HTH, and thanks again for the thread! Keep them coming :-)
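As promised, a hedged Dockerfile sketch applying those points (the digest, repository URL, script name and versions are placeholders you would pin and verify yourself):

# pin the exact base image by digest rather than a floating tag
FROM debian@sha256:<digest-you-verified>

# pin the exact package versions you have tested
RUN apt-get update \
 && apt-get install -y curl cowsay=3.03+dfsg1-6

# fetch, install and clean up in ONE layer so the image stays small
RUN curl http://mysourcecodeRepo.com/file.tgz -o /file.tgz \
 && tar xzf /file.tgz \
 && ./myInstall.sh \
 && rm /file.tgz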
Luca Ravazzolo · Feb 2, 2017

One way to configure your system is to edit the cache.cpf file.

HTH
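For example, the global buffer allocation lives in the [config] section of that file; a minimal sketch of the kind of line you would edit (the values are illustrative, so check the Configuration Parameter File reference for the exact semantics before changing them):

[config]
globals=0,0,1024,0,0,0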