Hi All,

I just wanted to highlight, for all the cloud and Linux savvy people out there, that we have added production support in our 2017.1 products for the most widely used Linux distribution out there: Ubuntu. Specifically, we will keep focusing on their LTS (Long Term Support) release, which at present is 16.04.x, named Xenial Xerus.

Enjoy it, along with Canonical's flexible support plans. More info can be found on Canonical's overview page and on the plans and pricing page.

@Alexey

It sounds like, as you say, having to deploy on-site might not be the best use case, if I understand correctly.

If they use virtualization, wouldn't it be easier for you to deploy a VM that you prepare at your site and just mount the FS/DB they have? That way you'd still retain control of, and guarantees on, your build process vs having to do it all on-site.

Just a thought.

All the best

"we are working on a solution", Sebastian :)

We are very pleased with the further enhancements & improvements we have been making since the first internal version. The fundamental idea is that, while containers are ephemeral, databases are not, and we want to assist in this area. While you can deal with your application database and make sure you mount your DBs on host volumes, there could be an issue as soon as you use %SYS. You use %SYS when you define your z* | Z* routines and/or globals, or when you define user credentials, etc. In that case, right now, you'd have to create an export and import process, which would be neither elegant nor agile. What we will offer is a very easy-to-use facility by which you can define where %SYS will reside on the host... and all using canonical Docker syntax. Stay tuned...
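Just to illustrate the application-database part of that (image name, paths and ports below are placeholders, not our official syntax), a plain Docker volume mount already gets you a long way:

docker run -d --name myapp \
  -p 57772:57772 -p 1972:1972 \
  -v /host/data/db:/data/db \
  myregistry/mycacheapp:2017.1

The application CACHE.DAT files live under /host/data/db on the host, so the container itself stays disposable.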

Thanks, Dimitri.

Alexey: Docker does not encourage anything aside from using its container technology and its EE (Enterprise Edition, and cloud, to monetise their effort) :-) However, containers in general help and fit very well into a microservices-type architecture. Note that you can also create a microservices architecture without containers, via VM-, jar- or war-based solutions with a different type of engine; containers just lend themselves to it more naturally.

It's worth pointing out that, just because people talk about 1 process per container, it does not preclude you from using multiple processes in each container. You could naturally have 3 sockets open, for example 57772, 1972 and 8384, all serving different purposes + various background processes (think of our WD & GC) and still be within the boundaries of a microservice definition with a clear "bounded context". For more info on microservices you might want to read Martin Fowler's microservices article and books like Building Microservices by Sam Newman or Production-Ready Microservices by Susan J. Fowler. You should also check out Domain-Driven Design by Eric Evans, where "bounded contexts" and related concepts like context, distillation and large-scale structures are dealt with in much more depth.
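To make the multi-port point concrete, a minimal sketch (the image name is hypothetical; the port numbers are the ones mentioned above):

# Dockerfile
EXPOSE 57772 1972 8384

# at run time, publish whichever ports you need
docker run -d -p 57772:57772 -p 1972:1972 myregistry/mycacheapp:2017.1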

On the 2GB database inside the container, I would advise against it. In general, one of the advantages of containers is the clear separation of concerns between code and data. Data should reside on a host volume, while you're free to swap containers at will to test & use the new version of your application. This is an advantage if you use a CI/CD provisioning pipeline for your app.
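As a sketch of what that swap looks like in practice (names and tags are hypothetical), the host volume survives while the container is replaced:

docker stop myapp && docker rm myapp
docker run -d --name myapp \
  -v /host/data/db:/data/db \
  myregistry/mycacheapp:2017.2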

Having said all that, it depends on your specific use case. If I want my developers to all have a standard data set to work against, then I might decide that their container environments do indeed feature a CACHE.DAT. Our Learning Services department has been using containers for two years now, and every course you take online runs on Docker containers. As I said, it depends on the use case.

In ref. to your last comment: right now (March 2017) I would not use Windows for a container solution. Although Microsoft has been working closely with Docker for over a year, containers are born out of a sandboxing effect just above the Linux kernel (see kernel namespaces and cgroups).

HTH for now.
 

Hi guys,

Thank you for the thread! Containers are here to stay; suffice to say that all public and most private cloud providers offer specific services just to support containers. However, we should look at them as a new system: there are many gotchas, but also many aspects that will help the way we work.

I have some comments.

On the Dockerfile above, we need to make sure we tell the full story and educate as appropriate, as we are all learning here:

-Container layer OS: Running OS updates on every single triggered build from your CI/CD pipeline might not be the best thing to do if you truly want to know what you have running in production. It's better to ask for the exact OS version you desire in the FROM statement above. In general, pulling a Docker image tagged "latest" is not such a great idea.

Along the same lines, if you need a particular package installed, make sure it is the one you know and pin the exact package version you desire (isn't an automated delivery pipeline and infrastructure-as-code about knowing exactly what you have and having it all versioned?). Use something like $ apt-get install cowsay=3.03+dfsg1-6
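Putting the two points together, a pinned Dockerfile fragment might look like this (the tag and package version are examples only, check what is current for you):

FROM ubuntu:16.04
RUN apt-get update \
 && apt-get install -y cowsay=3.03+dfsg1-6 \
 && rm -rf /var/lib/apt/lists/*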

-Provenance: we need to make sure we are getting the image we think we are getting. Man-in-the-middle attacks do happen, and organisations should make sure they are covered. Please investigate this step and look at options like pulling an image by its hash/digest (docker pull debian@sha256:cabcde9b6166fcd287c1336f5....) or, even better, Docker Notary if your image publisher has signed images.
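For example, assuming your registry/publisher supports it, you can turn on Docker Content Trust on the client and/or pull by digest (the digest below is shortened on purpose):

export DOCKER_CONTENT_TRUST=1
docker pull debian:jessie        # refused if the tag is not signed
docker pull debian@sha256:cabcde9b6166fcd287c1336f5....    # pinned by digest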

-On Security: watch those passwords, whether in Dockerfile definitions or in env vars...
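One sketch of an alternative (the file name is hypothetical): keep credentials out of the image and the Dockerfile altogether, and inject them only at run time from a file that never enters the build context or source control:

docker run -d --env-file ./secrets.env myregistry/mycacheapp:2017.1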

-Container image size: copying a tarball into the image expands the storage layer Docker creates for that "ADD file" statement, and that layer cannot be shrunk afterwards. One should instead use an FTP or HTTP server to download the distribution AND run all the commands in a single statement (fetch the distribution, install it and remove unwanted files). That way you keep that single layer you're working on small. Your code should then read something like:

RUN curl http://mysourcecodeRepo.com/file.tgz -o /file.tgz \
 && tar xzf /file.tgz \
 && ./myInstall.sh \
 && rm /file.tgz

 

-On Persistence: obviously your data must be mounted on a volume; one of the nice things about containers is the clear demarcation between code and data. As far as Caché system data is concerned, like the cache.cpf file and %SYS, we are working on a solution that will make Caché a first-class citizen of the container world; it will be very easy to use and upgrades will just work.

 

HTH and thanks again for the thread!

Keep them coming :-)

Ha... there was another question on MS-Windows support :)

Given the Docker engine's dependency on the Linux kernel, its dependencies on (and support for) the various Linux filesystems, and the fact that the September announcement was just for a GA version on Windows Server 2016, I would not waste my time right now.

Spin up a Linux node on your private or public cloud and just enjoy the speed and lightness of containers. :)

HTH

Thanks @Dmitry; perfect.

@Andrew: as you can appreciate, Docker containers can be very suitable for just about any use case when you want to embrace infrastructure-as-code and, even better, immutable infrastructure. Of course, we are then already talking about automation, provisioning, a deployment process and, in many cases, DevOps and, why not, microservices.

It's typically a full package that one embraces as soon as one tip-toes into any of the above areas. Why? Because cloud computing (off-prem and on-prem) is about a "modus operandi", that is, agility and lowering the TCO of an app (which is typically its running lifespan).

There are many considerations and gotchas. We are working on improving the experience for those choosing this technology.

HTH

Hi Ryan,

as an extra comment:

Yes, what you describe is absolutely possible and we have customers actually doing that.

Things may vary depending on your work processes. There are complexities to overcome; however, containers are very flexible and a great help in the development, testing, demo & training stages, as you're already finding out yourself.

For production purposes I would suggest looking into a microservices architecture; however, a more traditional, legacy or monolithic approach is not precluded either.

Hi Prasad,

Deleting the install dir after having copied it won't help: the copy has already bloated the FS, and you won't be able to shrink it because Docker creates a new FS layer for every command.

The best strategy is to run a single command and perform your delete within it. That will help. My personal opinion is that we should provide discrete components and "pullable" resources, but that's another story.

The other thing to note is that pulling the distribution via HTTP/FTP, vs copying it in, also helps.

Try something like:

RUN yum -y install tar hostname net-tools \
  && curl -o ${INT_F_NAME} ftp://myftpserver/distrib/CacheDB/${DIST_VER}.${DIST_BUILD}/${DIST_NAME}.tar.gz \
  && tar zxf ${INT_F_NAME} \
  && cd ${DIST_NAME} \
  && useradd cacheusr \
  && ./cinstall_silent \
  && cd .. \
  && rm -fr ${DIST_NAME} \
  && rm ${INT_F_NAME} \
  && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly

--

 

HTH

FWIW this is a huge chapter and we are not even mentioning global mappings or SLM (subscript-level mapping), how dynamic you might want to be with them, and what kind of elasticity you are after (scaling out and back in). You might find that ECP is not really your friend right here and right now, given your cloud-based requirements. ECP is, however, extremely good at what it was created to do.

--

To answer your question: I have worked with Terraform, which allows you to create infrastructure on just about any IaaS cloud. You can then finely customise your nodes by injecting any script you want, with no need for any other configuration management (CM) tool. It has variable interpolation, so it knows to wait for the info it needs until after resources are created, etc.
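For reference, the day-to-day Terraform workflow is just a few shell commands run against your *.tf definitions (the plan file name is arbitrary):

terraform init                 # download the providers your configuration declares
terraform plan -out=tf.plan    # preview the infrastructure changes
terraform apply tf.plan        # create/modify the resources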

HTH

1) If they are DevOps engineers, they don't do things manually. It's an axiom. :-)

2) The issue IMO is not so much which tool one picks (you mention Ansible) but understanding the requirements of the architecture.

3) It's not so much the number of nodes (although that's important) but the fact that the software needs to be much more dynamic and provide clear APIs that one can call. With a static architecture the monolith lives on ;)

4) One very important fact when working in the cloud is networking: you may not know some facts, like IP addresses, in advance. In general, apps use discovery services to find things out. This is the part where we need to grow and provide more "dynamicity". You can, however, adapt and evolve via ZSTART and tell the cluster who you are and what you're after.

5) I was playing with this a few days ago and, because of the lack of an API, I had to create one of those horrible scripts with <CR> redirection, blind option selections, etc. ;) Not the most robust solution for Dev or Ops engineers ;)

HTH in your endeavour, Timur