Containerizing Caché

In this article, I will show a few examples of how to build your own Docker image with InterSystems Caché/Ensemble.

Let's start from the beginning: the Dockerfile. A Dockerfile is a plain-text configuration file which is used to build a Docker image.

I would recommend using CentOS as the base distribution for the image, because InterSystems supports Red Hat, and CentOS is the most compatible distribution.

FROM centos:6

You can add your name as the author of this image.

MAINTAINER Dmitry Maslennikov <mrdaimor@gmail.com>

In the first step, we install some dependencies and configure the operating system (here I also set the time zone). These dependencies are needed for the installation process and for Caché itself.

# update OS + dependencies & run Caché silent install
RUN yum -y update \
 && yum -y install which tar hostname net-tools wget \
 && yum -y clean all \
 && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime

Let's define the folder where we will store the installation kit.

ENV TMP_INSTALL_DIR=/tmp/distrib

Let's set up some arguments with default values. These arguments can be changed during the build process.

ARG password="Qwerty@12"
ARG cache=ensemble-2016.2.1.803.0

Then we should define some environment variables for silent installation. 

ENV ISC_PACKAGE_INSTANCENAME="ENSEMBLE" \
    ISC_PACKAGE_INSTALLDIR="/opt/ensemble/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
    ISC_PACKAGE_INITIAL_SECURITY="Normal" \
    ISC_PACKAGE_USER_PASSWORD=${password}

I decided to set the security level to Normal, which requires setting a password.

You can look at the documentation to find more options.
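
For example, the unattended installation also accepts a password for the predefined CSPSystem account; a minimal sketch, assuming the parameter name ISC_PACKAGE_CSPSYSTEM_PASSWORD matches your Caché version (please verify it against the documentation):

ENV ISC_PACKAGE_CSPSYSTEM_PASSWORD=${password}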

WORKDIR ${TMP_INSTALL_DIR}

The working directory will be used as the current directory for the following commands. If the directory does not exist, it will be created.

COPY cache.key $ISC_PACKAGE_INSTALLDIR/mgr/

You can include a license key file if you are not going to publish this image in public repositories.
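
Alternatively, if you prefer not to bake the key into the image at all, you can skip this COPY and mount the key at run time instead; a hedged sketch, assuming a cache.key in the current folder and the ensemble-simple image tag used later in this article:

docker run -d -v $(pwd)/cache.key:/opt/ensemble/mgr/cache.key ensemble-simple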

Now we should get the installation kit, and there are several ways to do it:

  • Download it manually, place the file next to the Dockerfile, and use this line:
ADD $cache-lnxrhx64.tar.gz .

This command will copy and extract the kit into our working directory.

  • Download the file directly from the WRC:
RUN wget -qO /dev/null --keep-session-cookies --save-cookies /dev/stdout --post-data="UserName=$WRC_USERNAME&Password=$WRC_PASSWORD" 'https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp' \
 | wget -O - --load-cookies /dev/stdin "https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/$cache-lnxrhx64.tar.gz" \
 | tar xvfzC - . 

In this case, we have to pass a login and password for the WRC, and you can add these lines earlier in the Dockerfile:

ARG WRC_USERNAME="username"
ARG WRC_PASSWORD="password"

Keep in mind that in this case the login and password can be extracted from the image, so it is not a secure approach.
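
For example, anyone who has the image can inspect its layer history; a quick check, assuming the ensemble-simple tag used later in this article:

docker history --no-trunc ensemble-simple | grep WRC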

  • And the preferable way: publish the file on an internal FTP/HTTP server in your company.
RUN wget -O - "ftp://ftp.company.com/cache/$cache-lnxrhx64.tar.gz" \
 | tar xvfzC - .

Now we are ready to install.

RUN ./$cache-lnxrhx64/cinstall_silent

Once the installation completes, shut down the instance.

RUN ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly

But it is not over yet. A Docker container needs a main control process (PID 1), and this task can be handled by the ccontainermain project by Luca Ravazzolo. So, download it directly from the GitHub repository.

# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
RUN curl -L https://github.com/zrml/ccontainermain/raw/master/distrib/linux/ccontainermain -o /ccontainermain \
 && chmod +x /ccontainermain

Clean up the temporary folder.

RUN rm -rf $TMP_INSTALL_DIR

If your Docker daemon uses the overlay driver for storage, we should add this workaround to prevent Caché from failing to start with <PROTECT> errors.

# Workaround for an overlayfs bug which prevents Cache from starting with <PROTECT> errors
COPY ccontrol-wrapper.sh /usr/bin/
RUN cd /usr/bin \
 && rm ccontrol \
 && mv ccontrol-wrapper.sh ccontrol \
 && chmod 555 ccontrol

Where ccontrol-wrapper.sh should contain:

#!/bin/bash

# Work around a weird overlayfs bug where files don't open properly if they haven't been
# touched first - see the yum-ovl plugin for a similar workaround
if [ "${1,,}" == "start" ]; then
    find "$ISC_PACKAGE_INSTALLDIR" -name CACHE.DAT -exec touch {} \;
fi

/usr/local/etc/cachesys/ccontrol "$@"

You can use this command to check which storage driver Docker is using.

docker info --format '{{.Driver}}'
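
For example, on a host using the overlay driver this prints:

overlay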

Here we declare that our image exposes the two standard Caché ports: 57772 for web connections and 1972 for binary connections.

EXPOSE 57772 1972

And finally, we define how our container should be executed.

ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "ensemble"]

In the end our file should look like this:

FROM centos:6

MAINTAINER Dmitry Maslennikov <Dmitry.Maslennikov@csystem.cz>

# update OS + dependencies & run Caché silent install
RUN yum -y update \
 && yum -y install which tar hostname net-tools wget \
 && yum -y clean all \
 && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime

ARG password="Qwerty@12"
ARG cache=ensemble-2016.2.1.803.0

ENV TMP_INSTALL_DIR=/tmp/distrib

# vars for Caché silent install
ENV ISC_PACKAGE_INSTANCENAME="ENSEMBLE" \
    ISC_PACKAGE_INSTALLDIR="/opt/ensemble/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
    ISC_PACKAGE_INITIAL_SECURITY="Normal" \
    ISC_PACKAGE_USER_PASSWORD=${password} 

# set-up and install Caché from distrib_tmp dir 
WORKDIR ${TMP_INSTALL_DIR}

ADD $cache-lnxrhx64.tar.gz .

# cache distributive
RUN ./$cache-lnxrhx64/cinstall_silent \
 && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly \
# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
 && curl -L https://github.com/daimor/ccontainermain/raw/master/distrib/linux/ccontainermain -o /ccontainermain \
 && chmod +x /ccontainermain \
 && rm -rf $TMP_INSTALL_DIR 

WORKDIR ${ISC_PACKAGE_INSTALLDIR}

# TCP sockets that can be accessed if user wants to (see 'docker run -p' flag)
EXPOSE 57772 1972

ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "ensemble"]

Now we are ready to build this image. In the folder where you've placed the Dockerfile, execute the command:

docker build -t ensemble-simple .

You will see the whole process of building the image, from downloading the base image to installing Ensemble.

To change the default password or the Caché build, use --build-arg:

docker build --build-arg password=SuperSecretPassword -t ensemble-simple .

docker build --build-arg cache=ensemble-2016.2.1.803.1 -t ensemble-simple .
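
Both arguments can also be combined in a single build:

docker build --build-arg password=SuperSecretPassword --build-arg cache=ensemble-2016.2.1.803.1 -t ensemble-simple .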

And we are ready to run this image with the command:

docker run -d -p 57779:57772 -p 1979:1972 ensemble-simple

Here 57779 and 1979 are the host ports which you can use to access the services inside our container.
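
If you prefer the standard ports and a predictable container name, a variant could be:

docker run -d --name ensemble -p 57772:57772 -p 1972:1972 ensemble-simple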

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                              NAMES
5f8d2cb3745a        ensemble-simple     "/ccontainermain -..."   18 seconds ago      Up 17 seconds       0.0.0.0:1979->1972/tcp, 0.0.0.0:57779->57772/tcp   keen_carson

This command shows all running containers with some details.

You can now open http://localhost:57779/csp/sys/UtilHome.csp. Our new system is running and available.
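
If you need a terminal session inside the running container, a minimal sketch, assuming the default instance name ENSEMBLE and the container name reported by docker ps above:

docker exec -it keen_carson csession ENSEMBLE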

 

Sources can be found here on GitHub.

UPD: The next part of this article is already available here.


Comments

It's super satisfying being able to spin up full cache systems with a single command!

Have you got any strategies for persistence? Say I log in, add a cache user, edit some globals in USER and then notice Cache 2017.1 is out - how best to upgrade?


Stay tuned, I'm going to write about that next. In this article, we built a basic image, which will serve as the FROM source for the next images containing our application.


Hey Dmitry,

       A couple of us on my team have been experimenting with Windows containers, and I see how easy it is to install Caché on Linux in a container. What about a Windows Server Core container? Any ideas on direction with this?


Hi, I would not recommend thinking seriously about Caché in a Windows Server Core container; I think it is too early yet.


Hi guys,

Thank you for the thread! Containers are here to stay; suffice to say that all public and most private cloud providers offer specific services to support just containers. However, we should look at them as a new system. There are many gotchas but also many aspects about them that will help the way we work.

I have some comments

On the Dockerfile above we need to make sure we tell the full story and educate as appropriate as we are all learning here:

-Container layer OS: Running OS updates on every single triggered build from your CI/CD pipeline might not be the best thing to do if you truly want to know what you have running in production. It's better to ask for the exact OS version you desire in the FROM statement above. In general, getting the "latest" tag of a Docker image is not such a great idea.
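
For example, pinning a specific point release instead of a moving tag (6.8 here is purely illustrative):

FROM centos:6.8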

As a side effect, if you need a particular package installed, make sure it is what you know and pin the exact package version you desire (isn't an automated delivery pipeline and infrastructure-as-code about knowing exactly what you have and having it all versioned?). Use $ apt-get install cowsay=3.03+dfsg1-6

-Provenance: we need to make sure we are getting the image we think we are getting. Man-in-the-middle attacks do happen, and organisations should make sure they are covered. Please investigate this step and look at tools like pulling an image by its hash (docker pull debian@sha256:cabcde9b6166fcd287c1336f5....) or, even better, Docker Notary if your image publisher has signed images.
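
For example, Docker Content Trust can be switched on for a shell session so that only signed images are pulled (a minimal sketch using the standard environment variable):

export DOCKER_CONTENT_TRUST=1
docker pull centos:6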

-On Security: Watch those passwords in either Dockerfile definitions or env vars...

-Container image size: copying a tarball will expand the storage layer Docker creates for the "ADD file" statement, and that layer cannot be shrunk afterwards. One should try to use an FTP or HTTP server to download the kit AND run all commands in a single statement (fetch the distribution, install, and remove unwanted files). That way you keep that single layer you're working on small. Your code then should read something like:

RUN curl http://mysourcecodeRepo.com/file.tgz -o /file.tgz \
 && tar xzf /file.tgz \
 && ./myInstall.sh \
 && rm /file.tgz

 

-On Persistence: Obviously your data must be mounted on a volume. One of the nice things about containers is the clear demarcation between code and data. As far as Caché system data is concerned, like the cache.cpf file and %SYS we are working on a solution that will make Caché a 1st class citizen of the container world and it will be very easy to use and upgrades will just work.
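
A minimal sketch of the volume idea, assuming the ensemble-simple image from the article and an illustrative host path (you would still need to configure Caché to place its databases under that directory):

docker run -d -p 57772:57772 -p 1972:1972 -v /data/ensemble:/data ensemble-simple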

 

HTH and thanks again for the thread!

Keep them coming :-)

 


-On Persistence: Obviously your data must be mounted on a volume. One of the nice things about containers is the clear demarcation between code and data. As far as Caché system data is concerned, like the cache.cpf file and %SYS we are working on a solution that will make Caché a 1st class citizen of the container world and it will be very easy to use and upgrades will just work.

This sounds perfect - can you possibly elaborate on details/timeframe?

We're rolling out Cache on Docker over the next few months, so it would be really nice to know to what extent we should roll our own infrastructure in the interim :)


"we are working on a solution", Sebastian :)

We are very pleased with further enhancements & improvements we have been making from the first internal version. The fundamental idea is that, while containers are ephemeral, databases are not and we want to assist in this area. While you can deal with your application database and make sure you mount your DBs on host volumes, there could be an issue as soon as you use %SYS. You use %SYS when you define your z* | Z* routines and/or globals or when you define user credentials, etc. In this case, right now, you'd have to create an export and an import process which would not be elegant, nor would it help to be agile. What we will offer is a very easy-to-use facility by which you can define where %SYS will reside on the host... and all using canonical Docker syntax. Stay tuned...


Is there a license model that supports containers and/or microservices?

 


@Herman

You have the option to elect license servers for cooperating instances as per documentation.

HTH


AFAIK, Docker encourages implementing microservices. What about classic Caché-based app deployment; can Docker be useful in this case? By classic I mean a system accessible via several services (ActiveX, TCP sockets, SOAP and REST), multi-tasking, etc., with an initial 1-2 GB database inside. At the moment I need to choose a solution to roll out several dozen rather small systems of this kind. Should I look at Docker technology in this particular case?

I am bound to ver. 2015.1.4. The host OS will most likely be Windows 2008/2012, while I'd prefer to deploy our system on Linux; that's why I am looking for a lightweight solution to virtualize or containerize it. Thank you!


Alexey, yes, it is very easy to build a little application in just one container. I don't see any problem with using any of the supported technologies as well. Docker can even provide some load balancing, which is also good for microservices. I'm going to show an example with an application in the next article soon.


Thanks, Dimitri.

Alexey: Docker does not encourage anything aside from using its container technology and its EE (Enterprise Edition and cloud, to monetise their effort) :-) However, containers in general help and fit very well into a microservices-type architecture. Please note that you can create a microservices architecture without containers, via VM-, jar-, or war-based solutions with a different type of engine. Containers just lend themselves to it more naturally.

It's worth pointing out that just because people talk about 1 process per container, it does not preclude you from using multiple processes in each container. You could naturally have 3 sockets open, for example, 57772, 1972 and 8384, all serving different purposes, plus various background processes (think of our WD & GC), and still be within the boundaries of a microservice definition with a clear "bounded context". For more info on microservices you might want to read Martin Fowler's microservices article and books like Building Microservices by Sam Newman or Production-Ready Microservices by Susan J. Fowler. Also check out Domain-Driven Design by Eric Evans, where "bounded contexts" and similar concepts like context, distillation and large-scale structures are dealt with much more in depth.

On the 2 GB database inside the container, I would advise against it. In general, one of the advantages of containers is the clear separation of concerns between code and data. Data should reside on a host volume, while you're free to swap containers at will to test & use the new version of your application. This should be an advantage if you use a CI/CD provisioning pipeline for your app.

Having said all that, it depends on your specific use case. If I want my developers to all have a standard data set to work against, then I might decide that their container environments do indeed feature a CACHE.DAT. Our Learning Services department has been using containers for two years now, and every course you take online runs on Docker containers. As I said, it depends on the use case.

In reference to your last comment: right now (March 2017) I would not use Windows for a container solution. Although Microsoft has been working closely with Docker for over a year, containers are born out of a sandboxing effect just above the Linux kernel (see kernel namespaces and cgroups).

HTH for now.
 


Thanks, Dmitry and Luca.

Meanwhile I read some docs and interviewed colleagues who have more experience with Docker than me (although without Caché inside). What I've got out of it: Docker doesn't fit this particular case of mine well, as it is mostly about deployment of an already developed app rather than new development. The reasons are:

- Only one containerized app at the client's server (our case) doesn't bring as many benefits as several would;

- Windows issues which Luca already mentioned;

- I completely agree that "data should reside on a host volume..." with only one remark: most likely all this stuff will be maintained at the client's site by not very well skilled IT personnel. It seems that in the case of Docker/host volumes/etc. the configuration will be more complex than rolling out a Caché for Windows installation with all possible settings prepared by a %Installer-based class.


@Alexey

It sounds like having to deploy on-site, as you say, might not be the best use case, if I understand correctly.

If they use virtualization, wouldn't it be easier for you guys to deploy a VM that you prepare at your site and just mount the FS/DB they have? That way you'd still have the privilege to run and have guarantees on your build process vs having to do it all on-site.

Just a thought.

All the best

 


Luca,

It was clear that we could use VMs from the very beginning of the project, and maybe we'll take this approach in the end. I just looked at the Docker side hoping to find a lighter/more reliable alternative.

Thank you again.


Hi, Dmitry!

What should I do if I want to use a "prepared" Docker image?

E.g. not the standard installation but an image with some 3rd-party community software installed and set up, like WebTerminal, ClassExplorer, MDX2JSON and DeepSeeWeb, cache-udl, Cache REST-Forms, etc.


Stay tuned, I'm going to show how to do it in the next part.


The continuation of this article is already available.
