Question
· Dec 8, 2016

Using Docker for standalone deployment of Ensemble

How suitable is Docker for standalone deployment of an Ensemble version and Ensemble application together?

The context is deployment by an application partner of an integration application and the supporting Ensemble version as a single package (single file ideally), to multiple environments and to multiple customer sites.

I don't have experience with Ensemble on Docker so I'm wondering what gaps and pitfalls may exist.

The focus of the question is deploying the Ensemble product and application code - I do understand that consideration is needed on management of the application data, including any retained Ensemble messages.

This would be for new applications on Ensemble 2016.1+ on Linux. Linux containers on Windows are of interest - but how mature is the Windows support for them?

Discussion (8)

When you want to use Ensemble on Docker, it usually means you need to integrate with some external services. When you only have to connect out to external services, no extra steps are needed. But when you need a shared folder, or internal services of your own that others will consume, you have to expose TCP ports or volumes, and making such changes to an already-running container is awkward. Alternatively, you can use the container's network settings so that Ensemble behaves like an ordinary machine on your network; in that case you don't need to change any container settings when you add a new internal service to an Ensemble production.
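For example, a minimal sketch (the image name "ensemble" matches the build example later in this answer):

# publish just the web port; each new inbound service needs another -p flag
docker run -d -p 57772:57772 ensemble

# or share the host's network stack, so Ensemble appears as an ordinary
# machine on the network and new inbound services need no container changes
docker run -d --net=host ensemble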
Windows also supports Docker, but only from Windows Server 2016 onwards. Windows supports Windows containers as well, but I don't know yet whether running the Windows version of Ensemble in them is supported.

To start your first Ensemble container with your Ensemble application inside, the easiest way is to define a %Installer manifest. In it you can create a new database, load and compile your code, and adjust Ensemble settings.
There are several different example Dockerfiles: quite simple ones with only Ensemble, or even ones with Apache inside, for a web application.

You can also look at this example of a Dockerfile:

### Cache ###
FROM tutum/centos:latest

MAINTAINER Dmitry Maslennikov <Dmitry.Maslennikov@csystem.cz>

# update OS + dependencies & run Caché silent install
RUN yum -y update \
 && yum -y install which tar hostname net-tools wget curl \
 && yum -y clean all \
 && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime

ARG WRC_USERNAME
ARG WRC_PASSWORD
ARG cache=ensemble-2016.2.1.803.0
ARG globals8k=512
ARG routines=32
ARG locksiz=117964800

ENV TMP_INSTALL_DIR=/tmp/distrib

# vars for Caché silent install
ENV ISC_PACKAGE_INSTANCENAME="ENSEMBLE" \
    ISC_PACKAGE_INSTALLDIR="/opt/ensemble/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
# vars for installing our application
    ISC_INSTALLER_MANIFEST=${TMP_INSTALL_DIR}/Installer.cls \
    ISC_INSTALLER_LOGFILE=installer_log \
    ISC_INSTALLER_LOGLEVEL=3 \
    ISC_INSTALLER_PARAMETERS="routines=$routines,locksiz=$locksiz,globals8k=$globals8k"

# set up and install Caché from the temporary distribution dir
WORKDIR ${TMP_INSTALL_DIR}

# our application installer
COPY Installer.cls .
# custom installation manifest 
COPY custom_install-manifest.isc ./$cache-lnxrhx64/package/custom_install/manifest.isc 
# license file
COPY cache.key $ISC_PACKAGE_INSTALLDIR/mgr/

# Caché distribution
RUN wget -qO /dev/null --keep-session-cookies --save-cookies /dev/stdout --post-data="UserName=$WRC_USERNAME&Password=$WRC_PASSWORD" 'https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp' \
 | wget -O - --load-cookies /dev/stdin "https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/$cache-lnxrhx64.tar.gz" \
 | tar xvfzC - . \
 && chmod +r ./$cache-lnxrhx64/package/custom_install/manifest.isc \
 && ./$cache-lnxrhx64/cinstall_silent custom_install \
 && cat $ISC_PACKAGE_INSTALLDIR/mgr/cconsole.log \
 && cat $ISC_PACKAGE_INSTALLDIR/mgr/installer_log \
 && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly \
 && rm -rf $TMP_INSTALL_DIR

# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
RUN curl -L https://github.com/daimor/ccontainermain/releases/download/0.1/ccontainermain -o /ccontainermain \
 && chmod +x /ccontainermain

# TCP ports that can be published if the user wants (see the 'docker run -p' flag)
EXPOSE 57772 1972 22

ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "ensemble"]

And the installer manifest, which just adjusts a few settings.

Include %occInclude

Class Temp.Installer
{

XData setup [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <If Condition='+"${routines}"=0'>
    <Var Name="routines" Value="32"/>
  </If>
  <If Condition='+"${globals8k}"=0'>
    <Var Name="globals8k" Value="256"/>
  </If>
  <If Condition='+"${locksiz}"=0'>
    <Var Name="locksiz" Value="1179648"/>
  </If>

  <SystemSetting 
    Name="Config.config.routines"
    Value="${routines}"/>

  <SystemSetting 
    Name="Config.config.globals8kb"
    Value="${globals8k}"/>

  <SystemSetting 
    Name="Config.config.locksiz"
    Value="${locksiz}"/>

</Manifest>
}

ClassMethod setup(
    ByRef pVars, 
    pLogLevel As %Integer = 3, 
    pInstaller As %Installer.Installer, 
    pLogger As %Installer.AbstractLogger
  ) As %Status [ CodeMode = objectgenerator, Internal ]
{
    do %code.WriteLine($char(9)_"set pVars(""CURRENTCLASS"")="""_%classname_"""")
    do %code.WriteLine($char(9)_"set pVars(""CURRENTNS"")="""_$namespace_"""")
  #; Let our XGL document generate code for this method. 
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}

In this example, the Ensemble distribution will be downloaded directly from the WRC.

To build this image:

docker build -t ensemble --build-arg WRC_USERNAME=******* --build-arg WRC_PASSWORD=******* .

And to run a container from this image:

docker run -p 57772:57772 ensemble
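On the single-file packaging asked about in the original question: a built image can be exported to, and loaded from, a single tar archive with standard Docker commands. A minimal sketch (the volume paths are hypothetical; where durable data such as retained Ensemble messages should live depends on your installation layout):

# package the image as a single file for distribution
docker save -o ensemble.tar ensemble

# at the customer site
docker load -i ensemble.tar

# keep durable data (databases, journals, retained messages) on a host volume
docker run -d -p 57772:57772 -v /data/ensemble:/durable ensemble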

Thanks @Dmitry; perfect.

@Andrew: as you can appreciate, Docker containers can be very suitable for just about any use case where you want to embrace infrastructure-as-code and, even better, immutable infrastructure. Of course, we are then already talking about automation, provisioning, deployment processes and, in many cases, DevOps and, why not, microservices.

It's typically a full package that one embraces as soon as one tiptoes into any of the above areas. Why? Because cloud computing (off-prem and on-prem) is about a modus operandi: agility and lowering the TCO of an app (typically over its running lifespan).

There are many considerations and gotchas. We are working on improving the experience for those choosing this technology.

HTH

Ha... there was another question on MS-Windows support :)

Given the Docker engine's dependency on the Linux kernel, its dependencies on and support for the various Linux filesystems, and the fact that the September announcement was just for a GA version on Windows Server 2016, I would not waste my time on it right now.

Spin up a Linux node on your private or public cloud and just enjoy the speed and lightness of containers. :)
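For instance, with docker-machine (a sketch assuming the VirtualBox driver for a local node; drivers such as amazonec2 cover public clouds):

# create a small Linux Docker host
docker-machine create --driver virtualbox ensemble-host

# point the local docker client at it, then work with containers as usual
eval $(docker-machine env ensemble-host)
docker run -d -p 57772:57772 ensemble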

HTH

Let me add my experience to this comment. I have been wading into the Docker ocean. I am on Windows and really did not want to run a Linux VM to get Docker containers (that seemed a bit redundant to me), so Docker for Windows was the way to go. So far this has worked extremely well for me. I am running an Ubuntu container with Ensemble added in. My Dockerfile is a simplistic version of the one earlier in these comments. I am having only one issue, related to getting the SSH daemon to run when the container starts.

 I hope to have all my local instances moved into containers soon.

My feeling is that this will be great for demonstrations, local development, and proofs of concept. I would agree that for any production use, having a straight Linux environment with Docker would be a more robust and stable solution.

I am having only one issue related to getting the SSH daemon to run when the container starts.

Do you need it for local use, just to have access to the console, or do you have to share console access externally?

If you only need local access, you can use the docker exec command to get inside your container and run a command:

docker exec -it 'container_name' csession 'instance_name' -UUSER

If you have to share access, I think it is also possible to achieve, but with a second container and links between them.
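A rough sketch of that idea; "some/sshd" here is a placeholder image, not a real published one:

# the Ensemble container itself does not run sshd
docker run -d --name ens ensemble

# a second container provides SSH and reaches the first over the link
docker run -d -p 2222:22 --link ens:ensemble some/sshd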

No, it is not 'necessary'. However, I do like to have an environment that more closely matches what one might need in production. This is both for my own experience and to be able to show InterSystems technology in a manner that might occur for a client.

I do use docker exec, though I choose to go to bash so I have more general access. I actually wrote a simple cmd file to do this and added it to a menu on my toolbar:

@echo off
REM list running containers so the name can be copied
docker container ls --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo:
REM prompt for a container name or ID, then open an interactive bash shell
set /P Container=Container ID: 
docker exec -it %Container% bash