Article
· Dec 4, 2015

Atelier Beta Cloud Infrastructure

A few people wrote to me asking about the infrastructure behind the Atelier server implementation. It's a neat and worthwhile story to share, so I am writing it up here as a post on the community. I want to go into a little detail on why it was needed, and then outline how we went about implementing it.

So why did we need to give people a “server in the cloud”? Atelier is the new IDE for InterSystems products. It is Eclipse based and implemented in Java, so it is cross-platform. The communication between the Atelier client and the server is REST based, over HTTP. The REST API for Atelier is new and was built to support not just Atelier but anyone who wishes to roll their own IDE against Caché. It does not yet exist in any released product, or even in the current field test version, 2016.1. So to distribute the field test, every user would have needed to install a new version of the server as well as the new client; without a new server you can’t use Atelier at all. Going forward we will ship versions that include the API, which will remove this limitation.
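
To give a sense of what that client/server traffic looks like, here is the kind of request involved (shown with curl; the credentials, port, and exact paths are placeholders and may differ between builds):

# Ask the server-side Atelier endpoint what API versions and namespaces it exposes.
# _SYSTEM/SYS and port 57772 are just the usual defaults; substitute your own values.
curl -u _SYSTEM:SYS http://localhost:57772/api/atelier/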

A second consideration was to drive home one major architectural difference between Atelier and Studio. Studio is an editor that presents what is available in a given server instance. Atelier is an independent IDE that can be used even when no instance of Caché is available. The code you edit lives outside the database; the server is no longer the repository of the code base. It is simply the compiler and debugging environment for testing the code you are developing.

A last consideration was the rise of containers as a critical new element in the many ways people can leverage the cloud. For those not familiar with containers, they differ from virtual machines in how they interact with the host OS, and they are a key building block of the elastic cloud: you need a resource, you grab it, and when you are done you discard it. If you need it again you can always just grab it again. It's the model driving DevOps, and it has made Docker a critical new technology. However, Docker, the industry leader, today works primarily on Linux. So while this was a great solution for the server, it was not suited for the client, because we don't want to force everyone to learn Linux to play with Atelier.
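
In Docker terms that grab-and-discard cycle really is just a couple of commands (a generic illustration, not specific to our images):

# Grab an image, run a throwaway container from it, and let Docker
# delete the container automatically when the shell exits.
docker pull centos
docker run --rm -it centos /bin/bash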

However, this led to what is really a perfect model for what we needed with Atelier and this beta. You have the client on your own machine, under your control, with all your code there. When you need to compile something, debug, or run a unit test, the server container is there for you, and when you are done you can discard it. Your code is safe: it's in the client's local repository, and hopefully integrated with source control. In addition, because we control the container we can update it at any time to pick up new features. You as the beta tester don't have to download a new version every couple of days; when we have something worth updating for, we just publish a new container.

So how did we do it? We are using Docker to build the containers. We take an install of the server code and install it into a CentOS image, setting up users, loading the license key, and so on using standard Docker scripts. We give this Dockerfile to a company based here in Boston called Appsembler. They also host our e-learning offerings using a similar stack. They put the container up on Digital Ocean, who deploy it into AWS EC2 environments. Appsembler provides me with a JavaScript widget that renders the Launch Server button on the field test page. Implemented in jQuery, it calls Appsembler via REST and provides them with a unique identifier for that user. On their side they then launch a container based on a set of business rules we agreed to: How long will it be up for? Can it be relaunched? Does it come back in the original container state, or do changes need to be preserved? And so on. I don't have to worry about any of these things; Appsembler sorts all of that out for me.
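
Conceptually, the widget boils down to a call like the one below. The endpoint and parameter names here are made up purely for illustration; the real API belongs to Appsembler:

# Hypothetical sketch of the launch request behind the button: POST the
# user's unique identifier and get back the address of the new container.
curl -X POST 'https://api.appsembler.example/launch' \
     -d 'project=atelier-beta' \
     -d 'user=UNIQUE-USER-IDENTIFIER'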

In conclusion, I just want to say that I think containers are a great technology. At the moment this market is owned by Docker, but Microsoft is adding Docker support in Windows and is working on its own container solution. We are preparing an official statement on container support as well. If you have any additional questions, feel free to post them here and I will reply as soon as possible. Happy coding!

Discussion (24)

I am not sure exactly what you are asking. The only current restriction is that you have to have a valid WRC credential for access to the online container. If you want to install a version locally, then right now the answer is no. We will be updating this container at an aggressive rate. At a later date, as we get closer to release, there will be more channels for accessing Atelier servers.

I think he might be asking about running a Docker instance locally as you are doing in the cloud. This is something my boss has expressed interest in as containers appear to be the next evolution in virtualization.

My boss asked me last Thursday if Caché and Ensemble could run OK in a container, using a config file to indicate version numbers and all that good stuff. I replied at the time that I was not sure. Since then I have been feeding him more info about how you guys did it as I learn more.

Any best practices you can offer for containerizing Caché and Ensemble in Docker would be just as useful as the best practices for classic virtualization.

Here is why everyone is asking...

http://blogs.technet.com/b/server-cloud/archive/2015/11/19/announcing-the-release-of-hyper-v-containers-in-windows-server-2016-technical-preview-4.aspx

Any insight is always appreciated here.

Great post! Thanks!

Hi,

I am using the Dockerfile steps suggested in https://github.com/zrml/ccontainermain. I have noticed a problem with the resulting Docker image size: with a full 2016 HealthShare install, the image comes out at around 11 GB. I removed the temporary install dir, which should save 5+ GB, but it did not reduce the Docker image size from 11 GB. It appears that, because of the way Docker containers work, "the ADD package, the Cache Install and the Dir remove steps were executed in different layers". The solution seems to be that the package download, extract, Caché install, and removal of the temporary install dir should all happen in one layer to keep the image small. That means using wget/curl to download the package from wrc.intersystems.com, but that did not work for me. I do have proper access, but I am not able to download the package using wget/curl. Would you be able to help with that?
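
Roughly, the pattern I am describing looks like the sketch below (names abbreviated, paths illustrative); because each instruction is its own layer, the final rm has no effect on the size of the earlier layers:

# Each instruction below becomes its own image layer, so the ~5 GB kit added
# in the first step is still carried in the image even after the last step
# deletes the directory.
ADD HS-2016.1.2.xxx-lnxrhx64.tar.gz /tmp/distrib/            # layer 1: kit extracted here
RUN cd /tmp/distrib/HS-2016.1.2.xxx && ./cinstall_silent     # layer 2: installed instance
RUN rm -rf /tmp/distrib                                      # layer 3: hides the files, shrinks nothing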

Thanks

Prasad

For example, this is the package that I am trying to download.

HS-2016.1.2.208.0-hscore15.01_hsaa15_hspi15_hsviewer15.01_linkage15-b6402-lnxrhx64.tar.gz

Commands used with real password:

1)curl -v -O -u pkosuru@j2interactive.com:PASSWORD https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/...

2)wget --no-check-certificate --user=pkosuru@j2interactive.com --password=PASSWORD "https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/..."

I use the following script (courtesy of Alexander Koblov):

wget --delete-after --keep-session-cookies --save-cookies cookies.txt --post-data='UserName=?????&Password=???????' 'https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp'
wget --load-cookies cookies.txt --content-disposition 'https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/HS-2016.1.2.208.0-hscore15.01_hsaa15_hspi15_hsviewer15.01_linkage15-b6402-lnxrhx64.tar.gz'

The first line authenticates and saves the cookies into a file. The second line loads the cookies from the file and downloads the file.
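
For what it's worth, the same two-step idea expressed with curl (untested here; it simply reuses the form fields and URLs from the wget script above) would be:

# Step 1: log in to the SSO page and save the session cookies to a file.
curl -c cookies.txt -d 'UserName=?????' -d 'Password=???????' 'https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp'
# Step 2: reuse the cookies and let curl name the file from the Content-Disposition header.
curl -b cookies.txt -O -J 'https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/HS-2016.1.2.208.0-hscore15.01_hsaa15_hspi15_hsviewer15.01_linkage15-b6402-lnxrhx64.tar.gz'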

Hi,

Thank you for the reply. The first command did save the cookie, but the second wget still resulted in a Stream Not Found error. Please see below. Am I doing it wrong?

Thanks

Prasad

ubuntu@ip-172-31-34-120:/tmp$ wget --load-cookies cookies.txt --content-disposition 'https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/FieldTes...'

--2016-10-08 03:30:01--  https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/FieldTes...

Resolving wrc.intersystems.com (wrc.intersystems.com)... 38.97.67.139

Connecting to wrc.intersystems.com (wrc.intersystems.com)|38.97.67.139|:443... connected.

HTTP request sent, awaiting response... 404 Stream Not Found

2016-10-08 03:30:02 ERROR 404: Stream Not Found.

 

ubuntu@ip-172-31-34-120:/tmp

Hi Prasad,

Deleting the install dir after copying it in won't shrink the image: the earlier copy has already bloated the filesystem, and Docker creates a new FS layer for every command.

The best strategy is to run a single command and perform your delete within it. That will help. My personal opinion is that we should provide discrete components and "pullable" resources, but that's another story.

The other thing to note is that pulling the distribution via HTTP/FTP, versus copying it into the image, also helps.

Try something like

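# Download, extract, install, and clean up in a single RUN so that no image
# layer ever retains the distribution kit: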
RUN yum -y install tar hostname net-tools \
  && curl -o ${INT_F_NAME} ftp://myftpserver/distrib/CacheDB/${DIST_VER}.${DIST_BUILD}/${DIST_NAME}.tar.gz \
  && tar zxf ${INT_F_NAME} \
  && cd ${DIST_NAME} \
  && useradd cacheusr \
  && ./cinstall_silent \
  && cd ..; rm -fr ${DIST_NAME} \
  && rm ${INT_F_NAME} \
  && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly

--

 

HTH

@Ryan I think you might be right about @Dmitry's comment.

We are supposed to announce Docker container support any minute now. We have been testing for a couple of releases. Aside from the usual gotchas, it's "business as usual" as long as you use supported OS container images. Right now these are Red Hat, SUSE, and CentOS.

Your Docker engine can run on anything you like of course. That's the whole point of it and it does it fast! :)

I would recommend using Ubuntu 15.04 and above, as it comes with a recent kernel and therefore does not have the SHMMAX restriction (32 MB!). You'll be happy when you run your container and ask for more shared memory in the form of global buffers :-) or just at installation time. The alternative is to use the --privileged flag, which is of course not desirable.
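
A quick way to check what your container will actually get (a generic check, nothing InterSystems-specific):

# Print the kernel shared-memory ceiling as seen from inside a container.
# Older host kernels typically report 33554432 (32 MB); recent ones report
# an effectively unlimited value, so the global buffer allocation just works.
docker run --rm centos cat /proc/sys/kernel/shmmax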

Best practices? Treat it as a new platform with many things to learn. There is plenty of documentation out there, starting with the Docker site itself. Of particular interest is the data persistence part, as containers are immutable by nature: How do you save data? Where do you save it to? Why should I then use containers at all?

If you can get hold of the Global Summit 2015 presentation titled "All about Containerization" on our site, you'll find interesting things in there. Let me know.

Containers are not the solution to all our troubles. However, they aid many aspects of architecture design, as they surface operational issues much earlier in the process. The idea is that the Ops people work closely with developers... and both find a programmatic (automation) solution to the deployment of an app.

Yes, if you thought of DevOps, you're in the right quadrant. Furthermore, think of infrastructure-as-code and even immutable-infrastructure.

HTH for now

Bill, Luca,

Have you ever considered Caché as a container as opposed to in a container?

Secure (sandboxed) small applications (let's call them MicroServices ;-)) that are easily deployable, replicable, and synchronizable across multiple Caché instances.

I still find Docker containers a huge overkill (in terms of needed stack) if all you want to containerize are Caché applications.

This is an interesting idea, but what would those containers export (namespace access?)? How would those containers be reverted to their initial state? What isolation should be provided between containers? And what sequence of steps and set of states would be cloned from the base container for each different user?

The efficiency of Docker is heavily based on unionfs, cgroups, and bridged networking, which were already available in Linux. What could serve a similar purpose here? I assume you thought about all this before asking, and have some ideas. That would be interesting to hear.

I am not sure it is fair to say Docker is huge overkill.

We researched infrastructure-as-code as mentioned by Bill the other day and it blew our minds what folks are doing out there with containers.

I have not tested this yet, but it seems that on Heroku now we can develop an app in a container all in code, specifying all dependencies and such. It is tested on the local machine using Docker. Then, once it is all done, it is checked into GitHub, which Heroku pulls from, and then they do the exact same thing... they tear down the old container, rebuild a new one based on the updated code, and deploy the app and the container on their server.

So we are not really deploying code anymore. We are deploying containers from development on the local workstation, to staging on a test server, to production.

At least I think this is where it is at but I am still putting pieces together.

I still need to test it out for myself to see if it is really that slick.

It seems we are just scratching the surface here.

So, to put this in terms of Caché...

Let's say we have 30 developers all writing Caché code for an environment that has 10 different Caché servers, several of which are staging/development servers, while others might be test servers for the latest upgrade to the newest version of Caché.

Getting all of these tools together (Caché Studio, Visual Studio, Atelier, Entity Framework support, and the different versions of Studio needed to test the different versions of Caché) is a huge mass of software to maintain across 30 workstations.

So how could containers be used here?

Could we do it the way Heroku does it?

Store code for the container, all dependencies, and all code for the app in source control.

Then push that out to other developers, who will log on in the morning and find their workstation container is new and has new software on it for the next version of Caché, because it passed testing the night before. Similar containers would go out to all of the servers. And somehow, developers would write entire apps in their workstation container and push the source of that container and app out to staging and production?

Hi Ryan,

as an extra comment:

Yes, what you describe is absolutely possible and we have customers actually doing that.

Things may vary depending on your work processes. There are complexities to overcome; however, containers are very flexible and a great help in the development, testing, demo, and training stages, as you're already picking up yourself.

For production purposes I would suggest looking into a microservices architecture; however, a more traditional, legacy, or monolithic approach is not precluded either.

@Herman:

The busybox container, last time I checked, was 1.1 MB (yes, MB), and I think the "helloworld" one was 256 KB.

That is a nice portable sandbox. I agree with @Ryan: not overkill, but just the thin veil of the sandbox :) We'd have to re-invent a Linux union FS... there is no point. I think the trick is to start from the container you find most suitable for your specific needs: size, tools already configured, support, etc.

As an example, I like the Tutum CentOS image because it provides the SSH daemon that the official CentOS image does not have. SSH is cumbersome to set up, with all the security and start-up script options that we would need, etc. Tutum does a great job and has been maintaining it for over a year now. Ah, they've also just been bought by Docker ;)

Bottom line: we are not in the OS business but are happy to work with these great innovations. 

You mentioned micro-service: YUP! That's where we're all going...even those with monolith...(all of us) :-D 

Micro-services deserve a thread or a GS2016 session on their own, so I won't waste this space :)

You also mention the stack... well, what about a standard VM? You don't care about the OS, but you must maintain it; it takes ages to boot up, and ditto for shutdown... isn't that the real bloat?

Thanks for your contribution!

Glad to have these kind of discussions.

I didn't mean 'huge overkill' in the sense of the number of bytes, but merely the stack that is needed to run a simple Caché service (Docker, Linux, and a complete Caché instance). And yes, it's a huge improvement compared to VMs.
For now (at least in this phase of containerization) you need to know quite a bit of Linux, and since I'm not a DevOps guy, I don't want to know Linux (or any other OS for that matter) beyond being able to use it.
 
I've been working on a skunkworks project (Bento) that tries to provide containerization using just Caché.
My thinking is around two kinds of containers: application (code) containers and data containers, each exposing a REST interface.

Application containers don't have state, so they can be instantiated, upgraded, and replicated quite easily.
The problem is with the data containers: they can be instantiated from an initial state, of course, but the amount of catching up to do in terms of data synchronization could be substantial.
It depends on the granularity of the data, but it makes no sense to instantiate a data container that needs to catch up on terabytes of data.

Bento is far from being published, not even as a demonstration, but I might put together a little presentation that explains its principles.

I don't think we need/want/can rebuild all the stuff that Linux provides, but it should be possible to sandbox a namespace a lot more than is possible right now.

I think both solutions (Caché in a container and Caché as a container) can exist, each having its own advantages.

@Herman:

OK, understood. However, bear in mind that with a container you don't have to worry about the OS (your cloud provider does); you just worry about "containerizing" your "service".

It sounds like you are a developer :) you need to pair up with an Op guy then :-D

You don't have to know much. Just write a Dockerfile manifest to create your container image, then ask a cloud provider to spin it up for you. https://www.joyent.com/ only uses containers these days, for example... just like Google.

A word of warning on data containers: they are only alive while the Docker daemon is up, and I would not trust that type of container to hold my data. Data must reside on reliable memory (NVM) and/or storage devices. Therefore, the only reasonable choice now is to mount host volumes (see the "docker run -v" option). Furthermore, you can take advantage of storage-level snapshots (see AWS, for example), LVM, etc.
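
For example (the image name and paths below are just placeholders, not a supported layout):

# Keep the database files on a host directory (or an EBS/LVM-backed mount)
# so the data outlives any individual container.
docker run -d --name cache1 -v /data/cache:/opt/cache/mgr my-cache-image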

HTH and all the best with the Bento project. Let us know when available on Github.

@Ryan:

Where I come from we'd say "You are putting a lot of meat on the fire" :-)

What you're referring to is the wonderful world of DevOps. The bottom line is that, yes, you can achieve all that. However, it's a new journey on a new road and like any new adventure, we'll all learn things as we do them. We can also learn from other people and architectures that we use every day ;)

One extra aspect that makes our job interesting with Caché is the fact that it's not just code, dynamically linked to an environment (container) or statically compiled (I'm thinking of C executables, Node.js, Go, Java, etc.); we also have to deal with a database, and a container has only three ways to deal with persistence.

There are interesting challenges; none insurmountable :)

Again, this topic needs a session or a day at GS :)

FYI, one of the best presentations (actually describing a real working solution covering DevOps, microservices, and immutable infrastructure) was delivered last year at DockerCon 2014 by the then CTO of Gilt.

IMO it's worth a watch, as the speaker has a very good dialectic and presentation style.

https://www.youtube.com/watch?v=GaHzdqFithc

We'll need a new thread for any of the above-mentioned approaches and technologies, though :-)

Thanks for posting!

Hi Bill and Luca,

Thanks for the mention of Appsembler and our Docker-powered service. Regarding the comment about running Windows software inside containers, we're currently evaluating the new beta offering from Microsoft's Azure for another customer. https://msdn.microsoft.com/virtualization/windowscontainers/containers_w...

We'll keep you posted as this development progresses!

Nate