Docker


Hi guys!

Portrait of Madame X, Gustave Caillebotte.

One of the features I like in InterSystems ObjectScript is the way you can hand array processing off to a dedicated method or function.

Usually when we say "process an array" we mean a very straightforward algorithm that loops through the array and does something with its entries according to a certain rule.

The trick is how you pass the array you want to work with into a function.

One of the nice approaches is to pass the name of the array obtained with $Name and then access it through the indirection operator.

Below you can find a very simple example which illustrates the idea.
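A minimal sketch of the idea, assuming a global ^Numbers and a placeholder class Demo.ArrayUtil (names chosen only for illustration):

Class Demo.ArrayUtil [ Abstract ]
{

/// Sum the top-level entries of the array whose name is passed in
ClassMethod Sum(arrayName As %String) As %Numeric
{
    set sum = 0
    set key = ""
    for {
        // walk the subscripts of the referenced array via name indirection
        set key = $order(@arrayName@(key))
        quit:key=""
        set sum = sum + @arrayName@(key)
    }
    quit sum
}

}

USER> set ^Numbers(1)=10, ^Numbers(2)=20, ^Numbers(3)=12
USER> write ##class(Demo.ArrayUtil).Sum($name(^Numbers))
42

Because only the name travels into the method, the same code works for any global, and $Name lets you start from any subscript level you like.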


Hi Community!

I think everyone keeps their project source code in a repository nowadays: GitHub, GitLab, Bitbucket, etc. The same goes for InterSystems IRIS projects: check any of them on Open Exchange.

What do we do every time we start or continue working on a repository that targets InterSystems Data Platform?

We need a local InterSystems IRIS instance, with the project environment set up and the source code imported.

So every developer performs the following:

  1. Check out the code from the repo
  2. Install/run a local IRIS installation
  3. Create a new namespace/database for the project
  4. Import the code into this new namespace
  5. Set up the rest of the environment
  6. Start/continue coding the project

If you dockerize your repository, this list could be shortened to these 3 steps:

  1. Check out the code from the repo
  2. Run docker-compose build
  3. Start/continue coding the project

Profit: no manual work for steps 3-5, which can take minutes and sometimes cause headaches.

You can dockerize (almost) any of your InterSystems repos with the following few steps. Let's go!
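As a rough sketch of what the result looks like (the service name, ports, and volume paths here are assumptions for illustration, not taken from a specific repo), the repository carries a Dockerfile that installs and imports the code, plus a docker-compose.yml along these lines:

version: '3'
services:
  iris:
    build: .                  # Dockerfile in the repo root sets up IRIS and imports the code
    ports:
      - "52773:52773"         # web server / management portal
      - "51773:51773"         # superserver
    volumes:
      - ./src:/opt/app/src    # keep the sources visible inside the container

With this in place, docker-compose build (plus docker-compose up -d to start the container) is what replaces the manual steps 3-5 of the longer list.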


1. Purpose

This is a simple 10-minute step-by-step guide on how to quickly set up various flavors of HealthShare Docker containers from scratch on a Win10 laptop.

For example, we can build a couple of HealthShare "global edition vs UK Edition" demos as shown below.

There are a couple of frequently asked questions from HealthShare colleagues and partners:

  • "I am no Docker guy, but is there a quick way to build various flavors of HealthShare containers simply for demo/PoC/dev/training or troubleshooting purpose?"
  • "I just can't make "Docker for Windows" work on my Win10 laptop - how did you make that work? What's the simplest/easiest way to play with HealthShare containers on my old Windows laptop?"

The truth is I am not a Docker specialist either; I wish I had time for it. I am using an old laptop, and I haven't even tried "Docker for Windows" yet.


Hi Community!

We're pleased to invite you to DockerCon 2019, the #1 container industry conference for all things Kubernetes, microservices, and DevOps. The event will be held at the Moscone Center in San Francisco from April 29 to May 2.

In addition, there will be a special session, "Containerized Databases for Enterprise Applications", presented by @Joe Carroll, Product Specialist at InterSystems.

See the details below.


InterSystems is delighted to announce support for Docker container technology as a platform as of the 2016.1 release.


Docker is a disruptive systems technology with many benefits; it offers clear advantages for those investing in infrastructure-as-code or immutable-infrastructure provisioning and deployment scenarios.
Like any new technology, it comes with a learning curve and many considerations to keep in mind. However, Docker container technology has already proven successful, with a huge following and many companies already using it in production. Furthermore, all major public cloud providers already support it.


InterSystems validates and supports its technologies on the Docker container engine v1.8 and above, with CentOS, Ubuntu, Red Hat, and SUSE based containers from the official Docker Hub repository or the distributions' respective projects and sites.

Have fun with it!


This is a continuation of the story about the development of my project isc-tar, started in the first part.

Just having tests is not enough; it does not mean that you will actually run them after every change. Running tests should be automated, and when you cover all your functionality with tests, everything should keep working after a change in any place. Continuous Integration (CI) helps to keep the code and the deployment procedure as bug-free as possible and automates routine procedures, like publishing releases.

I use GitHub to store the source code. Some time ago GitHub started to work on its own CI/CD platform, named GitHub Actions. It is not widely available yet; you have to be signed up as a beta tester for this feature, as I was. GitHub Actions takes quite a different approach to defining a build workflow. What is important is that GitHub Actions lets you use Docker, and it is quite easy to customize the available actions. Interestingly, GitHub Actions is really much bigger than any classic CI such as Travis, CircleCI, or GitLab CI. You can find more in the official documentation.
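For a flavor of what such a workflow can look like, here is a sketch only, in the current YAML workflow syntax (the image tag isc-tar-test and the test entry point run_tests.sh are placeholders, not the project's real configuration):

# .github/workflows/ci.yml
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build -t isc-tar-test .
      - name: Run the tests inside the container
        run: docker run --rm isc-tar-test ./run_tests.sh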


Hello all,

I am having some issues creating a Docker image with a fresh Caché installation. The error I am getting right now is that gzip is required for a Caché installation but was not found in the system, even though it is shown as installed in the base CentOS 7 image. My host machine is Windows 10 running the latest Docker version.

Here is my Dockerfile; it's simple:

FROM centos:latest

RUN yum update -y && yum -y upgrade

COPY ./cache-2017.1.3.317.0.18571-lnxrhx64.tar.gz .

RUN tar -zxf cache-2017.1.3.317.0.18571-lnxrhx64.tar.gz

RUN ISC_PACKAGE_INSTANCENAME="MyDatabase" \
    ISC_PACKAGE_INSTALLDIR="/usr/cachesys/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
    ISC_PACKAGE_INITIAL_SECURITY="Minimal" \
    /cache-2017.1.3.317.0.18571-lnxrhx64/cinstall_silent

EXPOSE 57772 22 1972

ENTRYPOINT ["ccontrol", "start cache"]


I just recently announced my project isc-tar. But sometimes what's behind the scenes is no less interesting: how it was built, how it works, and what happens around the project. Here is the story:

  • How to develop this project
  • How to test it
  • How to release new versions for publishing
  • And finally, how to automate all of the above
  • Continuous integration

So, I would like to tell you all about it.


InterSystems supports use of the InterSystems IRIS Docker images it provides on Linux only. Rather than executing containers as native processes, as on Linux platforms, Docker for Windows creates a Linux VM running under Hyper-V, the Windows virtualizer, to host containers. These additional layers add complexity that prevents InterSystems from supporting Docker for Windows at this time.


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD

In the previous article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. Still, our focus there was on the implementation part of software development; this part presents:

  • GitLab Workflow - a complete software life cycle process, from idea to user feedback
  • Continuous Delivery - a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims to build, test, and release software faster and more frequently.


IRIS is a powerful platform, and one of its new features is the Java Business Host (DOC: Connecting Systems Using Java Business Hosts), which allows you to develop Business Services and Business Operations directly in Java (JavaDocs of the InterSystems Gateway Package).

I was testing this feature using an IRIS Docker image, but this image doesn't come with Java; it is a bare Ubuntu image plus IRIS. So I had to build a new image adding the Java pieces. After some research I finally got this Dockerfile.
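A sketch of the approach looks like this (the base image tag, the JRE package, and the user name are assumptions to adjust to the IRIS image you actually use):

# add a Java runtime on top of an IRIS image
FROM intersystems/iris:2019.1
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-8-jre-headless \
 && rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
# switch back to the image's regular non-root user (the name varies by version)
USER irisowner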


I recently had to diagnose a networking problem I was having when attached to our corporate network. I was seeing an unknown bridge network being defined that shared the same IP address space as the company network, thus blocking access to company resources. This bridge network was separate from the docker0 bridge network that the Docker engine sets up. Docker had been configured with a bip (bridge IP) address to prevent Docker from using an address space that creates a conflict.
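For reference, that bip setting lives in the Docker daemon configuration, /etc/docker/daemon.json on Linux; the subnet below is only an example value:

{
  "bip": "172.26.0.1/24"
}

A daemon restart (for example, sudo systemctl restart docker) is needed for the change to take effect.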


Everybody has a testing environment.

Some people are lucky enough to have a totally separate environment to run production in.

-- Unknown


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab WorkFlow
  • GitLab CI/CD
  • CI/CD with containers

This first part deals with the cornerstone of modern software development - Git version control system and various Git flows.


I just got the new beta version of Docker, with a deprecation warning for AUFS. It's bad news that InterSystems does not support overlay2, the storage driver used by default. Recently I wanted to play with Google Kubernetes Engine and realized that I can't run InterSystems products there due to the storage driver incompatibility. Maybe it's already time to think about supporting it?
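For anyone checking their own setup, the active storage driver is visible via docker info, and it can be pinned in /etc/docker/daemon.json (overlay2 is shown here only as an example value):

$ docker info --format '{{.Driver}}'
overlay2

# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}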


Containers

With the launch of InterSystems IRIS Data Platform, we provide our product in a Docker container as well. But what is a container?

The fundamental container definition is that of a sandbox for a process.

Containers are software-defined packages that have some similarities to virtual machines (VMs); for example, they can be executed.

Containers provide isolation without full OS emulation and are therefore much lighter than a VM.

In essence, containers are an answer to the problem of how to reliably move an application from one system to another and guarantee that it will work. By encapsulating all application dependencies inside a container and creating an isolated process space, you get a much higher degree of confidence that the application will run when moved between platforms.
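A quick way to see that isolation from the command line (assuming the small alpine image is available; the output will look roughly like this):

$ docker run --rm alpine ps
PID   USER     TIME  COMMAND
    1 root      0:00 ps

The only process visible inside the container is the one we just started; the host's process table is not exposed.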


I followed the First Look instructions and tried to run a Docker container with the below command:

> docker run --name iris --detach --publish 52773:52773 --volume /Users/docker:/external --env ICM_SENTINEL_DIR=/external iris:latest --key /external/iris.key --before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /external/password.txt"

It returned with a container ID and an error message:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/iris-main\": permission denied": unknown.

"docker ps -a" shows that the status of this container is "Created", not "Up". How can I resolve this "permission denied" issue?


I have already mentioned my project CacheBlocksExplorer recently in two articles:

  1. Internal Structure of Caché Database Blocks, Part 2
  2. Internal Structure of Caché Database Blocks, Part 3

Now I would like to let you know that this project can easily be run with Docker.

Versions for Caché and for IRIS are now publicly available on Docker Hub.

Remember that you need the appropriate license key (for Red Hat Linux) to be able to run this project.
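Running it looks roughly like this (the image name, port, and key path are placeholders to adapt to the actual Docker Hub image and your license key location):

$ docker run -d --name blocksexplorer \
    -p 52773:52773 \
    -v /path/to/cache.key:/usr/cachesys/mgr/cache.key \
    <dockerhub-account>/cacheblocksexplorer:latest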


Hello!

I'm checking to see if anyone has experience deploying their Docker containers on a platform like OpenShift (or vanilla Kubernetes).

If you have, do you have any guidance or lessons learned? Is this even feasible with a HealthShare/Ensemble instance?


Global Summit is very close now, and so many people from so many companies are going to be there. I'm sure somebody already uses Docker or even Kubernetes in their work; I do. I would like to share my experience and thoughts about what could be better, and I want to hear from other people about their experience: how you use Docker, what issues you have faced, and how you solved them. I think InterSystems will help us find a time and place to do it, and I hope @Luca Ravazzolo will join us. I think it could also be a good topic for the Unconference.


Hello!

I'm interested to hear whether folks have experience using Docker containers with Caché instances using ECP. I'm wondering if there are any special considerations when setting up a distributed application with multiple containers communicating over ECP. Any input is appreciated!


Hi, Community!  

Check the second Developer Community Video of the week:

Docker Containers: Essential Knowledge

 


Hi All,

Who, in the age of digital transformation, doesn't want to reap more benefits out of every process, procedure, and resource we have? At the InterSystems Solution Developers Conference (part of InterSystems Global Summit 2018) we will have sessions on how to improve the way applications are built with modern tools like Docker containers, GitLab, CircleCI, Travis, etc., how continuous integration and continuous delivery (CI/CD) processes can help us deliver more value quickly to the end user, and how we can start thinking about modernizing traditional applications.


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • GitLab CI/CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.

In the fourth article, we wrote a CD configuration.

In the fifth article, we talked about containers and how (and why) they can be used.

In this article, let's discuss the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.
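As a preview, a stripped-down .gitlab-ci.yml for a container-based pipeline could look like this (the stage layout, exposed port, and container name are illustrative; it assumes a runner with Docker installed and access to the GitLab container registry):

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

deploy:
  stage: deploy
  script:
    - docker stop app || true
    - docker rm app || true
    - docker run -d --name app -p 52773:52773 $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME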


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • GitLab CI/CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.

In the fourth article, we wrote a CD configuration.

In this article, let's talk about containers and how (and why) they can be used.


Hi, I'm new here, and new to Caché and DeepSee.

I've been trying to set up a copy of our production server so we can use it for testing/development.

I did a full backup, moved it to the new server, and ran the DBREST command. I got it to restore, but it seems like permissions get all messed up, and it just generates a bunch of errors.

Is there an easier/better way of doing this?


Container Images

In this second post on containers fundamentals, we take a look at what container images are.

What is a container image?

A container image is merely a binary representation of a container.

A running container, or simply a container, is the runtime state of the related container image.

Please see the first post, which explains what a container is.
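The difference is easy to see from the command line (alpine is used here only as a small example image):

$ docker pull alpine             # fetch the image: the binary artifact at rest
$ docker images alpine           # list it among the locally stored images
$ docker run -d alpine sleep 60  # start a container, i.e. a running instance of that image
$ docker ps                      # the container shows up here while it runs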


How suitable is Docker for standalone deployment of an Ensemble version and Ensemble application together?

The context is deployment by an application partner of an integration application and the supporting Ensemble version as a single package (single file ideally), to multiple environments and to multiple customer sites.

I don't have experience with Ensemble on Docker so I'm wondering what gaps and pitfalls may exist.

The focus of the question is deploying the Ensemble product and application code - I do understand that consideration is needed on management of the application data, including any retained Ensemble messages.

This would be for new applications on Ensemble 2016.1+ on Linux. Linux containers on Windows are of interest - but how mature is the Windows support for them?
