Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment.
I found the need to merge two Docker images (e.g. intersystems/iris-community:2020.2.0.199.0 + my home-grown Node.js image). I found some advice on the web, but no really convincing solution.
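One common approach is a multi-stage build: start FROM the IRIS image and copy the Node.js runtime in from the official node image. A minimal sketch, assuming the IRIS image runs as the irisowner user and that copying the Node binary plus the global modules is enough for your use case (tags are illustrative):

    # stage that only exists to provide the Node.js files
    FROM node:14-slim AS nodejs

    FROM intersystems/iris-community:2020.2.0.199.0
    # copy the Node.js binary and the global node_modules (including npm)
    COPY --from=nodejs /usr/local/bin/node /usr/local/bin/node
    COPY --from=nodejs /usr/local/lib/node_modules /usr/local/lib/node_modules
    USER root
    # recreate the npm symlink that the node image ships with
    RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
    USER irisowner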
Is there an InterSystems-supported .NET Core library, or a community-contributed repo on the horizon? At this time we are exploring installing the ODBC driver in our containers, but we would rather use a more robust solution.
For a solo developer building web applications, which is the better technology to use: IRIS, or Studio with a Caché database, with containers for deployment?
I'm interested to hear if folks have experience using Docker containers with Caché instances using ECP. Wondering if there are any special considerations when setting up a distributed application with multiple containers communicating with ECP. Any input is appreciated!
I want to expose ports 9100 and 9101 in addition to 52773. I read in the Docker documentation that this is not possible on a container that has already been created. Currently the Google Cloud IRIS for Health container starts automatically, without giving me a chance to specify the additional ports. How can I add ports to the Google Cloud IRIS for Health container?
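Published ports indeed cannot be changed on an existing container; the usual workaround is to remove the container and re-create it with the extra -p mappings. A rough sketch (the names and image are placeholders, and on a Google Cloud Marketplace VM the run command may live in a startup script or systemd unit that needs the same change):

    docker stop iris-health && docker rm iris-health
    docker run -d --name iris-health \
      -p 52773:52773 -p 9100:9100 -p 9101:9101 \
      <google-cloud-iris-health-image>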
When deploying IRIS in a container, what are the recommendations for giving users terminal access? There is docker exec -it, but that does not work from a user's workstation, where - in the old deployment model - they would connect to the server over SSH using PuTTY and open a terminal session from there.
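One pattern that keeps the old workflow is to let users SSH to the container host (rather than the container itself) and open the IRIS terminal from there; a minimal sketch, assuming a container named iris:

    ssh user@container-host
    docker exec -it iris iris session IRIS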
Just curious how many companies use Docker containers in their work - and not only with InterSystems products. If such companies exist, which of them use Docker but, for some reason, don't use it for InterSystems products? What are those reasons? And for companies that already run InterSystems in containers: how do you use it - development, testing, or even production?
And if you don't use containers but have thought about it, what is stopping you?
As for me, I've been using InterSystems Caché inside a Docker container in several different cases:
Working from home during these corona days, I'm short on resources:
- no Linux machine available
- limited disk space
So I decided to give Docker on Windows 10 (i.e. Docker Desktop) a try.
I am trying to experiment with Docker containers in our development environment. I have successfully built an image and am running the container. When I access the CSP portal home page (http://<host-ip>:57772/csp/sys/%25CSP.Portal.Home.zen?$NAMESPACE=%25SYS), I get the following error:
I've been trying to set up a copy of our production server so we can use it for testing/development.
I did a full backup, moved it to the new server, and ran the DBREST command. I got it to restore, but it seems like the permissions get all messed up, and it just generates a bunch of errors.
Does anyone have experience installing the Arbiter container using Podman instead of Docker in a Red Hat environment? I was able to pull down the Docker image, but I'm unsure of the next steps: how do I start the container using Podman and make sure the parameters are set appropriately? Does anyone have the steps I should take? Should I go through the WRC? Does the WRC have experience with Podman?
Or should I just install the ISCAgent instead of using the container?
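For reference, a minimal sketch of running the arbiter with Podman, whose run syntax mirrors Docker's; the registry path and tag below are assumptions, and 2188 is the default ISCAgent port that the mirror members need to reach:

    podman pull containers.intersystems.com/intersystems/arbiter:2023.1
    podman run -d --name arbiter -p 2188:2188 \
      containers.intersystems.com/intersystems/arbiter:2023.1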
After attending the InterSystems DACH conference in Germany at the end of last November and taking part in a few workshops about containers and microservices, I took the plunge and installed Docker on my home Windows PC, downloaded the IRIS 2018.2 preview, and got it up and running. Good stuff, and I gave myself a pat on the back at that point.
How can I programmatically set up a docker-compose file so that the IRIS container has OS authentication enabled? And have the following when entering the terminal:
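One possible approach, sketched under the assumption that a CPF merge file can turn on OS authentication for the terminal service; the AutheEnabled bits (16 = OS, 32 = password) come from the security documentation, but verify the action syntax against the CPF merge reference, and the image tag is illustrative:

    # docker-compose.yml
    services:
      iris:
        image: intersystems/iris-community:2023.1
        ports:
          - "52773:52773"
        environment:
          - ISC_CPF_MERGE_FILE=/merge/merge.cpf
        volumes:
          - ./merge.cpf:/merge/merge.cpf

    # merge.cpf
    [Actions]
    ModifyService:Name=%Service_Terminal,Enabled=1,AutheEnabled=48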
We are using IRIS for Health 2023.1 to build an application that runs on a Kubernetes cluster as container images. In the container image, we have our own production "APP" created, with its routines database and globals database located at:
My team works on implementing an Interoperability solution utilizing InterSystems Kubernetes Operator on Red Hat OpenShift container platform.
We are trying to determine how many messages we can process in a given time. We have a Feeder app running in 10 containers, each sending 50k messages to a load balancer, all starting at the same time.
Messages are received over HTTPS by Web Gateway containers.
The Interoperability production runs in compute pods with persistent data, journal, and WIJ volumes.
IRIS offers the Durable %SYS directory as a highly useful feature for working with containers.
Before reinventing the wheel, I'd like to know whether a similar feature also exists for Caché / Ensemble. Official documentation is quite silent about it, though I have some names in mind who might know more ( @Luca Ravazzolo? @Dmitry Maslennikov ? @Eduard Lebedyuk ? )
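For comparison, this is roughly how Durable %SYS is enabled on the IRIS side - a minimal sketch with an illustrative image tag and host path:

    docker run -d --name iris \
      -p 52773:52773 \
      -v /data/iris-durable:/durable \
      -e ISC_DATA_DIRECTORY=/durable/iconfig \
      intersystems/iris:2023.1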
I've been working on deploying an IRIS for Health environment in EKS. There is a video session in the InterSystems learning portal about this feature but I have not succeeded in finding the proper documentation and resources to use this in my Kubernetes cluster.
Has this been deprecated/discontinued? Any idea where I can find the resources? Should I stick to StatefulSets instead of using the IrisCluster resource type provided by this operator?
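As far as I know, the operator and its IrisCluster resource are still supported. A minimal sketch of a manifest, with field names recalled from the IKO samples and therefore worth checking against the CRD shipped with your IKO version:

    apiVersion: intersystems.com/v1alpha1
    kind: IrisCluster
    metadata:
      name: sample
    spec:
      licenseKeySecret:
        name: iris-key-secret
      topology:
        data:
          image: containers.intersystems.com/intersystems/irishealth:2023.1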
The title says it all. I'm building an IRIS image with docker-compose using a separate Dockerfile. It's a pretty straightforward procedure: I import an Installer script into the container containing an Installer Manifest I defined. Within the manifest, I create a namespace with the code and data databases in separate locations. My intention is to keep the code database inside the container, so that whenever I build the container, the imported code is replaced. The data, however, should be persistent.
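A sketch of how the compose file could pin just the data database to a host volume; the in-container path is hypothetical and has to match the data-database directory defined in the Installer Manifest:

    services:
      iris:
        build: .
        ports:
          - "52773:52773"
        volumes:
          # only the data database lives on the host; the code database
          # stays inside the image and is replaced on every build
          - ./data:/opt/app/data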
InterSystems Learning Services is working to identify and create libraries of high-quality learning resources for third-party technologies, platforms, and systems that are part of, integrated with, or commonly used with InterSystems products and technologies. We don't create content for these ourselves, but we want to support our clients, external and internal, in learning about them and how to use them.
We are developing some containerized, cloud-application-level IRIS instances and using a CPF merge to do a lot of the initial buildout for each instance (i.e. create databases and namespaces, map globals/routines, set up ECP, etc.).
I am trying to figure out how to get package mappings into a namespace configuration, via CPF merge if possible...?
This is the document I am working from to develop the CPF merge file -
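A sketch of what the merge file might look like: CreateDatabase and CreateNamespace are documented [Actions], while CreateMapPackage and its parameters are an assumption modeled on the Config.MapPackages settings, so check them against the CPF merge Actions reference (names and directories are placeholders):

    [Actions]
    CreateDatabase:Name=APPCODE,Directory=/usr/irissys/mgr/appcode
    CreateDatabase:Name=APPDATA,Directory=/usr/irissys/mgr/appdata
    CreateNamespace:Name=APP,Globals=APPDATA,Routines=APPCODE
    CreateMapPackage:Namespace=APP,Name=MyPkg,Database=APPCODE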
Has anyone seen this before? I'm new/bad at Docker and not even sure how to debug.
$ sudo docker logs <container>
[INFO] Starting InterSystems IRIS instance IRIS...
[INFO] Unable to read the startup.last file to identify the location of the write image journal file (IRIS.WIJ) that was last in use by this instance. Please fix the permissions of the startup.last file.
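That message usually points to ownership problems on the mounted (durable) directory; a sketch of the usual fix, assuming the default irisowner uid/gid of 51773 inside the container and a placeholder host path:

    sudo chown -R 51773:51773 /path/to/durable/volume
    docker restart <container>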
I'm trying to deploy a container based on the IRIS Community for Health ML image available from this url, but when I start the container the memory consumption skyrockets to 99%, making it impossible to work with the instance (it never goes below 95% of memory). When I do the same with the IRIS Community for Health image, it never goes over 80% of memory.
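As a workaround sketch, you can cap the container's memory and shrink the IRIS buffer pools with a CPF merge file; the sizes, file names, and image placeholder below are illustrative (the third value of globals is the 8KB buffer pool in MB):

    # merge.cpf
    [config]
    globals=0,0,256,0,0,0
    routines=64

    # run with a memory cap and the merge file applied at startup
    docker run -d --name irishealth-ml \
      --memory 4g \
      -p 52773:52773 \
      -v $PWD/merge.cpf:/merge/merge.cpf \
      -e ISC_CPF_MERGE_FILE=/merge/merge.cpf \
      <iris-community-for-health-ml-image>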