Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them.
I have been playing around with the Management Portal deployment tool, which involves: Ensemble > Manage > Deployment Changes > Deploy, Production Settings > Actions > Export, and Production Settings > Actions > Re-Export.
Everything was going fine, until I came across this:
So I've got the IRIS AMI spun up in AWS EC2; it seems to be running fine.
I've added an EBS volume to it for persistent storage, and now I'm pondering how to make it actually do something useful.
What's the best way to deploy code to this instance? I can think of a few ways to do it, but what's the least painful? Push my code to an S3 bucket and figure out how to load it at system start? A GitHub project?
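One option I'm weighing: have the instance pull the source (from S3 or a Git checkout) to a local directory and compile it at startup. A minimal sketch, assuming the code has already been synced to a hypothetical /opt/app/src directory:

Class App.CodeLoader
{

/// Load and compile everything under a source directory.
/// The path is a placeholder for wherever the S3 sync or git checkout lands.
ClassMethod LoadAll(pDir As %String = "/opt/app/src") As %Status
{
    // "ck" = compile and keep source; the final 1 recurses into subdirectories
    Quit $SYSTEM.OBJ.LoadDir(pDir, "ck", .tErrors, 1)
}

}

Something like this could be called from a startup script, but whether that is less painful than baking the code into the image is exactly what I'm trying to figure out.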
I'm exploring using installation manifests to deploy Ensemble configuration changes. I noticed that the documentation uses a macro within an "Error" tag.
<Error Status="$$$NamespaceDoesNotExist">
So I thought this would also be possible with "Var" tags, "If" tags, etc. For example:
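A rough sketch of the kind of fragment I have in mind; the variable names and condition are just placeholders, and whether macros are actually legal in these attributes the same way they are in Error/Status is really my question:

<Manifest>
  <Default Name="NAMESPACE" Value="MYAPP"/>
  <!-- can a macro be used in a Var value too? -->
  <Var Name="MissingNSStatus" Value="$$$NamespaceDoesNotExist"/>
  <If Condition='(##class(Config.Namespaces).Exists("${NAMESPACE}")=0)'>
    <Error Status="${MissingNSStatus}">
      <Arg Value="${NAMESPACE}"/>
    </Error>
  </If>
</Manifest>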
Is it possible to save Cache code into a file and then run it via the command line?
I.e.: csession [ini] -U [ini] /path/cacheCodeFile.?
What I need to do is run a Cache script from the Linux command line. The script will work through some data to produce a file and then exit back to the command line.
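One approach I'm testing, with placeholder instance, namespace, and paths: put plain ObjectScript commands in a text file, e.g. /path/extract.cos, and redirect it into the session with csession CACHE -U MYNAMESPACE < /path/extract.cos. The file would contain something like:

set file="/tmp/output.txt"
open file:("WNS"):5
use file
write "report generated ",$zdatetime($horolog,3),!
close file
halt

The trailing halt is what drops back to the Linux shell once the script finishes.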
Does anyone have experience using SCCM or any other enterprise application management tool to deploy updates to a thick client (internally developed standalone app) across the enterprise? I have a client that is attempting to get this working for their customer and running into some trouble. If you have any knowledge or experience you can share, it would be greatly appreciated.
I have read some MS TechNet pages that appear to indicate this is possible.
I'm trying to write an installer manifest that can create a namespace, resources (%DB_namespace), and a role (with that resource), based on the namespace. So you could pass in "ABC" or "XYZ", and it would create the %DB_ABC resource and the ABC role with %DB_ABC:RW permissions, or the %DB_XYZ resource and the XYZ role with %DB_XYZ:RW permissions, accordingly.
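For reference, this is the sketch I have so far; the default namespace, directory, and descriptions are placeholders, and I'm assuming the Resource and Role tags accept the Name/Description/Resources attributes shown (MGRDIR is the built-in installer variable):

Include %occInclude

Class App.NamespaceInstaller
{

XData CreateNS [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Default Name="NAMESPACE" Value="ABC"/>
  <Namespace Name="${NAMESPACE}" Create="yes" Code="${NAMESPACE}" Data="${NAMESPACE}">
    <Configuration>
      <Database Name="${NAMESPACE}" Dir="${MGRDIR}${NAMESPACE}" Create="yes"
                Resource="%DB_${NAMESPACE}"/>
    </Configuration>
  </Namespace>
  <Resource Name="%DB_${NAMESPACE}" Description="Database resource for ${NAMESPACE}"/>
  <Role Name="${NAMESPACE}" Description="Role for ${NAMESPACE}"
        Resources="%DB_${NAMESPACE}:RW"/>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "CreateNS")
}

}

The idea is then to call it with set vars("NAMESPACE")="XYZ" followed by do ##class(App.NamespaceInstaller).setup(.vars), which should produce %DB_XYZ and an XYZ role with %DB_XYZ:RW, if I've read the documentation correctly.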
I am trying to copy an XML file generated on an Apache server into the Jenkins workspace post-build. I was thinking of using a 'send files over ssh' post-build step, but I have not done this before and do not know how to refer to the file location on the Apache server versus on Jenkins.
For example, if I want to copy from the Apache location "classes/UnitTest/Results.xml" into the Jenkins workspace at "/ReportFiles/Results.xml", how does the script differentiate whether a path refers to the Apache server or the Jenkins workspace?
I'm trying to build my project on a Linux machine using Docker.
In my development environment, I use Windows 10 Pro with Docker Desktop version 2.3.0.5. Everything works fine, and the docker-compose build runs flawlessly.
But when I tried to run the same project on Linux (Ubuntu 18.04.5 LTS, GNU/Linux 5.4.0-1025-azure x86_64, Docker version 19.03.6, build 369ce74a3c), it was a different story.
We have some custom tools that do project-based deployment from environment to environment. In VS Code, we're struggling to find the best way to export PRJ files that contain the contents of a project the way we were able to in Studio. We can edit existing projects, and in server-side editing we can create projects, but we can't seem to extract the project files (not the classes, the wrapper) to store in our source control solution for deployment purposes.
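One workaround I've been meaning to try from a server-side terminal, assuming $SYSTEM.OBJ.Export accepts the .PRJ item type the same way it accepts .CLS (the project and file names are placeholders):

// export the Studio project to an XML file for source control
do $SYSTEM.OBJ.Export("MyProject.PRJ", "/tmp/MyProject.xml")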
We have an Angular solution and a Cache server. We need separate users and sessions in the same browser (laptop, tablet, etc.) for every user, and for one user with many connections.
I just deployed my production from test to acceptance, but I found that the deployment misses some SOAP web client classes which are used by my business operation components. I used the Management Portal to create the deployment (i.e. Production Settings -> Export) and I expected that all classes used by the production would be automatically included. Apparently, that is not the case. Is this default behaviour for Ensemble? And can I somehow force Ensemble to automatically include these classes?
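In the meantime, the workaround I'm considering is to export the production class together with its web client classes explicitly and ship that file alongside the deployment package; a rough sketch with made-up class names:

// export the production plus the SOAP web client classes it depends on
do $SYSTEM.OBJ.Export("MyApp.Production.cls,MyApp.WS.SomeServiceClient.cls", "/tmp/production-extras.xml")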
The installer manifest has the option to modify the production-level AutoStart setting, but is there a way to change other settings such as ActorPoolSize? What would the format be to change, say, ActorPoolSize to 2?
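To frame the question, the fallback I've sketched (since I couldn't find a dedicated manifest attribute for it) is an <Invoke> that calls a small helper of my own; the class, method, and production names below are mine, not anything built in:

<Namespace Name="${NAMESPACE}" Create="no">
  <Invoke Class="My.Install.Utils" Method="SetActorPoolSize" CheckStatus="true">
    <Arg Value="MyApp.Production"/>
    <Arg Value="2"/>
  </Invoke>
</Namespace>

with the helper being something along the lines of:

ClassMethod SetActorPoolSize(pProduction As %String, pSize As %Integer) As %Status
{
    // open the production configuration and change its actor pool size
    set tProd = ##class(Ens.Config.Production).%OpenId(pProduction,,.tSC)
    if $$$ISERR(tSC) quit tSC
    set tProd.ActorPoolSize = pSize
    quit tProd.%Save()
}

I'm not sure whether a running production would also need ##class(Ens.Director).UpdateProduction() afterwards for the new pool size to take effect, so a native manifest format would definitely be preferable.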
I have a problem with deployment. When I deploy using the Ens.Deployment.Deploy class, I no longer receive the logs in the terminal. However, the deployment itself goes well; I can see it in the history on the portal.
It works in our environment, but not in the client's.
I'm discovering IRIS and I need to build a proof of concept of the solution, with one constraint: containerization. I'm used to deploying my apps in a Swarm cluster, and all my bind mounts are written to a GlusterFS volume.
The problem is that when I start my stack, the first log line is:
[WARN] ISC_DATA_DIRECTORY is located on a mount of type 'fuse.glusterfs' which is not supported, consider a named volume for '/iris_conf'
This is my first post in this community, so let's see how this turns out. I have a question about the InterSystems Kubernetes Operator and the deployment of the web gateways.
I am responsible for hosting and deploying the apps. In the future we are planning to host our application in a Kubernetes cluster, and I am using the IKO for this. I am using web gateways as separate pods for external access, and sidecar containers for internal access, such as the Management Portal.