John,

It looks like you are on AWS and you selected an AMI, most probably the default one, that runs the Amazon Linux distribution. It is downstream of Fedora. We don't support it officially, so pick an AMI with Ubuntu or Red Hat instead.

Re context: you'll find that, in general, the cloud runs on Linux. You'll get used to automating procedures with scripts and the plethora of tools available in the industry, like bash, Terraform, Ansible or any configuration-management tool of your liking.
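
For example, here is a minimal sketch in Python with boto3 of looking up a supported Ubuntu AMI instead of taking the console default. The region, owner ID and name pattern are assumptions on my part, so verify them for your account:

```python
import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Query Canonical-owned Ubuntu server images and pick the most recent one.
resp = ec2.describe_images(
    Owners=["099720109477"],  # Canonical's account ID (assumption; double-check it)
    Filters=[
        {"Name": "name", "Values": ["ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"]},
        {"Name": "state", "Values": ["available"]},
    ],
)
latest = max(resp["Images"], key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])
```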

HTH

Hi Robert,

We should probably document this apparent anomaly. InterSystems Cloud SQL is, at present, a SQL-only endpoint. Powerful InterSystems IRIS engine extensions, like the creation of class methods, are not allowed, as our formidable support for procedural languages could create security vulnerabilities that we decided to avoid in this first implementation.

Thank you

Hi Ben,

The short answer is yes, you are correct.

The longer one :-)
Stating the obvious, from a tooling point of view, being able to roll back operations means understanding the present state of an instance and, of course, having a record of all the previous states. To do that, one needs the concept of a "release state". As soon as you get into maintaining state, you quickly escalate the complexity of a solution: see Terraform, for example, or ICM itself, which supports replicating its state via Consul.
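
To make "release state" concrete, here is a toy sketch of recording each applied operation definition so that a previous state can be reconstructed later. This is purely my own illustration, not how ICM or Terraform actually store state, and every name in it is a placeholder:

```python
import hashlib
import json
from datetime import datetime, timezone

STATE_LOG = "release_state.jsonl"  # hypothetical append-only ledger

def record_release(op_definition: dict) -> str:
    """Append one operation definition to the ledger and return its content hash."""
    payload = json.dumps(op_definition, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open(STATE_LOG, "a") as f:
        f.write(json.dumps({
            "applied_at": datetime.now(timezone.utc).isoformat(),
            "hash": digest,
            "definition": op_definition,
        }) + "\n")
    return digest

def previous_definition(n_back: int = 1) -> dict:
    """Return the operation definition as it was n_back releases ago."""
    with open(STATE_LOG) as f:
        entries = [json.loads(line) for line in f]
    return entries[-(n_back + 1)]["definition"]
```

Even in this toy form you can see the complexity creeping in: the ledger itself now has to be stored, shared and protected, which is exactly why ICM replicates its state via Consul.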

There are tools like Helm, Argo CD, etc. that help with that; however, that is left to the user. Enhancing InterSystems IRIS is an option, but it is not available now. At present we rely on a GitOps approach.

GitOps is a paradigm that applies best practices from the application development workflow all the way to the operating infrastructure of a system.

Embracing GitOps gives us some benefits, like:

  • Deploying faster and more often (with a DB we could argue those adjectives, nevertheless we can still appreciate the benefits)
  • Easier and quicker error handling and recovery
  • Self-documenting deployments
  • Increased developer productivity and an enhanced experience for teams
  • Greater visibility on the lifecycle of developed features

However, GitOps itself is not the delivery & deployment panacea for this complex area. GitOps has issues too: there are shortcomings when auto-scaling and dynamic resources are involved; there is no standard for managing secrets; observability is immature; rollbacks don't have a standard practice; etc.

The powerful CRUD operations that we can run with the CPF merge feature add to the complexity. A solution needs to be implemented that may leverage one or more of the tools organizations already use in their automated provisioning pipelines, just as you would when embracing the GitOps paradigm.

I think there are two ways to solve our rollback issue at present.
The first would be a programmatic approach, maybe a diff of the operation declarations at the git hash of (last_op_def) vs (last_op_def - 1).
If last_op_def contains a Create-resource, I then need to roll it back with a Delete-resource or a Modify-resource. Even in this simple case, how do you determine which? Human intervention is probably needed.
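
This is only my illustration of that first approach; the file layout, the action names and the inverse mapping are assumptions, not an existing API:

```python
import json
import subprocess

# Assumed inverse mapping; as noted above, a human still has to review the result.
INVERSE = {
    "Create-resource": "Delete-resource",
    "Modify-resource": "Modify-resource",  # would also need the previous values
    "Delete-resource": "Create-resource",
}

def load_ops(git_ref: str, path: str = "ops/last_op_def.json") -> list:
    """Read the operation definition file as it existed at a given git revision."""
    blob = subprocess.check_output(["git", "show", f"{git_ref}:{path}"])
    return json.loads(blob)

current = load_ops("HEAD")
previous = load_ops("HEAD~1")

# Operations present now but absent from the previous release are rollback candidates.
for op in current:
    if op not in previous:
        print(f"rollback candidate: {INVERSE.get(op['action'], '?')} {op.get('name')}")
```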

The second option, simpler and safer, would be to re-run the container, i.e. the base state we know, and apply configuration settings #1 and #2 only.
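
A rough sketch of that with the Docker SDK for Python; the image tag and file paths are placeholders, and I'm assuming the CPF merge file is passed in through the ISC_CPF_MERGE_FILE variable:

```python
import docker  # assumes the Docker SDK for Python is installed

client = docker.from_env()

# Re-create the container from the known base image and apply only the first two
# configuration settings via a CPF merge file (hypothetical paths and tag).
client.containers.run(
    image="intersystems/iris:base-tag",  # placeholder image tag
    name="iris-rollback",
    detach=True,
    volumes={"/host/cfg": {"bind": "/cfg", "mode": "ro"}},
    environment={"ISC_CPF_MERGE_FILE": "/cfg/settings_1_and_2.cpf"},
)
```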

There are probably other options involving verifying the CPF file. However, the present CPF file does not hold all of an instance's settings.
There are also other issues with this type of automation, like: what if you want to roll back after the creation of a database and data has already been written to it?

It's complex.

Hi Raj:

This article is not about AWS, but it clearly explains how to set things up with GitLab, so I hope you can find your answers there.

If you use containers, when you build them you'd be installing your solution code from a file or a series of files: you'd probably copy them into the container, then import and compile them. You could also do that beforehand and just copy into the container the IRIS.DAT containing the code. It depends.

You should also look at the ZPM package manager for packaging and moving your code around.

I hope this is of some help in your investigation

@David, yes, ICM could help and be very useful if you had provisioned the cluster with it. Please note that it supports both container-based and containerless deployments.

I think that in the case of @Sylvie Greverend we have a tough one, as we are missing a generic REST API for some of these operations, and then again you get into the questions of who is authorized to run these ops, how you handle keys and/or certificates, etc. So it's not an easily solvable issue.

It also appears that the environment is not homogeneous (most probably not a cloud cluster, where "things" are typically more "even"), so we have the issue of modularity and the need to be dynamic as we approach the different services (IRIS instances).

I'd like to hear more about the use case to get a fuller picture. I understand the frustration. We are looking at innovating in this area. 

For the time being, it sounds like you could do with a dictionary/table/XML/JSON of your nodes that defines the operations you want to run on each instance. Based on that and a %Installer XML template (don't be fooled by the class name; it can be used for configuring an instance post-installation), create a different config for each instance. Then architect a delivery mechanism for the Installer/config (bash, Ansible, etc.) and run it (load the code; run a method).
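
Something along these lines, purely as a sketch; the inventory fields, file names and delivery command are all placeholders, not an existing tool:

```python
import json
import subprocess

# Hypothetical node inventory: which operations each instance needs.
NODES = {
    "iris-app-01": {"host": "10.0.0.11", "ops": ["create-namespace", "enable-service"]},
    "iris-rpt-02": {"host": "10.0.0.12", "ops": ["create-database"]},
}

for name, spec in NODES.items():
    # 1) Render a per-instance config for the %Installer manifest to consume.
    cfg_file = f"{name}-config.json"
    with open(cfg_file, "w") as f:
        json.dump({"node": name, "operations": spec["ops"]}, f, indent=2)

    # 2) Hand delivery over to your tool of choice (bash, Ansible, ...); placeholder script.
    subprocess.run(["./deliver_installer.sh", spec["host"], cfg_file], check=True)
```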

Doc on %Installer and an example.

The above idea might sound like overkill, but with a varied and wide range of instances, all with different configurations and needs, there is no other way to be dynamic, modular & efficient IMO.

I hope to see you at Global Summit 2019 for a chat on this type of issue.

We have a session on ICM, but also some interesting preliminary news in this area, and we'll be talking about Kubernetes too, if this is an area of system management that interests you.

TTYL

1) If they are DevOps engineers, they don't do things manually. It's an axiom. :-)

2) The issue IMO is not so much which tool one needs to pick (you mention Ansible) but understanding the requirements of the architecture.

3) It's not so much the number of nodes (although that is important) but the fact that the software needs to be much more dynamic and provide clear APIs that one can call. With a static architecture, the monolith lives on ;)

4) One very important fact when working in the cloud is networking: you may not know some facts, like IP addresses, in advance. In general, apps use discovery services to find things out. This is the part where we need to grow and provide more "dynamicity". You can, however, adapt and evolve via ZSTART and tell the cluster who you are and what you're after.

5) I was playing with this a few days ago and, because of the lack of an API, I had to create one of those horrible scripts with <CR> redirection, blind option selections, etc. ;) Not the most robust solution for Dev or Ops engineers ;)

HTH in your endeavour Timur

Hi Francis

If you just want to test things, develop, or even go into production without all these issues, why not give Docker containers a try?

You'll never be pulling your hair out again over this type of issue. Your container comes already installed (Dockerfile); you just spin it up and, when you're done, you dispose of it.
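
Here is a minimal sketch of that with the Docker SDK for Python; the image name and port are placeholders, so substitute whatever image you are entitled to use:

```python
import docker  # assumes the Docker SDK for Python is installed

client = docker.from_env()

# Spin up a throwaway container from a pre-built image (placeholder name/tag)...
c = client.containers.run(
    "my-registry/iris-community:latest",
    detach=True,
    ports={"52773/tcp": 52773},
)
print(c.short_id, "is up")

# ...and dispose of it when you're done.
c.stop()
c.remove()
```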

However, you'll have other hair-pulling moments for other things but by then you'll be so deep into DevOps, learning and collaboration that you won't notice it :-)

Seriously, though, it's a great technology for some perfectly fitting use cases, even production if you embrace microservices, and you'll be able to run a CentOS, RH, SUSE or Ubuntu container on your Debian Jessie host without a glitch.

HTH

Thanks for sharing more info Scott. Sorry I'm late on this. Travelling...

Deployment & management: this should be totally automated. There shouldn't be any need for a GUI (it slows things down). I've given my views on another thread/post you started on this exact subject. There is much to chew on here, and you might be under time pressure; however, automation is an unavoidable point if we all want to be more competitive.

Your last paragraph (human error) highlights why we need to embrace more automation. So to answer the original post question: I'd put my complexity in automation :)

I understand what you're saying BTW; I wish you well with this workflow and the whole project.

All the best

Hi Scott,

Why do you want to use a single interface for multiple providers? Don't answer "why not" :-)

SPOF considerations can be applied to anything. If that interface is critical and it goes down, it's as if your whole production, your whole cluster and solution, were down. Not good. It's just like in the microservices world: no difference, and dangerous. And just like in the security world, it's not IF they'll attack you, it's a matter of WHEN. Same here. Therefore, I need to ask you a second question: what is your availability strategy for the production and this BS (assuming it's critical here)?

Personally, I would decouple and leave the responsibility for file accuracy (name, timestamp, upload time-frame vs. polling, or whatever else) to the customers. Do you have FIFO ordering to respect? Other considerations?

Ensemble is ideal for having a centralized orchestrating BP handle all the incoming requests from the business services. Your BS implementation would be simpler, as you would implicitly know which customer you're reading from. Be careful also about the number of files in the reading directory: OSs have limits, and issues have been witnessed before (we don't know the numbers here, but splitting files across various directories alleviates the issue).

Anyway, it might just come down to personal preference in the end, or to one particular context variable we're not aware of here.

Just some thoughts... there is no right or wrong in some of these architectures... just points of view, considerations, previous experiences and best practices...

HTH