go to post Luca Ravazzolo · Nov 18, 2024 John, it looks like you are on AWS and you selected an AMI, most probably the default one, that offers the Amazon Linux distribution. It is a downstream of Fedora. We don't support it officially, so pick an AMI with Ubuntu or Red Hat. Wrt context: you'll find that, in general, the cloud runs on Linux. You'll get used to automating procedures with scripts and the plethora of tools available in the industry, like bash, Terraform, Ansible or any config-management tool of your liking. HTH
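PS: a quick, hedged sketch of hunting for a supported AMI from the shell; owner 099720109477 is Canonical's well-known AWS account, and the region and name filter are illustrative:

```bash
# Find the most recent official Ubuntu 22.04 AMI in us-east-1.
aws ec2 describe-images \
  --region us-east-1 \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*' \
  --query 'sort_by(Images,&CreationDate)[-1].{Id:ImageId,Name:Name}' \
  --output table
```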
go to post Luca Ravazzolo · Apr 16, 2023 Hi Robert, We should probably document this apparent anomaly. InterSystems Cloud SQL is, at present, a SQL-only endpoint. Powerful InterSystems IRIS-engine extensions like the creation of class methods are not allowed, as our formidable support for procedural languages could create security vulnerabilities that we decided to avoid in this first implementation. Thank you
go to post Luca Ravazzolo · Apr 4, 2022 Hi @Michael Lei, I have not worked with Docker service/Swarm for years, but it sounds like there is some networking/NAT issue...
go to post Luca Ravazzolo · Oct 25, 2021 Also, please upgrade to the latest container version available, which is 2021.1. Thanks
go to post Luca Ravazzolo · Oct 12, 2021 Hi Ben, The short answer is yes, you are correct. The longer one :-)

Stating the obvious, from a tool point of view, being able to roll back operations means understanding the present state of an instance and, of course, having a record of all the previous states. To do that one needs the concept of a "release state". As soon as you get into maintaining state you quickly escalate the complexity of a solution. See Terraform, for example, and ICM itself, which supports the replication of its state via Consul. There are tools like Helm, Argo CD, etc. that help with that; however, that is left to the user. Enhancing InterSystems IRIS is an option, but that is not available now. At present we rely on a GitOps approach. GitOps is a paradigm that incorporates best practices applied to the application development workflow all the way to the operating infrastructure of a system. Embracing GitOps gives us some benefits like:

- Deploying faster and more often (with a DB we could argue those adjectives; nevertheless, we can still appreciate the benefits)
- Easier and quicker error handling and recovery
- Self-documenting deployments
- Increased developer productivity and an enhanced experience for teams
- Greater visibility into the lifecycle of developed features

However, GitOps itself is not the delivery & deployment panacea of this complex area. GitOps has issues too. There are shortcomings when auto-scaling and dynamic resources are implemented; there is no standard for managing secrets; observability is immature; rollbacks don't have a standard practice; etc. The powerful CRUD operations that we can run with the CPF merge feature add to the complexity. A solution needs to be implemented that may leverage one or more tools that organizations use in their automated provisioning pipeline, just as you would do when embracing the GitOps paradigm.

I think there are two ways to solve our rollback issue at present. The first would be a programmatic approach, maybe a diff operation on the git hash declarations of (last_op_def) vs (last_op_def - 1). If last_op_def contains a Create-resource, I then need to roll that back with a Delete-resource or Modify-resource. And even in this simple case, how do you determine that? Human intervention is probably needed. The second option, simpler and safer, would be to simply re-run the container, the base state we know, and apply configuration settings #1 and #2 only, as sketched below. There are probably other options involving verifying the CPF file; however, the present CPF file does not hold all of an instance's settings. There are also other issues with these types of automation, like: what if you want to roll back after the creation of a database and data was written to it? It's complex.
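A minimal sketch of that second option, assuming an IRIS container image that honours the ISC_CPF_MERGE_FILE variable; the settings, paths and image tag here are illustrative:

```bash
# Roll back by re-running from the known base state and re-applying
# only the configuration settings we want to keep, via a CPF merge file.
cat > /tmp/merge.cpf <<'EOF'
[config]
globals=0,0,800,0,0,0
routines=64
EOF

docker rm -f iris 2>/dev/null
docker run -d --name iris \
  -v /tmp/merge.cpf:/merge.cpf \
  -e ISC_CPF_MERGE_FILE=/merge.cpf \
  containers.intersystems.com/intersystems/iris:2021.1
```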
go to post Luca Ravazzolo · Jul 28, 2021 Hi Johan, When you say "Uber type application", what exactly are you referring to? Their DISCO system? The overall architecture? Their implementation of service-oriented architecture? Their supply service or demand service? They started with a monolith and Python and broke it up later... Let us know & all the best with the new app! Luca
go to post Luca Ravazzolo · Feb 1, 2021 Hi @Michael Jobe, that last "\n" was indeed missing. We have fixed it and are re-spinning... Thanks for your diligence. May I ask what product(s) you are using and how this caused you issues? Thanks
go to post Luca Ravazzolo · Oct 21, 2020 Hi Raj: This article is not about AWS, but it clearly explains how to set things up with GitLab, so I hope you can find your answers there. If you use containers, when you build them you'd be installing your solution code from a file or a series of files: you'd probably be copying them into the container, then importing and compiling them. You could also do that beforehand and just copy into the container the IRIS.DAT containing the code... it depends. You should also look at and consider the ZPM package manager for packaging and moving your code around. I hope this is of some help in your investigation
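PS: a hedged sketch of the copy-then-compile route against a running container; the container name, namespace and paths are illustrative:

```bash
# Copy ObjectScript sources into a running IRIS container, then
# import and compile them (recursively) in the USER namespace.
docker cp ./src iris:/tmp/src
docker exec iris iris session IRIS -U USER \
  '##class(%SYSTEM.OBJ).ImportDir("/tmp/src","*.cls","ck",,1)'
```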
go to post Luca Ravazzolo · Jul 15, 2020 Hi David, Thanks for your feedback! Constraint understood, and work for HTTPS is already scheduled for SAM v2.0.
go to post Luca Ravazzolo · Mar 9, 2020 Hi guys, Thank you for your interest in the InterSystems Kubernetes Operator. We are working hard at preparing it to be available in the 2020.2 timeframe. Stay tuned.
go to post Luca Ravazzolo · Sep 4, 2019 @David, Yes, ICM could help and be very useful if you had provisioned the cluster with it. Please note that it supports both container-based and containerless deployments.

I think that in the case of @Sylvie Greverend we have a tough case, as we are missing a generic REST API for some of these operations, and then again you get into the issue of who is authorized to run these ops. How do you handle keys and/or certificates? Etc. So it's not an easily solvable issue. It also appears that the environment is not homogeneous (most probably not a cloud cluster, where "things" are typically more "even"), so we have the issue of modularity and want to be dynamic as we approach the different services (IRIS instances). I'd like to hear more about the use case to get a fuller picture. I understand the frustration. We are looking at innovating in this area.

For the time being, it sounds like you could do with a dictionary/table/XML/JSON of your nodes that would define the operations you want to run on each instance. Based on that and a %Installer XML template (don't be fooled by the class name; it can be used for configuring an instance post-installation), create a different config for each instance. Subsequently, architect a delivery mechanism for the installer/config (bash, Ansible, etc.) and run it (load the code; run a method); see the sketch in the PS below. See the doc on %Installer and an example.

The above idea might sound like overkill, but with a varied and wide range of instances, all with different configurations and needs, there is no other way to be dynamic, modular & efficient IMO. I hope to see you at Global Summit 2019 for a chat on this type of issue. We have a session on ICM but also some interesting preliminary news in this area, and we'll be talking about Kubernetes too, if this is an area of system management that interests you. TTYL
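PS: a hedged sketch of such a delivery mechanism in bash, assuming a per-node %Installer manifest class named App.Installer with the conventional generated setup() method; the hostnames and paths are made up:

```bash
# Push each node's %Installer manifest to the instance, load and
# compile it, then run its setup() method.
for node in node1 node2 node3; do
  scp "configs/$node/Installer.cls" "$node":/tmp/Installer.cls
  ssh "$node" 'iris session IRIS -U %SYS "##class(%SYSTEM.OBJ).Load(\"/tmp/Installer.cls\",\"ck\")"'
  ssh "$node" 'iris session IRIS -U %SYS "##class(App.Installer).setup()"'
done
```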
go to post Luca Ravazzolo · Jul 25, 2018 Hi Dmitry, It looks like they put up with us :-) for about a year (see reported issues). We are addressing this. Stay tuned, and thanks for your prompt alerting.
go to post Luca Ravazzolo · May 3, 2017 Hi John, InterSystems has shifted gear into a more agile, cloud-oriented approach that is going to leverage & be better integrated with a DevOps modus operandi. InterSystems will unveil this new approach, and what goes with it, at this year's Global Summit in September. HTH
go to post Luca Ravazzolo · Jun 16, 2016 1) If they are DevOps engineers, they don't do things manually. It's an axiom. :-) 2) The issue IMO is not so much which tool one needs to pick (you mention Ansible) but understanding the requirements of the architecture. 3) It's not so much the number of nodes (although important) but the fact that the software needs to be much more dynamic and provide clear APIs that one can call. With a static architecture the monolith lives on ;) 4) One very important fact when working in the cloud is networking: you may not know some facts, like IP addresses, in advance. In general, apps use discovery services to find things out. This is the part where we need to grow and provide more "dynamicity". You can, however, adapt and evolve via ZSTART and tell the cluster who you are and what you're after. 5) I was playing with this a few days ago and, because of the lack of an API, I had to create one of those horrible scripts with <CR> redirection, blind option selections, etc. ;) Not the most robust solution for Dev or Ops engineers (see the PS below) ;) HTH in your endeavour, Timur
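PS: for the record, a made-up illustration of the fragile pattern point 5 laments — canned answers and blank <CR>s blindly piped into an interactive installer (the prompts and answers here are invented):

```bash
# Anti-pattern: blind option selections and <CR> redirection into an
# interactive setup; it breaks the moment a prompt changes.
./cinstall <<'EOF'
1

MYINSTANCE

yes
EOF
```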
go to post Luca Ravazzolo · May 20, 2016 Hi Francis, if you just want to test things, develop, or even go into production without all these issues, why not give Docker containers a try? You'll never be pulling your hair out again over this type of issue. Your container comes installed (Dockerfile); you just spin it up, and when you're done you just dispose of it. However, you'll have other hair-pulling moments for other things, but by then you'll be so deep into DevOps, learning and collaboration that you won't notice :-) Seriously, though, it's a great technology for some perfectly fitting use cases, even production if you embrace microservices, and you'll be able to run a CentOS, RH, SUSE or Ubuntu container on your Debian Jessie without a glitch. HTH
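PS: the spin-up/dispose workflow at its simplest; the image name, tag and port are illustrative:

```bash
# Spin up a throwaway instance, work against it, dispose of it.
docker run -d --name mytest -p 57772:57772 myorg/cache:2016.1
# ...develop / test...
docker stop mytest && docker rm mytest
```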
go to post Luca Ravazzolo · Feb 5, 2016 Thanks for sharing more info, Scott. Sorry I'm late on this. Travelling... Deployment & management: this should be totally automated. There shouldn't be the need for a GUI (it slows things down). I've given my views in another thread/post you started on this exact subject. There is much to chew on here, and you might be under time pressure; however, it's an unavoidable point (automation) if we all want to be more competitive. Your last paragraph (human error) highlights why we need to embrace more automation. So to answer the original post's question: I'd put my complexity in automation :) I understand what you're saying BTW; I wish you well with this workflow and the whole project. All the best
go to post Luca Ravazzolo · Feb 2, 2016 Hi Scott, Why do you want to use one single interface for multiple providers? Don't answer "why not" :-) SPOF can be applied to anything. If that interface is critical and it goes down, it's as if your whole production, your whole cluster and solution were down. Not good. It's just like in the microservices world. No difference, and dangerous. Just like in the security world: it's not IF they'll attack you, it's just a matter of WHEN. Same here. Therefore, I need to ask you a second question: what is your service strategy for availability for the production and this BS (assuming it's critical here)? Personally, I would de-couple and leave the responsibility for file accuracy (name, timestamp, upload time-frame vs polling, etc., or whatever else) to the customers. Do you have FIFO to respect? Other considerations? Ensemble is ideal for having a centralized orchestrating BP handling all the incoming requests from the business services. Your BS implementation would be simpler, as you would implicitly know which customer you're reading from. Be careful also about the number of files in the reading directory: OSs have limits, and issues have been witnessed before (we don't know the numbers here, but splitting across various DIRs alleviates the issue mentioned; see the PS below). Anyway, it might just come down to personal preferences in the end, or to one particular context variable we're not aware of here. Just some thoughts... there is no right or wrong in some of these architectures... just points of view, considerations, previous experiences and best practices... HTH
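PS: an illustrative sketch of the "split across directories" mitigation — fanning incoming files out into per-customer subdirectories so no single directory accumulates an OS-straining number of entries; the paths and filename convention are made up:

```bash
# Assumes filenames of the form CUSTOMER_rest-of-name.
for f in /data/inbox/*; do
  [ -f "$f" ] || continue                 # skip anything but plain files
  cust=$(basename "$f" | cut -d_ -f1)     # customer prefix
  mkdir -p "/data/sorted/$cust"
  mv "$f" "/data/sorted/$cust/"
done
```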