Luca Ravazzolo · Jun 16, 2016

1) if they are DevOps engineers they don't do things manually. It's an axiom. :-)

2) the issue, IMO, is not so much which tool to pick (you mention Ansible) but understanding the requirements of the architecture.

3) it's not so much the number of nodes (although that's important) but the fact that the software needs to be much more dynamic and provide clear APIs that one can call. With a static architecture, the monolith lives on ;)

4) one very important fact when working in the cloud is networking: you may not know some facts, like IP addresses, in advance. In general, apps use discovery services to find things out. This is the part we need to grow to provide more "dynamicity". You can, however, adapt and evolve via ZSTART and tell the cluster who you are and what you're after.

5) I was playing with this a few days ago and, because of the lack of an API, I had to create one of those horrible scripts with <CR> redirection, blind option selections, etc. ;) Not the most robust solution for Dev or Ops engineers ;)

HTH in your endeavour, Timur

Luca Ravazzolo · May 20, 2016

Hi Francis

if you just want to test things, develop or even go into production without all these issues, why not give Docker containers a try?

You'll never be pulling your hair out again over these types of issues. Your container comes pre-installed (via its Dockerfile); you just spin it up, and when you're done you simply dispose of it.

However, you'll have other hair-pulling moments for other things but by then you'll be so deep into DevOps, learning and collaboration that you won't notice it :-)

Seriously, though, it's a great technology for some perfectly fitting use cases, even production if you embrace microservices, and you'll be able to run a CentOS, RHEL, SUSE or Ubuntu container on your Debian Jessie host without a glitch.
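
To make that concrete, here's a minimal sketch of what such a Dockerfile could look like. The base image, kit name and install script below are placeholders of my own, not an official InterSystems image:

```dockerfile
# Hypothetical sketch: the kit tarball and silent-install script names
# are placeholders, not official InterSystems artifacts.
FROM ubuntu:14.04
COPY cache-kit.tar.gz /tmp/
RUN tar -xzf /tmp/cache-kit.tar.gz -C /tmp \
 && /tmp/cachekit/cinstall_silent \
 && rm -rf /tmp/cache-kit.tar.gz /tmp/cachekit
# Management portal and SuperServer ports
EXPOSE 57772 1972
CMD ["/ccontainermain"]
```

Then `docker build -t mycache .`, `docker run -d mycache` to spin it up, and `docker rm -f <id>` to dispose of it when done.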

HTH

Luca Ravazzolo · May 17, 2016

Thanks for sharing the code Tani.

IMO this type of monitoring should be done directly by core Caché, like the sensors in SYSMONMGR, and should be provided by the system. I'm hoping to open-source SAM (System Alerting and Monitoring) soon, as it was demoed last year. The idea was to have a plug-&-play component to drop onto every instance to be monitored, and an appliance that would gather those warnings and alerts.

Luca Ravazzolo · Mar 25, 2016

@Dmitry Konnov thanks for the pull request; great idea; all merged now.

On the article you're working on: we hope you can share it once translated into English so that a wider audience can benefit from your valuable work. Depending on how you spin up containers, there are a couple of critical areas I'd like you to describe so that we can all learn from each other's experiences here:

  1. Data Volumes
    • Containers are ephemeral and although we can mount host volumes we need to think ahead
      • What do those volume-drivers offer us? (See Rancher Convoy, Flocker & EMC Rex-Ray, etc.)
      • How integrated are they with the underlying FS or SAN? (think snapshots, replication, etc.)
  2. Networking
    • If you have more than one node it's tough to get containers to communicate, so you'll have to run an overlay network.
      • my favourites are (a) Weave.works, (b) Project Calico and (c) flannel by the CoreOS guys.

Both of the above aspects are fundamental for a production system.
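
As a small illustration of both points together, here is a Docker Compose sketch; the service, image, volume and network names are all invented, and the overlay driver assumes a multi-host setup (e.g. Swarm with a KV store):

```yaml
# Hypothetical docker-compose sketch; names are placeholders.
version: '2'
services:
  cache:
    image: my-cache-image
    volumes:
      - cachedata:/data        # named volume: outlives the container
    networks:
      - appnet
volumes:
  cachedata:
    driver: local              # swap in convoy/rexray etc. for SAN integration
networks:
  appnet:
    driver: overlay            # multi-host container communication
```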

Let us know how you get on. Hope to see you in Phoenix.

Luca Ravazzolo · Mar 24, 2016

@Dimitri, first of all, you don't have to use ccontainermain; you can script your own solution. It's just a quick-start utility. Caché ccontrol, as the name implies, is the database instance control process and does many things. None of them is concerned with keeping the container alive; a container is not, by definition, a full Linux distribution.
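
To illustrate what "keeping the container alive" means, here is a sketch of a PID-1-style main process in Python. The start/stop commands are hypothetical placeholders, and this is a sketch of the idea, not ccontainermain itself:

```python
import signal
import subprocess
import time

def keep_alive(start_cmd, stop_cmd, poll_interval=5):
    """Start the service, then block as the container's main process,
    shutting the service down cleanly on SIGTERM (i.e. `docker stop`)."""
    stopping = False

    def on_term(signum, frame):
        nonlocal stopping
        stopping = True

    signal.signal(signal.SIGTERM, on_term)
    # e.g. start_cmd = ["ccontrol", "start", "MYCACHE"]  -- placeholder
    subprocess.check_call(start_cmd)
    while not stopping:
        time.sleep(poll_interval)   # the container stays up as long as we do
    subprocess.check_call(stop_cmd)
```

The point is simply that Docker kills the container when its main process exits, so something must sit in the foreground and forward shutdown signals to the database.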

InterSystems has been testing and running its QD testing suites against Docker containers since the 2nd quarter of 2015. Stating that we support Docker containers means that we support our platform in this environment. However, there are MANY gotchas, and anybody interested in "containerizing" their app or, more to the point, switching to a micro-service architecture, should approach it as a new platform.

Luca Ravazzolo · Mar 17, 2016

The subject made me smile :) "service"

Shouldn't we wrap the above method and expose these services via a RESTful "service" API, and even encapsulate the internal Caché ObjectScript API we have behind such a "service"?

So the above would look something like:

GET /server/system/v1/services

and then of course you'd have all the other methods implemented...
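
For illustration, here is a sketch of such an endpoint using Python's stdlib HTTP server. The service names and JSON shape are invented stand-ins for what a real Caché REST dispatcher would return:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented example data: a real endpoint would read this
# from the running instance.
SERVICES = [
    {"name": "%Service_Bindings", "enabled": True},
    {"name": "%Service_ECP", "enabled": False},
]

class ServicesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/server/system/v1/services":
            body = json.dumps(SERVICES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve(port=0):
    """Bind to an ephemeral port; call serve_forever() to run."""
    return HTTPServer(("127.0.0.1", port), ServicesHandler)
```

PUT/POST/DELETE on the same resource would then give you enable/disable and so on.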

Luca Ravazzolo · Feb 26, 2016

use Google's calculator here

while for AWS the cost matrix is here

and you'd be right in thinking that is tough comparing apples with apples :)

HTH

Luca Ravazzolo · Feb 26, 2016

Hi Daniel: I had started on this and I was able to produce a few variants (AD*, OM*, etc.). You can add more segments, more randomization, etc. I'll have to dig out the code. It was part of a larger project...
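
For flavour, here is a sketch of what the randomised variant generation could look like. The segment/trigger table and function names are my invention for illustration, not the actual project code:

```python
import random

# Hypothetical table: message-type prefixes mapped to trigger events.
# "Add more segments, more randomization" = grow this dict.
VARIANT_TABLE = {
    "ADT": ["A01", "A04", "A08"],
    "OMG": ["O19"],
}

def make_variants(prefix, count, rng=None):
    """Return `count` randomised (type, trigger) variants whose
    type starts with `prefix` (e.g. "AD" or "OM")."""
    rng = rng or random.Random()
    matches = [(typ, trig) for typ, trigs in VARIANT_TABLE.items()
               for trig in trigs if typ.startswith(prefix)]
    return [rng.choice(matches) for _ in range(count)]
```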

let me know

Luca Ravazzolo · Feb 26, 2016

I never knew that...

Now the question is about Atelier of course ;)

I guess I could extend the Atelier talks too?

Luca Ravazzolo · Feb 17, 2016

AWS:
I find it compelling when you go beyond the basics... The UI could do with a refresh, but I have all the services I want, and even a cloud compute micro-service engine that runs my functions as needed (Lambda)... :)

Luca Ravazzolo · Feb 15, 2016

Thank you Eduard.

So it sounds like you use other public cloud providers that offer more enterprise-level solutions like snapshots etc.

Do you use DO (DigitalOcean) only for development & testing and other providers for production?

Thank you

Luca Ravazzolo · Feb 5, 2016

Thanks for sharing more info Scott. Sorry I'm late on this. Travelling...

Deployment & management: this should be totally automated. There shouldn't be a need for a GUI (it slows things down). I've given my views on another thread/post you started on this exact subject. There is much to chew on here, and you might be under time pressure; however, automation is an unavoidable point if we all want to be more competitive.

Your last paragraph (human error) highlights why we need to embrace more automation. So to answer the original post question: I'd put my complexity in automation :)

I understand what you're saying BTW; I wish you well with this workflow and the whole project.

All the best

Luca Ravazzolo · Feb 5, 2016

Scott:

It depends on what you mean by a deployment tool. It opens up a whole new world of automation, so you'll have to start thinking about versioning your artifacts, etc. It depends on how you want to approach your whole provisioning, deployment and management process.

My suggestion for infrastructure provisioning & deployment: Terraform; for management, Ansible: agentless and easy to learn and use.
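
For flavour, the Terraform half of that split can start very small; the region, AMI id and names below are placeholders of mine:

```hcl
# Hypothetical sketch: region, AMI id and instance type are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "cache_node" {
  ami           = "ami-12345678"
  instance_type = "m4.large"

  tags {
    Name = "cache-node-1"
  }
}
```

`terraform plan` then shows you exactly what will change before `terraform apply` touches anything; Ansible would take over from there for configuration.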

Of course, Puppet & Chef are strong, valid alternatives, but you'll have to take other considerations on board...

This subject cannot be covered in one post, and it really depends on how much automation you want to bring to this process. I would even suggest you consider containers.

Once automation is seriously considered there is no turning back :-)

All the best

Luca Ravazzolo · Feb 2, 2016

Hi Scott,

Why do you want to use 1 single interface for multiple providers? Don't answer "why not" :-)

SPOF can apply to anything. If that interface is critical and it goes down, it's as if your whole production, your entire cluster and solution, were down. Not good. It's just like in the microservices world: no different, and dangerous. And just like in the security world, it's not a question of IF they'll attack you, but WHEN. Same here. Therefore, I need to ask you a second question: what is your availability strategy for Production and for this BS (assuming it's critical here)?

Personally, I would de-couple and leave the responsibility for file accuracy (name, timestamp, upload time-frame vs polling, etc., or whatever else) to the customers. Do you have a FIFO order to respect? Other considerations?

Ensemble is ideal for having a centralized orchestrating BP handle all the incoming requests from the business services. Your BS implementation would be simpler, as you'd implicitly know which customer you're reading from. Be careful also about the number of files in the reading directory: OSs have limits, and issues have been witnessed before (we don't know the numbers here, but splitting across various DIRs alleviates the issue).
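
On the directory-size point, a common mitigation is to shard incoming files across a fixed set of subdirectories. A sketch (the base path and fan-out are arbitrary, my own illustration):

```python
import hashlib
from pathlib import Path

def shard_path(base_dir, filename, fanout=16):
    """Map a file name to one of `fanout` subdirectories of base_dir,
    so no single directory accumulates every incoming file.
    The mapping is deterministic: the same name always lands
    in the same bucket."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    bucket = int(digest, 16) % fanout
    subdir = Path(base_dir) / f"{bucket:02d}"
    subdir.mkdir(parents=True, exist_ok=True)
    return subdir / filename
```

The BS polling each subdirectory then deals with a bounded file count, and FIFO can still be recovered per bucket from timestamps if needed.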

Anyway, it might just get down to personal preferences at the end or to one particular context variable we're not aware of here.

just some thoughts... there is no right or wrong in some of these architectures... just points of view, considerations, previous experiences and best practices...

HTH

Luca Ravazzolo · Feb 2, 2016

SPOF? Single point of failure? And probably other reasons: de-coupling, monitoring of single interfaces, etc.

Luca Ravazzolo · Feb 2, 2016

Yup! They became, over 2015, a very serious contender in the IaaS cloud arena... as long as they stay open and let people be innovative ;) they have a tendency... IMO :)

Luca Ravazzolo · Feb 2, 2016

I like DO too. Great for POCs and test cases, as you said. Also, they have great documentation on setting things up, even things not strictly related to their infrastructure, like firewalls, Docker, Mesos, etc.

Luca Ravazzolo · Feb 1, 2016

Hi Dmitry,

This year GS is going to be different. I'm sure there will be a session on Docker containers.

In the meantime, you might want to grab a Dockerfile example and the ccontainermain executable from here:

https://github.com/zrml/ccontainermain

If you have specific questions please feel free to open up a Docker specific thread under this Cloud group.

Thanks