Question
· Mar 15, 2018

Deployment Strategies: Do You Compile ObjectScript on a Production Site?

Hi, Community!

Please share your experience with code deployment on a production site. Do you compile ObjectScript on Production? Is it OK?

Or do you only compile on a Test site and copy CACHE.DAT to Production?


If DATA and CODE are separated, then taking over the CACHE.DAT from a final test environment could be an option.

But as the default for a namespace is DATA+CODE, and this is widespread even in large applications in real environments, recompiling is the only possibility. Many years back a special change was even implemented in Caché to support compiling during runtime of the code.

I personally dislike both and fought for clear separation of CODE from DATA. With very limited success. :-(

I agree totally with Robert; I much prefer the separation of routines and data. But the ability to simply replace cache.dat assumes nothing else is going on.

For me, one downside of replacing cache.dat for the routines is that you have to stop the Caché instance to allow the replacement at the file level.

I also dislike the compile options. I had a problem where (my mistake) I compiled the main class assuming dependent classes would also be compiled. That was a mistake: none of the dependent classes knew of my change.

That problem was solved by adding extra letters to the compiler options; in particular, I was told to add "bry" to the compile options. I have not had a problem since, but if InterSystems knows this, then why not make that the default? (I also had to add those letters to the defaults of all users who could issue the compile command, a real pain.)
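For reference, a rough sketch of those compile calls in a terminal session. The class and package names are placeholders, and the exact meaning of each flag letter can vary by version, so check them on your own instance:

```objectscript
 // Compile one class plus its dependents, instead of relying on
 // the bare defaults ("cuk" is the usual default set; "bry" is
 // the extra set mentioned above).
 Do $System.OBJ.Compile("MyApp.MainClass", "cukbry")

 // Or compile a whole package in one go:
 Do $System.OBJ.CompilePackage("MyApp", "cukbry")

 // Display what each flag letter means on this instance:
 Do $System.OBJ.ShowFlags()
```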

Hi Robert and all

You can achieve the separation quite easily with routine and class package mapping.

I have a client with overseas affiliates: they all share the same code/class base but have their own namespace, with routines and packages mapped to the master namespace.

Works just fine
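For anyone wanting to try this, a minimal sketch of creating such mappings programmatically, assuming the Config.MapPackages / Config.MapRoutines classes in %SYS. All names below are placeholders, and the same mappings can be set up through the Management Portal instead:

```objectscript
 // Rough sketch, run in the %SYS namespace. The namespace,
 // package and database names are placeholders; the equivalent
 // UI is System Administration > Configuration > Namespaces > Mappings.
 New $Namespace
 Set $Namespace = "%SYS"

 // Map the APP package in the AFFILIATE1 namespace to the
 // database holding the master code base
 Set props("Database") = "MASTERCODE"
 Set sc = ##class(Config.MapPackages).Create("AFFILIATE1", "APP", .props)

 // Routines can be mapped the same way (here, all APP* routines)
 Set sc = ##class(Config.MapRoutines).Create("AFFILIATE1", "APP*", .props)
```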

The only issue is that developing the code is more complex, as the different affiliates have started to need different functionality starting from the same base screen.

Peter

Just realised that copying the Cache.dat is only sensible for a single developer.
If you have more than one developer working on different projects, all deploying to the same Test server, then copying won't work: you would get bits and pieces from different projects.

Even for a single developer it's a bit dodgy: you could be working on two or more projects at the same time, waiting on user acceptance testing. If one passes, you want to deploy that to live but not the others.

The more I think about it, the more I believe that my method of working is the only one that works for all possibilities. Or so I believe; if anyone has a better method, please tell.

Peter

Kevin,

We have the following solution when the code and data dats are separated. We have a CODE1 dat and a CODE2 dat. Whichever code dat isn't being used by the namespace, we overwrite that file and then switch the namespace to point to the new code database. This keeps our deployment downtime to only a few milliseconds.

The only situation we've run into is that this doesn't seem to be ideal for Ensemble without stopping the Ensemble production.
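The switch step described above can be scripted; a minimal sketch, assuming the namespace's code database is repointed via Config.Namespaces in %SYS (all names are placeholders, not the actual setup):

```objectscript
 // After CODE2's CACHE.DAT has been overwritten with the new
 // build, repoint the APP namespace's routine (code) database
 // from CODE1 to CODE2. Run in %SYS.
 New $Namespace
 Set $Namespace = "%SYS"
 Kill props
 Set sc = ##class(Config.Namespaces).Get("APP", .props)
 Set props("Routines") = "CODE2"   // previously CODE1
 Set sc = ##class(Config.Namespaces).Modify("APP", .props)
```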

Hi Evgeny

What situation do you have in mind that could cause the compilation to be unsuccessful?

With a proper version control and release procedure this has rarely happened in my experience; when it does, it's been due to unresolved dependencies, and in that case a re-compile fixes it.

There is one possibility where it *could* happen: if the VC system allows multiple reservations/branches for the same class. But we don't allow that.

= =
I can't see how deploying/copying the Cache.dat will avoid problems when you have multiple developers or multiple projects on the test server.

= =
I guess the only 100% safe way is to have a staging server where a deployment can be copied and tested before deploying to the Live server. In this case it is tightly controlled, and copying the Cache.dat is possible.

Peter

Hi, Peter!

What situation do you have in mind that could cause the compilation to be unsuccessful?

E.g. compilations using a projection, where the result of compilation could be totally unpredictable.

Also, compilation can be a time-consuming process compared to replacing the cache.dat file, so it potentially means a longer pause in production operation.

I can't see how deploying/copying the Cache.dat will avoid problems when you have multiple developers or multiple projects on the test server.

I'm not saying that the copying-cache.dat strategy should be used for a test server. Indeed, we can compile the branch on a build/test server and then transfer cache.dat to production if testing goes well.

Hi Evgeny

Fascinating conversation.....

I am aware of projections but don't use them in my systems.
I think there is some confusion when I use the term "Test" server. In my usage it is for End User Acceptance Testing, and there is usually more than one project/release undergoing End User Acceptance Testing at any one time; copying the Cache.dat would take over releases that are not ready to go.

= =

I guess it depends on the nature of the operation. I work for individual clients rather than having a monolithic product, and (as above) there will be several projects on the go at any time for each client, so what I do works for me.

If there is a possible problem with the compile (your projections) then, I think, the solution is a staging server: individual releases are deployed to it, and once proven, that cache.dat is copied to the Live server.

My method works in my situation.
I guess there is no single "correct" solution that will work for all cases.

Peter

Hi Evgeny

The source control library is not mine. It was a commercial product created by GlobalWare; sadly it did not make it commercially, and the company went defunct a few years ago.

But...
The main owner of the company now works for ISC in the Boston WRC, Jorma Sunamo by name. Maybe you should contact him directly (jorma.sunamo@intersystems.com) to discuss.

Peter

Hi All

The way I work is: Personal Development Machines - Deploy to Test Server for User Acceptance Testing - Deploy to Live.

The version control system that I use is TrakWarePro from GlobalWare, sadly no longer in existence, but it works for us: not only to maintain versions but to ship releases between the three environments.

When deploying, the classes need to be compiled (obviously), but I don't trust ISC to compile *all* the required classes, SQL statements etc. Neither does $System.OBJ.CompileAll() work 100% in resolving the dependencies in the correct order.

Also a release will need to set SQL access on tables for any new data classes.

So I have developed a do ##class(setup.rCompileAll).doit() method that does all the necessary work: compiling in the correct order, setting the SQL access, etc.
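Peter's class isn't shown in the thread, but a minimal sketch of what such a wrapper might look like. The package name, qualifier flags and GRANT statement are all illustrative, not his actual code:

```objectscript
Class setup.rCompileAll Extends %RegisteredObject
{

/// Compile the application in dependency order, then (re)apply
/// SQL access. Package name, flags and grant are illustrative.
ClassMethod doit() As %Status
{
    // Flags such as "b"/"r" ask the compiler to include dependent
    // and required classes as well
    Set sc = $System.OBJ.CompilePackage("pfc", "cukbry")
    If $$$ISERR(sc) Quit sc

    // Re-grant SQL access on the application tables
    Set rs = $System.SQL.Execute("GRANT SELECT,INSERT,UPDATE,DELETE ON SCHEMA pfc TO AppRole")
    Quit sc
}

}
```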

Usually a deployment will require changing data, updating indices, adding pages to the access database, etc., so there is usually a setup class that contains the code to do this.

So I have

  • a Studio project "pfcXXX"
  • a version control project "pfcXXX"
  • a setup class "setup.rPFCxxx.cls"

And all this has worked 99.9% of the time over 10-plus years. I can't actually remember when it went wrong, but nothing in this world is 100%. :-)

The downtime can be as little as 5 minutes for a simple release or up to 1 hour or so if the data changes are complex.

The only downside is that the system is unavailable to the users whilst this process is happening. I know about the technique of using mirroring, updating one mirror and then swapping, but this is overkill for my users.

Peter

Hi, I am new to IRIS and we are planning to set up a CI pipeline on an AWS VM deploying the IRIS data platform container. I am trying to find out which folders need to be inside source control and where (which exact folder) the updated code needs to be pulled in the container. I would be much obliged if anyone can point me to the CI/CD-related documentation.

Thanks,
Raj