AFAIK, Docker encourages implementing microservices. What about a classic Caché-based app deployment: can Docker be useful in this case? By classic I mean a system accessible via several services (ActiveX, TCP sockets, SOAP, and REST), multi-tasking, etc., with an initial 1-2 GB database inside. At the moment I need to choose a solution to roll out several dozen rather small systems of this kind. Should I look at Docker technology in this very case?

I am bound to ver. 2015.1.4. The host OS will most likely be Windows 2008/2012, while I'd prefer to deploy our system on Linux; that's why I am searching for a lightweight way to virtualize or containerize it. Thank you!

Several years ago our company (SP.ARM) developed the Krasnoyarsk Krai Regional HIS, which was finally deployed using haproxy to balance CacheActiveX traffic across 7 ECP application servers. This kind of traffic is based on the same protocol as ODBC, so the behavior should be much the same (long sessions via 1972/tcp). To achieve high availability of the proxy server itself, it was deployed in two identical copies connected with ucarp.

Our initial solution for load balancing and high availability of the proxy was based on LVS+keepalived, but the engineers of the maintaining company (Rostelecom) preferred haproxy+ucarp as they felt more comfortable with those tools; we didn't mind.
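For illustration, a minimal haproxy listener for such traffic might look like the fragment below. This is a sketch, not our production config: server names, addresses, and timeouts are hypothetical; the essential points are TCP mode and generous timeouts for the long-lived 1972/tcp sessions.

```
listen cache-superserver
    bind *:1972
    mode tcp                 # balance raw TCP, not HTTP
    balance leastconn        # long-lived sessions favor least-connections
    timeout client 8h        # CacheActiveX/ODBC sessions can live for hours
    timeout server 8h
    server ecp-app1 192.168.0.11:1972 check
    server ecp-app2 192.168.0.12:1972 check
    # ... and so on for the remaining ECP application servers
```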

Hi Michael,
As a heavy user of Studio's keyboard shortcuts, I can add to your list:
Shift-F2 - jump to previous bookmark
F6, Shift-F6 - switch to next and previous opened routine/class window
Ctrl-F7 - compile current routine/class
F7 - [re]compile the whole project
Ctrl-F9, F9, Shift-F9 - deal with break points (similar logic as with bookmarks)
Ctrl-E, Ctrl-Shift-E - switch to full commands/functions names, switch back to shortcuts
Ctrl-Shift-O - open project
Ctrl-O - open routine/class/etc.

P.S. Have not tried Atelier yet.

Mark, may I ask you for some clarification? You wrote:

As a result THP are not used for shared memory, and are only used for each individual process. 

What's the problem here? Shared memory can use "normal" huge pages, while individual processes use THP. The memory layout on our developers' server shows that it's possible.

# uname -a

Linux ubtst 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

# tail -11 /proc/meminfo
AnonHugePages:    131072 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:    1890
HugePages_Free:     1546
HugePages_Rsvd:      898
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      243808 kB
DirectMap2M:    19580928 kB
DirectMap1G:    49283072 kB

# ccontrol list

Configuration 'CACHE1'
        directory: /opt/cache1
        versionid: 2015.1.4.803.0.16768
        ...
# cat /opt/cache1/mgr/cconsole.log | grep Allocated
...
01/27/17-16:41:57:276 (1425) 0 Allocated 1242MB shared memory using Huge Pages: 1024MB global buffers, 64MB routine buffers

# grep -e AnonHugePages  /proc/*/smaps | awk  '{ if($2>4) print $0} ' |  awk -F "/"  '{print $0; system("ps -fp " $3)} '
...
/proc/165553/smaps:AnonHugePages:      2048 kB
UID         PID   PPID  C STIME TTY          TIME CMD
cacheusr 165553   1524  0 фев07 ?     00:00:00 cache -s/opt/cache1/mgr -cj -p18 SuperServer^%SYS.SERVER
...

You can get output from a Caché routine provided it does some output to its principal device (= stdout), e.g. (approach #1):

$ csession CACHE "##class(%SYSTEM.License).CKEY()"

Cache Key display:
Based on the active key file '/vol/cachesys/mgr/cache.key'

     LicenseCapacity =   Cache 2017.1 Enterpriser - Concurrent Users:1000000, Anything You Need
     CustomerName =      ZAO XX.XXX
     OrderNumber =       9876543210
     ExpirationDate =    10/15/2114
     AuthorizationKey =  3141592653592718281828662607004081
     MachineID =

     currently available =    9997
     minimum   available =      97
     maximum   available = 1000000
$

as there is no way to pass arbitrary text from a COS routine to the OS directly. To work around this, just incorporate into your shell script a calling sequence like this (approach #2):

#!/bin/bash
csession CACHE <<EOF >output.txt 
write ##class(%SYS.System).InstanceGUID()
halt
EOF

After calling it, you will get output.txt like this:

Node: tdb01.xxxxx.com, Instance: CACHE

USER>
8BCD407E-EE5E-11E4-B9C2-2EAEB3D6024F
USER>

(A missing 'halt' command causes an error.) All you have to do is parse the output. To avoid parsing, you may want to write a small wrapper in COS that produces the output exactly as you need it; then you'll be able to follow approach #1.
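For example, such a wrapper could be as small as this (the routine and label names here are made up for illustration):

```objectscript
ZWrap ; hypothetical wrapper routine for shell-friendly output
GetGUID() ; print only the GUID, nothing else, to the principal device
 write ##class(%SYS.System).InstanceGUID(),!
 quit
```

With it in place, approach #1 needs no parsing: `csession CACHE "GetGUID^ZWrap" > guid.txt` should leave just the GUID in the file.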

HTH.

Thank you, Timur and other CPM contributors, you are doing a great job.

Freedom and independence are the 2 greatest values which you cannot buy for money. And you should keep them as long as possible.

This great slogan should be set in stone and cast in steel! Meanwhile, your (potential) target audience could be wider if you built a FOSS product, keeping the usage of proprietary COS/Caché features as minimized and localized as possible.

While we at SP.ARM would hardly drop our home-brewed PM, as it is pretty well integrated with our flagship product (qMS), I personally appreciate the emergence of CPM and wonder why it is not called CMPM?

You may also want to try Monitoring for Disconnect Mode (D mode)
(http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...).
This mode prevents a Caché process from silently halting on disconnect, which occurs by default when the socket device is the process's $principal. D mode guarantees that the process's error-trap code will handle <DISCONNECT> errors in any case.
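A minimal sketch of what such an error trap might look like (the label and the cleanup routine are hypothetical; the point is only that with D mode enabled the trap actually fires instead of the process halting):

```objectscript
 ; Sketch: with D mode on, a dropped socket raises <DISCONNECT>
 ; in the process rather than halting it silently.
 set $ztrap="OnErr"
 ; ... main read/write loop on the TCP device ...
OnErr ; hypothetical error-trap label
 if $zerror["<DISCONNECT>" {
   do CleanUp() ; hypothetical: release locks, log the event, close the device
   halt
 }
 ; other errors: handle or rethrow as usual
```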

"Decorating" class method calls with macros seems to be possible using #def1arg macros.

But how do we automate the substitution of macros for all flavors of instance method calls in a real-world project? The method calls may use different <oref> variables in different parts of the project, or even use ".." or $this:
 set res=oref.Method1(x,y,.z)
 do Ref.Method1(x,y)
 if ..Method1(x,,.z) { ... }
 set a=$this.Method1(x,y)

So we need to define a macro with at least two arguments: 1) for the oref, 2) for all the other method's arguments. Having no variable-length macro argument list, how can we achieve this?
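For the record, #def1arg does cover the variable argument list part; what it does not solve is the varying oref. A sketch (macro and method names invented), which still requires rewriting every call site by hand:

```objectscript
#def1arg M1(%args) Method1(%args)
 ; #def1arg passes the whole argument list through as one token,
 ; so each call site could be rewritten from oref.Method1(...) to:
 set res=oref.$$$M1(x,y,.z)
 do ..$$$M1(x,,.z)
 ; but the oref / ".." / $this part stays outside the macro.
```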

Note: we discussed direct calls only; taking into account $method() and indirection (@, XECUTE), the situation would be even worse.

Always call "do a.AmethodName()"

I don't think it's a good idea, as each extra nested call inevitably introduces an extra performance penalty.

Anyway, imagine an existing large app with hundreds of classes and thousands of method calls, some of which may be made indirectly. In the general case the ability to perform such a transformation (all a.m() to a.Am()) sounds unrealistic, while your approach may fit small projects with simple calling conventions, especially those started from scratch.

1) The following:

USER>do a.Awtf()
 
    do $method(, "OnBefore" _ Method, Args...)
    ^
<METHOD DOES NOT EXIST>%DispatchMethod+2^Utils.Annotations1.1 *OnBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBefo,Utils.Annotations1
USER 58d1>

is not a great problem, as it can easily be corrected by adding a check whether a method named $e(method,2,*) really exists in the class.
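Such a check could be sketched as follows (a hedged illustration, not your actual code; it relies on %Dictionary.CompiledMethod, whose IDs have the form "Class||Method"):

```objectscript
Method %DispatchMethod(Method As %String, Args...)
{
    set target = $extract(Method, 2, *)  ; strip the "A" prefix
    ; dispatch only if the target method really exists in this class
    if '##class(%Dictionary.CompiledMethod).%ExistsId($classname() _ "||" _ target) {
        throw ##class(%Exception.General).%New("Unknown method: " _ Method)
    }
    do $method($this, "OnBefore" _ target, Args...)
    quit $method($this, target, Args...)
}
```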

2) The whole approach seems too complicated, as you omitted an important step:
all calls of all the app's methods should be switchable from "do a.methodName" to "do a.AmethodName" and back. How would you implement this?

My answer would be simple: monitor your database growth and do the same at the global level. It can help estimate future growth and, with some luck, even point out inefficient solutions that waste space.

E.g., we have a task that calculates the sizes of all globals (usually once a night) and stores this data in a global that can be used as a source for visualization and alerting by some external toolset. At the moment an integration with Zabbix is implemented.

 set mbSize=$$AllocatedSize^%GSIZE($Name(@Gref))/1048576 // Global @Gref size in MegaBytes

Pros:
- uses a proven calculation algorithm, the same one the interactive %GSIZE utility uses (we met errors in the %GlobalEdit::GetGlobalSizeBySubscript() calculation, though that was in the rather old v2015.1.2; not sure about 2017.1);
- supports the whole global (Gref="^A") as well as its parts (Gref=$name(^A("subs1")) or Gref=$name(^A("subs1","subs2")) or ...) as arguments.
Con(s):
- performs only a quick estimation of the global size, though in most cases that's quite enough;
- a precise (slow) version can also be derived from the %GSIZE source, but that's a trickier exercise...
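The nightly task then boils down to a loop like this (a sketch under assumptions: the ^SizeLog global and the entry-point name are made up, and globals are enumerated via the ^$GLOBAL structured system variable):

```objectscript
CollectSizes ; hypothetical nightly task: snapshot all globals' sizes
 new gref,mb,day
 set day=$zdate($horolog,8)          ; YYYYMMDD stamp for this run
 set gref=""
 for {
   set gref=$order(^$GLOBAL(gref)) quit:gref=""
   set mb=$$AllocatedSize^%GSIZE($name(@gref))/1048576
   set ^SizeLog(day,gref)=mb         ; source data for Zabbix & co
 }
 quit
```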

Murray, did you mean that an LVM snapshot volume is a sequential file, in contrast to a VMware snapshot delta disk, which is random-access? So with a similar DB write load the snapshot size would be greater in the case of LVM, wouldn't it?

I tried LVM snapshots and backup with Caché freezing once upon a time. AFAIR, space for the snapshot volume had to be reserved in advance inside the LV, and if it was too small the snapshot could fail.

Hi Timur, thank you for the thorough write-up!

BTW, our company uses a home-brewed package manager (though we don't call it that) for distributing our app updates. A typical package for Caché comprises classes and routines in XML format, globals in FF (our proprietary) format, and a manifest file which contains the update description as well as its before- and after-installation actions.