You are only partially correct:

  • yes, as lazy developers we prefer to write only one method call instead of two chained together;
  • but, no, it is not %Connect (which may be an expensive operation) that should move into %OnNew; rather, the other way around.

That is, for the cases when we need both (not actually 100% of cases, rather 90%) we could create a combined class method, which creates an instance of the class via a call to %New() and then performs the necessary side effect, e.g.:

ClassMethod %ConnectNew(Config As %Object) As Sample.RemoteProxy { ... }
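A minimal sketch of how such a combined factory could look; `Sample.RemoteProxy` and its `%Connect(Config)` method are assumptions taken from the discussion above, not a real API:

```objectscript
/// Hypothetical combined factory method: create the instance, then connect
ClassMethod %ConnectNew(Config As %DynamicObject) As Sample.RemoteProxy
{
    // %New stays slim and fast: allocation and default values only
    set proxy = ..%New()
    if '$isobject(proxy) quit ""
    // the expensive side effect happens outside the constructor,
    // and %Connect remains independently callable as well
    set sc = proxy.%Connect(Config)
    if $$$ISERR(sc) quit ""
    quit proxy
}
```

The caller who needs both steps writes one call, while callers who want a disconnected instance can still use plain %New().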

In general, you should avoid building a huge DOM tree of objects, or performing network operations, inside the %New constructor. The constructor should allocate just the bare minimum of memory necessary to begin operations, and initialize fields to their default values [that will be done automatically]. That's it.

Also, regarding the comment suggesting to move the %Connect code inside %New:

It is generally not a good idea to put potentially long and slow code inside an object constructor. I prefer to have a slim and fast %New, which might be nested elsewhere into some wrapping objects, while keeping slow and expensive functions, like %Connect in this case, outside of the constructor and independently callable.

For example, try using incorrect login details here and see how long such a connection takes to fail (i.e. the timeout period).


  1. I hate long lists of arguments passed to a function, especially when most of them are optional;
  2. In similar cases I prefer the named-arguments approach, which I originally saw in Perl (here is a quick link I found which shows this idiom). Named arguments can be passed in any order, which avoids many errors caused by (optional) arguments being passed in the wrong order.
  3. Named arguments in Perl actually create a hash object, which we then work with by accessing its key-value pairs.
  4. But at the end of the day, the new JSON dynamic objects we have in Caché are semantically equivalent to the hash objects we operated on in Perl in the past;

Thus a similar idiom can be used in ObjectScript.
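A hedged sketch of the idiom in ObjectScript: the class and parameter names below are made up for illustration, and the JSON literal syntax in the caller assumes Caché 2016.1+ dynamic objects:

```objectscript
/// Hypothetical method taking named arguments as a single dynamic object
ClassMethod Connect(args As %DynamicObject) As %Status
{
    // a missing property evaluates to "", so we can supply defaults
    set host = $select(args.host'="": args.host, 1: "localhost")
    set port = $select(args.port'="": args.port, 1: 1972)
    set timeout = $select(args.timeout'="": args.timeout, 1: 30)
    // ... use host, port and timeout here ...
    quit $$$OK
}
```

A caller can then pass only the arguments it needs, in any order, e.g. `set sc = ##class(Sample.RemoteProxy).Connect({"port": 57772, "host": "server1"})`.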


Though I agree, it was a bit of a stretch to use this idiom in this particular case, where the number of arguments is not that large. But at least it didn't make the code less readable. :)

Actually, I don't see any value in checking $data for intermediate subscripts (I check their consistency only once, at the very beginning of the function). Here is my [hopefully] simpler version:

CompareArrays(refL, refR)
    if $data(@refL) '= $data(@refR) {
        // inconsistent: one of them is not an array
        return 0
    }
    do {
        // fetch the next data-node subscript and its value
        set refL = $query(@refL, 1, valueL), refR = $query(@refR, 1, valueR)
        // stop once either array is exhausted
        if refL="" || (refR="") {
            quit
        }
        set subL = $qlength(refL), subR = $qlength(refR)

        if subL'=subR || (valueL '= valueR) {
            return 0
        }
        // check each subscript individually
        for i=1:1:subL {
            if $qsubscript(refL, i) '= $qsubscript(refR, i) {
                return 0
            }
        }
    } while refL'="" && (refR'="")

    // only after all checks pass: both arrays must end at the same time
    return refL=refR

    set m(1,1,1)=11,m(1,2)=12,m(2,1)=133
    set n(1,1,1)=11,n(1,2)=12,n(2,1)=133
    write $$CompareArrays($name(m),$name(n)),!  // identical arrays: prints 1
    set n(3,1)=0
    write $$CompareArrays($name(m),$name(n)),!  // extra node in n: prints 0

So you are using ODBC access in WinSQL to connect to a CacheODBC source, from a particular namespace...



Did you check that you are using a DSN pointing to the desired namespace? Did you check the bitness (32- or 64-bit) of the DSN you use in WinSQL?

Small correction though: the referenced GitHub sources are a fork, and were created not by Dmitry Maslennikov (@daimor) but by Eduard Lebedyuk (@eduard93).

Thanks, Dima, [I did expect you would publish it] and this advice is very interesting and easier to apply for a "lazy devops engineer". Though some explanations and comments wouldn't harm. I hope you'll eventually find time to write an article.


I could not resist adding a few notes about your Dockerfile:

- from a pure microservices point of view, for the generic case of multiple ECP clients it makes little sense IMHO to install a CSP gateway into each of the instantiated Docker instances;

- I'd run it on the master (ECP database server) instance, or probably as a separate Docker image;

- [though I suspect that for the HAProxy scenario you might need these CSP gateway services spread over each instance precisely for the high-availability scenario. I'd be curious what Luca would recommend here from a microservices perspective?]
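To make the separate-gateway idea concrete, here is a hedged sketch as a docker-compose file; every image and service name below is a placeholder I made up, not a real published image, and the replica count assumes Swarm mode:

```yaml
# Hypothetical topology: one shared CSP gateway instead of one per instance
services:
  csp-gateway:
    image: my/csp-gateway:latest       # placeholder image name
    ports:
      - "80:80"                        # single entry point for HTTP traffic
  ecp-master:
    image: my/cache-ecp-master:latest  # placeholder: ECP database server
  ecp-client:
    image: my/cache-ecp-client:latest  # placeholder: ECP application servers
    deploy:
      replicas: 3                      # scaled independently of the gateway
```

The point is only that the gateway scales and fails independently of the ECP clients; the HAProxy/high-availability variant would duplicate the gateway service instead.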

Let's put software architecture aside (I'll write a number of articles later about what I mean here) and talk about the dirty details.

If you have any concrete details about the way you use Swarm, Ansible, Chef, or similar, then I (and the community) would highly appreciate them.


It would simplify things a lot if we could configure ECP mappings at runtime via some set of API calls, and not statically via editing cache.cpf. Something like what is done in MongoDB for adding a new shard:
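For reference, the MongoDB call I have in mind is a single runtime command issued from the mongo shell against the mongos router; the replica-set and host names below are placeholders:

```javascript
// mongo shell: register a new shard at runtime, no config-file edit needed
sh.addShard("rs1/shardhost1.example.net:27018")
```

No restart and no static configuration file is involved; the cluster picks up the new shard immediately.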


But not for the scenario of adding a shard to a shard manager in particular; rather something more generic for ECP or mappings. I suspect something related is already implemented for EM, but I have no clue how to use it for my case.


And I know there is already an AssignShards call implemented in the forthcoming product, but it is too specific, creating a particular set of mappings. I'd need something more generic.