You should know that a namespace is just a definition of which databases store particular types of data, and how it is mapped.

So you can check in which database a package is stored:

write ##class(%SYS.Namespace).GetPackageDest("SAMPLES","%Activate.Enum")
^/opt/cache/mgr/cachelib/

The output is delimited: the first part is the system name (as in an ECP configuration), and the second part is the database path.

You can compare this path with the default path for packages in this namespace:

w ##class(%SYS.Namespace).GetPackageDest("SAMPLES","")
^/opt/cache/mgr/samples/

You have two options: change the storage definition, or use a calculated value. With a calculated value, there are two access paths, object access and SQL access, and a getter method covers only object access. For SQL you should define SqlComputeCode. Something like this:

Property Invalid As %Library.Boolean [ Calculated, SqlComputed, SqlComputeCode = { set {*} = +$GET(^GLOBAL({Code},"INVALID")) } ];
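
For object access, the getter is just a method named <PropertyName>Get in the class. A minimal sketch, assuming the same ^GLOBAL structure and a Code property as in the compute code above:

Method InvalidGet() As %Boolean
{
    // read the flag for this object's Code from the global;
    // $GET returns "" if the node is missing, unary + turns that into 0
    quit +$GET(^GLOBAL(..Code,"INVALID"))
}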

More details are in the documentation.

There are no particular recommended source control systems; everything depends on your choice.

Some time ago, with Caché versions older than 2016.2 and before Atelier appeared, we could use nothing except Studio. In that case, to work with a particular source control system, we had to write our own Studio add-on as a wrapper around it. You can find a good example here on GitHub, one that works with Git.

Now that we have Atelier, we can forget about that part and use the many available plugins for different source control systems.

BTW, I prefer Git, but most of the time I have used SVN at work and Git for my own projects.

It's the right way, but don't you think you forgot to save your changes? Something like this:

w rtn.%Save()

Or you can just use $system.OBJ.Load("les.mac") or $system.OBJ.Load("some.class.cls") in versions 2016.2+.
If you have an earlier version, but not older than 2014.1, you can load a class with %Compiler.UDL.TextServices, as I've already suggested for export on Stack Overflow.
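
A sketch of loading a class from a UDL file with that class; the class name and file path here are hypothetical, and the class still has to be compiled afterwards:

    // import the UDL source for Some.Class from a file into the current namespace
    set sc = ##class(%Compiler.UDL.TextServices).SetTextFromFile($namespace, "Some.Class", "/tmp/some.class.cls")
    if 'sc do $system.OBJ.DisplayError(sc)
    // compile it so the changes take effect
    set sc = $system.OBJ.Compile("Some.Class")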

Every routine is stored directly in the database, so you can't just open it as a file on your filesystem the way you do with CSP files.

But you can open it with the %RoutineMgr class, something like this:

USER>zn "samples"

SAMPLES>set rtn=##class(%RoutineMgr).%OpenId("hello.mac")

SAMPLES>while 'rtn.Code.AtEnd { write !,rtn.Code.ReadLine()}
 
hello ; hello world routine
 write !, "hello world"
 write !, "bye"
end quit  ; end
 

If you also need to get a list of such files, you can use the StudioOpenDialog query in the %RoutineMgr class:

SAMPLES>set st=##class(%SQL.Statement).%New()
 
SAMPLES>set sc=st.%PrepareClassQuery("%RoutineMgr","StudioOpenDialog")
 
SAMPLES>set rs=st.%Execute("*.mac")
 
SAMPLES>do rs.%Display()
 
 
Dumping result #1
Name    IsDirectory     Type    Size    Date    Description     IconType
DocBook .       9                               0
badroutine.mac          0       62      2004-09-28 13:05:45             0
CinemaData.mac          0       10258   2016-05-10 22:32:04             0
compareloop.mac         0       1201    2004-12-02 18:23:57             0
datent.mac              0       3089    2002-01-03 12:05:01             0
datentobj.mac           0       2627    2002-09-06 00:15:23             0
dbconvert.mac           0       1532    2002-01-03 12:05:52             0
fibonacci.mac           0       365     2002-01-03 12:06:03             0
forexample.mac          0       502     2004-11-30 16:00:08             0
funcexample.mac         0       333     2002-01-03 12:06:30             0
hello.mac               0       251     2016-07-13 12:07:04.520803              0
LDAP.mac                0       68720   2013-12-16 11:48:09.962335              0
lookup.mac              0       8484    2008-08-21 19:17:48             0
lookup1.mac             0       1465    2002-01-03 12:07:15             0
lookup2.mac             0       5984    2002-01-07 18:08:46             0
lookupobj.mac           0       7857    2008-08-21 21:58:25             0
loopend.mac             0       242     2002-01-03 12:08:12             0
loopstart.mac           0       217     2002-01-03 12:08:21             0
nameloop.mac            0       552     2002-01-03 12:08:33             0
passbyref.mac           0       604     2002-01-07 12:01:47             0
postcond.mac            0       341     2002-01-03 12:08:51             0
procexample.mac         0       423     2002-01-03 12:08:59             0
publicvarsexample.mac           0       357     2002-01-03 12:09:09             0
RightTriangle.mac               0       1836    2011-02-24 18:56:29             0
root.mac                0       149     2004-11-30 15:57:29             0
simpleloop.mac          0       161     2002-01-03 12:09:59             0
SQLGatewayTest.mac              0       2480    2016-05-10 22:32:04             0
survivor.mac            0       179     2002-01-03 12:10:32             0
ZAUTHENTICATE.mac               0       37302   2015-03-10 10:48:43.589807              0
ZAUTHORIZE.mac          0       12120   2016-05-10 22:32:04             0
 
30 Rows(s) Affected

I've already had some experience with such systems in production. One of our projects has a similar architecture, just without Docker: on Windows, with some physical servers each running ECP client + CSP Gateway + Apache, and one HAProxy server in front of them all. In that setup the whole scheme is quite static, and adding a new ECP client means some manual work on all levels. But with Docker I expect to just call something like this command:

docker-compose scale ecp=10

and just get some new working instances, which get their web clients right after.

To run it as microservices and split the CSP Gateway and the Caché instance into different containers, I need a simple package with only the CSP Gateway, but the FieldTest versions do not contain one. But yes, sure, I think it is a good way too, and in that case I could have more web containers than Caché containers if needed.

Timur, you can look at my example on GitHub, which I wanted to use in an article about using Docker but haven't managed to finish yet. In this example I have a Dockerfile for an ECP client, which can be built for a particular ECP server. With an %Installer manifest it is possible to set up a backward data channel too: since we already know where our server is located, we can connect to it via %Net.RemoteConnection or something else and make a new backward connection; the problem in this case is how to remove the old ones. I have only played with it on one machine. In any case, Ansible could still be useful, but rather for preparing servers to work in a Docker cluster, which has to be done before we can use docker-compose and so on. My example also contains a web server (Apache), to have access to the new instance. What I wanted to do next was to use a load balancer, HAProxy or Traefik (as Luca recommended), to get a single access point to my application, dynamically expandable without any manual operations except scaling.

Yes, such a language extension is very useful, but there is one more less-known and, unfortunately, completely undocumented feature: structured system variables. Some of them may be familiar: ^$JOB, ^$LOCK, ^$GLOBAL and ^$ROUTINE. More information is in the documentation here. But not everyone knows that it is possible to make your own such global. You should create a routine named SSVN plus the global name in the %SYS namespace, for example SSVNZA for the ^$ZA global.

For example, with the code below you can SET, GET and KILL any values in this global; the global will contain individual data for every session, and it is possible to use $order or even $query on such a global:

ROUTINE SSVNZA
#define global $na(^CacheTemp.ZA($select($isobject($get(%session)):%session.SessionId,1:$job)))

fullName(glob) {
    set global=$$$global
    for i=4:1:glob set global=$na(@global@(glob(i)))
    quit global
}
set(glob, value) public {
    set global=$$fullName(.glob)
    set @global=value
}
get(glob) public {
    set global=$$fullName(.glob)
    quit @global
}
kill(glob, zkill=0) public {
    set global=$$fullName(.glob)
    if zkill {
        zkill @global
    } else {
        kill @global
    }
}
order(glob, dir) public {
    set global=$$fullName(.glob)
    quit $order(@global, dir) 
}
query(glob, dir) public {
    set global=$$fullName(.glob)
    set result=$query(@global, dir)
    quit:result="" ""
    set newGlobal=$name(^$ZA)
    for i=$ql($$$global)+1:1:$ql(result) {
        set newGlobal=$name(@newGlobal@($qs(result,i)))
    }
    quit newGlobal 
}
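
With the routine above compiled in %SYS (and assuming the SSVN dispatch wires these entry points up), the global could then be used almost like an ordinary one. A sketch, not verified against a real instance:

    set ^$ZA("color")="red"
    write ^$ZA("color")
    write $order(^$ZA(""))
    kill ^$ZA("color")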

But as I said before, since it is a completely undocumented feature, it is a bit difficult to get complete compatibility with ordinary globals. It can still be useful in other cases, though, just as InterSystems itself uses it.

But WaitMsg is on the parent's side; my control code usually looks something like this:

    set resName="someresource"
    job ..someJob(jobsData,resName)
    set child=$zchild
    
    for {
        // wait up to 1 second for a message on the resource
        set $lb(sc,data)=$system.Event.WaitMsg(resName, 1)
        if sc<=0 {
            // timeout or error: stop if the child process has gone
            quit:'$data(^$Job(child))
        }
        continue:data=""
        // do some stuff with the incoming data
    }

But if you want to interrupt your child process, that is of course only possible from within that process: if your process has some loops, you can just check whether the parent still exists and, if not, stop working.
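
The child's side is not shown above. A minimal sketch of what such a job method could look like (the method name and the data are hypothetical), sending results back with $system.Event.Signal and stopping when the parent disappears:

ClassMethod someJob(jobsData, resName As %String)
{
    for {
        // stop working if the parent process no longer exists
        quit:'$data(^$Job($zparent))
        // ... produce some data ...
        set data = "some result"
        // send it back to the waiting parent
        do $system.Event.Signal(resName, data)
        hang 1
    }
}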