I just want to add some other benefits of the way the data is stored. The first access of a global has to read the pointer tree from disk to find the particular node. A subsequent access of another node in the same global only needs to read from disk from the point where that node's path diverges from the first in the pointer tree. Even if the data blocks are not the same, a second access of the same global will likely be faster, since part of the tree is already cached and fewer disk reads are necessary.
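
As a toy illustration (the global and subscripts here are made up), two references like the following share the upper pointer blocks, so the second one typically needs fewer physical reads:

; first reference: walks the pointer blocks from the root down to the data block
set x = ^Orders("2016-01-04",1)
; second reference: the upper pointer blocks are already in the buffer pool,
; so only the blocks where this node's path diverges need to come from disk
set y = ^Orders("2016-06-30",999)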

As far as ECP is concerned, you are only creating remote DATABASES. You can map one NAMESPACE on a system to point to any databases it has configured, local or remote. You could easily have app server instance A connect to both data server instances B and C and create a remote database for each of them. You could then have one namespace on A use the databases from B and C for various things (globals, routines, specific mappings).
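
For example, here is a rough sketch of doing this programmatically with the Config classes in the %SYS namespace. The server, database, and namespace names are made up, and the exact property names should be checked against the class reference for your version:

; run in %SYS on app server A -- everything here is illustrative only
set props("Address")="dataserverB.example.com",props("Port")=1972
set sc=##class(Config.ECPServers).Create("SERVERB",.props)
; define a remote database served by B
kill props
set props("Server")="SERVERB",props("Directory")="/db/appdata/"
set sc=##class(Config.Databases).Create("APPDATA-B",.props)
; map a global from that remote database into an existing namespace
kill props
set props("Database")="APPDATA-B"
set sc=##class(Config.MapGlobals).Create("MYAPP","MyGlobal",.props)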

You can also do this via ^SECURITY

%SYS>d ^SECURITY
 
1) User setup
2) Role setup
3) Service setup
4) Resource setup
5) Application setup
6) Auditing setup
7) Domain setup
8) SSL configuration setup
9) Mobile phone service provider setup
10) OpenAM Identity Services setup
11) Encryption key setup
12) System parameter setup
13) X509 User setup
14) Exit
 
Option? 12
 
1) Edit system options
2) Edit authentication options

There is one extra piece to this that you might want to look at, and that is the GetReason() classmethod of %SYS.Journal.File. It will tell you "by backup" if the file was switched by ExternalFreeze. The following code will search back from the current file and print out the file name of the journal file that was switched to for the most recent ExternalFreeze or Caché online backup (there is no error checking, so it could fail if you don't have any files created that way):

set file = ##class(%SYS.Journal.System).GetCurrentFile()
set reason = ""
; walk back through the journal files until one switched "by backup" is found
for  {
    set reason = file.GetReason(file.Name)
    quit:reason="by backup"
    do ##class(%SYS.Journal.File).GetPrev(file.Name,.name)
    set file = ##class(%SYS.Journal.File).%OpenId(name)
}
write file.Name

If Bob's suggestion doesn't fix your problem, I recommend opening up a WRC issue about this. In addition to the encryption keys, you could have an issue with the SSL setup (SSL is required for mirrors if you're using journal encryption).  For what it's worth, you could take a look at my article about creating an SSL-enabled mirror with keys/certificates generated using the Public Key Infrastructure to see if that helps.

Larry, you probably want to open up a WRC issue to address this. The first step will be to try to figure out if the problem is with Caché or with the operation specifically. To do so, you'll want to try to make this connection in a terminal. If any of these statuses are not successful, you should run 'd $system.OBJ.DisplayError(<variable>)' and collect the local file you set up with the trace mask:

set ssh=##class(%Net.SSH.Session).%New()
do ssh.SetTraceMask(511,"<newlocalfile>")
set st1 = ssh.Connect("<SFTP HOST>","22")
set st2 = ssh.AuthenticateWithUsername("<UserName>","<Password>")
set st3 = ssh.OpenSFTP(.sftp)
set st4 = sftp.Put("<local path and filename>","<remote path and filename>")
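
For example, to check each of those statuses and display the error detail in the terminal:

if 'st1 do $system.OBJ.DisplayError(st1)
if 'st2 do $system.OBJ.DisplayError(st2)
if 'st3 do $system.OBJ.DisplayError(st3)
if 'st4 do $system.OBJ.DisplayError(st4)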

Do you have a local instance installed that you want to connect to? If so, the IP would be 127.0.0.1 (or localhost). You shouldn't need to configure the others unless this is a specialized instance setup. If you simply installed a development instance and are using that version of Studio, there is nothing you actually need to set up. As for the docs, maybe this section will be better. It may also be useful for you to start with the Using Caché Studio book.

Dmitry makes some good suggestions above. I would suggest collecting a 24-hour pButtons report on this server (you can also collect a 30-minute one simultaneously so you have some data more quickly) and opening a WRC issue with that data. It will contain the mgstat data referenced above, along with performance metrics at the OS level, among other things. Performance issues are generally not quick to solve; it takes time reviewing a lot of data before conclusions can be drawn.

I will also say that, if this is happening frequently, it's likely your disk is simply too slow for the amount of work the write daemon needs to do.
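
If you haven't run pButtons before, a minimal way to kick it off (assuming the utility is installed in %SYS, as it is by default) is from a terminal:

zn "%SYS"
do ^pButtons   ; choose the 24-hour profile at the prompt; a 30-minute run can be started from another terminal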

This means that the SuperServer spawned a child job to handle the incoming connection (this is what happens whenever any incoming connection comes to the SuperServer). After the child job started up, it tried to read the connection information from the client, but the message wasn't there. This could be caused either by a slow network or by something that's simply attempting to connect to the SuperServer without actually making a real connection (possibly testing that port). I would take a look at that IP address (10.251.10.16) to see what it is. If all 4 servers report the same IP address in their errors, my best guess is that it's some sort of monitoring software.

If all you're really looking to do is shut down the instance, you can do this with a $zf(-2) callout to the OS to run 'ccontrol stop <instance> quietly restart' (or 'ccontrol stopstart <instance>' if you are on Windows). Do not use $zf(-1) for this: $zf(-1) waits for the spawned command to finish, so the shutdown would wait on the calling process until the ShutdownTimeout value is reached. See this doc for an example of using $zf(-2):

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...
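
As a rough sketch (the instance name is a placeholder), the asynchronous callout could look like this; $zf(-2) returns 0 if the command was successfully spawned:

set cmd = "ccontrol stop MYINSTANCE quietly restart"   ; instance name is hypothetical
set rc = $zf(-2,cmd)                                   ; spawn and return immediately, don't wait
if rc'=0 write "Failed to spawn the shutdown command, rc=",rc,!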