we'll add the USER> prompt and a link to the Terminal doc to these instructions, and get rid of the extraneous word.

in deference to the wisdom of Salva and Doug, i have removed the section about mounting files individually.

If you are doing database encryption only, this is very unlikely to be the problem. Journal encryption is a bit tricky, as indicated by the text I cited, but as that text also says, "there are no specific mirroring-related requirements for database encryption".

Excellent work, Pete! we still have openings in Documentation ...

to your list of reference docs i would add Edit a failover member (GHA_mirror_set_configmembers_edit_failover), which is germane to your article and has a brief procedure for adding SSL/TLS to an existing mirror.

exactly. but in terms of your question about preparation, you want to do everything you can to ensure that the DR asyncs you have designated for potential promotion under this scenario have the most recent possible journal data. an obvious example is planning the location of the DRs and the network configuration to minimize latency to them, perhaps at the expense of latency to your reporting asyncs. doing whatever you can before the fact to keep your go-to DR(s) as close to caught up as possible at all times is the best disaster recovery approach you can take, i think.

Alexey,

1) it's interesting that you ask this, as i have just been writing up a great new set of features for this situation, which i believe will be in 2017.1. with these changes:

  • a DR async promoted with no partner checks with all connected members (presumably other asyncs) to see if they have more recent journal data than it does
  • when promoting a DR async with no partner, you have the option of setting no failover, which stages the promotion but keeps the DR from actually becoming primary until you clear that setting. among other things, this gives you the opportunity to reconnect any asyncs that may be disconnected from the mirror so the DR can check their journal data before it becomes primary
  • messages in the console log tell you which asyncs have been successfully contacted and what the results were, and which could not be contacted, guiding you in the process of restoring connections to maximize the chance of getting the most recent journal data

prior to these changes, i believe the manual procedure for obtaining any more recent journal data from other asyncs would be the same as the procedure you linked to for getting it from the backup, with one difference: instead of looking in the console log to confirm that the backup was active at the time of the failure (and that you are therefore getting the most recent journal data), you would compare the name and date of the most recent journal file on each async to the most recent journal file on the DR you are going to promote, to see whether the async can give you more recent journal data (which still may not be the most recent). we didn't document this because we would typically prefer that the customer work with the WRC whenever there is a possibility of data loss, so that they are made aware of all possible alternatives before committing to such a course and have expert guidance in minimizing data loss if it is inevitable.
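
for what it's worth, here is a minimal sketch of that comparison in ObjectScript; the journal directory path and the MIRROR-* filename pattern are assumptions you would adjust to your own configuration. run it on each async and on the DR you plan to promote, then compare the results:

    // find the newest mirror journal file and print its name and timestamp
    set dir = "/cachesys/mgr/journal/"                 // assumption: your mirror journal directory
    set rs = ##class(%ResultSet).%New("%Library.File:FileSet")
    do rs.Execute(dir, "MIRROR-*", "DateModified")     // assumption: mirror journal files named MIRROR-<mirrorname>-...
    set name = "", modified = ""
    while rs.Next() {
        // rows come back sorted ascending by DateModified, so the last row is the newest file
        set name = rs.Get("Name"), modified = rs.Get("DateModified")
    }
    write "newest mirror journal file: ", name, !
    write "last modified: ", modified, !

whichever member reports the later file (and timestamp) is the one holding the more recent journal data.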

as to what you can do to prepare, the most important things are to optimize journal performance so that all asyncs are kept caught up as quickly as possible and the potential gap in a disaster scenario is smaller, and to make sure the journal directories of all members will be accessible no matter what happens; see the "Journaling performance and journal storage" bullet in the mirror configuration guidelines (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_config_guidelines). you can also optimize your network configuration to maximize the speed of journal transfer; see http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm and http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm_network.

2) asynchronous journal transfer does not mean that the async pulls the journal files, or that the primary does not know whether an async has received each journal file. the primary does keep track; this is how the Mirror Monitor (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_monitor) on the primary provides Journal Transfer and Dejournaling statuses for all connected members. furthermore, the primary does not purge journal files until they have been received by the backup and all asyncs; to cover potential failover or promotion, the backup and DR asyncs also hold onto them until all asyncs have them. the one exception is that when an async has been disconnected for more than 14 days, journal files on the primary and backup can be purged when local purge criteria are met, even if that async has not yet received them. so the bottom line is that you are covered for two weeks; see "Purge Journal Files" and "Purge Mirror Journal Files" (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_journal#GCDI_journal_purge) in the Caché Data Integrity Guide for complete details. (as to trying to restore the journals to a disconnected async after more than two weeks, i suppose in theory it could be done, but my guess is that it would be a whole lot easier to rebuild the async. if you want to pursue this question, i can try to get an answer for you.)
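
if it helps to see the local purge criteria that the mirror purge rules above are layered on top of, here is a minimal sketch, assuming the Config.Journal settings class and its DaysBeforePurge and BackupsBeforePurge properties are available on your version; run it in the %SYS namespace:

    // show the local journal purge criteria (run in the %SYS namespace)
    set sc = ##class(Config.Journal).Get(.props)       // assumption: DaysBeforePurge / BackupsBeforePurge properties
    if 'sc { do $system.Status.DisplayError(sc) quit }
    write "purge after this many days: ", props("DaysBeforePurge"), !
    write "purge after this many consecutive successful backups: ", props("BackupsBeforePurge"), !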

i have alerted Tom and Ray Fucillo to this post, Anzelem. hopefully they will respond soon.

yes, now i see the problem. i didn't realize there was no option for installing a standalone agent in a different directory; i've never done it, and i assumed that since you were installing it on a system without Caché, you would be able to choose the installation directory. (of course, if that were the case, it should have been noted in the ISCAgent section of the Mirroring chapter.) adding that option as you suggest seems to make a lot of sense, but it would need to be validated by Tom Woodfin and the mirroring team.

have you filed a prodlog on this?