It is not enough for the restored database to be colocated with the Caché instance on the target server; the new database must also be configured within Caché.

  1. In the Management Portal on the target system, select System Administration > Configuration > System Configuration > Local Databases.
  2. On the Local Databases page, click the Create New Database button.
  3. On the first panel of the Database Wizard, enter a name for the new database and the local path to the database you restored to this host, then click Next.
  4. The next panel displays the message:
    Database file, CACHE.DAT, already exists in directory. If you do not want to use it, please press the [Back] button and modify the Directory.
  5. Click Finish to configure the database. Other than the name, the characteristics of the source database you backed up are carried over to the newly configured database on the target system.
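
If you would rather script this than click through the portal, I believe the equivalent can be done from the terminal in the %SYS namespace with the Config.Databases API. A minimal sketch, with a made-up database name and path; verify the property names against the class reference for your version:

    // Run in the %SYS namespace. The name and path are hypothetical.
    set props("Directory") = "/data/restored/mydb/"   // directory containing the restored CACHE.DAT
    set sc = ##class(Config.Databases).Create("MYDB", .props)
    if 'sc { do $system.Status.DisplayError(sc) }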

Raghu,

For information on configuring the journal directories using the Management Portal, please see Configuring Journal Settings in the Caché Data Integrity Guide. You can also configure the WIJ directory using the page described there.
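
If scripting is more convenient, I believe the same settings can be changed from the terminal in the %SYS namespace via the Config.Journal class; the sketch below is from memory, so verify the property names in the class reference (the WIJ directory is a separate setting that I believe lives in Config.config, so check there as well). The paths shown are hypothetical.

    // Run in the %SYS namespace. A sketch only; confirm the property
    // names in the Config.Journal class reference for your version.
    set sc = ##class(Config.Journal).Get(.props)              // read current journal settings
    write "current: ", props("CurrentDirectory"), !
    set props("CurrentDirectory") = "/journals/primary/"      // hypothetical new primary journal directory
    set props("AlternateDirectory") = "/journals/alternate/"  // hypothetical alternate directory
    set sc = ##class(Config.Journal).Modify(.props)
    if 'sc { do $system.Status.DisplayError(sc) }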

It looks like you probably saw the general discussion of best practices regarding the separation of journal, WIJ, and database directories in File System Recommendations in the Caché Installation Guide. I will make sure to update that section with an explicit link to the section above for instructions on how to change the locations, and to make it clear that these directories are not specified during installation. (For a separate discussion of journaling best practices regarding the locations of these directories, see Journaling Best Practices, also in the Data Integrity Guide.)

rcb

The section Defragmenting Globals in a Database (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_manage#GSA_defrag_databases_freespace) in the System Administration Guide says:

In general, it is not necessary to run defragmentation on any regular basis. Some workloads, however, particularly those that read large portions of a database sequentially, can benefit from having global blocks organized sequentially. 

This section was carefully revised a couple of releases ago and that advice represents current wisdom about the operation. Defragmenting globals is preferred over OS-level defragmentation.

See that section also for information about scheduling the operation and other concerns.
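
For scripted maintenance windows, I believe the operation also has a programmatic entry point in the %SYS namespace; treat the sketch below as an assumption and confirm the method name and signature in the SYS.Database class reference before relying on it:

    // Run in %SYS during a quiet period; defragmentation needs free
    // space and has scheduling concerns -- see the section cited above.
    set dir = "/data/mydb/"                          // hypothetical database directory
    set sc = ##class(SYS.Database).Defragment(dir)   // method name from memory; verify in the class reference
    if 'sc { do $system.Status.DisplayError(sc) }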

Exactly. But in terms of your question about preparation, you want to do everything you can to ensure that the DR asyncs you have designated for potential promotion under this scenario are likely to have the most recent possible journal data. An obvious example is planning the location of the DRs and the network configuration to minimize latency, perhaps at the expense of reporting async latency. Doing whatever you can ahead of time to keep your go-to DR(s) as close to caught up as possible at all times is the best disaster recovery approach you can take, I think.

Alexey,

1) It's interesting that you ask this, as I have just been writing up a great new set of features for this situation, which I believe will be in 2017.1. With these changes:

  • A DR async promoted with no partner checks with all connected members (presumably other asyncs) to see if they have more recent journal data than it does.
  • When promoting a DR async with no partner, you have the option of setting no failover, which stages the promotion but keeps the DR from actually becoming primary until you clear it. Among other things, this gives you the opportunity to get any asyncs that may be disconnected from the mirror connected, so the DR can check their journal data before becoming primary.
  • Messages in the console log tell you which asyncs were successfully contacted and what the results were, and which could not be contacted, guiding you in restoring connections to maximize the chance of getting the most recent journal data.

Prior to these changes, I believe the manual procedure for obtaining more recent journal data from other asyncs would be the same as the procedure for getting it from the backup that you linked to, with one difference. Instead of looking in the console log to confirm that the backup was active at the time of the failure (and that you are therefore getting the most recent journal data), you would compare the name and date of the most recent journal file on each async to the most recent journal file on the DR you are going to promote, to see whether you can get more recent journal data; note that what you get this way may still not be the most recent. We didn't document this because we would typically prefer that the customer work with the WRC when there is the possibility of data loss, to ensure they are made aware of all possible alternatives before committing to such a course, and that they have expert guidance in minimizing data loss if it is inevitable.
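
As a quick way to eyeball that comparison on each member, something like the following in the terminal shows the newest local journal file and its timestamp (for mirror journal data you would look at the mirror journal files in the journal directory instead; this is just the plain local-journal call):

    // Run on the DR you plan to promote and on each other async,
    // then compare the file names and dates across members.
    set file = ##class(%SYS.Journal.System).GetCurrentFileName()
    write "newest journal file: ", file, !
    // file modification time via %File (returns a $HOROLOG value)
    write "last modified: ", $zdatetime(##class(%File).GetFileDateModified(file), 3), !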

As to what you can do to prepare, the most important things are to optimize journal performance so that all asyncs catch up as quickly as possible, reducing the potential gap in a disaster scenario, and to make sure the journal directories of all members will be accessible no matter what happens; see the "Journaling performance and journal storage" bullet in the mirror configuration guidelines (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_config_guidelines). You can also optimize your network configuration to maximize the speed of journal transfer; see http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm and http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm_network.

2) Asynchronous journal transfer does not mean that the async pulls the journal files, or that the primary does not know whether an async has received each journal file. The primary does keep track; this is how the Mirror Monitor (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_monitor) on the primary provides Journal Transfer and Dejournaling statuses for all connected members. Furthermore, the primary does not purge journal files until they have been received by the backup and all asyncs; to cover potential failover or promotion, the backup and DR asyncs also hold onto them until all asyncs have them. The one exception is that when an async has been disconnected for more than 14 days, journal files on the primary and backup can be purged once the local purge criteria are met, even if that async has not yet received them. So the bottom line is that you are covered for two weeks; see "Purge Journal Files" and "Purge Mirror Journal Files" (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_journal#GCDI_journal_purge) in the Caché Data Integrity Guide for complete details. (As to restoring the journals to an async that has been disconnected for more than two weeks, I suppose in theory it could be done, but my guess is that it would be a whole lot easier to rebuild the async. If you want to pursue this question, I can try to get an answer for you.)
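
If you want to see the local purge criteria that interact with that 14-day window, they are part of the same journal settings; a sketch from memory (verify the property names in the Config.Journal class reference):

    // Run in %SYS: show the local journal purge criteria. Mirror journal
    // files additionally wait on the mirror conditions described above.
    set sc = ##class(Config.Journal).Get(.props)
    write "DaysBeforePurge:    ", props("DaysBeforePurge"), !
    write "BackupsBeforePurge: ", props("BackupsBeforePurge"), !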

Yes, now I see the problem. I didn't realize there is no option for installing a standalone agent in a different directory; I've never done it, and I assumed that since you were installing it on a system without Caché you would be able to choose the installation directory. (Of course, if that were the case, it should have been noted in the ISCAgent section of the Mirroring chapter.) Adding that as you suggest seems to make a lot of sense, but it would need to be validated by Tom Woodfin and the mirroring team.

Have you filed a prodlog on this?