Bob Binstock · Jun 22, 2022

Tom, can you provide a little more info? 

  • Was this a different IRIS instance than the one you successfully connected to from VS Code? Was it remote? What port # did you use in the successful case, and what # in the failed one?
  • I think the error message is from VS Code, is that right?
Bob Binstock · May 19, 2022

They were in one of the lists in the section David linked to, but now they are in all four. Thanks for catching this, David!

Bob Binstock · May 18, 2022

Inadvertently omitted, sorry. I will add it to the doc as soon as I get a chance.

Bob Binstock · Apr 30, 2019

We'll add the USER> prompt and a link to the Terminal doc to these instructions, and get rid of the extraneous word.

Bob Binstock · May 23, 2018

In deference to the wisdom of Salva and Doug, I have removed the section about mounting files individually.

Bob Binstock · Mar 23, 2017

If you are doing database encryption only, this is very unlikely to be the problem. Journal encryption is a bit tricky, as indicated by the text I cited, but as that text also says, "there are no specific mirroring-related requirements for database encryption".

Bob Binstock · Mar 23, 2017

William, is it possible that the DR async and failover members do not have all of the needed encryption keys loaded? See Activating Journal Encryption in a Mirror in the High Availability Guide. This text applies to releases supporting multiple encryption keys; what release are you running?

Bob Binstock · Feb 28, 2017

It is not enough for the restored database to be colocated with the Caché instance on the target server; the new database must be configured within Caché.

  1. In the management portal on the target system, select System Administration > Configuration > System Configuration > Local Databases.
  2. On the Local Databases page, click the Create New Database button.
  3. On the first panel of the Database Wizard, enter a name for the new database and the local path to the database you restored to this host, and click Next.
  4. The next panel says:
    Database file, CACHE.DAT, already exists in directory. If you do not want to use it, please press the [Back] button and modify the Directory.
  5. Click Finish to configure the database. Other than the name, the characteristics of the source database you backed up are carried over to the newly configured database on the target system.

Bob Binstock · Feb 3, 2017

Raghu,

For information on configuring the journal directories using the management portal, please see Configuring Journal Settings in the Caché Data Integrity Guide. You can also configure the WIJ directory using the page described there.

It looks like you probably saw the general discussion of best practices regarding the separation of journal, WIJ, and database directories in File System Recommendations in the Caché Installation Guide. I will make sure to update this section with an explicit link to the above section, and make it clear that these directories are not specified during installation. (For a separate discussion of journaling best practices regarding the locations of these directories, see Journaling Best Practices, also in the Data Integrity Guide.)

rcb

Bob Binstock · Jan 11, 2017

Excellent work, Pete! We still have openings in Documentation ...
To your list of reference docs I would add Edit a failover member (GHA_mirror_set_configmembers_edit_failover), which is germane to your article and has a brief procedure for adding SSL/TLS to an existing mirror.

Bob Binstock · Dec 6, 2016

The section Defragmenting Globals in a Database (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_manage#GSA_defrag_databases_freespace) in the System Administration Guide says:

In general, it is not necessary to run defragmentation on any regular basis. Some workloads, however, particularly those that read large portions of a database sequentially, can benefit from having global blocks organized sequentially. 

This section was carefully revised a couple of releases ago and that advice represents current wisdom about the operation. Defragmenting globals is preferred over OS-level defragmentation.

See that section also for information about scheduling the operation and other concerns.

Bob Binstock · Oct 31, 2016

Exactly. But in terms of your question about preparation, you want to do everything you can to make it likely that the DR asyncs you have designated for potential promotion under this scenario have the most recent possible journal data. For an obvious example, plan the location of the DRs and the network configuration to minimize latency, perhaps at the expense of reporting async latency. Doing whatever you can before the fact to keep your go-to DR(s) as close as possible to caught up at all times is the best disaster recovery approach you can take, I think.

Bob Binstock · Oct 31, 2016

Alexey,

1) It's interesting that you ask this, as I have just been writing up a great new set of features for this situation, which I believe will be in 2017.1. With these changes:

  • A DR async promoted with no partner checks with all connected members (presumably other asyncs) to see if they have more recent journal data than it does.
  • When promoting a DR async with no partner, you have the option of setting no failover, which stages the promotion but keeps the DR from actually becoming primary until you clear it. Among other things, this gives you the opportunity to get any asyncs that may be disconnected from the mirror connected, so the DR can check their journal data before becoming primary.
  • Messages in the console log tell you which asyncs have been successfully contacted and what the results were, and which could not be contacted, guiding you in the process of restoring connections to maximize the chance of getting the most recent journal data.

Prior to these changes, I believe the manual procedure for obtaining any more recent journal data from other asyncs would be the same as for getting it from the backup in the procedure you linked to, with one difference. Instead of looking in the console log to confirm that the backup was active at the time of the failure (and that you are therefore getting the most recent journal data), you would compare the name and date of the most recent journal file on the async to the most recent journal file on the DR you are going to promote, to see if you can get more recent journal data; bear in mind that "more recent" may still not be the most recent. We didn't document this because we would typically prefer that the customer work with the WRC when there is the possibility of data loss, to ensure that they are made aware of all possible alternatives before committing to such a course and that they have expert guidance in minimizing data loss if it is inevitable.
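To illustrate the comparison described above, here is a rough Python sketch. It relies on the fact that Caché journal file names embed the date, so lexical order matches chronological order for names of the same form; the member and file names below are hypothetical examples, not output from any real system.

```python
# Rough sketch: decide whether an async has more recent journal data than the
# DR you intend to promote, by comparing the newest journal file name on each.
# Journal file names embed the date (e.g. MIRROR-DEMO-20161031.001), so for
# names of the same form, lexical order matches chronological order.

def newest_journal(file_names):
    """Return the most recent journal file name, or None if the list is empty."""
    return max(file_names) if file_names else None

def has_more_recent_data(async_files, dr_files):
    """True if the async's newest journal file is newer than the DR's."""
    newest_async = newest_journal(async_files)
    newest_dr = newest_journal(dr_files)
    if newest_async is None:
        return False
    return newest_dr is None or newest_async > newest_dr

# Hypothetical example: the async holds one journal file the DR does not.
async_files = ["MIRROR-DEMO-20161030.001", "MIRROR-DEMO-20161031.001"]
dr_files = ["MIRROR-DEMO-20161030.001"]
print(has_more_recent_data(async_files, dr_files))  # True: the async is ahead
```

Keep in mind this only identifies which member is further ahead; as noted above, even the winner may not hold the most recent journal data the mirror ever had.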

As to what you can do to prepare, the most important things are to optimize journal performance so that all asyncs are caught up as quickly as possible, making the potential gap smaller in the event of a disaster scenario, and to make sure the journal directories of all members will be accessible no matter what happens; see the "Journaling performance and journal storage" bullet in the mirror configuration guidelines (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_config_guidelines). You can also optimize your network configuration to maximize the speed of journal transfer; see http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm and http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_comm_network.

2) Asynchronous journal transfer does not mean that the async pulls the journal files, or that the primary does not know whether an async has received each journal file. The primary does keep track; this is how the Mirror Monitor (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_monitor) on the primary provides Journal Transfer and Dejournaling statuses for all connected members. Furthermore, the primary does not purge journal files until they have been received by the backup and all asyncs; to cover potential failover or promotion, the backup and DR asyncs also hold onto them until all asyncs have them. The one exception is that when an async has been disconnected for more than 14 days, journal files on the primary and backup can be purged when local purge criteria are met, even if that async has not yet received them. So the bottom line is that you are covered for two weeks; see "Purge Journal Files" and "Purge Mirror Journal Files" (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_journal#GCDI_journal_purge) in the Caché Data Integrity Guide for complete details. (As to trying to restore the journals to a disconnected async after more than two weeks, I suppose in theory it could be done, but my guess is it would be a whole lot easier to rebuild the async. If you want to pursue this question, I can try to get an answer for you.)
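The purge rule described above can be sketched as a simple check. This is a minimal illustration of the logic, not any actual InterSystems code; the member names and the shape of the inputs are hypothetical.

```python
# Sketch of the journal purge rule described above: a journal file on the
# primary is purgeable only when the backup and every async have received it,
# except that a member disconnected for more than 14 days no longer holds up
# purging. Member names and inputs are hypothetical.

DISCONNECT_GRACE_DAYS = 14

def journal_purgeable(received_by, members, days_disconnected):
    """received_by: set of member names that already have the journal file;
    members: names of the backup and all asyncs;
    days_disconnected: member name -> days disconnected (absent or 0 if connected)."""
    for member in members:
        if member in received_by:
            continue
        # A member that has not received the file blocks purging unless it has
        # been disconnected longer than the 14-day grace period.
        if days_disconnected.get(member, 0) <= DISCONNECT_GRACE_DAYS:
            return False
    return True

members = ["backup", "dr_async", "reporting_async"]
received = {"backup", "dr_async"}
print(journal_purgeable(received, members, {"reporting_async": 20}))  # True
print(journal_purgeable(received, members, {"reporting_async": 3}))   # False
```

In other words, a connected (or briefly disconnected) member that lags behind keeps journal files alive on the primary, which is exactly the two-week coverage described above.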

Bob Binstock · Oct 25, 2016

I have alerted Tom and Ray Fucillo to this post, Anzelem. Hopefully they will respond soon.

Bob Binstock · Oct 25, 2016

Yes, now I see the problem. I didn't realize there is no option for installing a standalone agent in a different directory; I've never done it, and I assumed that since you were installing it on a system without Caché you would be able to choose the installation directory. (Of course, if that were the case, it should have been noted in the ISCAgent section in the Mirroring chapter.) Adding that as you suggest seems to make a lot of sense, but that would need to be validated by Tom Woodfin and the mirroring team.

Have you filed a prodlog on this?

Bob Binstock · Oct 25, 2016

Yes, that's correct. The ISCAgent should be running on the backup member at all times. The "Application Considerations" section of the Using Veritas Cluster Server for Linux with Caché appendix of the High Availability Guide (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_veritas_clusters#GHA_veritas_clusters_app_considerations) says:

  • If any Caché instance that is part of a failover cluster is to be added to a Caché mirror, the ISCAgent, which is installed with Caché, must be properly configured; see Configuring the ISCAgent in the “Mirroring” chapter of this guide for more information. For any node on which Caché was not installed as part of cluster setup (see Installing a Single Instance of Caché), install the ISCAgent on that node (see Installing the Arbiter in the “Mirroring” chapter for information about using a standalone ISCAgent install kit for this purpose) and then configure it.

We had a discussion about this some time back and concluded there is no reason not to have the agent running at all times on all mirror members regardless of their status in the cluster.

Anzelem, maybe I should move this up into the "Install a Single Instance of Caché" procedures in all the cluster appendixes?

Bob Binstock · Oct 20, 2016

Jared, I'm not sure what you mean by "an async mirror over a shadow configuration". A reporting async in a mirror has similarities to a shadow but is not the same thing. The minimum downtime upgrade procedures for mirrors that you cited don't involve any shadow configurations, and in fact the "minimum downtime" aspect depends on mirror failover, which is something a shadow cannot do. As I noted, this is a clear advantage of mirroring over shadowing.

You are correct that, as I noted, mirroring provides the option of DR async promotion, which offers superior disaster recovery to shadow-based DR. But I want to clarify that DR asyncs can be and often are "off-site", that is, at a geographical remove from the primary data center. They fulfill this need as well as a shadow does and provide the added benefit of promotion.

Bob Binstock · Sep 27, 2016

Wolf, you can certainly set up a mirror consisting of one failover member and a reporting async member.

Mirror management and the mirroring user interface are at this point considerably better developed than the equivalent for shadowing. As someone who has been through the procedures for setting up and managing mirrors and shadows, I would certainly opt for the former for usability and monitoring advantages if nothing else.

Mirroring has capabilities that make planned maintenance easier; you don't have to maintain a second failover member or DR async, but you can add one to serve as temporary primary when you want to bring the primary down for maintenance or do a minimal downtime upgrade. In addition, having a mirror allows you to easily add capabilities in the future. For example, you can add a DR async for disaster recovery that is superior to what shadowing or a reporting async provides (since a DR async can be easily promoted to primary) or convert your reporting async to a DR async. You can also add reporting asyncs as your query needs evolve.

Perhaps someone with more expertise regarding the architectural advantages mirroring may have over shadowing in this setup can elaborate on those.

Bob Binstock · Sep 23, 2016

While there is certainly no harm in following the procedure Mike describes, InterSystems recommends consulting the documentation (in this case http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_windows) before and during your Caché install, especially if you plan to keep the Caché instance around for specific purposes. For example, understanding the implications of choosing a security level, of 8-bit vs. Unicode, of whether to run Caché under the local system account or under other credentials, and so on before making these decisions is very useful. And having the background information provided is generally helpful in understanding and working with Caché.

All of the latest Caché and Ensemble documentation is always available at http://docs.intersystems.com, and by using the Search function here on Developer Community as well.

Bob Binstock · Sep 22, 2016

All Caché and Ensemble documentation is also accessible using the Search function right here on Developer Community. By using the Categories filter, you can quickly find a useful entry point into the documentation, and then browse around and search within it if need be. 

For example, if I search for "Cache SQL" and then set Categories to Product Documentation and scroll down a bit, I find Caché QuickStart Tutorial  ▶  Querying the Database and a little later many links into the Caché SQL Reference, for example Caché SQL Reference  ▶  SELECT. Needless to say, searching "Cache SQL Reference" narrows it down quite a lot.

Clicking the title of a search result gives you a Developer Community display of the material, but clicking the View the related documentation link for any entry takes you into the documentation itself. 

Note that the search filters can also be used to narrow searches for a lot of other useful things. For example, try searching Ensemble with Categories set to Video and Tags set to Ensemble for a long list of very interesting Ensemble-related videos. (You can choose tags within the Ensemble group to narrow it down.)

Finally, about the "beta documentation" Mike refers to:  

Bob Binstock · Sep 8, 2016

Actually, you can deploy a mirror consisting of a primary and a disaster recovery (DR) async. This is indeed a real DR solution. See the article!

Bob Binstock · Sep 8, 2016

Kevin, see http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…. The failover members are not required to be running the same operating system; as long as they have the same endianness and the Caché instances are of the same version and have the same character width and locale, the mirror will work.

Remember, however, that there is no defined primary. As the article notes, "Since the failover members can trade roles as primary and backup at any time, they should be as similar as possible; CPU and memory configuration should be the same or close, and the storage subsystems should be comparable." I would extend this to Alexey's point about applications. As long as the two machines will operate exactly the same way in response to application connections to the databases, handle the same loads, and so on, you are OK.
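The compatibility rule in the first paragraph can be sketched as a simple check. This is only an illustration of the stated rule; the member attributes and values below are hypothetical, not drawn from any real configuration.

```python
# Sketch of the failover-member compatibility rule described above: the two
# members must have the same endianness, Caché version, character width, and
# locale, but the operating system itself need not match. Member data below
# is hypothetical.

def can_mirror(member_a, member_b):
    """Each member is a dict with 'endianness', 'version', 'char_width', 'locale'."""
    keys = ("endianness", "version", "char_width", "locale")
    return all(member_a[k] == member_b[k] for k in keys)

linux_member = {"endianness": "little", "version": "2016.2",
                "char_width": "Unicode", "locale": "enuw", "os": "Linux"}
windows_member = {"endianness": "little", "version": "2016.2",
                  "char_width": "Unicode", "locale": "enuw", "os": "Windows"}
print(can_mirror(linux_member, windows_member))  # True: the OS difference is fine
```

Note that this only captures the "will the mirror work" rule; as the quoted guideline says, the members should also be comparable in CPU, memory, and storage so they can trade roles without a performance surprise.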

Bob Binstock · Jul 26, 2016

I agree, Mike, I wouldn't discourage anyone from just trying it out. But even the simplest installation of Caché presents choices with significant implications, for example 8-bit vs. Unicode and security settings. It doesn't add a lot of overhead to have the documentation open so you can read along with the procedure as you execute it, and at the least it may save you the trouble of having to uninstall and start over.

Bob Binstock · Jul 26, 2016

Thank you, Eric. I should have been more specific about the new file attribute command. Internally we tend to focus on the most recent releases; I will remember next time that there are plenty of sites using older kits.