Anzelem Sanyatwe · Oct 26, 2016

Hi Heikki,

I was recently faced with the same situation you are in. I doubt shadowing is supported between those two versions; you can check it under "4) Supported Version Interoperability" here: http://docs.intersystems.com/documentation/ISP/ISP-20162.pdf

The method we used was a full backup on the old system and a restore on the new system just before migration day. Then, during the downtime after users were stopped in the migration window, we did a cumulative backup and restore (1. the cumulative backup and restore minimizes downtime; 2. it is quick to copy over, as it is smaller). This plan worked well for us.
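For what it is worth, a rough sketch of that sequence, assuming an instance name of LABSYS (hypothetical here) and using Caché's menu-driven ^BACKUP and ^DBREST utilities; the exact prompts and options vary by version:

    # On the OLD system, before migration day: take a full backup.
    # (^BACKUP and ^DBREST are menu-driven, so they are run interactively
    #  from a %SYS terminal session.)
    csession LABSYS -U %SYS
    #   %SYS> DO ^BACKUP      ; choose a Full backup, note the output .cbk file

    # Copy the .cbk file to the new system and restore it there:
    #   %SYS> DO ^DBREST

    # In the migration window, after users are stopped on the old system:
    #   %SYS> DO ^BACKUP      ; this time choose a Cumulative backup
    # Copy the (much smaller) cumulative .cbk across, restore it on the new
    # system with ^DBREST, then point users at the new instance.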
Anzelem Sanyatwe · Oct 26, 2016

Dear Alexey,

We do not have two different DR approaches. The mirror configuration consists only of a Primary (at the production site) and a DR async (at the DR site), so two instances in total. The production site has two physical boxes in a Veritas Cluster configuration for HA purposes: should the first node have an issue, Caché fails over to the second node and still comes up as the mirror Primary. Should both nodes go down, or should we lose the entire production site, we promote the DR async instance. The same applies at the DR site. In this environment the decision to fail over to DR is not automatic; it has to be announced first.
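As an aside, when that failover to DR is declared, the promotion itself is a manual step on the DR async member. A minimal sketch, again assuming a hypothetical instance name of LABSYS and using the menu-driven ^MIRROR utility in %SYS (exact menu options depend on the Caché version):

    # On the DR async member, after the failover to DR has been announced:
    csession LABSYS -U %SYS
    #   %SYS> DO ^MIRROR      ; go into Mirror Management and choose the option
    #                         ; to promote this DR async member to a failover
    #                         ; member; it can then take over as Primary.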
Anzelem Sanyatwe · Oct 26, 2016

Hi Mark,

I like that; those are logical steps to follow. The last time I checked, you did not have a Veritas lab environment to validate this, and the moment ISCAgent becomes a cluster resource it has to conform to the Veritas resource facets, e.g. start, monitor, offline, etc. My instance is production only, so we have little room to experiment with this. Hence the other, easier and quicker way was the suggestion to break out the ISCAgent directory. I just tested the rsync copy and the directory edit in the service script, and it seems to start up well.
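For reference, if the ISCAgent ever were made a cluster resource, it would presumably be wired up through something like the VCS Application agent. A very rough, untested sketch of that idea (the resource and service group names here are made up, and this is not a validated InterSystems procedure):

    # Hypothetical VCS Application resource for the ISCAgent.
    hares -add iscagent_app Application labsys_sg
    hares -modify iscagent_app User root
    hares -modify iscagent_app StartProgram "/etc/init.d/ISCAgent start"
    hares -modify iscagent_app StopProgram  "/etc/init.d/ISCAgent stop"
    hares -modify iscagent_app MonitorProcesses "ISCAgent"
    hares -modify iscagent_app Enabled 1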
Anzelem Sanyatwe · Oct 25, 2016

Hi Bob,

I would appreciate it if you could hook me up with Tom Woodfin and the mirroring team. The calls to peruse are 861211, which was a continuation of 854501. There are all sorts of suggestions in them, but it would help if this could be bounced back to them to be validated.
Anzelem Sanyatwe · Oct 25, 2016

I previously wrote this to the WRC and am still waiting for it to be ratified as a viable alternative:

"The option I have been thinking of all along, which could be an easy way forward if it is possible, is for you to re-package the ISCAgent installer to install into a different directory instead of the default one. The default directory is the one giving us headaches, as it is linked back to the cluster disk. What I mean is that on the secondary node, without the cluster disks, this is what you encounter:

    []# pwd
    /usr/local/etc
    []# ls -al cachesys
    lrwxrwxrwx. 1 root root 43 May 28 2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/
    (This path resides on a cluster disk.)
    []# cd /usr/local/etc/cachesys
    -bash: cd: /usr/local/etc/cachesys: No such file or directory

So in this scenario I cannot install the ISCAgent independently in its default form, as it will fail as above. We cannot touch that link, as that would break the cluster failover. The modifications I am talking about are:

1. Change the default directory by creating a new one at /usr/local/etc/iscagent.
2. Modify the /etc/init.d/ISCAgent script on this line, from AGENTDIR=${CACHESYS:-"/usr/local/etc/cachesys"} to AGENTDIR=${CACHESYS:-"/usr/local/etc/iscagent"}.

After the installation this seems achievable by doing the following:

1. rsync -av /usr/local/etc/cachesys/* /usr/local/etc/iscagent/
2. Edit /etc/init.d/ISCAgent as suggested in 2 above.

The issue I have with this is that there could be other references in the installer that I am not aware of. If so, that is why I am suggesting you re-package it with the modifications described above. This way we make the ISCAgent independent and resident locally on the TWO nodes (the primary and secondary failover nodes), as its binaries do not really need to follow the cluster resources all the time. This way we also make /etc/init.d/ISCAgent start automatically with the OS."
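In case it helps anyone following along, the workaround described above could be scripted roughly as follows on each failover node. This is only a sketch of the idea in the post, not an InterSystems-sanctioned procedure, and it assumes the AGENTDIR line in the init script appears as quoted above:

    #!/bin/sh
    # Break the ISCAgent out of the cluster-linked directory (sketch only).
    # Run the copy on the node where the cluster disk is currently mounted,
    # otherwise /usr/local/etc/cachesys is a dangling symlink.

    AGENT_SRC=/usr/local/etc/cachesys     # default location (symlinked to the cluster disk)
    AGENT_DST=/usr/local/etc/iscagent     # new, node-local location

    # 1. Create the node-local directory and copy the agent files across.
    mkdir -p "$AGENT_DST"
    rsync -av "$AGENT_SRC"/ "$AGENT_DST"/

    # 2. Point the init script's AGENTDIR default at the new directory.
    sed -i '/^AGENTDIR=/ s|cachesys|iscagent|' /etc/init.d/ISCAgent

    # 3. Start the agent from its new home and enable it at boot.
    /etc/init.d/ISCAgent start
    chkconfig ISCAgent on    # RHEL 6-style; adjust for your init system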
Anzelem Sanyatwe · Oct 25, 2016

Hi Pete,

Unfortunately, the ISCAgent is not part of the cluster service groups. The ISC Veritas 'online' script only handles the Caché portion.
Anzelem Sanyatwe · Oct 25, 2016

Hi Bob,

I would like you to understand where the complication comes from. It is actually a bit further up on that page, under "Install a Single Instance of Cache", point number 2: create a link from /usr/local/etc/cachesys to the shared disk. This forces the Caché registry and all supporting files to be stored on the shared disk resource you have configured as part of the service group, and the documentation suggests commands to run for this. Now, because the default install directory is linked out, you cannot install a standalone ISCAgent kit on that second node when the cluster disks are not present. Typically you will get this:

    []# pwd
    /usr/local/etc
    []# ls -al cachesys
    lrwxrwxrwx. 1 root root 43 May 28 2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/
    (This path resides on a cluster disk.)
    []# cd /usr/local/etc/cachesys
    -bash: cd: /usr/local/etc/cachesys: No such file or directory

The default install directory of the ISCAgent is the same path that is mapped out to the shared cluster disks, hence the complication and why I am reaching out. I also agree that the ISCAgent can run on each node independently; there is no strong reason for its binaries to follow the cluster resources all the time.
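For context, the linking step from the documentation that creates this situation boils down to something like the following (I do not have the doc's exact commands to hand; the shared-disk path shown is simply the one from our environment above):

    # On each cluster node, per the "Install a Single Instance of Cache" steps:
    # replace the local cachesys directory with a link to the shared disk
    # (after relocating any existing local cachesys contents there first).
    ln -s /tcl_prod_db/labsys/usr/local/etc/cachesys /usr/local/etc/cachesys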
Anzelem Sanyatwe · Oct 25, 2016

Hi Jeffrey,

Isn't that tool too late? For that to be processed, the ISCAgent needs to be up and running already. The common message in the console.log, before any mirror checks happen, is this one:

"Failed to verify Agent connection...(repeated 5 times"
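When chasing that message, a quick sanity check on each node looks something like the following (assuming the agent is on its default port of 2188; adjust if your configuration differs):

    # Is the ISCAgent process alive?
    ps -ef | grep '[I]SCAgent'

    # Is it listening on its port (2188 by default)?
    netstat -an | grep 2188

    # If not, start it from the init script:
    /etc/init.d/ISCAgent start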