Dear Alexey;

We do not have 2 different DR approaches.

The mirror config consists only of a Primary (at the Production site) and a DR async member (at the DR site), so two instances in total.

The Production site has two physical boxes in a Veritas Cluster config for HA purposes. Should the first one have an issue, Caché fails over to the second node and still comes up as Primary. Should both of those nodes go down, or we lose the entire Production site, then we promote the DR async instance. The same applies to the DR site. In this environment the decision to fail over to DR is not an automatic process; it needs to be announced first.
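
For clarity, the Veritas side of this can be checked on the nodes with the usual VCS commands; a quick sketch (the service group name here is a placeholder for whatever the Caché service group is actually called):

[]# hastatus -sum                # overall cluster and service group state
[]# hagrp -state labsys_sg       # which node currently holds the Caché service group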

Hi Mark;

I like that; those are logical steps to follow. Last time I checked, you guys did not have a Veritas lab test environment to validate this, because the moment it becomes a cluster resource it will then need to conform to the Veritas agent facets, e.g. start, monitor, offline, etc. My instance is only in Prod mode, so we have little room to experiment with this. Hence the other, quick and easy way was the suggestion to break out the ISCAgent directory. I just tested the rsync copy and the directory edit in the service script, and it seems to start up well.
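
Just to spell out what conforming to those facets would look like if the ISCAgent ever did become a cluster resource, a rough sketch using the VCS Application agent (purely illustrative; the resource and group names are made up, and the attribute values would need validating in a lab first):

[]# hares -add iscagent_res Application labsys_sg
[]# hares -modify iscagent_res StartProgram "/etc/init.d/ISCAgent start"
[]# hares -modify iscagent_res StopProgram "/etc/init.d/ISCAgent stop"
[]# hares -modify iscagent_res MonitorProcesses ISCAgent
[]# hares -modify iscagent_res Enabled 1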

I previously wrote the following to the WRC and am still waiting for them to confirm whether this is a viable alternative.

 

""""

The option I’ve been thinking of all along, which could be an easy way forward if it is possible, is for you to re-package the ISCAgent installer so that it installs into a different directory instead of the default one. The default directory is the one giving us headaches, as it is linked back to the cluster disk.

What I mean is: if I’m on the secondary node, without the cluster disks, this is what you will encounter:

 

[]# pwd
/usr/local/etc
[]# ls -al cachesys
lrwxrwxrwx. 1 root root 43 May 28  2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/   (this path resides on a cluster disk)
[]# cd /usr/local/etc/cachesys
-bash: cd: /usr/local/etc/cachesys: No such file or directory

 

So in this scenario I cannot install the ISCAgent independently in its default location, as it will fail as shown above.

We cannot touch that link, as that would break the cluster failover.

 

So the modifications I’m talking about will be:

  1. Change the default directory by creating a new one at ‘/usr/local/etc/iscagent’.
  2. Then modify the /etc/init.d/ISCAgent script, changing the line AGENTDIR=${CACHESYS:-"/usr/local/etc/cachesys"} to AGENTDIR=${CACHESYS:-"/usr/local/etc/iscagent"} (see the sketch just below this list).
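
A minimal sketch of how that edit could be applied, assuming sed is acceptable and that the cachesys path appears only on that AGENTDIR line (worth grepping first):

[]# cp /etc/init.d/ISCAgent /etc/init.d/ISCAgent.orig
[]# sed -i 's|/usr/local/etc/cachesys|/usr/local/etc/iscagent|' /etc/init.d/ISCAgent
[]# grep AGENTDIR /etc/init.d/ISCAgent    # confirm the change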

 

After the installation this seems achievable by doing the following:

  1. rsync -av /usr/local/etc/cachesys/* /usr/local/etc/iscagent/
  2. Then edit /etc/init.d/ISCAgent as suggested in step 2 above (the combined sequence is sketched below).
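
Put together, the post-install sequence on a node would look roughly like this; the mkdir and the stop/start are my assumptions about what is needed around the two steps above, and the rsync source only resolves on the node that currently holds the cluster disks:

[]# mkdir -p /usr/local/etc/iscagent
[]# rsync -av /usr/local/etc/cachesys/* /usr/local/etc/iscagent/
[]# sed -i 's|/usr/local/etc/cachesys|/usr/local/etc/iscagent|' /etc/init.d/ISCAgent
[]# /etc/init.d/ISCAgent stop     # if it is already running
[]# /etc/init.d/ISCAgent start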

The issue I have with this is that there could be other references in the installer that I might not be aware of, hence the suggestion that you guys re-package it with the modifications described above.

 

This way we make the ISCAgent independent, residing locally on the two nodes (primary and secondary failover node), as its binaries don’t really need to follow the cluster resources all the time. This way we also make /etc/init.d/ISCAgent start automatically with the OS (see the note just below).
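
For the automatic start, on a chkconfig-based RHEL release it would be along these lines (an assumption on my part; on a systemd release the equivalent would be systemctl enable ISCAgent):

[]# chkconfig --add ISCAgent
[]# chkconfig ISCAgent on
[]# chkconfig --list ISCAgent    # verify the run levels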


"""'''

Hi Bob;

I would like you to understand where the complication is coming from. It actually comes from a bit further up on that page, under "Install a Single Instance of Caché", point number 2: create a link from /usr/local/etc/cachesys to the shared disk. This forces the Caché registry and all supporting files to be stored on the shared disk resource you have configured as part of the service group. The documentation further suggests commands to run.
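
For illustration, the commands are along these lines (a sketch based on our environment; /tcl_prod_db/labsys is our shared mount, as seen in the listing below, and the documentation's exact steps may differ):

[]# mkdir -p /tcl_prod_db/labsys/usr/local/etc/cachesys
[]# ln -s /tcl_prod_db/labsys/usr/local/etc/cachesys /usr/local/etc/cachesys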

Now, because the default install directory is linked out, you cannot install a standalone kit of the ISCAgent on that second node, because the cluster disks are not present there. Typically you will get this:

[]# pwd
/usr/local/etc
[]# ls -al cachesys
lrwxrwxrwx. 1 root root 43 May 28  2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/   (this path resides on a cluster disk)
[]# cd /usr/local/etc/cachesys
-bash: cd: /usr/local/etc/cachesys: No such file or directory

 

The default install directory of the ISCAgent is the same as the path that is mapped out to the shared cluster disks, hence the complication and why I am reaching out.

I also agree that the ISCAgent can run on each node independently. There is no strong reason for its binaries to follow the cluster resources all the time.