High availability (HA) refers to the goal of keeping a system or application operational and available to users a very high percentage of the time, minimizing both planned and unplanned downtime.
We are trying to write a high-availability shutdown/start script so that if we need to fail over to one of our other servers we can be back up within minutes. Is there a way to configure the startup procedure to automatically stop/start the JDBC server when Caché shuts down or starts up? Is there an auto setting we can change?
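One pattern worth sketching here, rather than a single built-in "auto" setting, is to hook the instance's own startup and shutdown through the user-defined ^%ZSTART and ^%ZSTOP routines and launch or stop the external JDBC/Java gateway process from there. A minimal sketch, assuming hypothetical start/stop wrapper scripts on the host; the SYSTEM entry points may need to be enabled in the instance's startup settings:

%ZSTART ; user-defined startup hooks (routine saved in %SYS)
SYSTEM  ; runs when the instance starts
    ; start the external JDBC/Java gateway process (hypothetical wrapper script)
    do $ZF(-1,"/usr/local/bin/start-jdbc-gateway.sh")
    quit

%ZSTOP ; user-defined shutdown hooks (separate routine, also saved in %SYS)
SYSTEM  ; runs during an orderly instance shutdown
    ; stop the external JDBC/Java gateway process (hypothetical wrapper script)
    do $ZF(-1,"/usr/local/bin/stop-jdbc-gateway.sh")
    quit

The same hooks can be reused by a failover shutdown/start script, since anything placed under SYSTEM^%ZSTART and SYSTEM^%ZSTOP runs whenever the instance is started or stopped in an orderly way.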
If I set up a mirror and add a new database to it, do I need to create the new database on every single member of the mirror, or will it automatically appear on the mirror members? And what about the HSCUSTOM database, which was already created when IRIS was installed?
We are seeing more and more customers being lured by the latest infrastructure technologies, particularly composable infrastructure, which comes with all sorts of promises about data center consolidation and cost savings.
The question is: are there any concerns with running HealthShare/TrakCare on these platforms, or things to look out for? Is anyone out there already on these platforms?
To be more specific, this is HPE Synergy with 480 Compute blades booting as bare metal.
We will have an arbiter, two failover members (A, B), and an async DR member (C). I have the two failover members in sync, and they are configured for arbiter control.
My question is about the async member: when I initially set it up, I pointed it at the mirror on the primary node, A.
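As a quick sanity check when wiring up the async, each instance can report its own mirror role. A minimal sketch, assuming these %SYSTEM.Mirror calls are available on your version, run on the member you want to inspect:

 ; each instance reports how it participates in the mirror
 write "In a mirror:  ",$SYSTEM.Mirror.IsMember(),!      ; nonzero if this instance is a mirror member
 write "Member type:  ",$SYSTEM.Mirror.GetMemberType(),! ; distinguishes failover from async membership
 write "Is primary:   ",$SYSTEM.Mirror.IsPrimary(),!     ; 1 only on the current primary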
I am looking into creating a ZSTOP routine, as you have probably seen from my previous posts. Is there a way to capture the type of shutdown that occurred, say an unknown hardware failure (forced) versus a user shutdown? I am mainly looking to distinguish a user or system shutdown when we force another member to become the primary in the mirror, so that if a user shut down the production we can do Task A, Task B, etc.
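As far as I know, ^%ZSTOP is not handed the shutdown type, but one hedged pattern is to have SYSTEM^%ZSTOP write a marker every time an orderly shutdown runs: because the routine only executes during a controlled stop, the absence of a fresh marker after a restart implies the previous stop was forced (crash, power loss, hardware failure). A minimal sketch; the log path is hypothetical and the SYSTEM entry point may need enabling in the startup settings:

%ZSTOP ; user-defined shutdown hooks (routine saved in %SYS)
SYSTEM  ; executes only during an orderly instance shutdown
    new file
    set file="/var/log/cache/clean-shutdown.log"   ; hypothetical path
    open file:("AWS"):5                            ; append mode, 5-second timeout
    if $test {
        use file
        write $zdatetime($horolog,3)," orderly shutdown, $USERNAME=",$username,!
        close file
    }
    quit

On the startup side, SYSTEM^%ZSTART (or the failover script) could check for and then clear the latest marker: if the previous stop left no entry, treat it as a forced or abnormal shutdown and branch into Task A, Task B accordingly.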
I need to give an answer for an RFP where it will be considered an extra to have a failover system in which each failover member is located in a different data center, separated by more than 100-200 miles.
The ISCAgent is automatically installed with Caché, runs as a service, and can be configured to start with the system. This is fine, but the complication comes when this is on VCS clusters with mirroring enabled. In the instructions for installing a single instance of Caché in a cluster, point 2 says: "Create a link from /usr/local/etc/cachesys to the shared disk. This forces the Caché registry and all supporting files to be stored on the shared disk resource you have configured as part of the service group."
I am looking for the timeout value for the shadowing connection and, if it is known, the timeout for mirroring as well.
I am currently getting the error below, and when we move to mirroring it would be good to know this too, so that we can go back to our network vendor and give them as much information as possible.
10/19/16-12:19:40:696 (d30b6) 2 [SHADOWING] SHADOW SERVER (PRODSHAD): <ZREAD>ReadNone+3^SHDWUTIL;ERROR #1071: TCP read timed out - remote server is not responding (repeated 16 times)
Currently we have two Windows servers in a clustered environment. Is there a setting in HealthShare for a TCP/IP operation to use the virtual IP address when initiating the connection, rather than the host IP address?
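For what it's worth, the TCP adapters expose a "Local Interface" connection setting that controls which local address an operation binds when it initiates the connection; whether it accepts the cluster's virtual IP on your version is something to confirm, so treat this as an assumption rather than a documented answer. A hypothetical item inside the production class's XData ProductionDefinition (names and addresses are placeholders):

<Item Name="ToDownstream" ClassName="EnsLib.HL7.Operation.TCPOperation" Enabled="true" PoolSize="1">
  <Setting Target="Adapter" Name="IPAddress">192.168.10.50</Setting>
  <Setting Target="Adapter" Name="Port">5000</Setting>
  <!-- bind outgoing connections to the cluster virtual IP (placeholder address) -->
  <Setting Target="Adapter" Name="LocalInterface">10.0.0.100</Setting>
</Item>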
Is it possible to make Caché Terminal available over the mirror VIP address in a mirrored HealthShare environment, so that connecting to a terminal always reaches the live node?
I'm looking to write a PowerShell script to run against the system and need to connect to the live node in a mirrored setup. Is this possible, or am I going to have to log on to each node to establish which one is live? Or does this even matter?
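Whichever way the connection is made, the check on the Caché side is cheap: each instance can say whether it is currently the primary. A minimal sketch:

 ; returns 1 on the current primary failover member, 0 anywhere else
 write $SYSTEM.Mirror.IsPrimary()

A PowerShell wrapper could run that one-liner against each node (or against the mirror VIP, which is always held by the current primary) and act only where the answer is 1.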
I have Ensemble/HealthShare running in a production environment that is set up with a mirrored failover pair and an arbiter sitting between them.
In the event of a failover, we have a number of connections that need to be stopped, monitored, and started in a certain order.
Is there a programmatic way to detect the failover, stop certain services and operations immediately, and then start them up again in the required order, checking each connection's state before starting the next?
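One pattern worth prototyping (a sketch under assumptions, not a drop-in answer): mirroring calls the NotifyBecomePrimary entry point of the user-defined ^ZMIRROR routine on the member that takes over, and from there Ens.Director can disable and re-enable the relevant items in the order required. Item names and the namespace are placeholders, the production may still be starting when the hook fires, and the "wait until connected" step is simplified to a fixed delay; a real version would poll each item's connection status before moving on.

ZMIRROR ; user-defined mirror event hooks (see the mirroring docs for where ^ZMIRROR lives and the full entry-point list)
NotifyBecomePrimary() ; called once this member has become the new primary
    job RestartInOrder^ZMIRROR     ; hand the slow work to a background job
    quit
RestartInOrder ; disable the relevant items, then re-enable them in the required order
    new $namespace,item,sc
    set $namespace="HSPROD"        ; placeholder production namespace
    for item="From.Upstream.Service","To.Lab.Operation","To.PAS.Operation" {
        set sc=##class(Ens.Director).EnableConfigItem(item,0,1)   ; stop immediately
    }
    for item="To.PAS.Operation","To.Lab.Operation","From.Upstream.Service" {
        set sc=##class(Ens.Director).EnableConfigItem(item,1,1)   ; start in the required order
        hang 10   ; simplification: poll the item's connection state here instead of a fixed delay
    }
    quit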