Question
· Mar 8, 2016

Cache Mirroring on Cloud servers.

Hi,

Can a Cache Mirror be used in the cloud? (i.e., stand up Primary and Backup member instances in a High Availability Cache Mirroring configuration)

I'm investigating the validity of this configuration, because I was of the understanding that this may not be possible due to these cloud servers not (typically) having fixed IP addresses, which interferes with the Virtual IP settings for the mirror set.

Is this correct, and if there are workarounds (like load balancing?), can I have details on how this should be configured?

I'm assuming other options like traditional Windows Clustering or a VMware High Availability architecture remain valid, but I'm interested in any approaches or experiences used to date that have worked.

 

Thanks,

Steve.

Discussion (8)

Yes, database mirroring within cloud infrastructure is possible. As you point out, the use of the virtual IP address (VIP) is in most cases not feasible. This is because cloud network management and assignment rules generally do not tolerate IP addresses being moved between instances outside of the cloud provider's own management facilities.

Having said that, third-party load balancers offer a solution in the form of virtual appliances, available in most cloud marketplaces under a Bring-Your-Own-License (BYOL) model, for example F5 LTM Virtual Edition. With these appliances there are usually two methods available to control network traffic flow.

The first option calls the load balancer's API from ^ZMIRROR during failover to instruct it that a particular server is now the primary mirror member. The API methods range from CLI-type scripting to REST API integration.
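To make the first option concrete, here is a minimal sketch of what the load balancer notification might look like inside a ^ZMIRROR routine. It assumes the NotifyBecomePrimary entry point (check the ^ZMIRROR documentation for your Caché version) and a purely hypothetical REST endpoint (lb.example.com, /mgmt/set-primary) on the load balancer; your appliance's real API will differ.

    NotifyBecomePrimary() PUBLIC {
        ; Assumed ^ZMIRROR entry point, called when this instance becomes primary.
        ; Tell a hypothetical load balancer REST endpoint to send traffic here.
        Set req = ##class(%Net.HttpRequest).%New()
        Set req.Server = "lb.example.com"      ; LB management address (placeholder)
        Set req.Port = 80                      ; production would use HTTPS plus an SSL configuration
        Do req.SetHeader("Content-Type","application/json")
        Do req.EntityBody.Write("{""primary"": """_$SYSTEM.INetInfo.LocalHostName()_"""}")
        Set sc = req.Post("/mgmt/set-primary") ; hypothetical API path on the appliance
        If $System.Status.IsError(sc) {
            ; Log or alert here as appropriate for your site
        }
        Quit
    }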

The second option uses load balancer polling to determine which mirror member is primary. This involves creating a simple CSP page or listening socket that responds with whether a given server in the load-balanced pool is the primary mirror member.

The second option is more portable and load balancer agnostic, since it doesn't rely on specific syntax or integration methods from a given load balancer vendor or model. The limitation, however, is the frequency of polling. In most cases the polling interval can be as low as a few seconds, which is acceptable in most scenarios.
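For the polling approach, the status endpoint only needs to report whether the local instance is currently primary. Below is a minimal sketch of such a CSP page; the class name Mirror.Status is made up, and it assumes $SYSTEM.Mirror.IsPrimary() is available to test the local member's role. The load balancer's HTTP health monitor would poll this page on every pool member and send traffic only to the one answering PRIMARY.

    Class Mirror.Status Extends %CSP.Page
    {

    ClassMethod OnPage() As %Status
    {
        ; Return a simple text marker the load balancer monitor can match on.
        If $SYSTEM.Mirror.IsPrimary() {
            Write "PRIMARY"
        }
        Else {
            Write "NOT PRIMARY"
        }
        Quit $$$OK
    }

    }

Some monitors match on the response body, others only on the HTTP status code; in the latter case the page could instead return an error status on the non-primary members.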

I will soon be posting a longer article here on the Community detailing some examples using F5 LTM VE and providing a sample CSP status page and REST API integration to cover both options mentioned above. I will also be presenting a session during our upcoming Global Summit.

Hi All,

Last year we deployed a cloud-based regional MIS for the Krasnoyarsk Region of Russia. It was based on the O7 National Cloud Platform, which runs VMware vSphere inside. One of the cloud components was a virtual router, which allowed us to do NAT/PAT between the external and internal networks. So the mirroring VIP was maintained on the internal network and permanently mapped to an external (fixed) IP, with no need to remap it when the mirror nodes exchange roles.

This project was a real challenge for us, not only due to its large scale (7,000 named and 1,800 concurrent users, > 600 GB database), but mostly due to the exciting technology mix we got to try and use: virtualization, ECP, mirroring, application-side load balancing and fault tolerance, etc.

Hi Alexey,

Thank you for the post on your deployment. I'm very interested to understand more about how a virtual router helped in your deployment. If I'm understanding correctly, because VMware vSphere was used and the network rules allowed it, the actual VIP within database mirroring was used as normal, meaning Cache' was able to remove/assign the VIP to whichever node was the primary mirror member.

As a side note, with ECP clients in the mix the VIP is not actually a requirement, because ECP clients are "mirror-aware", unless some portion of the application needs to access the database server directly.

I'm curious to learn more about how you used the virtual router and which components were NAT/PAT'd to and from. For example, did the vRouter sit between a single external address and an internal load balancer, or a server pool of ECP clients or web servers?

It's great to hear alternative solutions. I look forward to hearing back about your deployment.

Kind regards,
Mark B-

ECP clients are "mirror-aware", meaning that when you create remote databases on a given ECP client, they are marked as "mirrored". When the ECP client connects to either mirror member, it will be redirected to whichever is the active/primary mirror member. It will also reconnect to the new primary member during failover. Our documentation has good detail about this, available here:

http://docs.intersystems.com/cache20152/csp/docbook/DocBook.UI.Page.cls?...

Specifically, in the Notes (1):

ECP application servers do not use the VIP and will connect to any failover member or promoted DR member that becomes primary, so the VIP is used only for users' direct connections to the primary, if any.

Hi Mark,

You are right: in our setup Cache' is able to remove/assign the VIP to whichever node is the primary mirror member. All production traffic goes through ECP, so the mirror's VIP is used only to simplify some administrative tasks (just to identify the primary node more quickly).

The virtual router is a standard component of vCloud Director, the so-called Edge Gateway. It sits between a single external address and an internal load balancer implemented as a pair of Linux-based VMs, each of which runs haproxy for balancing and ucarp for VIP/failover. The load balancer distributes incoming client connections among several ECP application servers (Linux-based VMs as well).

That's how it works, in short. If somebody is interested in a more detailed description, I could write more about it.

Regards,
Alex