Murray, thank you for the series of articles.

I have a couple of questions.

1) The documentation (2015.1) states that Rdratio is the ratio of physical block reads to logical block reads, yet the mgstat log shows Rdratio values >> 1 (usually 1000 and more). Shouldn't the definition be reversed?
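For illustration only, here is a quick sanity check of the two possible definitions with made-up numbers (not taken from any real mgstat log):

```
# Hypothetical counters for one mgstat interval -- illustrative values only.
logical_block_reads = 2_000_000   # logical (buffer pool) block reads
physical_block_reads = 1_500      # physical (disk) block reads

# As documented (physical / logical): each logical read needs at most one
# physical read, so this ratio can never exceed 1.
print(physical_block_reads / logical_block_reads)   # 0.00075

# Reversed (logical / physical): easily reaches the >> 1 values seen in mgstat.
print(logical_block_reads / physical_block_reads)   # ~1333
```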

2) You wrote that:
If you do see high PhyRds on your system there are a couple of strategies you can consider:
...
- Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise impact on interactive users and to offload system resource use such as CPU and IOPS.

I have heard this advice many times, but how does one return the report results back to the primary member? ECP mounting of a remote database that resides on the primary member is prohibited on the backup member, and vice versa. Or do these restrictions not apply to asynchronous members (I have never worked with them yet)?
 

Hi Mark,

You are right: in our setup Caché is able to remove/assign the VIP to whichever node is the primary mirror member. All production traffic goes through ECP, so the mirror VIP is used only to simplify some administrative tasks (just to identify the primary node more quickly).

The virtual router is a standard component of vCloud Director, the so-called Edge Gateway. It sits between a single external address and an internal load balancer implemented as a pair of Linux-based VMs, each running HAProxy for balancing and ucarp for VIP/failover. The load balancer distributes incoming client connections among several ECP application servers (also Linux-based VMs), roughly as in the sketch below.
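A minimal sketch of the HAProxy side, assuming client connections arrive on the default superserver port 1972; the names and addresses are hypothetical, not our actual configuration:

```
# haproxy.cfg fragment (illustrative only)
frontend ecp_clients
    mode tcp
    bind 10.0.0.10:1972              # VIP held by ucarp on the active LB node
    default_backend app_servers

backend app_servers
    mode tcp
    balance leastconn                # spread long-lived connections evenly
    server app1 10.0.1.11:1972 check
    server app2 10.0.1.12:1972 check
    server app3 10.0.1.13:1972 check
```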

That's how it works, in short. If anybody is interested in a more detailed description, I could write more about it.

Regards,
Alex

Hi All,

Last year we deployed a cloud-based regional MIS for the Krasnoyarsky Region of Russia. It was based on the O7 National Cloud Platform, which had VMware vSphere inside. One of the cloud components was a virtual router that allowed us to do NAT/PAT between the external and internal networks. So the mirroring VIP was maintained on the internal network and permanently mapped to a fixed external IP, with no need to remap it when the mirror nodes exchanged roles.
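The actual mapping was configured in the Edge Gateway, but conceptually it is just a static one-to-one NAT rule. For illustration only, with hypothetical addresses, a Linux iptables equivalent would look like this:

```
# Illustration only -- the real rule lives in the vCloud Director Edge Gateway.
# The fixed external address 203.0.113.10 is permanently forwarded to the
# internal mirror VIP 10.0.0.50, so nothing changes when the mirror members
# swap roles.
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.0.0.50
iptables -t nat -A POSTROUTING -s 10.0.0.50    -j SNAT --to-source   203.0.113.10
```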

This project was a real challenge for us, not only due to its large scale (7000 named and 1800 concurrent users, a > 600 GB database), but mostly due to the exciting mix of technologies we got to try and use: virtualization, ECP, Mirroring, application-side load balancing and fault tolerance, etc.