Hi Steve,

There are multiple ways to accomplish this, and the right choice really depends on the security policies of a given organization.  You can do as you have outlined in the original post, you can do as Dmitry has suggested, or you can even take it a step further and provide an external-facing DMZ (eDMZ) and an internal DMZ (iDMZ).  In that model the eDMZ contains only the load balancer, with firewall rules allowing inbound HTTPS only and load balancing only to the web servers in the iDMZ; the iDMZ in turn has firewall rules allowing only TLS connections to the super server ports on the APP servers, which sit behind all firewalls.

Here is a sample diagram describing the eDMZ/iDMZ/Internal network layout.

So, as you can see, there are many ways this can be done, and the manner in which network security is provided is up to the organization.  It's worth pointing out that InterSystems technologies can support many different network security methodologies, from very simple to very complex designs, depending on what the application and organization require.

Kind Regards,

Mark B

Hi all, I'd like to offer some input here.  Ensemble workloads are traditionally mostly updates when used purely for message ingestion, some transformations, and delivery to one or more outbound interfaces.  As a result, expect to see low physical read rates (as reported in ^mgstat or ^GLOSTAT); however, if there are additional workloads such as reporting or applications built alongside the Ensemble productions, they may drive a higher rate of physical reads.

As a general rule, to size memory for Ensemble we use 4GB of RAM for each CPU core (physical or virtual) and then use 50-75% of that RAM for global buffers.  So for a 4-core system, the recommendation is 16GB of RAM with 8-12GB allocated to global buffers, leaving 4-8GB for the OS kernel and Ensemble processes.  When using very large memory configurations (>64GB), using the 75% figure rather than only 50% is ideal, because the OS kernel and processes won't need proportionally as much memory.
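If it helps to see that rule spelled out, here is a minimal ObjectScript sketch of the arithmetic (the routine name and output formatting are just for illustration, not anything shipped with the product):

    MemorySizing(cores) PUBLIC {
        // Rule of thumb: 4GB of RAM per CPU core (physical or virtual)
        Set ramGB = cores * 4
        // Allocate 50-75% of total RAM to global buffers
        Set bufMinGB = ramGB * 0.5
        Set bufMaxGB = ramGB * 0.75
        Write "Total RAM: ", ramGB, "GB", !
        Write "Global buffers: ", bufMinGB, "-", bufMaxGB, "GB", !
        Write "OS kernel and processes: ", (ramGB - bufMaxGB), "-", (ramGB - bufMinGB), "GB", !
    }

Running this with cores=4 produces the same 16GB total and 8-12GB global buffer numbers used in the example above.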

One additional note: we highly recommend the use of huge pages (Linux) or large pages (Windows) to provide much more efficient memory management.

Hello,

I cannot name specific customers; however, this is a configuration used with TrakCare and TrakCare Lab deployments (prior to TrakCare Lab Enterprise, which now integrates lab directly as a module into a single TrakCare instance).  In that configuration, TrakCare and TrakCare Lab are each separate failover mirror sets, and TrakCare Analytics is defined as a single Reporting Async mirror member that serves as the source data to build and support the TrakCare Analytics DeepSee cubes and dashboards in a single instance.

This is our standard architecture for TrakCare based deployments.  I hope this helps.  Please let me know if there are specific questions or concerns with this deployment model.

Kind regards,

Mark B-

Yes, database mirroring within cloud infrastructure is possible.  As you point out, the use of a virtual IP address (VIP) is in most cases not doable.  This is because cloud network management, address assignment, and routing rules do not handle IP addresses that change outside of the cloud management facilities.

Having said that, third-party load balancers offer a solution in the form of virtual appliances available in most cloud marketplaces in a Bring-Your-Own-License (BYOL) model; F5 LTM Virtual Edition is one example.  With these appliances there are usually two methods available to control network traffic flow.

The first option uses an API called from ^ZMIRROR during failover to instruct the load balancer that a particular server is now the primary mirror member.  The API methods range from CLI-type scripting to REST API integration.
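As a rough sketch of what that ^ZMIRROR hook can look like, here is a minimal NotifyBecomePrimary entry point.  The load balancer hostname, SSL configuration name, endpoint, and payload below are hypothetical placeholders, not a real F5 API; substitute the actual calls from your load balancer vendor's documentation:

    NotifyBecomePrimary() PUBLIC {
        // Called by mirroring once this member has become the primary.
        // Notify the load balancer so traffic is directed to this server.
        Set req = ##class(%Net.HttpRequest).%New()
        Set req.Server = "ltm.example.com"       // hypothetical LB address
        Set req.Https = 1
        Set req.Port = 443
        Set req.SSLConfiguration = "LBSSL"       // hypothetical SSL config name
        Do req.SetHeader("Content-Type","application/json")
        // Hypothetical endpoint and payload; use your LB's actual API.
        Do req.EntityBody.Write("{""primary"":"""_$SYSTEM.INetInfo.LocalHostName()_"""}")
        Set sc = req.Post("/api/mirror/primary")
        If 'sc {
            // Log and continue; failover should not block on this call.
            Do ##class(%SYS.System).WriteToConsoleLog("Load balancer notification failed")
        }
        Quit
    }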

The second option uses load balancer polling to determine which mirror member is the primary.  This involves creating a simple CSP page or listening socket that responds with whether a given server in the load-balanced pool is the primary mirror member.

The second option is more portable and load balancer agnostic, since it doesn't rely on specific syntax or integration methods from a given load balancer vendor or model.  The limitation, however, is the polling frequency; in most cases the polling interval can be as short as a few seconds, which is acceptable in most scenarios.
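To make the second option concrete, here is a minimal sketch of such a status page (the class name is just an example I made up).  The load balancer's health monitor polls the page over HTTP(S); a 200 response means this node is the primary mirror member, and a 503 tells the monitor to send traffic elsewhere:

    Class Example.MirrorStatus Extends %CSP.Page
    {

    ClassMethod OnPage() As %Status
    {
        If $SYSTEM.Mirror.IsPrimary() {
            // This instance is the primary failover member
            Set %response.Status = "200 OK"
            Write "PRIMARY"
        }
        Else {
            // Backup or async member: tell the monitor to skip this node
            Set %response.Status = "503 Service Unavailable"
            Write "NOT PRIMARY"
        }
        Quit $$$OK
    }

    }

Because the monitor only inspects the HTTP status code, the same page works unchanged with any load balancer that supports HTTP health checks.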

I will soon be posting a long article here on the Community detailing some examples using F5 LTM VE, including a sample CSP status page and REST API integration to cover both options mentioned above.  I will also be presenting a session on this during our upcoming Global Summit.