Hi Ron,

There are many options available for many different deployment scenarios. Specifically for the multi-site VPN, you can use the Azure VPN Gateway. Here is a diagram provided by Microsoft's documentation showing it.

Here is the link to the multi-site VPN details as well.

As for Internet gateways, yes, Azure has that concept, and the load balancers can be internal or external. You control access with network security groups, and you can also use Azure Traffic Manager and Azure DNS services. There are tons of options here, and it's really up to you what and how you want to control and manage the network. Here is a link to Azure's documentation about how to make a load balancer Internet-facing.

The link to the code for some reason wasn't marked as public in the GitHub repository. I'll take care of that now.

Regards,

Mark B-

Hi Matthew,

Thank you for your question. Pricing is tricky and best discussed with your Microsoft representative. When looking at premium storage accounts, you only pay for the provisioned disk tier, not for transactions; however, there are caveats. For example, if you need only 100GB of storage, you will be charged for a P10 disk @ 128GB. A good Microsoft article to help explain the details can be found here.
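To illustrate the provisioned-billing idea, here is a minimal Python sketch; the tier list below is just for illustration and should be confirmed against the current Azure pricing documentation:

```python
# Illustrative only: premium storage bills for the provisioned disk tier,
# not the space you actually use. The tier sizes below are assumptions for
# this sketch; confirm them against the current Azure documentation.
PREMIUM_TIERS_GB = {"P10": 128, "P20": 512, "P30": 1024}

def billed_tier(required_gb):
    """Return the smallest premium tier that can hold the required capacity."""
    for name, size in sorted(PREMIUM_TIERS_GB.items(), key=lambda kv: kv[1]):
        if size >= required_gb:
            return name, size
    raise ValueError("Requested size exceeds the largest tier in this sketch")

tier, provisioned = billed_tier(100)
print(f"Need 100GB -> billed for {tier} ({provisioned}GB provisioned)")
```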

Regards,

Mark B

Setting the TZ environment variable needs to be done in the system-wide profile, such as /etc/profile. This should define it properly for you. I would recommend a restart of Caché after setting it in /etc/profile.

Also, the impact of the TZ environment variable not being set should be reduced (or eliminated) with the current 2016.1+ releases, where we have changed the way this operates.
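As a quick sanity check after the restart, a small Python sketch run from the same login environment Caché inherits can confirm that TZ is actually exported; the time zone value shown is only an example:

```python
import os
import time

# Quick check (hypothetical): confirm TZ is exported in the environment that
# Caché will inherit after restart. "America/Chicago" below is only an example.
tz = os.environ.get("TZ")
if tz:
    print(f"TZ is set to {tz}; local zone reported as {time.tzname}")
else:
    print("TZ is not set -- add e.g. 'export TZ=America/Chicago' to /etc/profile")
```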

Kind regards,
Mark B-

Hi Steve,

There are multiple ways to accomplish this, and it really depends on the security policies of a given organization. You can do as you have outlined in the original post, you can do as Dmitry has suggested, or you can even take it a step further and provide an external-facing DMZ (eDMZ) and an internal DMZ (iDMZ). The eDMZ contains only the load balancer, with firewall rules allowing HTTPS access only to load balance to the web servers in the iDMZ; the iDMZ in turn has firewall rules allowing only TLS connections to the super server ports on the application servers behind all firewalls.

Here is a sample diagram describing the eDMZ/iDMZ/Internal network layout.

So, as you can see, there are many ways this can be done, and the manner in which to provide network security is up to the organization. It's good to point out that InterSystems technologies can support many different network security methodologies, from the simplest to very complex designs, depending on what the application and organization require.

Kind Regards,

Mark B

Hi all, I'd like to offer some input here. Ensemble workloads are traditionally mostly updates when used purely for message ingestion, some transformations, and delivery to one or more outbound interfaces. As a result, expect to see low physical read rates (as reported in ^mgstat or ^GLOSTAT); however, if there are additional workloads such as reporting or applications built alongside the Ensemble productions, they may drive a higher rate of physical reads.

As a general rule to size memory for Ensemble, we use 4GB of RAM for each CPU (physical or virtual) and then use 50-75% of that RAM for global buffers. So on a 4-core system, the recommendation is 16GB of RAM with 8-12GB allocated to the global buffers, which leaves 4-8GB for the OS kernel and Ensemble processes. When using very large memory configurations (>64GB), using the 75% rule rather than only 50% is ideal, because the OS kernel and processes won't need that much memory.
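Just to make the rule of thumb concrete, here is a rough Python sketch of the arithmetic (illustrative only, not a substitute for real capacity planning):

```python
# Rough sizing sketch based on the rule of thumb above (4GB RAM per core,
# 50-75% of that RAM for global buffers). Illustrative only.
def size_memory(cores, buffer_fraction=0.5):
    ram_gb = cores * 4                         # 4GB RAM per physical/virtual core
    global_buffers_gb = ram_gb * buffer_fraction
    remaining_gb = ram_gb - global_buffers_gb  # left for OS kernel and Ensemble processes
    # If using Linux huge pages (2MB), roughly this many pages would be needed
    # to back the global buffers (approximate; shared memory is slightly larger).
    huge_pages_2mb = int(global_buffers_gb * 1024 / 2)
    return ram_gb, global_buffers_gb, remaining_gb, huge_pages_2mb

ram, buffers, rest, pages = size_memory(cores=4, buffer_fraction=0.5)
print(f"{ram}GB RAM, {buffers:.0f}GB global buffers, {rest:.0f}GB remaining, ~{pages} x 2MB huge pages")
```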

One additional note: we highly recommend the use of huge pages (Linux) or large pages (Windows) to provide much more efficient memory management.

Hello,

I cannot name specific customers; however, this is a configuration used with TrakCare and TrakCare Lab deployments (prior to TrakCare Lab Enterprise, which now integrates lab directly as a module into a single TrakCare instance), where TrakCare and TrakCare Lab are each separate failover mirror sets and TrakCare Analytics is defined as a single Reporting Async mirror member that serves as the data source for building and supporting the TrakCare Analytics DeepSee cubes and dashboards in a single instance.

This is our standard architecture for TrakCare based deployments.  I hope this helps.  Please let me know if there are specific questions or concerns with this deployment model.

Kind regards,

Mark B-

Hi Alex,

You are correct that latency is only a major consideration for synchronous (failover) mirror members. For an async member, latency to/from the primary mirror member does not slow down the primary mirror member's processing. As you mentioned, it only affects the delay in the async mirror member being "caught up". Your example is perfectly fine for DR Async, and if the DR Async should fall behind for any reason, it will put itself into "catch-up mode". In all cases this does not impact the primary mirror member's performance.

I'd also like to mention that for DR Async mirror members we use compression to reduce bandwidth requirements, so if sizing a WAN link for DR Async, consider that the bandwidth requirements will be lower due to compression.

As for cascading mirrors, that is not a feature we support today.

Thanks again for your excellent questions.

Kind regards,
Mark B-

Yes. Latency is a major factor when considering geographically splitting synchronous mirrors. You will need to really understand the given application and workload to know how much latency can be tolerated. Some applications can accept latency (to a certain level); others may not.

We do have deployments with the synchronous members located in different locations, separated by about 100 miles with single-digit millisecond latency, so the latency is tolerable in this configuration for this application.

Unfortunately, there is no absolute formula here to determine whether a particular application can leverage that type of deployment strategy. The first thing to consider is monitoring the current journal physical write rate of the application with ^mgstat or ^pButtons during peak workloads. You also need to understand whether ECP is heavily used, because this will have an impact on the number of journal sync calls made for ECP durability guarantees. Usually, looking at the IO rates of the journal volume with iostat (Linux or UNIX) or PERFMON.EXE (Windows) will give you a good indication of the mirror throughput you will need. Using that figure you can work out what the maximum latency should be as a start.

Here is an example:

Say on a given system the journal write rate from pButtons/mgstat is relatively low, at only 10-20 journal writes per second. Let's assume these are full 64KB journal buffer writes, so bandwidth requirements will be in the neighborhood of 1.3 MB/second (or about 10 Mbit/second) as a minimum. I would recommend allocating at least 20 Mbit/second or more to ensure spikes can be handled efficiently. However, when looking at the iostat output you notice the journal volume is doing 200 writes per second, because the application is using ECP clients (application servers).

So with this example, we know that at a minimum, synchronous mirroring will need at least 20Mbps of bandwidth and latency of less than 5 milliseconds. I came to the 5 millisecond requirement by taking 1000 milliseconds (1 second) and dividing by 200 journal IOPS. This gives a maximum latency of 5ms to sustain 200 IOPS. This is by no means the absolute requirement for the application; it is a simple starting point for understanding the requirement scope for WAN connectivity, and the application needs to be thoroughly tested to confirm transaction/processing response times are adequate.
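To make the arithmetic explicit, here is a small Python sketch of the same estimate (again, a starting point only):

```python
# Reproduces the back-of-the-envelope estimate above. A starting point only,
# not an absolute requirement for any given application.
journal_writes_per_sec = 20            # from ^mgstat / pButtons at peak
journal_buffer_kb = 64                 # assuming full 64KB journal buffer writes

bandwidth_mb_per_sec = journal_writes_per_sec * journal_buffer_kb / 1024
bandwidth_mbit_per_sec = bandwidth_mb_per_sec * 8
print(f"~{bandwidth_mb_per_sec:.2f} MB/s (~{bandwidth_mbit_per_sec:.0f} Mbit/s) minimum; "
      f"allocate roughly double for spikes")

journal_iops = 200                     # from iostat, higher here due to ECP sync calls
max_latency_ms = 1000 / journal_iops   # milliseconds available per journal write
print(f"Maximum latency to sustain {journal_iops} journal IOPS: {max_latency_ms:.0f} ms")
```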

I hope this helps.

Regards,

Mark B-

Hi Alexey,

WAN connectivity varies significantly, and many factors play into the requirements and latency. You can get very good (fast and reliable) WAN connectivity; however, distance impacts latency, so you need to be careful in your planning.

As for deciding which mirror to promote... This is one of the reasons we do not recommend automating the promotion of a DR Async member to become primary. You will want to evaluate the state (or reported latency) within the ^MIRROR utility on each DR Async member to determine which one (maybe both?) is current. If they are out of sync with each other, you will need to manually rebuild the "new backup" in the secondary data center based on the newly promoted DR Async member.

Regards,

Mark B-