Article
Apr 25, 2023 · 12 min read

Configuring Mirror in Docker

A common need for our customers is to configure both HealthShare HealthConnect and IRIS in high availability mode.

It's common for other integration engines on the market to be advertised as having "high availability" configurations, but that's not really true. In general, these solutions rely on external databases, so if those databases are not themselves configured for high availability, a database crash or a lost connection renders the entire integration tool unusable.

In the case of InterSystems solutions, this problem does not exist, because the database is part of, and indeed the core of, the tools themselves. So how has InterSystems solved the problem of high availability? With abstruse configurations that could drag us into a spiral of alienation and madness? NO! At InterSystems we have listened to your requests (as we always try to do ;) ) and we have made the mirroring functionality available to all our users and developers.

Mirroring

How does the Mirror work? The concept itself is very simple. As you already know, both IRIS and HealthShare work with a journaling system that records all update operations performed on the databases of each instance. This journaling system is what later helps us recover an instance without data loss after a crash. Well, these journal files are sent between the instances configured in the mirror, keeping them permanently up to date.

Architecture

Let's briefly explain what the architecture of a system configured in Mirror would look like:

  • Two instances configured in failover mode:
    • Active node: receives all regular read/write operations.
    • Passive node: in read mode, it synchronously receives any change produced on the active node.
  • 0-14 asynchronous instances: as many asynchronous instances as you want to use, of two types:
    • DR async (Disaster Recovery): nodes in read mode that are not part of the failover, although they can be manually promoted to failover members and, from there, to primary in the event that both failover nodes fail. Their data is updated asynchronously, so its freshness is not guaranteed.
    • Reporting asyncs: nodes updated asynchronously for use in BI tasks or data mining. They cannot be promoted to failover members, since writes can be performed on their data.
  • ISCAgent: installed on every server that hosts an instance, it is in charge of monitoring the status of the instances on that server. It is also a channel of communication between the mirror members, in addition to their direct communication.
  • Arbiter: an ISCAgent installed independently of the servers that make up the mirror. It increases the safety and control of failovers within the mirror by monitoring the installed ISCAgents and the IRIS/HealthShare instances. Its installation is not mandatory.

This is how a mirror formed by a failover pair with only two nodes operates:

In an InterSystems IRIS mirror, when the primary becomes unavailable, the mirror fails over to the backup.

A word of warning

The project associated with this article does not include an active license that would allow you to configure the mirror. If you want to try it, send me an email directly or add a comment at the end of the article and I will get in touch with you.

Deployment in Docker

For this article, we are going to set up a small project in Docker that lets us deploy two failover instances with an Arbiter. By default, the IRIS images available for Docker already have the ISCAgent installed and configured, so we can skip that step. You should open the project associated with this article in Visual Studio Code, since it will make it easier to work with the server files later on.

Let's see what form our docker-compose.yml would have:

version: '3.3'
services:
  arbiter:
    container_name: arbiter
    hostname: arbiter
    image: containers.intersystems.com/intersystems/arbiter:2022.1.0.209.0
    init: true
    command:
      - /usr/local/etc/irissys/startISCAgent.sh 2188
  mirrorA:
    image: containers.intersystems.com/intersystems/iris:2022.1.0.209.0
    container_name: mirrorA
    hostname: mirrorA
    depends_on:
      - arbiter
    ports:
      - "52775:52773"
    volumes:
      - ./sharedA:/shared
      - ./install:/install
      - ./management:/management
    command:
      --check-caps false
      --key /install/iris.key
      -a /install/installer.sh
    environment:
      - ISC_DATA_DIRECTORY=/shared/durable
  mirrorB:
    image: containers.intersystems.com/intersystems/iris:2022.1.0.209.0
    container_name: mirrorB
    hostname: mirrorB
    depends_on:
      - arbiter
      - mirrorA
    ports:
      - "52776:52773"
    volumes:
      - ./sharedB:/shared
      - ./install:/install
      - ./management:/management
    command:
      --check-caps false
      --key /install/iris.key
      -a /install/installer.sh
    environment:
      - ISC_DATA_DIRECTORY=/shared/durable

We can see that we have defined 3 containers:

  • Arbiter: corresponds to the ISCAgent (even though the image is called arbiter) that will be deployed to monitor the IRIS instances that will form the mirror failover. When the container starts, it executes a shell script that launches the ISCAgent listening on port 2188 of the container.
  • mirrorA: container in which the IRIS v2022.1.0.209 image will be deployed and which we will later configure as the primary failover node.
  • mirrorB: container in which the IRIS v2022.1.0.209 image will be deployed and which we will later configure as the secondary failover node.

When we execute the docker-compose up -d command, the defined containers will be deployed to our Docker environment, and it should look like this in Docker Desktop (if we are running it from Windows).
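If you prefer to check from the command line instead of Docker Desktop, something like the following should do (docker ps and its --format flag are standard Docker CLI):

# Deploy the three containers defined in docker-compose.yml in detached mode
docker-compose up -d

# Check that arbiter, mirrorA and mirrorB are running, with their port mappings
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"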

Mirror configuration

With our containers deployed, we can access the instances that we are going to configure in the mirror. The first one listens on port 52775 (mirrorA) and the second on port 52776 (mirrorB), so their management portals are available at http://localhost:52775/csp/sys/UtilHome.csp and http://localhost:52776/csp/sys/UtilHome.csp. The user and password are superuser / SYS.

Because the instances are deployed in Docker, we have two options for configuring the addresses of our servers. The first is to use the names of our containers directly in the configuration (which is the easiest way); the second is to look up the IPs that Docker has assigned to each container (by opening a console in the container and executing ifconfig, which returns the assigned IP). For clarity, we will use the names we have given to each container as its address within Docker.
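If you do want the real container IPs, there is no need to open a console inside each container; a quick alternative from the host, using the standard docker inspect templates:

# Print the IP address that Docker has assigned to each container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' arbiter mirrorA mirrorB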

First we will configure the instance that we will use as the active node of the failover; in our case, the one we have called mirrorA.

The first step is to enable the mirroring service, so we access the mirror menu from the management portal: System Administration --> Configuration --> Mirror Settings --> Enable Mirror Service, and check the Service Enabled box:
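As a side note, the same can be done from a terminal session: the mirroring service is called %Service_Mirror and can be enabled through the Security.Services class in the %SYS namespace. A minimal sketch (check the class reference of your version for the exact properties):

# Open an IRIS terminal inside the mirrorA container
docker exec -it mirrorA iris session IRIS -U %SYS

%SYS> Do ##class(Security.Services).Get("%Service_Mirror",.props)
%SYS> Set props("Enabled") = 1
%SYS> Do ##class(Security.Services).Modify("%Service_Mirror",.props)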

With the service enabled we can start configuring our active node. After enabling the service, you will see that new options have appeared in the Mirror menu:

In this case, since we do not have any mirror configured yet, we must create a new one with the Create Mirror option. When we access this option, the management portal opens a new window from which we can configure our mirror:

Let's take a closer look at each of the options:

  • Mirror Name: the name with which we will identify our mirror. For our example we will call it MIRRORSET.
  • Require SSL/TLS: for our example we will not configure a connection using SSL/TLS, although in production environments it would be more than advisable, so that the journal files are not shared between the instances without any kind of encryption. If you are interested in configuring it, you have all the necessary information in the documentation.
  • Use Arbiter: this option is not mandatory, but it is highly recommended, since it adds a layer of safety to our mirror configuration. For our example we will leave it checked and indicate the address where our Arbiter is running; in our case, the container name arbiter.
  • Use Virtual IP: in Linux/Unix environments this option is very interesting, since it allows us to configure a virtual IP for our active node that will be managed by the mirror. This virtual IP must belong to the same subnet as the failover nodes. Its operation is very simple: if the active node fails, the mirror automatically configures the virtual IP on the server hosting the passive node about to be promoted. In this way, the promotion of the passive node to active is completely transparent to users, since they remain connected to the same IP, even though it is now configured on a different server. If you want to know more about the virtual IP, you can review the documentation.

The rest of the configuration can be left as it is. On the right side of the screen we will see the information related to this node in the mirror:

  • Mirror Member Name: name of this mirror member; by default it takes the server name together with the instance name.
  • Superserver Address: superserver IP address of this node; in our case, mirrorA.
  • Agent Port: port on which the ISCAgent of this node listens. By default, 2188.

Once the necessary fields are configured, we can save the mirror. We can check the resulting configuration from the mirror monitor (System Operation --> Mirror Monitor).

Perfect, here we have our newly configured mirror. As you can see, only the active node that we have just created appears. Very good, let's now add our passive node to the failover. We access the mirrorB management portal and open the Mirror Settings menu. As we did for the mirrorA instance, we must enable the Mirror service. We repeat the operation, and as soon as the menu options are refreshed we choose Join as Failover.

Here we have the mirror connection screen. Let's briefly explain what each of the fields means:

  • Mirror Name: the name we gave the mirror when we created it; in our example, MIRRORSET.
  • Agent Address on Other System: IP of the server where the ISCAgent of the active node is deployed; for us it will be mirrorA.
  • Agent Port: listening port of the ISCAgent on the server where we created the mirror. By default, 2188.
  • InterSystems IRIS Instance Name: the name of the IRIS instance on the active node. In this case it coincides with that of the passive node, IRIS.

After saving the mirror data, we will be given the option to define the information for the passive node that we are configuring. Let's take a look at the fields we can configure for the passive node:

  • Mirror Member Name: name that the passive node will take in the mirror. By default it is formed from the server name and the instance name.
  • Superserver Address: superserver IP address of our passive node; in this case, mirrorB.
  • Agent Port: listening port of the ISCAgent installed on the server of the passive node we are configuring. By default, 2188.
  • SSL/TLS Requirement: not configured in this example, as we are not using SSL/TLS.
  • Mirror Private Address: IP address of the passive node. As we have seen, when using Docker we can use the container name mirrorB.
  • Agent Address: IP address of the server where the ISCAgent is installed. Same as before, mirrorB.

We save the configuration as indicated and return to the mirror monitor to verify that everything is configured correctly. We can view the monitor on both the active node, mirrorA, and the passive one, mirrorB. Let's see the differences between the two instances.
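By the way, the mirror status can also be consulted from a terminal: the ^MIRROR routine in the %SYS namespace presents a Mirror Management menu that includes a status display. For example, on the active node:

# Open an IRIS terminal inside the mirrorA container
docker exec -it mirrorA iris session IRIS -U %SYS

%SYS> Do ^MIRROR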

Mirror monitor on active node mirrorA:

Mirror monitor on passive node mirrorB:

As you can see, the information shown is similar; basically, the order of the failover members changes. The options are also different; let's look at some of them:

  • Active node mirrorA:
    • Set No Failover: prevents failover from being executed in the event of a stoppage of any of the instances that form part of the mirror.
    • Demote other member: removes the other failover member (in this case mirrorB) from the mirror configuration.
  • Passive node mirrorB:
    • Stop Mirror On This Member: stops mirror synchronization on the passive failover node.
    • Demote To DR Member: demotes this node from the failover, with its real-time synchronization, to Disaster Recovery mode with asynchronous updates.

Perfect, we already have our nodes configured; now let's take the final step in our configuration. We have to decide which databases will become part of the mirror and configure them on both nodes. If you look at the README.md of the Open Exchange project associated with this article, you will see that it configures and deploys two applications that we usually use for training. These applications are deployed automatically when the Docker containers start, and their NAMESPACES and databases are created by default.

The first application is COMPANY, which allows us to save company records, and the second is PHONEBOOK, which allows us to add personal contacts related to the registered companies, as well as customers.

Let's add a company:

And now let's create a personal contact for the previous company:

The company data will be recorded in the COMPANY database and the contact data in PERSONAL; both databases are mapped so that they can be accessed from the PHONEBOOK namespace. If we check the tables on both nodes, we will see that in mirrorA we have the company and contact data, but in mirrorB there is still nothing, which is only logical.

Companies registered in mirrorA:

Alright, let's proceed to configure the databases in our mirror. To do this, from our active node (mirrorA), we access the local database administration screen (System Administration --> Configuration --> System Configuration --> Local Databases) and click on the Add to Mirror option. We have to select from the list all the databases that we want to add, and read the message shown on the screen:

Once we add the databases to the mirror from the active node, we have to take a backup of them, or copy the database files (IRIS.DAT), and restore them on the passive node. If you decide to copy the IRIS.DAT files directly, keep in mind that you must first freeze writes to the databases being copied; you can see the necessary commands in the documentation. In our example the freeze is not strictly necessary, since nobody but us is writing to them.
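For reference, this freeze can be driven from the host using the documented Backup.General external freeze API; a minimal sketch (in a real script you would also check the exit codes, as the documentation examples do):

# Freeze writes on the active node before copying the IRIS.DAT files
docker exec mirrorA iris session IRIS -U%SYS "##Class(Backup.General).ExternalFreeze()"

# ... copy the database files here ...

# Thaw the instance so that writes resume
docker exec mirrorA iris session IRIS -U%SYS "##Class(Backup.General).ExternalThaw()"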

Before making this copy of the database files, let's check the status of the mirror from the monitor of the active node:

Let's see the passive node:

As we can see, the passive node informs us that although we have 3 databases configured in the mirror, their configuration has not yet been completed. Let's proceed to copy the databases from the active node to the passive one, and let's not forget that we must dismount the databases on the passive node in order to make the copy. To do this, from the management portal we go to System Operation --> Databases and, entering each one of them, dismount it.

Perfect! Databases dismounted. Let's open the project code associated with the article in Visual Studio Code; there we have the folders containing the IRIS installations: sharedA for mirrorA and sharedB for mirrorB. Let's go to the folders where the COMPANY, CUSTOMER and PERSONAL databases are located (/sharedA/durable/mgr) and copy the IRIS.DAT of each mirrored database to the corresponding directories of mirrorB (/sharedB/durable/mgr).
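From the host, the copy itself is just a few file operations; a sketch assuming the folder layout described above, with one subdirectory per database under durable/mgr:

# Copy each mirrored database file from mirrorA's durable directory to mirrorB's
for db in COMPANY CUSTOMER PERSONAL; do
  cp ./sharedA/durable/mgr/$db/IRIS.DAT ./sharedB/durable/mgr/$db/IRIS.DAT
done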

Once the copy is finished, we mount the mirrorB databases again and check the status of the configured databases from the mirror monitor on mirrorB:

Bingo! Our mirror has recognized the databases; now we just need to activate and update them. To do this, we click on the Activate action and then on Catchup, which appears after activation. Let's see how they end up:

Perfect, our databases are now correctly configured in the mirror. If we query the COMPANY database, we should see the record that we registered from mirrorA earlier:

Obviously our COMPANY database has the record we entered previously in mirrorA; we copied the entire database, after all. Let's now add a new company from mirrorA, which we will call "Another company", and query the COMPANY database table again:

Here we have it. We only have to make sure that the databases configured in the mirror are in read-only mode on the passive node mirrorB:

And there they are, in R (read) mode. Well, we now have our mirror configured and our databases synchronized. If we had productions running, it would not be a problem, since the mirror automatically takes charge of managing them, starting them on the passive node in the event of a failure of the active node.

Thank you very much to all of you who have made it this far! It was a long one, but I hope you find it useful.
