Article · Dec 10, 2024

Getting Started Using Istio Service Mesh with Mirrored IRIS Environment in Kubernetes

The Istio Service Mesh is commonly used to monitor communication between the services in an application. The "battle-tested" sidecar mode is its most common implementation: Istio adds a sidecar container to every pod in each namespace that has Istio sidecar injection enabled.

It's quite easy to get started with: put the istioctl executable in your PATH, then label your namespace so that Istio activates sidecar injection there.

>> kubectl get po -n iris
NAME                                              READY   STATUS    RESTARTS        AGE
intersystems-iris-operator-amd-8588f64559-xqsfz   1/1     Running   0               52s
>> istioctl install
This will install the Istio 1.22.2 "default" profile (with components: Istio core, Istiod, and Ingress gateways) into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Made this installation the default for injection and validation.
>> kubectl label namespace iris istio-injection=enabled
namespace/iris labeled
>> kubectl delete po intersystems-iris-operator-amd-8588f64559-xqsfz -n iris
pod "intersystems-iris-operator-amd-8588f64559-xqsfz" deleted
>> kubectl get po -n iris
NAME                                              READY   STATUS    RESTARTS     AGE
intersystems-iris-operator-amd-8588f64559-c4gfh   2/2     Running   1 (8s ago)   12s

For example, if our pods previously showed READY 1/1, you will now see 2/2, the second container being the Istio sidecar (note that pods that were already running before the namespace was labeled must be restarted before they receive the sidecar).
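Instead of deleting pods by hand, already-running workloads can be recycled with a rollout restart; a sketch, where the Deployment name is inferred from the pod names in the transcripts above and may differ in your cluster:

```shell
# Restart the pods managed by a Deployment so the injector can add the sidecar
kubectl rollout restart deployment/intersystems-iris-operator-amd -n iris

# Confirm the sidecar was injected (READY should now show 2/2)
kubectl get po -n iris
```

IKO-managed data nodes are StatefulSets, so the equivalent there would be `kubectl rollout restart statefulset/<name> -n iris`.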
See the Open Exchange app for the GitHub repo, and find irisSimple.yaml to understand this deployment.

>> kubectl get po -n iris
NAME                                              READY   STATUS    RESTARTS        AGE
intersystems-iris-operator-amd-8588f64559-c4gfh   2/2     Running   1 (9m10s ago)   9m14s
iris-data-0                                       2/2     Running   0               64s
iris-webgateway-0                                 2/2     Running   0               28s

Similarly, if we had a Web Gateway sidecar, we would see the data node pod contain three containers (IRIS, Web Gateway, and the Istio proxy):

See the Open Exchange app for the GitHub repo, and find irisSimpleSideCar.yaml to understand this deployment.

>> kubectl get po -n iris
NAME                                              READY   STATUS    RESTARTS      AGE
intersystems-iris-operator-amd-8588f64559-c4gfh   2/2     Running   1 (11m ago)   11m
iris-data-0                                       3/3     Running   0             35s
iris-webgateway-0                                 2/2     Running   0             2m43s

The idea is that Istio will receive and forward all communication without needing user intervention. The problem with such black boxes, is that when something goes wrong it can be difficult to understand why.

Let's set up a mirrored architecture and see what happens.

The first thing you will notice is that iris-data-0-1 takes an extremely long time to start up; in my tests, about 65 minutes. This is because it is attempting, and failing, to connect to the arbiter and the other data node. There is a timer, out of the scope of this article, that effectively says "after one hour, come up with or without the mirror configured".

See the Open Exchange app for the GitHub repo, and find irisMirrorSideCar.yaml to understand this deployment.

>> kubectl get po -w -n iris
NAME                                              READY   STATUS    RESTARTS       AGE
intersystems-iris-operator-amd-8588f64559-c4gfh   2/2     Running   1 (117m ago)   117m
iris-arbiter-0                                    2/2     Running   0              61m
iris-data-0-0                                     3/3     Running   0              61m
iris-data-0-1                                     2/3     Running   0              60m

Eventually (a bit more than 60 minutes later) we get the following:

>> kubectl get po -n iris
NAME                                              READY   STATUS    RESTARTS        AGE
intersystems-iris-operator-amd-8588f64559-c4gfh   2/2     Running   1 (3h26m ago)   3h26m
iris-arbiter-0                                    2/2     Running   0               150m
iris-data-0-0                                     3/3     Running   0               149m
iris-data-0-1                                     3/3     Running   0               149m
iris-webgateway-0                                 2/2     Running   0               87m

Looking into the logs we see:

2 [Utility.Event] Error creating MirrorSet 'irismirror1' member 'backup': ERROR #2087: Mirror member IRIS-DATA-0-0 is unreachable with IRIS-DATA-0-0.IRIS-SVC.DEFAULT.SVC.CLUSTER.LOCAL,1972

Istio's automatic service and endpoint detection has failed to relay communications between the arbiter, backup, and primary.
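Before changing anything, it helps to see how the sidecar itself is treating this traffic; a sketch, using the pod names from the transcripts above:

```shell
# Logs from the Envoy sidecar in the stuck backup pod
kubectl logs iris-data-0-1 -c istio-proxy -n iris

# How the sidecar has classified the mirror port (clusters, TLS settings)
istioctl proxy-config cluster iris-data-0-1 -n iris | grep 1972
```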

By default, Istio assumes that mTLS is being used even when it is not (at least in this situation). To get around this, we can explicitly tell Istio not to use mTLS. We do this as follows:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: disable-mtls
  namespace: default
spec:
  mtls:
    mode: DISABLE
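Note that a PeerAuthentication policy applies to the namespace it lives in. If your IRIS workloads run elsewhere (the transcripts above use the iris namespace), a variant scoped there, optionally narrowed to specific pods with a selector, may be what you need; a sketch, where the matchLabels value is an assumption about how your pods happen to be labeled:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: disable-mtls-iris
  namespace: iris
spec:
  # Optional: limit the policy to matching pods instead of the whole namespace
  selector:
    matchLabels:
      app: iris   # assumed label; verify with: kubectl get po -n iris --show-labels
  mtls:
    mode: DISABLE
```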

Once we unblock the communications, we get a properly configured mirror.

And finally, we can properly harness the power of Istio, for example by viewing its dashboards.
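One way to do that, assuming the corresponding addons (Kiali, Grafana, Jaeger) have been installed in the cluster:

```shell
# Open the Kiali service-mesh dashboard in a browser
istioctl dashboard kiali

# The other addons work the same way
istioctl dashboard grafana
istioctl dashboard jaeger
```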

Istio has many Custom Resource Definitions, and PeerAuthentication is not the only one you may want to look into more deeply. VirtualService and DestinationRule also warrant a look.
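For a taste of what those offer, here is a hedged sketch of a DestinationRule; the host name and the specific knobs shown are illustrative assumptions, not something taken from the repo above:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: iris-svc-rules
  namespace: iris
spec:
  host: iris-svc.iris.svc.cluster.local   # assumed service FQDN
  trafficPolicy:
    tls:
      mode: DISABLE          # per-destination alternative to a PeerAuthentication policy
    connectionPool:
      tcp:
        maxConnections: 100  # example traffic-shaping setting
```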

 

If you do want to set up mTLS, I would point you to these three sources:

1) InterSystems Official mTLS documentation

2) Steve Pisani's great step-by-step explainer

3) IKO Docs - TLS Security (note that some connections here are mTLS while some are only TLS)
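If you do go that route, the policy is the mirror image of the one shown earlier; a minimal sketch:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: iris
spec:
  mtls:
    mode: STRICT   # reject plaintext; all workloads must present Istio-issued certificates
```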
