
IKO Plus: Operator Works From Home - IrisCluster Provisioning Across Kubernetes Clusters

IKO Helm Status: WFH

Here is an option for your headspace if you are designing a multi-cluster architecture and the Operator is an FTE on the design.  You can run the Operator from a central Kubernetes cluster (A) and point it at another Kubernetes cluster (B), so that when you apply an IrisCluster to B, the Operator works remotely from A and plans the cluster accordingly on B.  This design keeps some resource heat off the actual workload cluster, spares us some serviceaccounts/RBAC, and leaves only one operator deployment to worry about, so we can concentrate on the IRIS workloads.

IKO woke up and decided against the commute, despite needing to operate a development workload of many IrisClusters at the office that day.  Using the saved windshield time, IKO upgraded its Helm values on a Kubernetes cluster at home, bounced itself, and went for a run.  Once settled back in and inspecting its logs, it could see it had planned many IrisClusters on the office Kubernetes cluster, all at the cost of its own internet and power.

Here is how IKO managed this...

Clusters

Let's provision two kind clusters, ikohome and ikowork.

 
ikokind.sh
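The attached ikokind.sh does the real lifting and isn't reproduced here; a minimal sketch of the idea looks something like this (the cluster names match the article, while the disableDefaultCNI config and the Cilium Helm install are assumptions about how the CNI gets wired in, not the attached script):

#!/usr/bin/env bash
set -euo pipefail

# Cilium's Helm repo, used as the CNI for both clusters.
helm repo add cilium https://helm.cilium.io
helm repo update

for cluster in ikohome ikowork; do
  # Create the cluster without the default CNI so Cilium can take over,
  # and write a dedicated kubeconfig per cluster.
  cat <<EOF | kind create cluster --name "$cluster" --kubeconfig "$cluster.kubeconfig" --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
EOF

  # Install the Cilium CNI into the new cluster.
  helm install cilium cilium/cilium \
    --namespace kube-system --kubeconfig "$cluster.kubeconfig"
done

# Note: for the home operator to reach the work API server from inside a pod,
# the work kubeconfig may need the control-plane container address rather than
# 127.0.0.1 (e.g. "kind get kubeconfig --internal --name ikowork").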

After running the above, you should have two clusters running, loaded with the Cilium CNI and ready for business.

Install IKO at HOME 🏠

First we need to make the home cluster aware of the work cluster by loading its kubeconfig as a Secret; you can get this done with the following.

kubectl create secret generic work-kubeconfig --from-file=config=ikowork.kubeconfig --kubeconfig ikohome.kubeconfig
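A quick optional sanity check that the Secret landed with the expected key before wiring it into the chart (the jsonpath/base64 bit is just one way to peek at it):

kubectl get secret work-kubeconfig --kubeconfig ikohome.kubeconfig -o jsonpath='{.data.config}' | base64 --decode | head -n 5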

Now, we need to make some changes to the IKO chart to WFH.

  • Mount the kubeconfig Secret as a volume in the Operator
  • Point the Operator at that kubeconfig in its arguments (--kubeconfig)

The deployment.yaml is attached below in its entirety, edited right out of the factory; here are the important points called out in the YAML.

Mount

        volumeMounts:
...
        # Mount the work-cluster kubeconfig Secret into the operator container.
        - mountPath: /airgap/.kube
          name: kubeconfig
          readOnly: true
      volumes:
...
      # The Secret created earlier, projected as a single "config" file.
      - name: kubeconfig
        secret:
          secretName: work-kubeconfig
          items:
          - key: config
            path: config

Args

The args to the container, too... I tried this with the KUBECONFIG environment variable first, but after taking a look at the controller code, found out there is an order of precedence to such things, so the explicit --kubeconfig argument it is.

      containers:
      - name: operator
        image: {{ .Values.operator.registry }}/{{ .Values.operator.repository }}:{{ .Values.operator.tag }}
        imagePullPolicy: {{ .Values.imagePullPolicy  }}
        args:
        - run
...
        - --kubeconfig=/airgap/.kube/config
...

deployment.yaml

Chart

Same with the values.yaml; here I disabled the mutating and validating webhooks.

 
values.yaml
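The attached values.yaml isn't reproduced here, but if you want to confirm the webhook objects really drop out of the rendered chart, a helm template plus grep does it (the ./iris-operator chart path and the iko release name are assumptions; point them at wherever your chart lives):

helm template iko ./iris-operator -f values.yaml | grep -i webhookconfiguration || echo "no webhook configurations rendered"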

Deploy the chart @ home and make sure it's running.
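Something like the following, assuming the same local chart path and release name as above (adjust to taste):

helm install iko ./iris-operator -f values.yaml --kubeconfig ikohome.kubeconfig
kubectl get pods --kubeconfig ikohome.kubeconfig

The operator pod should come up at home, with no counterpart at work.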

 

Install CRDS (only) at WORK 🏢

This may be news to you or it may not, but understand that the operator itself installs the CRDs into the cluster, so in order to work from home, the CRDs need to exist in the work cluster (but without the actual operator).

For this we can pull this maneuver:

kubectl get crd irisclusters.intersystems.com --kubeconfig ikohome.kubeconfig -o yaml > ikocrds.yaml
kubectl create -f ikocrds.yaml --kubeconfig ikowork.kubeconfig
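A quick way to confirm the split: the CRD should now exist at work, while the operator deployment only exists at home (the grep is just a loose check, since the deployment name depends on your chart release):

kubectl get crd irisclusters.intersystems.com --kubeconfig ikowork.kubeconfig
kubectl get deploy -A --kubeconfig ikowork.kubeconfig | grep -i operator || echo "no operator at work"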

IKO WFH

Now, let's level-set the state of things:

  • IKO is running at home, not at work
  • CRDs are loaded at work, and only at work

When we apply IrisClusters at work, the operator at home will plan and schedule them from home.
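In practice that round trip looks something like this; iriscluster.yaml stands in for whatever IrisCluster definition you want to run, and iko-iris-operator is an assumed deployment name, so substitute whatever your chart release actually created:

# Apply the IrisCluster against the WORK cluster.
kubectl apply -f iriscluster.yaml --kubeconfig ikowork.kubeconfig

# Watch the resource from the work side...
kubectl get irisclusters.intersystems.com --kubeconfig ikowork.kubeconfig -w

# ...while the operator logs at HOME show it being planned and scheduled.
kubectl logs deploy/iko-iris-operator --kubeconfig ikohome.kubeconfig -f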

Luckily, the whole "burn all .gifs" thing from the '90s got worked out in time for the demo.

 

Operator


IrisCluster




💥
