Run Your InterSystems Solution In Your Kubernetes Environment With Guaranteed Quality of Service

All pods in Kubernetes are assigned a Quality of Service (QoS) class. These are three levels of priority that pods are assigned within a node.

The levels are as follows:

1) Guaranteed: High Priority

2) Burstable: Medium Priority

3) BestEffort: Low Priority

The QoS class is a way of telling the kubelet what your priorities are on a certain node if resources need to be reclaimed. This great GIF below by Anvesh Muppeda illustrates it.

If resources need to be freed, pods with BestEffort QoS are evicted first, then those with Burstable, and finally those with Guaranteed. The idea is that, by evicting the lower-priority pods first, we will hopefully reclaim enough resources on the node that the Guaranteed QoS pods never have to be evicted.

Thus we want our critical applications to run with Guaranteed Quality of Service, and InterSystems pods certainly fall into the category of critical applications.
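
You can check which QoS class Kubernetes has assigned to a running pod by inspecting its status; the pod name below is a placeholder for one of your InterSystems pods:

kubectl get pod <iris-pod-name> -o jsonpath='{.status.qosClass}'
# prints Guaranteed, Burstable, or BestEffort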

See the attached Open Exchange application / GitHub repo for a template showing how you can upgrade your IrisCluster so that all of your InterSystems pods have Guaranteed QoS.

Up to now you may have been deploying by specifying resources in the podTemplate:

podTemplate:
  spec:
    resources:
      requests:
        memory: "8Gi"
        cpu: "2"
      limits:
        memory: "8Gi"
        cpu: "2"

but assuming you are using the following in your IKO values.yaml (this is the default behaviour):

useIrisFsGroup: false 

then IKO also injects an initContainer (and potentially a sidecar) into your pod. The resources above apply only to the main iris container, so the pod as a whole will only be given Burstable QoS.
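
You can list the initContainers that IKO has injected into a pod with kubectl; the pod name is a placeholder:

kubectl get pod <iris-pod-name> -o jsonpath='{.spec.initContainers[*].name}'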

Per the Kubernetes docs on QoS:
For a Pod to be given a QoS class of Guaranteed:

  • Every Container in the Pod must have a memory limit and a memory request.
  • For every Container in the Pod, the memory limit must equal the memory request.
  • Every Container in the Pod must have a CPU limit and a CPU request.
  • For every Container in the Pod, the CPU limit must equal the CPU request.
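
To check this on a running pod, you can dump the resources of every container and initContainer and confirm that requests equal limits (the pod name is a placeholder):

# requests/limits of the initContainers
kubectl get pod <iris-pod-name> -o jsonpath='{range .spec.initContainers[*]}{.name}: {.resources}{"\n"}{end}'
# requests/limits of the main containers
kubectl get pod <iris-pod-name> -o jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'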

Note that "every Container" includes initContainers and sidecars. To specify the resources for the initContainer, you must override it in the podTemplate:

podTemplate:
  spec:
    initContainers:
    # Override IKO's default init container (which chowns the volumes
    # to the irisowner user, UID/GID 51773) to pin its resources.
    - command:
      - sh
      - -c
      - /bin/chown -R 51773:51773 /irissys/*
      image: busybox
      name: iriscluster-init
      resources:
        requests:
          memory: "1Gi"
          cpu: "1"
        limits:
          memory: "1Gi"
          cpu: "1"
      securityContext:
        runAsGroup: 0
        runAsNonRoot: false
        runAsUser: 0
      volumeMounts:
      - mountPath: /irissys/data/
        name: iris-data
      - mountPath: /irissys/wij/
        name: iris-wij
      - mountPath: /irissys/journal1/
        name: iris-journal1
      - mountPath: /irissys/journal2/
        name: iris-journal2
    # Resources for the main iris container: requests equal limits.
    resources:
      requests:
        memory: "8Gi"
        cpu: "2"
      limits:
        memory: "8Gi"
        cpu: "2"
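
After applying the updated definition, you can confirm that the pod is recreated with Guaranteed QoS (the file and pod names are placeholders):

kubectl apply -f <iriscluster-definition>.yaml
kubectl get pod <iris-pod-name> -o jsonpath='{.status.qosClass}'
# expected output: Guaranteed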

See a complete IrisCluster example, including initContainers and sidecars, in the attached Open Exchange application.

Alternatively, you could consider changing IKO's default behaviour in values.yaml to:

useIrisFsGroup: true 

to avoid initContainers in some scenarios, but complications can arise, and useIrisFsGroup really deserves an article of its own. I plan to write about it next.
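
If you do change values.yaml, remember that it configures the operator itself, so you need to redeploy IKO for the setting to take effect, for example via Helm (the release name and chart path are placeholders):

helm upgrade <release-name> <path-to-iko-chart> -f values.yaml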
