
Autoscaling IRIS Workloads. My adventure with IKO, HPA, and Traffic Cop

This week I demoed a proof of concept of our FMS interface on the traffic cop architecture to my team. We are modernizing an Interoperability production that runs on mirrored Health Connect instances, and we deploy our IRIS workloads on Red Hat OpenShift Container Platform using the InterSystems Kubernetes Operator (IKO). With IKO we can define any number of replicas for the compute stateful set, and each compute pod runs our Interoperability production. We introduced a Horizontal Pod Autoscaler (HPA) to scale up the number of compute pods based on memory or CPU utilization, but IKO scaled them back down to keep the replica count defined in the cluster spec. Worse, when busy compute pods receive a shutdown signal, the messages sitting in their queues are not processed right away.
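For reference, an HPA attached to the compute stateful set looks roughly like the sketch below. The StatefulSet name, replica bounds, and CPU threshold are illustrative placeholders, not our actual values; the point is that the HPA writes to the same replica count that IKO keeps reconciling back to the value in the IrisCluster spec, which is why the two fight each other.

```yaml
# Sketch of an HPA targeting the IKO-created compute stateful set.
# Resource names and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fms-compute-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: fms-compute        # compute stateful set created by IKO
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```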
We are transitioning to a "Traffic Cop" architecture so that we can autoscale our workloads. Instead of deploying the interoperability production on multiple compute pods, we deploy it on a mirrored data pod that acts as the traffic cop. We will create more REST interfaces whose message processing happens on stateless compute pods; these can be deployed without IKO, and no interoperability production runs on the stateless computes.
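As a rough sketch of what that looks like with IKO, the IrisCluster below defines only a mirrored data tier that hosts the production. Field names follow the IrisCluster CRD as I understand it, and the resource name, secret name, and image tag are placeholders; check the IKO documentation for your version before copying anything.

```yaml
# Sketch of an IrisCluster with only a mirrored data tier.
# The interoperability production ("traffic cop") runs here;
# the stateless computes are deployed separately, outside IKO.
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: fms
spec:
  licenseKeySecret:
    name: iris-key-secret          # placeholder secret name
  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.1   # placeholder tag
      mirrored: true
```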
The compute and webgateway containers run as sidecars in the same pod, where the webgateway receives incoming requests and hands them to its paired compute for processing.
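A stateless compute pair can then be a plain Deployment, something like the sketch below. Image names, ports, and labels are placeholders. Because this is an ordinary Deployment with no IKO reconciliation and no production state, an HPA like the one above can target it directly, and scaling down does not strand queued messages.

```yaml
# Sketch of a stateless compute Deployment (no IKO involved).
# Each pod pairs a Web Gateway sidecar with an IRIS compute container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fms-stateless-compute
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fms-stateless-compute
  template:
    metadata:
      labels:
        app: fms-stateless-compute
    spec:
      containers:
        - name: webgateway                 # receives incoming HTTP requests
          image: containers.intersystems.com/intersystems/webgateway:2023.1   # placeholder tag
          ports:
            - containerPort: 80
        - name: compute                    # paired IRIS instance that processes the requests
          image: containers.intersystems.com/intersystems/irishealth:2023.1   # placeholder tag
          ports:
            - containerPort: 52773         # IRIS private web server
```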
Along the way I created a REST service that runs on our stateless compute pods, starting from a Swagger API specification. The InterSystems IRIS API Management tools generated the code for the REST interface, so I did not have to write it by hand.
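The starting point was a spec along these lines, shown here in YAML for readability; the base path, operation, and schema are made-up examples rather than our real FMS interface. A spec like this can be handed to the IRIS API Management tooling (for example the ^%REST routine or the %REST.API class), which, as I recall, expects the Swagger 2.0 document in JSON, and it generates the dispatch class plus implementation stubs to fill in.

```yaml
# Illustrative Swagger 2.0 spec for a single message-posting operation.
swagger: "2.0"
info:
  title: FMS Interface
  version: "1.0"
basePath: /fms
paths:
  /messages:
    post:
      operationId: postMessage
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - in: body
          name: message
          required: true
          schema:
            $ref: "#/definitions/Message"
      responses:
        "200":
          description: Message accepted
definitions:
  Message:
    type: object
    properties:
      id:
        type: string
      payload:
        type: string
```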
