
IKO Plus: VIP in Kubernetes on IrisClusters

Power your IrisCluster serviceTemplate with kube-vip

If you're running IRIS in a mirrored IrisCluster for HA in Kubernetes, the question of providing a Mirror VIP (Virtual IP) becomes relevant. Virtual IP offers a way for downstream systems to interact with IRIS using one IP address. Even when a failover happens, downstream systems can reconnect to the same IP address and continue working.

The lead-in above was stolen (gaffled, jacked, pilfered) from techniques shared with the community for VIPs across public clouds with IRIS by @Eduard Lebedyuk ...

Articles: ☁ vip-aws | vip-gcp | vip-azure

This version strives to solve the same challenges for IRIS on Kubernetes when deployed via MAAS or on prem, and possibly (yet to be realized) using cloud mechanics with Managed Kubernetes Services.

 

Distraction

This distraction will highlight kube-vip, where it fits into a Mirrored IrisCluster, and how to enable a "floating IP" for layers 2-4 with the serviceTemplate, or one of your own.  I'll walk through a quick install of the project, apply it to a Mirrored IrisCluster, and attest that a failover of the mirror against the floating VIP is timely and functional.

IP

Snag an available IPv4 address off your network and set it aside for use as the VIP for the IrisCluster (or a range of them).  For this distraction we value the predictability of a single IP address to support the workload.

192.168.1.152

This is the one address to rule them all, and it is in use for the remainder of the article.

Kubernetes Cluster

The Cluster itself is running Canonical Kubernetes on commodity hardware, 3 physical nodes on a flat 192.X network; a home lab in the strictest definition of the term.

Nodes

Ideally you'd do this step through some slick hook that does the work on the node during scheduling to implement the virtual interface/IP.  Hopefully your nodes have some consistency in NIC hardware, making the node prep easy.  However, my cluster above had varying network interfaces, as its purchase spanned multiple Prime Days, so I virtualized all of them by aliasing the active NIC to a vip0 interface.

I ran the following on each node before getting started, to add a virtual NIC on top of the physical interface and ensure it starts at boot.

 
vip0.sh
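
A sketch of what that script can hold, assuming Ubuntu-ish nodes with iproute2 and systemd; the physical NIC name is auto-detected from the default route, and the interface/unit names (vip0, vip0.service) are just the ones used in this distraction:

#!/usr/bin/env bash
# vip0.sh - alias the active NIC with a macvlan interface named vip0
# and persist it with a small systemd unit so it returns at boot.
set -euo pipefail

# discover the NIC carrying the default route (varies per node in my lab)
PHYS_IF=$(ip route show default | awk '/default/ {print $5; exit}')
echo "Using physical interface: ${PHYS_IF}"

# create vip0 on top of it for this boot (ignore if it already exists)
sudo ip link add vip0 link "${PHYS_IF}" type macvlan mode bridge 2>/dev/null || true
sudo ip link set vip0 up

# persist across reboots with a oneshot unit
sudo tee /etc/systemd/system/vip0.service >/dev/null <<EOF
[Unit]
Description=vip0 virtual interface on ${PHYS_IF}
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=-/usr/sbin/ip link add vip0 link ${PHYS_IF} type macvlan mode bridge
ExecStart=/usr/sbin/ip link set vip0 up

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable vip0.service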

You should see the system assign the vip0 interface and tee it up to start on boot.
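
To double check on a node (names match the sketch above):

ip link show vip0
systemctl is-enabled vip0.service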

If your commodity network gear lets you know when something new has arrived on the network, you may get a notification like this on your cell after adding those interfaces.

💫 kube-vip

A description of kube-vip from the ether:

kube-vip provides a virtual IP (VIP) for Kubernetes workloads, giving them a stable, highly available network address that automatically fails over between nodes — enabling load balancer–like or control plane–style redundancy without an external balancer.

The commercial workload use case is prevalent in secure implementations where the IP address space is limited and DNS is a bit tricky, HSCN connectivity in England for instance.  The less important thing to solve, but one facing most folks standing up clusters outside of the public cloud, is basically ALB/NLB-like connectivity to the workloads... I have solved this with Cilium and MetalLB, and have now added kube-vip to my list.

On each node, kube-vip runs as a container, via a Daemonset, that participates in a leader-election process using Kubernetes Lease objects to determine which node owns the virtual IP (VIP). The elected leader binds the VIP directly to a host network interface (for example, creating a virtual interface like eth0:vip0) and advertises it to the surrounding network. In ARP mode, kube-vip periodically sends gratuitous ARP messages so other hosts route traffic for the VIP to that node’s MAC address. When the leader fails or loses its lease, another node’s kube-vip instance immediately assumes leadership, binds the VIP locally, and begins advertising it, enabling failover. This approach effectively makes the VIP “float” across nodes, providing high-availability networking for control-plane endpoints or load-balanced workloads without relying on an external balancer.

Shorter version:

The kube-vip containers hold an election to choose a leader, deciding which node should own the Virtual IP; that node then binds the IP to its interface and advertises it to the network.  This lets the IP address "float" across the nodes, bind only to healthy ones, and keep services reachable via the IP... all using Kubernetes-native magic.

The install was dead simple: no chart immediately needed, though it would be very easy to wrap one up if desired.  Here we will just deploy the manifests that support its install, as specified in the getting started doc of the project.
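
One way to assemble kube-vip.yaml per the project's getting started doc; a sketch, where the version tag and the choice of ARP + services-only mode bound to vip0 are the assumptions for this lab:

curl -sL https://kube-vip.io/manifests/rbac.yaml > kube-vip.yaml   # sa + rbac
echo "---" >> kube-vip.yaml
# let the kube-vip image render its own DaemonSet manifest (ARP / services mode on vip0)
docker run --network host --rm ghcr.io/kube-vip/kube-vip:v0.8.0 \
  manifest daemonset --interface vip0 --inCluster --services --arp --leaderElection \
  >> kube-vip.yaml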

 
kube-vip.yaml
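
Roughly what lands in that file; abbreviated and sketched here, your generated manifest will differ by version, and the ClusterRole/ClusterRoleBinding from rbac.yaml are omitted for brevity:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip
  namespace: kube-system
---
# (ClusterRole + ClusterRoleBinding from rbac.yaml sit here; they grant kube-vip
#  access to services, nodes, and coordination.k8s.io leases)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      labels:
        name: kube-vip-ds
    spec:
      serviceAccountName: kube-vip
      hostNetwork: true
      containers:
        - name: kube-vip
          image: ghcr.io/kube-vip/kube-vip:v0.8.0
          args: ["manager"]
          env:
            - name: vip_interface        # the aliased NIC from the node prep
              value: vip0
            - name: vip_arp              # layer 2: gratuitous ARP announcements
              value: "true"
            - name: svc_enable           # watch Services of type LoadBalancer
              value: "true"
            - name: vip_leaderelection   # Lease-based election for VIP ownership
              value: "true"
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]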

Apply it! sa, rbac, daemonset.
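
Assuming the file above, and that everything lands in kube-system:

kubectl apply -f kube-vip.yaml
kubectl get daemonset,pods -n kube-system -o wide | grep kube-vip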



Bask in the glory of the running pods of the Daemonset (hopefully)

IrisCluster

Nothing special here, just a vanilla mirrorMap of ( primary/failover ).

 
IrisCluster.yaml ( abbreviated )
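
A sketch of the shape; the cluster name, secret names, and image tag below are placeholders for whatever your IKO install uses, and the parts that matter for this distraction are the serviceTemplate and mirrored: true:

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: ikoplus
spec:
  licenseKeySecret:
    name: iris-key-secret              # placeholder: your IRIS license secret
  imagePullSecrets:
    - name: intersystems-pull-secret   # placeholder: registry credentials
  serviceTemplate:                     # the Service kube-vip will front with the VIP
    spec:
      type: LoadBalancer
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2024.1   # placeholder tag
      mirrored: true                   # primary / failover (backup) pair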

Apply it, and make sure it's running...

kubectl apply -f IrisCluster.yaml -n ikoplus

It's alive (and mirroring)!
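
A quick way to convince yourself (resource names will follow whatever you called the cluster):

kubectl get irisclusters,pods,svc -n ikoplus -o wide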

Annotate

This binds the Virtual IP to the Service, and is the trigger for wiring the Service up to the VIP.

Keep in mind we can specify a range of IP addresses too, which lets you get creative with your use case; that approach skips the trigger and just pulls one from a range.  Recall we skipped it in the install of kube-vip, but check the yaml for the commented example.


 

kubectl annotate service nginx-lb kube-vip.io/loadbalancerIPs="192.168.1.152" --overwrite -n ikoplus
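
If kube-vip has taken the hint, the VIP shows up on the Service shortly after:

kubectl get svc -n ikoplus -o wide   # EXTERNAL-IP should read 192.168.1.152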


Attestation

Let's launch a pod that continually polls the SMP URL constructed with the VIP and watch the status codes during failover; then we will send one of the mirror members "casters up" and see how the VIP takes over on the alternate node.

 
podviptest.yaml
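
A sketch of the tester: a curl image looping against the SMP URL built from the VIP; the image tag, port (52773), and path are assumptions, so tweak them to match what your serviceTemplate exposes:

apiVersion: v1
kind: Pod
metadata:
  name: podviptest
  namespace: ikoplus
spec:
  restartPolicy: Never
  containers:
    - name: poll
      image: curlimages/curl:8.8.0
      command: ["/bin/sh", "-c"]
      args:
        - |
          # poll the SMP on the VIP once a second and print the HTTP code
          # (000 means no answer); watch it recover during mirror failover
          while true; do
            code=$(curl -s -o /dev/null -m 2 -w '%{http_code}' \
              http://192.168.1.152:52773/csp/sys/UtilHome.csp)
            echo "$(date +%T) -> ${code}"
            sleep 1
          done

kubectl logs -f podviptest -n ikoplus gives you the scoreboard while you send the primary casters up.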

 🎉