Multi-Architecture Kubernetes Cluster and nodeAffinity
Introduction
One of the issues with running a Kubernetes cluster on a group of Raspberry Pi single-board computers is that you are limited to container images built for the arm64 architecture. As anyone who has spent time with Docker or Kubernetes knows, a large percentage of the images available on hub.docker.com are built for the amd64 architecture only, with no arm64 version. While you could always rebuild the image from the Dockerfile (if it is available), that isn't always possible, and builds that do run sometimes fail, leaving you spending more time debugging the image build than using it.
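Before assuming an image lacks an arm64 build, it is worth checking its manifest. One way to do this (a sketch; it requires Docker and network access to the registry, and `nginx:latest` here is just an example image) is `docker manifest inspect`, which prints the platforms included in a multi-architecture manifest:

```shell
# List the architectures an image on Docker Hub was built for.
# Requires Docker and network access to the registry.
docker manifest inspect nginx:latest | grep '"architecture"'
```

If `arm64` does not appear in the output, the image will not run on the Raspberry Pi nodes.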
Multi-Architecture Cluster
One way to solve this problem is to create a multi-architecture cluster. I accomplished this by creating three amd64 LXD containers on my virtualization servers and adding them to the cluster as nodes.
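Joining an amd64 node to an existing k3s cluster uses the standard agent install. This is a sketch against a live cluster; `<server-ip>` and `<node-token>` are placeholders, with the token found at /var/lib/rancher/k3s/server/node-token on an existing server node:

```shell
# Sketch: join an amd64 node to an existing k3s cluster.
# <server-ip> and <node-token> are placeholders for your environment.
curl -sfL https://get.k3s.io | \
    K3S_URL=https://<server-ip>:6443 \
    K3S_TOKEN=<node-token> \
    sh -
```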
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-1-rpi4 Ready control-plane,etcd,master 75d v1.22.5+k3s1 192.168.7.51 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-2-lxc Ready control-plane,etcd,master 63d v1.22.5+k3s1 192.168.7.52 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.5.8-k3s1
node-3-lxc Ready control-plane,etcd,master 63d v1.22.5+k3s1 192.168.7.53 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.5.8-k3s1
node-4-lxc Ready <none> 75d v1.22.5+k3s1 192.168.7.54 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-amd64 containerd://1.5.8-k3s1
node-5-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.55 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-6-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.56 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-7-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.57 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
Identifying Node Architecture
Having a multi-architecture cluster is only half of the solution, though. We need a way to ensure that arm64 images run on arm64 nodes, and amd64 images on amd64 nodes. First, we have to be able to identify the architecture of each node. Luckily, Kubernetes adds the architecture as a label on each node.
$ kubectl get node node-1-rpi4 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-1-rpi4 Ready control-plane,etcd,master 23h v1.21.5+k3s2 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=node-1-rpi4,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
-------------------------------------------
$ kubectl get node node-2-lxc --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-2-lxc Ready control-plane,etcd,master 23h v1.21.5+k3s2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2-lxc,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
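The full label list is verbose. A more compact view (using kubectl's standard `-L`/`--label-columns` option, run against a live cluster) shows just the architecture label as an extra column for every node:

```shell
# List all nodes with their kubernetes.io/arch label as a column.
kubectl get nodes -L kubernetes.io/arch
```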
Node Affinity
The final piece of the puzzle is a way to use the architecture label to direct the scheduling of pods. This is done with a configuration option known as 'nodeAffinity'. In the deployment manifest, we can add a section that identifies the type of node we want the container images deployed to.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
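For a single required label match like this, the plain `nodeSelector` field is an equivalent, more compact alternative (a sketch; it goes in the same place in the pod spec):

```yaml
# Shorthand equivalent to a required nodeAffinity rule on one label:
nodeSelector:
  kubernetes.io/arch: amd64
```

The longer nodeAffinity form becomes necessary when you need operators like In with multiple values, or preferred rather than required rules.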
Here's an example of a complete deployment manifest, showing where the 'nodeAffinity' section is placed.
$ cat website-wiki_deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-wiki
spec:
  selector:
    matchLabels:
      app: website-wiki
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: website-wiki
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - name: website-wiki
        image: m0wer/tiddlywiki
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        - name: TZ
          value: "America/Toronto"
        - name: USERNAME
          value: "radar231"
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: website-wiki-pass
              key: WIKI_PASSWD
        ports:
        - containerPort: 8080
          name: "website-wiki"
        volumeMounts:
        - name: website-wiki
          mountPath: "/var/lib/tiddlywiki"
      volumes:
      - name: website-wiki
        persistentVolumeClaim:
          claimName: website-wiki-pvc
# EOF
Conclusion
While I only make use of the architecture label, 'nodeAffinity' can be used with any node label. Custom labels can be created as well, and used in exactly the same way.
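As a hypothetical example (the `disktype=ssd` label here is made up for illustration), a custom label can be added with `kubectl label` and then referenced by the same affinity mechanism, using `disktype` as the key:

```shell
# Add a hypothetical custom label to one of the cluster nodes...
kubectl label node node-4-lxc disktype=ssd

# ...and confirm which nodes carry it.
kubectl get nodes -l disktype=ssd
```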
Created: 2021-06-08 02:58
Last update: 2022-03-21 12:01