Kubernetes Ingress Load-Balancer using HAProxy
References
- https://kubernetes.io/docs/concepts/services-networking/ingress/
- https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
- https://haproxy-ingress.github.io/docs/getting-started/
- https://www.haproxy.com/documentation/kubernetes/latest/usage/ingress/
- https://www.haproxy.com/blog/use-helm-to-install-the-haproxy-kubernetes-ingress-controller/
Introduction
This page describes how to extend the HAProxy configuration from Kubernetes API Load-Balancer using HAProxy so that the same instance also acts as a load-balancer for the cluster's ingress controller.
I won't go into any detail on Kubernetes ingress or ingress controllers; the first two links in the references provide ample detail on these topics.
I will describe how I have ingress set up on my k3s-based cluster, and how I use HAProxy as a load-balancer for accessing all web applications on the cluster.
Ingress-Controller Selection
I've disabled the default Traefik-based ingress controller on my k3s cluster using the "--disable traefik" option during installation.
To replace Traefik I've installed the haproxy-ingress controller on my k3s cluster. I used the 'daemonset' installation, which brings up an HAProxy ingress pod on each node. This means that any web application with an ingress configuration can be accessed via any of the nodes. While a local DNS cname for each application could be pointed at an arbitrary node, this is difficult to maintain, and is prone to problems due to node failure or maintenance, as previously described for the API load-balancer.
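For reference, a daemonset install of haproxy-ingress can be done with the project's Helm chart (see the last two references above). The repository URL and the controller.kind value come from the haproxy-ingress documentation; the release and namespace names below are just examples:

```shell
$ helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
$ helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
    --create-namespace --namespace ingress-controller \
    --set controller.kind=DaemonSet
```

With controller.kind=DaemonSet the chart schedules one ingress pod per node instead of a fixed-size deployment.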
A better way is to set up a load-balancer and point the application cnames at the load-balancer.
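To illustrate what one of these per-application ingress configurations looks like, here is a minimal sketch of an Ingress resource; the hostname and service name are hypothetical, and the cname for the host would point at the load-balancer:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
    # hypothetical hostname; its cname points at the HAProxy load-balancer
    - host: whoami.example.home
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami   # hypothetical ClusterIP service
                port:
                  number: 80
```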
Rather than set up a new HAProxy load-balancer, I've simply extended the one that I was using for Kubernetes API load-balancing.
Docker-Compose file
This is the extended docker-compose.yml file.
$ cat docker-compose.yml
---
version: '3'
services:
  haproxy:
    container_name: haproxy
    image: haproxytech/haproxy-alpine:2.4.7
    volumes:
      - ./config:/usr/local/etc/haproxy:ro
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Toronto
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "6443:6443"
      - "8404:8404"
# EOF
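Since the port mappings changed, the container has to be recreated rather than just restarted. A typical sequence (standard docker-compose commands) is:

```shell
$ docker-compose up -d
$ docker-compose ps
$ docker-compose logs -f haproxy
```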
HAProxy Config
This is the extended haproxy.cfg file.
$ cat config/haproxy.cfg
global
    stats socket /var/run/api.sock user haproxy group haproxy mode 660 level admin expose-fd listeners
    log stdout format raw local0 info

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s

frontend k8s-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend k8s-api

frontend ingress-80
    bind *:80
    default_backend ingress-80

frontend ingress-443
    bind *:443
    mode tcp
    option tcplog
    default_backend ingress-443

backend k8s-api
    mode tcp
    option ssl-hello-chk
    option log-health-checks
    default-server inter 10s fall 2
    server node-1-rpi4 192.168.7.51:6443 check
    server node-2-lxc 192.168.7.52:6443 check
    server node-3-lxc 192.168.7.53:6443 check

backend ingress-80
    option log-health-checks
    server node-1-rpi4 192.168.7.51:80 check
    server node-2-lxc 192.168.7.52:80 check
    server node-3-lxc 192.168.7.53:80 check
    server node-4-lxc 192.168.7.54:80 check
    server node-5-rpi4 192.168.7.55:80 check
    server node-6-rpi4 192.168.7.56:80 check
    server node-7-rpi4 192.168.7.57:80 check

backend ingress-443
    mode tcp
    option log-health-checks
    server node-1-rpi4 192.168.7.51:443 check
    server node-2-lxc 192.168.7.52:443 check
    server node-3-lxc 192.168.7.53:443 check
    server node-4-lxc 192.168.7.54:443 check
    server node-5-rpi4 192.168.7.55:443 check
    server node-6-rpi4 192.168.7.56:443 check
    server node-7-rpi4 192.168.7.57:443 check
Conclusion
With this configuration, HAProxy now acts as a load-balancer for both Kubernetes API access and any HTTP or HTTPS ingress routes set up on the cluster. Note that the 443 frontend and backend run in tcp mode, so TLS connections are passed through to the ingress controller rather than terminated at the load-balancer.
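A quick end-to-end check is to send a request through the load-balancer with a Host header matching an ingress rule; the hostname and load-balancer address below are examples, not values from my setup:

```shell
$ curl -H 'Host: whoami.example.home' http://192.168.7.50/
$ curl -k -H 'Host: whoami.example.home' https://192.168.7.50/
```

The stats page on port 8404 is also useful here: it shows the health-check state of every node in the ingress-80 and ingress-443 backends at a glance.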
Created: 2021-10-22 12:54
Last update: 2021-10-22 13:29