
Using VLANs with K3S

Intro

  • tested using k3s + metallb with VLANs
  • the idea was to see whether we could provision a VLAN IP via metallb that is different from the node IP
  • the test also required that the desired VLANs be trunked to the host (or, in the case of a VM, to the proxmox host, with 'VLAN aware' enabled on the bridge interface)
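For reference, a 'VLAN aware' bridge on the proxmox host typically looks something like this in /etc/network/interfaces (a sketch only; the interface names and addresses are assumptions):

```text
auto vmbr0
iface vmbr0 inet static
    address 192.168.20.5/24
    gateway 192.168.20.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With `bridge-vlan-aware yes` and the VLAN IDs listed in `bridge-vids`, tagged frames for those VLANs are passed through to the VM's virtual NIC.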

Configuration

  • configured networking on a debian vm as follows:
$ cat /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens18
iface ens18 inet static
    address 192.168.20.231/24
    gateway 192.168.20.1
    dns-nameservers 192.168.20.21 192.168.20.22
    dns-search lan

auto ens18.40
iface ens18.40 inet manual
    vlan-raw-device ens18
    post-up ip link set dev $IFACE up
    pre-down ip link set dev $IFACE down

auto ens18.50
iface ens18.50 inet manual
    vlan-raw-device ens18
    post-up ip link set dev $IFACE up
    pre-down ip link set dev $IFACE down
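The same stanza pattern can be templated for additional VLANs. A hypothetical sketch (VLAN 60 is an example, not part of the test) that writes a fragment to /tmp for review before moving it into /etc/network/interfaces.d/:

```shell
# sketch: generate an interfaces.d stanza for an additional VLAN
# (VLAN 60 and the parent NIC name are assumptions for illustration)
VLAN_ID=60
PARENT=ens18
cat > "/tmp/${PARENT}.${VLAN_ID}" <<EOF
auto ${PARENT}.${VLAN_ID}
iface ${PARENT}.${VLAN_ID} inet manual
    vlan-raw-device ${PARENT}
    post-up ip link set dev \$IFACE up
    pre-down ip link set dev \$IFACE down
EOF
cat "/tmp/${PARENT}.${VLAN_ID}"
```

After moving the generated file into /etc/network/interfaces.d/, `ifup ens18.60` would bring the subinterface up.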
  • next, deployed a single-node k3s cluster with traefik and the built-in servicelb load balancer disabled (servicelb would otherwise conflict with metallb)
$ curl -sfL https://get.k3s.io | K3S_TOKEN=testcluster sh -s - server --disable=traefik --disable=servicelb --cluster-init
  • next, pulled down the metallb manifest for version 0.13.9 and applied it
$ wget https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
$ kubectl apply -f metallb-native.yaml
  • then applied the following IPAddressPool and L2Advertisement manifests
$ cat metallb_ipaddresspool.yml 
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: VLAN30
  namespace: metallb-system
spec:
  addresses:
  - 192.168.30.80-192.168.30.89
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: VLAN40
  namespace: metallb-system
spec:
  addresses:
  - 192.168.40.80-192.168.40.89
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: VLAN50
  namespace: metallb-system
spec:
  addresses:
  - 192.168.50.80-192.168.50.89
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system

# EOF
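Note that the L2Advertisement above has an empty spec, which announces addresses from every pool. To announce only selected pools, the v1beta1 API also accepts an ipAddressPools list (a sketch; the resource name is an assumption):

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vlan40-only   # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - VLAN40
```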

  • the test was to deploy a simple nginx server (deployment and service), one for each ip range, and apply an lb config to each
$ cat nginx30_deployment.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx30
spec:
  selector:
    matchLabels:
      app: nginx30
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx30
    spec:
      containers:
        - name: nginx30
          image: nginx
          ports:
            - containerPort: 80
              name: "nginx30"

# EOF

$ cat nginx30_service.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx30
spec:
  ports:
    - name: http
      port: 80
  selector:
    # apply service to any pod with label app: nginx30
    app: nginx30

# EOF

$ cat nginx30_lb.yml 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx30
  annotations:
    metallb.universe.tf/address-pool: VLAN30
spec:
  loadBalancerIP: 192.168.30.80
  ports:
  - port: 80
    targetPort: 80
    name: port80
  selector:
    app: nginx30
  type: LoadBalancer

# EOF
  • repeated each config for 192.168.30.80 (VLAN 30), 192.168.40.80 (VLAN 40) and 192.168.50.80 (VLAN 50)
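As an aside, spec.loadBalancerIP (used above) is deprecated in recent Kubernetes releases; metallb 0.13 also accepts the requested address as an annotation. A sketch of the same nginx30 service written that way:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx30
  annotations:
    metallb.universe.tf/address-pool: VLAN30
    metallb.universe.tf/loadBalancerIPs: 192.168.30.80
spec:
  ports:
  - port: 80
    targetPort: 80
    name: port80
  selector:
    app: nginx30
  type: LoadBalancer
```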

Results

$ kubectl get all -o wide

NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
pod/nginx30-6d44574b8f-4r547   1/1     Running   0          115s   10.42.0.38   vlantest   <none>           <none>
pod/nginx40-8688664566-khhkx   1/1     Running   0          98s    10.42.0.39   vlantest   <none>           <none>
pod/nginx50-574945fd7-9f9pn    1/1     Running   0          48s    10.42.0.40   vlantest   <none>           <none>

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP      10.43.0.1       <none>          443/TCP        79m    <none>
service/nginx30      LoadBalancer   10.43.194.109   192.168.30.80   80:30154/TCP   111s   app=nginx30
service/nginx40      LoadBalancer   10.43.168.15    192.168.40.80   80:32392/TCP   94s    app=nginx40
service/nginx50      LoadBalancer   10.43.32.126    192.168.50.80   80:30434/TCP   44s    app=nginx50

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES   SELECTOR
deployment.apps/nginx30   1/1     1            1           115s   nginx30      nginx    app=nginx30
deployment.apps/nginx40   1/1     1            1           98s    nginx40      nginx    app=nginx40
deployment.apps/nginx50   1/1     1            1           48s    nginx50      nginx    app=nginx50

NAME                                 DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES   SELECTOR
replicaset.apps/nginx30-6d44574b8f   1         1         1       115s   nginx30      nginx    app=nginx30,pod-template-hash=6d44574b8f
replicaset.apps/nginx40-8688664566   1         1         1       98s    nginx40      nginx    app=nginx40,pod-template-hash=8688664566
replicaset.apps/nginx50-574945fd7    1         1         1       48s    nginx50      nginx    app=nginx50,pod-template-hash=574945fd7
  • the nginx servers at 192.168.40.80 and 192.168.50.80 were accessible
  • the nginx server at 192.168.30.80 was not, but only because the node had no interface on VLAN 30 (no ens18.30 subinterface was configured)

  • this should also work on a physical host, as long as the VLANs are trunked to the host's switch port
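The reachability checks above can be scripted; a hypothetical smoke test (assumes the client running it has routes, or trunked interfaces, into each VLAN subnet):

```shell
# probe each LoadBalancer IP from the results above
checked=0
for ip in 192.168.30.80 192.168.40.80 192.168.50.80; do
  if curl -s -o /dev/null --connect-timeout 2 "http://$ip/"; then
    echo "OK $ip"
  else
    echo "unreachable $ip"
  fi
  checked=$((checked + 1))
done
echo "$checked endpoints probed"
```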


Created: 2023-11-20 22:00
Last update: 2023-11-20 22:20