
K3S Nodes in LXD Containers

Introduction

This page describes how to run k3s nodes in LXD containers. The benefits of running an application in an LXD container instead of a virtual machine should be clear: no virtualization overhead, better deployment density, and greater configuration flexibility, to name a few.

Container storage can't be on a ZFS or BTRFS storage pool

It seems that k3s has issues allocating storage when the backing storage for an LXD container is a ZFS or BTRFS storage pool. The simplest way to work around this is to create a new pool of type LVM and use that for the k3s LXD containers. You could also use a DIR type storage pool, but be aware that DIR-based pools come with performance issues and limitations.

$ lxc storage create k3s lvm size=50GiB
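
You can confirm the pool was created and inspect its configuration with the standard LXD storage commands:

$ lxc storage list
$ lxc storage show k3s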

New profile for k3s containers

A number of configuration parameters need to be added to the k3s LXD container, and the easiest way to do this is with a profile. Since we're using a custom storage pool and a bridged network, we'll create a profile that encapsulates both the custom settings required for a k3s node and the bridged network configuration.

$ cat k3s.cnf 

config:
  security.nesting: "true"
  security.privileged: "true"
  limits.cpu: "2"
  limits.memory: 4GB
  limits.memory.swap: "false"
  linux.kernel_modules: overlay,nf_nat,ip_tables,ip6_tables,netlink_diag,br_netfilter,xt_conntrack,nf_conntrack,ip_vs,vxlan
  raw.lxc: |
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop =
description: Profile settings for a bridged k3s container
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  kmsg:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  root:
    path: /
    pool: k3s
    type: disk
name: k3s
used_by:

To create the new profile, execute the following commands:

$ lxc profile create k3s
$ lxc profile edit k3s <k3s.cnf
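
You can verify the resulting profile with lxc profile show k3s. A container can then be launched using the profile; the image alias and container name below are only examples, so adjust them for your environment:

$ lxc launch ubuntu:22.04 k3s-node-1 --profile k3s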

Packages to add to container after launch

This step is partially specific to my setup; if your k3s nodes are running Ubuntu, you'll probably also require the apparmor-utils package.

  • The following packages need to be added to the container to get k3s running (see the example after this list):
    • curl (to install k3s)
    • nfs-common (to access NFS storage)
    • apparmor-utils (if the k3s node container is running Ubuntu)
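
On an Ubuntu or Debian based container image, installing these would look something like the following; the container name k3s-node-1 is just the example used above, so substitute your own:

$ lxc exec k3s-node-1 -- apt-get update
$ lxc exec k3s-node-1 -- apt-get install -y curl nfs-common apparmor-utils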

Conclusion

At this point the LXD container should be ready for deploying a k3s node using the standard procedure.
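
For reference, the standard procedure uses the k3s install script; the container name here is the example used earlier, and the current install options are described in the k3s documentation:

$ lxc exec k3s-node-1 -- sh -c "curl -sfL https://get.k3s.io | sh -"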


Created: 2021-10-07 17:21
Last update: 2023-03-11 11:01