This page describes how to enable running k3s nodes in an LXD container. The benefits of running an application in an LXD container instead of a virtual machine should be clear: no virtualization overhead, better deployment density, and greater configuration flexibility, to name a few.
It seems that k3s has issues allocating storage when the backing storage for an LXD container is a ZFS or BTRFS storage pool. The simplest way around this is to create a new pool of type LVM and use that for k3s LXD containers. You could also use a DIR type storage pool, but be aware that DIR-based storage pools come with performance issues and limitations.
# mkdir /opt/lxd_pool2
$ lxc storage create k3s lvm size=50GiB
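For reference, if you do opt for a DIR pool instead, it can be backed by an existing directory such as the one created above. The pool name `k3s-dir` here is illustrative, not from the original setup:

```shell
# A DIR-backed alternative to the LVM pool; slower, and lacking
# features such as per-container quotas on some filesystems.
$ mkdir -p /opt/lxd_pool2
$ lxc storage create k3s-dir dir source=/opt/lxd_pool2
```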
A number of configuration parameters need to be added to the k3s LXD container. The easiest way to do this is with a profile. As we're using a custom pool and a bridged network, we'll create a profile that encapsulates both the custom settings required for a k3s node and the bridged network configuration.
$ cat k3s.cnf
config:
  security.nesting: "true"
  security.privileged: "true"
  limits.cpu: "2"
  limits.memory: 4GB
  limits.memory.swap: "false"
  linux.kernel_modules: overlay,nf_nat,ip_tables,ip6_tables,netlink_diag,br_netfilter,xt_conntrack,nf_conntrack,ip_vs,vxlan
  raw.lxc: |
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop =
description: Profile settings for a bridged k3s container
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  kmsg:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  root:
    path: /
    pool: k3s
    type: disk
name: k3s
used_by:
To create the new profile, execute the following commands:
$ lxc profile create k3s
$ lxc profile edit k3s < k3s.cnf
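To confirm the settings were applied, you can dump the profile back out and compare it with k3s.cnf (key ordering in the output may differ from the file):

```shell
$ lxc profile show k3s
```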
Parts of this setup are specific to my environment. If your k3s nodes are running Ubuntu, you'll probably also need the apparmor-utils package.
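On an Ubuntu node, the package can be installed with apt once the container is up; the container name `k3s-node` is an assumption, substitute your own:

```shell
# Install apparmor-utils inside the (hypothetically named) container
$ lxc exec k3s-node -- apt install -y apparmor-utils
```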
At this point the LXD container should be ready for deploying a k3s node using the standard procedure.
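As a sketch of that standard procedure: launch a container from the profile, then run the upstream k3s install script inside it. The container name `k3s-node` and the Ubuntu image are assumptions, not part of the original setup:

```shell
# Launch a container using the k3s profile created above
$ lxc launch ubuntu:20.04 k3s-node --profile k3s

# Run the standard k3s installer inside the container
$ lxc exec k3s-node -- bash -c 'curl -sfL https://get.k3s.io | sh -'

# Check that the node registered itself
$ lxc exec k3s-node -- kubectl get nodes
```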
(created: 2021-10-07, last modified: 2022-01-18 at 10:16:36)