IaC: LXD as the Vehicle and Ansible as the Engine
References
- IaC (Infrastructure as Code) Definition
- Ansible Community.General Collection
- Ansible Community.General Support Modules for LXD
- Ansible LXD Deployment Roles and Playbook
Introduction
Adopting "DevOps" practices for a homelab may seem like overkill, but it could be appropriate if the homelab is being used as a training or educational tool for current or future employment. At the very least, there are aspects of the DevOps methodology that could be beneficial in any homelab. One of these is the concept of "Infrastructure as Code" (IaC), which dictates that the configuration of any virtualized infrastructure host (either a system container or virtual machine, in the case of LXD) be specified in a configuration file. This configuration file is then fed into a deployment engine to create the virtualized host. The benefit of this practice is that it is easy to ensure that all virtualized hosts are deployed in a predictable and consistent manner.
A common tool used for IaC deployment is Terraform. Terraform can be integrated into many virtualization systems, including LXD.
Fortunately, Terraform isn't required for LXD, as containers and virtual machines are typically deployed from images stored on one of the default image sources (which can be listed using lxc remote list). This means that containers and virtual machines deployed from stock images will be consistent from deployment to deployment, apart from any configuration made via cloud-init.
Ansible as an IaC Engine
Ansible has long had a prominent role in performing an IaC function. Playbooks that perform system configurations or install software to a freshly deployed host are performing an aspect of IaC. This is a bit of a grey area though, as using the same playbooks to install software to an existing host is often also looked at as "Configuration Management". Either way, having a consistent method to perform configurations or install software to a host is a key IaC role.
What is normally missing for Ansible is the initial deployment of the virtualized host itself. There are many Ansible modules that interface with a wide variety of virtualization systems. In the case of LXD, there are Ansible modules that integrate into LXD container and virtual machine deployment and management.
Ansible LXD Modules: lxd_container and lxd_connection
The two modules we'll look at here for the purpose of IaC are the "lxd_container" and "lxd_connection" modules. Links to the documentation for each are listed in the References section.
Ansible lxd_container Module
The lxd_container module is the first module that we will use for LXD virtualized host deployment. This module provides a means to define key values to control deployment of a desired container or virtual machine from a specified image source. The image source can either be on one of the standard remote image servers, a custom remote server, or an image existing locally on the local LXD server.
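As a minimal sketch of what such a deployment task can look like (the host name, profile, and image alias here are placeholder assumptions, not values from my repository):

```yaml
# Hedged sketch: deploy a container from the "images:" remote.
# "web01" and the "bridged" profile are placeholder values.
- name: Deploy an Ubuntu 22.04 container
  community.general.lxd_container:
    name: web01
    state: started
    type: container
    profiles: ["bridged"]
    source:
      type: image
      mode: pull
      server: https://images.linuxcontainers.org
      protocol: simplestreams
      alias: ubuntu/22.04
    wait_for_ipv4_addresses: true
    timeout: 600
```

Setting wait_for_ipv4_addresses lets the play block until the container has networking, which is handy before any follow-on configuration steps.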
Ansible lxd_connection Module
The lxd_connection module allows for running commands within an LXD container via the LXD REST API. This is functionally equivalent to using lxc exec (container) -- (command and arguments). While you could perform pretty much all software installation and configuration using the lxd_connection module, I use it only to get networking and SSH control configured on the new virtualized host, then switch to a standard Ansible SSH connection for further configuration. This is particularly useful when you have access to the LXD server but no direct network access to the deployed virtualized host.
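A short sketch of a play that runs over the LXD REST API instead of SSH (the container name is a placeholder; raw is used because python may not be present yet):

```yaml
# Hedged sketch: run tasks inside a container over the LXD API,
# no SSH or direct network access to the container required.
- hosts: web01
  connection: community.general.lxd
  gather_facts: false
  tasks:
    - name: Confirm we can execute inside the container
      ansible.builtin.raw: hostname
```

The connection plugin resolves the inventory hostname to a container name on the local LXD server by default.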
A Caveat for the lxd_container Module
The lxd_container module works well for virtualized hosts deployed directly to the local LXD server. There is also a capability to deploy to a specific target LXD server if operating within a cluster environment.
Unfortunately, while there seems to be capabilities within the module to operate on remote LXD servers in a non-cluster environment, I haven't been successful in getting that to work. As my homelab runs multiple LXD servers that are not clustered, this is an issue for me. Until I manage to figure out the "magic sauce" to get this to work, I use a work-around for initial deployment. My Ansible development system also has LXD installed, with all of my remote LXD servers defined as "remotes". I then use a simple Ansible shell command to run an lxc launch ... command to perform the actual deployment of the virtualized host.
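The work-around can be sketched as a task run on the Ansible controller; the variable names mirror the inventory template shown later, but the exact task in my repository may differ:

```yaml
# Hedged sketch of the work-around: run "lxc launch" locally,
# targeting a non-clustered remote defined with "lxc remote add".
- name: Launch the virtualized host on a remote LXD server
  ansible.builtin.command: >
    lxc launch {{ image_location }}:{{ image_name }}/{{ image_vers }}
    {{ remote_name }}:{{ inventory_hostname }}
    --profile {{ profile }}
  delegate_to: localhost
```

With the template values this expands to something like lxc launch images:ubuntu/22.04 hollister:hostname --profile bridged.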
Deploying and Configuring an LXD Container
This is a walk-through of a deployment using my Ansible code. The links to the roles and playbook are located in the References.
Please note that most of my Ansible code is only configured to run on either Debian or Ubuntu distributions. Extension of most of the code to run on other distributions should be fairly easy, but isn't included here.
The Inventory Host Definition
The first step is to define the parameters of the virtualized host to be deployed. I do that within the inventory host definition file ("inventory/host_vars/(hostname).yml"), and have a template file as an example.
$ cat inventory/host_vars/_template.yml
---
#######################################
# Host inventory definition
# template
#######################################
# host network configuration
ansible_host: 192.168.20.233
ip_gw: 192.168.20.1
ip_ns1: 192.168.20.21
ip_ns2: 192.168.20.22
#######################################
# VM/Container LXD configuration
# LXD Container or VM
host_type: Container
# LXD profile to apply
profile: bridged
# LXD image selection
image_name: "ubuntu"
image_vers: "22.04"
image_location: "images"
# where to deploy container
remote_name: hollister
#######################################
# Host virtual hardware configuration
# CPU cores, Memory, Root disk size
cpu: 4
mem: 8
root: 100
#######################################
# Ansible roles to apply to host
# - uncomment to select
# - create_user includes create_user, sudoers, vim_setup, bash_mods and gitconfig roles
# - use "nil" for no ansible configuration management
host_config:
- nil
# - base_pkgs
# - create_user
# - du_backups
# - monitorix
# - nagios_agent
# - docker
# - k3s
#######################################
# user definition for "create_user" role
user: rmorrow
pw: resetthispasswd
home: /home/rmorrow
# EOF
Most of the fields in this file should be self-explanatory. In general, I define aspects of the target host: CPU, memory and root disk size, network parameters, source image, system configuration groups, the user account to create, and which remote server to deploy to. The "host_config" list controls which system configuration and software installation roles the playbook runs after the virtualized host has been deployed and configured for SSH access; entries can be selected or deselected as required. For a bare host with no configuration to be performed, leave all groups commented out with just the "nil" entry uncommented.
All of these values are imported into variables that will be used in the roles or playbooks to control the deployment and configuration of the virtualized host.
The deploy-host Playbook
The deploy-host playbook controls the deployment and configuration of the virtualized host. It first calls the lxc_deploy role to perform the actual deployment of the host, and then calls the lxdhost role, which performs the networking and SSH configuration that allows the deployed virtualized host to be managed by Ansible over an SSH connection.
Next the playbook will call the setup-host playbook in order to perform a number of package configuration roles.
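The overall flow described above can be sketched as follows; the actual playbook in my repository may differ in detail:

```yaml
# Hedged sketch of the deploy-host flow: deploy, bootstrap SSH,
# then hand off to the setup-host playbook.
- hosts: all
  gather_facts: false
  roles:
    - lxc_deploy   # launch the container/VM
    - lxdhost      # network + SSH setup over the LXD API

# Continue with package and configuration roles
- name: Configure the deployed host
  ansible.builtin.import_playbook: setup-host.yml
```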
The setup-host Playbook
The actual configuration of the host after deployment happens in the setup-host playbook. The "host_config" list is used to control which configuration roles are run, and any other variables required by these roles are also contained in the inventory host definition file.
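The role gating can be sketched like this, using the role names from the inventory template; the real playbook may structure this differently:

```yaml
# Hedged sketch: run each role only if it appears in host_config.
- hosts: all
  tasks:
    - name: Apply the base packages role when selected
      ansible.builtin.include_role:
        name: base_pkgs
      when: "'base_pkgs' in host_config"

    - name: Create the administrative user when selected
      ansible.builtin.include_role:
        name: create_user
      when: "'create_user' in host_config"
```

With host_config set to just "nil", every conditional fails and the host is left unconfigured, as intended.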
This also means that for an existing container or VM, or for a physical machine, the same inventory host definition file and the same host configuration playbook can be used to perform the exact same system configuration. The only step that needs to be completed before using this playbook is the root user SSH key setup to allow for Ansible management. An example of how to do this is the "debinit" file from the debinit_files repository.
The lxc_deploy Role
The lxc_deploy role is responsible for the initial deployment of the virtualized host. Normally this would be performed via the "lxd_container" module, but for now I simply use an Ansible shell task to run an lxc launch ... command, using the values defined in the inventory file to control the initial configuration of the deployed host.
The role differentiates between a container and a virtual machine deployment. For containers, the only post-deployment action is to confirm that python3 is installed on the virtualized host.
For virtual machines, in addition to python3, installation of the "cloud-guest-utils" and "fdisk" packages is confirmed. These packages are required to perform the post-deployment resizing of the root disk.
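A sketch of those post-deployment checks, assuming a Debian/Ubuntu guest and that host_type holds anything other than "Container" for virtual machines:

```yaml
# Hedged sketch: python3 for all host types, plus the root-disk
# resize tooling for virtual machines only.
- name: Ensure python3 is present (needed by most Ansible modules)
  ansible.builtin.raw: apt-get install -y python3

- name: Ensure VM root-disk resize tools are present
  ansible.builtin.apt:
    name:
      - cloud-guest-utils
      - fdisk
    state: present
  when: host_type != "Container"
```

The raw module is used for the python3 step because the other apt modules themselves require python on the target.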
The lxdhost Role
The lxdhost role uses the "lxd_connection" module to communicate directly with the deployed virtualized host using the LXD REST API. This way we are able to modify the network configuration of the deployed host without losing connectivity.
There are templates for both Ubuntu and Debian network configuration files. The first thing the role does is to replace the existing network configuration with one containing the network configuration from the inventory host definition file. The virtualized host is then rebooted to allow the new network configuration to become active.
After the host has finished booting, the role configures the virtualized host's SSH server to allow SSH-key-only logins for the root user and copies the Ansible user's SSH public key into the authorized_keys file for the root user.
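The sequence can be sketched as tasks running over the lxd connection; the template name and key path are placeholders, not the files from my repository:

```yaml
# Hedged sketch of the lxdhost sequence: push the network config,
# reboot, then authorize the Ansible SSH key for root.
- name: Install the static network configuration (netplan example)
  ansible.builtin.template:
    src: netplan.yaml.j2
    dest: /etc/netplan/10-ansible.yaml

- name: Reboot to activate the new network configuration
  ansible.builtin.raw: reboot

- name: Authorize the Ansible SSH key for root
  ansible.posix.authorized_key:
    user: root
    key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
```

Because the tasks run over the LXD REST API rather than the network, changing the host's IP configuration mid-play doesn't break the connection.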
There is a point to note here. Ansible has the capability to run as a non-root user and use privilege escalation (i.e., sudo) to become root. You can have Ansible prompt you for the privilege escalation password at playbook run time, which is the most secure way of accomplishing privilege escalation. However, if you need unattended, non-interactive execution and use passwordless sudo, there is effectively no difference between that and running directly as the root user. The key point is that whichever method you use, make sure the user account used for Ansible execution cannot be logged into over the network with a password.
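The prompted-escalation approach can be sketched as follows; the remote user name is a placeholder:

```yaml
# Hedged sketch: run as an unprivileged user and escalate with sudo.
- hosts: all
  remote_user: ansible
  become: true
  become_method: sudo
  tasks:
    - name: Example privileged task
      ansible.builtin.apt:
        update_cache: true
```

Run the playbook with ansible-playbook -K so Ansible prompts for the sudo password instead of relying on passwordless sudo.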
As these playbooks and roles are running on my isolated homelab network, I'm not too concerned with the way I have it configured. If however I was running this in a production environment over a widely distributed corporate network then I would probably change to using a password protected privilege escalation for most of the code execution.
Conclusion
To tie it all together: to use this playbook and roles, you first define your desired destination host(s) in your Ansible inventory. Once you have the inventory host files created and the hostname entries added to the main inventory file, you can run the deploy-host playbook to perform the deployment:
$ ansible-playbook -l host1,host2,host3 -i inventory/inventory_file.yml deploy-host.yml
If you are performing a configuration on an existing host, you can call the setup-host playbook instead:
$ ansible-playbook -l host1,host2,host3 -i inventory/inventory_file.yml setup-host.yml
This is only one way to perform IaC on a homelab that uses LXD servers. It works well for my purposes and helps to keep all of my homelab LXD host deployments consistent.
Created: 2023-01-10 19:54
Last update: 2023-01-29 01:32