Ansible Deployment of Kubernetes Workloads - Refactored
References
- Ansible Deployment of Kubernetes Workloads
- https://git.radar231.com/radar231/k8s_website-wiki
- https://git.radar231.com/radar231/role_k8s_website-wiki_deploy
- https://git.radar231.com/radar231/playbook_k8s-deployment
- https://docs.ansible.com/ansible/latest/user_guide/collections_using.html
- https://galaxy.ansible.com/docs/using/installing.html
- https://git.radar231.com/radar231/ansible_dev_env
Introduction
This post is a follow-up to my previous post on using Ansible to deploy Kubernetes workloads. I've since refactored the way I use Ansible to deploy to my Kubernetes cluster, so a second post seemed in order.
Refactoring Ansible Code
Until a few months ago, I had been keeping all of my Ansible code in a single monolithic repository. While this simplified the usage of the playbooks, managing the code was becoming a challenge. Taking a page from the recent restructuring that split Ansible into ansible-base and community collections, I decided to refactor all of my Ansible code and break the roles and playbooks out into separate repositories.
My Development Environment (Collections and Roles)
There are a lot of ways to organize Ansible projects, but in my case I've decided to go the requirements.yml route for my roles. While I keep all of my roles on my own git server, I can still use requirements.yml files to install both the collections I require from Ansible Galaxy and my own roles from my git server.
- clip from ansible_dev_env/roles/requirements.yml
---
(...)
- src: https://git.radar231.com/radar231/role_k8s_website-wiki_deploy
  name: website-wiki_deploy
  scm: git
(...)
# EOF
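The collections side works the same way. Below is a minimal sketch of a ''collections/requirements.yml''; the entries are illustrative rather than my actual file, but the community.kubernetes collection is what provides the k8s module used by the deployment roles later in this post.
---
# clip from an illustrative collections/requirements.yml
collections:
  # provides the community.kubernetes.k8s module used by the deployment roles
  - name: community.kubernetes
# EOF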
There are also a lot of ways to use collections and roles. They can be specified and included on a project-by-project basis, but I've chosen to install the Galaxy collections and all of my roles centrally into my Ansible development environment. The default locations are ''~/.ansible/collections/'' and ''~/.ansible/roles/'', but this can be changed in ''~/.ansible.cfg'' (see the example after the listing below). I have a shell script that sets up my development environment by symlinking all of my playbooks and requirements files, as well as a few specific shell scripts, into my development directory. Once done, my Ansible development environment directory ends up looking like this:
$ tree ansidev
ansidev
├── ansible.yml -> /home/rmorrow/dev/git.radar231.com/playbook_ansible/ansible.yml
├── base_pkgs.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/base_pkgs.yml
├── bash_mods.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/bash_mods.yml
├── chk_upgrades.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/chk_upgrades.yml
├── collections -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/collections
├── create_user.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/create_user.yml
├── del_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/del_inventory.yml
├── docker.yml -> /home/rmorrow/dev/git.radar231.com/playbook_docker/docker.yml
├── dotfiles.yml -> /home/rmorrow/dev/git.radar231.com/playbook_dotfiles/dotfiles.yml
├── do_updates.sh -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/do_updates.sh
├── du_backups.yml -> /home/rmorrow/dev/git.radar231.com/playbook_du_backups/du_backups.yml
├── k3s_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k3s-cluster/k3s_inventory.yml
├── k3s.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k3s-cluster/k3s.yml
├── k8s-deployment.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k8s-deployment/k8s-deployment.yml
├── lxdhost_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_lxdhost/lxdhost_inventory.yml
├── lxdhost.yml -> /home/rmorrow/dev/git.radar231.com/playbook_lxdhost/lxdhost.yml
├── microk8s_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_microk8s-cluster/microk8s_inventory.yml
├── microk8s.yml -> /home/rmorrow/dev/git.radar231.com/playbook_microk8s-cluster/microk8s.yml
├── mk_dev_env_links -> ../git.radar231.com/ansible_dev_env/mk_dev_env_links
├── monitorix.yml -> /home/rmorrow/dev/git.radar231.com/playbook_monitorix/monitorix.yml
├── nagios_agent.yml -> /home/rmorrow/dev/git.radar231.com/playbook_nagios_agent/nagios_agent.yml
├── pfetch.yml -> /home/rmorrow/dev/git.radar231.com/playbook_pfetch/pfetch.yml
├── rem_base_pkgs.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/rem_base_pkgs.yml
├── roles -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/roles
├── run_role.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/run_role.yml
├── setup-host.yml -> /home/rmorrow/dev/git.radar231.com/playbook_setup-host/setup-host.yml
├── update_roles.sh -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/update_roles.sh
├── updates.yml -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/updates.yml
└── vim_setup.yml -> /home/rmorrow/dev/git.radar231.com/playbook_vim_setup/vim_setup.yml
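As mentioned above, the default search paths can be overridden in ''~/.ansible.cfg''. This is a minimal sketch, assuming roles and collections should be resolved from the development directory before the per-user defaults; the paths shown are illustrative, not my actual configuration.
[defaults]
# search the dev environment first, then fall back to the per-user defaults
roles_path = ~/ansidev/roles:~/.ansible/roles
collections_paths = ~/ansidev/collections:~/.ansible/collections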
I use the following shell script to refresh my installed roles and collections whenever a collection has been updated, I've changed a role, or I've added entries to a requirements.yml file.
$ cat update_roles.sh
#!/bin/bash
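# --force re-installs roles and collections even if they are already present,
# so updated versions are always pulled in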
ansible-galaxy install -r roles/requirements.yml --force
ansible-galaxy install -r collections/requirements.yml --force
Application Deployment Role
Now, on to the topic of the post. First off, here is the deployment role for my website-wiki application, which is the same application that I highlighted in my previous post.
The tasks file for the role is pretty much the same as the playbook from the previous post.
- Role Directory Structure
$ tree role_k8s_website-wiki_deploy
role_k8s_website-wiki_deploy
├── meta
│   └── main.yml
├── README.md
└── tasks
    └── main.yml
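The contents of ''meta/main.yml'' aren't shown above. A minimal sketch of what it would typically contain is below; the galaxy_info values are illustrative, not the actual file contents.
---
# meta file for website-wiki_deploy role (illustrative values)
galaxy_info:
  author: radar231
  description: Deploy the website-wiki application to the kubernetes cluster
  license: MIT
  min_ansible_version: "2.9"
dependencies: []
# EOF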
- Role Tasks File
$ cat role_k8s_website-wiki_deploy/tasks/main.yml
---
#####################################################################
#
# website-wiki_deploy role
#
# - requires that the 'devpath' variable be set
#
#####################################################################
# tasks file for website-wiki_deploy role
- debug: msg="Deploying website-wiki app."

- name: Create the tiddlywiki namespace
  community.kubernetes.k8s:
    name: tiddlywiki
    api_version: v1
    kind: Namespace
    state: present

- name: Create the PV object
  community.kubernetes.k8s:
    state: present
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_pv.yml"

- name: Create the PVC object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_pvc.yml"

- name: Create the secrets object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_secret.yml"

- name: Create the deployment object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_deployment.yml"

- name: Create the service object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_service.yml"

- name: Create the ingress object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_ingress.yml"
# EOF
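To test a single deployment role outside of the top-level playbook, a small wrapper playbook is all that's needed. This is just a sketch (the ''run_role.yml'' playbook in the listing above may do this differently); ''devpath'' is the only variable this particular role requires.
---
# minimal wrapper playbook to run a single deployment role (sketch)
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    devpath: "/home/rmorrow/dev/git.radar231.com"
  roles:
    - role: website-wiki_deploy
# EOF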
Top-Level Deployment Playbook
The top-level deployment playbook pulls it all together, and sequentially calls all of the application deployment roles.
$ cat playbook_k8s-deployment/k8s-deployment.yml
---
#####################################################################
#
# k8s-deployment playbook
#
# - requires that the 'devpath' variable be set to the path of the
# kubernetes application manifests.
#
# - requires that the 'haproxy_ingress_ver' and 'metallb_ver' variables
# be set to the desired version of each to install
#
#####################################################################
- hosts: localhost
  roles:
    # haproxy ingress controller
    - role: haproxy_deploy
    # metallb load-balancer
    - role: metallb_deploy
    # delfax namespace
    - role: ddclient_deploy
    - role: delinit_deploy
    - role: website_deploy
    # guacamole namespace
    - role: maxwaldorf-guacamole_deploy
    # home-automation namespace
    - role: home-assistant_deploy
    - role: mosquitto_deploy
    - role: motioneye_deploy
    # homer namespace
    - role: homer_deploy
    # k8stv namespace
    - role: flexget_deploy
    - role: transmission-openvpn_deploy
    # nagios namespace
    - role: nagios_deploy
    # pihole namespace
    - role: pihole_deploy
    # tiddlywiki namespace
    - role: journal-wiki_deploy
    - role: notes-wiki_deploy
    - role: website-wiki_deploy
    - role: wfh-wiki_deploy
  vars:
    devpath: "/home/rmorrow/dev/git.radar231.com"
    haproxy_ingress_ver: 0.13.4
    metallb_ver: v0.10.3
# EOF
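With the roles installed and the playbooks symlinked into the development directory, a full deployment is just a normal playbook run, something like the following (assuming the kubeconfig for the cluster is already in place for the community.kubernetes modules to use):
$ cd ~/ansidev
$ ansible-playbook k8s-deployment.yml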
Conclusion
While it was a fair bit of work to refactor my Ansible code, in the end it was well worth the effort. The code is much clearer and more manageable, and it is now simpler to grab a specific role or playbook from its own repository.
Created: 2021-11-12 16:03
Last update: 2021-11-12 20:15