I have a relatively simple homelab, set up primarily to run virtualized workloads.
I have four Raspberry Pi 4 8GB boards set up as the core of a Kubernetes cluster. The cluster also includes three LXD containers running on the virtualization servers, giving me a seven-node multi-architecture cluster and the flexibility to run both arm64 and amd64 workloads.
There are also five "servers" and a Synology NAS. The servers are used for virtualization, while the NAS is used as a file server.
The Kubernetes cluster is the primary place to run workloads on the homelab. Most workloads are intended to run on the four Raspberry Pi nodes, but because the cluster is multi-architecture, amd64 workloads can be scheduled onto the LXD containers.
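Steering a workload onto a particular architecture in a mixed cluster is typically done with the standard `kubernetes.io/arch` node label, which the kubelet sets automatically on every node. A minimal sketch (the pod name and image are hypothetical, not from this homelab):

```yaml
# Pin an amd64-only workload to the cluster's amd64 (LXD) nodes.
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-app                           # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64                    # kubelet-provided label
  containers:
    - name: app
      image: example.com/amd64-only-app:latest   # hypothetical image
```

Workloads that publish multi-arch images need no selector at all; the scheduler can place them on either the Pi or LXD nodes.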
There are five virtualization servers, set up to run virtual machine workloads.
There are also three old laptops being used as additional virtualization servers.
The NAS houses four 3TB HDDs in a Synology SHR configuration, which with equal-sized drives reserves roughly one drive's worth of capacity for redundancy, leaving approximately 9TB of usable space. The NAS provides file shares and media storage for the network, as well as an NFS share that provides persistent storage to the applications running on the Kubernetes cluster.
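An NFS export on the NAS can be surfaced to the cluster as an NFS-backed PersistentVolume. A minimal sketch, where the server address, export path, and capacity are placeholders rather than this homelab's actual values:

```yaml
# Expose an NFS share on the NAS as cluster-wide persistent storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-nfs-pv           # hypothetical name
spec:
  capacity:
    storage: 100Gi           # placeholder size
  accessModes:
    - ReadWriteMany          # NFS allows many pods to mount the same share
  nfs:
    server: 192.168.1.10     # placeholder NAS address
    path: /volume1/k8s       # placeholder Synology export path
```

Pods then claim this storage through a matching PersistentVolumeClaim rather than mounting the NAS directly.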
There is a 2TB external USB HDD attached to the NAS that serves as the destination for both the NAS's own backups and the duplicity and rsync backups sent from servers across the LAN.
This is a logical diagram showing the applications and services currently running on the homelab, and how they are structured.
I use a number of tools to monitor the applications and services running on the homelab: Netdata and Monitorix for server system stats, virt-manager for keeping an eye on the virtual machines, Portainer for the Docker containers, and a number of command-line utilities for managing Kubernetes, LXD, KVM and Docker.
The primary monitoring of the homelab is done using Nagios Core.
(created: 2021-06-10, last modified: 2022-02-11 at 20:04:55)