Running a Homelab
I’ve been running a homelab for a couple of years now. What started as a few Raspberry Pis and Pi-hole has grown into a single Proxmox node, a structured Ubiquiti network, and around 15 Docker services. At some point, how to actually deploy things becomes a real question, not just “ssh in and run docker compose up”.
Network
The network backbone is all Ubiquiti — a UDM Pro handles routing and firewall, a UniFi switch connects everything, and a couple of access points cover the house. The network is split into six VLANs:
The addressing scheme is intentionally boring and consistent. The whole lab lives under 10.0.0.0/16, and each VLAN gets exactly one /24. The gateway is always .1, static devices live in .2 through .49, DHCP hands out .50 through .200, and .201 through .254 stays reserved for infrastructure or future use.
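The convention is mechanical enough to express in a few lines of shell. This is a sketch, not anything that runs in the lab: the function name is mine, but the thresholds come straight from the scheme above.

```shell
# Hypothetical helper: classify the last octet of an address into the bands above.
role_of() {
  if [ "$1" -eq 1 ]; then echo gateway     # .1 is always the gateway
  elif [ "$1" -le 49 ]; then echo static   # .2 – .49: static devices
  elif [ "$1" -le 200 ]; then echo dhcp    # .50 – .200: DHCP pool
  else echo reserved                       # .201 – .254: reserved
  fi
}

role_of 30    # → static
role_of 250   # → reserved
```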
I avoid 192.168.x.x on purpose. Too many home networks, hotel networks, coffee shops, and consumer routers use those ranges by default. Using 10.0.0.0/16 makes VPN routing and remote access less annoying because my home subnets are less likely to overlap with whatever network I am currently on.
```mermaid
graph TD
    UDM[UDM Pro]
    UDM --- Infra["Infra · VLAN 1 · 10.0.1.0/24\nManagement — switches, UDM, APs"]
    UDM --- Trusted["Trusted · VLAN 10 · 10.0.10.0/24\nPhones, laptops, personal devices"]
    UDM --- IoT["IoT · VLAN 20 · 10.0.20.0/24\nSmart home, sensors"]
    UDM --- Workbench["Workbench · VLAN 30 · 10.0.30.0/24\nLab machines, experiments"]
    UDM --- Guest["Guest · VLAN 50 · 10.0.50.0/24\nInternet-only"]
    UDM --- Servers["Servers · VLAN 40 · 10.0.40.0/24"]
    Servers --- d1["docker01 · 10.0.40.103"]
    Servers --- d2["docker02 · 10.0.40.107"]
    Servers --- d3["docker03 · 10.0.40.108"]
```
Each VLAN is isolated by firewall rules. IoT devices can’t initiate connections to Trusted or Servers — they can talk to the internet and that’s it. Workbench sits between IoT and Trusted: machines I’m actively tinkering with that need more network access than IoT allows but don’t get the same trust as personal devices. Guest is internet-only with no visibility into the rest of the network. The Servers VLAN is where all the Proxmox containers live.
DNS
Local DNS runs on Technitium, deployed as a container on the Servers VLAN. UniFi is configured to hand out the Technitium container’s IP as the DNS server for each network (except Guest, which uses a public resolver). Technitium handles internal name resolution for all local services, blocks ads, and forwards everything else upstream.
This setup means internal services resolve to their actual LAN IPs rather than going out to Cloudflare and back in — and it gives full control over local zones without fighting the UDM’s built-in DNS.
For internal services I use a wildcard *.int.DOMAIN record in Technitium that points at Traefik. Traefik then routes requests to the right service based on hostname: Immich, Jellyfin, RustFS, Uptime Kuma, and the rest of the LAN-only stack all sit behind that internal domain.
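For illustration, wiring one service into that setup can look like the following compose fragment. The hostname, domain, and port here are assumptions, not the lab’s actual config; `example.com` stands in for the real domain.

```yaml
# Hypothetical compose labels for one LAN-only service behind Traefik.
# 8096 is Jellyfin's default HTTP port.
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.int.example.com`)
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
```

With the wildcard record pointing `*.int.example.com` at Traefik, any new router rule like this becomes reachable by name without touching DNS again.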
Technitium is also a better fit for where I want the DNS setup to go next. Pi-hole worked well early on, but keeping multiple Pi-hole instances in sync can get annoying once you have more than one resolver. Technitium has a cleaner cluster story, so adding another DNS node later should be easier than manually keeping blocklists and local records aligned.
Infrastructure
Everything runs on a single Proxmox node now. I started with physical Raspberry Pis for most services, and for a while that was fine — Pi-hole, some bots, a few compose stacks. By mid-2025 the RPis were hitting their limits: not enough RAM for heavier services, slow storage I/O, and managing four separate machines was becoming tedious. In November 2025 I moved everything into Proxmox LXC containers on one x86 machine. The RPi targets are still in the deployment config, but all four are powered down; removing them is on the list.
Proxmox containers follow a simple IP convention: the container’s ID maps directly to the last octet of its IP on the Servers VLAN. Container 103 gets 10.0.40.103, container 107 gets 10.0.40.107, and so on. It makes it trivial to correlate deployment targets with what you see in the Proxmox UI without maintaining a separate lookup table.
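As a one-line sketch of that convention (the helper name is hypothetical; the mapping is exactly as described):

```shell
# Map a Proxmox container ID to its IP on the Servers VLAN.
ct_ip() { echo "10.0.40.$1"; }

ct_ip 103   # → 10.0.40.103 (docker01)
ct_ip 108   # → 10.0.40.108 (docker03)
```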
I run Docker inside three unprivileged LXC containers, each with a clear role:
- docker01 is the external host — public-facing web apps reachable over the internet via Cloudflare Tunnel.
- docker02 is the internal host — dashboards, monitoring, and personal tools that never leave the LAN.
- docker03 is shared infrastructure — Traefik, object storage, and things everything else depends on.
Keeping them separate means a bad deploy on a public app can’t take down internal tooling, and shared infra changes are isolated from both.
Storage is backed by ZFS on the Proxmox host. There is a fast NVMe pool for workloads that benefit from it, and a larger tank pool on NAS HDDs for backups, media, and bulk storage. Proxmox VM and LXC backups are automated and retained for the last 30 days, with another NAS used as an additional backup target. For bind mounts, I attach Proxmox-managed storage to the containers and include that data in the backup plan too.
External access
External traffic goes through Cloudflare Tunnel — no open ports on the firewall, no port forwarding rules in UniFi:
```mermaid
graph LR
    Internet --> CF[Cloudflare]
    CF <-->|"outbound tunnel"| cloudflared[cloudflared\nexternal host]
    cloudflared --> apps[public apps\ndocker01]
    apps --> shared[shared services\ndocker03]
```
cloudflared runs as a container on the external host, opens an outbound connection to Cloudflare’s edge, and public traffic flows back through that tunnel to the app containers on docker01. Some public apps call services on docker03 as dependencies, but Traefik is not in the public request path.
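As a sketch, the ingress side of such a tunnel is configured roughly like this. The tunnel name, hostname, and port are placeholders, not the real config:

```yaml
# Hypothetical cloudflared config: requests for the public hostname are routed
# to an app container on docker01; anything unmatched falls through to a 404.
tunnel: homelab
credentials-file: /etc/cloudflared/homelab.json
ingress:
  - hostname: quoterism.example.com
    service: http://quoterism:3000
  - service: http_status:404
```

The catch-all 404 rule at the end is required by cloudflared, which also means unrecognized hostnames never reach any internal service.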
Traefik is for the internal network. It lives on docker03, receives requests for *.int.DOMAIN, and routes them to services like Immich, Jellyfin, RustFS, and Uptime Kuma. That split keeps external exposure intentionally small while still making LAN services easy to reach by name.
Updates
Some image updates are handled automatically by Watchtower, which runs on the internal host and polls for new images on a schedule. I mainly use it for my own apps, where tests run before I publish a new image, plus a few selected apps on docker01. Everything else is updated deliberately: config changes, new services, and anything that requires syncing files to the host go through the deployment setup described below.
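A minimal sketch of that opt-in pattern, assuming Watchtower’s label-enable mode — the schedule and service names here are placeholders:

```yaml
# With --label-enable, Watchtower only updates containers that opt in via label.
services:
  watchtower:
    image: containrrr/watchtower
    command: --label-enable --schedule "0 0 4 * * *"   # nightly at 04:00
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  myapp:
    image: ghcr.io/example/myapp:latest
    labels:
      - com.centurylinklabs.watchtower.enable=true
```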
Uptime Kuma watches the internal services and sends notifications to Telegram when something stops working. For public apps on docker01, Cloudflare provides the external monitoring and notifications. It is simple, but it means a broken deploy or bad image update is visible quickly.
Each service lives in its own directory under jobs/ with a docker-compose.yml and, in some cases, an .env file. The repo is private, but storing .env files in plain git is still a compromise and not where I want this to stay.
How it started
The first version of deployment was a bash script. Around March 2024 it looked something like this:
```bash
json_data=$(cat "deployment.json")
jobs=$(echo "${json_data}" | jq -r '.jobs[] | @base64')

for job in $jobs; do
  job_name=$(echo "$job" | base64 --decode | jq -r '.name')
  nodes=$(echo "$job" | base64 --decode | jq -r '.nodes[]')

  for node in $nodes; do
    ssh $node "mkdir -p ~/docker/jobs/$job_name"
    scp -r ./jobs/$job_name $node:~/docker/jobs/
    ssh $node "cd ~/docker/jobs/$job_name && docker compose up -d --pull always"
  done
done
```
Read a JSON file, loop over jobs and nodes, scp the directory over, SSH in and bring it up. Straightforward. It worked fine for a few services but scp copies everything every time and there’s no way to deploy a single job without editing the script.
The Rust CLI
By December 2025 — a month after the Proxmox migration, with the service count growing — the bash script was becoming annoying enough to replace. I rewrote it as a Rust CLI called infra-cli. It read a deployment.yaml instead of JSON, used rsync instead of scp (so only changed files transferred), and let you target specific jobs or nodes from the command line.
```yaml
# deployment.yaml
defaults:
  remote_base: ~/.docker/jobs

jobs:
  - name: traefik
    nodes: [docker03]
  - name: quoterism
    nodes: [docker01]
```
```bash
infra-cli deploy traefik
infra-cli deploy --node docker01
```
It was a fun project and the Rust was good practice. But the more I used it the more it felt like I was maintaining something that already existed. The CLI was essentially reimplementing rsync-over-SSH + shell script execution. Around the same time I added a justfile to wrap the build-and-run steps, which made it a bit more ergonomic but also a bit more awkward — you’d run just deploy traefik and that would cargo build the CLI and then invoke it.
Spot
In April 2026 I came across spot — a lightweight SSH-based deployment tool. It does exactly what my Rust CLI did, but it’s a single binary you install once and it’s been doing this longer than my CLI had. I wanted the simplest possible solution for rsync-over-SSH deployment; Ansible would work, but for this setup it felt like more machinery than I needed. The config is a YAML playbook:
```yaml
targets:
  docker01:
    hosts: [{host: "10.0.40.103", user: docker}]
  docker03:
    hosts: [{host: "10.0.40.108", user: docker}]

tasks:
  - name: traefik
    targets: [docker03]
    tags: [networking]
    commands:
      - name: sync
        sync: {src: "jobs/traefik/", dst: "/home/docker/.docker/jobs/traefik/", delete: true}
      - name: pre-deploy
        script: docker stop traefik || true
      - name: deploy
        script: |
          cd /home/docker/.docker/jobs/traefik
          docker compose pull
          docker compose --env-file .env up -d
```
The justfile now just wraps spot directly:
```just
deploy job:
    spot -p spot.yml --ssh-agent -v -n {{job}}

deploy-dry job:
    spot -p spot.yml --ssh-agent -v -n {{job}} --dry
```
Dry-run mode is the thing I use most — just deploy-dry traefik shows exactly what will sync and what scripts will run without touching anything. The --ssh-agent flag means it picks up whatever keys are loaded in the agent, so no key paths hardcoded anywhere.
Current state
The day-to-day is quiet. Watchtower handles selected updates in the background. The only time I touch spot is when something needs a config change or a new service goes up:
```bash
just deploy quoterism     # push config change
just deploy-dry traefik   # preview before touching the reverse proxy
```
The whole jobs/ directory is the source of truth for what’s running where. Spot handles the rsync and the SSH; just handles the ergonomics. Deploys still run from my laptop, not CI, and UniFi firewall rules are still managed through the UniFi console.
I deleted about 400 lines of Rust when I migrated. I don’t regret writing the CLI — it clarified exactly what I needed — but spot is the right tool for this.
There are still rough edges. The old Raspberry Pi targets need to come out of the spot config, and the next real cleanup is moving secrets out of plain git. Even in a private repo, committing .env files is not a habit I want to keep.