Homeprod Management with Docker Compose

Recently I decided to change how I manage my homeprod environment (home production, i.e. the things that other people in my household rely on and tell me if they're down). I moved everything over to docker-compose stacks managed with a small(ish) shell script. Skip down a bit if you don't want backstory.

Backstory

The situation before was a mishmash of various things I've tried over the years. For a while I was deploying everything with dokku on one big VM running on a proxmox host. That worked for some things, but the 12-factor architecture doesn't always fit.

For those things I used Portainer to launch docker-compose stacks. Again, that works for a lot, but sometimes it's really annoying. Portainer's docker-compose support is limited insofar as you can't really ship configs alongside a docker-compose file to be mounted into a container, so for anything custom you need to build your own derived image. Once you have that, there are a lot of pointy-clicky steps to actually refresh and deploy an update.

The Fleet

I have a handful of systems running in homeprod:

  • hypnotoad is the VM host. It's an HP Elitedesk 800 G3 mini running Proxmox 7
  • netsvc1 is a Dell Wyse 3040 thin client running Technitium for DNS and DHCP with a nice web UI
  • There are 4 other Dell Wyse 3040 thin clients scattered around, one each in the house, office, garage, and shed. These have Z-Wave and Zigbee USB sticks and are running zwavejs2mqtt and zigbee2mqtt.

All of the thin clients and most of the VMs are running Alpine Linux because the small runtime footprint meshes well with the 2 GB of memory and 8 GB of storage on the thin clients.

Every VM and physical node is connected to my Tailscale network as well.

Docker Compose Stack

The whole idea of this refactor is to centralize and simplify management without running SPOF orchestrators or heavy agents on the nodes (again: see the thin client specs above). I thought about a lot of options, but nothing really clicked until one day, while perusing a Hacker News thread, I came across a tossed-off comment from someone who said they just used the official docker compose container, bundled their stack into the image, passed in the docker socket, and everything Just Worked.

This sounded like magic, so of course I had to try it. Of course it worked, but it's very limiting. It implies a single host per derived image, for one thing, and that doesn't work with my fleet.

The kernel of the idea was really great, though, so I iterated on it and came up with docker-compose-stack. Docker compose stack (terrible name, sue me) is basically the same idea taken to a further extreme.

How it works

Here's the workflow that docker-compose-stack runs:

  1. At startup, start.sh loads secrets from disk and, optionally, from a script named download_secrets.sh. It then runs run_compose.sh.
  2. Runs anything declared in hosts.yml for the current host as a pre-start script within the context of the dockerstack-root container
  3. Copies declared configs into /var/lib/docker/stack_configs
  4. Creates a .env file from environment variables declared in hosts.yml
  5. Runs sha256sum over the contents of /var/lib/docker/stack_configs and stuffs the result in CONFIGS_SHA
  6. Composes a base docker-compose invocation from the list of stacks declared in hosts.yml
  7. Drops some cron jobs
  8. Runs docker-compose
  9. execs into crond to run the crons set up in step 7
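The hashing and command-composition steps (5 and 6) can be sketched in a few lines of shell. The function names and the stacks/ directory layout below are my own assumptions for illustration, not the actual docker-compose-stack implementation:

```shell
#!/bin/sh
# Illustrative sketch of steps 5 and 6 above -- function names and the
# stacks/ layout are assumptions, not the real implementation.

# Step 5: hash every file under the config dir in a stable order, so the
# resulting value changes whenever any config file changes.
configs_sha() {
  find "$1" -type f | sort | xargs -r sha256sum | sha256sum | awk '{print $1}'
}

# Step 6: build a single docker-compose invocation from the stacks
# declared for this host in hosts.yml.
build_compose_cmd() {
  cmd="docker-compose"
  for stack in "$@"; do
    cmd="$cmd -f stacks/$stack.yml"
  done
  printf '%s' "$cmd"
}
```

With stacks named dns and mqtt declared for a host, `build_compose_cmd dns mqtt` produces `docker-compose -f stacks/dns.yml -f stacks/mqtt.yml`, and appending `up -d` brings the merged stack up.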

Even without an entry in hosts.yml, a host running docker-compose-stack will always run a Watchtower container on a very short refresh cycle. Watchtower checks every 30 seconds to see if any container images have been updated, downloads the update, and re-creates each updated container. That lets me update my docker-compose-stack container with GitHub Actions, and every host just updates itself accordingly.
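A compose fragment for that always-on updater might look something like this. The `--interval` and `--cleanup` flags and the containrrr/watchtower image are real Watchtower options; the service layout is my guess, not pulled from docker-compose-stack:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    # poll for updated images every 30 seconds; remove old images afterwards
    command: --interval 30 --cleanup
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```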

That's it. That's the whole thing. I've been running my implementation of docker-compose-stack across 12 VMs and physical nodes for about a week and it's been ticking along nicely.

Configs and Secrets

There's some nuance around secrets and configs that probably deserves some explanation.

Configs are exactly what you expect. Nginx configs, whatever. These get dropped into a directory on the host to be mapped in as container bind mounts.
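For example, a stack file might bind-mount a dropped config straight into a container. The service and file names here are hypothetical; only the stack_configs path comes from the workflow above:

```yaml
services:
  nginx:
    image: nginx:alpine
    volumes:
      # nginx.conf was copied into stack_configs by the pre-start steps
      - /var/lib/docker/stack_configs/nginx.conf:/etc/nginx/nginx.conf:ro
```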

Secrets are loaded from a special file on disk, and they can also be loaded from a script. Script loading works by running the script, capturing and evaling its output, and dropping the sha256sum of that output into a file in the config directory.
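A minimal sketch of that load-eval-hash pattern, assuming the secrets script prints KEY=value shell assignments. The function name and file layout are mine, not the project's:

```shell
#!/bin/sh
# Sketch of the secret-loading pattern: run the script, eval its output,
# and record a hash of that output. Names and paths are illustrative.

load_secrets() {
  # $1: script that prints KEY=value shell assignments
  # $2: config directory to drop the hash marker into
  secrets="$(sh "$1")"
  # eval the assignments into the current shell environment
  eval "$secrets"
  mkdir -p "$2"
  # hash the raw output so config-change detection notices secret rotations
  printf '%s' "$secrets" | sha256sum | awk '{print $1}' > "$2/secrets.sha256"
}
```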

The nuance comes in when we want to update secrets or configs on running containers. By including CONFIGS_SHA in a service's list of environment variables, that service is automatically recreated whenever the SHA changes; otherwise config changes aren't reliably picked up.
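In compose terms that looks something like this (the service is hypothetical): because the container's declared environment changes whenever CONFIGS_SHA changes, docker-compose recreates the container on the next up.

```yaml
services:
  nginx:
    image: nginx:alpine
    environment:
      # changing this value forces docker-compose to recreate the container
      - CONFIGS_SHA=${CONFIGS_SHA}
```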

In my homeprod environment I'm managing secrets using tailscale-op-proxy which lets me tag a node with a Tailscale ACL tag and grab any secrets out of 1Password tagged with that same ACL tag. Nodes only get the secrets that they need and I get to manage secrets with the 1Password application rather than ssh'ing into each machine and managing them with vim.

Alternatives Considered

I looked at Kubernetes, but that was way too heavy. I also looked at Nomad, which felt limiting, and it runs afoul of my goal of not running a SPOF orchestrator at all. I'm sure I could have implemented all of this with Ansible, SaltStack, Puppet, or whatever else.

I honestly just thought the docker-compose-within-docker idea was clever and decided to run with it.

Docker compose also has profiles built in, but those are confusing: if you use profiles and decide to remove a service from a node, that service won't actually be removed, because (according to the compose authors) it's still part of the stack formation. That doesn't work for me, hence the machinations around building a compose command from a bunch of stack files.
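For reference, a profile-gated service looks like this (the names are hypothetical). Starting compose with `--profile garage` brings it up, but later dropping the profile leaves the old container in place rather than removing it:

```yaml
services:
  zigbee2mqtt:
    image: koenkk/zigbee2mqtt
    # only started when the "garage" profile is enabled
    profiles: ["garage"]
```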

You too?

I have esoteric tastes and don't really care about exploring production-grade infra systems like k8s in a home environment. My primary concern is that services stay up.

If that fits you too, and you run some stuff that other people rely on in your house, maybe take a look at docker-compose-stack.
