
Composing Myself

RaspberryPi - This article is part of a series.
Part 2: This Article

The Goal

In my last post, “Working 22/7,” I introduced my Raspberry Pi home lab. The goal was to build a platform on a simple device that was powerful yet resilient, and easy to tinker with. The key constraints were cost and physical size, but the most important requirement was stability. I needed peace of mind: no risk of a failed experiment taking down the internet and leaving me to face the wrath of a Wi-Fi-less family.

The Setup

The key to this entire setup is defining the whole stack as code, orchestrated by Docker. Everything lives in a single file: docker-compose.yml. Think of it as the blueprint for the entire project. It tells Docker exactly which services to run, how to configure their network, and where to store their data, ensuring the setup is repeatable and easy to manage.

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole
    networks:
      macvlan:
        ipv4_address: ${IP_PIHOLE}
    environment:
      TZ: "America/New_York"
      FTLCONF_webserver_api_password: ${PIHOLE_UI_PASSWORD}
      FTLCONF_dns_listeningMode: "all"
      FTLCONF_dns_upstreams: "${IP_UNBOUND}" # Uses Unbound's parameterized IP
      FTLCONF_dhcp_active: "true"
      FTLCONF_dhcp_start: "${PIHOLE_DHCP_START}"
      FTLCONF_dhcp_end: "${PIHOLE_DHCP_END}"
      FTLCONF_dhcp_router: "${PIHOLE_DHCP_ROUTER}"
      FTLCONF_dhcp_leaseTime: "24h"
      FTLCONF_dhcp_ipv6: "true"
    volumes:
      - "./pihole:/etc/pihole"
      #- './etc-dnsmasq.d:/etc/dnsmasq.d'
    cap_add:
      - NET_ADMIN
      - SYS_TIME
      - SYS_NICE
    restart: unless-stopped

  unbound:
    container_name: unbound
    image: "mvance/unbound-rpi:latest"
    hostname: unbound
    networks:
      macvlan:
        ipv4_address: ${IP_UNBOUND}
    volumes:
      - "./unbound:/opt/unbound/etc/unbound:ro"
    restart: unless-stopped

  caddy:
    container_name: caddy
    hostname: caddy
    image: ghcr.io/caddybuilds/caddy-cloudflare:latest
    restart: unless-stopped
    networks:
      macvlan:
        ipv4_address: ${IP_CADDY}
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - /srv:/srv
      - caddy_data:/data
      - caddy_config:/config
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
      - ACME_DNS=cloudflare
      - IP_GRAFANA=${IP_GRAFANA}
      - IP_UPTIME_KUMA=${IP_UPTIME_KUMA}
      - IP_UMAMI_APP=${IP_UMAMI_APP}

  umami_db:
    image: postgres:alpine
    container_name: umami_db
    hostname: umami_db
    restart: unless-stopped
    networks:
      macvlan:
        ipv4_address: ${IP_UMAMI_DB}
    volumes:
      - umami_db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${UMAMI_DB_USER}
      POSTGRES_PASSWORD: ${UMAMI_DB_PASSWORD}
      POSTGRES_DB: ${UMAMI_DB_NAME}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${UMAMI_DB_USER} -d ${UMAMI_DB_NAME}"]
      interval: 10s
      timeout: 5s
      retries: 5

  umami_app:
    image: docker.umami.is/umami-software/umami:postgresql-latest
    container_name: umami_app
    hostname: umami_app
    restart: unless-stopped
    networks:
      macvlan:
        ipv4_address: ${IP_UMAMI_APP}
    depends_on:
      umami_db:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://${UMAMI_DB_USER}:${UMAMI_DB_PASSWORD}@${IP_UMAMI_DB}:5432/${UMAMI_DB_NAME} # Uses Umami DB's parameterized IP
      DATABASE_TYPE: postgresql
      APP_SECRET: ${UMAMI_APP_SECRET}
      TZ: "America/New_York"
      DISABLE_LOGIN: "false"

  # --- Monitoring Services ---
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    hostname: prometheus
    restart: unless-stopped
    volumes:
      - ./prometheus/config:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=30d"
      - "--storage.tsdb.path=/prometheus"
      - "--web.enable-lifecycle"
    networks:
      macvlan:
        ipv4_address: ${IP_PROMETHEUS}

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    hostname: grafana
    restart: unless-stopped
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
      - TZ=America/New_York
    networks:
      macvlan:
        ipv4_address: ${IP_GRAFANA}

  rpi_node_exporter:
    image: prom/node-exporter:latest
    container_name: rpi_node_exporter
    hostname: rpi_node_exporter
    restart: unless-stopped
    network_mode: host
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - "--path.procfs=/host/proc"
      - "--path.sysfs=/host/sys"
      - "--path.rootfs=/rootfs"
      - "--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    hostname: cadvisor
    restart: unless-stopped
    # privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      macvlan:
        ipv4_address: ${IP_CADVISOR}

  blackbox_exporter:
    image: prom/blackbox-exporter:latest
    container_name: blackbox
    hostname: blackbox
    restart: unless-stopped
    volumes:
      - ./blackbox/config:/config
    command:
      - "--config.file=/config/blackbox.yml"
    networks:
      macvlan:
        ipv4_address: ${IP_BLACKBOX}

  uptime_kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime_kuma
    hostname: uptime_kuma
    restart: unless-stopped
    volumes:
      - uptime_kuma_data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=America/New_York
    networks:
      macvlan:
        ipv4_address: ${IP_UPTIME_KUMA}

volumes:
  caddy_data:
  caddy_config:
  prometheus_data:
  grafana_data:
  uptime_kuma_data:
  umami_db_data:

networks:
  macvlan:
    name: pi0vlan
    driver: macvlan
    driver_opts:
      parent: "${MACVLAN_PARENT_INTERFACE}"
    ipam:
      config:
        - subnet: "${MACVLAN_SUBNET}"
          gateway: "${MACVLAN_GATEWAY}"
          ip_range: "${MACVLAN_IP_RANGE}"
          aux_addresses:
            host_shim_ip: "${MACVLAN_HOST_SHIM_IP}"

A Note on Networking

I use a macvlan network for most of the services. This gives each container its own unique IP address on my home network, just like a physical device. This avoids port conflicts between services and makes network configuration much cleaner.
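One macvlan quirk worth knowing: by default, the host itself cannot talk to containers on the macvlan network, which is why the compose file reserves a `host_shim_ip` aux address. The workaround is a small "shim" macvlan interface on the host. A sketch of that setup, with placeholder values standing in for my actual `MACVLAN_*` variables:

```shell
# Create a macvlan "shim" interface so the Pi host can reach its own containers.
# The interface name and addresses below are hypothetical; substitute your own.
PARENT="eth0"                       # MACVLAN_PARENT_INTERFACE
SHIM_IP="192.168.1.250/32"          # MACVLAN_HOST_SHIM_IP
CONTAINER_RANGE="192.168.1.192/27"  # MACVLAN_IP_RANGE

sudo ip link add macvlan-shim link "$PARENT" type macvlan mode bridge
sudo ip addr add "$SHIM_IP" dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add "$CONTAINER_RANGE" dev macvlan-shim
```

Without this, things like the self-hosted runner on the Pi can't reach the containers it just deployed.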

The Core Services

These are the foundational services that provide core functionality for my network and the services I host.

Pi-hole

Pi-hole is the cornerstone of my network management. Its first job is as a network-wide DNS sinkhole, effectively blocking ads and trackers for every device. Its second crucial role is as my network’s DHCP server, which gives me granular control over IP address assignment.
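A quick way to sanity-check the DNS side from any machine on the LAN (the IP below is a placeholder for my `IP_PIHOLE` value):

```shell
# Ordinary domains should resolve normally; ad-serving domains should come back
# as 0.0.0.0, Pi-hole's default blocking response. 192.168.1.241 is a placeholder.
dig @192.168.1.241 example.com +short
dig @192.168.1.241 doubleclick.net +short
```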

Caddy

Caddy is my reverse proxy of choice. It’s lightweight, incredibly easy to configure, and its killer feature is fully automated HTTPS. It handles all my TLS certificate acquisition and renewal from Let’s Encrypt without any manual intervention. I use the Cloudflare plugin to solve DNS challenges, which lets me secure internal services that aren’t directly exposed to the internet.
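As a rough sketch (not my actual config), a Caddyfile entry for one of those internal services might look like this; the hostname and upstream port are placeholders, and Caddy's `{env.*}` placeholders pull in the same variables the compose file passes through:

```caddyfile
grafana.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy {env.IP_GRAFANA}:3000
}
```

Because the certificate is obtained via the DNS-01 challenge, Caddy never needs an inbound connection from the internet to prove ownership of the name.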

Umami

Since I’m running a blog, I wanted privacy-focused analytics. Umami is a fantastic self-hosted alternative to Google Analytics. It gives me the traffic insights I need without harvesting user data. It runs as two containers: the application itself and a Postgres database to store the data.
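Hooking a site up to Umami is a single script tag in the page head; the hostname and website ID below are placeholders for your own instance:

```html
<script defer src="https://umami.example.com/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"></script>
```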

The Monitoring Stack

To keep an eye on everything, I run a suite of monitoring tools that follow the standard Prometheus/Grafana model.

  • Prometheus: The core of my monitoring is Prometheus, a powerful time-series database that pulls (scrapes) metrics from various sources.
  • Grafana: This is the visualization layer. Grafana queries the data stored in Prometheus and renders it into useful dashboards, showing the health of everything from the Pi’s CPU temperature to container memory usage.
  • The Exporters: These are the agents that expose the metrics.
    • Node Exporter provides host-level metrics about the Raspberry Pi itself.
    • cAdvisor exposes metrics for all the running Docker containers.
    • Blackbox Exporter probes endpoints (like my blog) to test for uptime and responsiveness.
  • Uptime Kuma: As a user-friendly frontend, I also run Uptime Kuma. It provides a simple, clean status page and can send notifications if any service goes down.
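To make the wiring concrete, here's a sketch of what the scrape section of prometheus.yml.template might look like. The job names are illustrative, `${IP_HOST}` is a stand-in for the Pi's own address (node-exporter runs with `network_mode: host`), and the ports are the exporters' defaults (9100, 8080, and 9115):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["${IP_HOST}:9100"]
  - job_name: cadvisor
    static_configs:
      - targets: ["${IP_CADVISOR}:8080"]
  - job_name: blackbox-http
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ["https://example.com"]
    relabel_configs:
      # Standard blackbox pattern: move the target into the ?target= param,
      # keep it as the instance label, and point the scrape at the exporter.
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: "${IP_BLACKBOX}:9115"
```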

The Secrets and Safeguards

A setup like this has sensitive information like API keys and passwords. It also has a lot of moving parts, making automation and safeguards critical. Here’s how I handle the secrets and keep the configuration robust.

  • Unbound: To enhance privacy, Pi-hole doesn’t use a public DNS provider. Instead, it forwards requests to a local Unbound container. Unbound is a validating, recursive DNS resolver that communicates directly with authoritative DNS servers. This means no single third party sees all my DNS traffic.
  • SOPS: To manage secrets in a Git repository, I use SOPS (Secrets OPerationS). It allows me to encrypt a file containing environment variables (secrets.sops.env). This encrypted file is safe to commit to the repository, while the plaintext version remains local and ignored by Git.
  • Pre-commit Hooks: To prevent mistakes like accidentally committing the plaintext secrets file, I use pre-commit hooks. These are automated scripts that run before a commit is finalized. My hooks check for the presence of the plaintext .env file and will block the commit if it’s found, preventing secrets from ever leaving my machine.
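The heart of that pre-commit check can be sketched in a few lines of shell. My real hooks differ in the details; `check_staged` here is a hypothetical helper that takes the staged file list as an argument so it's easy to exercise on its own:

```shell
# Fail (return 1) if any staged path is a plaintext .env file.
# The encrypted secrets.sops.env is allowed; a bare .env anywhere is not.
check_staged() {
  printf '%s\n' "$1" | grep -qE '(^|/)\.env$' && return 1
  return 0
}

# In a real hook this would be: check_staged "$(git diff --cached --name-only)" || exit 1
check_staged "docker/docker-compose.yml
secrets.sops.env" && echo "ok: no plaintext secrets staged"
```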

The Automation

The final piece of this setup is automation. A manual deployment process is prone to error and time-consuming. The goal here is a completely hands-off ‘GitOps’ style workflow, where a git push triggers the entire deployment.

The process is handled by a GitHub Actions workflow defined in deploy-prod.yaml. When I push a change to the main branch, a self-hosted runner on the Raspberry Pi itself executes the following steps:

name: Deploy Docker Stack to Prod

on:
  push:
    branches:
      - main
    paths:
      - "docker/**"
      - "secrets.sops.env"
      - ".github/workflows/deploy-prod.yaml"
  workflow_dispatch:

jobs:
  deploy:
    name: Deploy to Prod
    runs-on: self-hosted

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Verify SOPS and age installation on runner
        run: |
          echo "Checking SOPS version..."
          sops --version --check-for-updates
          echo "Checking age version..."
          age --version

      - name: Decrypt secrets.sops.env to .env file on Pi
        env:
          SOPS_AGE_KEY: ${{ secrets.RUNNER_AGE_PRIVATE_KEY }}
          TARGET_ENV_FILE: ${{ secrets.DOCKER_DIR }}/.env
          # Using an intermediate env var for the target file path for clarity in the script
        run: |
          echo "Decrypting secrets from $GITHUB_WORKSPACE/secrets.sops.env to ${TARGET_ENV_FILE}"
          sudo -E sops --decrypt "$GITHUB_WORKSPACE/secrets.sops.env" > "${TARGET_ENV_FILE}"
          sudo -E chmod 600 "${TARGET_ENV_FILE}"
          echo ".env file created at ${TARGET_ENV_FILE} with restricted permissions."

      - name: Prepare Docker Configurations (Template Configs)
        env:
          DOCKER_CONFIG_TARGET_DIR: ${{ secrets.DOCKER_DIR }} # For the .env file path
        run: |
          echo "Sourcing environment variables from ${DOCKER_CONFIG_TARGET_DIR}/.env"
          set -a # Automatically export all variables from sourced file
          if [ -f "${DOCKER_CONFIG_TARGET_DIR}/.env" ]; then
            source "${DOCKER_CONFIG_TARGET_DIR}/.env"
          else
            echo "ERROR: .env file not found at ${DOCKER_CONFIG_TARGET_DIR}/.env. Cannot proceed with templating."
            exit 1 # Exit if .env file is crucial and not found
          fi
          set +a

          # Define input and output paths using GITHUB_WORKSPACE directly
          INPUT_TEMPLATE_PATH="$GITHUB_WORKSPACE/docker/prometheus/config/prometheus.yml.template"
          OUTPUT_PROCESSED_PATH="$GITHUB_WORKSPACE/docker/prometheus/config/prometheus.yml"

          # --- Prometheus ---
          echo "Templating Prometheus configuration..."
          echo "Input template: ${INPUT_TEMPLATE_PATH}"
          echo "Output file: ${OUTPUT_PROCESSED_PATH}"

          # Check if the template file actually exists before trying to use it
          if [ ! -f "${INPUT_TEMPLATE_PATH}" ]; then
            echo "ERROR: Template file NOT FOUND at ${INPUT_TEMPLATE_PATH}"
            echo "Listing contents of $GITHUB_WORKSPACE/docker/prometheus/config/ for debugging:"
            ls -la "$GITHUB_WORKSPACE/docker/prometheus/config/" || echo "Could not list directory."
            exit 1
          fi

          # Perform the substitution
          # All variables sourced from .env should now be available in the environment for envsubst
          envsubst < "${INPUT_TEMPLATE_PATH}" > "${OUTPUT_PROCESSED_PATH}"

          if [ $? -eq 0 ]; then
            echo "Prometheus configuration templated successfully to ${OUTPUT_PROCESSED_PATH}"
          else
            echo "ERROR: envsubst command failed."
            exit 1
          fi

          # --- Unbound ---
          UNBOUND_TEMPLATE_PATH="$GITHUB_WORKSPACE/docker/unbound/unbound.conf.template"
          UNBOUND_OUTPUT_PATH="$GITHUB_WORKSPACE/docker/unbound/unbound.conf" # Final config name
          echo "Templating Unbound configuration..."
          echo "Input template: ${UNBOUND_TEMPLATE_PATH}"
          echo "Output file: ${UNBOUND_OUTPUT_PATH}"
          if [ ! -f "${UNBOUND_TEMPLATE_PATH}" ]; then
            echo "ERROR: Unbound template file NOT FOUND at ${UNBOUND_TEMPLATE_PATH}"
            exit 1
          fi
          # IP_PIHOLE and IP_BLACKBOX are now in the environment from the sourced .env file
          # List only the specific variables needed for Unbound to prevent unwanted substitutions
          envsubst '${IP_PIHOLE} ${IP_BLACKBOX}' < "${UNBOUND_TEMPLATE_PATH}" > "${UNBOUND_OUTPUT_PATH}"
          if [ $? -eq 0 ]; then
            echo "Unbound configuration templated successfully."
          else
            echo "ERROR: envsubst for Unbound command failed."
            exit 1
          fi

      - name: Sync configuration files
        run: |
          echo "Syncing repository contents to ${{ secrets.DOCKER_DIR }}"
          rsync -av --checksum \
            "$GITHUB_WORKSPACE/docker/" \
            "${{ secrets.DOCKER_DIR }}/" \
            --exclude ".git/" \
            --exclude ".github/" \
            --exclude "*.sops.env" \
            --exclude "prometheus/config/prometheus.yml.template" \
            --exclude "unbound/unbound.conf.template"

      - name: Pull latest Docker images
        run: |
          cd "${{ secrets.DOCKER_DIR }}"
          sudo docker compose pull

      - name: Apply Docker Compose changes
        run: |
          cd "${{ secrets.DOCKER_DIR }}"
          sudo docker compose up -d --remove-orphans

      - name: Prune unused Docker images
        if: success()
        run: |
          echo "Pruning unused Docker images..."
          sudo docker image prune -af

  1. Checkout Code: Pulls the latest version of the repository.
  2. Decrypt Secrets: Uses SOPS and a key stored in GitHub Secrets to decrypt secrets.sops.env into the required .env file.
  3. Template Configs: Injects secrets and variables from the .env file into configuration templates for services like Prometheus.
  4. Sync Files: Copies the final configuration files to the Docker directory on the Pi.
  5. Pull Images: Fetches the latest versions of all Docker images defined in the docker-compose.yml file.
  6. Restart Stack: Runs docker compose up -d to apply any changes, restarting containers as needed.
  7. Prune Images: Cleans up old, unused Docker images to save space.

The result is a fully automated deployment pipeline. A simple git push ensures that any change—from updating a service version to changing a dashboard—is applied consistently and reliably, with no manual intervention required.

The Future

In my next post, I’ll cover using Google’s Gemini to help me learn and create these services. Let’s just say it’s been interesting.

Author: GeeksBsmrt
My name’s Adam, and for the better part of two decades, I’ve been working in IT support and on-prem engineering: making things work, keeping them secure, and automating them as much as possible. Why? Buttons and humans. I hate buttons, especially those my wife and kids love to push. Then there are the humans. And no, I don’t hate humans. However, they are error-prone, and as any IT professional will tell you, users are the most error-prone humans on the planet. Automation minimizes both.