My configs for NixOS and continued exploration into security-conscious, low-maintenance, highly reproducible Linux systems.
This config can't really be deployed elsewhere or copied directly. Feel free to take whatever you want from it, or to send me feedback, but note that the default configs reference services deployed by this config.
This config is security conscious, but I wouldn't recommend it as any sort of reference for best practices. A lot of effort goes into exploring high-level ideas, enforcement of ephemeral systems, and access management, but these are still machines that I log into as a user and interact with physically, so there's a lot of room for improvement. Very little effort is put into physical security, and user space has a lot of privileges I wouldn't recommend in a production environment.
High level overview:
- Flake based config with NixOS and Home-Manager for multiple hosts
- Encrypted secrets with `sops-nix`
- YubiKey for portable signing / encryption / auth via GPG and FIDO/U2F for:
  - login
  - sudo
  - ssh
  - sops secrets
  - git commit signing
- ZFS root filesystems with native ZFS encryption and LUKS key management and decryption
- Secure boot with lanzaboote and TPM LUKS unlocks
- Ephemeral root file system with opt-in persistence via `impermanence`
- Hydra CI/CD that automatically pre-populates a `nix-serve` binary cache
- Automatic dependency update workflows
- Automatic upgrades from CI / cache with `nixos-hydra-upgrade`
- VMs with PCI device passthrough via OVMF
- `hyprland` wayland desktop environment
- Vanity custom desktop shell built with `astal`
- Custom install / debug usb, along with bootstrap, installation, and debugging scripts
- Local DNS along with Let's Encrypt certificate protected services at `https://<service>.<host>.decent.id/`
- Cloudflare tunnels for public services at `https://<service>.decent.id/`
This repo implements the dendritic nix pattern. I have a blog post about why I migrated to this pattern and my thoughts about it here.
The TLDR is that every file in the modules directory is a flake-parts module, and they are all recursively imported in flake.nix.
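For illustration, here is a minimal sketch of what that wiring can look like, assuming the `import-tree` helper commonly used with the dendritic pattern (the actual `flake.nix` in this repo may collect modules differently):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    import-tree.url = "github:vic/import-tree";
  };

  outputs = inputs@{ flake-parts, import-tree, ... }:
    # every *.nix file under ./modules is picked up as a flake-parts module
    flake-parts.lib.mkFlake { inherit inputs; } (import-tree ./modules);
}
```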
More information on individual modules can be found in the modules directory.
- `magnolia`: Laptop, runs no services, mobile work only.
- `oak`: Desktop workstation, extensive libvirt / QEMU config with hardware passthrough.
  - books.oak.decent.id: `kavita` ebook library.
  - cache.oak.decent.id: `nix-serve` binary cache for other less beefy machines.
  - hydra.oak.decent.id: `hydra` continuous integration and continuous delivery.
  - jellyfin.oak.decent.id: `jellyfin` media server.
  - ntfy.oak.decent.id / ntfy.decent.id: `ntfy` pub-sub / push notification service.
  - rss.oak.decent.id: `miniflux` rss feed reader and browser app.
- `redbud`: Retired laptop. Headless server acting as a RAOP (AirPlay) audio receiver and metrics server.
  - dash.redbud.decent.id: `grafana` dashboard and data visualization service.
  - logs.redbud.decent.id: `loki` log aggregation service.
  - metrics.redbud.decent.id: `prometheus` monitoring and alerting service.
- `warden`: NUC mini-pc, headless, tailscale exit node.
  - adguard.warden.decent.id: `AdGuard Home` DNS level adblocker and DNS server.
  - ha.warden.decent.id: `Home Assistant` home automation service.
All of these <service>.<host>.decent.id hostnames are only defined in local DNS, and are only accessible locally or by VPN connection.
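Because these hosts aren't publicly reachable for HTTP-01 challenges, real certificates for LAN-only services typically come from a DNS-01 wildcard certificate. A hedged sketch of how that can look in NixOS (the email, DNS provider, secret path, and upstream port are all placeholders, not this repo's values):

```nix
{
  security.acme = {
    acceptTerms = true;
    defaults.email = "admin@example.com";          # placeholder
    certs."oak.decent.id" = {
      domain = "*.oak.decent.id";
      dnsProvider = "cloudflare";                  # assumption: DNS-01 via Cloudflare
      credentialsFile = "/run/secrets/acme-env";   # API token, e.g. from sops-nix
    };
  };
  services.nginx.virtualHosts."rss.oak.decent.id" = {
    useACMEHost = "oak.decent.id";
    forceSSL = true;
    locations."/".proxyPass = "http://127.0.0.1:8080";  # placeholder upstream
  };
}
```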
All hosts utilize ZFS for their root partitions.
I've made a scheme that allows systems to have a single, self-contained ZFS zpool offering both native ZFS encryption and LUKS key management and unlock options. If you're interested, I have a blog post going into more detail about it here, and it is managed by my zfs module.
ZFS is extremely stateful. It's the one place where I didn't try to declaratively manage things with NixOS. Instead it is treated like a layer that NixOS builds on top of.
This idea is mostly taken from Graham Christensen's blog post "Erase your darlings".
NixOS boots as long as it has access to /boot and /nix. If hyperparabolic.zfs.rollbackSnapshot is specified, zfs rollback -r %snapshot% is executed immediately after filesystems are mounted in initrd, in this case rolling back to a blank filesystem snapshot of /rpool/crypt/local/root. By default there is zero config drift, and the system always boots with "new system smell."
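The upstream "Erase your darlings" approach wires this rollback in with a few lines of script-based initrd config; a sketch of that classic version (this repo hides the equivalent behind `hyperparabolic.zfs.rollbackSnapshot`):

```nix
{ lib, ... }:
{
  # roll the root dataset back to its blank snapshot on every boot,
  # after the pool's devices are available but before / is mounted
  boot.initrd.postDeviceCommands = lib.mkAfter ''
    zfs rollback -r rpool/crypt/local/root@blank
  '';
}
```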
Impermanence allows opt-in persistence of specific files and directories between boots. /persist is a mirror of the root filesystem containing only the directories and files to persist between boots, and the impermanence config sets up bind mounts and symlinks in the root filesystem pointing to their persisted counterparts.
This is huge for reproducibility. All state needs to be explicitly declared.
My mind compares it to the idea of container images and volumes: the root filesystem is the container image, and impermanence-persisted filesystems are overlaid on top of it, acting as volumes.
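A small, illustrative `impermanence` snippet (the paths and user are generic examples, not this repo's persistence list):

```nix
{
  environment.persistence."/persist" = {
    hideMounts = true;
    directories = [
      "/var/log"
      "/var/lib/nixos"   # uid/gid maps NixOS needs across boots
    ];
    files = [
      "/etc/machine-id"
    ];
    users.alice = {      # hypothetical user
      directories = [ ".ssh" "projects" ];
    };
  };
}
```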
Impermanence combined with ZFS has several very nice features. `zfs diff rpool/crypt/local/root@blank` shows every filesystem change that isn't persisted, and temporary snapshots may even be used to see the diff of single commands:
```bash
zfs snapshot rpool/crypt/local/root@tmp1
# do something, install a package, run a command, etc.
zfs diff rpool/crypt/local/root@tmp1
# clean up the temporary snapshot when done
zfs destroy rpool/crypt/local/root@tmp1
```

This makes it trivially easy to explore changes that packages and services make on a machine to find out how to persist them.
Stateful changes to a system are also stored in a specific dataset. All of the systems I maintain have been re-imaged in place, utilizing ZFS snapshots to persist data between instances of the same host.
A `zpool scrub` is performed on all zpools weekly, and automatic trim runs continuously.
Auto-snapshotting can be configured on a dataset-by-dataset basis. The local and safe datasets primarily act as policy containers for these changes. local datasets generally aren't snapshotted automatically (only manual snapshots on .../local/nix when performing operations that change the nix db in ways that can't be reversed), while safe datasets generally keep a set of rolling snapshots that rotate automatically and may occasionally be sent via `zfs send` for remote backups. ZFS performs copy-on-write operations, so these snapshots generally consume little disk space unless files are modified regularly.
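The standard NixOS knobs for the scrub / trim / snapshot policy look roughly like this (a sketch; this repo's zfs module may wrap them behind its own options):

```nix
{
  services.zfs.autoScrub = {
    enable = true;
    interval = "weekly";
  };
  services.zfs.trim.enable = true;   # periodic `zpool trim`
  services.zfs.autoSnapshot.enable = true;

  # which datasets participate is controlled per dataset via the
  # com.sun:auto-snapshot property, e.g.:
  #   zfs set com.sun:auto-snapshot=true  rpool/crypt/safe
  #   zfs set com.sun:auto-snapshot=false rpool/crypt/local
}
```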
zfs allow enables non-root users to zfs send backups from systems without disk redundancy to systems with raidz or mirrored zpools for backups.
Any degradation detected during these automatic operations is reported to a centralized location via webhooks using ZED. Usage metrics are reported to Grafana dashboards.
Secrets are stored encrypted in this repo. sops is used to encrypt secrets with my YubiKey-stored PGP key, in addition to each host's SSH host key via age. sops-nix decrypts these secrets at activation time, keeping them encrypted even in the nix store.
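A hedged sketch of the sops-nix side of that (file and secret names are hypothetical):

```nix
{
  sops = {
    defaultSopsFile = ./secrets/common.yaml;   # hypothetical path
    # derive an age identity from the host's SSH key at activation time
    age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];
    secrets."wireguard/private-key" = {
      owner = "systemd-network";               # example consumer
    };
  };
}
```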
The host oak includes configuration to run VMs with the following features (a rough config sketch follows the list):
- Hardware passthrough, including GPU passthrough via OVMF for near-native graphics performance
- Seamless host / VM audio device sharing via pipewire with user sessions
- Evdev input device passthrough (left-ctrl + right-ctrl to switch mouse and keyboard between host and VM)
- CPU pinning and process isolation for performance and security
- All running as a non-root user
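The usual NixOS ingredients for this kind of setup look something like the following (the IOMMU flags and PCI IDs are placeholders; an Intel CPU and an example NVIDIA GPU are assumed):

```nix
{
  boot.kernelParams = [ "intel_iommu=on" "iommu=pt" ];
  # claim the passthrough GPU with vfio-pci before host drivers load
  boot.initrd.kernelModules = [ "vfio_pci" "vfio" "vfio_iommu_type1" ];
  boot.extraModprobeConfig = ''
    options vfio-pci ids=10de:2204,10de:1aef
  '';

  virtualisation.libvirtd = {
    enable = true;
    qemu.ovmf.enable = true;   # UEFI firmware for the guests
  };
}
```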
Many ideas pulled from these awesome people: