Motivation

To understand why NixOS is cool, go to nixos.org and download the operating system ISO, making sure you download NixOS and not just the Nix package manager, and install it in a VM. If presented with a choice of desktops, select GNOME for the purposes of this exercise.

Once you have the GNOME install running, open /etc/nixos/configuration.nix as the super user, search for desktopManager.gnome and replace gnome with plasma, then find displayManager.gdm and replace gdm with sddm. Write the file and close your editor. Back in your shell, run sudo nixos-rebuild boot && sudo reboot.

When your system comes back up, take a second to absorb the fact that you’re now looking at SDDM. Now log in. For all intents and purposes, you no longer have any GNOME applications on your system. Once you’ve come to grips with the fact that you switched from the GNOME+GTK ecosystem to the Plasma+Qt ecosystem as fast as your internet connection would allow, go ahead and reboot again, but this time don’t let the boot process get past GRUB/systemd-boot. Hit the down arrow once to select what is known as the previous “generation” and hit Enter. Now pick your jaw up off the floor again when you see GDM and GNOME instead of SDDM and Plasma. If you reboot again from here without doing anything, you’ll be back in KDE.
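In configuration terms, the desktop switch above amounts to flipping two pairs of options. A sketch of the before and after (exact option paths vary slightly between NixOS releases, so treat this as illustrative):

```nix
# Before: GNOME on GDM
services.xserver.desktopManager.gnome.enable = true;
services.xserver.displayManager.gdm.enable = true;

# After: Plasma on SDDM
services.xserver.desktopManager.plasma5.enable = true;
services.xserver.displayManager.sddm.enable = true;
```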

If that isn’t at least intriguing, then I apologize for wasting your time; feel free to stop here. If you’re intrigued or impressed at all, keep reading, because a whole new world awaits. With that out of the way, let’s get started.

What is NixOS?

NixOS is a Linux distribution whose core focus is system-level reproducibility. The idea is that you define a system on machine A in some way, then take that definition to another machine, B, and get an exact copy of the system running on A. You can then repeat that process for machines C through Z and beyond without having to worry about forgetting a firewall rule, a drive mount, or anything else.

How?

The way NixOS accomplishes this is by creating a special directory at /nix/store that is, for all intents and purposes, read only. This directory contains all of the packages on the system as well as all of the configuration files required for the system to run. This makes NixOS a bit of a head-scratcher for the uninitiated, because /etc is a symlink farm pointing to /etc/static/<file>, which is itself a symlink farm pointing to /nix/store/<hash>-<file>. Adding to the confusion, /bin is empty save for sh, and /usr/bin is empty save for env. What’s going on here?

The answer lies in $PATH:

/run/wrappers/bin:/home/author/.nix-profile/bin:/etc/profiles/per-user/author/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin:/usr/sbin:/usr/local/sbin

Notice /etc/profiles/per-user/author/bin and /run/current-system/sw/bin. Those two directories are where you’ll find most of the binaries on a fresh install of NixOS. Once again, they’re symlink farms, and once again, the paths they point to contain hashes.

This is all part of NixOS’s approach to reproducibility. Every package in NixOS, and every file under the control of the package manager, is hashed. Not only that, but every dependency of every package is hashed, forming what is essentially a small blockchain for every package. This completely eliminates dependency hell: you can have every version of glibc, Python, Rust, and Node.js on your system side by side at the same time, and nothing will ever conflict, because of how NixOS and its package manager, Nix, manage paths, packages, and hashes. This is not especially space efficient, because dependencies that differ by even a single hash each get their own store path, but it’s worth it. Also, Nix is intelligent enough to use hard links where appropriate, and there are configuration options that let you manage how much you keep around and for how long. I keep every “version”, known as a generation, that I’ve built in the last 14 days, and my /nix/store is 47G according to du -hs. I’m okay with this because if I ever need to roll back to another generation, I can.
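My 14-day retention policy is itself just a few lines of configuration. A sketch of what that looks like, using the standard garbage collection and store optimisation options:

```nix
# Run the garbage collector automatically and drop generations older than 14 days
nix.gc = {
  automatic = true;
  dates = "weekly";
  options = "--delete-older-than 14d";
};

# Deduplicate identical files in /nix/store via hard links
nix.settings.auto-optimise-store = true;
```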

Now, why is everything a symlink? Again, NixOS has your back. Updates on NixOS are atomic, similar to something like Fedora Silverblue or openSUSE MicroOS. Additionally, like those distros, NixOS allows for rollbacks. Unlike them, NixOS achieves rollbacks via the symlinks mentioned earlier, which means you can install NixOS on any file system supported by Linux without losing features. (We’ll come back to file systems; it gets juicy for the home lab stuff.) NixOS performs an update by building a new generation of itself and then switching the symlinks. The switch only happens if the generation builds successfully, so you can lose power in the middle of an update and be 100% fine. I’ve (accidentally) tested it.

Silverblue + Docker? Not so fast!

At this point you might be thinking, “So, if I want rollbacks and no dependency hell, I’ll use Silverblue and Docker.” Fair enough. Up to this point, Silverblue and Docker do roughly the same thing NixOS does, minus the file system flexibility. This is where we step beyond Silverblue and Docker and into system-level reproducibility. Recall how /etc is a symlink farm into /nix/store, and how /nix/store is read only? This presents a problem if you want to configure nginx, Samba, NFS, or any other package that relies on /etc. So what’s the solution? The one directory in /etc that isn’t part of the symlink farm: /etc/nixos. Within /etc/nixos is a file called configuration.nix. This is where the magic happens.

NixOS is built around the Nix package manager and configured in the Nix programming language. Earlier I mentioned that NixOS rebuilds itself on updates. The rebuild happens based on the configuration defined in /etc/nixos/configuration.nix, which is written in the Nix language. For example, enabling NFS on NixOS looks like this:

services.nfs.server.enable = true;
services.rpcbind.enable = true;
services.nfs.server.exports = ''
  /srv 192.168.1.0/24(rw,sync,crossmnt,fsid=0)
  /srv/music 192.168.1.0/24(rw,sync)
  /srv/video 192.168.1.0/24(rw,sync)
  /srv/storage 192.168.1.0/24(rw,sync)
  /srv/home 192.168.1.0/24(rw,sync)
'';

As you write basic Nix expressions, the language starts looking like wonky JSON. It’s far more powerful than JSON, though, because it’s actually a functional programming language. A more advanced example of this is found in my Syncthing config:
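The “wonky JSON” impression comes from attribute sets. The dotted option paths used earlier are just shorthand for nested sets, so the NFS snippet could equally be written like this:

```nix
# Equivalent to services.nfs.server.enable = true; and friends
services.nfs.server = {
  enable = true;
  exports = ''
    /srv 192.168.1.0/24(rw,sync,crossmnt,fsid=0)
  '';
};
```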

services.syncthing = {
  enable = true;
  user = "author";
  group = "users";
  dataDir = "/home/syncthing";
  guiAddress = "0.0.0.0:8384";

  extraOptions = {
    gui = {
      user = "author";
      password = "password";
    };
  };

  devices = {
    "kaylee" = { id = "syncthing UUID"; };
  };

  folders = builtins.listToAttrs (map (f:
    {
      name = "${f}";
      value = builtins.listToAttrs [
        { name = "path"; value = "${config.users.users.author.home}/${f}"; }
        { name = "devices"; value = [ "kaylee" ]; }
      ];
    }) sharedDirectories);
};

The folders = line is a function call that maps over a sharedDirectories variable defined elsewhere in my config. The specific details aren’t terribly important at this point. What is important is how this gets translated into a working system: the Nix package manager (referred to as “Nix” from here on) takes the configuration written in the Nix language (referred to as simply “the configuration” from here on) and makes the statements in it true in the running system, either immediately or upon reboot.
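To make the folders expression concrete: assuming sharedDirectories were the list [ "Documents" "Music" ] and my home directory were /home/author (both assumptions for illustration), it would evaluate to an attribute set I could just as well have written by hand:

```nix
# What the map/listToAttrs expression expands to for two hypothetical directories
folders = {
  "Documents" = {
    path = "/home/author/Documents";
    devices = [ "kaylee" ];
  };
  "Music" = {
    path = "/home/author/Music";
    devices = [ "kaylee" ];
  };
};
```

The function form just saves me from repeating that structure for every directory I share.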

So what does all this mean?

All of the above means that you can take the /etc/nixos/configuration.nix file from machine A, put it on any other NixOS machine, and within a dozen keystrokes and a few minutes of processing have exactly the configuration you had on machine A running on the new machine. It’s infrastructure as code applied in the extreme, and I love it.

But Ansible?

I’ve left a few things out, but Nix and NixOS do things that Ansible can’t. With a bit of shenanigans you can ensure you get the exact same versions of packages across deployments: Nix lets you pin entire systems or single packages to specific versions, down to the granularity of a single git commit. It’s also possible with NixOS to audit your entire dependency chain because of the way packaging works and the hashes involved. Ansible can’t do any of that, or if it can, it’s not as easy as NixOS makes it. And this is just the basics. I’m leaving out fun things like building a configuration for a system “over there” on the machine in front of you, the same way you would deploy an Ansible playbook to a remote machine.
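As a sketch of what pinning looks like, here is one common pattern: importing nixpkgs at an exact commit and building packages from that snapshot. The commit hash is a placeholder, not a real revision, and in practice you would also supply a sha256 to keep the fetch itself reproducible:

```nix
let
  # Pin nixpkgs to a specific git commit; everything built from this
  # package set is tied to that exact revision of the repository.
  pinnedPkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz";
  }) {};
in
{
  environment.systemPackages = [ pinnedPkgs.ripgrep ];
}
```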

Okay, I know how to set up a NixOS NFS server, but how do I install ripgrep?

It does appear that I’ve jumped in at the deep end without covering the basics. NixOS and Nix have a way to do this too, and like everything else so far, it’s not what you’re used to. Within your /etc/nixos/configuration.nix there will be a line that looks like environment.systemPackages = with pkgs; [. This line starts a list in which you tell Nix what packages you want. On my system I have multiple instances of this list that get concatenated together in the background, but as an example, here’s how to install ripgrep, nfs-utils, fio, iperf, and vim:

environment.systemPackages = with pkgs; [
    fio
    iperf
    nfs-utils
    ripgrep
    vim
];

Don’t worry, the NixOS repositories are huge. I left Arch to come to NixOS and haven’t once wished I had the AUR. Part of the size is definitely due to the granularity: you can install individual Python libraries without pip, or individual Haskell libraries without cabal, and you can define entire dev environments using Nix. This doesn’t mean that desktop packages are ignored. NixOS has every desktop package I could possibly want.
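For example, here is what a Python environment with specific libraries looks like, no pip involved. A sketch, with the library names following the python3Packages attribute set in nixpkgs:

```nix
environment.systemPackages = with pkgs; [
  # A Python interpreter bundled with the requests and numpy libraries,
  # installed system-wide without touching pip or virtualenvs
  (python3.withPackages (ps: with ps; [ requests numpy ]))
];
```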

I have a configuration defined, now what?

If you’ve been following along, you may have some sort of configuration defined and be wondering how to apply it. To do this there are three commands to know, listed here in order of convenience:

  1. sudo nixos-rebuild test builds the configuration and applies it immediately, but the configuration is ephemeral, so if you reboot you get your previous configuration back without having to actively choose to roll back.
  2. sudo nixos-rebuild switch builds the configuration and applies it immediately. This is the command you want to use most often if you’re adding a single package for continued use.
  3. sudo nixos-rebuild boot builds the configuration and applies it on the next boot. This is particularly useful if you want to switch from Gnome to KDE or other similar things.

When applying a configuration, Nix will add and remove users and groups associated with any packages or services, and restart all affected services. This is particularly relevant when applying changes to GUI services like your display manager, because your X/Wayland session will come to an abrupt end if you use the wrong flavor of nixos-rebuild.

Home Lab

Now that you understand the why of NixOS, let’s see how to apply it to home lab scenarios.

NAS

I’ve taken an old “gaming” PC and converted it into a NAS using NixOS. This is where the file system support is fantastic: NixOS has ZFS support! I’m not using ZFS, having already dived headlong into the Btrfs ecosystem, but the support exists and is not going to break, because NixOS doesn’t break. My machine has two 4 TB Seagate IronWolf NAS drives in a Btrfs mirror that serves as a repository for my backups as well as a few other self-hosted services, including Jellyfin, Pi-hole, Nextcloud, Netdata, and a Samba server. So how does this work?
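File systems, like everything else, are declared in the configuration (the installer generates these entries in hardware-configuration.nix for you). A sketch of what a mount like mine looks like, with a placeholder UUID and example mount options:

```nix
# Declarative mount for the Btrfs mirror; ends up in /etc/fstab
fileSystems."/srv/storage" = {
  device = "/dev/disk/by-uuid/<uuid>";
  fsType = "btrfs";
  options = [ "compress=zstd" "noatime" ]; # example options, not required
};
```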

Samba

I showed a simple NFS setup above. Here’s a more detailed example of a Samba configuration, including the bind mounts I use to give access without modifying permissions in unreasonable ways.

{ config, pkgs, ... }: # a Nix function that takes config and pkgs as arguments
{

  # Declarative bind mount. This gets translated into /etc/fstab as needed.
  fileSystems."/srv/music" = {
    device = "/home/author/Music";
    options = [ "bind" ];
  };

  # NixOS ships with an active firewall by default; this opens the correct ports for SMB.
  services.samba.openFirewall = true;
  services.samba-wsdd.enable = true; # make shares visible to Windows 10 clients

  # Begin configuring Samba
  services.samba = {
    enable = true; # enable the systemd service and install the relevant packages
    securityType = "user";
    # extraConfig is an escape hatch from the Nix language.
    # In this case I'm using it to define some Samba options in the language
    # Samba expects. This gets dumped verbatim into /etc/samba/smb.conf.
    extraConfig = ''
      workgroup = WORKGROUP
      server string = browncoat
      netbios name = browncoat
      security = user
      guest account = nobody
      map to guest = bad user
      smbd profiling support = on
    '';
    shares = { # define the actual shares
      music = { # a share called "music"
        # The following are all smb options represented as Nix key-value pairs
        path = "/srv/music/";
        browseable = "yes";
        "read only" = "yes";
        "guest ok" = "yes";
        "create mask" = "0644";
        "directory mask" = "0755";
        "force user" = "author";
        "force group" = "users";
      };
    };
  };

  # Open a TCP port in the firewall for wsdd
  networking.firewall.allowedTCPPorts = [
    5357 # wsdd/samba
  ];

  # Open the corresponding UDP port in the firewall for wsdd
  networking.firewall.allowedUDPPorts = [
    3702 # wsdd/samba
  ];
}

Jellyfin

Now, with that out of the way, let’s configure Jellyfin.

services.jellyfin.enable = true;
services.jellyfin.openFirewall = true;
users.users.author.extraGroups = ["jellyfin"];

That’s it. There are a few more options, but the rest of the config happens in the Jellyfin UI. In my config, the bind mount above maps the music directory in my home folder to /srv/music where the permissions are more open. I then told Jellyfin to look at the /srv/music directory when going through the configuration wizard in the UI.

Nextcloud

Nextcloud is equally simple if you’re okay with running it on a SQLite backend.

services.nextcloud = {
  enable = true;
  hostName = "browncoat.local";
  home = "/srv/storage/nextcloud/home/";
  datadir = "/srv/storage/nextcloud/data/";
  config.adminpassFile = "${pkgs.writeText "adminpass" "test123"}";
  package = pkgs.nextcloud26;
  enableBrokenCiphersForSSE = false;

  config.extraTrustedDomains = [
    "192.168.1.111"
  ];
};

There are more robust ways to configure Nextcloud so that it uses PostgreSQL or MySQL/MariaDB, but I’m okay with SQLite since this is a home NAS.
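For reference, the more robust route is only a few more lines. A sketch assuming the database.createLocally convenience option available in recent NixOS releases (the admin password file path is a placeholder):

```nix
services.nextcloud = {
  enable = true;
  hostName = "browncoat.local";
  config.dbtype = "pgsql";
  # Provision a local PostgreSQL instance and wire Nextcloud to it
  database.createLocally = true;
  config.adminpassFile = "/path/to/adminpass"; # placeholder
};
```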

A brief diversion for databases

Speaking of databases, and as a bit of an aside, it’s possible to define which databases you have, and which users those databases have, via Nix. The configuration below assumes Unix socket authentication; it will install MariaDB, create a testing database, and ensure the author user has all privileges on all of its tables. Similar things are possible with PostgreSQL.

services.mysql.enable = true;
services.mysql.package = pkgs.mariadb; # pick the MySQL implementation; this option has no default
services.mysql.ensureDatabases = [
    "testing"
];

services.mysql.ensureUsers = [
    {
        name = "author";
        ensurePermissions = {
            "testing.*" = "ALL PRIVILEGES";
        };
    }
];
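The PostgreSQL equivalent is nearly identical. A sketch using the options as they existed around this era of NixOS (ensurePermissions was later reworked, so check the release notes for your version):

```nix
services.postgresql.enable = true;
services.postgresql.ensureDatabases = [ "testing" ];
services.postgresql.ensureUsers = [
  {
    name = "author";
    # Grant the author role full access to the testing database
    ensurePermissions = {
      "DATABASE testing" = "ALL PRIVILEGES";
    };
  }
];
```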

Docker

Most NAS setups I’m aware of use something like Portainer to configure Docker/Podman containers. NixOS can also do this, declaratively and reproducibly. Portainer is great, but if something goes wrong it’s possible that some or all of your container configurations could be lost. Yes, a good home labber, sysadmin, or engineer takes religious backups, but NixOS and the configuration.nix file reduce the need for such religious backups. My install of Pi-hole looks like this:

{ config, pkgs, ... }:
{

  virtualisation.podman = {
    enable = true;
    autoPrune.enable = true;
    dockerCompat = true;

    # Required for containers under podman-compose to be able to talk to each other.
    # Commented out because docker dns and pihole dns were fighting like an old
    # married couple.
    # defaultNetwork.settings.dns_enabled = true;
  };
  virtualisation.oci-containers = {
    containers = {
      "pihole" = {  # a name for the container
        autoStart = true;
        # Use specific tag for reproducibility and to make updates easier.
        # Using :latest is okay, but if I were to deploy this on another machine
        # that other machine would get a different container. This goes against
        # the ethos of Nixos.
        #
        # Also, using specific tags makes telling Nixos to update the container easier.
        image = "pihole/pihole:2023.05.2";
        ports = [
          "53:53/udp"
          "53:53/tcp"
          "8080:80/tcp" # Because nextcloud uses 80, remap to 8080
        ];
        volumes = [
          "/srv/pihole/etc/pihole:/etc/pihole"
          "/srv/pihole/etc/dnsmasq.d:/etc/dnsmasq.d"
        ];

        extraOptions = [ "-h=pihole" ];
        environment = {
          WEBPASSWORD =  "a very long secure password";
        };
      };
    };
  };

  networking.firewall.allowedTCPPorts = [
    53
    8080
  ];

  networking.firewall.allowedUDPPorts = [
    53
    8080
  ];
}

The above can be modified and repeated for as many containers as you can imagine. There are disadvantages and compromises, but I’ve not been disappointed. Heck, NixOS users frequently keep their configurations in git; I know I do. This means that I have an essentially immutable history of my OS, and I can roll back to any particular version of it at any time.

I’ve tried to show in this article how to use NixOS as a reproducible way to configure a home lab, including as a reproducible way to configure Docker hosts. I’m aware there are other solutions with other advantages and compromises, but I’m familiar enough with NixOS at this point that I can’t speak to the other solutions, at least not from experience. There are times when NixOS gets in the way, but when that happens, Docker comes to the rescue.