
Nixifying a homelab

TL;DR: Deploying containers and VMs on Proxmox using my personal NixOS configs from my GitHub repo

The Nix rabbit hole

Nix is a declarative build system and functional language. NixOS is an operating system built around Nix, applying the same concept of fully reproducible, declarative configuration to the whole system. As such, NixOS does not follow the FHS standard; instead, it symlinks many commonly accessed paths into the Nix store, which is read-only and immutable by default. I had been a life-long Debian user because of its simplicity, but the prospect of reproducible, declarative environments became highly appealing.

A new homelab approach

My previous homelab was a concoction of services and configurations born of experimentation and a lack of knowledge. Your average amateur’s first homelab. I will still be deploying Proxmox on my PowerEdge T440, but here’s what I’m doing differently:

Issues with my current setup:

  • Guesswork over what should be an LXC and what should be a VM.
  • Configs splayed all over the place.
  • Highly lacking documentation.
  • No solid DNS or host records.

Ground rules I want to lay out:

  • Better classification and organization of both containers and VMs.
  • Centralized config and deployment of services (to the best of my abilities).
  • Detailed documentation.
  • Standardized DNS zone records and, where possible, routing through an Nginx reverse proxy.

And how I want to achieve it:

  • More deliberate decisions between LXC and VM.
  • Utilize the Nix ecosystem to declare host-specific definitions for certain services and system configurations.
  • Document deployment and configuration processes.
  • Implement a proper central DNS zone binding to local services.

The first step

We deploy Proxmox. Start from scratch. A blank canvas.

Ideally, I wanted the “backbone” set up first: a NixOS LXC hosting a nix-serve instance, along with the appropriate credentials to deploy host configurations to the LXCs and VMs on my network.


LXC: nix.gladiusso.com

Source: hosts/homelab/nix

Nothing special: I give this container quite a bit of storage, since the Nix store will grow large as it builds closures for multiple hosts. It also gets a good chunk of memory (8GB) and 8 assigned CPU cores; obviously I want closures to build fast. I clone my NixOS config repo and it’s basically all set.
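The full config is in the source link above, but a minimal nix-serve setup might look something like this (the signing key path is an assumption, not the one from my repo):

{
  # Serve this host's Nix store as a binary cache for the other hosts.
  services.nix-serve = {
    enable = true;
    # Signing key generated with `nix-store --generate-binary-cache-key`.
    secretKeyFile = "/var/lib/nix-serve/cache-priv-key.pem";
  };

  # nix-serve listens on TCP port 5000 by default.
  networking.firewall.allowedTCPPorts = [ 5000 ];
}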

Going remote

Now that we have a central hub LXC for all the host configs, it would be a good idea to write a simple baseline config that lets the nix LXC deploy closures to other hosts remotely. The idea is for a freshly deployed LXC/VM to switch to this baseline config, which then lets the nix LXC push the real config to it. Outside of an autogenerated hardware configuration, the goals are:

  • Make git available.
  • Permit root login through SSH.
  • Define authorized SSH keys.

Here’s what it looks like:

{modulesPath, ...}: {
  # Baseline configuration for initial remote deployment.
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
    ./hardware-configuration.nix  # make sure to run nixos-generate-config to get the hardware config
  ];

  # Default stateVersion
  system.stateVersion = "24.11";

  # Enable git
  programs.git.enable = true;

  # Enable SSH and permit root login (password and key)
  services.openssh.enable = true;
  services.openssh.settings.PermitRootLogin = "yes";

  # Define the allowed pubkey for ssh access
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG1hJBN8Urub5StYxhnxIB6+QVrx9T+4704Uam7HHEWC joe@gladiusso.com"
  ];
}

Source: hosts/homelab/_base

Deploying a new LXC/VM

I don’t exactly know how to do steps 3-5 efficiently, and to be frank, all of this could probably be automated somehow; I just haven’t gotten to it yet.

  1. Clone my NixOS repo.
    
    # nix-shell -p git --run 'git clone https://github.com/V3ntus/nixos'
    
  2. Switch to the baseline config.
    
    # nixos-rebuild test -I nixos-config=./nixos/hosts/homelab/_base/configuration.nix
    
  3. Deploy the config from nix LXC to the new LXC/VM.
    
    # nixos-rebuild switch --flake .#<name> --target-host root@<IP>
    
  4. Copy over age keys for sops-nix from nix LXC to new LXC/VM.
    
    # scp ~/.config/sops/age/keys.txt root@<IP>:/var/lib/nix-state/secrets/secret.key
    
  5. Do step 3 again to properly deploy all the secrets and user passwords (this is essential: the user accounts are password-less and the host is effectively bricked if the secrets can’t be decrypted on rebuild). See the sketch below for where the key from step 4 gets used.
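For context, the key path in step 4 matters because the host’s sops-nix config points at it. A rough sketch of what that looks like (the secrets file and secret name here are hypothetical):

{
  # sops-nix decrypts using the age key copied over in step 4.
  sops.age.keyFile = "/var/lib/nix-state/secrets/secret.key";

  # Hypothetical secrets file; the real one lives in my repo.
  sops.defaultSopsFile = ./secrets/secrets.yaml;

  # Example: a user password that only decrypts once the key is in place.
  sops.secrets."joe-password".neededForUsers = true;
}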

LXC: net.gladiusso.com

Source: hosts/homelab/net

Now that the nix backbone is up, we need a network “backbone” (not to be confused with an actual network backbone). The idea is to have standardized internal DNS records that point either directly to a host or to an Nginx reverse proxy instance that routes to the individual services.

Nginx Reverse Proxy

NixOS makes Nginx reverse proxy configuration buttery smooth. See the Wiki example here.

To reduce boilerplate, I’ll write a simple function that generates a virtual host attrset:

let
  base = locations: {
    inherit locations;

    # Placeholder if adding other options in the future, such as SSL support.
  };
  proxy = ip: port: base {
    # Generate a virtual host proxy pass config for the root location.
    "/" = {
      proxyPass = "http://" + ip + ":" + toString port + "/";
      extraConfig = ''
        proxy_pass_header Authorization;
      '';
    };
  };
  # Define all virtual hosts here
  virtualHosts = {
    "dns.gladiusso.com" = proxy "127.0.0.1" 5380;
  };
in {
  services.nginx = {
    enable = true;
    recommendedProxySettings = true;

    inherit virtualHosts;
  };
}

Source: hosts/homelab/net/nginx.nix

Technitium DNS

For this host, I’ll also set up Technitium with a DNS zone for gladiusso.com. We’ll also point this DNS server at ad-block lists for network-wide ad-blocking. Aside from the NS and SOA records, I’ll add an A record for dns.gladiusso.com pointing to the Nginx instance, which is this LXC.
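Technitium itself is mostly configured through its web console, but enabling it declaratively might look something like this (assuming the services.technitium-dns-server module available in recent nixpkgs):

{
  # Assumes the technitium-dns-server module from recent nixpkgs.
  services.technitium-dns-server.enable = true;

  # Expose DNS to the LAN; the web console (port 5380) stays behind the Nginx proxy.
  networking.firewall.allowedTCPPorts = [ 53 ];
  networking.firewall.allowedUDPPorts = [ 53 ];
}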

When we add new services, I’ll add a virtual host/proxy entry to the Nginx config and an A record to Technitium, as in the example below.
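As a hypothetical example of that pattern, a future Jellyfin instance on the *arr VM would just be one more entry in the virtualHosts attrset from the Nginx module above (the hostname and IP are placeholders), plus a matching A record in Technitium:

  virtualHosts = {
    "dns.gladiusso.com" = proxy "127.0.0.1" 5380;
    # Hypothetical new service: Jellyfin on the *arr VM (placeholder IP).
    "media.gladiusso.com" = proxy "10.0.0.20" 8096;
  };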


VM: files.gladiusso.com

This VM is not going to be a NixOS system, but a TrueNAS instance. This will be for movies, TV shows, and music. I might set up datasets for other purposes in the future, but for now these datasets will be NFS exports and SMB shares (and hopefully iSCSI once I get bigger drives).


VM: *arr.gladiusso.com

Source: hosts/homelab/arr

Back to NixOS: we’ll deploy an *arr server stack on this VM (the full service list is in the source above).

As much as I like the native Transmission web UI, I decided to try out Flood for Transmission this time.
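Flood ships in nixpkgs as a drop-in web UI for Transmission, so the swap might look something like this (the download path and RPC settings are assumptions, not my actual config):

{ pkgs, ... }: {
  services.transmission = {
    enable = true;
    # Swap the stock web UI for Flood.
    webHome = pkgs.flood-for-transmission;
    settings = {
      # Assumed download location on the NFS-mounted media share.
      download-dir = "/mnt/media/downloads";
      rpc-bind-address = "0.0.0.0";
      rpc-whitelist-enabled = false;
    };
  };
}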

I picked a VM over an LXC because of some weird mount issues, which I think actually originated from an incorrect NFS path. An LXC might be fine, but I’m sticking with a VM since I’m already here.
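For reference, the NFS export from the TrueNAS VM gets mounted on this VM with something along these lines (the export path and mount options are assumptions):

{
  # Mount the media dataset exported by the TrueNAS VM.
  fileSystems."/mnt/media" = {
    device = "files.gladiusso.com:/mnt/tank/media";  # assumed export path
    fsType = "nfs";
    options = [ "nfsvers=4.2" "noatime" ];
  };
}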

Other services I might add in the future: FlareSolverr (there’s an active issue that currently prevents the webdriver from starting), Sonarr for TV shows (my girlfriend is probably going to want this more than I do), and maybe a nicer front end such as Jellyseerr or Botdarr (my girlfriend and I are on Discord a lot).

Source: hosts/homelab/arr/arr.nix

GPU transcoding?

So my T440 has a Tesla P40 GPU, which should handle some decent HEVC transcoding (sadly no AV1; it’s a Pascal-era GPU, and AV1 decode didn’t show up until Ampere, encode until Ada). Slight issue though: it’s passed through to another VM for LLM and Stable Diffusion apps. I could either use vGPUs to split the GPU between the AI and *arr VMs (which is difficult to get running on NixOS at the moment because of the NVIDIA GRID driver requirements), or see if I can get rffmpeg running on NixOS, which should let me transcode remotely on the AI VM (I’ll eventually add a working flake to my repo rffmpeg.nix).

But for now, no hardware-accelerated transcoding. Jellyfin’s software encoding is terrible on the Xeon Silvers though, so that’s a good incentive for me to move to one of the solutions above.


VM: ai.gladiusso.com

Source: hosts/homelab/ai

A big reason I bought the Tesla P40 back when it was $150 used was AI applications: experimenting with LLMs for coding and SD.NEXT image generation. Time to come back around and bring up a NixOS instance.

vGPU Woes (and why you probably shouldn’t try)

In the *arr setup above, I mentioned splitting my GPU between transcoding and this AI VM using vGPUs. I attempted to do so using the GRID drivers:

{config, ...}: {
  # NVIDIA vGPU guest configuration ("nvidiaVersion" is the GRID guest driver version, defined elsewhere in the full source)
  hardware.nvidia = {
    # Explicitly use the GRID drivers from NVIDIA
    package = config.boot.kernelPackages.nvidiaPackages.mkDriver {
      version = nvidiaVersion;
      url = "https://storage.googleapis.com/nvidia-drivers-us-public/GRID/vGPU16.4/NVIDIA-Linux-x86_64-${nvidiaVersion}-grid.run";
      sha256_64bit = "sha256-o8dyPjc09cdigYWqkWJG6H/AP71bH65pfwFTS/7V9GM=";
      useSettings = false;
      usePersistenced = false;
    };
  };
}

This sorta works, but as stated in this GitHub comment in the nixos-nvidia-vgpu repo, some work needs to be done to ensure the extra services required for the GRID drivers set up the vGPU correctly. I was not able to get the vGPU licensed even though I had FastAPI-DLS sending out a license. Such a hacky and unsupported solution.

Moving on…

Open WebUI + ollama

LLMs are wonderful tools. At my job we pay for ChatGPT 4o, which has been nice, but ChatGPT is slow, and 4o needs some extra guidance (or I just need to be more verbose in prompting). Since I have a decent GPU with quite a bit of VRAM (24GB to be exact), why not self-host an LLM? The biggest model I was able to run was dolphin-mixtral:8x7b. I was also able to run Stable Diffusion models on the GPU, hitting 512x768 at around 3 it/s with RealisticVision_v6 (SD.NEXT benchmark database). This could be optimized further, but I haven’t dedicated the time.
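Both pieces have NixOS modules these days, so a minimal sketch might look like this (the bind address and port are assumptions):

{
  # Run ollama with CUDA acceleration on the passed-through P40.
  services.ollama = {
    enable = true;
    acceleration = "cuda";
  };

  # Open WebUI talks to the local ollama API (default port 11434).
  services.open-webui = {
    enable = true;
    host = "0.0.0.0";
    port = 8080;
  };
}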

Source: hosts/homelab/ai/ai_services.nix

The end?

As time goes on, my list of services hosted at home will change. Self-hosting is a broad field; the software is nearly limitless. Deploying is (mostly) even faster now thanks to the Nix package manager and its community-maintained repositories.

(Image: nix-in-a-nutshell)

This post is licensed under CC BY 4.0 by the author.