Introduction

Welcome! I am Hunter.

Homelab Documentation

Overview

This homelab is managed through Tailscale for secure remote access. The setup consists of a 20U rack with various servers and networking equipment optimized for virtualization, storage, and Kubernetes workloads.

Physical Infrastructure

Rack Setup

  • Size: 20U rack
  • Management: All hosts managed through Tailscale
  • Power: Standard rack power distribution

Network Equipment

Primary Switch

  • Model: Netgear JGS524e v2
  • Type: 24-port Gigabit Ethernet switch
  • Purpose: Main network connectivity for all rack equipment

Servers

Leviathan (Threadripper Server)

  • Hardware: AMD Threadripper-based server
  • Hypervisor: Proxmox VE
  • Primary Purpose: Kubernetes cluster hosting via Talos VMs
  • Status: Active

Virtual Machines

  • seko: Arch Linux VM running on Leviathan

SuperMicro FatTwin Server

  • Model: SuperMicro F627R3-R72B+ 4U FatTwin Server, 4-Node X9DRFR
  • CPU Sockets: 8x LGA2011 (2 per node)
  • CPU Support: Intel E5-2600 v2 series

Node Configuration

Tower (Node 1)

  • Hostname: tower
  • OS: Unraid
  • Purpose: NFS storage server
  • Services:
    • NFS storage for cluster
    • MinIO Docker container for S3-compatible storage
  • Status: Active

Melusine (Node 2)

  • Hostname: melusine
  • OS: Proxmox VE (installed but not running)
  • Purpose: Virtualization host (planned)
  • Status: Installed, not active

Node 3

  • Hostname: node3
  • OS: Proxmox VE (planned)
  • Status: Hardware issue - CPU needs reseating
  • Notes: Will install Proxmox once hardware is fixed

Node 4

  • Hostname: node4
  • OS: Proxmox VE (planned)
  • Status: Not yet configured

Storage Configuration

Direct HDD Passthrough

For optimal storage performance on the SuperMicro nodes, direct HDD passthrough is configured via the SuperMicro RAID controller. This allows VMs to have direct access to physical drives without virtualization overhead.

Implementation: follows a specialized configuration for SuperMicro hardware that bypasses the RAID controller's virtualization layer.
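
A minimal sketch of the passthrough from the Proxmox side (the VM ID and disk serial below are placeholders; the exact steps depend on how the controller exposes the drives):

# List stable device names for the physical drives
ls -l /dev/disk/by-id/

# Attach a whole disk to VM 100 as an extra SCSI device (ID is an example)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL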

Raspberry Pi Cluster

  • Count: 3 units
  • Status: Currently powered off
  • Purpose: Available for lightweight services or testing

Network Architecture

Current Network Challenges

  • IP address exhaustion on home network
  • Need for better network segmentation
  • Kubernetes cluster isolation requirements

Planned VLAN Segmentation

See Network Design & VLAN Strategy for detailed network planning.

Planned VLANs:

  • VLAN 1: Management (Proxmox, switch management)
  • VLAN 10: Kubernetes cluster (10.10.0.0/16)
  • VLAN 20: Storage traffic (NFS, MinIO)
  • VLAN 30: General services and VMs
  • VLAN 40: IoT devices and RPi cluster
  • VLAN 99: Guest network isolation

Management Network

All management interfaces are accessible via Tailscale on VLAN 1 (a quick connectivity check is sketched after the list):

  • Leviathan Proxmox GUI
  • Tower Unraid GUI
  • Melusine Proxmox GUI (when active)
  • Node 3 Proxmox GUI (planned)
  • Node 4 Proxmox GUI (planned)
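
A minimal check from any Tailscale-joined machine, assuming the hostname below matches the device name in the tailnet:

# Confirm the management hosts show up on the tailnet
tailscale status

# Verify a specific host is reachable over Tailscale
tailscale ping leviathan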

Service Networks

  • NFS: Tower on dedicated storage VLAN
  • S3 Storage: MinIO on storage VLAN
  • Kubernetes: Isolated on dedicated VLAN with API access from management

Current Workloads

Active Services

  1. Kubernetes Cluster: Running on Talos VMs (Leviathan)
  2. NFS Storage: Unraid on Tower
  3. S3-Compatible Storage: MinIO on Tower
  4. Development Environment: Arch VM (seko) on Leviathan

Planned Expansions

  1. Additional Proxmox nodes (Melusine, Node 3, Node 4)
  2. Expanded virtualization capacity
  3. Potential Raspberry Pi cluster activation

Maintenance Notes

Pending Tasks

  • Reseat CPU on Node 3
  • Complete Node 4 initial setup
  • Activate Melusine Proxmox installation
  • Consider Raspberry Pi cluster utilization

Hardware Considerations

  • SuperMicro FatTwin provides excellent density for virtualization
  • Direct storage passthrough optimizes I/O performance
  • Tailscale provides secure remote management without VPN complexity

Network Design & VLAN Strategy

Current Network Challenges

  • Running out of IP addresses on home network
  • Need better network segmentation for security
  • Kubernetes cluster requires isolated network with management access

Switch Capabilities

Netgear JGS524e v2 Plus Switch

  • 24 Gigabit Ethernet ports
  • 802.1Q VLAN support
  • Port-based and tagged VLAN configuration
  • Web-based management interface

VLAN Segmentation Strategy

VLAN Design Overview

| VLAN ID | Name | Purpose | Subnet | Notes |
|---|---|---|---|---|
| 1 | Default/Management | Proxmox hosts, switch management | 192.168.1.0/24 | Native VLAN |
| 10 | Kubernetes | K8s cluster nodes and services | 10.10.0.0/16 | Isolated cluster network |
| 20 | Storage | NFS, MinIO, storage traffic | 10.20.0.0/24 | High-bandwidth storage |
| 30 | Services | General services, VMs | 10.30.0.0/24 | Application workloads |
| 40 | IoT/Devices | Future IoT devices, RPi cluster | 10.40.0.0/24 | Restricted internet access |
| 99 | Guest | Guest network isolation | 10.99.0.0/24 | Internet only, no LAN access |

VLAN Access Requirements

Management VLAN (1) - 192.168.1.0/24

Purpose: Infrastructure management and inter-VLAN routing

Hosts:

  • Leviathan Proxmox management interface
  • Tower Unraid management interface
  • Melusine Proxmox management interface
  • Node 3/4 Proxmox management interfaces
  • Switch management interface
  • Router/firewall management

Access Rules:

  • Full access to all VLANs for management
  • Tailscale endpoints terminate here
  • SSH/HTTPS management protocols

Kubernetes VLAN (10) - 10.10.0.0/16

Purpose: Kubernetes cluster isolation with large address space

Hosts:

  • Talos VMs on Leviathan
  • Kubernetes API server (accessible from Management VLAN)
  • Pod networks (CNI-managed subnets)
  • LoadBalancer services

Access Rules:

  • Management VLAN can access K8s API server (port 6443)
  • Storage VLAN access for persistent volumes
  • Outbound internet access
  • No direct access from other VLANs except management

Storage VLAN (20) - 10.20.0.0/24

Purpose: High-performance storage traffic isolation

Hosts:

  • Tower NFS services
  • Tower MinIO S3 services
  • Storage-specific interfaces on Proxmox hosts
  • Backup services

Access Rules:

  • Kubernetes VLAN access for PV storage
  • Services VLAN access for VM storage
  • Management VLAN access for administration
  • High QoS priority for storage traffic

Services VLAN (30) - 10.30.0.0/24

Purpose: General application and VM workloads

Hosts:

  • seko Arch VM
  • Future application VMs
  • Development environments
  • Non-critical services

Access Rules:

  • Storage VLAN access for data
  • Management VLAN access for administration
  • Outbound internet access
  • Limited inter-service communication

Network Topology

Internet
    |
[Router/Firewall] - VLAN routing & firewall rules
    |
[Netgear JGS524e v2] - 802.1Q VLAN switch
    |
    ├── Port 1-4: Leviathan (trunk: 1,10,20,30)
    ├── Port 5-8: SuperMicro FatTwin (trunk: 1,20,30)
    │   ├── Tower: VLAN 1,20 (management + storage)
    │   ├── Melusine: VLAN 1,30 (management + services)
    │   ├── Node 3: VLAN 1,30 (management + services)  
    │   └── Node 4: VLAN 1,30 (management + services)
    ├── Port 9-11: Raspberry Pi (access: VLAN 40)
    ├── Port 12: Uplink to Router (trunk: all VLANs)
    └── Port 13-24: Available for expansion

Implementation Plan

Phase 1: Infrastructure Preparation

  1. Router Configuration

    • Configure VLAN interfaces and routing
    • Set up firewall rules between VLANs
    • Configure DHCP scopes for each VLAN
  2. Switch Configuration

    • Create VLANs 10, 20, 30, 40, 99
    • Configure trunk ports for servers
    • Set up access ports for devices

Phase 2: Server Network Configuration

  1. Leviathan (Proxmox)

    • Configure VLAN interfaces on Proxmox host
    • Bridge configuration for VM networks
    • Migrate Talos VMs to VLAN 10
  2. SuperMicro Nodes

    • Configure management interfaces on VLAN 1
    • Set up storage interfaces on VLAN 20
    • Configure service interfaces on VLAN 30

Phase 3: Service Migration

  1. Kubernetes Cluster

    • Migrate to VLAN 10 network
    • Update API server accessibility
    • Reconfigure storage connections
  2. Storage Services

    • Move NFS/MinIO to VLAN 20
    • Update client configurations
    • Test performance improvements

Security Considerations

Inter-VLAN Firewall Rules

Management (1) → All VLANs: Allow (administrative access)
Kubernetes (10) → Storage (20): Allow NFS/S3 ports
Kubernetes (10) → Internet: Allow outbound
Services (30) → Storage (20): Allow NFS/S3 ports  
Services (30) → Internet: Allow outbound
IoT (40) → Internet: Allow outbound only
Guest (99) → Internet: Allow outbound only
All other inter-VLAN: Deny

Port Security

  • Enable port security on access ports
  • MAC address learning limits
  • DHCP snooping where supported
  • Storm control for broadcast traffic

Monitoring & Troubleshooting

Network Monitoring

  • SNMP monitoring of switch ports
  • VLAN traffic analysis
  • Inter-VLAN routing metrics
  • Bandwidth utilization per VLAN

Troubleshooting Tools

  • VLAN membership verification
  • Trunk port configuration validation
  • Inter-VLAN connectivity testing
  • Performance baseline measurements

Future Expansion

Additional VLANs

  • VLAN 50: DMZ for public services
  • VLAN 60: Backup network isolation
  • VLAN 70: Lab/testing environment

Advanced Features

  • VLAN QoS prioritization
  • Link aggregation for high-bandwidth hosts
  • VLAN-aware monitoring and alerting
  • Automated VLAN provisioning

VLAN Configuration Guide

Netgear JGS524e v2 Switch Configuration

Initial Setup

  1. Connect to switch web interface (default: 192.168.0.239)
  2. Login with default credentials
  3. Update firmware if needed
  4. Change default admin password

VLAN Creation

Step 1: Create VLANs

Navigate to Switching → VLAN → 802.1Q → VLAN Configuration

Create the following VLANs:

VLAN ID: 10, Name: Kubernetes
VLAN ID: 20, Name: Storage  
VLAN ID: 30, Name: Services
VLAN ID: 40, Name: IoT
VLAN ID: 99, Name: Guest

Step 2: Configure VLAN Membership

Navigate to Switching → VLAN → 802.1Q → VLAN Membership

Port Configuration:

Ports 1-4 (Leviathan): Tagged on VLANs 1,10,20,30
Ports 5-8 (SuperMicro): Tagged on VLANs 1,20,30
Ports 9-11 (RPi): Untagged on VLAN 40
Port 12 (Uplink): Tagged on all VLANs
Port 24 (Management): Untagged on VLAN 1

Step 3: Configure Port VLAN ID (PVID)

Navigate to Switching → VLAN → 802.1Q → Port PVID Configuration

Ports 1-8: PVID 1 (Management default)
Ports 9-11: PVID 40 (IoT devices)
Port 12: PVID 1 (Uplink)
Remaining ports: PVID 1 (Default)

Router/Firewall Configuration

VLAN Interfaces

Configure the following interfaces on your router:

# Management VLAN (existing)
interface vlan1
  ip address 192.168.1.1/24
  
# Kubernetes VLAN  
interface vlan10
  ip address 10.10.0.1/16
  
# Storage VLAN
interface vlan20
  ip address 10.20.0.1/24
  
# Services VLAN
interface vlan30
  ip address 10.30.0.1/24
  
# IoT VLAN
interface vlan40
  ip address 10.40.0.1/24
  
# Guest VLAN
interface vlan99
  ip address 10.99.0.1/24
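
The interface definitions above use a generic router syntax. On a plain Linux router, a rough iproute2 equivalent looks like this (eth0 is an assumed LAN-facing interface; these commands do not persist across reboots):

# Create the Kubernetes VLAN subinterface and assign its gateway address
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 10.10.0.1/16 dev eth0.10
ip link set eth0.10 up

# Repeat for VLANs 20, 30, 40 and 99 with their respective subnets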

DHCP Configuration

Set up DHCP scopes for each VLAN:

# Kubernetes VLAN DHCP
dhcp pool kubernetes
  network 10.10.0.0/16
  default-router 10.10.0.1
  dns-server 10.10.0.1
  range 10.10.1.100 10.10.1.200

# Storage VLAN DHCP  
dhcp pool storage
  network 10.20.0.0/24
  default-router 10.20.0.1
  dns-server 10.20.0.1
  range 10.20.0.100 10.20.0.200

# Services VLAN DHCP
dhcp pool services
  network 10.30.0.0/24
  default-router 10.30.0.1
  dns-server 10.30.0.1
  range 10.30.0.100 10.30.0.200

# IoT VLAN DHCP
dhcp pool iot
  network 10.40.0.0/24
  default-router 10.40.0.1
  dns-server 10.40.0.1
  range 10.40.0.100 10.40.0.200

# Guest VLAN DHCP
dhcp pool guest
  network 10.99.0.0/24
  default-router 10.99.0.1
  dns-server 8.8.8.8
  range 10.99.0.100 10.99.0.200
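
The pools above are likewise generic. As a hedged example, on a Linux router running dnsmasq the same scopes could be approximated like this (the subinterface names are assumptions; dnsmasq matches each range to the interface whose subnet contains it):

# /etc/dnsmasq.d/vlans.conf
interface=eth0.10
interface=eth0.20
interface=eth0.30
interface=eth0.40
interface=eth0.99

dhcp-range=10.10.1.100,10.10.1.200,255.255.0.0,12h   # Kubernetes
dhcp-range=10.20.0.100,10.20.0.200,12h               # Storage
dhcp-range=10.30.0.100,10.30.0.200,12h               # Services
dhcp-range=10.40.0.100,10.40.0.200,12h               # IoT
dhcp-range=10.99.0.100,10.99.0.200,12h               # Guest
dhcp-option=tag:eth0.99,option:dns-server,8.8.8.8    # Guest VLAN uses public DNS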

Proxmox VLAN Configuration

Leviathan Network Setup

Configure network bridges for each VLAN:

# Edit /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Kubernetes VLAN bridge
auto vmbr10  
iface vmbr10 inet static
    address 10.10.0.10/16
    bridge-ports eno1.10
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes

# Storage VLAN bridge
auto vmbr20
iface vmbr20 inet static
    address 10.20.0.10/24
    bridge-ports eno1.20
    bridge-stp off
    bridge-fd 0

# Services VLAN bridge  
auto vmbr30
iface vmbr30 inet static
    address 10.30.0.10/24
    bridge-ports eno1.30
    bridge-stp off
    bridge-fd 0
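
After editing /etc/network/interfaces, the changes can be applied and verified without a reboot (Proxmox ships ifupdown2, so ifreload is available):

# Apply the updated bridge and VLAN configuration
ifreload -a

# Confirm the bridges and VLAN subinterfaces are up with the expected addresses
ip -br addr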

SuperMicro Nodes Network Setup

Configure each node with appropriate VLAN interfaces:

Tower (Node 1) - Storage Focus:

# Management interface
auto eno1
iface eno1 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1

# Storage interface  
auto eno1.20
iface eno1.20 inet static
    address 10.20.0.11/24

Melusine (Node 2) - Services:

# Management interface
auto eno1
iface eno1 inet static
    address 192.168.1.12/24
    gateway 192.168.1.1

# Services interface
auto eno1.30  
iface eno1.30 inet static
    address 10.30.0.12/24

Kubernetes Cluster Migration

Talos Configuration Update

Update Talos configuration to use new VLAN network:

# talos-config.yaml
machine:
  network:
    interfaces:
      - interface: eth0
        addresses:
          - 10.10.0.20/16
        routes:
          - network: 0.0.0.0/0
            gateway: 10.10.0.1

cluster:
  network:
    podSubnets:
      - 10.244.0.0/16
    serviceSubnets:
      - 10.96.0.0/12
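
A sketch of pushing the updated configuration with talosctl (the node IP and file name follow the example above; add --insecure only for a node still in maintenance mode):

# Apply the updated machine configuration to the node
talosctl apply-config --nodes 10.10.0.20 --file talos-config.yaml

# Confirm the node picked up its VLAN 10 address
talosctl --nodes 10.10.0.20 get addresses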

API Server Access

Ensure Kubernetes API server is accessible from management VLAN:

# Firewall rule to allow management access to K8s API
iptables -A FORWARD -s 192.168.1.0/24 -d 10.10.0.0/16 -p tcp --dport 6443 -j ACCEPT

Firewall Rules

Inter-VLAN Access Control

# Management to all VLANs (administrative access)
iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT

# Kubernetes to Storage (NFS, S3)
iptables -A FORWARD -s 10.10.0.0/16 -d 10.20.0.0/24 -p tcp --dport 2049 -j ACCEPT  # NFS
iptables -A FORWARD -s 10.10.0.0/16 -d 10.20.0.0/24 -p tcp --dport 9000 -j ACCEPT  # MinIO

# Services to Storage
iptables -A FORWARD -s 10.30.0.0/24 -d 10.20.0.0/24 -p tcp --dport 2049 -j ACCEPT  # NFS
iptables -A FORWARD -s 10.30.0.0/24 -d 10.20.0.0/24 -p tcp --dport 9000 -j ACCEPT  # MinIO

# Allow outbound internet for Kubernetes and Services
iptables -A FORWARD -s 10.10.0.0/16 -o wan0 -j ACCEPT
iptables -A FORWARD -s 10.30.0.0/24 -o wan0 -j ACCEPT

# IoT and Guest internet only
iptables -A FORWARD -s 10.40.0.0/24 -o wan0 -j ACCEPT
iptables -A FORWARD -s 10.99.0.0/24 -o wan0 -j ACCEPT

# Deny all other inter-VLAN traffic
iptables -A FORWARD -j DROP
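
These rules live only in memory. On a Debian-based Linux router they can be persisted with iptables-persistent (other platforms have their own mechanisms):

# Save the current ruleset so it is restored at boot
apt install iptables-persistent
netfilter-persistent save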

Testing & Validation

Connectivity Tests

# Test VLAN connectivity
ping 10.10.0.1  # Kubernetes gateway
ping 10.20.0.1  # Storage gateway  
ping 10.30.0.1  # Services gateway

# Test inter-VLAN access
# From management VLAN, test K8s API access
curl -k https://10.10.0.20:6443

# Test storage access from K8s VLAN
showmount -e 10.20.0.11  # NFS exports
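
MinIO can be checked the same way; its liveness endpoint needs no credentials (Tower address and default port 9000 as in the examples above):

# Test S3/MinIO reachability from the Kubernetes or Services VLAN
curl -f http://10.20.0.11:9000/minio/health/live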

VLAN Verification

# Verify VLAN membership (generic managed-switch CLI; the JGS524e v2 exposes this via its web UI)
show vlan brief

# Check port VLAN assignments  
show interfaces switchport

# Verify trunk port configuration
show interfaces trunk

Troubleshooting

Common Issues

  1. No inter-VLAN connectivity: Check router VLAN interfaces and routing
  2. DHCP not working: Verify DHCP relay configuration
  3. Trunk ports not passing traffic: Check VLAN membership and tagging
  4. API server unreachable: Verify firewall rules for management access

Diagnostic Commands

# Switch diagnostics
show vlan
show mac address-table
show interfaces status

# Linux VLAN diagnostics  
ip link show
ip addr show
cat /proc/net/vlan/config
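
If a VLAN appears silent, a quick way to confirm that tagged frames actually reach a host is to watch the trunk interface with tcpdump (the interface name is an example):

# Show link-level headers so the 802.1Q tag is visible; filter on VLAN 20
tcpdump -nei eno1 vlan 20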

Migration Checklist

  • Configure VLANs on switch
  • Set up router VLAN interfaces and DHCP
  • Configure Proxmox host networking
  • Update VM network configurations
  • Migrate Kubernetes cluster to new VLAN
  • Update storage service network configs
  • Configure firewall rules
  • Test connectivity between VLANs
  • Update documentation with final IP assignments
  • Monitor network performance post-migration

Rendering dot in mdBook

The mdBook theme allows custom elements to be added to the top of the rendered HTML pages using a file named theme/head.hbs. This is useful for adding metadata or scripts that need to be included in the <head> section of each page. Referencing this blog post on rendering dot in a markdown file, four scripts are required. The first three need to be added to head.hbs:

<script src="https://unpkg.com/[email protected]/dist/d3.min.js"></script>
<script src="https://unpkg.com/@hpcc-js/[email protected]/dist/index.min.js"></script>
<script src="https://unpkg.com/[email protected]/build/d3-graphviz.min.js"></script>

and copy a function into a file named script.js [1]:

// Render a <code class="language-dot"> block as a graph and hide the original text
function d3ize(elem) {
    var par = elem.parentElement;
    d3.select(par).append('div').graphviz().renderDot(elem.innerText);
    d3.select(elem).style('display', 'none');
}

// getElementsByClassName takes the bare class name, without a leading dot
var dotelems = document.getElementsByClassName("language-dot");
for (let elem of dotelems) {
    d3ize(elem);
}

In the book.toml file, add these lines [2]:

[output.html]
copy-fonts = true
additional-js = ["script.js"]

and your graph should appear.

digraph G { rankdir = LR; a -> b }
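
For the script to find the graph, the source must sit in a fenced code block tagged dot in the chapter's markdown, which mdBook renders as a <code> element with the language-dot class:

```dot
digraph G { rankdir = LR; a -> b }
```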

  1. I would have thought I could include the script itself in the head.hbs file; however, when I attempted it, the graph did not render. That is unfortunate, as it would have required only a single file modification.

  2. I also tried adding the remote scripts to the additional-js property, but mdbook build complained that it could not find the files.

Keyboard Layout

To regenerate the diagram, navigate to the zmk-config directory and run:

keymap parse -z config/corne.keymap > .keymap && keymap draw .keymap > keyboard-layout.svg
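
This assumes the keymap CLI from the keymap-drawer project is installed; a minimal install sketch:

# Install keymap-drawer, which provides the `keymap` command
pipx install keymap-drawer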