A complete, beginner-friendly guide for building a high-performance Thunderbolt 4 mesh network with Ceph storage on Proxmox VE 9.
This guide helps you set up a 3-node Proxmox cluster using Thunderbolt 4 for ultra-fast Ceph storage replication. Instead of expensive 10GbE/25GbE network switches, you connect your nodes directly via TB4 cables in a mesh topology.
| Metric | Result |
|---|---|
| Write Throughput | 1,300+ MB/s |
| Read Throughput | 1,760+ MB/s |
| Latency | Sub-millisecond (~0.6ms) |
| MTU | 65520 (jumbo frames) |
| Packet Loss | 0% |
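Numbers like the ones above are typically measured with `fio` sequential runs against Ceph-backed storage. The invocation below is an illustrative sketch, not the exact command used for this table; the `--filename` path and job names are placeholders you must adapt to your own pool or mount.

```bash
# Hypothetical fio run; point --filename at a file on your Ceph-backed mount.
# 4M sequential I/O approximates large-block replication traffic.
fio --name=seq-write --filename=/mnt/cephfs/fio.test --size=10G \
    --rw=write --bs=4M --ioengine=libaio --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting

# Same parameters for the read side (reuses the file written above):
fio --name=seq-read --filename=/mnt/cephfs/fio.test --size=10G \
    --rw=read --bs=4M --ioengine=libaio --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting
```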
- 3x nodes with dual Thunderbolt 4 ports (tested on MS-01 mini-PCs)
- 64GB RAM per node (recommended for Ceph performance)
- NVMe drives for Ceph OSDs
- TB4 cables for mesh connectivity (quality matters!)
- Standard Ethernet for management network
- Proxmox VE 9.0+ (with test repository for latest Ceph)
- Basic Linux/networking knowledge
- SSH access to all nodes
# 1. Clone this repository
git clone https://github.com/taslabs-net/proxmox-tb4.git
cd proxmox-tb4
# 2. Copy and edit the configuration
cp config.env.example config.env
nano config.env # Edit with your node IPs and settings
# 3. Run the preflight check
./scripts/00-preflight-check.sh
# 4. Follow the guided setup
./scripts/01-setup-ssh.sh
./scripts/02-install-tb4-modules.sh
# ... continue with remaining scripts

┌─────────────────────────────────────────────────────────────────┐
│ Network Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Management Network (vmbr0): 10.11.11.0/24 │
│ ├── Proxmox cluster communication │
│ ├── SSH access │
│ └── Web UI access │
│ │
│ VM Network (vmbr1): 10.1.1.0/24 │
│ ├── Virtual machine traffic │
│ └── Backup cluster communication │
│ │
│ TB4 Mesh Network (en05/en06): 10.100.0.0/24 │
│ ├── Ceph cluster_network (OSD replication) │
│ ├── High-speed, low-latency │
│ └── 65520 MTU jumbo frames │
│ │
└─────────────────────────────────────────────────────────────────┘
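A minimal `/etc/network/interfaces` fragment matching this layout could look like the sketch below. The interface names, bridge port, and `.21` host octet are illustrative assumptions; the shipped `configs/network/interfaces.template` is the authoritative version.

```
# Management bridge (host octet .21 is an example)
auto vmbr0
iface vmbr0 inet static
    address 10.11.11.21/24
    gateway 10.11.11.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

# TB4 mesh interfaces: brought up with jumbo MTU here; mesh addressing
# (/32 per node via OpenFabric, or /31 point-to-point) is assigned by
# the SDN layer or static routes, not in this file.
auto en05
iface en05 inet manual
    mtu 65520

auto en06
iface en06 inet manual
    mtu 65520
```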
Physical TB4 Mesh Topology (Ring):
┌──────────┐
│ N2 │
│ en05 en06│
└──┬────┬──┘
│ │
en05 │ │ en06
│ │
┌──────┘ └──────┐
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ N3 │◄────►│ N4 │
│ en05 en06│ │ en05 en06│
└──────────┘ └──────────┘
en06 ◄──► en05
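Once cabled, the ring can be sanity-checked with jumbo-frame pings between mesh addresses. The `.22`/`.23` addresses below are placeholders for your neighbors; the payload size 65492 is the 65520 MTU minus 28 bytes of IP and ICMP headers.

```bash
# From N2: ping each neighbor with the largest non-fragmenting payload
ping -c 4 -M do -s 65492 10.100.0.22   # neighbor reached via en05
ping -c 4 -M do -s 65492 10.100.0.23   # neighbor reached via en06

# Confirm both TB4 interfaces are up with the expected MTU
ip -br link show en05 en06
```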
| Guide | Description |
|---|---|
| 00 - Overview | Architecture, concepts, and planning |
| 01 - Prerequisites | Hardware/software requirements |
| 02 - SSH Setup | Passwordless SSH configuration |
| 03 - TB4 Foundation | Kernel modules and hardware detection |
| 04 - Network Config | Interface configuration and udev rules |
| 05 - SDN Setup | Proxmox OpenFabric configuration |
| 06 - Ceph Setup | Monitors, OSDs, and pools |
| 07 - Performance | Optimization settings |
| 08 - Troubleshooting | Common issues and fixes |
| 09 - Benchmarking | Testing and validation |
All scripts are designed to be:
- Idempotent - Safe to run multiple times
- Interactive - Confirm before making changes
- Logged - Track what was done for troubleshooting
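A sketch of the idempotency pattern such scripts follow: check for the desired state before changing anything, so a re-run is a no-op. `ensure_line` and the module names are illustrative, not actual code from `lib/common.sh`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Append a line to a file only if it is not already present,
# so repeated runs leave the file unchanged.
ensure_line() {
    local line="$1" file="$2"
    grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

conf=$(mktemp)
ensure_line "thunderbolt" "$conf"
ensure_line "thunderbolt-net" "$conf"
ensure_line "thunderbolt" "$conf"   # second call changes nothing
cat "$conf"
```

The same check-then-act shape applies to loading kernel modules (`lsmod | grep -q` before `modprobe`) or creating udev rules (compare before overwrite).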
scripts/
├── 00-preflight-check.sh # Verify prerequisites
├── 01-setup-ssh.sh # Deploy SSH keys
├── 02-install-tb4-modules.sh # Load kernel modules
├── 03-configure-interfaces.sh # Set up TB4 networking
├── 04-setup-udev-rules.sh # Create automation rules & boot service
├── 05-setup-systemd.sh # Enable systemd-networkd & verify
├── 06-verify-mesh.sh # Test connectivity
├── lib/
│ ├── common.sh # Shared functions
│ └── colors.sh # Output formatting
└── utils/
├── troubleshoot.sh # Diagnostic commands
    └── benchmark.sh            # Performance testing

configs/
├── network/
│ └── interfaces.template # /etc/network/interfaces template
├── systemd/
│ ├── 00-thunderbolt0.link # Interface renaming
│ ├── 00-thunderbolt1.link
│ └── thunderbolt-interfaces.service
├── udev/
│ └── 10-tb-en.rules # Hot-plug automation
└── scripts/
├── pve-en05.sh # Interface bringup
├── pve-en06.sh
    └── thunderbolt-startup.sh  # Boot-time init

This project builds upon excellent foundational work:
- @scyto - Original TB4 research and kernel module strategies
- @taslabs-net - PVE 9 integration and Ceph optimization
Thanks to everyone who helped refine this guide through testing and feedback:
- @Allistah - Ceph network bottleneck discovery, /32 addressing scheme
- @aelhusseiniakl - GUI-focused alternative guide
- @Yearly1825 - Comprehensive boot fix script (udev + systemd ordering), cold boot module loading fix, PVE 9.1.4 testing
- @ikiji-ns - /31 addressing documentation
- @pSyCr0, @scloder - Testing and troubleshooting feedback
No! The TB4 mesh connects nodes directly. You only need a basic switch for the management network.
This guide is optimized for 3 nodes (the minimum for Ceph quorum). Adjustments are possible but not covered here.
USB4 should work similarly, but TB4 is recommended for consistent performance. Ensure your cables are certified.
No, you can use static point-to-point routes instead. SDN provides GUI integration and easier management.
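Without SDN, the same reachability can come from static /32 routes pinning each peer's mesh IP to a specific TB4 interface. The fragment below is a hypothetical example for one node; addresses are illustrative.

```
# On a node whose peers hold 10.100.0.22 and 10.100.0.23:
auto en05
iface en05 inet manual
    mtu 65520
    up ip route add 10.100.0.22/32 dev en05

auto en06
iface en06 inet manual
    mtu 65520
    up ip route add 10.100.0.23/32 dev en06
```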
GPL-3.0 License - See LICENSE for details.
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
https://flarelylegal.com/docs/proxmox/tb4-ceph-cluster/
Questions? Open an issue or check the Troubleshooting Guide.