Setting Up a DIY Storage Server To Go Fast!
Level1Techs · 27:12

TL;DR

Building a DIY high-speed storage server using Kioxia CM7 NVMe drives and 400GbE networking to hit 40 GB/s for VFX and video production workloads.

Key Points

  • 1. Four Kioxia CM7 NVMe drives are the storage foundation. Each CM7 delivers roughly 15 GB/s, so four drives can in theory feed the ~40 GB/s target over a 400GbE link, which demands a PCIe Gen 5 x16 slot (~64 GB/s ceiling) to avoid bottlenecks.
  • 2. The network adapter is a dual-port 200GbE Broadcom-based LRL card. It uses 16 PCIe Gen 5 lanes and connects to a MikroTik CRS812 switch via a $186 400G breakout DAC cable that splits into two 200G QSFP56 ports.
  • 3. RDMA is essential for low latency and high throughput. Without RDMA, the CPU must process every network packet, driving CPU usage to 100% and throttling performance; RDMA offloads bulk transfers to the NIC at the hardware level, bypassing that CPU overhead.
  • 4. Rocky Linux 10 with KSMBD is the chosen software stack. Rocky Linux (a Red Hat Enterprise Linux rebuild, in the CentOS lineage) is preferred because Red Hat has done the enterprise-grade RDMA plumbing; KSMBD supports SMB Direct (SMB over RDMA) and multichannel, though a custom DKMS module was needed to compile it.
  • 5. ZFS users must use Rocky Linux 9, not Rocky 10. As of March 2026, ZFS support on Rocky 10 is not yet stable, and the Level1Techs forum guide explicitly recommends Rocky 9 for ZFS-based storage builds.
  • 6. PCIe slot placement and signal integrity are critical pitfalls. PCIe Gen 5 errors can silently degrade performance; on some ASUS Threadripper boards, lower slots with retimers actually outperform upper slots wired directly to the CPU, and AER error reporting must stay enabled.
  • 7. The client-side workload runs on a 96-core Threadripper with 768 GB of RAM and 16 TB of local flash. Running Houdini, DaVinci Resolve, and Deadline render jobs over a 100GbE uplink to the 400GbE backbone, this machine stress-tests whether the storage server can keep one render job from monopolizing bandwidth.
  • 8. Windows RDMA support with a Mellanox ConnectX-5 remains unresolved in Part 1. RDMA on Windows 11 Pro with the ConnectX-5 wouldn't reliably enable, while Linux clients worked; MikroTik's QoS features can manage per-client bandwidth at the switch level, so that the bottleneck is always the network rather than the drives.
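The headroom claims in points 1, 2, and 7 reduce to simple arithmetic. A quick sanity check, using only the figures quoted in the summary (per-drive throughput, PCIe Gen 5 lane count, link speeds):

```python
# Back-of-the-envelope bandwidth check for the build described above.
# All figures come from the summary: 4x Kioxia CM7 at ~15 GB/s each,
# a PCIe Gen 5 x16 slot, and a 400GbE network link.

GB_PER_S_PER_GEN5_LANE = 4              # ~4 GB/s per PCIe Gen 5 lane (raw)
pcie_x16 = 16 * GB_PER_S_PER_GEN5_LANE  # 64 GB/s slot ceiling

drives = 4
per_drive = 15                          # GB/s per CM7, per the summary
total_drive_bw = drives * per_drive     # 60 GB/s of aggregate flash

link_400gbe = 400 / 8                   # 50 GB/s line rate (overhead ignored)
target = 40                             # GB/s goal stated in the video

print(f"PCIe Gen 5 x16 ceiling: {pcie_x16} GB/s")
print(f"4x CM7 aggregate:       {total_drive_bw} GB/s")
print(f"400GbE line rate:       {link_400gbe} GB/s")
print(f"{target} GB/s target fits under all three limits: "
      f"{target <= min(pcie_x16, total_drive_bw, link_400gbe)}")
```

Note that the four drives (60 GB/s) sit just under the x16 slot's 64 GB/s, so the 40 GB/s target leaves headroom at every stage; a single 100GbE client uplink (12.5 GB/s), by contrast, is only about a quarter of that target, which is what makes switch-level QoS interesting.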

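Point 3's claim about CPU saturation without RDMA also falls out of the numbers. Assuming common Ethernet frame sizes (1500-byte standard and 9000-byte jumbo MTUs, an illustrative assumption, not figures from the video), the per-packet rate a CPU would have to sustain at 400 Gb/s is:

```python
# Why per-packet CPU processing collapses at 400GbE (point 3 above).
# MTU values below are common defaults, assumed here for illustration.

link_bits_per_s = 400e9  # 400GbE line rate

for mtu_bytes in (1500, 9000):  # standard vs jumbo frames
    pkts_per_s = link_bits_per_s / (mtu_bytes * 8)
    print(f"MTU {mtu_bytes}: ~{pkts_per_s / 1e6:.1f} million packets/s "
          f"for the CPU to touch without RDMA")
```

Even with jumbo frames that is millions of interrupts' worth of work per second; RDMA sidesteps it by letting the NIC move payloads directly between the peers' memory, so the host CPU never handles the bulk data path.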