Doradus Research
Notes from running a small on-prem AI cluster — consumer-grade GPUs, multi-vendor inference stacks, multi-model serving. Operator perspective: what actually breaks, what the docs don't say, and the configurations that ended up working in production.
Code at github.com/DoradusResearch. Hardware: 3 GPU compute nodes carrying 10× RTX PRO 6000 Blackwell (95 GiB each) and 4× RTX 5090, plus 2× DGX Spark (GB10, 128 GiB UMA) and 2× Mac Studio M3 Ultra (256 GiB UMA each). ~1.3 TB system RAM and ~75 TB of tiered storage across local NVMe, a MinIO erasure-coded warm cluster, QNAP NFS, and a Synology cold archive. Orchestrated with Nomad + Consul over Tailscale. All on-prem.