Doradus Research
Notes from running an on-prem AI cluster: consumer- and workstation-class GPUs, multi-vendor inference stacks, multi-model serving. Operator perspective: what actually breaks, what the docs don't say, and the configurations that ended up working in production.
Code at github.com/DoradusResearch. Hardware: 3 GPU compute nodes carrying 10× RTX PRO 6000 Blackwell (95 GiB each) + 4× RTX 5090, 2× DGX Spark (GB10, 128 GiB UMA), 2× Mac Studio M3 Ultra (256 GiB UMA each). ~1.3 TB system RAM and ~75 TB of storage in four tiers: hot NVMe, an erasure-coded warm cluster, a shared NFS model cache, and SMB cold archive. 100 GbE fiber backbone, scheduler + service mesh with mTLS, perimeter firewall appliance. All on-prem.
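For a quick sense of what that fleet can actually hold, here is a back-of-the-envelope capacity tally in Python. The per-device figures for the PRO 6000s, Sparks, and Mac Studios come straight from the list above; the RTX 5090's ~32 GiB of usable VRAM is an assumption from the card's public spec, not a number stated in these notes.

```python
# Back-of-the-envelope accelerator-memory tally for the fleet above.
# The RTX 5090 figure (~32 GiB) is assumed from the public spec, not
# measured on this cluster; everything else is taken from the notes.
FLEET = [
    # (device, count, GiB per device, memory kind)
    ("RTX PRO 6000 Blackwell", 10, 95,  "discrete"),
    ("RTX 5090",                4, 32,  "discrete"),  # assumed spec value
    ("DGX Spark (GB10)",        2, 128, "unified"),
    ("Mac Studio M3 Ultra",     2, 256, "unified"),
]

def totals(fleet):
    """Sum accelerator-addressable memory, split discrete VRAM vs. unified."""
    discrete = sum(n * gib for _, n, gib, kind in fleet if kind == "discrete")
    unified  = sum(n * gib for _, n, gib, kind in fleet if kind == "unified")
    return discrete, unified

if __name__ == "__main__":
    d, u = totals(FLEET)
    print(f"discrete VRAM: {d} GiB, unified memory: {u} GiB, total: {d + u} GiB")
```

Under those assumptions the cluster works out to roughly 1078 GiB of discrete VRAM plus 768 GiB of unified memory, which is the budget the multi-model serving posts measure against.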