Intel Xeon 6 selected as host CPU for Nvidia DGX Rubin NVL8 systems — Intel wins the contract even as Nvidia enters the data center CPU market with its own Vera chips
Xeon 6 brings a 2.3x memory bandwidth improvement over its predecessor.
Intel announced today at Nvidia GTC 2026 in San Jose that its Xeon 6 processor will serve as the host CPU in Nvidia's DGX Rubin NVL8 systems, extending the x86 pairing the two companies established with the Xeon 6776P in current DGX B300 Blackwell-based platforms.
The DGX Rubin NVL8 is Nvidia's next-generation flagship AI server system. In that configuration, the host CPU is responsible for task orchestration, memory management, scheduling, and data movement to the GPU accelerators. With inference workloads shifting toward agentic AI and reasoning systems, those functions place increasingly heavy demands on per-core performance and memory bandwidth.
Intel said Xeon 6 addresses those demands through a combination of memory capacity, bandwidth, and I/O capabilities. The platform supports up to 8TB of system memory, which Intel cited as key for supporting large language models with growing key-value caches.
Meanwhile, memory bandwidth has improved 2.3x generation-over-generation via MRDIMM technology, raising the rate at which data reaches the GPU accelerators. PCIe 5.0 lanes handle high-bandwidth accelerator connectivity, and a feature Intel calls Priority Core Turbo dedicates strong single-thread performance to orchestration, scheduling, and data movement tasks, keeping GPU utilization high as workload complexity increases.
Security coverage extends across the CPU-to-GPU data path through Intel Trust Domain Extensions (TDX), which adds hardware-rooted isolation and attestation via an Encrypted Bounce Buffer. Intel said end-to-end confidential computing is increasingly required as AI inference scales across data center, cloud, and edge deployments. Xeon 6 now also supports Nvidia Dynamo, an inference orchestration framework that enables heterogeneous scheduling across CPU and GPU resources within the same cluster.
"In this new era, the host CPU is mission-critical," said Jeff McVeigh, corporate vice president and general manager of Data Center Strategic Programs at Intel. "It governs orchestration, memory access, model security, and throughput across GPU-accelerated systems."
Intel also cited Xeon's x86 software ecosystem and enterprise deployment history as factors in the selection, noting compatibility with existing AI software stacks. The DGX Rubin NVL8 configuration builds on the same architectural foundation as DGX B300, giving operators platform continuity between Blackwell and Rubin generations.
