Intel's make-or-break 18A process node debuts for data center with 288-core Xeon 6+ CPU — multi-chip monster sports 12 channels of DDR5-8000, Foveros Direct 3D packaging tech
Intel unveils x86 CPU with the industry's highest core count.
Intel this week formally introduced its Xeon 6+ processors, codenamed 'Clearwater Forest', which pack up to 288 energy-efficient Darkmont cores and are the first data center CPUs made on the company's 18A fabrication process (1.8nm-class). Intel aims its Xeon 6+ 'Clearwater Forest' processors primarily at telecom, cloud, and edge AI workloads, as they feature Advanced Matrix Extensions (AMX), QuickAssist Technology (QAT), and Intel vRAN Boost technologies.
Intel's Xeon 6+ processors with up to 288 cores combine 12 compute chiplets, each containing 24 energy-efficient Darkmont cores, that are produced using 18A manufacturing technology, two I/O tiles made on the Intel 7 production node, as well as three active base tiles made on the Intel 3 fabrication process. The compute tiles are stacked on top of the base dies using Intel's Foveros Direct 3D technology, whereas lateral connections are enabled by Intel's EMIB bridges.


Intel's 'Darkmont' efficiency cores have received rather meaningful microarchitectural upgrades. Each core integrates a 64 KB L1 instruction cache, a broader fetch and decode pipeline, and a deeper out-of-order engine capable of tracking more in-flight operations. The number of execution ports has also been increased in a bid to improve both scalar and vector throughput under heavily threaded server workloads.
From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block, which works out to 1 MB per core and 288 MB of L2 across the full 288-core package. On top of that, the last-level cache on the base tiles brings the total on-package cache to roughly 1,152 MB. This unusually large pool is intended to keep data close to hundreds of active cores and reduce dependence on external memory bandwidth, which in turn is meant to both increase performance and lower power consumption.
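The cache arithmetic can be sanity-checked with a quick back-of-the-envelope sketch (the figures are the ones quoted above; nothing here comes from an official Intel spec sheet):

```python
# Sanity check of the cache figures quoted in the article.
cores = 288
cores_per_block = 4          # four-core blocks share one L2 slice
l2_per_block_mb = 4          # ~4 MB of L2 per block

blocks = cores // cores_per_block                   # 72 blocks
total_l2_mb = blocks * l2_per_block_mb              # 288 MB of L2 in total
per_core_l2_mb = l2_per_block_mb / cores_per_block  # 1 MB per core

print(blocks, total_l2_mb, per_core_l2_mb)  # 72 288 1.0
```

Note that 288 MB of L2 is well short of 1 GB on its own; the 1,152 MB figure only works out once the last-level cache on the base tiles is included.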
Platform-wise, the processor remains drop-in compatible with the current Xeon server socket. The CPU offers 12 memory channels supporting DDR5-8000 and 96 PCIe 5.0 lanes, 64 of which also support CXL 2.0.
Intel positions Clearwater Forest for telecom and cloud workloads. The company says operators deploying 5G Advanced and future 6G networks increasingly rely on server CPUs for virtualized RAN and edge AI inference, as they do not want to re-architect their data centers to accommodate AI accelerators. By combining matrix/vector acceleration, vRAN offloads (via vRAN Boost), large caches, and broad I/O in one platform, the CPU can handle jobs normally reserved for discrete accelerators that consume more power and take up space.
In addition, the extreme core count of Xeon 6+ 'Clearwater Forest' CPUs (up to 288 cores in uniprocessor configurations and 576 cores in dual-socket configurations) enables a single server to host dozens or even hundreds of virtual machines while maintaining power efficiency and low latency.
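As a rough illustration of that VM-density claim, consider a dual-socket server (the per-VM vCPU count below is an assumption made for the example, not an Intel figure):

```python
# Hypothetical VM-density estimate for a dual-socket Clearwater Forest server.
cores_per_socket = 288
sockets = 2
vcpus_per_vm = 4             # assumed small-VM size, purely illustrative

total_cores = cores_per_socket * sockets  # 576 physical cores
# Assuming one vCPU per physical core (no oversubscription):
vms = total_cores // vcpus_per_vm         # 144 VMs
print(total_cores, vms)  # 576 144
```

With oversubscription, which is common in cloud deployments, the number of hosted VMs would be higher still.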
Systems based on Intel's Xeon 6+ processors will be available later this year.

-
bit_user
Last things first:
The article said: "Systems based on Intel's Xeon 6+ processors will be available later this year."
Okay, so not a launch, but rather just a formal announcement.
The article said: "From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. As a result, the aggregate last-level cache across the full package surpasses 1 GB"
"As a result"?? No, 4 MB per 4 cores = 1 MB per core. So, the total amount of L2 cache is just 288 MB.
I have seen where it has 1,152 MB of last-level cache, but the way it's written makes it sound like you're just talking about L2.
The article said: "Intel aims its Xeon 6+ 'Clearwater Forest' processors primarily for telecom, cloud, and edge AI workloads as they feature Advanced Matrix Extensions (AMX)"
Wait, this has AMX?? Or, is it just the Diamond Rapids members of the family that would have that?
I'd be rather surprised if Clearwater Forest has AMX, since that adds a non-trivial amount of die area per core.
Also, what about AVX-512?
-
bit_user
So, reviewing what they've previously disclosed about, I see that mentions of AMX, AVX-512, and AVX10 are conspicuously absent.
https://www.tomshardware.com/desktops/servers/intel-reveals-288-core-xeon
Another downside is that it's still PCIe 5.0-based (and CXL 2.0). By contrast, AMD's Venice, probably launching around the same time, is moving up to PCIe 6.0 / CXL 3.0.
Here are some other juicy tidbits, from the above article (17% IPC boost on SPECint!):
-
abufrejoval
Funny how that piece of exciting news is so utterly boring to me, not just as a consumer, but also as a former technical architect.
The scale you need to make these worth having is so far beyond anything a technically minded individual might be responsible for, it can really only appeal to bean-counters, or perhaps a very abstract mind.
I've thrived on hands-on work and the confidence I gained after trying to break things. Breaking these would be way above my pay grade, so you'd have to rely on blind faith. That tends to cause bigger messes, not fewer.
-
abufrejoval
bit_user said: "So, reviewing what they've previously disclosed about, I see that mentions of AMX, AVX-512, and AVX10 are conspicuously absent."
These aren't meant to replace P-core Xeons, but front-end ARMs.
I'd have been surprised to see AMX there, not even sure there is much of any floating-point in front-ends.
Quite honestly it's the kind of workload I'd have expected to move into network ASICs as IP blocks, but that's a hosting customer decision and I'm not sure which hyperscalers expose programmable fabrics to customers today.
-
abufrejoval
I am a bit sceptical when it comes to this gigantism. On the one hand, even much bigger chips are unlikely to ever cover everyone's scale-up requirements, so scale-out needs to be part of the design anyway.
When you do scale-out, putting more eggs in one basket creates a conflict between the cost of consolidation and the cost of administering more instances. Finding that sweet spot won't be easy, and you may need to maintain it across several generations beyond a single physical server, since complete slash-and-burn or green-field deployments aren't normal beyond the first build-out.
Well, good thing that's no longer my job!