Arm debuts a 136-core AGI CPU
Arm's first foray into selling production chips, not just licensing IP.
Arm has announced the AGI CPU, a data center processor series with up to 136 cores that the company designed and will sell as finished silicon. The chip, built on TSMC's 3nm process with Neoverse V3 cores, was co-developed with Meta and marks the first time in Arm's 35-year history that the company has shipped its own production processor rather than licensing IP to partners.
The AGI CPU has been designed for what Arm calls "agentic AI infrastructure," the CPU-side orchestration work required to coordinate accelerators and manage data movement in large-scale AI deployments.
300W powers 136 Neoverse cores
The processor packs up to 136 Neoverse V3 cores across two dies, running at up to 3.2 GHz all-core and 3.7 GHz in boost, within a 300-watt TDP. Memory support spans 12 channels of DDR5 at 8800 MT/s, yielding more than 800 GB/s of aggregate bandwidth, roughly 6 GB/s per core, with a target of sub-100ns latency. I/O comprises 96 PCIe Gen6 lanes and built-in CXL 3.0 support for memory expansion and pooling.
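The quoted bandwidth figures check out with simple arithmetic. A quick sketch (the 8-byte channel width is the standard 64-bit DDR5 data path, an assumption on our part rather than something Arm stated):

```python
# Back-of-the-envelope check of Arm's quoted memory-bandwidth figures.
# Assumes a standard 64-bit (8-byte) DDR5 channel; not from the announcement.

CHANNELS = 12
TRANSFER_RATE_MT_S = 8800   # mega-transfers per second (DDR5-8800)
BYTES_PER_TRANSFER = 8      # 64-bit channel width
CORES = 136

total_gb_s = CHANNELS * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
per_core_gb_s = total_gb_s / CORES

print(f"aggregate: {total_gb_s:.1f} GB/s")    # 844.8 GB/s -> "more than 800 GB/s"
print(f"per core:  {per_core_gb_s:.2f} GB/s") # ~6.2 GB/s -> "6 GB/s per core"
```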
Arm's reference platform is a 10U dual-node server compliant with the Open Compute Project's DC-MHS standard. Each blade holds two AGI CPUs, and a standard air-cooled 36kW rack fits 30 blades for 8,160 cores in total. Arm has also partnered with Supermicro on a liquid-cooled 200kW configuration that packs 336 chips and exceeds 45,000 cores.
Arm claims the AGI CPU delivers more than twice the performance per rack of the latest x86 platforms. That figure comes from the company's own internal projections for now, not independent benchmarks.
GPUs have dominated AI hardware headlines until now, but demand for beefier general-purpose compute is growing as agentic systems like OpenClaw gain traction. Arm is clearly hoping to meet and cash in on that demand, hopefully without further sidelining non-AI customers, who seem to have long since been forgotten by the likes of Nvidia and Micron.
OpenAI among early customers
Meta acted as the primary collaborator for the initiative and intends to implement the AGI CPU alongside its custom MTIA accelerators. Santosh Janardhan, head of infrastructure at Meta, said the two companies worked together on the chip and are committed to a multi-generation roadmap.
Beyond Meta, Arm confirmed commercial commitments from Cerebras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom. Sachin Katti, head of industrial compute at OpenAI, said the AGI CPU will play a role in OpenAI's infrastructure by strengthening the orchestration layer that coordinates large-scale AI workloads.
Arm has traditionally been a pure IP licensor. Its partners, from Apple to Nvidia to AWS, design their own processors using Arm's instruction set architecture and core designs. The AGI CPU adds a third option alongside IP licensing and Arm's Compute Subsystems (CSS) program: Arm-designed, production-ready silicon that customers can deploy immediately.
Arm stated that the AGI CPU series will continue alongside the Arm Neoverse CSS product line, and that follow-on generations are already committed. The company is keen to frame this as an additive move rather than a pivot that competes with existing licensees, though how Arm manages that as it sells chips into the same data centers as Nvidia Grace, AWS Graviton, Google Axion, and Microsoft Cobalt remains to be seen.
