Exploring Meta and AMD's $100 billion agreement, and why AMD is giving up a stake in the company to secure GPU contracts — Meta remains platform-neutral as it shapes its AI compute strategy

Lisa Su next to data center hardware
(Image credit: Getty Images / Bloomberg)

Yesterday, AMD and Meta signed a deal to deliver 6GW of AI infrastructure, mirroring AMD's existing deal with OpenAI, signed last year. If AMD's stock reaches $600 by 2031, the agreement would reward Meta with up to 10% of the company in exchange for purchasing its GPUs. While this might seem drastic at first glance, a closer look reveals a more nuanced arrangement.

Firstly, AMD is clearly making a play for its Instinct MI400-based systems to be used in AI data centers, despite lacking the edge that Nvidia's Grace Blackwell and upcoming Rubin architectures can provide. Indeed, Nvidia remains the vendor of choice for customers running AI accelerators on training workloads.

Meta's position in the AI race

Over the past year, Meta's AI labs pivoted away from competing in frontier model development, instead focusing their resources on building what the company calls Personal Superintelligence. At the start of January, the firm launched its Meta Compute organization, tasked with expanding its data center capabilities; the new entity will consolidate control of Meta's entire technology stack. So, while Meta is no longer publicly competing with the latest frontier AI models, it has a clear goal and wants to scale up its AI operations.

Meta's AI spending in 2026 could reach up to $135 billion, up from $72 billion in 2025. Only last week, the firm also disclosed that it would use Nvidia's NVL72 rack-scale systems, featuring Arm-based Grace server CPUs and Spectrum-X switches with co-packaged optics. Meta is evidently pursuing a platform-independent strategy as it expands an AI computing build-out that is slated to consume tens of gigawatts of power.

AMD

(Image credit: AMD)

Meta will most likely use the NVL72 racks for frontier model development, training, and the most demanding inference tasks. Nevertheless, AMD's role in the arrangement becomes clearer when examining the strategy from a broader perspective.

Meta CEO Mark Zuckerberg clarified that the hardware deployed by AMD will primarily be used for 'inference' and 'personal superintelligence' workloads. The inference market is becoming increasingly crowded with all manner of custom silicon from the likes of Sambanova, Qualcomm, and Broadcom. Meta is also understood to be developing its own ASIC as part of the Meta Training and Inference Accelerator (MTIA) project, and we have yet to see the results of that work.

"Meta is taking a hybrid approach, with AMD becoming a major, if not primary, partner for specific AI inference workloads, while Nvidia continues to supply high-end hardware for training," says analyst Jon Peddie. Meta is apparently balancing its AI rollouts among different companies to avoid becoming entirely dependent on Nvidia's CUDA-based competitive moat.

He added that AMD's hardware will likely serve live AI traffic, such as sticker generation and image editing. So, while Meta may not currently be competing at the frontier, the company is still deploying AI at scale across a range of products.

Inside AMD's approach

Meta clearly sees an opportunity in deploying AMD's rack-scale Helios clusters, and AMD's stock showed some positive growth following the announcement. The incentive-linked warrant prices shares at a mere penny each, with 160 million units of common equity available as Instinct GPU deliveries are fulfilled. The first tranche vests after a 1-gigawatt shipment, with further tranches vesting up to the full six gigawatts through 2031, provided AMD's stock reaches the $600 threshold.
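The rough economics of that warrant can be sketched with some quick arithmetic. The warrant terms (160 million shares, a penny strike, a $600 target) come from the article; AMD's share count is an outside assumption used only to gauge the roughly 10% ownership figure, and the calculation ignores the dilution that issuing new shares would cause.

```python
# Back-of-the-envelope sketch of the Meta warrant economics.
# Warrant terms are from the article; the share count is an
# assumption for illustration, and dilution is ignored.

WARRANT_SHARES = 160_000_000   # shares Meta can earn as GPU deliveries vest
STRIKE_PRICE = 0.01            # one penny per share
TARGET_PRICE = 600.00          # stock-price threshold by 2031

# Assumed AMD shares outstanding (~1.6 billion).
AMD_SHARES_OUTSTANDING = 1_600_000_000

exercise_cost = WARRANT_SHARES * STRIKE_PRICE
value_at_target = WARRANT_SHARES * (TARGET_PRICE - STRIKE_PRICE)
ownership_fraction = WARRANT_SHARES / AMD_SHARES_OUTSTANDING

print(f"Exercise cost:  ${exercise_cost:,.0f}")    # $1,600,000
print(f"Value at $600:  ${value_at_target:,.0f}")  # ~$96 billion
print(f"Share of AMD:   {ownership_fraction:.0%}") # ~10%
```

At the $600 target, the warrant is worth roughly $96 billion for an exercise cost of only $1.6 million, which squares with both the "10% of the company" framing and the $100 billion headline figure.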

"Meta will be a lead customer for 6th Gen AMD EPYC CPUs, codenamed 'Venice,' and 'Verano,' a next-generation EPYC processor designed with workload-specific optimizations to deliver leadership performance-per-dollar-per-watt," AMD's statement reads. Following the announcement, AMD discussed the deal in a conference call. Regarding the hardware itself, CEO Dr. Lisa Su said that AMD intends to provide a custom MI450-based accelerator, specifically tuned for Meta's operational requirements, which is presently in hardware and software validation, with the initial rollout anticipated in the latter part of 2026.

"The Meta deployment is expected to generate data center AI revenue of significant double-digit billions of dollars per gigawatt. Income will start during the final six months of 2026 and grow in conjunction with our MI450 rollout for additional clients," stated Jean Hu, CFO at AMD.

AMD

(Image credit: AMD)

In a Q&A session during the call, Vivek Arya from BofA Securities raised a significant issue: if demand for AMD's AI accelerators is so robust, why does the firm seem to be offering equity to guarantee sales? This isn't the first deal of its kind, almost exactly mirroring OpenAI's deal from October 2025. Su argued that such deals benefit AMD shareholders: they guarantee a baseline of revenue, and they help AMD shape its own roadmap as the two companies co-develop hardware over the length of the agreement.

"So if you look at the structure of our warrants in this case, is -- again, it's a very aligned incentive structure. Meta is making a big bet on deploying at large scale for AMD, which is great. AMD gains from this extensive implementation, which provides financial growth, network stability, and application refinement. And assuming that we satisfy all of the purchases as well as the share price thresholds, AMD shareholders will benefit significantly, and Meta gets to benefit as part of that." Su said.

In short, in exchange for equity, AMD gets a guaranteed pipeline of orders, and in turn, value for its shareholders. Earlier in the call, Su noted that the deal would help AMD's long-term financials. Specifically, the firm is targeting an 80% compound annual growth rate (CAGR) driven by data centers and AI; once that objective is met, it expects its data center business to be worth $20 per share, a goal AMD plans to reach before 2031.

This highlights a major difference from Nvidia, which avoids giving out equity for contracts — the buyers line up regardless. If AMD delivers on its commitments with both Meta and OpenAI, it will have deployed 12 gigawatts of compute and given up 20% of the company in return for locking in AI accelerator buy-in. If the company's shares rise to $600 by 2031, that will become the new reality for one of the chip industry's longest-standing companies.

Sayem Ahmed
Subscription Editor