AMD denies reports of delays, asserting that its rack-scale offerings remain on track, with no disruption to the planned timeline for next-gen deployments.
Is UALink to blame?
AMD's forthcoming Instinct MI455X may face production and deployment delays, according to a report from SemiAnalysis, though AMD immediately disputed the claim.
By contrast, Nvidia's Vera Rubin platform for AI data centers may reach the market earlier than anticipated (according to Evercore, via @halfblindmonkey), as the silicon is already in mass production. The company still needs to finalize its AI server and VR200 NVL72 rack-scale solution designs and validate them with customers shortly to begin volume shipments and make good on the ambitious availability claims it made at CES this year.
Engineering samples of the MI455X are expected to arrive in the first half of 2026, with volume production following in the second half of the year, according to the SemiAnalysis report.
"Well, your assessment is still wrong," stated Anush Elangovan, corporate vice president of software development at AMD, within an X post. "On target for 2H 2026."
AMD's Helios rack-scale solutions for AI pack 72 Instinct MI455X accelerators with 31 TB of HBM4 memory and are designed to deliver 2.9 exaFLOPS of FP4 compute for AI inference and 1.4 exaFLOPS of FP8 compute for AI training. Originally, AMD's debut rack-scale AI configuration was expected to use UALink interconnects for scale-up networking to reach peak performance. However, it appears that the initial Helios units will rely on a different approach, with Ethernet connections potentially offering better performance.
We don't know for sure whether a UALink delay is to blame, but recent reports suggest the issue lies with timing, not the technology itself.
"Solid traction continues to develop with respect to UALink with a vibrant ecosystem, including product announcements, broad IP availability, and compliance methodologies being finalized," said Jitendra Mohan, chief An official from Astera Labs, throughout the corporation's teleconference with market researchers and stakeholders. Recent announcements and ongoing efforts reveal growing adoption, with AWS and other stakeholders advancing together. UALink continues to be the most efficient and fastest completely open option for AI expansion networking, and we are set to meet the first waves of customer platform deployments during 2027.
Meanwhile, if reports are to be believed, Nvidia's forthcoming chip could arrive as early as next year, with the company potentially pulling its timeline forward. Given that Jensen Huang said the Vera Rubin platform entered production by early January, it is quite likely that several of Nvidia's top clients will receive the new AI platform sooner than anticipated.
"Some believe that China ban has enabled Nvidia to leverage suppliers that have typically served China to work on worldwide product development, enabling Rubin to be 3 – 6 months ahead of schedule," an Evercore note for Clients reads. "Some would not be surprised if Rubin shipments happen by end of Q2 2026. Hyperscalers note that Vera CPU, Rubin GPU [are] already in fabrication and running test/validation."
If Nvidia manages to accelerate its timeline while AMD lags behind, it could further solidify its lead, as developers of cutting-edge AI systems will keep building on Nvidia hardware.
