Unable to finance data centers of its own, OpenAI has settled for controlling the hardware inside leased facilities instead, while its chip-development ambitions still trail Google's and Amazon's.
Financing shortfalls triggered a year of bruising negotiations that reshaped OpenAI’s infrastructure ambitions.
OpenAI spent much of 2025 trying to establish its own AI data centers, only to find it could not raise capital at attractive rates. According to a report from The Information, that collapse set off a chain of negotiations and concessions that gradually pushed the company's ambitions further down the stack. Rather than owning the real estate, OpenAI has reportedly refocused on controlling what goes inside the buildings, while assembling one of the industry's most ambitious multi-vendor chip procurement plans.
If you can't build it, rent it
The effort reportedly began right after the White House's Stargate announcement in January 2025. OpenAI employees fanned out across the country, scouting sites that could host facilities of 800 megawatts to 1.2 gigawatts each, focusing on locations where substantial power would come online in 2026 and 2027. Executives reportedly floated spinning Stargate off as an independent company that would build the facilities and lease them back to OpenAI, while others saw it purely as a financial vehicle for raising money for chips and infrastructure.
Ultimately, none of this came to pass. According to sources cited by The Information, when OpenAI ran the numbers, it became clear the company would pay a significant premium to secure financing on its own. Lenders reportedly offered materially better terms when a more creditworthy tenant, like Oracle, signed the lease instead.
A blockbuster deal between Oracle and OpenAI followed, with the pair pledging to build 4.5 gigawatts of data center capacity across several U.S. sites; the two companies reportedly share the economic risk of cost overruns and savings, a small but important detail that had not previously been publicly disclosed.
The Texas compromise
One Texas site, a 1 GW installation in Milam County, was of particular interest to OpenAI. The company had reportedly hoped it would become its first self-built data center, while SoftBank, the other principal Stargate partner, wanted to develop and own it outright. Between September and October 2025, OpenAI staff traveled to Japan several times for direct talks with SoftBank's Masayoshi Son, with negotiations reportedly stretching across hours-long meetings.
Those meetings produced a compromise announced on January 9 of this year: OpenAI and SoftBank each invested $500 million in SB Energy, with OpenAI selecting SB Energy to build and operate the Milam County campus. Sullivan & Cromwell, the law firm that advised OpenAI, said in a release that the agreement combines OpenAI's proprietary data center architecture with SB Energy's established strengths in speed, financial rigor, and integrated energy delivery, while OpenAI president Greg Brockman described the arrangement as pairing SB Energy's "strength in data center infrastructure and energy development" with "OpenAI's deep domain expertise in data center engineering." In other words, SoftBank builds and owns the project, while OpenAI controls the design.
According to The Information, that design control covers cluster architecture, cooling systems, rack layouts, and power infrastructure: four areas that together dictate every significant hardware decision made inside a facility.
Control over cluster architecture means it's OpenAI, not SoftBank, that decides how GPUs or custom accelerators are grouped, how many make up a single training or inference unit, and how they're interconnected. So while OpenAI doesn't own the land or the physical building, it has full say over the hardware, which is almost certainly what it wanted in the first place, even if the compromise means it doesn't own the project on paper.
Late to the party
OpenAI has built out a significant semiconductor roadmap since the Texas settlement, most of it officially confirmed. In September, OpenAI and Nvidia announced a letter of intent to deploy at least 10 gigawatts of Nvidia systems, with Nvidia set to invest as much as $100 billion in OpenAI as milestones are met; the first gigawatt, on the Vera Rubin platform, is targeted for the second half of 2026.
That arrangement has since evolved: Nvidia is now reportedly moving toward a $30 billion direct equity stake in OpenAI, no longer tied to deployment milestones, as part of OpenAI's current funding round at a $730 billion pre-money valuation. As of December, Nvidia's CFO confirmed the final contract was still pending, and uncertainties remain, with OpenAI's purchases continuing to flow through third-party cloud providers such as Microsoft and Oracle. The actual figures, in other words, remain a work in progress.
Then there's AMD, with which OpenAI announced a definitive agreement in October. It covers 6 gigawatts of AMD Instinct GPUs, starting with the MI450 series in the second half of 2026, and AMD has granted OpenAI a warrant for up to 160 million shares that vests as deployment targets are hit. A week later, on October 13, OpenAI and Broadcom announced a term sheet covering 10 gigawatts of OpenAI-designed custom AI accelerators, with racks "scaled entirely with Ethernet and other connectivity solutions from Broadcom," and in January, a confirmed $10 billion agreement with Nvidia rival Cerebras secured 750 megawatts of Wafer Scale Engine 3 supply through 2028 for low-latency inference tasks.
This split between training and inference makes sense. Nvidia's GPU ecosystem remains extremely difficult to displace for large-scale model training, where CUDA's maturity creates switching costs that are hard to overcome. Inference is different: Cerebras' wafer-scale architecture eliminates the inter-chip communication latency that constrains GPU clusters on latency-sensitive tasks, and custom ASICs reflect the same cost calculus that Google, Amazon, Meta, and Microsoft have already worked through: at sufficient scale, the upfront cost of chip design is dwarfed by per-unit savings across hundreds of thousands of chips. Amazon, for example, claims 30% to 40% cost savings on specific workloads using Trainium versus equivalent Nvidia hardware.
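The amortization logic can be sketched with a toy break-even calculation. The figures below are hypothetical, not reported numbers; they simply show how a fixed design cost (NRE) is recouped once per-chip savings are multiplied across a large enough deployment.

```python
# Illustrative break-even sketch for custom silicon vs. merchant GPUs.
# All dollar figures are assumptions for illustration, not reported data.

def breakeven_units(design_nre_usd: float,
                    merchant_cost_per_chip: float,
                    custom_cost_per_chip: float) -> float:
    """Number of chips at which total custom-silicon cost
    (NRE + per-unit) equals total merchant-GPU cost."""
    savings_per_chip = merchant_cost_per_chip - custom_cost_per_chip
    if savings_per_chip <= 0:
        raise ValueError("custom chip must cost less per unit to ever break even")
    return design_nre_usd / savings_per_chip

# Hypothetical: $500M design program, $30k merchant GPU, $20k custom accelerator
units = breakeven_units(500e6, 30_000, 20_000)
print(f"break-even at {units:,.0f} chips")  # break-even at 50,000 chips
```

At a hypothetical 50,000-chip break-even, a deployment of several hundred thousand accelerators would put the program well past amortization, which is the scale at which the hyperscalers made the same call.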
The catch is that OpenAI is reaching this conclusion well after its competitors. Google began developing TPUs in 2013. Amazon introduced Inferentia in 2018. Microsoft started its Maia project around 2019. Any of those companies will tell you it's not the chip but the software stack that takes years to mature, and OpenAI is starting that climb now.
In November 2025, OpenAI announced the hiring of Intel's former chief technology and AI officer, Sachin Katti, to head its infrastructure division. According to The Information, his mandate is to develop OpenAI's data center intellectual property so future deals are built around the company's own hardware requirements. He reportedly oversees chip selection and the full compute roadmap, with the heads of data centers and industrial compute now reporting to him.
So while OpenAI still doesn't own any data centers, it holds architectural control over every site it occupies, a confirmed custom-accelerator program, operational access to Cerebras hardware, and a hardware leader whose job is to close the gap, following the same path every other hyperscaler has already taken.
