OpenClaw-fueled ordering frenzy creates Apple Mac shortage — delivery for high Unified Memory units now ranges from 6 days to 6 weeks
AI is coming for high-end Mac Studios and Mac minis, too.
Get 3DTested's best news and in-depth reviews, straight to your inbox.
Some Apple customers have recently been surprised by long order lead times on several Mac models with upgraded Unified Memory configurations. While base-model units of the MacBook Air, iMac, M4 Mac mini, and other entry-level machines are still available the same day, upgrading the memory can now push delivery wait times out by up to three weeks.
Opting for the highest possible memory capacity on high-end models stretches the wait even further: an M3 Ultra Mac Studio with 512GB of Unified Memory currently takes five to six weeks to be delivered.
In an X post, Alex Finn, founder and CEO of Creator Buddy, linked the shortage to demand driven by “the world’s first true AI agent,” which we presume to be OpenClaw (previously Clawdbot/Moltbot), the locally run, open-source AI agent that’s taking the internet by storm.
Something big is happening. First Mac Minis. Now Mac Studios. Completely sold out. When I bought 2 Mac Studios a month ago my wait was 14 days. Now the wait is 54 days. The world has changed more in the last month than in the previous 100 years combined. The world's first… https://t.co/GMDgLeQzQu — February 13, 2026
While data centers are hungry for AI GPUs and some startups train models on multi-GPU rigs built from gaming cards, neither setup is ideal for a personal agentic AI run locally. This is especially true if your agent uses a large 70-billion-parameter model in FP16, which would require around 140GB of memory just for the weights, according to AI investor Ben Pouladian. That won’t fit inside a single RTX 5090 with 32GB of VRAM, and even if you manage to connect five such graphics cards for a total of 160GB of memory, you’re still bound by the PCIe bottleneck.
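The 140GB figure follows directly from the parameter count: each FP16 weight takes two bytes. A quick sketch of the arithmetic (the quantization options shown are common choices, not something the article specifies):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed for model weights alone, in decimal gigabytes.

    Note: this covers weights only — the KV cache and activations
    need additional memory at inference time.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9  # simplifies to params * bytes


# 70B parameters at FP16 (2 bytes per parameter)
fp16 = weight_memory_gb(70, 2.0)   # 140.0 GB — matches the figure cited above
# A common 4-bit quantization (~0.5 bytes per parameter) as a comparison
q4 = weight_memory_gb(70, 0.5)     # 35.0 GB

print(f"FP16: {fp16:.0f} GB, 4-bit: {q4:.0f} GB")
```

Even aggressively quantized, a 70B model leaves little headroom on a 32GB consumer GPU once the KV cache is accounted for, which is why high-memory machines are attractive here.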
Apple’s Unified Memory architecture sidesteps that problem. Even though HBM is still far faster than the LPDDR used in Macs and MacBooks, the fact that all the processing units — CPU, GPU, and NPU — share the same memory pool means they don’t have to shuttle data over PCIe or rely on interconnects like NVLink, which is typically found only on data-center-class graphics cards.
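To see why the PCIe link matters, compare how long it takes to stream a 70B model's FP16 weights once over a PCIe 5.0 x16 link versus reading them from the M3 Ultra's unified memory. The bandwidth numbers below are approximate published peak figures (~63 GB/s for PCIe 5.0 x16, 819 GB/s for the M3 Ultra), so treat this as a back-of-the-envelope illustration:

```python
# Rough time for one full pass over 140 GB of weights at peak bandwidth.
WEIGHTS_GB = 140
PCIE5_X16_GB_PER_S = 63    # ~peak throughput of a PCIe 5.0 x16 link
M3_ULTRA_GB_PER_S = 819    # Apple's quoted unified memory bandwidth

pcie_seconds = WEIGHTS_GB / PCIE5_X16_GB_PER_S
unified_seconds = WEIGHTS_GB / M3_ULTRA_GB_PER_S

print(f"PCIe 5.0 x16: {pcie_seconds:.2f} s per pass")
print(f"Unified memory: {unified_seconds:.2f} s per pass")
```

Token generation has to touch every weight per token, so a roughly 13x bandwidth gap translates directly into slower inference whenever weights must cross the PCIe bus.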
The world is just catching up on what we’ve been doing since 2024. For the last 2 years, at Eternal AI, we’ve been running clusters and clusters of Mac Studios. These Mac clusters are perfect for long-running agentic tasks and local private LLMs. Welcome home, @openclaw 🦞 https://t.co/LBMEeD5Cwi pic.twitter.com/EIPn7B6rqR — February 13, 2026
Because of this, more and more people who want to run their own local AI agent are purchasing high-memory Mac models. This isn’t limited to M3 Ultra Mac Studio units with 512GB of memory. Even Mac minis and MacBook Pros with upgraded memory now have a waiting time of two to three weeks.
We cannot definitively say these delays are caused entirely by large numbers of people buying these machines to run their own AI models, as Apple CEO Tim Cook has admitted the company is chasing memory supply to meet high customer demand. Still, additional pressure from the consumer side certainly won’t help a memory chip shortage that is, at the moment, driven primarily by AI hyperscalers and institutional buyers.
Follow 3DTested on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.
