SpaceX acquires xAI in a bid to make orbiting data centers a reality — Musk plans to launch a million tons of satellites annually, targets 1TW/year of space-based compute capacity
Is this the start of Skynet?
Get 3DTested's best news and in-depth reviews, straight to your inbox.
SpaceX has officially announced its acquisition of xAI, allowing the two companies to vertically integrate their operations and help Elon Musk achieve his dream of artificial intelligence in space. According to the company’s announcement, space is the only logical solution to scaling AI data centers, as we do not have enough resources on Earth to power these systems.
“Current advances in AI are dependent on large terrestrial data centers, which require immense amounts of power and cooling. Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment,” the company said in its statement. “In the long term, space-based AI is obviously the only way to scale. Harnessing even a millionth of our Sun’s energy output would provide over a million times more energy than our civilization currently uses!”
The company has already begun taking the first steps toward this dream: its latest FCC filing mentions plans to launch a million satellites into orbit. These orbital data centers would directly harness the power of the sun without interference from the Earth’s atmosphere or rotation, allowing them to run more efficiently than terrestrial infrastructure.
This isn’t a small project, either. Musk says that “launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, with no ongoing operational or maintenance needs.” He has even mentioned launching up to 1TW/year, which would make this orbital constellation by far the most powerful data center operated by any AI company.
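Musk's headline numbers are simple to check. Here is a rough sketch using his stated figures (the launch mass and power density are his claims, not independently verified):

```python
# Back-of-envelope check of the figures quoted above.
TONS_PER_YEAR = 1_000_000   # claimed annual launch mass, in tons
KW_PER_TON = 100            # claimed compute power per ton of satellite

# 1,000,000 tons x 100 kW/ton = 1e8 kW = 100 GW of new capacity per year
added_gw_per_year = TONS_PER_YEAR * KW_PER_TON / 1_000_000
print(f"Added capacity: {added_gw_per_year:.0f} GW/year")

# Hitting the 1 TW/year figure at the same power density would take
# ten times the launch mass:
tons_for_1tw = 1e9 / KW_PER_TON   # 1 TW = 1e9 kW
print(f"Mass needed for 1 TW/year: {tons_for_1tw / 1e6:.0f} million tons")
```

At the stated 100 kW per ton, the 1TW/year target implies launching roughly 10 million tons of satellites every year.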
Although launching satellites into space is an expensive and resource-intensive endeavor, Musk claims that the efficiency of these data centers would make them “the lowest cost way to generate AI compute.” This is made possible by SpaceX’s advancements with the reusable Starship rocket, which will also launch the newer, much bigger V3 Starlink satellites this year. He also mentioned plans to use the platform to build a manufacturing base on the moon and, from there, launch up to 1,000TW/year of compute capacity into deep space, helping humanity become a Kardashev Type II civilization.
Despite Musk’s massive financial resources, his dream still faces some challenges, which is why Nvidia CEO Jensen Huang doubts whether this project will work. For one, electronics like advanced AI chips are susceptible to cosmic radiation, which can corrupt data and fry circuits. There’s also the question of cooling: the convection-based solutions that work on Earth’s surface don’t apply in a vacuum, leaving radiators that dump waste heat into space as the only option. Last, but not least, putting so many satellites in orbit around the Earth risks a Kessler Syndrome event, which could generate enough space junk to make launching anything, from satellites to crewed deep-space missions, an utter impossibility for the next couple of centuries.
Tanakoi: "...electronics like advanced AI chips are susceptible to cosmic radiation, corrupting data and frying circuits..."
NASA has already run and tested AI chips on the ISS, and Starcloud has operated Nvidia's H100 in orbit.
There’s also the question of cooling, as the usual solutions that work on Earth’s surface aren’t applicable in space
A simple ammonia loop like the one the ISS uses works fine for cooling. Or, for higher efficiency, a multi-stage system using PAO or even water as the working fluid.
...putting so many satellites in orbit around the Earth risks a Kessler Syndrome event...
This is just anti-tech fear porn. There's far too much volume in a 1000-km-thick shell around the Earth, especially when one considers that these satellites are designed to deorbit at end of life. They're also built to resist fragmentation far better than the 1970s-era satellites which gave rise to the concept. -
bit_user Oh, I knew this would happen. Musk is eventually going to merge all of his companies. They'll all eventually sink into the abyss, together.
But this one made the most sense to happen first. SpaceX is the one that's "too big to fail" (i.e., too systemically important) and probably has the best ability to raise money. Meanwhile, xAI is burning through vast amounts of cash. So, SpaceX is bailing out xAI, and then it can either issue more shares or get a government bailout.
Eventually, after SpaceX buys Tesla and inevitably enters chapter 11, I hope a judge forces it to spin off the rocket business and lets the rest die in a fire. -
bit_user Reply
Tanakoi said: A simple ammonia loop as the ISS used works fine for cooling. Or for higher efficiency a multi-stage system using PAO or even water as the working fluid.
The scale of the cooling needed is what poses a problem for orbital datacenters. I'm not saying it can't be solved, but doing so will probably add a lot of mass, which disproportionately increases launch costs.
You probably need to radiate the heat out into space with an array of heatsinks that's approximately the same size as the solar array. I think that's what we worked out, in the first thread about orbital datacenters. And for that to work, you need to transfer the heat out across that entire structure. That sounds pretty massive, to me. -
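The radiator-area argument here follows from the Stefan-Boltzmann law. A minimal sketch, assuming an illustrative 1 MW heat load, a two-sided panel, 0.9 emissivity, and a 330 K radiator temperature (all illustrative values, not figures from SpaceX or the commenters):

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2*K^4)
EMISSIVITY = 0.9     # assumed radiator surface emissivity
T_RADIATOR = 330.0   # assumed radiator temperature in kelvin
HEAT_LOAD_W = 1e6    # assumed waste heat to reject: 1 MW

# Radiated flux from both faces of a panel looking at deep space
flux_w_per_m2 = 2 * EMISSIVITY * SIGMA * T_RADIATOR**4

area_m2 = HEAT_LOAD_W / flux_w_per_m2
print(f"Radiator area for 1 MW at {T_RADIATOR:.0f} K: {area_m2:.0f} m^2")
```

That works out to roughly 800 square metres per megawatt at these assumptions, the same order of magnitude as the solar array needed to generate that megawatt in the first place, which is the point being made above.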
basel8 Reply
bit_user said: The scale of the cooling needed is what poses a problem for orbital datacenters. I'm not saying it can't be solved, but doing so will probably add a lot of mass, which disproportionately increases launch costs. You probably need to radiate the heat out into space with an array of heatsinks that's approximately the same size as the solar array.
Radiative cooling is far less efficient than convective or conductive cooling, so you're right that a lot more material will be needed to cool the data center down. Just like the hyperloop, this whole notion has several fatal flaws that call its viability into question. -
bigdragon "SpaceX Bails Out xAI" should be the headline. Usually, when someone rich has one of their companies acquire another company of theirs, it's because the company being acquired is failing financially while having some important IP worth protecting.
I think the bigger story is what's going on at Tesla. How much longer until SpaceX has to acquire Tesla too? You know, the car company that decided it no longer wants to be a car company and just wants to do robotics, subscriptions, and autonomous taxis? Yeah, that Tesla. -
bit_user Reply
bigdragon said: I think the bigger story is what's going on at Tesla. How much longer until SpaceX has to acquire Tesla too?
Yup. I predict that's not far off. -
Findecanor Reply
Tanakoi said: This is just anti-tech fear porn. There's far too much volume in a 1000-km thick shell around the earth, especially when one considers these satellites are designed to deorbit at EOL. They're also built to resist fragmentation far better than 1970s-era satellites which gave rise to the concept.
Last year, Donald Kessler, the man for whom "Kessler Syndrome" is named, co-authored an article claiming that we are dangerously close, warning that the number of satellites planned for launch (by the likes of StarLink) is unsustainable.
Links: SemanticScholar, ResearchGate
Current satellites do a lot of course corrections to avoid space debris. For example, the StarLink fleet does 800 course corrections per day in total. If a solar flare disrupts satellites' ability to manoeuvre, it would take approximately 5 ½ days before there is a collision. This number has changed dramatically because of constellations such as StarLink.
The European Space Agency recommends stopping launches now, and starting active debris removal.
I got this above from a video by Sabine Hossenfelder on YouTube. She unfortunately posted direct links to the articles behind a Patreon paywall, and I'm not going to google them for you. -
JayGau Reply
Tanakoi said: This is just anti-tech fear porn. There's far too much volume in a 1000-km thick shell around the earth, especially when one considers these satellites are designed to deorbit at EOL. They're also built to resist fragmentation far better than 1970s-era satellites which gave rise to the concept.
It's so easy to dismiss any criticism and danger warnings by just saying "it's just anti-tech fear porn," just like Jensen Huang did about AI recently. -
Tanakoi Reply
bit_user said: The scale of the cooling needed is what poses a problem for orbital datacenters.
I've already done the calculations (and shared them here) showing that a datacenter can exhaust some 40 MW of heat using a two-stage loop operating at 400 K, with radiators less than twice as large as those currently in use on the ISS. How is that impractical?
Findecanor said: Current satellites do a lot of course corrections to avoid space debris. For example, the StarLink fleet does 800 course corrections per day in total.
Oops! You forgot that Starlink does a course correction if it determines the risk of collision is even 1:1,000,000. It could stop performing those corrections for several years on average before it had a single collision. And that's with the current constellation -- this new planned constellation will operate higher, in largely unused orbits.
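Taking both commenters' numbers at face value, the "several years" estimate can be reproduced directly. This is a rough upper-bound sketch: it assumes every avoided conjunction sits exactly at the 1-in-a-million threshold, when in practice most sit well below it.

```python
MANEUVERS_PER_DAY = 800   # fleet-wide course corrections, as cited above
P_COLLISION = 1e-6        # Starlink's claimed maneuver threshold

# Expected collisions per day if maneuvering stopped entirely
expected_per_day = MANEUVERS_PER_DAY * P_COLLISION

days_to_first = 1 / expected_per_day
print(f"~{days_to_first:.0f} days, about {days_to_first / 365:.1f} years")
```

That gives 1,250 days, roughly three and a half years, consistent with the "several years on average" claim.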
JayGau said: So easy to dismiss any criticism and danger warnings by just saying "it's just anti-tech fear porn"
The laws of physics dismiss them. We're talking about a space some 600,000,000,000,000,000,000 cubic meters in volume.
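The quoted volume is easy to reproduce: it is a 1,000-km-thick spherical shell starting at the Earth's surface. A quick sketch:

```python
import math

R_EARTH_KM = 6371          # mean Earth radius
SHELL_THICKNESS_KM = 1000  # the shell thickness discussed above

def sphere_volume_km3(radius_km: float) -> float:
    """Volume of a sphere in cubic kilometres."""
    return 4 / 3 * math.pi * radius_km**3

shell_km3 = (sphere_volume_km3(R_EARTH_KM + SHELL_THICKNESS_KM)
             - sphere_volume_km3(R_EARTH_KM))
shell_m3 = shell_km3 * 1e9   # 1 km^3 = 1e9 m^3
print(f"Shell volume: {shell_m3:.2e} m^3")   # about 6e20 m^3
```

Worth noting, though, that raw volume is only part of the picture: satellites cluster in a narrow band of useful altitudes and inclinations, which is why debris models weigh local orbital density rather than total shell volume.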