"No operational needs" is obviously ... simplified. You still need to manage downlink capacity, station keeping, collision avoidance, etc. But for a large constellation, the per-satellite cost of that would be pretty small.
The thing being called obvious here is that the maintenance you have to do on earth is vastly cheaper than the overspeccing you need to do in space (otherwise we would overspec on earth). That's before even considering the harsh radiation environment and the incredible cost to put even a single pound into low earth orbit.
Anyone who thinks it makes sense to blast data centers into space has never seen how big and heavy they are, or thought about their immense power consumption, much less the challenge of radiating away that much waste heat into space.
Letting them burn up in the atmosphere every time there's an issue does not sound sustainable.
I think passive cooling (running hot) reduces some of the advantages of undersea compute.
Space is pretty ridiculous, but underwater might genuinely be a good fit in certain areas.
What if you could keep them in space long enough that by the time they burn up in the atmosphere, there are newer and better GPUs anyway?
Still doesn't seem sustainable to me given launch costs and stuff (hence devil's advocate), but I can sort of see the case if I squint?
Even if the cost per pound, power, regulatory burden, networking, and radiation shielding can be gamed out, as well as the thousand other technically difficult and probably expensive problems that can crop up, they all have to sum to less than the effective cost of running that same datacenter here on earth. It's interesting that it doesn't play into Jevons paradox the way it might otherwise: there's a reduction in power consumption planetside if compute gets moved to space, but no equivalent expansion, since the resource isn't transferable.
I think some sort of space junk recycling would be necessary, especially at the terawatt scale being proposed; at some point, vaporizing a bunch of arbitrary high-temperature chemistry in the upper atmosphere isn't likely to be conducive to human well-being. Copper, aluminum, gold, and so on are also probably worth recovering rather than letting them be vaporized. With that much infrastructure in space, you start looking at recycling, manufacturing, and collection to drive cost reductions, so maybe part of the intent is to push into off-planet manufacturing and resource logistics?
The whole thing's fascinating - if it works, that's a lot of compute. If it doesn't work, that's a lot of very expensive compute and shooting stars.
In the back of my head this all seemed astronomically far-fetched, but 5.5 million to get 8 GPUs in space... wild. That isn't even a single TB of VRAM.
Are you maybe factoring the cost of powering them in space into that 5 million?
Nothing in there is a lie, but any substance is at best implied. Yes, 1,000,000 tons/year × 100 kW/ton is 100 GW. Yes, there would be no maintenance and negligible operational cost. Yes, there is some path to launching 1 TW/year (whether that path is realistic isn't mentioned, nor is a realistic timeline). And then, without providing any rationale, Elon states his estimate that the cheapest way to do AI compute will be in space within a couple of years. Elon is famously bad at estimating, so we can also assume that this is his honest belief. That makes a chain of obviously true statements (or close to true, in the case of operating costs), but none of them actually tell us that this will be cheap or economically attractive. And all of them are complete non-sequiturs.
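(Sanity-checking that first multiplication, assuming metric tons:)

    # 1,000,000 tons/year at 100 kW/ton, converted to GW
    tons_per_year = 1_000_000
    kw_per_ton = 100
    gw = tons_per_year * kw_per_ton / 1_000_000  # kW -> GW
    print(gw)  # 100.0, so the headline figure checks out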
Let's say, given component failure rates, you can expect 20% of the GPUs to fail in that time. I'd say that's acceptable.
The basic idea of putting compute in space to avoid inefficient power beaming goes back to NASA in the 60s, but the problem was always the high cost to orbit. Clearly Musk expects Starship will change that.
Will that come to be? I'm skeptical, especially within the next several years. Starship would have to perform perfectly, and a lot of other assumptions would have to hold, for this to make sense. But that's the idea.
The NVIDIA H200 is 0.7 kW per chip.
To power 100K GPUs (70 MW) you'd need something like 500 ISSs' worth of solar arrays.
ISS cooling is 16 kW of heat rejection, i.e. roughly 23 H200s' worth. Now imagine you want to cool 100K instead of 23.
And all this before we talk about radiation, connectivity (good luck matching the 100 Gbps rack-to-rack links we have on earth), and what have you.
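(Putting those figures together; the ~120 kW ISS solar output is my own assumption, the rest are from above:)

    # Scaling the per-chip figures up to a 100K-GPU cluster
    gpu_kw = 0.7
    n_gpus = 100_000
    total_kw = gpu_kw * n_gpus        # 70,000 kW = 70 MW
    iss_solar_kw = 120                # assumed ISS array output
    iss_cooling_kw = 16               # figure quoted above
    print(total_kw / iss_solar_kw)    # ~583 stations' worth of power
    print(total_kw / iss_cooling_kw)  # 4375 stations' worth of cooling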
Sometimes I think all this space datacenter talk is just PR to hush those sad folks who happen to live near the (future) datacenter: “don’t worry, it’s temporary”
https://www.nytimes.com/2025/10/20/technology/ai-data-center...
I wouldn't exactly call this a success, for that matter.
And cooling. There is no cold water or air in space.
I suppose that an orbit-ready server is going to cost more, and weigh less.
The water that serves as the coolant will weigh a lot, though it can double as a radiation shield and, in part, as reaction mass for orbital correction and deorbiting.
In this case, it's all about Starship ramping up to such a scale that the cost per pound to orbit drops enough for everything else to make sense. From the people who think the numbers can work, that means somewhere between $20 and $80 per pound, versus the current $1,300-1,400 per pound on Falcon 9. Starship at scale would have to deliver a one-to-two order of magnitude price drop to make space compute viable.
If Starship realistically gets into the $90/lb or lower range, space compute makes sense; things like shielding and the rest become pragmatic engineering problems that can be solved. If the cost stays above $100 or so, it doesn't matter how the rest of the considerations play out: you're launching at a loss. There might still be government, military, and research applications for space-based datacenters, especially for developing the practical engineering, but Starship needs to work, and there need to be a ton of them, for the datacenter-in-space idea to work out.
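(A rough sketch of how launch price dominates the math; the per-rack mass here is purely my assumption, for illustration:)

    # Launch cost per "rack" at various $/lb prices; the ~1,300 lb
    # mass (server + panels + radiators) is an assumed round number
    rack_lb = 1_300
    for usd_per_lb in (1_350, 100, 80, 20):
        print(f"${usd_per_lb}/lb -> ${rack_lb * usd_per_lb:,} per rack")
    # $1,350/lb -> $1,755,000 per rack; $20/lb -> $26,000 per rack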
> ROSA is 20 percent lighter (with a mass of 325 kg (717 lb))[3] and one-fourth the volume of rigid panel arrays with the same performance.
And that’s not the current cutting edge in solar panels either. A company can take more risks with technology choices and iterate faster (e.g., get current state-of-the-art solar working in space).
The bet they’re making is on their own engineering progress, like they did with rockets, not on sticking together pieces used on the ISS today.
Note that you would need 500+ square meters just for cooling 200 kW.
And, mind you, it won’t be a simple copper radiator:
https://www.nasa.gov/wp-content/uploads/2021/02/473486main_i...
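(For reference, a minimal radiator-sizing sketch using the Stefan-Boltzmann law; the 300 K temperature, 0.9 emissivity, one-sided emission, and ignoring absorbed sunlight are all my assumptions, but it lands near the 500 m² figure above:)

    # Radiator area to shed 200 kW: P = eps * sigma * A * T^4
    SIGMA = 5.67e-8             # Stefan-Boltzmann constant, W/(m^2 K^4)
    eps, T = 0.9, 300.0         # assumed emissivity and panel temperature
    flux = eps * SIGMA * T**4   # ~413 W/m^2 rejected per side
    area = 200_000 / flux       # ~484 m^2 for 200 kW
    print(round(flux), round(area))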
Just because an idea has some factors in its favor (Space-based datacenter: 100% uptime solar, no permitting problems [2]) doesn't mean it isn't ridiculous on its face. We're in an AI bubble, with silly money flowing like crazy and looking for something, anything, to invest in. That, and circular investments to keep the bubble going. Unfortunately this gives validation to stupid ideas; it's one of the hallmarks of bubbles. We've seen this before.
The only things space-based anything has an advantage in are long-distance communication and observation, and datacenters benefit from neither.
The simple fact is that anything that can be done in a space-based datacenter can be done cheaper on Earth.
[1] https://en.wikipedia.org/wiki/A_Modest_Proposal for the obtuse
[2] until people start having qualms about the atmospheric impact of all those extra launches and orbital debris
https://www.nvidia.com/en-eu/data-center/dgx-h200/
Power draw is max 10.2 kW, but average draw would be 60-70% of that. Let's call it 6 kW.
It is possible to obtain orbits that get 24/7 sunlight - but that is not simple. And my understanding is it's more expensive to maintain those orbits than it would be to have stored battery power for shadow periods.
Average blackout period is 30-45 minutes. So you'd need at least 6 kWh of storage to avoid draining the batteries to 0. But battery degradation is a thing. So 6 kWh is probably the absolute floor. That's in the range of 50-70 kg for off-the-shelf batteries.
You'd need at least double the solar capacity relative to the average draw, because solar panels degrade over time and have to charge the batteries in addition to powering the GPUs. 12 kW of solar panels would be the absolute floor. A panel system of that size is 600-800 kg.
These are conservative estimates, I think. And I haven't factored in the weight of radiators, heat and radiation shielding, thermal loops, or anything else that a cluster in space might need. And the weight is already over 785 kg.
Using $1,500 per kg, we're approaching $1.2 million.
Again, this is a conservative estimate and without accounting for most of the weight (radiators) because I'm too lazy to finish the napkin math.
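(The same napkin math in code, using only the mid-range estimates from above:)

    # Launch mass and cost, from the comment's own estimates
    avg_draw_kw = 6                   # ~65% of the 10.2 kW max
    battery_kg = 60                   # mid of 50-70 kg for 6 kWh
    solar_kg = 700                    # mid of 600-800 kg for 12 kW
    mass_kg = battery_kg + solar_kg   # 760 kg, radiators etc. excluded
    print(f"${mass_kg * 1_500:,}")    # $1,140,000 at $1,500/kg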
A lot. As someone who has been responsible for training runs with up to 10K GPUs: things fail all the time. By all the time I don't mean every few weeks, I mean daily. From disks failing, to GPUs overheating, to InfiniBand optical connectors not being correctly fastened and disconnecting randomly, we have to send people into the datacenter to manually fix/debug things all the time.
If one GPU fails, you essentially lose the entire node (8 GPUs). So if your strategy is to just permanently turn off whatever fails and never deal with it, it's gonna get very expensive very fast.
And that's in an environment where temperature is very well controlled and where you don't have to put your entire cluster through 4 g's and insane vibrations during launch.
The solar panels used in space are really lightweight, about 2 kg/m² [1]; that's something like ten times lighter than terrestrial panels. Still, they need load-bearing scaffolding and electrical conductors to actually collect the hundreds of kilowatts.
Water can't be made as lightweight though.
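(A minimal sketch of that collection problem for a hypothetical 200 kW array; the solar constant and the 30% cell efficiency are my assumptions:)

    # Array area and panel mass for 200 kW at the 2 kg/m^2 figure
    SOLAR_CONSTANT = 1_361                  # W/m^2 above the atmosphere
    efficiency = 0.30                       # assumed cell efficiency
    w_per_m2 = SOLAR_CONSTANT * efficiency  # ~408 W/m^2
    area_m2 = 200_000 / w_per_m2            # ~490 m^2 for 200 kW
    print(round(area_m2), "m^2 ->", round(area_m2 * 2), "kg of panels")
    # ~980 kg of bare panels, before scaffolding and conductors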
What if you had a fleet of Optimus robots up there who actually operate a TSMC fab in space and maintain the data centers?
Hold on let me enter a K hole…
What if we just did things?
If, like, seawater had entered and corroded the system and it blew up and ate babies and caused Godzilla, that would be a failure. It just not being quite interesting enough to pursue seems... I mean, I guess it's a failure, but on a "meh" level.
Maybe with Starship the premium is less extreme? $10 million per 350 Nvidia systems already seems within margins, and $1M would definitely put it in rounding-error range.
But that's only the Elon-style "first principles" calculation. When reality hits, it's going to be an engineering nightmare on the scale of nuclear power plants. I wouldn't be surprised if they spent a billion just figuring out how to get a datacenter operational in space. And you can build a lot of datacenters on earth for a billion.
If you ask me, this is Elon scamming investors for his own personal goals, which in this case is simply getting AI into space. Once AI is in space, there's a chance human-derived intelligence survives an extinction event on earth. That's one of Elon's core motivations.