> My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.
This is so obviously false. For one thing, in what fantasy world would the ongoing operational and maintenance needs be 0?
"No operational needs" is obviously ... simplified. You still need to manage downlink capacity, station-keeping, collision avoidance, etc. But for a large constellation the per-satellite cost of that would be pretty small.
The thing being called obvious here is that the maintenance you can do on Earth is vastly cheaper than the redundancy and overspeccing you need in space to avoid maintenance (otherwise we would overspec on Earth too). That's before even considering the harsh radiation environment and the enormous cost of putting even a single pound into low Earth orbit.
The basic idea of putting compute in space to avoid inefficient power beaming goes back to NASA in the 60s, but the problem was always the high cost to orbit. Clearly Musk expects Starship will change that.
An NVIDIA H200 draws about 0.7 kW per chip.
So 100k GPUs draw roughly 70 MW; power-wise, that's on the order of 500 ISSs' worth of solar arrays.
And cooling is worse: the ISS rejects about 16 kW of heat, enough for roughly 23 H200s. Now imagine cooling 100k instead of 23.
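The back-of-envelope math above can be sketched out explicitly (the ISS solar figure is my own rough assumption of ~140 kW usable; the other numbers are the ones quoted in this thread):

```python
# Rough figures, not official specs.
GPU_POWER_KW = 0.7      # NVIDIA H200, ~700 W per chip
NUM_GPUS = 100_000
ISS_SOLAR_KW = 140      # assumed usable ISS solar output
ISS_COOLING_KW = 16     # ISS heat-rejection figure quoted above

total_power_mw = NUM_GPUS * GPU_POWER_KW / 1000
iss_equivalents_power = NUM_GPUS * GPU_POWER_KW / ISS_SOLAR_KW
h200s_per_iss_cooling = ISS_COOLING_KW / GPU_POWER_KW

print(f"Total draw: {total_power_mw:.0f} MW")                    # ~70 MW
print(f"ISS-equivalents (power): {iss_equivalents_power:.0f}")   # ~500
print(f"H200s coolable at 16 kW: {h200s_per_iss_cooling:.0f}")   # ~23
```

The striking part is the cooling line: in vacuum you can only reject heat by radiating it, so the thermal problem scales even worse than the power problem.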
And all this is before we talk about radiation, connectivity (good luck matching the 100 Gbps rack-to-rack links we have on Earth), and what have you.
—
Sometimes I think all this space datacenters talk is just a PR to hush those sad folks that happen to live near the (future) datacenter: “don’t worry, it’s temporary”
https://www.nytimes.com/2025/10/20/technology/ai-data-center...
> ROSA is 20 percent lighter (with a mass of 325 kg (717 lb))[3] and one-fourth the volume of rigid panel arrays with the same performance.
And that’s not even the current cutting edge in solar panels. A private company can take more risks with technology choices and iterate faster, e.g. getting current state-of-the-art solar cells qualified for use in space.
The bet they’re making is on their own engineering progress, like they did with rockets, not on sticking together pieces used on the ISS today.