zlacker

[return to "xAI joins SpaceX"]
1. rybosw+u5[view] [source] 2026-02-02 22:10:52
>>g-mork+(OP)
> The basic math is that launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, with no ongoing operational or maintenance needs. Ultimately, there is a path to launching 1 TW/year from Earth.

> My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.

This is so obviously false. For one thing, in what fantasy world would the ongoing operational and maintenance needs be 0?
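
To be fair, the multiplication itself is internally consistent; a quick check using only the quote's own figures:

    # Sanity check of the quoted claim; both inputs are the quote's own numbers.
    tons_per_year = 1_000_000        # "a million tons per year"
    kw_per_ton = 100                 # "100 kW of compute power per ton"

    added_gw_per_year = tons_per_year * kw_per_ton / 1e6   # kW -> GW
    print(added_gw_per_year)         # 100.0, matching the claimed 100 GW/year

    # The "1 TW/year" aspiration at the same power density:
    print(1e9 / kw_per_ton)          # 10,000,000 tons launched per year

It's everything those two inputs leave out (replacement, downlink, thermal, operations) that does the heavy lifting.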

2. wongar+z8[view] [source] 2026-02-02 22:21:58
>>rybosw+u5
You operate them like Microsoft's submerged data center project: you don't do maintenance; whatever fails, fails. You start with enough redundancy in critical components like power and networking and accept that compute resources will slowly decrease as nodes fail.

“No operational needs” is obviously an oversimplification. You still need to manage downlink capacity, station keeping, collision avoidance, etc. But for a large constellation the per-satellite cost of that would be pretty small.
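
As a toy illustration of the launch-it-and-let-it-degrade model (the fleet size and failure rate below are made-up numbers, not anything from this thread):

    # Toy degradation model: no repairs, failed nodes are simply written off.
    starting_nodes = 100_000         # assumed fleet size
    annual_failure_rate = 0.05       # assume 5% of nodes die per year

    nodes = starting_nodes
    for year in range(1, 6):
        nodes *= (1 - annual_failure_rate)
        print(f"year {year}: ~{nodes:,.0f} nodes still serving")

You would size the initial launch so that the worst-case end-of-life capacity still meets whatever you actually sold.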

3. willis+K9[view] [source] 2026-02-02 22:26:15
>>wongar+z8
How do you make a small fortune? Start with a big one.

The thing being called obvious here is that the maintenance you have to do on earth is vastly cheaper than the overspeccing you need to do in space (otherwise we would overspec on earth). That's before even considering the harsh radiation environment and the incredible cost to put even a single pound into low earth orbit.
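
To put placeholder numbers on that trade-off (every figure below is an illustrative assumption, not a real launch or maintenance quote):

    # Hypothetical 5-year cost per server: ground maintenance vs. orbital
    # overspec plus launch. All figures are illustrative assumptions.
    hardware_cost = 30_000            # $ per server
    years = 5

    ground_maintenance_rate = 0.10    # assume 10% of hardware cost per year
    ground_total = hardware_cost * (1 + ground_maintenance_rate * years)

    overspec_factor = 1.5             # assume 50% extra capacity to ride out failures
    launch_cost_per_kg = 1_000        # assume an optimistic $/kg to LEO
    server_mass_kg = 30               # assume packaged mass per server
    orbit_total = overspec_factor * (hardware_cost + launch_cost_per_kg * server_mass_kg)

    print(f"ground: ${ground_total:,.0f}   orbit: ${orbit_total:,.0f}")

Under these assumptions the launch mass alone swamps the maintenance you were trying to avoid; move the assumed numbers and the conclusion moves with them.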

4. schiff+8j[view] [source] 2026-02-02 22:57:57
>>willis+K9
If you think the primary source of electricity is solar (which clearly Musk does), then space increases the amount of compute per solar cell by ~5x, and eliminates the relatively large battery required for 24/7 operation. The thermal radiators and radiation effects are manageable.
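
Where a ~5x figure plausibly comes from (the capacity factors below are typical rough values I'm assuming, not numbers from the thread): a panel on the ground only delivers its rated power for part of the day, while a dawn-dusk sun-synchronous orbit sees the sun almost continuously.

    # Rough energy-per-solar-cell comparison; both capacity factors are assumptions.
    ground_capacity_factor = 0.20     # typical fixed-tilt utility solar
    orbit_capacity_factor = 0.99      # near-continuous sun in a dawn-dusk SSO
    print(orbit_capacity_factor / ground_capacity_factor)   # ~5x energy per cell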

The basic idea of putting compute in space to avoid inefficient power beaming goes back to NASA in the 60s, but the problem was always the high cost to orbit. Clearly Musk expects Starship will change that.

5. piskov+0m[view] [source] 2026-02-02 23:10:02
>>schiff+8j
My dude, the ISS has about 200 kW of peak power.

An NVIDIA H200 is 0.7 kW per chip.

To power 100k GPUs you need something like 350-500 ISSs' worth of solar, depending on how much host and networking overhead you count per GPU.

ISS cooling dissipates about 16 kW, so call it 20-odd H200s. Now imagine you want to cool 100k instead of 20-odd.

And all this is before we talk about radiation, connectivity (good luck matching the 100 Gbps rack-to-rack links we have on earth), and what have you.
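
Spelling that arithmetic out (the ~1 kW all-in per-GPU figure is my own allowance for host and networking overhead; the rest are the numbers above):

    # Back-of-envelope with the figures in this comment.
    iss_peak_power_kw = 200
    h200_power_kw = 0.7              # the chip alone
    all_in_power_kw = 1.0            # assumed per-GPU draw including host + network
    gpus = 100_000

    print(gpus * h200_power_kw / 1000)                  # 70 MW for the chips alone
    print(gpus * h200_power_kw / iss_peak_power_kw)     # 350 ISSs, chips only
    print(gpus * all_in_power_kw / iss_peak_power_kw)   # 500 ISSs all-in

    iss_cooling_kw = 16
    print(iss_cooling_kw / h200_power_kw)               # ~23 H200s per ISS of cooling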

Sometimes I think all this space-datacenter talk is just PR to hush the sad folks who happen to live near the (future) datacenter: “don’t worry, it’s temporary”.

https://www.nytimes.com/2025/10/20/technology/ai-data-center...
