zlacker

[return to "xAI joins SpaceX"]
1. rybosw+u5[view] [source] 2026-02-02 22:10:52
>>g-mork+(OP)
> The basic math is that launching a million tons per year of satellites generating 100 kW of compute power per ton would add 100 gigawatts of AI compute capacity annually, with no ongoing operational or maintenance needs. Ultimately, there is a path to launching 1 TW/year from Earth.

> My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.

This is so obviously false. For one thing, in what fantasy world would the ongoing operational and maintenance needs be 0?
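To be fair, the arithmetic in the quote is internally consistent; the objection is to the zero-opex assumption, not the multiplication. A quick check using only the quoted figures:

```python
# Checking the quoted claim's own arithmetic (all figures from the quote)
tons_per_year = 1_000_000   # claimed launch mass per year
kw_per_ton = 100            # claimed compute power per ton of satellite
added_gw = tons_per_year * kw_per_ton / 1_000_000  # kW -> GW
print(added_gw)  # 100.0 -> the claimed 100 GW of capacity added per year
```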

◧◩
2. wongar+z8[view] [source] 2026-02-02 22:21:58
>>rybosw+u5
You operate them like Microsoft's submerged data center project: you don't do maintenance; whatever fails, fails. You start with enough redundancy in critical components like power and networking, and accept that compute capacity will slowly decrease as nodes fail.

No operational needs is obviously ... simplified. You still need to manage downlink capacity, station keeping, collision avoidance, etc. But for a large constellation the per-satellite cost of that would be pretty small.

◧◩◪
3. rybosw+Xa[view] [source] 2026-02-02 22:30:11
>>wongar+z8
An 8-GPU B200 cluster goes for about $500k right now. You'd need to put thousands of those into space to mimic a ground-based data center, and the launch costs are, best case, around 10x the cost of the cluster itself.

Letting them burn up in the atmosphere every time there's an issue does not sound sustainable.
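One way to sanity-check that 10x multiple: at roughly $1,500/kg to LEO (the figure used elsewhere in this thread), it implies an all-in launch mass per cluster of a few tons once you count panels, batteries, and thermal hardware, not just the server:

```python
# What all-in mass makes launch cost ~10x the hardware cost?
# Assumed: $500k per 8-GPU B200 cluster (from above), ~$1,500/kg to LEO.
cluster_cost = 500_000
cost_per_kg = 1_500
launch_multiple = 10
implied_mass_kg = launch_multiple * cluster_cost / cost_per_kg
print(round(implied_mass_kg))  # 3333 kg all-in (server + power + thermal + structure)
```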

◧◩◪◨
4. nine_k+Zm[view] [source] 2026-02-02 23:14:26
>>rybosw+Xa
A Falcon Heavy takes about 63 tons to LEO, at a cost of about $1,500 per kg. A server with 4x H200s plus some RAM and CPU costs about $200k and weighs about 60 kg, with all the cooling gear and thick metal. As is, it would cost $90k to get to LEO, about half the cost of the hardware itself.

I suppose that an orbit-ready server is going to cost more, and weigh less.

The water that serves as the coolant will weigh a lot though, but it can double as a radiation shield, and partly as reaction mass for orbital correction and deorbiting.
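Spelling out the numbers above (figures as stated, not independently verified):

```python
# Falcon Heavy napkin math from the figures above
payload_to_leo_kg = 63_000   # ~63 t to LEO
cost_per_kg = 1_500          # ~$1,500/kg
server_cost = 200_000        # 4x H200 server, as stated
server_mass_kg = 60          # including cooling gear and chassis

launch_cost = server_mass_kg * cost_per_kg
print(launch_cost)                           # 90000 -> $90k, ~45% of hardware cost
print(payload_to_leo_kg // server_mass_kg)   # 1050 such servers per launch, by mass
```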

◧◩◪◨⬒
5. rybosw+mC[view] [source] 2026-02-03 00:33:10
>>nine_k+Zm
Just so we can agree on numbers for the napkin math - an 8x H200 weighs 130 kg:

https://www.nvidia.com/en-eu/data-center/dgx-h200/?utm_sourc...

Power draw is max 10.2 kW, but average draw would be 60-70% of that; let's call it 6 kW.

It is possible to obtain orbits that get 24/7 sunlight - but that is not simple. And my understanding is it's more expensive to maintain those orbits than it would be to have stored battery power for shadow periods.

Average blackout period is 30-45 minutes, so at 6 kW a single eclipse burns through up to ~4.5 kWh. You don't want to drain the batteries to 0, and battery degradation is a thing, so 6 kWh is probably the absolute floor. That's in the range of 50-70 kg for off-the-shelf batteries.

You'd need solar capacity of at least double the average draw, because solar panels degrade over time and need to recharge the batteries in addition to powering the GPUs. 12 kW of solar panels would be the absolute floor, and a panel system of that size is 600-800 kg.

These are conservative estimates, I think. And I haven't factored in the weight of radiators, heat and radiation shielding, thermal loops, or anything else that a cluster in space might need. The weight is already over 780 kg.

Using the $1,500 per kg, we're approaching $1.2 million.

Again, this is a conservative estimate and without accounting for most of the weight (radiators) because I'm too lazy to finish the napkin math.
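Putting the whole floor estimate in one place (low-end figures from above; radiators, shielding, thermal loops, and structure still excluded):

```python
# Conservative per-node floor for one 8x H200 cluster in orbit
server_kg = 130       # DGX H200 chassis weight
battery_kg = 50       # ~6 kWh of storage, low end of 50-70 kg range
solar_kg = 600        # ~12 kW array, low end of 600-800 kg range
cost_per_kg = 1_500   # launch cost to LEO

total_kg = server_kg + battery_kg + solar_kg
launch_cost = total_kg * cost_per_kg
print(total_kg)      # 780 kg before radiators/shielding/thermal loops
print(launch_cost)   # 1170000 -> approaching $1.2M in launch cost alone
```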

[go to top]