zlacker

[return to "xAI joins SpaceX"]
1. gok+h4 2026-02-02 22:06:22
>>g-mork+(OP)
> it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power

We currently make around 1 TW of photovoltaic cells per year, globally. The proposal here is to launch that much to space roughly every 9 to 18 hours, complete with attached computers, continuously, from the Moon.

edit: Also, this would capture only a trivial fraction of the Sun's power: a few trillionths per year of launches.
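
Back-of-the-envelope version of both numbers, assuming ~1 TW/yr of global PV cell production and a total solar output of roughly 3.8e26 W (both round figures, not authoritative):

    # rough figures only; PV output and solar luminosity are round assumptions
    pv_production_tw_per_year = 1.0      # global PV cell production, ~1 TW/yr
    hours_per_year = 8760.0
    sun_output_w = 3.8e26                # total solar luminosity, ~3.8e26 W

    for proposal_tw_per_year in (500.0, 1000.0):
        # how often a full year of global PV production would have to launch
        hours_per_batch = hours_per_year * pv_production_tw_per_year / proposal_tw_per_year
        # fraction of the Sun's total output captured per year of launches
        fraction = (proposal_tw_per_year * 1e12) / sun_output_w
        print(f"{proposal_tw_per_year:.0f} TW/yr: launch 1 TW every {hours_per_batch:.1f} h, "
              f"capture {fraction:.1e} of the Sun's output per year")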

2. lugao+DK 2026-02-03 01:28:20
>>gok+h4
Only people who have never dealt with data center reliability think it's feasible to keep servers running with no human intervention.
3. jmyeet+dT 2026-02-03 02:23:24
>>lugao+DK
There is a class of people who seem smart right up until they start talking about a subject you know well. Hank Green is a great example of this.

For many on HN, Elon buying Twitter was a wake-up call: he suddenly started talking about software, servers, data centers and reliability, and a ton of people with actual experience in those things went "oh... this guy's an idiot".

Data centers in space are exactly that kind of subject. Your comment (correctly) alludes to why.

Companies like Google, Meta, Amazon and Microsoft all have so many servers that parts are failing constantly. At that scale it's simply expected that something like a hard drive will die while a single large job is running.

So all of these companies build systems to detect failures, stop scheduling work on the affected node until it's fixed, alert someone to what the problem is, and bring the node back online once it's addressed. Everything will fail: hard drives, RAM, CPUs, GPUs, SSDs, power supplies, fans, NICs, cables, etc.
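
None of these companies publish their exact systems, but the shape is roughly a drain-and-repair loop like the sketch below (every function name here is hypothetical, just to make the workflow concrete):

    # hypothetical sketch of an automated drain-and-repair loop; check_health,
    # drain, file_repair_ticket, is_repaired and return_to_service are all
    # made-up callbacks standing in for whatever a real fleet manager uses
    import time

    def manage_fleet(nodes, check_health, drain, file_repair_ticket,
                     is_repaired, return_to_service, poll_seconds=60):
        """Pull failing nodes out of service and bring repaired ones back."""
        draining = set()
        while True:
            for node in nodes:
                if node in draining:
                    if is_repaired(node):        # a tech swapped the bad part
                        return_to_service(node)  # node rejoins the scheduling pool
                        draining.remove(node)
                elif not check_health(node):     # failed disk, DIMM, NIC, PSU, ...
                    drain(node)                  # stop scheduling new work on it
                    file_repair_ticket(node)     # alert a technician
                    draining.add(node)
            time.sleep(poll_seconds)

The automation handles detection and scheduling; the actual part swap still needs a human with a cart of spares.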

So every data center has a number of technicians who are constantly fixing problems. IIRC Google's ratio tended to be about 10,000 servers per technician, and good technicians could handle higher ratios. When a node goes offline it's often not immediately clear why, so techs would swap in known-good parts, basically replacing all of them, figure out what actually failed later, dispose of the bad parts, and put the tested-good ones back into the pool for a later incident.
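
To put rough numbers on "failing constantly" (the 2% annualized drive failure rate below is an assumption, in the ballpark of publicly reported large-fleet drive stats):

    # rough arithmetic; the 2% annualized drive failure rate is an assumption
    servers = 1_000_000           # a large hyperscaler-style fleet
    drives_per_server = 4
    drive_afr = 0.02              # ~2% of drives fail per year

    drive_failures_per_day = servers * drives_per_server * drive_afr / 365
    print(f"~{drive_failures_per_day:.0f} failed drives per day")        # ~219

    servers_per_tech = 10_000     # the ratio mentioned above
    print(f"~{servers // servers_per_tech} technicians just to keep up") # ~100

And that's just drives; RAM, power supplies, fans and NICs add their own streams of tickets on top.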

Data centers in space lose all of this. If you have a large number of orbital servers, they're going to be failing constantly with no way to fix them. You can really only deorbit and replace them, and that gets expensive fast.

Electronics and chips on satellites also aren't consumer grade. They're not even enterprise grade. They're orders of magnitude more reliable than that, because they have to handle radiation-induced errors from cosmic rays and the solar wind that terrestrial components never see. That's why they offer a fraction of the performance of something you can buy on Amazon but cost 1000x as much: they need to last for years without failing, which is something no home computer or data center server has to do.

Put it this way: a radiation-hardened satellite or probe CPU is like paying $1 million for a Raspberry Pi.

And anybody who has dealt with data centers knows this.

4. skarti+dn2 2026-02-03 14:20:24
>>jmyeet+dT
Excellent comment.