zlacker

[return to "xAI joins SpaceX"]
1. gok+h4[view] [source] 2026-02-02 22:06:22
>>g-mork+(OP)
> it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power

We currently make around 1 TW of photovoltaic cells per year, globally. The proposal here is to launch that much to space every 9 hours, complete with attached computers, continuously, from the moon.

edit: Also, this would capture a very trivial percentage of the Sun's power. A few trillionths per year.
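
Quick back-of-envelope in Python, for anyone who wants the numbers (the ~1 TW/year PV figure and the Sun's ~3.8e26 W total output are rough public estimates, not from the article):

    HOURS_PER_YEAR = 8760
    global_pv_tw_per_year = 1.0      # approximate world PV manufacturing output
    proposed_tw_per_year = 1000.0    # upper end of the quoted proposal
    sun_luminosity_w = 3.8e26        # total solar output, in watts

    cadence_hours = HOURS_PER_YEAR * global_pv_tw_per_year / proposed_tw_per_year
    fraction_of_sun = proposed_tw_per_year * 1e12 / sun_luminosity_w

    print(f"A full year of global PV output launched every {cadence_hours:.1f} h")
    print(f"Fraction of the Sun's output captured per year: {fraction_of_sun:.1e}")
    # -> about 8.8 h, and ~2.6e-12, i.e. a few trillionths.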

2. lugao+DK[view] [source] 2026-02-03 01:28:20
>>gok+h4
Only people who have never dealt with data center reliability think it's feasible to maintain servers with zero human intervention.
3. lugao+DY5[view] [source] 2026-02-04 12:41:31
>>lugao+DK
I did some more reading and want to walk back my skepticism a bit. There is actually serious effort going into this, such as Google’s research on space-based AI infrastructure: https://research.google/blog/exploring-a-space-based-scalabl...

They highlight the exact reliability constraint I was thinking of: that replacing failed TPUs is trivial on Earth but impossible in space. Their solution is redundant provisioning, which moves the problem from "operationally impossible" to "extremely expensive."

You would effectively need custom, super-redundant motherboards designed to bypass dead chips rather than replace them. The paper also tackles the interconnect problem using specialized optics to sustain high bitrates, which is fascinating but seems incredibly difficult to pull off given that the constellation topology changes constantly. It might be possible, but the resulting hardware would look nothing like a regular datacenter.
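
To make the "extremely expensive" part concrete, here is a hedged sketch of the over-provisioning math (my own assumptions and numbers, not anything from the paper): if each accelerator independently survives the mission with probability p, how many do you have to launch so that at least `needed` of them are still working with high confidence?

    from math import comb

    def chips_to_launch(needed, p_survive, confidence=0.999):
        """Smallest n with P(Binomial(n, p_survive) >= needed) >= confidence."""
        n = needed
        while True:
            p_enough = sum(comb(n, k) * p_survive**k * (1 - p_survive)**(n - k)
                           for k in range(needed, n + 1))
            if p_enough >= confidence:
                return n
            n += 1

    # Example: a 1,000-chip cluster where each chip has a 90% chance of
    # surviving the mission -> roughly 1,150 chips launched, ~15% dead weight
    # that you pay to build, launch, power and cool anyway.
    print(chips_to_launch(1000, 0.90))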

This would also require a lot of satellites to rival even a single regular DC, which is hard to justify. Let's see what the promised 2027 tests reveal.
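
For a sense of scale (both numbers below are my own assumptions, not from the thread or the paper):

    dc_power_w = 100e6           # assume a ~100 MW hyperscale data center
    sat_usable_w = 20e3          # assume ~20 kW of usable power per satellite
    print(dc_power_w / sat_usable_w)   # -> 5,000 satellites, before any redundancy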
