zlacker

[return to "xAI joins SpaceX"]
1. gok+h4[view] [source] 2026-02-02 22:06:22
>>g-mork+(OP)
> it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power

We currently make around 1 TW of photovoltaic cells per year, globally. The proposal here is to launch that much to space every 9 hours, complete with attached computers, continuously, from the moon.

edit: Also, this would capture a very trivial percentage of the Sun's power. A few trillionths per year.
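The two figures above are easy to sanity-check. A quick sketch (using the round numbers from the comment: ~1 TW/year of global PV production, 1000 TW/year launched, and a standard ~3.8e26 W for the Sun's total output):

```python
# Back-of-the-envelope check of the figures above. All inputs are the
# round numbers from the comment, not precise industry data.
HOURS_PER_YEAR = 8766

pv_output_per_year_tw = 1.0    # rough current global PV production
target_tw_per_year = 1000.0    # upper end of the proposal

# How often you'd have to launch a full year of today's PV output:
launch_interval_hours = HOURS_PER_YEAR / (target_tw_per_year / pv_output_per_year_tw)

# Fraction of the Sun's total output captured per year of launches:
sun_output_w = 3.8e26          # total solar luminosity, W
fraction_per_year = target_tw_per_year * 1e12 / sun_output_w

print(f"launch a year's PV output every {launch_interval_hours:.1f} hours")
print(f"captures {fraction_per_year:.1e} of the Sun's power per year")
```

That comes out to a launch roughly every 9 hours and a captured fraction on the order of 1e-12 per year, i.e. "a few trillionths", matching the comment.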

◧◩
2. rainsf+RA[view] [source] 2026-02-03 00:24:23
>>gok+h4
We also shouldn't overlook that the proposal entirely glosses over the alternative things we could do if humanity actually achieved the incredible engineering and manufacturing capacity necessary to make this version of space AI happen.

Think about it. Elon conjures up a vision of the future where we've managed to increase our solar cell manufacturing capacity by two whole orders of magnitude and have the space launch capability for all of it along with tons and tons of other stuff and the best he comes up with is...GPUs in orbit?

This is essentially the superhero gadget technology problem, where comic books and movies gloss over the civilization-changing implications of some technology the hero invents to punch bad guys harder. Don't get me wrong, the idea of orbiting data centers is kind of cool if we can pull it off. But being able to pull it off implies an ability to do a lot more interesting things. The problem is that this is both wildly overambitious and somehow incredibly myopic at the same time.

◧◩◪
3. esseph+kG[view] [source] 2026-02-03 00:59:02
>>rainsf+RA
This is such a hypebeast paragraph.

Datacenters in space are a TERRIBLE idea.

Figure out how to get rid of the waste heat and get back to me.

◧◩◪◨
4. elihu+xP[view] [source] 2026-02-03 01:59:45
>>esseph+kG
That's not a new problem that no one has dealt with before. The ISS, for instance, has its External Active Thermal Control System (EATCS).

It's not so much a question of whether it's an unsolvable problem, but more: how expensive is it to solve, what are its limitations, and does the project still make economic sense once you factor all that in?

◧◩◪◨⬒
5. OneDeu+rZ[view] [source] 2026-02-03 03:13:58
>>elihu+xP
It's worth noting that the EATCS can at maximum dissipate 70 kW of waste heat. And the EETCS (the original early heat-exchange system) can only dissipate another 14 kW.

Together that's less than the heat output of a single AI inference rack.

And to achieve that, the EATCS needs 6 radiator ORUs, each spanning 23 meters by 11 meters with a mass of 1100 kg. So that's roughly 1500 square meters and six and a half metric tons before you factor in any of the actual refrigerant, pumps, support beams, valve assemblies, rotary joints, or cold-side heat exchangers, all of which together will probably double the mass you need to put in orbit.
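Taking the comment's numbers at face value (published ISS radiator dimensions differ in some sources), the implied specific heat rejection is easy to work out:

```python
# Specific heat rejection implied by the figures above. These are the
# comment's own numbers, used as-is, not verified ISS specs.
n_orus = 6
oru_area_m2 = 23 * 11        # per radiator ORU, as stated
oru_mass_kg = 1100
eatcs_w = 70_000             # EATCS peak rejection, W

total_area_m2 = n_orus * oru_area_m2   # ~1500 m^2
total_mass_kg = n_orus * oru_mass_kg   # 6600 kg, radiators alone

w_per_m2 = eatcs_w / total_area_m2     # W rejected per m^2 of radiator
w_per_kg = eatcs_w / total_mass_kg     # W rejected per kg launched

print(f"{w_per_m2:.0f} W/m^2, {w_per_kg:.1f} W/kg")
```

Roughly 46 W per square meter and 10 W per kilogram of radiator, before the rest of the cooling loop, which is the scale of the problem for a multi-megawatt datacenter.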

There is no situation where that makes sense.

-----------

Manufacturing in space makes sense (all kinds of techniques are theoretically easier in zero G and hard vacuum).

Mining asteroids, etc makes sense.

Datacenters in space for people on earth? That's just stupid.

◧◩◪◨⬒⬓
6. marcus+bi1[view] [source] 2026-02-03 06:08:00
>>OneDeu+rZ
I'm a total noob on this.

I get that vacuum is a really good insulator, which is why we use it to insulate our drinks bottles. So disposing of the heat is a problem.

Can't we use it, though? Like, I dunno, to take a really stupid example: boil water and run a turbine with the waste heat? Convert some of it back to electricity?

◧◩◪◨⬒⬓⬔
7. jdyer9+It1[view] [source] 2026-02-03 07:48:38
>>marcus+bi1
It's a good question, but in a closed system (like you have in space) the heat from the turbine loop still has to go somewhere for it to do useful work. Let's say you have a coolant loop for the GPUs (maybe glycol). You take the hot glycol, run it through your heat exchanger and heat up your cool, pressurized ammonia. The ammonia gets hot (and now the glycol is cool, send it back). You then take the ammonia and send it through the turbine, where it evaporates as it expands and loses pressure to spin the turbine. But now what? You have warm, vaporized, low-pressure ammonia, and you need to cool it down to start over. Once it's cool you can pressurize it again so you can heat it up to use again, but you have to cool it, and that's the crux of the issue.

The problem is essentially that everything you do releases waste heat, so you either reject it, or everything keeps heating up until something breaks. Extracting useful work from that heat only helps insofar as it helps you reject it, and it's more efficient to just reject it directly.
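The only way to reject heat to vacuum is to radiate it, and the Stefan-Boltzmann law sets the budget. A rough sizing sketch for a single rack (all inputs are illustrative assumptions: a 300 K radiator, emissivity 0.9, both faces radiating to deep space, no absorbed sunlight or Earthshine, and a round 100 kW rack power):

```python
# Idealized radiator sizing from the Stefan-Boltzmann law. Every input
# here is an assumption for illustration, not a real design number.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

rack_power_w = 100_000   # assumed dense AI rack, W of waste heat
emissivity = 0.9         # assumed radiator surface emissivity
radiator_temp_k = 300.0  # assumed radiator temperature

# Radiated flux, counting both faces of a flat panel:
flux_w_per_m2 = 2 * emissivity * SIGMA * radiator_temp_k**4
panel_area_m2 = rack_power_w / flux_w_per_m2

print(f"{flux_w_per_m2:.0f} W/m^2 -> {panel_area_m2:.0f} m^2 per rack")
```

Even under these generous assumptions that's on the order of a hundred square meters of radiator per rack; sunlight absorption and a warmer sink push it higher.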

A better, more direct way to think about this might be to look at the Seebeck effect. If you have a giant radiator, you could put a Peltier module between it and your GPU cooling loop and generate a little electricity, but that would necessarily also create some waste heat, so you're better off cooling the GPUs directly.
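How little electricity? The Carnot limit caps any heat engine between the coolant loop and the radiator, and thermoelectrics fall well short of that cap. A sketch with illustrative temperatures (a ~345 K hot coolant loop and a ~290 K radiator; the 15%-of-Carnot figure for a Peltier module run as a generator is an assumed, generous round number):

```python
# Upper bound on recovering work from GPU waste heat. Temperatures and
# the TEG fraction are illustrative assumptions, not measured values.
t_hot_k = 345.0    # assumed hot coolant loop temperature
t_cold_k = 290.0   # assumed radiator temperature

# No heat engine can beat the Carnot limit between these reservoirs:
carnot_max = 1 - t_cold_k / t_hot_k

# Real thermoelectric modules reach only a small fraction of Carnot:
teg_fraction_of_carnot = 0.15          # assumed, on the generous side
teg_efficiency = carnot_max * teg_fraction_of_carnot

print(f"Carnot limit {carnot_max:.1%}, realistic TEG ~{teg_efficiency:.1%}")
```

So even the theoretical ceiling is around 16%, and a realistic thermoelectric recovers a couple of percent, while the remaining ~98% of the heat still has to go through the radiator.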

[go to top]