Next up in the equation is surface emissivity, which we've got a lot of experience with in the automotive sector.
And finally surface area; once again, we're getting quite good here with nanotechnology.
Yes, he's distracting; no, it's not as impossible as many people think.
So your hot thing is radiating directly onto the next hot thing over, the one that also needs to cool down?
My car doesn't spend much time driving in a vacuum, does yours?
Seems like quite a massive difference to ignore.
It's not physically impossible. Of course not. It's been done thousands of times already. But it doesn't make any economic sense. It's like putting a McDonald's at the top of Everest. Is it possible? Of course. Is it worth the enormous difficulty and expense to put one there? Not even a little.
Same with datacenters in space: not today, but in 1000 years definitely, in 100 surely. In 10?
As for the economics, it makes about as much sense as running jet engines at full tilt to power them.
Yeah, pumps, tubes, and fluids are some of the worst things to add to a satellite. It's probably cheaper to use more radiators.
Maybe it's possible to make something economical with Peltier elements. But it's still not even a budget problem yet; it's just plainly not viable.
> getting quite good here with nanotechnology
Small features and fractal surfaces are useless here: radiative output scales with the projected area facing cold sky, and concave microstructure mostly radiates back onto itself.
Peltiers generate a lot of extra heat to get the job done, so even though electricity is pretty much free up there, it's probably not a sure bet.
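To make the "extra heat" point concrete, here's a minimal sketch of the energy balance for any heat pump, Peltiers included: whatever you pull from the cold side plus the electrical input all lands on the hot side. The COP value is an assumption for illustration; real Peltier modules across a large delta-T often manage well under 1.

```python
# Energy balance for any heat pump (Peltier included):
# heat dumped on the hot side = heat pulled from the cold side + electrical input.
def hot_side_heat(q_cold_w: float, cop: float) -> float:
    """Total heat (W) the radiator must reject for q_cold_w of cooling at a given COP."""
    electrical_input = q_cold_w / cop
    return q_cold_w + electrical_input

# At an assumed COP of 0.5, 6400 W of cooling becomes 19200 W to radiate,
# i.e. the Peltier roughly triples the radiator problem instead of solving it.
print(hot_side_heat(6400, 0.5))
```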
- let's say 8x 800W GPUs and neglect the CPU, that's 6400W
- let's further assume the PSU is 100% efficient
- let's also assume that you allow the server hardware to run at 77 degrees C, or 350K, which is already pretty hot for modern datacenter chips.
Your radiator would need to dissipate those 6400W, requiring it to be almost 8 square meters in size. That's a lot of launch mass. Adding 50 degrees will reduce your required area to only about 4.4 square meters with the consequence that chip temps will rise by 50 degrees also, putting them at 127 degrees C.
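Those figures fall straight out of the Stefan-Boltzmann law. A quick sketch, assuming an ideal emitter (emissivity 1) radiating from one side into 0 K sky and ignoring solar and Earth heat loading:

```python
# Radiator area needed to reject P watts by thermal radiation alone,
# per the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 1.0) -> float:
    """Area (m^2) of an ideal one-sided radiator facing 0 K sky."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

print(radiator_area(6400, 350))  # ~7.5 m^2 at 77 C
print(radiator_area(6400, 400))  # ~4.4 m^2 at 127 C
```

A real radiator with emissivity below 1 and some view of the Sun or Earth needs proportionally more area than this ideal case.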
No CPU I'm aware of can run at those temps for very long, and most modern chips will start to self-throttle above about 100 degrees C.
You put the cold side of the phase-change cycle on the internal cooling loop, step the external loop up to as high a temperature as you can, and then circulate that through the radiators. You might even do this step-up more than once.
Imagine the data center as a box. You want it to be cold inside, and there's a compressor you use to transfer heat from inside to outside: the outside gets hot, the inside cold. You then put a radiator on the back of the box and radiate the heat into the darkness of space.
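The tradeoff in that step-up can be sketched with numbers. This toy model assumes an ideal Carnot-limit compressor and an ideal one-sided black-body radiator; the temperatures are illustrative, not a design:

```python
# Tradeoff sketch for "pump the heat up, then radiate it hotter":
# raising the radiator temperature shrinks the area (T^4 scaling) but costs
# compressor power, which itself must also be radiated away.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def step_up(q_cold_w: float, t_cold_k: float, t_hot_k: float) -> tuple[float, float]:
    """Return (total heat to radiate in W, ideal radiator area in m^2)."""
    cop = t_cold_k / (t_hot_k - t_cold_k)  # Carnot COP for a refrigeration cycle
    work = q_cold_w / cop                  # compressor input power
    q_hot = q_cold_w + work                # everything exits the hot side
    area = q_hot / (SIGMA * t_hot_k ** 4)  # ideal black-body, one-sided
    return q_hot, area

# Pumping 6400 W from a 350 K loop up to a 500 K radiator: you radiate
# more total watts, but from a considerably smaller area.
print(step_up(6400, 350, 500))
```

Even in this best case the compressor adds roughly 40% more heat to reject; a real compressor is well short of Carnot, so the area savings come at a real power and mass cost.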
This all depends on the biggest and cheapest rockets in the world, but it's a tradeoff: convenience and serviceability in exchange for unlimited free energy.
Hillary (he features on the NZ five-dollar note) was one of those guys who did things for no good reason. He also went to both poles. This only tells us that it is indeed possible, not that it's desirable or will become routine.
Nobody should doubt that it's possible, since it's been done. It just doesn't make any sense to do it purely for the sake of having computers do things that could be done on the ground.
There's nothing weird about using jet engines to make electricity. A turbine designed to generate thrust isn't necessarily that different from one designed to generate electricity. You can buy a new Avon gas turbine generator today, the same engine used in the Canberra, Comet, Draken, and many others. It still makes about a million times more economic sense than putting GPUs in space to run LLMs.