I've heard stories that, over a decade ago, teams inside hyperscalers calculated that running fully cryogenically cooled data centers would be vastly cheaper than current practice, thanks to savings on resistive losses and on the cost of rejecting waste heat. You don't have to get rid of heat you never generate in the first place.
The issue is that at the moment there are very few IC components and processes that have been engineered to run at cryogenic temperatures. Replicating the entirety of the existing data center stack for cryogenic temps is nowhere near reality.
That said, once you have cryogenic superconducting integrated circuits, you could colocate your data centers with your propellant/oxidizer depots. Not exactly "data centers off in deep space", since prop/ox depots tend to sit in the highest-traffic areas.
Take an H100, for example: it needs something like 1 kW to operate. At the ~1.36 kW/m² solar constant and typical cell efficiencies, that's less than 4 square meters of solar panel.
At 70 °C, a reasonable operating temperature for an H100, a 4-square-meter radiator can emit north of 2 kW into deep space.
Seems to me like a 2 m × 2 m × 2 m cube could house an H100 in space.
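A quick sanity check of those two numbers (a minimal sketch; the solar constant, ~25% cell efficiency, and ~0.9 radiator emissivity are assumed values, not a design):

```python
# Back-of-envelope: solar collection vs. radiative rejection for a ~1 kW load.
# Efficiency and emissivity figures below are rough assumptions.

SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_CONSTANT = 1361.0  # solar flux at 1 AU, W/m^2
PANEL_EFF = 0.25         # assumed photovoltaic efficiency
EMISSIVITY = 0.9         # assumed radiator emissivity

load_w = 1000.0          # assumed H100 power draw, including overhead
panel_area_m2 = load_w / (SOLAR_CONSTANT * PANEL_EFF)
print(f"panel area needed: {panel_area_m2:.1f} m^2")   # ~2.9 m^2

radiator_area_m2 = 4.0          # one face of the 2 m cube
t_radiator_k = 273.15 + 70.0    # 70 C in kelvin
emitted_w = EMISSIVITY * SIGMA * radiator_area_m2 * t_radiator_k**4
print(f"radiated into deep space: {emitted_w:.0f} W")  # ~2.8 kW
```

Both numbers close with margin under those assumptions.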
Perhaps I'm missing something?
Have you considered the effects of insolation? Sunlight heats things too.
How efficient is your power supply, and how much waste heat is generated delivering 1 kW to your H100?
How do you move data between the ground and your satellite? How much power does that take?
If it's in LEO, how many thermal cycles can your H100 survive? If it's not in LEO, go back to the previous question and add an order of magnitude.
I could go on, but honestly those details, while individually solvable, don't matter, because there is no world in which you wouldn't be better off taking the exact same H100 and installing it somewhere on the ground instead.
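For what it's worth, the insolation and power-conversion questions above can be roughed out the same way. A sketch of the net balance when the radiator face sees the Sun, assuming a white thermal coating with ~0.2 solar absorptivity and a ~90%-efficient power supply (both assumed, not measured):

```python
# Net heat balance for the 4 m^2 radiator once sunlight and PSU losses count.
# Coating absorptivity and converter efficiency are illustrative assumptions.

SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_CONSTANT = 1361.0  # solar flux at 1 AU, W/m^2

radiator_area_m2 = 4.0
t_radiator_k = 273.15 + 70.0
emissivity = 0.9         # assumed radiator emissivity
absorptivity = 0.2       # assumed white thermal coating
psu_eff = 0.90           # assumed DC conversion efficiency

gpu_load_w = 1000.0
waste_heat_w = gpu_load_w / psu_eff                              # ~1111 W to reject
emitted_w = emissivity * SIGMA * radiator_area_m2 * t_radiator_k**4  # ~2830 W
absorbed_w = absorptivity * SOLAR_CONSTANT * radiator_area_m2        # ~1089 W if sunlit

print(f"waste heat to reject:      {waste_heat_w:.0f} W")
print(f"net rejection, sunlit face: {emitted_w - absorbed_w:.0f} W")  # ~1742 W
```

Under those assumptions the budget still closes, but the margin roughly halves, and that's before the comm link, eclipse batteries, and attitude control take their cut.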
I'm not advocating for space GPUs as a logical next step; too many unsolved problems remain.
My point is that launch costs per kg are a more realistic blocker than cooling.
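To put a rough scale on that (every number here is a guess for illustration: ~$2,500/kg to LEO is in the range of published Falcon 9 estimates, and 50 kg of panels, radiator, bus, and structure per node is pure assumption):

```python
# Rough scale of launch cost per orbital GPU node; all inputs are guesses.
launch_cost_per_kg = 2500.0  # assumed $/kg to LEO
node_mass_kg = 50.0          # assumed dry mass: panels, radiator, bus, structure
h100_price_usd = 30000.0     # rough street price of an H100

launch_cost = launch_cost_per_kg * node_mass_kg
print(f"launch cost per node: ${launch_cost:,.0f}")                  # $125,000
print(f"GPU price:            ${h100_price_usd:,.0f}")
print(f"launch / GPU ratio:   {launch_cost / h100_price_usd:.1f}x")  # ~4x
```

Under those guesses the ride to orbit costs several times the GPU itself, and no radiator trick closes that gap.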