1. Getting things to space is incredibly expensive
2. Ingress/egress are almost always a major bottleneck - how is bandwidth cheaper in space?
3. Chips must be “rad-hard” - that is, able to detect and correct errors caused by ionizing radiation - NASA had entire teams dedicated to special hardware for this.
4. Gravity and atmospheric pressure actually do wonders for cheap cooling. Heat is not dissipated in space the way we are all used to: with no air for convection, you must burn additional energy moving the heat away from its source.
5. Energy production will be cheaper on Earth thanks to mass manufacturing of the components in energy systems - space-based energy systems need novel technology, and economies of scale are lost.
Would love for someone to make the case for why it actually makes total sense, because it’s really hard to see for me!
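For what it's worth, point 4 can be put in numbers with a Stefan-Boltzmann back-of-envelope (every figure below is an illustrative assumption, not a sourced spec):

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All numbers are illustrative assumptions, not sourced figures.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, temp_k, emissivity=0.9, sink_k=4.0):
    """One-sided radiator area needed to reject `heat_w` watts at surface temperature `temp_k`."""
    net_flux = emissivity * SIGMA * (temp_k**4 - sink_k**4)  # W/m^2 radiated per side
    return heat_w / net_flux

# Example: rejecting 1 MW of chip heat from a radiator running at 320 K
area = radiator_area_m2(1e6, 320.0)
print(f"{area:,.0f} m^2 of one-sided radiator per MW")  # roughly 1,900 m^2
```

A two-sided radiator roughly halves that area, but either way it's structure that has to be launched, deployed, and plumbed.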
1. Solving cost of launching mass has been the entire premise of SpaceX since day one and they have the track record.
2. Ingress/egress aren't at all bottlenecks for inferencing. The bytes you get before you max out a context window are trivial, especially after compression. If you're thinking about latency, chat latencies are already quite high and there's going to be plenty of non-latency sensitive workloads in future (think coding agents left running for hours on their own inside sandboxes).
3. This could be an issue, but inferencing can be tolerant to errors as it's already non-deterministic and models can 'recover' from bad tokens if there aren't too many of them. If you do immersion cooling then the coolant will protect the chips from radiation as well.
4. There is probably plenty of scope to optimize space radiators. It was never a priority until now and is "just" an engineering problem.
5. What mass manufacture? Energy production for AI datacenters is currently bottlenecked on Siemens and others refusing to ramp up production of combined cycle gas turbines; companies are converting old jet engines into power plants to work around this bottleneck. Ground solar is simply not being considered by anyone in the industry because even at AI spending levels they can't store enough power in batteries to ride out the night or low-output cloudy days. That's not an issue in space, where the huge Chinese PV overproduction can be used 24/7.
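To put rough numbers on that 24/7 point, here is a sketch of annual energy yield per installed watt, assuming round-number capacity factors (0.20 for utility-scale ground solar, 0.99 for a dawn-dusk sun-synchronous orbit) and the higher irradiance outside the atmosphere - all assumed values, not measurements:

```python
# Rough annual energy yield per installed watt of PV, ground vs orbit.
# Capacity factors and the irradiance gain are assumed round numbers.
HOURS_PER_YEAR = 8760

ground_cf = 0.20              # utility-scale ground solar capacity factor (assumed)
orbit_cf = 0.99               # dawn-dusk sun-synchronous orbit, near-continuous sunlight (assumed)
orbit_irradiance_gain = 1.36  # ~1361 W/m^2 in space vs ~1000 W/m^2 panel rating basis (assumed)

ground_kwh_per_w = ground_cf * HOURS_PER_YEAR / 1000
orbit_kwh_per_w = orbit_cf * orbit_irradiance_gain * HOURS_PER_YEAR / 1000

print(f"ground: {ground_kwh_per_w:.2f} kWh/W/yr, orbit: {orbit_kwh_per_w:.2f} kWh/W/yr")
print(f"orbit advantage: {orbit_kwh_per_w / ground_kwh_per_w:.1f}x per installed watt")
```

Whether a ~6-7x yield advantage per installed watt survives the cost of launching those watts is exactly where the disagreement lies.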
It does not make sense.
The question isn't "can you mitigate the problems to some extent?", it's "can you see a path to making satellite data centers more appealing than terrestrial?"
The answer is a flat out "no," and none of your statements contradict this.
Terrestrial will always be better:
1. Reducing the cost of launches is great, but it will never be as cheap as zero launches.
2. Radio links have the same bandwidth whether they terminate in orbit or on the ground, and fiber is a better network backbone in almost every way.
3. Radiation events don't only cause unpredictable data errors; they can also cause circuit latch-ups that cascade into system failure. Error-free operation is still better in any case. Earth's magnetosphere and atmosphere give you radiation shielding for free, rad-hard chips will always cost more than standard ones (do they even exist for this application?), and extra shielding will always cost more than no shielding.
4. On Earth you can use conduction, convection, AND radiation for cooling. Space gets you only radiation, which is at best marginally more effective there.
5. Solar is cheaper on the ground than in space. The increase in solar collection capability per unit area in space doesn't offset the cost of launch: you can get 20kW of terrestrial solar collection for around the price of a single 1U satellite launch, and that solar production can be used on upgraded equipment in the future. Any solar you put on a satellite gets decommissioned when the inference hardware is obsolete.
And this ignores other issues like hardware upgrades, troubleshooting, repairs, and recycling that are essentially impossible in space, but are trivial on the ground.
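To sanity-check the 20kW figure in point 5: at an assumed ~$1/W installed for utility-scale ground PV and an assumed ~$20k rideshare price for a 1U cubesat slot (both rough illustrative numbers, not quotes), the arithmetic is:

```python
# Back-of-envelope: how much ground solar one small-satellite launch buys.
# Both prices below are rough assumptions, not quotes.
ground_solar_usd_per_w = 1.0  # installed utility-scale PV, $/W (assumed)
oneu_launch_usd = 20_000      # rideshare price for a 1U cubesat slot (assumed)

watts_for_launch_price = oneu_launch_usd / ground_solar_usd_per_w
print(f"{watts_for_launch_price / 1000:.0f} kW of ground solar per 1U launch price")
```

And that launch price buys only the ride to orbit - the panels, radiators, and inference hardware riding in the 1U still have to be paid for on top.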