IMO the big problem is the lack of maintainability.
And you also need it to make sense not just from a maintenance standpoint, but from a financial one. In what world would it make sense to launch the equivalent of huge facilities that work perfectly fine on the ground? What's the point? If we had a space elevator and nearly free space deployment, then yeah, maybe, but how does this plan square with our current reality?
Oh, and don't forget about getting some good radiation shielding for all those precise, cutting-edge processors.
A watt is a watt and cooling isn't any different just because the heat came from a GPU. But a GPU cluster will consume orders of magnitude more electricity, and will require a proportionally larger surface area to radiate heat than a Starlink satellite.
The best estimate I can find is that a single Starlink satellite uses ~5 kW of power and has a radiator of a few square meters.
Power usage for 1,000 B200s would be in the ballpark of 1,000 kW (1 MW). That's on the order of 1,000 square meters of radiators.
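For what it's worth, here's a back-of-the-envelope Stefan-Boltzmann check on that area figure. The radiator temperature, emissivity, and two-sided-panel assumption are mine, and it ignores absorbed sunlight and Earth IR (which only make things worse), so treat it as a rough sketch rather than a design number:

```python
# Rough radiator sizing for a ~1 MW GPU cluster in orbit.
# Assumed numbers (not from the thread): radiator surface at ~60 C,
# emissivity 0.9, radiating from both faces of a flat panel.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9          # typical for thermal-control coatings
T_RADIATOR = 333.0        # K, ~60 C radiator surface temperature
T_SINK = 3.0              # K, deep-space background (negligible)
HEAT_LOAD_W = 1_000_000   # ~1 MW for ~1,000 B200-class GPUs

# Net radiated flux per square meter of radiator face (~630 W/m^2)
flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SINK**4)

# A two-sided panel radiates from both faces
area_two_sided = HEAT_LOAD_W / (2 * flux)

print(f"radiated flux per face: {flux:.0f} W/m^2")
print(f"panel area needed (two-sided): {area_two_sided:.0f} m^2")
```

That lands around 800 m² of panel, so "on the order of 1,000 square meters" checks out.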
Then the heat needs to be dispersed evenly across the radiators, which means a lot of heat pipes.
Cooling GPUs in space will be anything but easy, and it almost certainly won't be cost-competitive with ground-based data centers.
You can have a swarm of small, disposable satellites with laser links between them.
Starlink V2 Mini satellites are around 10 kW and cost $1-1.5m each to launch, which works out to $100-150m per MW.
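The arithmetic behind that per-MW figure, using only the numbers above (the per-satellite power and cost are the estimates quoted in this thread, not official prices):

```python
# Cost-per-MW estimate for a swarm of Starlink-V2-Mini-class satellites,
# using the thread's rough figures: ~10 kW per satellite, ~$1-1.5m each.

SAT_POWER_KW = 10
SAT_COST_USD = (1.0e6, 1.5e6)       # low / high estimate per satellite

sats_per_mw = 1000 / SAT_POWER_KW   # 100 satellites per MW
cost_per_mw = tuple(c * sats_per_mw for c in SAT_COST_USD)

print(f"satellites per MW: {sats_per_mw:.0f}")
print(f"cost per MW: ${cost_per_mw[0]/1e6:.0f}m - ${cost_per_mw[1]/1e6:.0f}m")
```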
So if Gemini is right, a datacenter made of Starlink-class satellites would cost 10-20x more and have a limited lifetime, i.e. it looks unprofitable right now.
In general it seems unlikely to be profitable until there is no more space for solar panels on Earth.
And for data centers, the satellites wouldn't be as far apart as Starlink satellites; they would be packed quite close together instead.
And a single cluster today would already require more solar & cooling capacity than all Starlink satellites combined.
I vaguely recall an article a while back about the impact of GPU reliability: a big problem with training is that the entire cluster basically operates in lock-step, with each node needing the data its neighbors computed during the previous step before it can proceed. The unfortunate side effect is that any single failure stalls the entire hundred-thousand-node cluster; as the cluster grows, even a tiny per-node failure rate will absolutely ruin your uptime. I think they managed to solve this somehow, but I have no idea how.
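To make the scaling concrete, here's a toy model. The per-node failure rate, checkpoint interval, and restart time below are made-up illustrative numbers, not measurements from any real cluster; the usual mitigation, as far as I understand it, is frequent checkpointing plus spare nodes, so a failure costs minutes of recomputation instead of the whole run:

```python
# Toy model: why lock-step training uptime collapses as node count grows.
# All numbers are illustrative assumptions.

node_mtbf_hours = 10 * 365 * 24   # assume one hardware failure per node per ~10 years

for nodes in (1_000, 10_000, 100_000):
    # With N independently failing nodes, the whole synchronous job is
    # interrupted roughly N times as often as a single node.
    cluster_mtbf_hours = node_mtbf_hours / nodes
    print(f"{nodes:>7} nodes -> a failure roughly every {cluster_mtbf_hours:6.2f} h")

# Standard mitigation: checkpoint every T hours, so a failure costs at most
# ~T/2 hours of recomputation plus the restart overhead, not the whole run.
checkpoint_interval_h = 0.1   # checkpoint every ~6 minutes (assumed)
restart_overhead_h = 0.1      # detect failure, swap node, reload state (assumed)

cluster_mtbf_hours = node_mtbf_hours / 100_000
lost_per_failure_h = checkpoint_interval_h / 2 + restart_overhead_h
lost_fraction = lost_per_failure_h / (cluster_mtbf_hours + lost_per_failure_h)
print(f"rough fraction of wall-clock time lost to failures: {lost_fraction:.0%}")
```

Even with aggressive checkpointing, a 100,000-node run in this toy model spends a double-digit percentage of its time recovering from failures, which gives a sense of why reliability engineering matters so much at that scale.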
With recent developments, projected electricity use is now skyrocketing like never before.
Before that, I thought the calculation was that if alternative energy could be ramped up sufficiently, electricity would become too cheap to meter.
I would like to see that first.
Whoever has the attitude to successfully do "whatever it takes" to get it done would be the one I'd trust to do it in space after that.