This process takes a while, which is partly why all the computers in space seem out of date. Because they are.
No one is going to want to use chips that are many years out of date or subject to random bit flips.
(Although it does get me thinking: do random bit flips even matter when training a trillion-parameter model?)
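To make the bit-flip question concrete, here's a minimal sketch of why the answer is "it depends where the flip lands." In an IEEE 754 float32, flipping a low mantissa bit barely moves a weight, while flipping a high exponent bit can blow it up by dozens of orders of magnitude. The `flip_bit` helper and the sample weight value are just illustrations, not anything from real training hardware:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the float32 representation of x (bit 0 = lowest)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

w = 0.0123                  # a typical small model weight (illustrative)
print(flip_bit(w, 0))       # low mantissa bit: change is negligible
print(flip_bit(w, 30))      # top exponent bit: value explodes to ~1e36
```

A flip in the mantissa is noise that gradient descent can shrug off; a flip in the exponent can NaN out a whole training step, which is why datacenter GPUs use ECC memory.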
Latency would be fine for inference: this is low Earth orbit, so a round trip is about 25 ms, optimistically. That's well within what we expect from our current crop of non-local LLMs.
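A quick back-of-envelope check on that figure, assuming a Starlink-like altitude of roughly 550 km (an assumption, not something from the text): the pure speed-of-light round trip is only a few milliseconds, so most of the ~25 ms budget is slant paths, queuing, and ground-network hops:

```python
C = 299_792_458        # speed of light in vacuum, m/s
ALTITUDE_M = 550_000   # assumed LEO altitude (~Starlink shell)

one_way_s = ALTITUDE_M / C
round_trip_ms = 2 * one_way_s * 1000
print(f"{round_trip_ms:.1f} ms")  # ~3.7 ms of pure physics; overhead fills out the rest
```

Either way, the total sits comfortably under the multi-second generation times we already tolerate from hosted LLMs.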