On the one hand, LLMs do require significant amounts of compute to train. On the other hand, if you amortize training costs across all user sessions, is it really that big a deal? And that’s not even factoring in Moore’s law and incremental improvements to model training efficiency.
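
To make the amortization point concrete, here’s a back-of-the-envelope sketch. Every number in it (training cost, session volume, model lifetime) is a made-up assumption for illustration, not a real figure for any particular model:

```python
# Back-of-the-envelope amortization of a one-time training cost over
# user sessions. All numbers are illustrative assumptions.

training_cost_usd = 100_000_000   # assumed one-time training cost
sessions_per_day = 10_000_000     # assumed daily user sessions
model_lifetime_days = 365         # assumed lifetime before the model is replaced

total_sessions = sessions_per_day * model_lifetime_days
amortized_cost_per_session = training_cost_usd / total_sessions

print(f"Amortized training cost per session: ${amortized_cost_per_session:.4f}")
# With these assumptions: $100M / 3.65B sessions ≈ $0.027 per session
```

Under those (entirely hypothetical) assumptions, training works out to fractions of a cent per session, which is the intuition behind the “is it really that big a deal?” question.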