Yes, you’re right. I misspoke.
I’m curious whether there are ways to get around the monolithic nature of today’s models. There have to be architectures where a generalized model coordinates specialized models that are cheaper to train, e.g., by calling into a tool that is actually another model. Pre-LLM, related ideas went by names like boosting or “mixture of experts” (I’m sure I’m butchering some nuance there).
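Something like this, maybe (a toy sketch of the generalist-routes-to-specialists idea; every function and name here is invented, and the keyword check stands in for the generalist model making the routing call):

```python
# Hypothetical sketch: a generalist "router" delegating to cheaper
# specialist models exposed to it as tools. All names are invented.

def math_specialist(query: str) -> str:
    # Stand-in for a small model fine-tuned on math.
    return f"[math model] answering: {query}"

def code_specialist(query: str) -> str:
    # Stand-in for a small model fine-tuned on code.
    return f"[code model] answering: {query}"

SPECIALISTS = {"math": math_specialist, "code": code_specialist}

def route(query: str) -> str:
    # In a real system the generalist model itself would decide which
    # tool to call; a keyword heuristic stands in for that decision here.
    if any(w in query.lower() for w in ("integral", "sum", "equation")):
        return SPECIALISTS["math"](query)
    return SPECIALISTS["code"](query)

print(route("solve this equation"))
```

The appeal is that each specialist can be trained (or swapped out) independently, while the generalist only needs to learn when to delegate.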