Perhaps, but the relative success of trained LLMs at acting with apparent generalised understanding may indicate that, after training, it is only the interface that is really a language model;
that the deeper into the network you go (the further from the linguistic context), the less things are about words and linguistic structure specifically, and the more they are about things and relations in general.
(This also means that multiple interfaces can be integrated, sometimes making translation between them possible, e.g. image <=> tree<string>; a toy sketch of that shape follows.)
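To make the picture concrete, here is a minimal, untrained PyTorch sketch of that architecture: modality-specific "interfaces" (encoders/decoders) wrapped around a shared, modality-agnostic core, so that one modality's input can be routed out through another modality's interface. Everything here is hypothetical illustration: the dimensions, module names, and the `translate` helper are toy choices, not any particular model's design.

```python
import torch
import torch.nn as nn

LATENT = 128  # hypothetical size of the shared, modality-agnostic space

# Modality-specific "interfaces": each maps its own surface form into and
# out of the shared latent space. Shapes are toy values for illustration.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, LATENT))
image_decoder = nn.Sequential(nn.Linear(LATENT, 32 * 32), nn.Unflatten(1, (32, 32)))
text_encoder = nn.Linear(512, LATENT)   # 512 = toy pooled-token-embedding size
text_decoder = nn.Linear(LATENT, 512)

# Shared core: operates on "things and relations" in the latent space,
# with no knowledge of which modality the input came from.
core = nn.Sequential(nn.Linear(LATENT, LATENT), nn.ReLU(), nn.Linear(LATENT, LATENT))

def translate(x, encoder, decoder):
    """Route one modality's input through the shared core and out
    another modality's interface, e.g. image -> text."""
    return decoder(core(encoder(x)))

# Usage: a (fake) image in, a text-space vector out. Untrained, so only
# the plumbing is meaningful here.
img = torch.randn(1, 32, 32)
text_repr = translate(img, image_encoder, text_decoder)
print(text_repr.shape)  # torch.Size([1, 512])
```

The point the sketch makes is structural, not empirical: if the core really is modality-agnostic, then adding a new interface (audio, parse trees, etc.) buys translation to every existing modality for free, rather than requiring pairwise translators.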