Isn't your first point purely a consequence of LLMs being canned models that aren't actively trained, i.e. inference-only? It isn't really a fair comparison, considering humans can learn continuously.
I suppose one could build an LLM around a LoRA adapter that is continuously trained, to try to get it to adapt to new scenarios.