1. fennec+ (OP) 2024-10-22 12:33:09
Isn't your first point purely a consequence of LLMs being canned models that aren't actively trained, i.e. inference-only? It isn't really a fair comparison, considering humans can learn continuously.

I suppose one could build an LLM around a LoRA adapter that is continuously trained, to try to get it to adapt to new scenarios.
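
Something along these lines, as a rough sketch assuming Hugging Face transformers + peft, with gpt2 standing in for the base model (model name, target modules, and hyperparameters are just illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    # Frozen base model plus a small trainable LoRA adapter.
    base = AutoModelForCausalLM.from_pretrained("gpt2")
    tok = AutoTokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token  # gpt2 has no pad token by default

    lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                          target_modules=["c_attn"], task_type="CAUSAL_LM")
    model = get_peft_model(base, lora_cfg)  # base weights stay frozen

    # Only the adapter parameters get optimized.
    opt = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)

    def adapt(texts):
        """One small online update on freshly observed text."""
        model.train()
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        # Ignore padded positions in the language-modeling loss.
        labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)
        loss = model(**batch, labels=labels).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
        return loss.item()

    # Interleave normal inference with adapter updates as new data arrives.
    adapt(["The meeting has moved from Tuesday to Thursday."])

The point of keeping the base frozen and only updating the adapter is that the ongoing training stays cheap and the original weights can't be wrecked by a bad batch; you can always drop or reset the adapter.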
