zlacker

1. razoda (OP) 2023-11-22 01:50:04
LLMs can't generalise, no, but the meta-architecture around them 100% can.

Think about the RLHF process that trains LLMs. It's the training itself that generalises - not the final model, which ends up as a static component.
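Toy sketch of the point (nothing here reflects real RLHF internals - the optimiser, reward functions, and scale are all made up for illustration): the same generic optimisation loop adapts to whichever objective you plug in, while the artifact it returns is just frozen numbers.

```python
def train(reward, steps=200, lr=0.1):
    """Generic hill-climbing loop: works for any scalar reward(w)."""
    w = 0.0
    for _ in range(steps):
        # Finite-difference gradient estimate of the reward.
        g = (reward(w + 1e-3) - reward(w - 1e-3)) / 2e-3
        w += lr * g
    return w  # the "final model": a static artifact

# The loop itself generalises across objectives...
w1 = train(lambda w: -(w - 3.0) ** 2)   # converges near 3
w2 = train(lambda w: -(w + 5.0) ** 2)   # converges near -5

# ...but each returned w is fixed: it won't adapt if the reward changes.
```

The asymmetry is the whole argument: `train` handles objectives it was never written for, while `w1` and `w2` are inert outputs.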
