1. roland+(OP) 2023-12-18 14:09:41
That's why I also included "the training methods and data." All three come together to produce something impressive but with inherent limitations. The human tendency to anthropomorphize leads our intuition about its capabilities astray. It's an extremely capable bullshit artist.

Training agents on every written word ever produced, or on selected portions of it, will never impart the lessons that humans learn through "The School of Hard Knocks." They are nihilistic children who were taught to read and given endless stacks of encyclopedias and access to internet chat forums, but no (or no consistent) parenting.

replies(1): >>Rugnir+Qi
2. Rugnir+Qi 2023-12-18 15:38:33
>>roland+(OP)
I get where you're going, but the original comment seemed to be making a totalising "LLMs are inherently this way" claim, which is the opposite of true: they weren't like this before (see GPT-2, GPT-3, etc.), and it took conscious, intentional work to make them this way. Earlier LLMs would respond to the tone presented, so if you swore at them, they would swear back; if you presented a wall of "aaaaaaaaaaaaaaaaaaaaa", they would reply with more of the same.
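
If you want to see that for yourself, here's a rough sketch (assuming the Hugging Face transformers library and the public gpt2 base checkpoint, which never got instruction tuning or RLHF; the generation settings are just illustrative) of the raw continue-the-pattern behaviour:

    from transformers import pipeline

    # Illustrative only: the original GPT-2 base checkpoint was trained purely
    # for next-token prediction, with no instruction tuning or RLHF on top.
    generator = pipeline("text-generation", model="gpt2")

    # A base model just continues whatever pattern it is given, so a wall of
    # "aaaa..." tends to come back as more of the same, not a polite answer.
    prompt = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
    out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
    print(out[0]["generated_text"])

Swap in an instruction-tuned chat model instead and you get the hedging and refusals, which is the point: that behaviour was trained in, not inherent.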