zlacker

[parent] [thread] 7 comments
1. 4death+(OP)[view] [source] 2023-11-20 23:27:46
I don't agree with your point regarding training data. The internet is infamous for pedants who will correct even the smallest factual or logical errors. Take this comment, for instance... It seems like the training set would be filled with proposition X, followed by a corrective assertion Y.
replies(2): >>xp84+m1 >>jibal+qo3
2. xp84+m1[view] [source] 2023-11-20 23:36:06
>>4death+(OP)
I think you're agreeing with GP.

That's the point: The internet IS full of pedants correcting others' statements. (Hopefully those pedants are right enough of the time for this to be helpful training data, heh.)

I think GP (kromem) was pointing out that those corrections are more likely to be phrased as "You're wrong, here's why..." than as "I'm sorry, I was mistaken" because humans are full of sass for other humans and not as full of first-person admitted errors.

replies(2): >>kromem+62 >>galaxy+eC
3. kromem+62[view] [source] [discussion] 2023-11-20 23:40:49
>>xp84+m1
Exactly.
replies(1): >>4death+1Q
4. galaxy+eC[view] [source] [discussion] 2023-11-21 03:45:21
>>xp84+m1
Good point. That is why LLMs are incapable of humility. And that may be their downfall.
5. 4death+1Q[view] [source] [discussion] 2023-11-21 05:26:29
>>kromem+62
Isn't the entire promise of LLMs that they're supposed to generalize, though?
replies(2): >>jibal+eo3 >>kromem+5L4
6. jibal+eo3[view] [source] [discussion] 2023-11-21 20:24:06
>>4death+1Q
No. Or if so that's a false promise, because LLMs are incapable of generalizing.
7. jibal+qo3[view] [source] 2023-11-21 20:24:38
>>4death+(OP)
Read it again ... you got it exactly backwards.
8. kromem+5L4[view] [source] [discussion] 2023-11-22 04:40:36
>>4death+1Q
How do you mean?