zlacker

[parent] [thread] 2 comments
1. mlhpdx+(OP)[view] [source] 2023-03-18 16:40:50
Most people don’t understand the technology and maths at play in these systems. That’s normal, as is reaching for familiar words that make it feel less awful. If you have a genuine interest in understanding how and why errant generated content emerges, it will take some study. There isn’t (in my opinion) a quick, helpful answer.
replies(1): >>trippi+s91
2. trippi+s91[view] [source] 2023-03-19 01:24:40
>>mlhpdx+(OP)
I genuinely want to understand whether there’s a meaningful difference between non-hallucinatory and hallucinatory content generation other than “real world correctness”.
replies(1): >>mlhpdx+1W7
3. mlhpdx+1W7[view] [source] [discussion] 2023-03-21 00:51:23
>>trippi+s91
I’m far from an expert, but as I understand it the reference point isn’t so much the “real world” as it is the training data. If the model generates a strongly weighted association that isn’t in the data, and perhaps shouldn’t exist at all, that’s what gets called a hallucination. I’d prefer a word like “superstition”; it seems more relatable.
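A minimal toy sketch of what I mean (my own illustration, nothing to do with how a real LLM is built): even a bigram model with add-one smoothing puts real probability on word pairs it never saw in its tiny corpus, so sampling can emit an “association” the training data doesn’t actually contain.

    # Toy sketch only: add-one smoothing gives unseen word pairs nonzero
    # probability, so sampling can produce "associations" absent from the data.
    from collections import Counter
    import random

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    vocab = sorted(set(corpus))
    pairs = Counter(zip(corpus, corpus[1:]))

    def prob(prev, nxt):
        # Laplace smoothing: (observed count + 1) / (total for prev + vocab size)
        seen = sum(c for (p, _), c in pairs.items() if p == prev)
        return (pairs[(prev, nxt)] + 1) / (seen + len(vocab))

    def sample_next(prev):
        weights = [prob(prev, w) for w in vocab]
        return random.choices(vocab, weights=weights)[0]

    # "cat" is never followed by "rug" in the corpus, yet the model rates it
    # possible and can generate it.
    print(prob("cat", "rug"))                      # > 0 despite zero observations
    print([sample_next("cat") for _ in range(5)])  # may include words never seen after "cat"

A real transformer is obviously nothing like this, but the reference point is the same: what counts as errant is judged against the training data, not the world.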