zlacker

[parent] [thread] 5 comments
1. trippi+(OP)[view] [source] 2023-03-18 15:22:46
An aside: what do people mean when they say “hallucinations” generally? Is it something more refined than just “wrong”?

As far as I can tell, most people just use it as shorthand for “wow, that was weird”, but there’s no difference as far as the model is concerned?

replies(2): >>mlhpdx+Ic >>bombca+Jr
2. mlhpdx+Ic[view] [source] 2023-03-18 16:40:50
>>trippi+(OP)
Most people don’t understand the technology and maths at play in these systems. That’s normal, and so is reaching for familiar words that make it feel less awful. If you have a genuine interest in understanding how and why errant generated content emerges, it will take some study. There isn’t (in my opinion) a quick, helpful answer.
replies(1): >>trippi+am1
3. bombca+Jr[view] [source] 2023-03-18 18:03:46
>>trippi+(OP)
Wrong is saying 2+2 is five.

Wrong is saying that the sun rises in the west.

By hallucinating they’re trying to imply that it didn’t just get something wrong but instead dreamed up an alternate world where what you want existed, and then described that.

Or, to look at it another way, it gave an answer that looks right enough that you can’t immediately tell it is wrong.

replies(1): >>permo-+HFe
4. trippi+am1[view] [source] [discussion] 2023-03-19 01:24:40
>>mlhpdx+Ic
I genuinely want to understand whether there’s a meaningful difference between non-hallucinatory and hallucinatory content generation other than “real world correctness”.
replies(1): >>mlhpdx+J88
5. mlhpdx+J88[view] [source] [discussion] 2023-03-21 00:51:23
>>trippi+am1
I’m far from an expert, but as I understand it the reference point isn’t so much the “real world” as it is the training data: a hallucination is the model generating a strongly weighted association that isn’t in the data, and perhaps shouldn’t exist at all. I’d prefer a word like “superstition”; it seems more relatable.
6. permo-+HFe[view] [source] [discussion] 2023-03-22 19:47:04
>>bombca+Jr
This isn't a good explanation. These LLMs are essentially statistical models. When they "hallucinate", they're not "imagining" or "dreaming"; they're simply producing a string of results that your prompt, combined with its training corpus, implies to be likely.
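
A minimal sketch of that last point, in Python: the toy distribution, prompts, and probabilities below are invented for illustration (a real LLM computes them with a trained network over tokens), but the sampling step is the same whether the output happens to be true or made up.

    import random

    # Toy conditional distribution P(next token | context). In a real LLM these
    # probabilities come from a neural network trained on a large corpus; the
    # numbers here are invented purely for illustration.
    TOY_MODEL = {
        "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Berlin": 0.03},
        "The capital of Atlantis is": {"Poseidonis": 0.60, "Paris": 0.25, "unknown": 0.15},
    }

    def sample_next_token(context: str) -> str:
        """Sample the next token with probability proportional to its weight."""
        dist = TOY_MODEL[context]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    if __name__ == "__main__":
        # Both calls run the exact same mechanism; whether the result is true of
        # the real world is not something the sampling step ever checks.
        print(sample_next_token("The capital of France is"))
        print(sample_next_token("The capital of Atlantis is"))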