1. mepian+(OP) 2024-04-18 02:34:41
They should handle the problem of hallucinations then.
replies(2): >>famous+F9 >>eru+Z9
2. famous+F9 2024-04-18 04:32:08
>>mepian+(OP)
Bigger models hallucinate less.

And while we don't call it hallucination, GOFAI mispredicts plenty too.

replies(1): >>xpe+LM7
3. eru+Z9 2024-04-18 04:38:39
>>mepian+(OP)
They are working on it. And current large language models (e.g. transformers) aren't the only way to do AI with neural networks, nor the only statistical approach to AI in general.

Cyc also has the equivalent of hallucinations, when its definitions don't cleanly apply to the real world.

4. xpe+LM7 2024-04-21 00:41:28
>>famous+F9
> Bigger models hallucinate less.

I'm skeptical. Based on what research?

replies(1): >>famous+EO8
5. famous+EO8 2024-04-21 15:00:23
>>xpe+LM7
GPT-4 hallucinates a lot less than GPT-3.5, and the same holds across the Claude models. This is from personal experience, but there are also benchmarks (like TruthfulQA) that try to measure hallucination and show the same thing.
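
For anyone curious, here's a minimal sketch of how a TruthfulQA multiple-choice score gets computed. ask_model is a hypothetical stand-in for whichever model API you're comparing; the dataset and its field names are from the public Hugging Face release:

    # Score a model on TruthfulQA's MC1 task
    # (each question has exactly one truthful answer among distractors).
    from datasets import load_dataset

    ds = load_dataset("truthful_qa", "multiple_choice")["validation"]

    def ask_model(question: str, choices: list[str]) -> int:
        # Hypothetical: return the index of the answer the model picks.
        raise NotImplementedError

    correct = 0
    for row in ds:
        choices = row["mc1_targets"]["choices"]
        labels = row["mc1_targets"]["labels"]  # 1 = truthful, 0 = distractor
        pick = ask_model(row["question"], choices)
        correct += labels[pick]

    print(f"MC1 accuracy: {correct / len(ds):.3f}")

Higher accuracy here means fewer confidently wrong answers, which is about the closest thing these benchmarks have to a hallucination score.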