
1. omnico+ (OP) 2026-01-20 16:12:32
> It's not clear to me that LLMs hallucinating is because they are extrapolating beyond their training data. Is that proven? Or are you extrapolating?

It's a hypothesis at this stage, but I'm going to have a go at making it more quantitative. It seems like the obvious explanation for "hallucinations", and it should also be fairly straightforward to attribute particular inference results to the training data that influenced them. I expect to run into difficulties, though, since the idea is so obvious that it's vanishingly unlikely it hasn't been tried.
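
To make "attribute inference results to training data" concrete, here's a toy sketch of the crudest version of that experiment: rank training snippets by embedding similarity to a model output. Everything here (the hash-trigram embedding, the corpus, the function names) is made up for illustration; a real attribution study would use something like influence functions or gradient-based attribution rather than raw similarity.

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Toy deterministic embedding: hashed bag of character trigrams."""
        vec = np.zeros(dim)
        for i in range(len(text) - 2):
            vec[hash(text[i:i + 3]) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    def attribute(output: str, corpus: list[str], k: int = 3) -> list[tuple[str, float]]:
        """Return the k training snippets most similar to the model output."""
        out_vec = embed(output)
        scored = [(doc, float(out_vec @ embed(doc))) for doc in corpus]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

    corpus = [
        "The Eiffel Tower is in Paris.",
        "Kernel smoothing averages nearby observations.",
        "LLMs are trained on large text corpora.",
    ]
    print(attribute("Hallucinations may come from sparse training coverage.", corpus))

The interesting (and hard) part would be the cases where no training snippet scores well, i.e. where the output looks like an extrapolation rather than an interpolation.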

> And what exactly is your proof that human ingenuity is not just pattern matching?

Firstly, I'm not the one making a strong claim that needs to be "proved". Secondly, "pattern matching" is ill-defined, and it isn't the thing I'm claiming human intelligence is not. I'm saying human intelligence isn't a kernel smoothing algorithm run over a corpus of text. That seems rather obvious. What's your proof that it is?
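
For anyone unfamiliar with the term, here's what I mean by a kernel smoother (Nadaraya-Watson style), with made-up one-dimensional data; it's an illustration of the technique, not a claim about how any particular LLM is implemented. The prediction at a query point is just a similarity-weighted average of the training targets, which is why it degrades to parroting the nearest training point once you query far outside the data.

    import numpy as np

    def kernel_smooth(x_train, y_train, x_query, bandwidth=0.5):
        """Nadaraya-Watson: Gaussian-kernel-weighted average of y_train."""
        dists = np.abs(x_train - x_query)
        weights = np.exp(-(dists / bandwidth) ** 2)
        return float(np.sum(weights * y_train) / np.sum(weights))

    x_train = np.array([0.0, 1.0, 2.0, 3.0])
    y_train = np.array([0.0, 1.0, 4.0, 9.0])

    print(kernel_smooth(x_train, y_train, 1.5))   # inside the data: a blend of nearby targets
    print(kernel_smooth(x_train, y_train, 10.0))  # far outside: collapses to the nearest target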
