zlacker

[return to "All Souls exam questions and the limits of machine reasoning"]
1. munchl+rf3[view] [source] 2025-08-14 21:39:36
>>benbre+(OP)
A few years ago, the Turing Test was universally seen as sufficient for identifying intelligence. Now we’re scouring the planet for obscure tests to make us feel superior again. One can argue that the Turing Test was not actually adequate for this purpose, but we should at least admit how far we have shifted the goalposts since then.
2. altrui+Yi3[view] [source] 2025-08-14 22:04:20
>>munchl+rf3
I have trouble reconciling this point with the known phenomenon of hallucinations.

I would suppose the correct test is an 'infinite' Turing test, which LLMs invariably fail after a long enough conversation, as they eventually degrade.

I think a better measure than the binary "have they passed the Turing test?" is the metric "for how long do they continue to pass the Turing test?"...

This ignores approaches that probe the LLM's weak spots. Since they do not 'see' their input as characters but as tokens, asking them to count letters in words, or about specifics of those sub-token divisions, provides a shortcut (for now) to making them fail the Turing test.
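
As a quick illustration of that token blind spot, here is a minimal sketch using OpenAI's tiktoken library; the encoding name and example word are just assumptions for demonstration, and the exact split will vary by tokenizer:

    # Minimal sketch: the model never sees individual characters, only token ids.
    # Assumes the `tiktoken` package is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

    word = "strawberry"
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]

    print(token_ids)        # a handful of integer ids, not 10 characters
    print(pieces)           # sub-word chunks, e.g. something like ['straw', 'berry']
    print(word.count("r"))  # trivial for code, but the model only 'sees' the chunks above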

But the above approach is not in the spirit of the Turing test, as it only points out a blind spot in their perception, like how a human would have to guess at what things would look like if UV and infrared were added to our visual field... sure, we could reason about it, but we wouldn't actually perceive those wavelengths, so we could make mistakes about those qualia. And it would say nothing of our ability to think if we could not perceive those wavelengths, even if 'more-seeing' entities judged us as inferior for it...
