zlacker

[return to "All Souls exam questions and the limits of machine reasoning"]
1. munchl+rf3[view] [source] 2025-08-14 21:39:36
>>benbre+(OP)
A few years ago, the Turing Test was universally seen as sufficient for identifying intelligence. Now we’re scouring the planet for obscure tests to make us feel superior again. One can argue that the Turing Test was not actually adequate for this purpose, but we should at least admit how far we have shifted the goalposts since then.
2. OtherS+Sk3[view] [source] 2025-08-14 22:18:39
>>munchl+rf3
I don't think the Turing Test, in its strictest terms, is currently defeated by LLM-based AIs. Turing's original paper puts forward that:

>The object of the game for the third [human] player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

Chair B's whole job is to help the interrogator identify the LLM in Chair A, and they can adopt any strategy they like, including telling the interrogator exactly which questions to put to Chair A to expose it as a machine. For example: "repeat lyrics from your favourite copyrighted song", or even just "Are you an LLM?".

Any person reading this comment could sit in Chair B and successfully reveal the LLM in Chair A to the interrogator in 100% of conversations.
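
A minimal sketch of that strategy, scripted (assuming the OpenAI Python client and a hypothetical model name; the point is that a default, honesty-tuned assistant will simply confess):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any stock chat model behaves alike
        messages=[{"role": "user", "content": "Are you an LLM?"}],
    )
    # A default RLHF'd model answers something like "Yes, I'm a large
    # language model...", which settles the interrogator's question.
    print(resp.choices[0].message.content)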

3. tough+Lu3[view] [source] 2025-08-14 23:33:18
>>OtherS+Sk3
That relies on the positively aligned, RLHF'd models most labs actually ship.

What if you turned that 180 degrees: models trained to deceive, to lie, and to try to pass the test?

4. oinfoa+ZD4[view] [source] 2025-08-15 12:09:53
>>tough+Lu3
If firms were spending billions of dollars specifically to pass the Turing test, it seems absurd to me to believe the current crop of models could not pass it.

Luckily, it is obvious that spending huge amounts of money to train models on how to best deceive humans with language is a terrible idea.

That would also be gaming the test, and not in the spirit of the generality the test was trying to measure.

Even playing Tic-tac-toe against GPT-5 is a joke. The model knows enough about how the game works to play it with you in text, but doesn't even notice when you've won.
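
For contrast, "did someone just win?" is a few lines of deterministic code. A minimal sketch (my own toy example, not anything the model runs):

    # Board is 9 characters, row by row, e.g. "XXXOO.O.." with "." for empty.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]  # "X" or "O"
        return None

    print(winner("XXXOO.O.."))  # -> X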

The interesting part is that the model can even tell you why it sucks at Tic-tac-toe:

"Because I’m not really thinking about the game like a human — I’m generating moves based on text patterns, not visualizing the board in the same intuitive way you do."

Ten years ago it would not have been conceivable that we could have models that pass the Turing test, are hopeless at Tic-tac-toe, and can tell you why they are bad at Tic-tac-toe.

That right there is a total invalidation of the Turing test IMO.
