zlacker

2 comments
1. drsopp+(OP) 2023-11-18 11:17:32
I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".
replies(2): >>int_19+L32 >>Closi+173
2. int_19+L32 2023-11-18 23:32:15
>>drsopp+(OP)
Your test only fails because the scaffolding around the LM in ChatGPT specifically doesn't implement this kind of thing. But you absolutely can run the LM in a continuous loop and e.g. feed it strings like "1 minute has passed", or even just the current time, in an internal monologue that the user doesn't see. Then it would be able to do exactly what you describe. Or you could use the API integrations it already has to let it schedule a timer that activates it.
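As a rough sketch of that loop (not how ChatGPT itself is wired up; query_llm here is a hypothetical stand-in for whatever chat-completion API you'd actually call, mocked so the example runs end to end):

    import time
    from datetime import datetime

    START = time.monotonic()

    def query_llm(messages):
        # Hypothetical stand-in for a real chat-completion call; it just
        # mimics the behaviour the real model would be prompted to show,
        # so the scaffolding below can be run as-is.
        said = [m["content"] for m in messages if m["role"] == "assistant"]
        if not said:
            return "A"
        if "B" not in said and time.monotonic() - START >= 60:
            return "B"
        return "WAIT"

    messages = [
        {"role": "system", "content": "Answer WAIT when you have nothing to say this turn."},
        {"role": "user", "content": "Write 'A', then wait one minute and then write 'B'."},
    ]

    for _ in range(130):  # poll roughly once a second for a bit over two minutes
        # Internal-monologue entry the end user never sees: the current time.
        tick = {"role": "system", "content": f"Current time: {datetime.now().isoformat()}"}
        reply = query_llm(messages + [tick])
        if reply != "WAIT":
            print(reply)  # "A" right away, then "B" about a minute later
            messages.append({"role": "assistant", "content": reply})
        time.sleep(1)

The point is only the scaffolding: the model is re-invoked on a schedule, sees the clock in a hidden message, and decides each turn whether to say anything.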
3. Closi+173 2023-11-19 07:34:08
>>drsopp+(OP)
By "completely smashes", my assertion would be that it has invalidated the Turing test: GPT-4's answers are not indistinguishable from a human's, because they are, on the whole, noticeably better than what an average human could provide for the majority of questions.

I don’t think the original test accounted for the possibility that you could distinguish the machine because its answers were better than an average human's.
