zlacker

[parent] [thread] 1 comment
1. westur+(OP)[view] [source] 2023-05-11 01:25:23
If an LLM passes the Turing test ("The Imitation Game") - i.e. its output is indistinguishable from a human's - does that imply that it is not possible to stylometrically fingerprint its outputs without intentional watermarking?

https://en.wikipedia.org/wiki/Turing_test

replies(1): >>kadoba+D72
2. kadoba+D72[view] [source] 2023-05-11 16:10:02
>>westur+(OP)
Implicit in the Turing test is the entity doing the evaluation. It's quite possible that a human evaluator could be fooled while a tool-assisted human, or an AI itself, could not be. And some humans may simply be better at spotting the machine than others.
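The "tool-assisted" part can be made concrete. Here's a toy sketch of one stylometric comparison - character-trigram frequency vectors scored by cosine similarity. Real stylometry uses much richer features (function-word rates, punctuation habits, burstiness), and all the texts and names below are made up for illustration:

```python
# Toy stylometric "fingerprint": character-trigram frequency vectors
# compared by cosine similarity. Illustrative only - real detectors
# use far richer feature sets.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical samples: a known model style, a suspect text, a control.
known_model = trigram_profile("As an AI language model, I can help with that.")
sample      = trigram_profile("As an AI assistant, I can certainly help with that.")
unrelated   = trigram_profile("gr8 thx m8, cya l8r at the pub!!")

# The suspect text scores closer to the known model style than the control.
print(cosine(known_model, sample) > cosine(known_model, unrelated))
```

The point being: a human judge in the Imitation Game never computes anything like this, so "passes the Turing test" and "has no measurable statistical signature" are separate claims.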
[go to top]