zlacker

1. mattbe+(OP) 2025-08-22 01:09:35
LLMs can't count letters, their writing is boring, and you can trick them into talking gibberish. That is a long way from passing the Turing test, even if we were fooled for a couple of weeks in 2022.

IMO when people declare that LLMs "pass" at a particular skill, it's a sign that they don't have the taste or experience to judge that skill themselves. Or - when it's CEOs - they have an interest in devaluing it.

So yes, if you're trying to fool an experienced open source maintainer (especially one who's said he doesn't want that) with unrefined LLM-generated code, good luck.

replies(1): >>otterl+1b1
2. otterl+1b1 2025-08-22 13:41:37
>>mattbe+(OP)
We’re talking about code here, not prose.

Would you like to take the Pepsi challenge? Happy to put random code snippets in front of you and see whether you can accurately tell which were written by a human and which by an LLM.
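
Concretely, the challenge would amount to something like the sketch below: shuffle snippets drawn from a human-written pool and an LLM-written pool, hide the labels, and score the guesses afterwards. The snippet pools and the prompt text are made-up placeholders, not anything specific being offered here.

    import random

    # Placeholder pools; a real run would fill these with unlabeled snippets.
    human_snippets = ["def add(a, b):\n    return a + b"]
    llm_snippets = ["def add(x, y):\n    \"\"\"Add two integers.\"\"\"\n    return x + y"]

    def run_blind_test(human, llm, seed=0):
        rng = random.Random(seed)
        # Pair each snippet with its true origin, then shuffle so the
        # reviewer never sees the labels until scoring.
        trials = [(s, "human") for s in human] + [(s, "llm") for s in llm]
        rng.shuffle(trials)
        correct = 0
        for i, (snippet, truth) in enumerate(trials, 1):
            print(f"--- snippet {i} ---\n{snippet}")
            guess = input("human or llm? ").strip().lower()
            correct += (guess == truth)
        print(f"{correct}/{len(trials)} correct "
              f"(coin-flipping would average {len(trials) / 2:.0f})")

    if __name__ == "__main__":
        run_blind_test(human_snippets, llm_snippets)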
