zlacker

1. bonobo+(OP) 2025-08-27 23:22:54
Whenever there are commonly agreed upon and known tell-tale signs of AI writing, the model creators can just retrain to eliminate those cues. On an individual level, you can also specify in your personalization prompt which turns of phrase to avoid (though central retraining works better).

This will be a cat and mouse game. Content factories will want models that don't produce suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI output because most people ask models to polish their phrasing with more sophisticated vocabulary and the like. These are often non-native speakers, who don't really notice the over-pompousness; to them it just looks like good writing.

Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge of unapologetically putting a thought on the table. The models are more weaselly and conflict-avoidant, and they hold a kind of averaged, blurred, millennial Reddit-brained value system.

replies(1): >>jjani+Cw
2. jjani+Cw 2025-08-28 04:58:49
>>bonobo+(OP)
> Whenever there are commonly agreed upon and known tell-tale signs of AI writing

It's been two years now since such commonly agreed-upon signs appeared, yet by and large they're still just as present today.

replies(1): >>mh-+dC
3. mh-+dC 2025-08-28 05:52:55
>>jjani+Cw
Survivor bias. You don't know what you're not spotting in the wild.