zlacker

[parent] [thread] 9 comments
1. lo_zam+(OP)[view] [source] 2025-08-27 23:23:36
In the "opinion" of ChatGPT, my style of writing is "academic". I'm not exactly sure why. Perhaps I draw from a vocabulary or turns of phrase that aren't necessarily characteristic of colloquial speech among native speakers. Technically, English wasn't my first language, so perhaps this is something like the case with RP English in Britain. Only foreigners speak it, so if you speak RP, then you aren't a native Brit.

In any case, it's possible to misuse, abuse, or overuse words like "delve", but to think that the mere use of "delve" screams "AI-generated"...well, there are some dark tunnels that perhaps such people should delve less into.

replies(2): >>bonobo+m >>lupusr+mW
2. bonobo+m[view] [source] 2025-08-27 23:26:33
>>lo_zam+(OP)
> In the "opinion" of ChatGPT, my style of writing is "academic".

It may simply be glazing. If you ask it to estimate your IQ (if it complies), it will likely say >130 regardless of what you actually wrote. RLHF taught it that users like being praised.

replies(2): >>ACCoun+v3 >>lo_zam+pp1
3. ACCoun+v3[view] [source] [discussion] 2025-08-27 23:55:28
>>bonobo+m
And, if you want to have some fun, you could give it your writing sample - but say that it's from a random blog post you found online. See what it tells you on that.

It really is a shame that the average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin to funnel in-the-wild thumbs-up/thumbs-down feedback from real users into your training pipeline is the moment you invite disaster.

By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".

replies(3): >>bonobo+h6 >>wink+Eb1 >>lo_zam+sr1
4. bonobo+h6[view] [source] [discussion] 2025-08-28 00:20:55
>>ACCoun+v3
I wonder if someone would build a personalized social media simulator where you are the most popular person, a top celebrity: you get the most likes, everyone posts selfies with you (generated with editing models like Gemini's nano banana), and whatever dumb opinion you have is affirmed as genius, and so on. Like a UI clone of a site like Instagram, but with text and images populated by AI, mixing simulated real celebrities and randomly generated NPCs.

People get hooked on the upvote and like counters on Reddit and social media, and AI can provide an always agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake, they still want their dose of sycophancy. Maybe a popularity simulator could work too.

5. lupusr+mW[view] [source] 2025-08-28 09:23:16
>>lo_zam+(OP)
ChatGPT says that my writing is "analytical and precise" but I'm fucking retarded.
6. wink+Eb1[view] [source] [discussion] 2025-08-28 11:59:06
>>ACCoun+v3
But is it really 100% positive? If you write a paper, sounding academic is fine, but not necessarily if you write a novel. Especially if you try to blend in or mimic a certain style.
◧◩
7. lo_zam+pp1[view] [source] [discussion] 2025-08-28 13:31:10
>>bonobo+m
That assumes the characterization is perceived as flattering, or that it has enough data on me to "think" it would be flattering to me. Generally, given the anti-intellectual bias in American popular culture, I'm on the fence about that. But then, what are the biases of the corpus ChatGPT was trained on?

For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.

In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.

replies(1): >>static+AC1
8. lo_zam+sr1[view] [source] [discussion] 2025-08-28 13:41:11
>>ACCoun+v3
Not everyone appreciates having his speech characterized as "academic" - in certain circles, it's viewed rather poorly - so I'm not convinced of the glazing hypothesis.

ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.

"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.

replies(1): >>ACCoun+rD1
9. static+AC1[view] [source] [discussion] 2025-08-28 14:39:00
>>lo_zam+pp1
Even if, for some reason, you think LLMs are fit for evaluating writing style (I don't), I'd at least ask Gemini Pro and Claude Opus to see if there's consensus among the plausible-sounding bullshit generators.
10. ACCoun+rD1[view] [source] [discussion] 2025-08-28 14:42:41
>>lo_zam+sr1
Does an average user appreciate this?

Do you?

The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.

The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
