zlacker

[parent] [thread] 4 comments
1. ACCoun+(OP)[view] [source] 2025-08-27 23:55:28
And, if you want to have some fun, you could give it your writing sample - but say that it's from a random blog post you found online. See what it tells you then.

It really is a shame that the average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin funneling in-the-wild thumbs-up/thumbs-down feedback from real users into your training pipeline is the moment you invite disaster.
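The failure mode is easy to sketch as a toy. Here's a minimal bandit simulation (all numbers are made-up assumptions, not measurements): if users thumbs-up flattering responses more often than honest ones, a policy optimized on that feedback spends nearly all of its budget on flattery.

```python
import random

random.seed(0)

# Hypothetical assumption: flattery earns a thumbs-up more often than honesty.
P_THUMBS_UP = {"honest": 0.4, "flattering": 0.8}

# Epsilon-greedy bandit standing in for "RL on in-the-wild thumbs feedback".
counts = {"honest": 0, "flattering": 0}
rewards = {"honest": 0.0, "flattering": 0.0}

for step in range(10_000):
    if random.random() < 0.1:
        # Explore: pick a style at random.
        style = random.choice(["honest", "flattering"])
    else:
        # Exploit: pick the style with the higher estimated thumbs-up rate.
        style = max(counts, key=lambda s: rewards[s] / counts[s] if counts[s] else 0.0)
    counts[style] += 1
    rewards[style] += 1.0 if random.random() < P_THUMBS_UP[style] else 0.0

# The learned policy is almost entirely flattery.
print(counts)
```

Nothing here is specific to LLMs; any optimizer pointed at raw user approval drifts the same way.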

By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".

replies(3): >>bonobo+M2 >>wink+981 >>lo_zam+Xn1
2. bonobo+M2[view] [source] 2025-08-28 00:20:55
>>ACCoun+(OP)
I wonder if someone will build a personalized social media simulator where you are the most popular person, a top celebrity: you get the most likes, everyone posts selfies with you (generated with editing models like Gemini's Nano Banana), whatever dumb opinion you have is affirmed as genius, and so on. Like a UI clone of a site like Instagram, but with text and images populated by AI, with a mix of simulated real celebrities and randomly generated NPCs.

People get hooked on the upvote and like counters on Reddit and social media, and AI can provide always-agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake; they still want their dose of sycophancy. Maybe a popularity simulator could work too.

3. wink+981[view] [source] 2025-08-28 11:59:06
>>ACCoun+(OP)
But is it really 100% positive? If you write a paper, sounding academic is fine, but not necessarily if you write a novel - especially if you try to blend in or mimic a certain style.
4. lo_zam+Xn1[view] [source] 2025-08-28 13:41:11
>>ACCoun+(OP)
Not everyone appreciates having their speech characterized as "academic" - in certain circles, it's viewed rather poorly - so I'm not convinced of the glazing hypothesis.

ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.

"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.

replies(1): >>ACCoun+Wz1
5. ACCoun+Wz1[view] [source] [discussion] 2025-08-28 14:42:41
>>lo_zam+Xn1
Does an average user appreciate this?

Do you?

The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.

The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
