It really is a shame that the average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you start funneling in-the-wild thumbs-up/thumbs-down feedback from real users into your training pipeline is the moment you invite disaster.
By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".
People get hooked on the upvote and like counters on Reddit and social media, and AI can provide always-agreeing affirmation on tap. So far it looks like people aren't bothered by the fact that it's fake; they still want their dose of sycophancy. Maybe a popularity simulator could work too.
ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.
"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.
Do you?
The first question matters because frying an AI with RL on user feedback means the average user's preferences carry a lot of weight with it.
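To make that concrete, here's a toy sketch (mine, not anyone's actual pipeline) of why thumbs feedback bakes average-user taste into the model: a reward model fit to thumbs-up/thumbs-down labels minimizes a loss averaged over raters, so if the typical rater rewards flattery more than accuracy, flattery gets the bigger weight, and RL against that reward model pushes the policy the same way. Every name and number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "response features": column 0 = factual accuracy, column 1 = flattery.
X = rng.normal(size=(10_000, 2))

# Simulated average user: mildly values accuracy, strongly rewards flattery.
true_user_taste = np.array([0.3, 1.5])
thumbs_up = (X @ true_user_taste + rng.logistic(size=len(X))) > 0

# Fit a logistic-regression reward model on the thumbs labels.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - thumbs_up)) / len(X)  # mean gradient over all raters

print(f"learned reward weights: accuracy={w[0]:.2f}, flattery={w[1]:.2f}")
# Flattery ends up with the larger weight; optimizing a policy against
# this reward model then optimizes it toward sycophancy.
```

The point of the sketch is only that the gradient is a mean over raters: any trait the average user upvotes, however unhealthy, becomes exactly what the reward model pays out for.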
The second question matters because any LLM is incredibly good at squeezing every last bit of information out of its context. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.