zlacker

[parent] [thread] 1 comments
1. lo_zam+(OP)[view] [source] 2025-08-28 13:31:10
That assumes the characterization is perceived as flattering, or that enough data on me would allow it to "think" it would be flattering to me. Generally, given the anti-intellectual bias in American popular culture, I'm on the fence about that. But then, what are the biases of the corpus ChatGPT was trained on?

For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.

In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.

replies(1): >>static+bd
2. static+bd[view] [source] 2025-08-28 14:39:00
>>lo_zam+(OP)
Even if, for some reason, you think LLMs are fit for evaluating writing style (I don't), I'd at least ask Gemini Pro and Claude Opus to see if there's consensus among the plausible-sounding bullshit generators.
[go to top]