Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:
>>33911426 (Dec 2022)
>>32571890 (Aug 2022)
>>27558392 (June 2021)
>>26693590 (April 2021)
>>24189762 (Aug 2020)
>>22744611 (April 2020)
>>22427782 (Feb 2020)
>>21774797 (Dec 2019)
>>19325914 (March 2019)
We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.
The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.
Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.
* Edit: er, I phrased that too hastily! I just mean it will be a different problem at that point.
Here’s an example article that begins with the clichéd GPT-generated intro and then switches into carefully crafted prose:
https://www.theatlantic.com/technology/archive/2022/12/chatg...
It is to communication what calculators are to mathematics.
I'm finding myself reaching for it instead of Google or Wikipedia for a lot of random questions, which is pretty damn impressive. It's not good at everything, but I'm rather blown away by how strong it is in the 'short informative essay' niche.
It allows you to explore topics that are well understood, in a way that fits your own understanding and pace. It's like somebody writing a great mini-tutorial on a topic you're interested in, at a level of abstraction and pace that suit you.
Examples for me are concepts from mathematics or computer science that I'd like to brush up on: things you could also ask a colleague about over lunch, or eventually find by searching Google/YouTube/Wikipedia, etc. It's just much faster and more convenient.
Often I have a specific question, like how X relates to Y, and the answer I get is usually total nonsense.