Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:
>>33911426 (Dec 2022)
>>32571890 (Aug 2022)
>>27558392 (June 2021)
>>26693590 (April 2021)
>>24189762 (Aug 2020)
>>22744611 (April 2020)
>>22427782 (Feb 2020)
>>21774797 (Dec 2019)
>>19325914 (March 2019)
We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.
The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.
Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.
* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.
Here’s an example article that begins with a clichéd GPT-generated intro, and then shifts into crafted prose:
https://www.theatlantic.com/technology/archive/2022/12/chatg...
It is to communication what calculators are to mathematics.
I'm finding myself reaching for it instead of Google or Wikipedia for a lot of random questions, which is pretty damn impressive. It's not good at everything, but I'm rather blown away by how strong it is in the 'short informative essay' niche.
I'd take issue with "fact-based". It frequently makes up facts (and even sources!) as it generates text. Also, bear in mind that the "facts" it produces could easily have come from a tabloid article or a "Moon landing was faked / flat earth" blog.