Case example: I tried probing its limits on chemical knowledge, starting with simple electronic structures of molecules, and it does OK: remarkably, it got the advanced high-school picture of methane's electronic structure right. It choked on the molecular orbital picture, though. While it managed to list the differences between old-school hybrid orbitals and modern molecular orbitals, it couldn't go into any interesting detail about the molecular orbital structure of methane. Searching the web, I notice such details mostly appear in figures in research papers, not so much in text.
On the other hand, since I'm a neophyte when it comes to database architecture, it was great at answering what I'm sure any expert would consider basic questions.
Allowing comment sections to be clogged up with ChatGPT output would thus be like going to a restaurant that only served averaged-out, mediocre but mostly-acceptable takes on recipes.
That's actually a dangerous way to use ChatGPT: since you don't know the real answer, you won't be able to tell when it gets something wrong.
I've experimented with systems design using it, but as I expected, it's a big fat no.
If a GPT bot account doesn't have human supervision, it will spit out all sorts of rubbish and be easy to spot. Otherwise, the "manager" is just a person spamming low-quality content. I'm concerned, but we have time to find a solution.