That's actually a dangerous way to use ChatGPT. Since you don't know the real answer, you won't be able to tell when it gets something wrong.
I've experimented with using it for systems design, but as I expected, it's a big fat no.
If a bot GPT account doesn't have human supervision, it will spit out all sorts of rubbish and be easy to spot. Otherwise, the manager will just be a person spamming low-quality content. I'm concerned, but we have time to find a solution.
This is already effectively the case, and it has had no impact on this phenomenon. Why would the new situation be any different?