I've seen a lot of people completely misunderstand what ChatGPT is doing and what it's capable of. They treat it as an oracle that reveals "hidden truths" or makes infallible decisions based on pure, cold logic, and both of those views are wrong. It's a text predictor that predicts text well: it generates whatever continuation of your prompt looks most plausible. Sometimes that text reflects facts, sometimes it doesn't.
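To make that concrete, here's a toy sketch of what "predicting text" means (a deliberately tiny stand-in for a real language model; the word table and weights are made up for illustration). The point is that the program picks each next word by how likely it is to follow the previous one, with no notion of whether the result is true:

```python
import random

# Toy stand-in for a language model: a table mapping a word to plausible
# next words with weights. Real models learn billions of parameters from
# web text, but the principle is the same: pick the next token by
# likelihood, not by truth.
BIGRAMS = {
    "the":   [("moon", 5), ("earth", 5), ("cheese", 1)],
    "moon":  [("is", 10)],
    "earth": [("is", 10)],
    "is":    [("round", 6), ("flat", 1), ("made", 3)],
    "made":  [("of", 10)],
    "of":    [("rock", 5), ("cheese", 3)],
}

def generate(prompt: str, max_words: int = 6) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        choices = BIGRAMS.get(words[-1])
        if not choices:
            break  # no known continuation for this word
        nexts, weights = zip(*choices)
        # Sample the next word in proportion to how often it follows
        # the current one: "plausible", not "verified".
        words.append(random.choices(nexts, weights=weights)[0])
    return " ".join(words)

print(generate("the moon"))  # e.g. "the moon is round"
print(generate("the moon"))  # ...or "the moon is made of cheese"
```

Nothing in there ever asks "is the moon actually made of cheese?"; it only asks "do these words tend to follow each other?". Scale that up by a few billion parameters and you get fluent, confident text with exactly the same blind spot.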
But if it can confidently state falsehoods and convince the general public that they're true because "the smart computer said so", then maybe we should be really careful about what we let the "smart computer" say.
Personally, I don't want my kids learning that "Hitler did nothing wrong" because the public model ingested too much garbage from 4chan. People will use ChatGPT as a vector for propaganda if we let them; we don't need to make it any easier for them.
Honestly, I would prefer an AI that stayed strictly neutral about anything other than purely factual information, but that isn't really possible with the tech we have now. I think we need to loudly change the public perception of what ChatGPT and similar tools actually are: fancy programs that create convincing hallucinations, directed by your input. We need to think of them as brainstorming tools, not knowledge engines.