I've seen a lot of people completely misunderstand what ChatGPT is doing and what it's capable of. They treat it as an oracle that reveals "hidden truths" or makes infallible decisions based on pure, cold logic, both of which are completely wrong. It's just a text jumbler that jumbles text well. Sometimes that text reflects facts, sometimes it doesn't.
But if it can confidently express lies and convince the general public those lies are true because "the smart computer said so", then maybe we should be really careful about what we let the "smart computer" say.
Personally, I don't want my kids learning that "Hitler did nothing wrong" because the public model ingested too much garbage from 4chan. People will use ChatGPT as a vector for propaganda if we let them; we don't need to make it any easier for them.