That's a great point that I haven't seen in GPT-related conversations. People view the fact that it can argue convincingly for both A and ~A as a flaw in GPT and a limitation of LLMs, rather than as an insight into human reasoning and motivation.
Maybe it's an illustration of a more general principle: when people butt up against limitations that make LLMs look silly or inadequate, often their real objection is to some hard truths about reality itself.