Compared with OpenAI, Anthropic, and Google, it is the only provider I trust not to erroneously flag harmless content.
It is also the only one of those providers that permits use for legal adult content.
There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.
What comes to mind is an incident where an unwise adjustment to the system prompt resulted in misalignment: the "Mecha Hitler" incident. The worst of it was patched within hours, and better alignment was restored within a few days. Harm done? Negligible, in my opinion.
Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the response to reports remain unclear. There, actual harm may have occurred.
However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could have committed more easily offline does not seem particularly reasonable to me, especially if safety measures were in place and additional steps were taken to close workarounds.
I don't trust big tech, which has shown time and time again that it prioritizes only its bottom line. These companies will permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.
We have seen this for years with the Google Play Store. You are coerced into paying 30% of your revenue, yet are treated like a free-tier account with no real support. They are shameless.