But when you give them a larger remit, and structure teams with some owning "value" and others essentially owning "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.
The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding it accountable for striking the balance.
In this specific case, though, Sam Altman's narrative is that they created an existential risk to humanity and that access to it needs to be restricted for everyone else. So which is it?
Anyone who's used their AI and discovered how it ignores instructions and makes things up isn't going to honestly believe it poses an existential threat any time soon. Now that they're big enough, they can end that charade.