But when you give them a larger remit, and structure teams so that some own "value" while others essentially own "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip, without regard for business value.
The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding it accountable for striking the balance.
The reason you can't "align" AI is because we, as humans on the planet, aren't universally aligned on what "aligned" means.
At best you can align with a particular group of people (a company, a town, a state, a country). But "global alignment" in almost any context just devolves into war or authoritarianism (virtual or actual).