But when you give them a larger remit, and structure teams so that some own "value" and others essentially own "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip, without regard for business value.
The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding them accountable for striking the balance.
Imagine if vehicle manufacturers[2] split their design and R&D teams into a "make the thing go" team and a "don't kill the passengers" team. Literally no one would think that arrangement made sense.
I can totally see that, once both AI and AI regulation reach a state of significant maturity, you'd have a special part of your legal team specialised in legal/regulatory compliance issues around AI, just as companies tend to have specialised data privacy compliance experts. But we're not there yet.
[1] If you're serious about long-term AI risk and alignment research, sponsor some independent academic research that gets published. That way it's arm's-length and genuinely credible.
[2] If you like, you can mentally exclude Boeing from this.
2. Boeing is a good and timely example of the consequences of such internal checks and balances collapsing under "value creation" pressure. That was a catastrophic failure, and it still can't reasonably be compared to the downside of misaligned AI.