Imagine if vehicle manufacturers[2] split their design and R&D teams into a "make the thing go" team and a "don't kill the passengers" team. Literally no one would think that arrangement made sense.
I can totally see that, once both AI and AI regulation reach significant maturity, you'd have a part of your legal team specialising in legal/regulatory compliance issues around AI, just as companies tend to have specialised data privacy compliance experts. But we're not there yet.
[1] If you're serious about long-term AI risk and alignment research, sponsor some independent academic research that gets published. That way it's at arm's length and genuinely credible.
[2] If you like, you can mentally exclude Boeing from this.
Rocket caskets. Can't kill someone who is already dead!
2. Boeing is a good and timely example of the consequences of those internal checks and balances collapsing under "value creation" pressure. That was a catastrophic failure, yet it still can't reasonably be compared to the downside of misaligned AI.