>>sethhe
I don't think having separate groups within the company check and balance each other is a great idea. This is essentially what Google set up with its "Ethical AI" group, but it led to an adversarial relationship, with that group seeing its primary role as putting up as many roadblocks and vetoes as possible against the teams actually building AI (see the whole Timnit Gebru debacle). As a result, much of Google's top AI talent jumped ship to places where they could move faster.
I think a better approach is a set of guiding principles that applies to everyone, paired with a structure that requires periodic alignment checks to confirm those principles aren't being violated (e.g., a vote requiring something like a supermajority across company leadership from all orgs), but with no single org whose job is to slow everyone else down.