> A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979
Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes or the responsibility for the code being good. That responsibility falls to the human "co-author", if you want to use that phrase.
It doesn't matter how true this should be in principle; in practice there are significant slop issues on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to trust contributions differently depending on who wrote them.
> Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes
The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs with nothing to lose from it.
Isolating the AI is the next best thing. It's still an account that faces consequences, even if it's anonymous. Yes, there are issues, but there's no perfect solution in a world where we can't have good things anymore.