zlacker

1. keerth+ (OP) 2023-11-20 00:09:48
I agree. I think a significantly better approach would have been to build out a "checks and balances" structure within OpenAI as it grew in capabilities and influence.

Internal to the OpenAI org, it sounds like all we had was the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically default to siding with the for-profit arm).

I wish they had worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all three branches separate, except for "safe AI that benefits humanity".

The truly hard-to-solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch had existed that was designed to offset the influence of investors, that might have resulted in something closer to the right balance.

replies(2): >>peyton+q2 >>Karrot+38
2. peyton+q2 2023-11-20 00:23:26
>>keerth+(OP)
Why would any employee put up with that? Why not go work somewhere else that better aligns with what you want?
replies(1): >>sethhe+f7
3. sethhe+f7 2023-11-20 00:51:49
>>peyton+q2
What is it that you want that this doesn’t offer?
replies(1): >>hn_thr+q8
4. Karrot+38 2023-11-20 00:58:30
>>keerth+(OP)
I like this idea, but I'm not sure "democracy" is the word you're looking for. There are plenty of functioning bureaucracies, in everything from monarchies to communist states, that balance competing interests. As you say, a system of checks and balances between the interests of the for-profit and non-profit arms could have been a lot more interesting. Though honestly, I don't have enough business experience to know whether this kind of thing would be at all viable.
5. hn_thr+q8 2023-11-20 01:00:50
>>sethhe+f7
I don't think separate groups within the company checking and balancing each other is a great idea. This is essentially what Google set up with their "Ethical AI" group, but it just led to an adversarial relationship, with that group seeing its primary role as putting up as many roadblocks and vetoes as possible in front of the teams actually building AI (see the whole Timnit Gebru debacle). This led a lot of the top AI talent at Google to jump ship to other places where they could move faster.

I think a better approach is to have a set of guiding principles that apply to everyone, and then put in place a structure requiring periodic confirmation that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but with no single org holding the "my job is to slow everyone else down" role).
