zlacker

[parent] [thread] 12 comments
1. hn_thr+(OP)[view] [source] 2023-11-19 23:46:36
The problem, though, is without the huge commercial and societal success of ChatGPT, the AI Safety camp had no real leverage over the direction of AI advancement worldwide.

I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that "the perfect is the enemy of the good."

That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.

replies(3): >>keerth+R3 >>Terrif+Hb >>lmm+uc
2. keerth+R3[view] [source] 2023-11-20 00:09:48
>>hn_thr+(OP)
I agree. I think a significantly better approach would have been to vote for the elaboration of a "checks and balances" structure to OpenAI as it grew in capabilities and influence.

Internal to the entire OpenAI org, it sounds like all we had was just the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically defaults to siding with the for-profit arm).

I wish they worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all 3 separate, except for "safe AI that benefits humanity".

The truly hard to solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch could exist that was designed to offset the influence of investors, that might have resulted in closer to the right balance.

replies(2): >>peyton+h6 >>Karrot+Ub
3. peyton+h6[view] [source] [discussion] 2023-11-20 00:23:26
>>keerth+R3
Why would any employee put up with that? Why not go work somewhere else that better aligns with what you want?
replies(1): >>sethhe+6b
4. sethhe+6b[view] [source] [discussion] 2023-11-20 00:51:49
>>peyton+h6
What is it that you want that this doesn’t offer?
replies(1): >>hn_thr+hc
5. Terrif+Hb[view] [source] 2023-11-20 00:56:26
>>hn_thr+(OP)
Better to have a small but independent voice that can grow in influence than to be shackled by commercial interests and lose your integrity - e.g. how many people actually give a shit what Google has to say about internet governance?
replies(1): >>HPsqua+4c
6. Karrot+Ub[view] [source] [discussion] 2023-11-20 00:58:30
>>keerth+R3
I like this idea, but I'm not sure "democracy" is the word you're looking for. There are plenty of functioning bureaucracies in everything from monarchies to communist states that balance competing interests. As you say, a system of checks and balances weighing the interests of the for-profit and non-profit arms could have been a lot more interesting. Though honestly I don't have enough business experience to know if this kind of thing would be at all viable.
7. HPsqua+4c[view] [source] [discussion] 2023-11-20 00:59:48
>>Terrif+Hb
A LOT of people care about what Google does in that area. What they say is kinda redundant.
replies(1): >>Terrif+ed
8. hn_thr+hc[view] [source] [discussion] 2023-11-20 01:00:50
>>sethhe+6b
I think the idea of separate groups within the company checking and balancing each other is not a great idea. This is essentially what Google set up with their "Ethical AI" group, but this just led to an adversarial relationship with that group seeing their primary role as putting up as many roadblocks and vetoes as possible over the teams actually building AI (see the whole Timnit Gebru debacle). This led to a lot of the top AI talent at Google jumping ship to other places where they could move faster.

I think a better approach is to have a system of guiding principles that should guide everyone, and then putting in place a structure where there needs to be periodic alignment that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but no single org has the "my job is to slow everyone else down" role).

9. lmm+uc[view] [source] 2023-11-20 01:02:22
>>hn_thr+(OP)
> That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did.

If the board were to have any influence they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing their influence that they can never use it.

replies(1): >>peyton+jf
10. Terrif+ed[view] [source] [discussion] 2023-11-20 01:06:42
>>HPsqua+4c
And everything Google does is in its self-interest or prioritizes its self-interest. Altruism falls by the wayside.
11. peyton+jf[view] [source] [discussion] 2023-11-20 01:21:19
>>lmm+uc
Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships? It’s ridiculous.
replies(2): >>yterdy+Yi >>lmm+fp
12. yterdy+Yi[view] [source] [discussion] 2023-11-20 01:46:41
>>peyton+jf
Because they're right. Maybe principles other than, "Get the richest," are important when we're talking about technology that can end the world or create literal hell on Earth (in the long term).

One wishes someone had pulled a similar (in sentiment) move on energy companies and arms suppliers.

13. lmm+fp[view] [source] [discussion] 2023-11-20 02:35:03
>>peyton+jf
> Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships?

Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board, that's what they're for!
