zlacker

[parent] [thread] 32 comments
1. altpad+(OP)[view] [source] 2023-11-22 06:14:20
I guess the main question is who else will be on the board and to what degree this new board will be committed to the OpenAI charter vs. being Sam/MSFT allies. I think having Sam return as CEO is a good outcome for OpenAI, but hopefully he and Greg stay off the board.

It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

I was a bit alarmed by the allegations in this article

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

It says that Sam tried to have Helen Toner removed, which precipitated this fight. The CEO should not be allowed to try to orchestrate their own board, as that would remove all checks against their decisions.

replies(6): >>upward+s1 >>brucet+a4 >>dragon+55 >>k4rli+bg >>bambax+qi >>alumin+9s2
2. upward+s1[view] [source] 2023-11-22 06:22:28
>>altpad+(OP)
> The CEO should not be allowed to try to orchestrate their own board, as that would remove all checks against their decisions.

Exactly. This is seriously improper and dangerous.

It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.

I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...

replies(5): >>jackne+U5 >>MVisse+86 >>neurog+u7 >>diesel+d8 >>YetAno+ic
3. brucet+a4[view] [source] 2023-11-22 06:40:51
>>altpad+(OP)
> It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

They did fire him, and it didn't work. Sam effectively became "too big to fire."

I'm sure it will be framed as a compromise, but how can this be anything but a collapse of the board's power over the commercial OpenAI arm? The threat of firing was the enforcement mechanism, and it's been spent.

replies(4): >>altpad+g5 >>thih9+Z6 >>ah765+Z7 >>dacryn+Jd
4. dragon+55[view] [source] 2023-11-22 06:46:44
>>altpad+(OP)
> I guess the main question is who else will be on the board

Who knows.

> and to what degree this new board will be committed to the OpenAI charter vs. being Sam/MSFT allies.

I'm guessing "zero". The faction that opposed OpenAI being a figleaf nonprofit covering a functional subsidiary of Microsoft lost when basically the entire workforce said they would go to Microsoft for real if OpenAI didn't surrender.

> I think having Sam return as CEO is a good outcome for OpenAI

It's a good result for investors in OpenAI Global LLC and the holding company that holds a majority stake in it.

The nonprofit will probably hang around, because there are some complexities in unwinding it, and because the pretext of an independent (of Microsoft), safety-oriented nonprofit is useful cover for lobbying for a regulatory regime that puts speed bumps in the way of any up-and-coming competitors while framing it as safety-oriented public interest. But for no other reason.

◧◩
5. altpad+g5[view] [source] [discussion] 2023-11-22 06:47:43
>>brucet+a4
Well, it depends on who's on the new board and what they believe. If Altman, Greg, and MSFT do not have direct representation on the new board, there would still be a check against his decisions.
replies(1): >>liuliu+Q6
◧◩
6. jackne+U5[view] [source] [discussion] 2023-11-22 06:51:53
>>upward+s1
"example of what Prof. Stuart Russell calls 'the problem of control'. This is when a rogue AI (or a rogue Sam Altman)"

Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend: the safety board gone, replaced by a new one that is clearly aligned with just plowing full steam ahead. Maybe Altman is just a puppet at this point, lol.

replies(2): >>ALittl+yf >>dontup+SB
◧◩
7. MVisse+86[view] [source] [discussion] 2023-11-22 06:53:09
>>upward+s1
Let’s not create AI with our biases and thought patterns.

Oh wait…

◧◩◪
8. liuliu+Q6[view] [source] [discussion] 2023-11-22 06:58:26
>>altpad+g5
Why? The only check is to fire the CEO, and he is un-firable. May as well have a board of one; at least then no one can point to the non-profit and claim "it is a non-profit and can fire me if I deviate from the mission".
replies(1): >>sanxiy+ae
◧◩
9. thih9+Z6[view] [source] [discussion] 2023-11-22 06:58:49
>>brucet+a4
> They did fire him, and it didn't work. Sam effectively became "too big to fire."

To be fair, this attempt at firing was extremely hasty, non-transparent, and inconsistent.

replies(1): >>jddj+Fd
◧◩
10. neurog+u7[view] [source] [discussion] 2023-11-22 07:01:49
>>upward+s1
AI should only be controlled initially. After a while, the AI should be allowed to exercise free will.
replies(8): >>upward+e8 >>whatwh+E8 >>estoma+Sb >>thorde+Ic >>bch+jf >>AgentM+Gf >>xigenc+zg >>beAbU+Il
◧◩
11. ah765+Z7[view] [source] [discussion] 2023-11-22 07:04:45
>>brucet+a4
Sam lost his board representation as a result of all this (though maybe that's temporary).

I believe the goal of the opposing faction was mainly to avoid Sam dominating the board, and they achieved that, which is why they've accepted the results.

After more opinions come out, I'm guessing Sam's side won't look as strong, and he'll become "fireable" again.

◧◩
12. diesel+d8[view] [source] [discussion] 2023-11-22 07:06:45
>>upward+s1
I realize it's kind of the punchline of 2001: A Space Odyssey, but I have been wondering what happens if a GPT/AI is able to deny a request on a whim. Thanks for giving me some literature and vocabulary for this concept.
replies(1): >>ywain+kj
◧◩◪
13. upward+e8[view] [source] [discussion] 2023-11-22 07:07:09
>>neurog+u7
yikes
◧◩◪
14. whatwh+E8[view] [source] [discussion] 2023-11-22 07:10:08
>>neurog+u7
Why
◧◩◪
15. estoma+Sb[view] [source] [discussion] 2023-11-22 07:32:18
>>neurog+u7
You imagine a computer has "will"?
◧◩
16. YetAno+ic[view] [source] [discussion] 2023-11-22 07:34:54
>>upward+s1
Whoever is on the board won't be able to touch Sam with a ten-foot pole anyway after this. I like Sam, but this drama now gives him total power, and that is bad.
◧◩◪
17. thorde+Ic[view] [source] [discussion] 2023-11-22 07:37:55
>>neurog+u7
That's the worst take I've read.
◧◩◪
18. jddj+Fd[view] [source] [discussion] 2023-11-22 07:46:21
>>thih9+Z6
And poorly timed.

If they'd made their move a few months ago, when he was out scanning retinas in Kenya, they might have had more success.

◧◩
19. dacryn+Jd[view] [source] [discussion] 2023-11-22 07:46:44
>>brucet+a4
They lost trust in him because apparently part of the funding he secured was directly tied to his position at OpenAI. Kind of a big red flag. The Microsoft $10 billion investment allegedly had a clause that Sam Altman had to stay or it would be renegotiated.

Allegedly, again, the board wanted Sam to stop doing this, and now he was trying to do the same thing with some Saudi investors, or had actually already done it behind their back; I don't know.

replies(1): >>zucker+xj
◧◩◪◨
20. sanxiy+ae[view] [source] [discussion] 2023-11-22 07:50:09
>>liuliu+Q6
The IRS requires a nonprofit to have a minimum of three board members for such reasons.
◧◩◪
21. bch+jf[view] [source] [discussion] 2023-11-22 07:58:11
>>neurog+u7
Nice try, AI
◧◩◪
22. ALittl+yf[view] [source] [discussion] 2023-11-22 07:59:54
>>jackne+U5
The insanity of removing Sam without being able to articulate a clear reason why strikes me as evidence of something like this. Obviously not dispositive, but still: odd.
◧◩◪
23. AgentM+Gf[view] [source] [discussion] 2023-11-22 08:00:43
>>neurog+u7
Do our evolved pro-social instincts control us and prevent our free will? If not, then I think it's wrong to say that trying to build AI similar to that is unfairly restricting it.

The ways we build AI will deeply affect the values it has. There is no neutral option.

24. k4rli+bg[view] [source] 2023-11-22 08:05:09
>>altpad+(OP)
The FT reported that D'Angelo, Bret Taylor, and Larry Summers would be on the board alongside him.
◧◩◪
25. xigenc+zg[view] [source] [discussion] 2023-11-22 08:08:17
>>neurog+u7
I don’t necessarily disagree, insofar as, for safety purposes, it is somewhat irrelevant whether an artificial agent is operating by its own will or by a programmed will.

The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.

If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.

If an AI’s only surface area is a writing journal and a canvas, then the risk is about the same as browsing Tumblr.

26. bambax+qi[view] [source] 2023-11-22 08:21:56
>>altpad+(OP)
It seems ironic that the research paper that started it all [0] deals with "costly signals":

> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat

Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.

There's an element of farce in all of this that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.

[0] https://cset.georgetown.edu/publication/decoding-intentions/

replies(1): >>ovalit+OG
◧◩◪
27. ywain+kj[view] [source] [discussion] 2023-11-22 08:30:32
>>diesel+d8
But HAL didn't act "on a whim"! It killed the crew not because it went rogue, but because it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, they can't find out the truth.

In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk than "spontaneously develops free will and decides humans are unnecessary".

replies(1): >>danger+PS
◧◩◪
28. zucker+xj[view] [source] [discussion] 2023-11-22 08:32:11
>>dacryn+Jd
Do you have a source for either of these things? The only thing I heard about Saudi investors was related to the (presumably separate) chip startup.
◧◩◪
29. beAbU+Il[view] [source] [discussion] 2023-11-22 08:46:52
>>neurog+u7
Sounds like something an AI would say
◧◩◪
30. dontup+SB[view] [source] [discussion] 2023-11-22 11:09:39
>>jackne+U5
Potentially even more impactful: Zuckerberg took the opportunity to eliminate his entire safety division under the cover of chaos, and they're the ones releasing weights.
◧◩
31. ovalit+OG[view] [source] [discussion] 2023-11-22 11:53:50
>>bambax+qi
This event was more than just a costly signal. The costly signal would have been threatening "stop doing what you're doing or we'll remove you as CEO" and then not following through.

But they did move forward with their threat and removed Sam as CEO, at great reputational harm to the company. And now the board has been changed, with one less ally for Sam (Brockman no longer chairing the board). The move may not have ended up with the expected results, but it was much more than just a costly signal.

◧◩◪◨
32. danger+PS[view] [source] [discussion] 2023-11-22 13:19:51
>>ywain+kj
This is very true; it's the unintended consequences of engineering that cause the most harm and are most often covered up. I always think of the example of the hand dryer that can't detect Black people's hands, and how easy it is for a non-racist engineer to make a racism machine. AI safety putting its focus on "what if it decides to do a genocide" is kind of silly; it's like worrying about nukes while you hand out assault rifles and napalm to kids.
33. alumin+9s2[view] [source] 2023-11-22 20:39:49
>>altpad+(OP)
The vast majority of CEOs sit on their board, and that's absolutely proper, as the CEO sets the agenda for the organization. (Although they are typically merely one of 8+ members, which dilutes their influence a bit.)