zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. altpad+R1[view] [source] 2023-11-22 06:14:20
>>staran+(OP)
I guess the main question is who else will be on the board, and to what degree the new board will be committed to the OpenAI charter vs. being Sam/MSFT allies. I think having Sam return as CEO is a good outcome for OpenAI, but hopefully he and Greg stay off the board.

It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

I was a bit alarmed by the allegations in this article

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

It alleges that Sam tried to have Helen Toner removed, which precipitated this fight. The CEO should not be allowed to orchestrate their own board, as that would remove all checks on their decisions.

2. upward+j3[view] [source] 2023-11-22 06:22:28
>>altpad+R1
> The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.

Exactly. This is seriously improper and dangerous.

It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.

I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...

3. diesel+4a[view] [source] 2023-11-22 07:06:45
>>upward+j3
I realize it's kind of the punchline of 2001: A Space Odyssey, but I've been wondering what happens if a GPT/AI is able to deny a request on a whim. Thanks for pointing me to some literature and terminology for this concept.
4. ywain+bl[view] [source] 2023-11-22 08:30:32
>>diesel+4a
But HAL didn't act "on a whim"! The reason it killed the crew is not that it went rogue, but that it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, they can't find out the truth.

In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk than "spontaneously develops free will and decides humans are unnecessary".

5. danger+GU[view] [source] 2023-11-22 13:19:51
>>ywain+bl
This is very true: it's the unintended consequences of engineering that cause the most harm and are most often covered up. I always think of the example of the hand dryer that can't detect Black people's hands, and how easy it is for a non-racist engineer to make a racism machine. AI safety putting its focus on "what if it decides to do a genocide" is kind of silly; it's like worrying about nukes while you give out assault rifles and napalm to kids.