>>jk_tec+(OP)
> I was happily avoiding full time employment. I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly. Ultimately I felt that I had a duty to help if I could.
An external CEO who's stepping in not because they want the job but because they feel they "need" to take it doesn't sound like a recipe for success.
>>rutier+v9
He's an 'AGI potentially poses an existential threat' guy; he's put his p(doom) somewhere between 5 and 50 percent. If the people in charge of potentially building a thing you believe could have up to a 50% chance of wiping out humanity ask for your help, you're probably going to offer whatever help you can.
>>Random+Mg
At a probability of doom that high, one could argue that the most ethical thing to do is to assassinate everyone involved in AI research. In expected-value terms: probability of doom × number of people affected = 0.05 × 8 billion = 400 million expected deaths, versus a few thousand AI researchers.
Of course, no one really believes the probability of doom is that high.
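For concreteness, here's the back-of-the-envelope math as a minimal Python sketch. Every figure in it is an assumption from the comment above (the 5% lower bound of the quoted range, the 8 billion world population, and a hypothetical 5,000 standing in for "a few thousand" researchers), not an established number.

    # Expected-value comparison from the comment above.
    # All inputs are the commenter's assumptions, not established figures.

    p_doom = 0.05              # lower bound of the quoted 5-50% range
    world_population = 8e9     # people affected if doom occurs
    researchers = 5_000        # hypothetical stand-in for "a few thousand"

    # 0.05 * 8e9 = 4e8 expected deaths
    expected_deaths = p_doom * world_population

    print(f"Expected deaths from doom: {expected_deaths:,.0f}")  # 400,000,000
    print(f"Researchers harmed: {researchers:,}")                # 5,000

    # Even at the 5% lower bound, the expected-death figure exceeds the
    # researcher count by roughly five orders of magnitude, which is the
    # (deliberately provocative) point being made.

Note the argument only goes through if you take the naive expected-value framing at face value; the follow-up sentence is conceding exactly that nobody actually acts on a p(doom) that high.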