zlacker

Ilya Sutskever "at the center" of Altman firing?

submitted by apsec1+(OP) on 2023-11-18 02:40:37 | 416 points 485 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only [show all posts]
1. convex+D[view] [source] 2023-11-18 02:46:02
>>apsec1+(OP)
Followup tweet by Kara:

Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120

◧◩
6. reduce+s1[view] [source] [discussion] 2023-11-18 02:52:52
>>rcpt+a1
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩
7. crazyg+B1[view] [source] [discussion] 2023-11-18 02:53:49
>>LoganD+m1
"This twitter account" is Kara Swisher, probably the most well-known tech reporter working right now. She has known essentially everyone in the tech world for decades at this point. Her sources are not only going to be legit, she probably has more of them than literally anyone else in tech, so she can accurately corroborate information, or knock it down.

https://en.wikipedia.org/wiki/Kara_Swisher

11. convex+X1[view] [source] 2023-11-18 02:56:12
>>apsec1+(OP)
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."

Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

◧◩◪
46. convex+P4[view] [source] [discussion] 2023-11-18 03:16:40
>>minima+d3
LinkedIn says President, Chairman, & Co-Founder (Murati was the CTO). But from his interviews, he sounded more like the CTO.

And often like an individual contributor: "the feeling when you finally localize a bug to a small section of code, and know it's only a matter of time till you've squashed it"

https://twitter.com/gdb/status/1725373059740082475

"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...

◧◩◪◨
52. sainez+a5[view] [source] [discussion] 2023-11-18 03:18:58
>>morale+22
It seems you are conflating OpenAI the non-profit, with OpenAI the LLC: https://openai.com/our-structure
66. tkgall+Z5[view] [source] 2023-11-18 03:24:40
>>apsec1+(OP)
I didn’t have much sense of who Ilya Sutskever is or what he thinks, so I searched for a recent interview. Here’s one from the No Priors podcast two weeks ago:

https://www.youtube.com/watch?v=Ft0gTO2K85A

No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.

◧◩
91. toomuc+a7[view] [source] [discussion] 2023-11-18 03:33:08
>>cardin+f1
There is no way Sam doesn't have the street cred to do a raise and pull talent for a competitor. They made the decision for him.

(pleb who would invest [1], no other association)

[1] >>35306929

◧◩
92. riraro+l7[view] [source] [discussion] 2023-11-18 03:33:50
>>andrew+l5
Sorry, what do you mean by "unknown-to-anyone"?

Ilya is a co-founder of OpenAI, the Chief Scientist, and one of the best known AI researchers in the field. He has also been touring with Sam Altman at public events, and getting highlights such as this one recently:

https://youtu.be/9iqn1HhFJ6c

104. throwa+08[view] [source] 2023-11-18 03:39:18
>>apsec1+(OP)
These tweets from Kara Swisher align extremely closely with the following comment from pseudonymous Reddit user Anxious_Bandicoot126, posted 4 hours ago: https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...

> I feel compelled as someone close to the situation to share additional context about Sam and company.

> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

---

The entire Reddit thread is full of interesting posts from this apparently legitimate pseudonymous OpenAI insider talking candidly.

◧◩◪◨⬒
108. cscurm+l8[view] [source] [discussion] 2023-11-18 03:42:16
>>aidama+07
Sorry. Robust research says no. Remember, people thought Eliza was AGI too.

https://arxiv.org/abs/2308.03762

If it was really AGI, there wouldn't even be ambiguity and room for comments like mine.

◧◩◪
109. chipga+o8[view] [source] [discussion] 2023-11-18 03:43:00
>>Mistle+w6
It has basically been confirmed:

https://twitter.com/GaryMarcus/status/1725707548106580255

◧◩◪◨⬒
151. lossol+Ha[view] [source] [discussion] 2023-11-18 04:00:56
>>aidama+07
Well, if it's so smart then maybe it will learn to count finally someday.

https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...

◧◩◪
177. 36933+nc[view] [source] [discussion] 2023-11-18 04:14:17
>>k2xl+Ba
https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p...

> "im not at liberty to say, but im very close. i dont want to give to many details."

◧◩
179. riraro+rc[view] [source] [discussion] 2023-11-18 04:14:44
>>kolja0+58
Actually, I think this precisely gives credence to the theory that Sam was disingenuously proselytizing to gain power and influence, regulatory capture being one method of many.

As you say, Altman has been on a world tour, but he's effectively paying lip service to the need for safety when the primary outcome of his tour has been to cozy up to powerful actors, and push not just product, but further investment and future profit.

I don't think Sutskever was primarily motivated by AI safety in this decision, as he says this "was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." [1]

To me this indicates that Sutskever felt that Sam's strategy was opposed to the original mission of the nonprofit, and likely to benefit powerful actors rather than all of humanity.

1. https://twitter.com/GaryMarcus/status/1725707548106580255

◧◩
182. lotsof+Ac[view] [source] [discussion] 2023-11-18 04:15:12
>>Bjorkb+87
> and pissing off Microsoft by tanking their stock price

When did Microsoft’s stock price tank?

https://finance.yahoo.com/quote/MSFT/

◧◩◪◨⬒
185. convex+Oc[view] [source] [discussion] 2023-11-18 04:17:47
>>langit+t8
You're right: https://en.wikipedia.org/wiki/Greg_Brockman
201. justin+ud[view] [source] 2023-11-18 04:22:59
>>apsec1+(OP)
https://nitter.net/karaswisher/status/1725702501435941294
◧◩◪◨⬒
213. mvkel+te[view] [source] [discussion] 2023-11-18 04:30:08
>>fsckbo+sa
Not according to Kara Swisher it isn't.

https://x.com/maggienyt/status/1578074773174771712?s=46&t=k_...

I know it's convenient to dUnK on journalism these days, but this is Kara Fucking Swisher. Her entire reputation is on the line if she gets these little things wrong. And she has a hell of a reputation.

◧◩
214. smharr+ve[view] [source] [discussion] 2023-11-18 04:30:18
>>convex+X1
Some insider details that seem to agree with this: https://www.reddit.com/user/Anxious_Bandicoot126/
◧◩◪◨
226. lotsof+if[view] [source] [discussion] 2023-11-18 04:35:49
>>bushba+Bd
That does not qualify as tanking. Stock prices move that much all the time.

https://www.google.com/finance/quote/MSFT:NASDAQ

◧◩◪◨⬒⬓⬔⧯
232. lucubr+Gf[view] [source] [discussion] 2023-11-18 04:38:27
>>morale+7d
Kara's reporting on motive:

https://twitter.com/karaswisher/status/1725678074333635028?t...

Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...

Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...

Ilya felt that Sam was taking the company too far in the direction of profit seeking, more than was necessary just to get the resources to build AGI. Every bit of selling out puts more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him - I suspect Dev Day is the source of the board's accusation that Sam did not share with complete candour.

Ilya may also care more about AGI safety specifically than Sam does - that's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve that mission. Expelling him and re-affirming their commitment to the OpenAI charter is effectively accusing him of selling out.

For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

258. solard+Dh[view] [source] 2023-11-18 04:50:39
>>apsec1+(OP)
A joint statement just came out: >>38315309
265. kordan+Zh[view] [source] 2023-11-18 04:53:01
>>apsec1+(OP)
TL;DR – I believe Sam Altman's departure was orchestrated by none other than Microsoft CEO Satya Nadella.

Full details: https://x.com/KordanOu/status/1725736058233749559?s=20

◧◩◪◨
359. ALittl+wB[view] [source] [discussion] 2023-11-18 07:43:41
>>MVisse+5l
Actually, the genomes of viruses and bacteria do seem to be open. Here is an FTP server where you can download the genomes of a bunch of different pathogens.

https://ftp.ncbi.nih.gov/genomes/genbank/

372. ilaksh+xE[view] [source] 2023-11-18 08:10:22
>>apsec1+(OP)
https://www.bloomberg.com/news/articles/2023-11-18/openai-al...

https://archive.is/tCG3q

Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"

◧◩◪
416. justin+Yg1[view] [source] [discussion] 2023-11-18 13:17:19
>>mi3law+Lz
I listened to that and I'm pretty sure it was this [0] interview with the WSJ, Altman, and Mira Murati. If I'm wrong about that, well, it's still of interest given Mira Murati just took over running OpenAI.

[0] https://www.youtube.com/watch?v=byYlC2cagLw

◧◩◪◨⬒⬓⬔
419. justso+9j1[view] [source] [discussion] 2023-11-18 13:29:00
>>mlajto+pR
> Fuck Elop

It wasn't Elop who drove Nokia to the state it was in 2009. "Burning Platform" is from 2011.

>>35030334

420. boegel+rj1[view] [source] 2023-11-18 13:30:12
>>apsec1+(OP)
""" i love you all

I L Y A """

https://twitter.com/hellokillian/status/1725799674676936931

◧◩◪◨
435. toomuc+1J1[view] [source] [discussion] 2023-11-18 16:00:51
>>dragon+aw
Are you even innovating if you aren't defecting?

https://en.wikipedia.org/wiki/Fairchild_Semiconductor

https://en.wikipedia.org/wiki/Traitorous_eight

◧◩◪◨
439. fragme+tP1[view] [source] [discussion] 2023-11-18 16:38:13
>>steveB+wr1
It's critical to know: where are you located? Lowly new-grad engineers, as well as senior architects, can't be covered by non-competes in California, as long as the work is done on non-company hardware. It's a large part of why California is so big for tech, and the subject of a current front-page discussion.

>>38316870

◧◩◪◨⬒⬓⬔⧯▣
454. cscurm+P22[view] [source] [discussion] 2023-11-18 17:44:39
>>Camper+EF
The guy I replied to is claiming AGI:

>>38314733

"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "

455. amai+F32[view] [source] 2023-11-18 17:49:05
>>apsec1+(OP)
Somebody on HN saw this coming: >>36604501
◧◩◪◨⬒⬓
472. gnulin+fO2[view] [source] [discussion] 2023-11-18 22:08:26
>>neuron+Y92
It really is. Academia prioritizes quantity over quality - Western academia less so than Chinese, but it's a problem throughout. Peter Higgs (the physicist who predicted the Higgs boson) once talked about this: "Peter Higgs: I wouldn't be productive enough for today's academic system" https://www.theguardian.com/science/2013/dec/06/peter-higgs-...
◧◩◪◨
478. dr_dsh+1F3[view] [source] [discussion] 2023-11-19 03:21:16
>>dr_dsh+3r
https://arxiv.org/abs/2311.02462

On operationalizing definitions of AGI

[go to top]