zlacker

Emmett Shear becomes interim OpenAI CEO as Altman talks break down

submitted by andsoi+(OP) on 2023-11-20 05:19:57 | 551 points 894 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only. [show all posts]
◧◩
16. d3nt+v[view] [source] [discussion] 2023-11-20 05:22:32
>>tentac+4
Tweet that confirms the story from a reputable Bloomberg reporter (Emily Chang): https://twitter.com/emilychangtv/status/1726468006786859101
38. d3nt+r1[view] [source] 2023-11-20 05:28:12
>>andsoi+(OP)
More details in this tweet thread: https://twitter.com/ashleevance/status/1726469283734274338
42. jart+x1[view] [source] 2023-11-20 05:28:45
>>andsoi+(OP)
So long Mira Murati. That's the second CEO OpenAI has ousted in one week. https://openai.com/blog/openai-announces-leadership-transiti...
44. flylib+D1[view] [source] 2023-11-20 05:29:03
>>andsoi+(OP)
Seems like he liked anti-sama stuff as well

https://x.com/moridinamael/status/1725893666663768321?s=46

◧◩
49. reduce+Y1[view] [source] [discussion] 2023-11-20 05:30:37
>>thatsa+11
Bad news for you: their last CEO was too. Sam Altman signed a statement saying "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Looks like you're not on the same team as any of them, but you do have crypto king Marc Andreessen.

https://twitter.com/robbensinger/status/1726039794197872939

◧◩◪
59. seanhu+l2[view] [source] [discussion] 2023-11-20 05:32:35
>>mupuff+5
There are investors including Sequoia, Tiger Global and Microsoft[1]. They may well have standing to sue.

[1] Sorry - the full list is behind a paywall https://www.crunchbase.com/organization/openai/company_finan...

96. lysecr+P3[view] [source] 2023-11-20 05:40:28
>>andsoi+(OP)
This is the guy https://www.youtube.com/watch?v=cw_ckNH-tT8
◧◩◪◨
98. toomuc+R3[view] [source] [discussion] 2023-11-20 05:40:37
>>nextwo+S2
Microsoft owns the GPUs and has rights to continue operating current models. The only value OpenAI had was cohesion of engineering talent as it relates to model development velocity.

https://news.microsoft.com/source/features/ai/openai-azure-s...

https://www.semafor.com/article/11/18/2023/openai-has-receiv...

> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

> That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.

> One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.

> Microsoft has certain rights to OpenAI’s intellectual property so if their relationship were to break down, Microsoft would still be able to run OpenAI’s current models on its servers.

◧◩◪◨
108. mupuff+64[view] [source] [discussion] 2023-11-20 05:41:58
>>seanhu+l2
I believe those are investors in the for-profit organization, while the board is responsible for the non-profit org, and I believe the for-profit is bound to the mission of the non-profit but not the other way around.

But I'm sure someone here knows the legal structure better than I do; I just quickly skimmed over

https://openai.com/our-structure

◧◩
176. seanhu+y6[view] [source] [discussion] 2023-11-20 05:55:57
>>valine+v3
No, he didn't fire Sam over AI safety concerns. That's completely made up by people in the twittersphere. The only thing we know is that the board said the reason was that he lied to the board. The Guardian[1] reported that he was working on a new startup and that staff had been told it was due to a breakdown in communication and not to do with anything regarding safety, security, malfeasance or a bunch of other things.

[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...

◧◩
188. ignora+27[view] [source] [discussion] 2023-11-20 05:59:22
>>valine+v3
> Did he really fire Sam over "AI safety" concerns? How is that remotely rational.

Not rational iff (and unlike Sutskever, Hinton, Bengio) you are not a "doomer" / "decel". Ilya's very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.

(like LeCun, I am not a doomer; but I am also not Hinton to know any better)

204. pcbro1+G7[view] [source] 2023-11-20 06:04:01
>>andsoi+(OP)
Andrej Karpathy: (radioactive emoji)

https://twitter.com/karpathy/status/1726478716166123851

Going nuclear?

◧◩
210. singul+P7[view] [source] [discussion] 2023-11-20 06:04:49
>>valine+v3
To shine some light on the true nature of the "AI safety tribe", I highly recommend reading the other top HN post/article: https://archive.is/Vqjpr
◧◩◪◨
216. reduce+38[view] [source] [discussion] 2023-11-20 06:05:39
>>ryanSr+z6
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signed by Sam Altman, Ilya Sutskever, Yoshua Bengio, Geoff Hinton, Demis Hassabis (DeepMind CEO), Dario Amodei (Anthropic CEO), and Bill Gates.

https://twitter.com/robbensinger/status/1726039794197872939

◧◩◪
225. thekev+o8[view] [source] [discussion] 2023-11-20 06:07:39
>>altdat+c3
Funny you should reference a nuclear bomb. This was 14 minutes after your post.

https://twitter.com/karpathy/status/1726478716166123851

◧◩◪◨⬒
233. jbgt+S8[view] [source] [discussion] 2023-11-20 06:10:02
>>hsrada+u7
Perhaps listen to this podcast instead.

https://www.matthewgeleta.com/p/joscha-bach-ai-risk-and-the-...

◧◩◪
236. frabcu+X8[view] [source] [discussion] 2023-11-20 06:10:23
>>seanhu+y6
The Atlantic article makes it pretty clear that the fast growth of the commercial business was giving Ilya too few resources and too little time to do the safety work he wanted to do: https://archive.ph/UjqmQ
◧◩◪◨⬒
268. alexga+6a[view] [source] [discussion] 2023-11-20 06:16:38
>>peanut+Z6
OpenAI's recruiting pitch was $5-10+ million/year in the form of equity. The structure of the grants is super weird by traditional big-company standards, but it was plausible enough that you could squint and call it the same. I'd posit that many of the people jumping to OpenAI are doing it for the cash and not the mission.

https://the-decoder.com/openai-lures-googles-top-ai-research....

284. Palmik+Ia[view] [source] 2023-11-20 06:20:36
>>andsoi+(OP)
Emmett is a self-proclaimed AI doomer. Given some of his tweets from the past, it's likely the board indeed wants someone who will take things slower: https://archive.is/tuf3s https://archive.is/sNvvs

This is probably not something you want to hear as a researcher who is motivated by pushing the frontiers of our capabilities, nor as a researcher who is motivated by pushing their compensation.

◧◩◪◨
285. ignora+Ka[view] [source] [discussion] 2023-11-20 06:20:43
>>sgregn+ta
https://archive.is/yjOmt
◧◩◪◨
287. comp_t+Na[view] [source] [discussion] 2023-11-20 06:20:53
>>polyga+V4
This view was totally divorced from reality. Sam literally laid out his own case for AI posing an extinction risk on his blog in 2015, before OpenAI was even founded: https://blog.samaltman.com/machine-intelligence-part-1

"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity."

◧◩
289. frabcu+Qa[view] [source] [discussion] 2023-11-20 06:21:12
>>jaimex+R1
It's a non-profit. It isn't a Silicon Valley company.

https://openai.com/our-structure

◧◩
310. 0xDEAF+Lb[view] [source] [discussion] 2023-11-20 06:27:44
>>mrcwin+A8
That's not obvious to me. Why did so many join OpenAI in the first place, given that its charter prioritizes humanitarian benefit over building as quickly as possible?

https://openai.com/charter

◧◩◪
330. anupam+pc[view] [source] [discussion] 2023-11-20 06:31:54
>>frabcu+Ea
But the board members of the non-profit company have been leveraging OpenAI's APIs in their other business endeavors -- like Adam's Poe chatbot for Quora. More details on the board's timeline:

https://loeber.substack.com/p/a-timeline-of-the-openai-board

Related read: https://www.techtris.co.uk/p/openai-set-out-to-show-a-differ...

◧◩◪◨⬒⬓
351. alex_y+kd[view] [source] [discussion] 2023-11-20 06:39:04
>>happyt+fc
This is how one answers if they actually intend to quit: https://x.com/gdb/status/1725667410387378559?s=46&t=Q5EXJgwO...

There’s nothing wrong with not following, it’s a brave and radical thing to do. A heart emoji tweet doesn’t mean much by itself.

◧◩◪◨
357. 0xDEAF+Ad[view] [source] [discussion] 2023-11-20 06:40:28
>>zombiw+k5
OpenAI was not founded to be a white hot startup: https://archive.is/Vqjpr
368. mfigui+Xd[view] [source] 2023-11-20 06:42:38
>>andsoi+(OP)
TheInformation: Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won’t Return

>Dozens of OpenAI staffers internally announced they were quitting the company Sunday night, said a person with knowledge of the situation, after board director and chief scientist Ilya Sutskever told employees that fired CEO Sam Altman would not return.

https://www.theinformation.com/articles/dozens-of-staffers-q...

◧◩◪
370. 0xDEAF+4e[view] [source] [discussion] 2023-11-20 06:43:27
>>threes+j2
On the other hand, if Kyle was ousted for pushing too hard too fast at Cruise, that seems like out of the frying pan into the fire. See https://archive.is/Vqjpr
386. frabcu+Be[view] [source] 2023-11-20 06:46:51
>>andsoi+(OP)
"I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down. If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead." - Emmett Shear, Sep 16, 2023

https://twitter.com/eshear/status/1703178063306203397?t=8nHS...

435. hn_thr+6g[view] [source] 2023-11-20 06:57:19
>>andsoi+(OP)
Kara Swisher has some additional details: https://twitter.com/karaswisher/status/1726477828072382480
◧◩◪
468. halfjo+Ah[view] [source] [discussion] 2023-11-20 07:07:19
>>Ludwig+x2
This is what happens...

https://twitter.com/gfodor/status/1725750082119811537

◧◩◪◨⬒⬓
485. ignora+3i[view] [source] [discussion] 2023-11-20 07:10:44
>>zxexz+9f
> hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models

Ilya is pretty serious about alignment (precisely?) due to his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)

◧◩◪
501. dundar+vi[view] [source] [discussion] 2023-11-20 07:14:00
>>hooand+7h
https://twitter.com/ashleevance/status/1726469283734274338

> So, here's what happened at OpenAI tonight. Mira planned to hire Sam and Greg back. She turned Team Sam over past couple of days. Idea was to force board to fire everyone, which they figured the board would not do. Board went into total silence. Found their own CEO Emmett Shear

Written by the person who broke the story at Bloomberg.

So it appears a single person on the board wanted the talks to bring him back, and nobody else. I think that's 1 against 3, but the point is that the board wasn't totally united (which is not surprising).

◧◩
519. 8bitch+Yi[view] [source] [discussion] 2023-11-20 07:16:19
>>huxflu+gh
The NYTimes has a SecureDrop tip submission system, which uses Tor (see details at https://www.nytimes.com/tips).
521. gzer0+3j[view] [source] 2023-11-20 07:16:38
>>andsoi+(OP)
@karpathy on Twitter:

I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.

https://twitter.com/karpathy/status/1726289070345855126

◧◩◪◨⬒⬓
522. upward+6j[view] [source] [discussion] 2023-11-20 07:16:46
>>idontw+ff
One route is if AI (not through malice but simply through incompetence) plays a part in a terrorist plan to trick the US and China or US and Russia into fighting an unwanted nuclear war. A working group I’m a part of, DISARM:SIMC4, has a lot of papers about this here: https://simc4.org
◧◩
541. elfbar+Jj[view] [source] [discussion] 2023-11-20 07:20:26
>>pcbro1+G7
His likes give you a pretty good idea of where he stands:

https://twitter.com/karpathy/likes

◧◩◪◨
548. didibu+5k[view] [source] [discussion] 2023-11-20 07:23:24
>>emoden+Qj
She tweeted it herself:

https://twitter.com/phuckfilosophy/status/145969637613300121...

https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩◪
556. seanhu+mk[view] [source] [discussion] 2023-11-20 07:24:48
>>cjbpri+Ob
There have been a few stories which sound like he may have had the opportunity to come back but that negotiations over board control etc (which is pretty unsurprising) broke down[1].

Even setting that aside for a second, that doesn't change my essential point that the board doesn't necessarily have all the autonomy it thinks it has. There are for sure repercussions to this - they may have to make concessions. Some of the seemingly committed funding may be unpaid and the donors may have the ability to invoke MAC clauses and similar to pull it. Even if that turns out not to be the case, the way this has played out will certainly affect decisions about future donations etc.

[1] https://www.theguardian.com/technology/2023/nov/20/sam-altma...

◧◩◪◨⬒⬓⬔⧯▣▦
604. astran+hm[view] [source] [discussion] 2023-11-20 07:38:31
>>avalys+Gj
The message is that if you do math in your head in a specific way involving Bayes' theorem, it will make you always right about everything. So it's not even quasi-religious: the good deity is probability theory and the bad one is evil computer gods.

This then causes young men to decide they should be in open relationships because it's "more logical", and then decide they need to spend their life fighting evil computer gods because the Bayes' theorem thing is weak to an attack called "Pascal's mugging" where you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

https://metarationality.com/bayesianism-updating

Bit old but still relevant.

◧◩◪◨⬒⬓
612. hsrada+wm[view] [source] [discussion] 2023-11-20 07:40:10
>>ryanSr+ob
> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.

But when you go beyond that, it becomes difficult to ignore trend lines.

Here's a detailed scenario breakdown of how it might come to be – https://www.dwarkeshpatel.com/p/carl-shulman

◧◩◪◨⬒⬓⬔⧯▣▦
614. Feepin+ym[view] [source] [discussion] 2023-11-20 07:40:16
>>avalys+Gj
As far as I can tell, any single noun that's capitalized sounds religious. I blame the Bible. However, in this case it's just a short-hand for the sequences of topically related blog posts written by Eliezer between 2006 and 2009, which are written to fit together as one interconnected work. (https://www.lesswrong.com/tag/sequences , https://www.readthesequences.com/)
◧◩◪◨⬒
615. pg_123+zm[view] [source] [discussion] 2023-11-20 07:40:19
>>alsodu+vj
https://archive.ph/Berx9
617. pg_123+Cm[view] [source] 2023-11-20 07:40:47
>>andsoi+(OP)
https://archive.ph/Berx9
◧◩◪◨⬒
620. upward+Im[view] [source] [discussion] 2023-11-20 07:41:22
>>lyu072+Vf
Ilya might be a believer in what Eliezer Yudkowsky is currently saying, which is that opacity is safer.

https://x.com/esyudkowsky/status/1725630614723084627?s=46

Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.

But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.

I don’t agree with him that opacity is safer, but he’s a brilliant guy. I personally only discovered the field of AI safety through his writings, in which I read about and agreed with the many ways he had thought of by which AGI can cause extinction, and I, along with another of my college friends, decided to heed his call for people to start doing something to avert potential extinction.

He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.

◧◩◪◨⬒⬓⬔⧯▣▦▧
629. Feepin+4n[view] [source] [discussion] 2023-11-20 07:43:42
>>astran+hm
> This then causes young men to decide they should be in open relationships because it's "more logical"

Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate, nor would that be a good idea, that you manually calculate these updates: humans are very bad at it.
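
For anyone outside that subculture, the "correct way" being referenced is just Bayes' theorem: a prior probability should shift by a specific amount given new evidence. With purely illustrative numbers (a 1% prior, a test with a 90% true-positive rate and a 5% false-positive rate), the update works out to:

    P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]
             = (0.90 × 0.01) / (0.90 × 0.01 + 0.05 × 0.99)
             ≈ 0.15

Even fairly strong evidence only moves a 1% prior to roughly 15%.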

> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
676. astran+Io[view] [source] [discussion] 2023-11-20 07:53:33
>>Feepin+4n
> The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.

No, there isn't a correct way to do anything in the real world, only in logic problems.

This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)

The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".

https://metarationality.com/probabilism

> Given that this didn't happen with anyone else

They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

And of course, they think evil computer gods are going to kill them.

◧◩◪◨
693. tempus+pp[view] [source] [discussion] 2023-11-20 07:58:33
>>vinter+mn
There are concrete benchmarks like “how good is it at answering multiple choice questions accurately” or “how good is it at producing valid code to solve a particular coding problem”.

There’s also a chatbot Elo ranking which crowd-sources model comparisons: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

GPT-4 is the king right now
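
For anyone curious how that leaderboard turns crowd votes into a ranking, here's a minimal Elo-update sketch (my own illustration, not lmsys's actual code; the 400-point scale, the K-factor of 32 and the 1000 starting rating are assumed conventional defaults):

    # One crowd vote: a user picks a winner between two anonymized models.
    def expected(r_a, r_b):
        # Win probability the Elo model assigns to model A over model B
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def update(r_a, r_b, a_won, k=32):
        e_a = expected(r_a, r_b)
        s_a = 1.0 if a_won else 0.0
        delta = k * (s_a - e_a)
        return r_a + delta, r_b - delta

    # Both models start equal; each head-to-head vote nudges the ratings.
    a, b = 1000.0, 1000.0
    a, b = update(a, b, a_won=True)   # a -> 1016.0, b -> 984.0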

698. karmas+yp[view] [source] 2023-11-20 07:59:00
>>andsoi+(OP)
Update:

Sam, Greg, and the OpenAI staffers who left are now joining Microsoft

https://twitter.com/satyanadella/status/1726509045803336122

◧◩◪◨
701. rrrrrr+Mp[view] [source] [discussion] 2023-11-20 08:00:33
>>ytoaww+cf
The Forbes article [1] from yesterday:

> The plan some investors are considering is to make the board consider the situation “untenable” through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors.

[1] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...

Edit: Looks like Microsoft just hired Sam outright: https://twitter.com/satyanadella/status/1726509045803336122

◧◩
719. pug_mo+er[view] [source] [discussion] 2023-11-20 08:06:43
>>pug_mo+Jn
4 minutes after that comment, Satya said Sam and some of his people are joining Microsoft!

https://twitter.com/satyanadella/status/1726509045803336122

◧◩◪◨⬒⬓⬔⧯
731. upward+ds[view] [source] [discussion] 2023-11-20 08:09:34
>>hurrye+ep
According to current nuclear doctrine, no, they won’t wait. The current doctrine is called Launch On Warning which means you retaliate immediately after receiving the first indications of incoming missiles.

This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.

Decide Under Attack has been shown by game theory to have equally strong deterrence as Launch On Warning, while also having a much much lower chance of accidental or terrorist-triggered war.

Here is the paper that introduced Decide Under Attack:

A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, Admiral James A Winnefeld, Jr.

https://carnegieendowment.org/2019/09/10/commonsense-policy-...

◧◩◪◨⬒⬓⬔⧯
736. upward+Es[view] [source] [discussion] 2023-11-20 08:11:22
>>justco+ko
Exactly. WarGames is very similar to a true incident that occurred in 1979, four years before the release of the film.

https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...

    In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.
◧◩◪◨⬒⬓⬔⧯▣▦
766. upward+6z[view] [source] [discussion] 2023-11-20 08:39:32
>>hurrye+9t
Agree wholeheartedly. Human skepticism of computer systems has saved our species from nuclear extinction multiple times (Stanislav Petrov incident, 1979 NORAD training tapes incident, etc.)

The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.

The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.

> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).

https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...

Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile-warning computer he operated if it had generated paragraphs of convincing text and several infographics as hallucinated “evidence” of the reality of the supposed inbound strike. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble. In that moral quandary he struggled for several minutes until, finally, he went with his gut and countermanded the system, which required disobeying the Soviet military’s procedures and should have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer’s rhetoric and graphics could have tipped this to 51-49 and thus caused our extinction.

◧◩◪◨⬒⬓
770. intell+rz[view] [source] [discussion] 2023-11-20 08:40:55
>>deerin+vm
Or ... cut out the middleman: Sam Altman and Greg Brockman are joining MS to start a new AI unit - https://twitter.com/satyanadella/status/1726516824597258569
◧◩◪◨⬒
772. Booris+Xz[view] [source] [discussion] 2023-11-20 08:43:09
>>xvecto+dp
Looks like you got your wish earlier than anyone would have expected: https://twitter.com/satyanadella/status/1726509045803336122
◧◩◪◨
774. lucubr+iB[view] [source] [discussion] 2023-11-20 08:49:11
>>mcv+Pv
This is very short and explains exactly what they want: https://openai.com/charter

I think it's pretty obvious after reading it why people who were really committed to that Charter weren't happy with the direction that Sam was taking the company.

784. carlos+pE[view] [source] 2023-11-20 09:05:40
>>andsoi+(OP)
Emmett announced a plan for the next 30 days https://x.com/eshear/status/1726526112019382275?s=46&t=VuyFi...
◧◩
785. sainez+ME[view] [source] [discussion] 2023-11-20 09:08:14
>>mcv+qv
I'm not sure what more information people need. The original announcement was pretty clear: https://openai.com/blog/openai-announces-leadership-transiti....

Specifically:

> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.

> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews; he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.

People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.

◧◩◪
794. dmix+CG[view] [source] [discussion] 2023-11-20 09:20:20
>>ssnist+Bo
And the new CEO wants to slow down AI development and is a Yudkowsky fan, which is another incentive to leave: https://x.com/drtechlash/status/1726507930026139651?s=46&t=
◧◩◪◨
806. dmix+2J[view] [source] [discussion] 2023-11-20 09:33:57
>>tsimio+3r
Now regardless, the new CEO Shear is also very much of the view that the current development of AI is dangerous (not just hypothetically in the future as AGI becomes more plausible), comparable to a nuclear weapon, and he wants to slow it down. This will definitely split researchers into camps and have plenty looking at the door.

https://x.com/amir/status/1726503822925930759?s=46&t=

◧◩◪◨⬒⬓⬔⧯
811. astran+IJ[view] [source] [discussion] 2023-11-20 09:38:42
>>xvecto+gA
Here's the new CEO expressing the common EA belief that (theoretical, world-ending) AI is worse than the Nazis, because once you show them a thought experiment that might possibly be true they're completely incapable of not believing in it.

https://x.com/eshear/status/1664375903223427072?s=46

◧◩◪◨⬒⬓
812. leobg+kK[view] [source] [discussion] 2023-11-20 09:42:50
>>Solven+Ys
Read what pg has to say about him. He named Altman as one of the top 5 most interesting founders of the last 30 years.

> startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

http://www.paulgraham.com/5founders.html

◧◩◪
825. mcv+lP[view] [source] [discussion] 2023-11-20 10:12:53
>>sainez+ME
That original announcement doesn't make it nearly as explicit as you're making it. It doesn't say what he lied about, and it doesn't say he's not on board with the mission.

Sounds like the firing was done to better serve the original mission, and is therefore probably a good thing. Though the way it's happening does come across as sloppy and panicky to me, especially since they already replaced their first replacement CEO.

Edit: turns out Wikipedia already has a pretty good write up about the situation:

> "Sutskever is one of the six board members of the non-profit entity which controls OpenAI.[7] According to Sam Altman and Greg Brockman, Sutskever was the primary driver behind the November 2023 board meeting that led to Altman's firing and Brockman's resignation from OpenAI.[30][31] The Information reported that the firing in part resulted from a conflict over the extent to which the company should commit to AI safety.[32] In a company all-hands shortly after the board meeting, Sutskever stated that firing Altman was "the board doing its duty."[33] The firing of Altman and resignation of Brockman led to resignation of 3 senior researchers from OpenAI."

(from https://en.wikipedia.org/wiki/Ilya_Sutskever)

826. anonzz+nQ[view] [source] 2023-11-20 10:20:18
>>andsoi+(OP)
Hopefully only interim

https://twitter.com/bindureddy/status/1726482338530693126

◧◩◪◨⬒⬓
839. morale+M61[view] [source] [discussion] 2023-11-20 12:15:25
>>clover+o9
Surprise surprise!

https://x.com/satyanadella/status/1726509045803336122?s=46

◧◩◪◨
846. feralo+lh1[view] [source] [discussion] 2023-11-20 13:20:57
>>bmitc+un
Yes. Well, it seems like it to me.

Here's more about Justin.tv co-founder Emmett Shear, the new interim CEO. It isn't paywalled. https://www.cnbc.com/2023/11/20/who-is-emmett-shear-the-new-...

◧◩◪◨⬒
859. toomuc+jM1[view] [source] [discussion] 2023-11-20 15:30:22
>>toomuc+R3
Citation: https://www.semianalysis.com/p/microsoft-swallows-openais-co... ("Microsoft Swallows OpenAI’s Core Team – GPU Capacity, Incentive Structure, Intellectual Property, OpenAI Rump State")
◧◩◪◨⬒⬓⬔⧯▣▦
867. LordDr+3D2[view] [source] [discussion] 2023-11-20 19:02:54
>>idontw+Vo
You could if you were educated enough in DNA synthesis and customer-service manipulation to do so, and were smart enough to figure out a novel RNA sequence based on publicly available data. I'm not; you're not. A superintelligence would be. The base assumption is that any superintelligence is smarter than us and can solve problems we can't. AI can already come up with novel chemical weapons thousands of times faster than us[1], and it's way dumber than we are.

And the Roomba isn't running the model; it's just storing a portion of the model for backup, or only running a fraction of it (very different from an iPhone trying to run the whole model). Instead, the proper model is running on the best computer from the Russian botnet it purchased using crypto it scammed from a Discord NFT server.

Once again, the premise is that AI is smarter than you or anyone else, and way faster. It can solve any problem that a human like me can figure out a solution for in 30 seconds of spitballing, and it can be an expert in everything.

[1]https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

◧◩◪◨⬒⬓⬔
888. rramad+lS4[view] [source] [discussion] 2023-11-21 09:27:17
>>leobg+kK
This was one of the stupidest things PG has said/written. Sam Altman's first company, "Loopt", was a Y Combinator startup, and hence I suspect PG was building him up for business reasons.

The rest is "Halo Effect" - https://en.wikipedia.org/wiki/Halo_effect

[go to top]