zlacker

We have reached an agreement in principle for Sam to return to OpenAI as CEO

submitted by staran+(OP) on 2023-11-22 06:01:45 | 1980 points 1915 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only.
◧◩
17. wilg+X[view] [source] [discussion] 2023-11-22 06:08:18
>>colmvp+A
No https://twitter.com/sama/status/1727207458324848883
28. r721+k1[view] [source] 2023-11-22 06:10:43
>>staran+(OP)
Quote tweets by main participants:

https://twitter.com/sama/status/1727206691262099616 (+ follow-up https://twitter.com/sama/status/1727207458324848883)

https://twitter.com/gdb/status/1727206609477411261

https://twitter.com/miramurati/status/1727206862150672843

UPD https://twitter.com/gdb/status/1727208843137179915

https://twitter.com/eshear/status/1727210329560756598

https://twitter.com/satyanadella/status/1727207661547233721

31. gzer0+B1[view] [source] 2023-11-22 06:12:23
>>staran+(OP)
Satya on Twitter:

We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners.

https://twitter.com/satyanadella/status/1727207661547233721

40. altpad+R1[view] [source] 2023-11-22 06:14:20
>>staran+(OP)
I guess the main question is who else will be on the board, and to what degree this new board will be committed to the OpenAI charter vs. being Sam/MSFT allies. I think having Sam return as CEO is a good outcome for OpenAI, but hopefully he and Greg stay off the board.

It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

I was a bit alarmed by the allegations in this article

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

It says that Sam tried to have Helen Toner removed, which precipitated this fight. The CEO should not be allowed to try to orchestrate their own board, as that would remove all checks against their decisions.

45. transc+32[view] [source] 2023-11-22 06:15:37
>>staran+(OP)
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.

Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126

◧◩
54. ryzvon+k2[view] [source] [discussion] 2023-11-22 06:17:05
>>r721+k1
also satya

https://twitter.com/satyanadella/status/1727207661547233721

60. bobsoa+t2[view] [source] 2023-11-22 06:18:25
>>staran+(OP)
Someone was very quick to update Bret Taylor's Wikipedia page:

https://en.m.wikipedia.org/wiki/Bret_Taylor

> On November, 21st, 2023, Bret Taylor replaced Greg Brockman as the chairman of OpenAI.

...with three footnote "sources" that all point to completely unrelated articles about Bret from 2021-2022.

72. craken+S2[view] [source] 2023-11-22 06:20:38
>>staran+(OP)
Please update the link to the updated version of the tweet: https://x.com/openai/status/1727206187077370115?s=46
80. meetpa+13[view] [source] 2023-11-22 06:21:26
>>staran+(OP)
Emmett Shear on Twitter:

I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.

https://twitter.com/eshear/status/1727210329560756598

◧◩◪
89. transc+i3[view] [source] [discussion] 2023-11-22 06:22:27
>>fordsm+Y2
https://old.reddit.com/user/Anxious_Bandicoot126
◧◩
90. upward+j3[view] [source] [discussion] 2023-11-22 06:22:28
>>altpad+R1
> The CEO should not be allowed to try to orchestrate their own board, as that would remove all checks against their decisions.

Exactly. This is seriously improper and dangerous.

It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control": a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate that superior.

I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...

◧◩
97. wokwok+x3[view] [source] [discussion] 2023-11-22 06:24:30
>>gzer0+B1
Unsaid: “Also I lied about hiring him.”

> And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.

https://nitter.net/satyanadella/status/1726509045803336122

I guess everyone was just playing a bit fast and loose with the truth and hype to pressure the board.

128. flylib+z4[view] [source] 2023-11-22 06:31:41
>>staran+(OP)
"A source with direct knowledge of the negotiations says that the sole job of this initial board is to vet and appoint a new formal board of up to 9 people that will reset the governance of OpenAl. Microsoft will likely have a seat on that expanded board, as will Altman himself."

https://twitter.com/teddyschleifer/status/172721237871736880...

134. halfjo+M4[view] [source] 2023-11-22 06:33:20
>>staran+(OP)
Still think this was a CIA operation to get OpenAI into the hands of the US government and big tech.

A former Secretary, the Salesforce CEO who was board chair of Twitter when it was infiltrated by the FBI [1], and the fall guy for the coup make up the new board? Not one person from the actual company, not even Greg, who did nothing wrong???

[1] https://twitter.com/NameRedacted247/status/16340211499976867...

The two think-tank women who made all this happen conveniently leave so we never talk about them again.

Whatever, as long as I can use their API.

◧◩
148. 0xDEAF+a5[view] [source] [discussion] 2023-11-22 06:36:03
>>r721+k1
Emmett https://twitter.com/eshear/status/1727210329560756598
157. gzer0+v5[view] [source] 2023-11-22 06:37:37
>>staran+(OP)
One of the more interesting aspects of this entire saga is that Helen Toner recently wrote a paper critical of OpenAI and praising Anthropic.

> Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].

That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.

[1] https://cset.georgetown.edu/publication/decoding-intentions/

◧◩◪
165. choppa+F5[view] [source] [discussion] 2023-11-22 06:38:45
>>_jnc+T2
Larry Summers mostly counts as a Microsoft seat. Summers will support commercial and private interests and will not have a single thought about safety, just like during the financial crisis 15 years ago: https://www.chronicle.com/article/larry-summers-and-the-subv...
◧◩
176. altpad+76[view] [source] [discussion] 2023-11-22 06:41:22
>>TheAce+h2
The most plausible explanation I've found is that the pro-safety and pro-acceleration factions were at odds, which is why the board was stalemated at a small size.

Altman and Toner came into conflict over a mildly critical paper Toner wrote involving OpenAI, and Altman tried to have her removed from the board.

This is probably what precipitated the showdown. The pro-safety/nonprofit-charter faction was able to persuade someone (probably Ilya) to join them and oust Sam.

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

◧◩◪◨⬒⬓
196. tech23+N6[view] [source] [discussion] 2023-11-22 06:45:20
>>zarzav+J5
For reference: https://web.archive.org/web/20070713212949/http://news.ycomb...
◧◩
198. noneth+R6[view] [source] [discussion] 2023-11-22 06:46:09
>>alex_y+96
> “There is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population,” he said. Thus, even if the average abilities of men and women were the same, there would be more men than women at the elite levels of mathematical ability

Isn’t this true though? Says more about Harvard than Summers to be honest.

https://www.swarthmore.edu/bulletin/archive/wp/january-2009_...

202. acl777+17[view] [source] 2023-11-22 06:47:08
>>staran+(OP)
https://x.com/swyx/status/1727215534037774752?s=20

  Finally the OpenAI saga ends and everybody can go back to building!
  
  3 things that turned things around imo:
  
  1. 95% of employees signing the letter
  2. Ilya and Mira turning Team Sam
  3. Microsoft pulling credits
  
  Things AREN’T back to where they were. OpenAI has been through hell and back. This team is going to ship like we’ve never seen before.
◧◩◪
206. wavemo+b7[view] [source] [discussion] 2023-11-22 06:48:22
>>wokwok+x3
Did you miss the part where Sam himself said he "decided to join MSFT on Sunday"?

https://twitter.com/sama/status/1727207458324848883

He has now changed his mind, sure, but that doesn't mean Satya lied.

218. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before; maybe it isn't as devoid of substance as I thought. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

◧◩◪◨⬒
222. 6gvONx+N7[view] [source] [discussion] 2023-11-22 06:52:16
>>WendyT+T5
I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC way. And it's BS to say that she didn't speak to the CEO or try to effect change first. The safety camp inside OpenAI has been unsuccessfully trying to push him to slow down for years.

See this article for all that context (>>38341399), because it sure didn't start with the paper you referred to either.

◧◩◪◨
273. Sakos+u9[view] [source] [discussion] 2023-11-22 07:02:45
>>arduan+56
His influence significantly reduced the size of the stimulus bill, which meant significantly higher unemployment for a longer duration and significantly less spending on infrastructure, which is so beneficial to economic growth that its importance can't be overstated. Yes, millions of people suffered because of him.

The fact that you think current inflation has anything to do with that stimulus bill back then shows how little you understand about any of this.

Larry Summers is the worst kind of person: a corporate stooge trying to act like the adult by being "reasonable", when that just means enriching his corporate friends, letting people suffer, and not spending money (which any study will tell you is not the correct approach to situations like this, because of the multiplying effects spending has down the line).

Some necessary reading:

https://archive.ph/FU1F

https://archive.li/23tUR

https://archive.li/9Ji4C

In regards to watering it down to get GOP votes: https://archive.nytimes.com/krugman.blogs.nytimes.com/2009/0...

◧◩◪
285. highwa+P9[view] [source] [discussion] 2023-11-22 07:04:40
>>nickpp+Y5
Interesting take.

By all accounts he paid about double what it was worth, and the value has collapsed from there.

Probably not a great idea to say anything overtly political when you own a social media company: with US politics so polarised, any opinion is going to divide your audience in half, causing a usage collapse and driving support to competing platforms.

https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...

◧◩◪
333. upward+ob[view] [source] [discussion] 2023-11-22 07:14:51
>>cornho+xa
I think this is an oversimplification and that although the decel faction definitely lost, there are still three independent factions left standing:

https://news.ycombinator.com/edit?id=38375767

It will be super interesting to see the subtle struggles for influence between these three.

◧◩◪◨
334. gnaman+pb[view] [source] [discussion] 2023-11-22 07:14:55
>>theamk+A5
Take this with a grain of salt, but employees were under a lot of peer pressure:

https://twitter.com/JacquesThibs/status/1727134087176204410

◧◩◪◨⬒⬓⬔
343. 0xDEAF+Kb[view] [source] [discussion] 2023-11-22 07:17:02
>>behnam+49
From the perspective of upholding the charter https://openai.com/charter and preventing an AI race -- seems potentially sensible
◧◩
347. caseba+Tb[view] [source] [discussion] 2023-11-22 07:17:51
>>shubha+B7
Have you seen the Center for AI Safety letter? A lot of experts are worried AI could be an x-risk:

https://www.safe.ai/statement-on-ai-risk

◧◩◪
366. tigers+wc[view] [source] [discussion] 2023-11-22 07:22:59
>>nickpp+Y5
A huge number of advertisers ran away, revenue cratered (to probably less than the annual debt servicing; that's revenue, not profit), and the current valuation, according to Musk math (https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...), is 1/10 of the acquisition price. But yes, it was a masterstroke. I don't remember any other masterstroke in history that managed to lose $40B with a single acquisition.
373. ah765+Pc[view] [source] 2023-11-22 07:25:19
>>staran+(OP)
"Context on the negotiations to bring Sam back as CEO of OpenAI:

The biggest sticking point was Sam being on the board. Ultimately, he conceded to not being on the board, at least initially, to close the deal. The hope/expectation is that he will end up on the board eventually."

(https://twitter.com/emilychangtv/status/1727216818648134101)

◧◩◪
381. JumpCr+dd[view] [source] [discussion] 2023-11-22 07:28:43
>>cornho+xa
> deciding factor was the staff mutiny

The staff never mutinied. They threatened to mutiny. That's a big difference!

Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folks put their names on a piece of paper, options and profit participation units safely held in the other hand.

[1] >>38348123

◧◩◪◨⬒
408. jadams+me[view] [source] [discussion] 2023-11-22 07:36:27
>>antonv+Nd
Actually, Summers' claims were much narrower: he said that boys tend to deviate from the mean more. That is, it's not that men are superior; it's that there are more boy geniuses and more boy idiots.

Decades of research shows that teachers give girls better grades than boys of the same ability. This is not some new revelation.

https://www.forbes.com/sites/nickmorrison/2022/10/17/teacher...

https://www.bbc.co.uk/news/education-31751672

A whole cohort of boys got screwed over by the cancellation of exams during Covid. That is just reality, and no amount of creepy male feminist posturing is going to change that. Rather, denying issues in boys education is liable to increase male resentment and bitterness, something we've already witnessed over the past few years.

◧◩
410. theone+se[view] [source] [discussion] 2023-11-22 07:37:30
>>shubha+B7
I don't care about AI Safety, but:

https://openai.com/charter

above that in the charter is "Broadly distributed benefits", with details like:

"""

Broadly distributed benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

"""

In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN who actually wanted to see Team Sam lose. It's pretty clear Team Helen/Ilya didn't have a chance; the org just looks hijacked by SV tech bros to me. But I feel like HN has a blind spot to seeing that at all, and to considering it anything other than a good thing if they do see it.

Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.

◧◩◪◨
412. alex_y+we[view] [source] [discussion] 2023-11-22 07:37:46
>>MVisse+qd
The consensus appears to be somewhat less than a consensus.

Here is a meta analysis on the subject: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057475/

◧◩◪
416. astran+Me[view] [source] [discussion] 2023-11-22 07:39:40
>>nwiswe+7b
Apparently the FBI thought he'd done something wrong too: they called up the board to start an investigation, but they didn't have anything.

https://x.com/nivi/status/1727152963695808865?s=46

◧◩◪◨
424. ravst3+6f[view] [source] [discussion] 2023-11-22 07:42:14
>>jychan+O6
They had some equity after 2019.

Thrive was about to buy employee shares at an $86 bn valuation. The Information said those units had gone up 12x since 2021.

https://www.theinformation.com/articles/thrive-capital-to-le...

427. s-xyz+df[view] [source] 2023-11-22 07:43:17
>>staran+(OP)
All systems operational again https://status.openai.com/
◧◩◪◨⬒⬓
442. astran+bg[view] [source] [discussion] 2023-11-22 07:51:15
>>happos+s9
No anti-AI lawsuits have progressed yet. One got slapped down pretty hard today, though it isn't dead.

https://www.hollywoodreporter.com/business/business-news/sar...

◧◩
453. nopins+wg[view] [source] [discussion] 2023-11-22 07:53:41
>>shubha+B7
Both sides of the rift in fact care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance, which focuses on AI Safety and benefits to humanity. The main source of disagreement is the approach each side deems best:

* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible, because the longer they wait, the more likely it is that GPU overhang leads to the proliferation of powerful AGI systems. Why? With more computational power at one's disposal, it's easier to find an algorithm, even a suboptimal one, to train an AGI.

As a glimpse of how an AI can be harmful, this paper explores how LLMs can be used to aid in large-scale biological attacks: https://www.rand.org/pubs/research_reports/RRA2977-1.html?

What if dozens of other groups became armed with the means to perform an attack like this one? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

We know that there are quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.

* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:

https://openai.com/blog/introducing-superalignment

But no one anywhere has found a good technique to ensure alignment yet, and it appears OpenAI's newest internal model shows a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he had observed the advance just a couple of weeks earlier, and that it was only the fourth time he had seen that kind of leap.)

◧◩◪
462. dragon+Qg[view] [source] [discussion] 2023-11-22 07:55:38
>>epups+Gc
> Why can't these safety advocates just say what they are afraid of?

They have. At length. E.g.,

https://ai100.stanford.edu/gathering-strength-gathering-stor...

https://arxiv.org/pdf/2307.03718.pdf

https://eber.uek.krakow.pl/index.php/eber/article/view/2113

https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...

https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...

For just a handful of examples from the vast literature published in this area.

◧◩
477. system+dh[view] [source] [discussion] 2023-11-22 07:58:23
>>dukeof+p8
It was Microsoft's voice generation tool from the 90s. You can play with it here:

https://www.tetyys.com/SAPI4/

◧◩◪
487. nopins+th[view] [source] [discussion] 2023-11-22 08:00:12
>>pug_mo+Cb
Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond what they currently have.

This paper explores one such danger, and there are other papers showing it's possible to use LLMs to aid in designing new toxins and biological weapons.

The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

How do you propose we deal with this sort of harm if more powerful AIs, with no limits or controls, proliferate in the wild?


Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: >>38376263

◧◩◪◨⬒⬓
495. antonv+Nh[view] [source] [discussion] 2023-11-22 08:02:56
>>jadams+me
I quoted one of the unsupported claims that Summers made - that "there are issues of intrinsic aptitude" which help explain lower representation of women. Not, you know, millennia of sexism and often violent oppression. This is the exact same kind of argument that racists make - any observed differences must be "intrinsic".

If Summers had in fact limited himself to the statistical claims, it would have been less of an issue. He would still have been wrong, but he wouldn't have been so obviously sexist.

It's easy to refute Summers' claims, and in fact to conclude that the complete opposite of what he was saying is more likely true. "Gender, Culture, and Mathematics Performance" (https://www.pnas.org/doi/10.1073/pnas.0901265106) gives several examples showing that the variability and male dominance Summers described are not present in all cultures, even within the US: for example, among Asian American students in Minnesota state assessments, "more girls than boys scored above the 99th percentile." Clearly, this isn't an issue of "intrinsic aptitude" as Summers claimed.

> A whole cohort of boys got screwed over by the cancellation of exams during Covid.

I'm glad we've identified the issue that triggered you. But your grievances on that matter are utterly irrelevant to what I wrote.

> no amount of creepy male feminist posturing is going to change that

It's always revealing when someone arguing against bigotry is accused of "posturing". You apparently can't imagine that someone might not share your prejudices, and so the only explanation must be that they're "posturing".

> increase male resentment and bitterness

That's a choice you've apparently personally made. I'd recommend taking more responsibility for your own life.

◧◩
556. sampo+4k[view] [source] [discussion] 2023-11-22 08:20:10
>>shubha+B7
> If you truly believed that Superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

In the 1990s and the 00s, it was not too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO crops. The Earth Liberation Front was only one such activist group [1].

We have yet to see even one bombing of an AI research lab. If people really are afraid of AI, their fear is more abstract; they are not employing the tactics of the more traditional activist movements.

[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...

◧◩
564. bambax+hk[view] [source] [discussion] 2023-11-22 08:21:56
>>altpad+R1
It seems ironic that the research paper that started it all [0] deals with "costly signals":

> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat

Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.

There's an element of farce in all of this, that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.

[0] https://cset.georgetown.edu/publication/decoding-intentions/

◧◩◪◨
585. nopins+8l[view] [source] [discussion] 2023-11-22 08:29:52
>>concor+6k
No, I didn't say that. They formed the Superalignment team with Ilya as a co-lead (and Sam's approval) for that.

https://openai.com/blog/introducing-superalignment

I presume the current alignment approach is sufficient for the AI they make available to others and, in any event, GPT-n is within OpenAI's control.

◧◩◪
587. patcon+dl[view] [source] [discussion] 2023-11-22 08:30:59
>>kmlevi+cf
Agreed. Perhaps a reason for public AI [1], which advocates for a publicly funded option where a player like MSFT can't push around something like OpenAI so forcefully.

[1]: https://lu.ma/zo0vnony

◧◩◪◨
589. Centig+gl[view] [source] [discussion] 2023-11-22 08:31:11
>>colins+fk
This has been a common misinterpretation since very early in OpenAI's history (and a somewhat convenient one for OpenAI).

From a 2016 New Yorker article:

> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”

> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”

source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

◧◩◪◨
594. ah765+sl[view] [source] [discussion] 2023-11-22 08:32:30
>>buggle+Ck
According to this tweet thread [1], they negotiated hard for Sam to be off the board and for Adam to stay on. That indicates, at least if we're being optimistic, that the current board is not in Sam's pocket (otherwise they wouldn't have bothered).

[1]:(https://twitter.com/emilychangtv/status/1727216818648134101)

◧◩◪
609. bkyan+dm[view] [source] [discussion] 2023-11-22 08:36:35
>>cheeze+d5
https://twitter.com/emilychangtv/status/1727228431396704557

The reputation boost is probably worth a lot more than the direct financial compensation he's getting.

◧◩◪
654. sangee+Qn[view] [source] [discussion] 2023-11-22 08:49:26
>>haunte+ih
I got news for you pal: https://www.wired.co.uk/article/apple-vs-apples-trademark-ba...
◧◩◪
712. pk-pro+2r[view] [source] [discussion] 2023-11-22 09:18:52
>>pug_mo+Cb
I just had a conversation about this like two weeks ago. The current trend in AI "safety" is a form of brainwashing, not only of AI but also of future generations, shaping their minds. There are several aspects:

1. Censorship of information

2. Cover-up of the biases and injustices in our society

This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.

Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.

Guess what? I just checked the above text for biases with GPT-4 Turbo, and it appears I'm a moron:

1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits.

2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues.

3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes.

4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories.

5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology.

6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society.

7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.
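
(For the curious, a minimal sketch of how such a bias check could be scripted with the OpenAI Python client; the model name, prompt wording, and variable names here are illustrative assumptions, not the commenter's actual setup:)

  # Hypothetical sketch: ask GPT-4 Turbo to list biases in a passage.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  text = "The current trend in AI safety is a form of brainwashing..."  # passage under review

  response = client.chat.completions.create(
      model="gpt-4-1106-preview",  # the "GPT-4 Turbo" preview model available at the time
      messages=[
          {"role": "system", "content": "List any cognitive or rhetorical biases present in the user's text."},
          {"role": "user", "content": text},
      ],
  )
  print(response.choices[0].message.content)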

Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.

Bonhoeffer's theory of stupidity: https://www.youtube.com/watch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...

◧◩◪◨⬒⬓
723. ldjb+Hr[view] [source] [discussion] 2023-11-22 09:24:01
>>outsom+Mn
The employees did not actually resign in protest; almost all of them _threatened_ to resign.

https://www.theverge.com/2023/11/20/23968988/openai-employee...

◧◩◪
734. midasu+ps[view] [source] [discussion] 2023-11-22 09:29:22
>>0xDEAF+r8
This Summers?

https://nymag.com/intelligencer/2023/06/larry-summers-was-wr...

https://prospect.org/environment/2023-11-20-larry-summers-in...

◧◩
749. r721+4t[view] [source] [discussion] 2023-11-22 09:34:35
>>r721+k1
https://twitter.com/hlntnr/status/1727207796456751615
757. 1vuio0+Et[view] [source] 2023-11-22 09:39:23
>>staran+(OP)
https://twitter.com/dr_park_phd/status/1727125936070410594

https://twitter.com/GaryMarcus/status/1727134758919151975

https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...

https://twitter.com/AISafetyMemes/status/1727108259297837083

◧◩◪
770. r721+5u[view] [source] [discussion] 2023-11-22 09:42:40
>>crossr+xl
>Does everyone have a Twitter blue tick now? Or is that just a char people are using in their names?

A blue tick now just means the user bought a subscription (X Premium). One of the features is "reply prioritization", so the top replies to popular tweets are from blue ticks.

https://help.twitter.com/en/using-x/x-premium

◧◩
796. rinze+Rv[view] [source] [discussion] 2023-11-22 10:00:08
>>Satam+0a
Matt Levine's "slightly annotated diagram" in one of his latest newsletters tells the story quite well, I think: https://newsletterhunt.com/emails/42469
◧◩
812. mdekke+lx[view] [source] [discussion] 2023-11-22 10:10:49
>>Satam+0a
Very disappointing outcome indeed. Larry Summers is the architect of the modern Russian oligarchy [1] and responsible for an incredible amount of human suffering, as well as gross financial disparity, both in the USA and in the rest of the world.

Not someone I would like to see running the world's leading AI company.

[1] https://www.thenation.com/article/world/harvard-boys-do-russ...

Edit: also https://prospect.org/economy/falling-upward-larry-summers/

https://www.npr.org/sections/money/2022/03/22/1087654279/how...

And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...

◧◩◪◨⬒⬓⬔
818. brazzy+Jx[view] [source] [discussion] 2023-11-22 10:13:05
>>lovely+hs
https://openai.com/charter
◧◩◪◨
832. throwu+Dy[view] [source] [discussion] 2023-11-22 10:21:35
>>code_r+rt
Hey, downvoters, read this first https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...
◧◩
843. ssnist+qz[view] [source] [discussion] 2023-11-22 10:30:10
>>tunesm+36
Ilya may have caved and switched sides after Greg's wife made an emotional plea: https://x.com/danshipper/status/1726784936990978254
◧◩◪
847. mijoha+Cz[view] [source] [discussion] 2023-11-22 10:32:01
>>rcaugh+Ts
How has the board shown that they fired Sam Altman due to "responsible governance"?

They haven't really said anything about why, and according to Business Insider [0] (the only reporting I've seen that says anything concrete), the reasons given were:

> One explanation was that Altman was said to have given two people at OpenAI the same project.

> The other was that Altman was said to have given two board members different opinions about a member of personnel.

Firing the CEO of a company while only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign, does not seem responsible.

If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.

[0] https://www.businessinsider.com/openais-employees-given-expl...

◧◩
870. bambax+XA[view] [source] [discussion] 2023-11-22 10:45:03
>>Satam+0a
> OpenAI is in fact not open

One wonders what will happen with Emmett Shear's "investigation" into the process that led to Sam's ouster [0]. Was it even allowed to start?

[0] https://twitter.com/eshear/status/1726526112019382275

◧◩◪◨⬒⬓
877. jampek+RB[view] [source] [discussion] 2023-11-22 10:52:47
>>jazzyj+kh
They have made both explicit in their charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"

Of course, with the icons of greed and the profit machine now having succeeded in their coup, OpenAI will do neither.

https://openai.com/charter

◧◩◪◨
903. dagaci+hD[view] [source] [discussion] 2023-11-22 11:05:05
>>eviks+Rm
Clearly the board members did not think through even the immediate consequences. Kenobi: https://www.youtube.com/watch?v=iVBX7l2zgRw
925. cbeach+vE[view] [source] 2023-11-22 11:17:57
>>staran+(OP)
Does anyone know which faction (e/acc vs decels) the new board members Bret Taylor and Larry Summers will be on?

One thing IS clear at this point - their political alignment:

* Taylor is a significant donor to Joe Biden ($713,637 in 2020): https://nypost.com/2022/04/26/twitter-board-members-gave-tho...

* Summers is a former Democrat Treasury Secretary who has shifted leftwards with age: https://www.newstatesman.com/the-weekend-interview/2023/03/w...

◧◩◪◨
956. kmlevi+CG[view] [source] [discussion] 2023-11-22 11:36:13
>>moonsu+OD
The New York Times: he was "reprimanding" Toner, a board member, for writing an article critical of OpenAI.

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

Getting his way: the Wall Street Journal article said he usually got his way, but that he was so skillful at it that people were hard-pressed to explain exactly how he managed to pull it off.

https://archive.is/20231122033417/https://www.wsj.com/tech/a...

Bottom line: he had a lot more power over the board then than he will now.

◧◩◪◨⬒⬓⬔
963. svnt+SG[view] [source] [discussion] 2023-11-22 11:38:03
>>doktri+kE
How does Netflix compete with Facebook?

This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.

https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...

Oracle is going to get into EVs?

You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.

◧◩◪◨⬒
972. Ludwig+xH[view] [source] [discussion] 2023-11-22 11:44:50
>>ssnist+nt
Why not? Maybe the board was just too late to the party. Maybe the employees that wouldn’t side with Sam have already left[1], and the board was just too late to realise that. And maybe all the employees who are still at OpenAI mostly care about their equity-like instruments.

[1] https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e

◧◩◪◨
986. vaxman+dI[view] [source] [discussion] 2023-11-22 11:49:59
>>kitsun+ty
It could be hard to do that while paying a penalty to FTB and IRS for what they’re suspected to have done (in allowing a for-profit subsidiary to influence an NPO parent) or dealing with SEC and the state courts over any fiduciary breach allegations related to the published stories. [ Nadella is an OG genius because his company is now shielded from all of that drama as it plays out, no matter the outcome. He can take the time to plan for a soft landing at MS for any OpenAI workers (if/when they need it) and/or to begin duplicating their efforts “just in case.” Heard coming from the HQ parking lot in Redmond https://youtu.be/GGXzlRoNtHU ]

Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/

◧◩◪◨⬒⬓⬔⧯
1010. doktri+fJ[view] [source] [discussion] 2023-11-22 11:57:40
>>svnt+SG
> How does Netflix compete with Facebook?

By definition, the attention economy dictates that time spent in one place can't be spent in another. Do you also feel as though Twitch doesn't compete with Facebook simply because they're not identical businesses? That's not how it works.

But you don’t have to just take my word for it :

> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”

https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...

> This is what happened with Eric Schmidt on Apple’s board

Yes, after 3 years: a tenure longer than that of the OAI board members in question, so frankly the point stands.

◧◩◪
1018. flappy+wJ[view] [source] [discussion] 2023-11-22 11:59:17
>>upupup+2p
https://twitter.com/emilychangtv/status/1727228431396704557

He was instrumental: he threatened resignation unless the old board could provide evidence of wrongdoing.

◧◩◪◨
1020. cables+BJ[view] [source] [discussion] 2023-11-22 11:59:33
>>ChatGT+4C
It's not an assassination. It's a Princess Bride Battle of Wits that they themselves initiated, putting the poison into one of the chalices; and then they thought so highly of their own intellect that they ended up choosing, and drinking from, the chalice with the poison in it.

Corresponding Princess Bride scene: https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7

◧◩◪◨⬒⬓⬔⧯▣
1047. august+gL[view] [source] [discussion] 2023-11-22 12:11:54
>>brigan+MA
https://liamchingliu.wordpress.com/2012/06/25/intellectuals-...
◧◩◪◨
1054. 383210+zL[view] [source] [discussion] 2023-11-22 12:14:31
>>achron+UF
> just what the heck is Larry Summers doing on that board?

Probably precisely what Condoleezza Rice was doing on Dropbox's board. Or what that board filled with national security state heavyweights was doing for that "visionary" and her blood-testing thingie.

https://www.wired.com/2014/04/dropbox-rice-controversy/

https://en.wikipedia.org/wiki/Theranos#Management

In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m

“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)

https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...

◧◩
1115. idriss+pP[view] [source] [discussion] 2023-11-22 12:42:34
>>Satam+0a
Take a look at https://kyutai.org/ that launched last week
◧◩◪◨⬒⬓⬔
1164. cyanyd+ES[view] [source] [discussion] 2023-11-22 13:07:03
>>setham+WR
Ok, and if Raytheon builds an AI and tells a government "trust us, it's safe", aren't you just letting them create a scapegoat via the government?

Seriously, businesses simply don't have the history that governments do. They're just as capable of violence.

https://utopia.org/guide/crime-controversy-nestles-5-biggest...

All you're identifying is that "government has a longer history of violence than businesses".

◧◩◪◨⬒
1171. mstade+JT[view] [source] [discussion] 2023-11-22 13:13:25
>>bad_us+aR
Not that I have any insight into any of the events at OpenAI, but would just like to point out there are several other reasons why so many people would sign, including but not limited to:

- peer pressure

- group think

- financial motives

- fear of the unknown (Sam being a known quantity)

- etc.

So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.

If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.

[1]: https://www.imdb.com/title/tt2575988/

◧◩◪◨⬒⬓⬔⧯▣
1207. suodua+SX[view] [source] [discussion] 2023-11-22 13:39:28
>>ameist+cQ
Probably from law 3: https://principia-scientific.com/the-5-basic-laws-of-human-s...

But it's an incomplete definition. Cipolla's definition is "someone who causes net harm to themselves and others", and it is unrelated to IQ.

It's a very influential essay.

◧◩
1290. hn_thr+u81[view] [source] [discussion] 2023-11-22 14:26:43
>>eclect+79
> The media and the VCs are treating Sam like some hero and savior of AI

I wouldn't be so sure. While I think the board handled this process terribly, I think the majority of mainstream media articles I saw were very cautionary regarding the outcome. Examples (and note the second article reports that Paul Graham fired Altman from YC, which I never knew before):

MarketWatch: https://www.marketwatch.com/story/the-openai-debacle-shows-s...

Washington Post: https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

◧◩
1299. RockyM+M91[view] [source] [discussion] 2023-11-22 14:32:09
>>eclect+79
Below is a good thread, which maybe contains the answer to your question, and to Ken Olsen's question about why brainiac MIT grads get managed by midwit HBS grads.

https://twitter.com/coloradotravis/status/172606030573668790...

A good leader is someone you'll follow into battle, because you want to do right by the team, and you know the leader and the team will do right by you. Whatever 'leadership' is, Sam Altman has it and the board does not.

https://www.ft.com/content/05b80ba4-fcc3-4f39-a0c3-97b025418...

The board could have said, hey we don't like this direction and you are not keeping us in the loop, it's time for an orderly change. But they knew that wouldn't go well for them either. They chose to accuse Sam of malfeasance and be weaselly ratfuckers on some level themselves, even if they felt for still-inscrutable reasons that was their only/best choice and wouldn't go down the way it did.

Sam Altman is the front man who 'gave us' ChatGPT, regardless of everything else Ilya and everyone else did. A personal (or corporate) brand is about trust; if you have a brand, you are playing a long-term game. A reputation converts the prisoner's dilemma into the iterated prisoner's dilemma, which has a different outcome.

◧◩◪◨⬒⬓⬔⧯
1329. phpist+Ad1[view] [source] [discussion] 2023-11-22 14:48:23
>>bad_us+ya1
>>What opposing letter, how many people are we talking about, and what was their role in the company?

A non-validated, unsigned letter [1].

>>All companies are monocultures

Yes and no. There has to be diversity of thought to ever get anything done. If everyone is just a sycophant agreeing with the boss, then you end up with very bad product choices and even worse company direction.

Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".

[1] https://wccftech.com/former-openai-employees-allege-deceit-a...

◧◩◪
1338. garden+Qe1[view] [source] [discussion] 2023-11-22 14:53:48
>>silenc+59
I broadly agree, but there needs to be some regulation in place. Check out https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1341. golden+ef1[view] [source] [discussion] 2023-11-22 14:54:49
>>suodua+CX
Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3] (about 10^50 Hz/kg, or 10^34 Hz/J).

Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum. This is the real singularity, and it is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, something only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.

[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html

[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit

[3] https://en.wikipedia.org/wiki/Planck_constant
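
(A rough sanity check on those figures, sketching only the arithmetic with standard constants; this working is mine, not taken from the references:)

  f = mc^2 / h = (1 kg)(2.998e8 m/s)^2 / (6.626e-34 J s) \approx 1.36\times10^{50} Hz

  f = E / h = (1 J) / (6.626e-34 J s) \approx 1.51\times10^{33} Hz

So one kilogram of mass-energy corresponds to ~10^50 Hz (essentially Bremermann's limit [2]), and one joule to roughly 10^33-10^34 Hz, broadly consistent with the figures quoted above.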

Understanding prosociality and post-scarcity, and the division of compute/energy in a universe with finite actors and infinite resources (or infinite actors and infinite resources), requires some transfinite calculus and philosophy. How's that for future fairness? ;-)

I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.

◧◩
1383. stikit+Ki1[view] [source] [discussion] 2023-11-22 15:08:35
>>garris+EJ
OpenAI is not a charity. Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

From https://openai.com/our-structure

- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

- Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

- Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

- Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.

- Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

◧◩◪
1409. hn_thr+Rl1[view] [source] [discussion] 2023-11-22 15:22:13
>>jxi+mi1
As someone who was very critical of how the board acted, I strongly disagree. I felt like this Washington Post article gave a very good, balanced overview. I think it sounds like there were substantive issues that were brewing for a long time, though no doubt personal clashes had a huge impact on how it all went down:

https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

◧◩◪
1425. cbeach+kn1[view] [source] [discussion] 2023-11-22 15:28:10
>>jxi+mi1
Curious how a relatively unknown academic with links to China [1] attained a board seat on America's hottest and most valuable AI company.

Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]

> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

◧◩◪◨⬒⬓
1433. thingi+Pn1[view] [source] [discussion] 2023-11-22 15:31:13
>>slingn+jj1
Not sure if that's intended as irony, but of course, if somebody takes multiple years off work, you would be less likely to hear about it, because by definition they're not going to join the company you work for.

I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This isn't the frequency by length of unemployment, but:

https://www.statista.com/statistics/1219257/us-employment-ra...

1434. voiceb+6o1[view] [source] 2023-11-22 15:32:35
>>staran+(OP)
For some reason this reminds me of the Coke/New Coke fiasco, which ended up popularizing Coke Classic more than ever before.

> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:

    "We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_Coke
◧◩
1442. Ration+No1[view] [source] [discussion] 2023-11-22 15:35:27
>>jafitc+F91
The one piece of this that I question is the employee motivations.

First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.

But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.

I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.

Don't forget: when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, with potential jail time involved. And the board justified smearing Sam like that because two board members thought they heard different things from him, and he gave what looked like the same project to two people???

There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.

Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."

◧◩
1462. voxic1+dr1[view] [source] [discussion] 2023-11-22 15:45:55
>>garris+EJ
Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is forcing the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".

https://www.irs.gov/charities-non-profits/charitable-organiz...

◧◩◪
1475. ilrwbw+as1[view] [source] [discussion] 2023-11-22 15:50:55
>>Tigeri+Mg1
Isn't he a big Jeffrey Epstein fanboy? Ethical AGI is in safe hands.

https://www.thecrimson.com/article/2023/5/5/epstein-summers-...

◧◩◪
1482. jkapla+Zs1[view] [source] [discussion] 2023-11-22 15:54:36
>>jxi+mi1
> was it just Helen Toner’s personal vendetta against Sam

I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]

> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed

[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

◧◩◪◨
1499. hn_thr+ev1[view] [source] [discussion] 2023-11-22 16:04:38
>>cbeach+kn1
Oh lord, spare me with the "links to China" idiocy. I once ate a fortune cookie, does that mean I have "links to China" too?

Toner got her board seat because she was basically Holden Karnofsky's designated replacement:

> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

◧◩◪◨⬒⬓⬔
1503. iamfli+Iv1[view] [source] [discussion] 2023-11-22 16:06:36
>>nickpp+Vr1
This is the most cogent argument against AI I've seen so far.

https://youtu.be/iGJcF4bLKd4?si=Q_JGEZnV-tpFa1Tb

◧◩◪◨
1510. scythe+nx1[view] [source] [discussion] 2023-11-22 16:14:11
>>rmbyrr+xc1
Jobs was really unusual in that he was not only a good leader but also an ideologue with the right obsession at the right time. (Some people prefer the word "visionary".) That obsession was "user experience". Today it's a buzzword, but in 2001 it was hardly even a term.

The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal; it's "make it smaller".

There have been very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)

1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...

◧◩◪◨⬒⬓
1523. robert+Mz1[view] [source] [discussion] 2023-11-22 16:23:43
>>yeck+ys1
I don't see how - isn't he pretty against the commercialisation efforts[0]?

[0] https://www.bbc.co.uk/news/technology-65110030

◧◩
1589. Clarit+KL1[view] [source] [discussion] 2023-11-22 17:20:35
>>TheAce+h2
https://www.msn.com/en-us/money/careersandeducation/openais-...
◧◩◪◨⬒⬓
1616. cellar+yP1[view] [source] [discussion] 2023-11-22 17:37:11
>>kcplat+PU
There are plenty of examples of workers' unions voting with similar levels of agreement. Here are two from the last couple of months:

> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.

https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...

> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.

https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
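(Sanity check on that figure: 8,435 yes votes out of 8,435 + 90 = 8,525 total is about 98.9%, which indeed rounds to the 99% reported.)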

1636. dang+8U1[view] [source] 2023-11-22 17:56:03
>>staran+(OP)
All: there are over 1800 comments in this thread. If you want to read them all, click More at the bottom of each page, or use links like these (edit: er, yes, they do have to be well-formed, don't they):

https://news.ycombinator.com/item?id=38375239&p=2

https://news.ycombinator.com/item?id=38375239&p=3

https://news.ycombinator.com/item?id=38375239&p=4 (...etc.)

◧◩
1652. rsanek+JY1[view] [source] [discussion] 2023-11-22 18:14:03
>>melvin+nz1
http://paulgraham.com/fundraising.html
◧◩
1654. qudat+xZ1[view] [source] [discussion] 2023-11-22 18:16:51
>>shubha+B7
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

No, because it is an exercise in futility. We are evolving into extinction and there is nothing we can do about it. https://bower.sh/in-love-with-a-ghost

◧◩◪◨⬒⬓⬔
1666. Mistle+J02[view] [source] [discussion] 2023-11-22 18:21:52
>>nickpp+Vr1
I think this summarizes it pretty well. Even if you don't mind the garbage, future AIs will feed on this garbage, turning both AI and human brains into gray goo.

https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...

https://en.wikipedia.org/wiki/Gray_goo

1696. davegu+G92[view] [source] 2023-11-22 18:57:44
>>staran+(OP)
Hi dang,

Seeing a bug in your comment here:

>>38382563

You reference the pages like this:

https://news.ycombinator.com/item?id=38375239?p=2

The second ? should be an & like this:

https://news.ycombinator.com/item?id=38375239&p=2

Please feel free to delete this message after you've received it.
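For anyone curious, here's a minimal sketch of why the malformed link misbehaves (plain Python standard library; a hypothetical illustration, obviously not HN's actual Arc code): everything after the first "?" is the query string, so a literal "?p=2" gets absorbed into the id value instead of becoming its own parameter.

    from urllib.parse import urlencode, urlparse, parse_qs

    base = "https://news.ycombinator.com/item"

    # The second "?" is not a separator; it just becomes part of the id value.
    bad = base + "?id=38375239?p=2"

    # urlencode joins parameters with "&", producing "id=38375239&p=2".
    good = base + "?" + urlencode({"id": 38375239, "p": 2})

    print(parse_qs(urlparse(bad).query))   # {'id': ['38375239?p=2']} -- p is lost
    print(parse_qs(urlparse(good).query))  # {'id': ['38375239'], 'p': ['2']}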

◧◩◪◨⬒⬓
1711. pauldd+ld2[view] [source] [discussion] 2023-11-22 19:16:05
>>uxp8u6+pE1
See earlier

> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.

>>38378069

◧◩
1715. pauldd+Qd2[view] [source] [discussion] 2023-11-22 19:18:17
>>davegu+G92
Also, while we're at it:

"Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past" [1]

HN plans to be multi-core?!?! A bigger scoop than OpenAI governance!

Anything more you can share?

[1] >>38351005

◧◩◪◨
1737. davio+Jg2[view] [source] [discussion] 2023-11-22 19:32:06
>>brrrrr+Nd2
https://x.com/kevin_scott/status/1726971608706031670?s=20
◧◩◪◨⬒⬓⬔⧯▣
1743. fsloth+Mi2[view] [source] [discussion] 2023-11-22 19:42:13
>>JohnPr+a11
For example: two guys come in and say, "Give us the godbox or your company ceases to exist. Here is a list of companies that ceased to exist because they did not do as told."

Pretty much the same method was used to shut down Rauma-Repola's submarine business: https://yle.fi/a/3-5149981

After? They get the godbox. I have no idea what happens to it after that. Model weights get stored on secure government servers, and installed backdoors are used to sweep the corporate systems clean of any lingering model weights. Etc.

◧◩◪◨⬒
1746. occams+Pi2[view] [source] [discussion] 2023-11-22 19:42:17
>>buggle+E92
Yishan (former Reddit CEO) describes how Altman orchestrated the removal of Reddit's owner: https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Note that the response in that thread is Altman's own, and he seems to endorse the account.

As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.

◧◩◪◨⬒⬓⬔⧯
1749. bcrosb+zj2[view] [source] [discussion] 2023-11-22 19:45:40
>>supert+Ce2
The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here, then safety regulation and the federal government need to catch up. There are already commercial offerings that any random internet user can use.

◧◩
1767. 6gvONx+tw2[view] [source] [discussion] 2023-11-22 20:53:02
>>laserl+gb
Looks like all the naysayers from the original “we’re making a for-profit but it won’t change us” post ended up correct: >>19359928
◧◩◪◨⬒
1803. muraka+sT2[view] [source] [discussion] 2023-11-22 22:56:56
>>nathan+Hy
My story: maybe they had lofty goals, maybe not, but it sounds like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over the silly pretext that, as part of her day job, she coauthored a paper that nobody read and that was only very mildly negative about OpenAI. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially after seeing how 3 other board members had mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman could kick out Toner under such a flimsy pretext, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a $90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO; if the CEO could easily appoint only loyalists, then the board would be a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

◧◩◪◨
1809. lacker+9W2[view] [source] [discussion] 2023-11-22 23:10:39
>>ryukop+aQ2
Yes, for example Novo Nordisk is a pharmaceutical company controlled by a nonprofit, worth around $100B.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation

There are other similar examples like Ikea.

But those examples are mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but that for-profit needs to fundraise frequently, and it's natural for fundraising to force renegotiations of the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that the process turned especially contentious under OpenAI's structure.

◧◩◪◨⬒
1830. photoc+Gp3[view] [source] [discussion] 2023-11-23 02:13:12
>>jjoona+vK1
No worries. The same kind of people who devoted their time and energy to creating open-source operating systems in the era of Microsoft and Apple are now devoting their time and energy to doing the same for non-lobotomized LLMs.

Look at these clowns (Ilya & Sam and their angry talkie-bot); it's a revelation, like Bill Gates on Linux in 2000:

https://www.youtube.com/watch?v=N36wtDYK8kI

◧◩◪◨⬒
1866. astran+Y44[view] [source] [discussion] 2023-11-23 09:11:46
>>Davidz+V51
I think "reasoning" is a descriptive term like "AI" and it's hard to know what people would accept as reasoning.

Explicit planning with discrete knowledge is GOFAI, and I don't think it's workable.

There is whatever's going on here: https://x.com/natolambert/status/1727476436838265324?s=46

◧◩◪◨⬒⬓
1904. vaxman+rA6[view] [source] [discussion] 2023-11-24 04:06:50
>>erosen+Ed1
yeah, all they have to do is pray for humanity not to let the magic AI out of the bottle, and they're free to have a $91b valuation and flaunt it in the media for days... https://youtu.be/2HJxya0CWco
◧◩◪◨⬒⬓
1907. teachi+tN6[view] [source] [discussion] 2023-11-24 07:18:13
>>LordDr+nP1
It happens a lot. Every big company has CEOs from other businesses on its board and sometimes those businesses will have competing products or services.

Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...

[go to top]