zlacker

OpenAI's board has fired Sam Altman

submitted by davidb+(OP) on 2023-11-17 20:28:50 | 5710 points 2450 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only. [show all posts]
1. minima+B[view] [source] 2023-11-17 20:31:24
>>davidb+(OP)
Saying this is sudden would be an understatement.

Sam Altman spoke at an APEC panel on behalf of OpenAI literally yesterday: https://twitter.com/LondonBreed/status/1725318771454456208

5. Leary+i1[view] [source] 2023-11-17 20:33:49
>>davidb+(OP)
Who was on OpenAI's board?

"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]

[1]https://openai.com/our-structure

◧◩◪
109. atlasu+h6[view] [source] [discussion] 2023-11-17 20:45:49
>>DylanB+03
His sister had levied allegations of abuse

https://www.themarysue.com/annie-altmans-abuse-allegations-a...

◧◩
113. spbaar+t6[view] [source] [discussion] 2023-11-17 20:46:28
>>strike+42
This is why the groupon CEO's firing letter remains undefeated

After four and a half intense and wonderful years as CEO of Groupon, I've decided that I'd like to spend more time with my family. Just kidding – I was fired today. If you're wondering why ... you haven't been paying attention.

https://www.theguardian.com/technology/blog/2013/mar/01/grou...

118. acheon+E6[view] [source] 2023-11-17 20:46:50
>>davidb+(OP)
Here’s why: https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
◧◩◪◨
123. wslh+S6[view] [source] [discussion] 2023-11-17 20:47:44
>>kyledi+L4
This? https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
◧◩◪◨
142. kccqzy+M7[view] [source] [discussion] 2023-11-17 20:51:42
>>atlasu+h6
It's clear that neither Sam nor his sister[0] wants to discuss this.

[0]: https://x.com/phuckfilosophy/status/1710371830043939122

◧◩◪
168. koolba+O8[view] [source] [discussion] 2023-11-17 20:55:21
>>baches+u4
I've had a positive opinion of sama as a human ever since this comment about him living with his two brothers well into their 30s: >>12592010

It's a corollary to my theory that anybody that maintains close ties with their family and lives with them is a wholesome person.

◧◩
181. magicl+y9[view] [source] [discussion] 2023-11-17 20:57:59
>>jborde+g1
Well, striking language indeed.

But.. what are the responsibilities of the board that may be hindered? I studied https://openai.com/our-structure

One tantalising statement in there is that an AGI-level system is not bound by the licensing agreements that a sub-AGI system would be (ostensibly to Microsoft).

This phase shift puts pressure on management not to declare that an AGI-level threshold has been reached. But have they?

Of course, it could be an ordinary everyday scandal but given how well they are doing, I'd imagine censure/sanctions would be how that is handled.

188. dang+I9[view] [source] 2023-11-17 20:58:22
>>davidb+(OP)
All: our poor single-core server process has smoke coming out its ears, as you can imagine.

I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)

I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:

https://news.ycombinator.com/item?id=38309611&p=2

https://news.ycombinator.com/item?id=38309611&p=3

https://news.ycombinator.com/item?id=38309611&p=4

https://news.ycombinator.com/item?id=38309611&p=5

Sorry! Performance improvements are inching closer...

◧◩◪◨⬒
192. Siddha+P9[view] [source] [discussion] 2023-11-17 20:58:41
>>lumost+Z7
Could it be the allegations by his sister??

https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩
196. cubefo+Z9[view] [source] [discussion] 2023-11-17 20:59:24
>>paxys+96
His sister claimed a while ago that he abused her when he was 13. However, she also claims other implausible things, and she isn't very mentally stable.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

207. mi3law+fa[view] [source] 2023-11-17 21:00:25
>>davidb+(OP)
My theory as a pure AGI researcher-- it's because of the AGI lies OpenAI was built on, largely due to Sam.

On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?

Sam finally admitted recently that for OpenAI to achieve AGI they "need another breakthrough," so my guess is it's this lie that cost him his sandcastle. I know as a researcher that OpenAI and Sam specifically were lying about AGI.

Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...

211. crater+ma[view] [source] 2023-11-17 21:01:09
>>davidb+(OP)
Related? https://www.themarysue.com/annie-altmans-abuse-allegations-a...

Flagged HN thread: >>37785072

216. elfbar+za[view] [source] 2023-11-17 21:01:54
>>davidb+(OP)
I wonder if this is related: https://x.com/growing_daniel/status/1725618106427683149?s=20

When I googled his name I saw the same cached text show up.

EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.

◧◩◪◨⬒
252. ignora+Zb[view] [source] [discussion] 2023-11-17 21:08:19
>>saliag+94
> We all know what. HN moderators are deleting all related comments. Edit: dang is right, sorry y'all

This from 2021? >>37785072

Bad if true, but highly unlikely that it is.

271. alvis+Pc[view] [source] 2023-11-17 21:12:31
>>davidb+(OP)
Not long ago, Ed Newton-Rex of Stability AI was also kinda forced to resign over the company's view that it is acceptable to use copyrighted work without permission to train its products. AI is really forcing us to face many realities :/

https://www.bbc.co.uk/news/technology-67446000

294. gigel8+vd[view] [source] 2023-11-17 21:14:50
>>davidb+(OP)
There were a bunch of flags popping up recently around Microsoft, including this: https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

And possibly related: the pause of ChatGPT Plus sign-ups due to capacity problems (which is all Azure afaik).

◧◩
304. YetAno+Wd[view] [source] [discussion] 2023-11-17 21:16:43
>>minima+B
Here is the video of him talking at yesterday's summit: https://www.youtube.com/watch?v=ZFFvqRemDv8

It doesn't look like he had a hint about this:

> I am super excited. I can't imagine anything more exciting to work on.

316. epivos+je[view] [source] 2023-11-17 21:18:20
>>davidb+(OP)
Manifold has some play-money markets about this - pure speculation of course, although traders there do take their profit somewhat seriously

https://manifold.markets/Ernie/what-will-sam-altman-be-doing...

And this tag contains all the markets about him https://manifold.markets/browse?topic=sam-altman

Will he end up at Grok? Why was he fired? etc.

321. optima+se[view] [source] 2023-11-17 21:18:54
>>davidb+(OP)
With downcast eyes and heavy heart, Eliezer left Sam Altman

Some years go by, and AGI progresses to assault man

Atop a pile of paper clips he screams "It's not my fault, man!"

But Eliezer's long since dead, and cannot hear Sam Altman.

--

Scott Alexander

https://astralcodexten.substack.com/p/turing-test

◧◩
326. AJayWa+He[view] [source] [discussion] 2023-11-17 21:19:34
>>elfbar+za
Looks like Google is incorrectly showing text from a tweet he replied to? https://twitter.com/sama/status/1717941041721139488
◧◩◪
328. mk89+Oe[view] [source] [discussion] 2023-11-17 21:20:17
>>duval+Fc
Source: https://twitter.com/sama/status/1717941041721139488
330. birrie+Re[view] [source] 2023-11-17 21:20:21
>>davidb+(OP)
This is highly speculative, but minute 18:46 in the DevDay presentation [0] struck me as very awkward. Sam's AGI comment seemed off-script, and I don't think Satya liked it very much.

[0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO

349. drawkb+yf[view] [source] 2023-11-17 21:23:41
>>davidb+(OP)
I wonder if it is related to this: [Microsoft briefly restricted employee access to OpenAI’s ChatGPT, citing security concerns](https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...)
◧◩
350. dmix+zf[view] [source] [discussion] 2023-11-17 21:23:52
>>zffr+Te
https://x.com/teddyschleifer/status/1725624511519666418?s=61...

Just something he retweeted long ago

◧◩◪◨
358. wslh+Mf[view] [source] [discussion] 2023-11-17 21:24:48
>>prepen+U4
I doubt they are financially hosed.

I don't know about the Skynet angle, since that already happened 26 years ago [1], but I imagine the NSA, the military, and other government agencies approached the company.

[1] https://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day

363. tallda+Sf[view] [source] 2023-11-17 21:25:17
>>davidb+(OP)
It was obvious Sam was a creep; people not in the tech world said he weirded them out when they saw him in interviews. If you give people that kind of feeling, it's for a reason.

Edit: I didn't even know he molested his sister when I wrote my post: https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩◪◨⬒⬓⬔
368. samspe+0g[view] [source] [discussion] 2023-11-17 21:25:48
>>nonfam+pb
Sam Altman had no equity in OpenAI https://www.cnbc.com/2023/03/24/openai-ceo-sam-altman-didnt-...

He confirmed it verbally as well in his May 2023 hearing in Congress https://twitter.com/thesamparr/status/1658554712151433219?la...

◧◩◪
377. goatfo+bg[view] [source] [discussion] 2023-11-17 21:26:36
>>geoffe+Y2
https://en.wikipedia.org/wiki/Garden_leave
◧◩◪
384. epolan+ng[view] [source] [discussion] 2023-11-17 21:27:24
>>rogerk+k4
https://en.m.wikipedia.org/wiki/Worldcoin
◧◩◪◨⬒
389. ignora+tg[view] [source] [discussion] 2023-11-17 21:27:35
>>strike+rc
Sam worked with Andrew Ng at Stanford on ML: https://twitter.com/AndrewYNg/status/1699808792047960540 / https://archive.is/pJiF7
◧◩◪◨
392. buffin+Dg[view] [source] [discussion] 2023-11-17 21:28:53
>>Bjorkb+b7
I don't use Twitter, nor do I really pay attention to Sam Altman, but the allegations of abuse are things I've seen covered.

Your use of "crazy abuse allegations" is strange to me as well. I hardly see any of her allegations as being "crazy".

Here's a collection of things she's said about the abuse.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

◧◩◪
396. davidm+Ng[view] [source] [discussion] 2023-11-17 21:29:32
>>ugh123+Z4
Her LinkedIn profile now 404's https://www.linkedin.com/in/tasha-mccauley-25475a54
◧◩
401. dang+1h[view] [source] [discussion] 2023-11-17 21:30:18
>>ape4+K9
We detached this subthread from >>38310168 .
◧◩
402. silenc+3h[view] [source] [discussion] 2023-11-17 21:30:28
>>minima+B
Right I just was watching a video of him a few minutes ago at Cambridge: https://www.youtube.com/watch?v=NjpNG0CJRMM

It was just posted but was filmed on November 1st.

◧◩◪◨⬒
403. benzib+4h[view] [source] [discussion] 2023-11-17 21:30:32
>>nostra+O9
"Shenanigans" would not be a remotely accurate way to characterize sexual assault on a minor. Not meant as a comment on the truth of these allegations, just on the accuracy of this way of characterizing them.

As far as whether this might be the cause, one possible scenario: the board hired a law firm to investigate, Sam made statements that were contradicted by credible evidence, and that was the fireable event. Brockman could have helped cover this up. Again, not saying that this is what happened but it's plausible.

BTW Rubin's $90M payout a) caused a shitstorm at Google b) was determined in part by David Drummond, later fired in part due to sexual misconduct. I would not use this as a representative example, especially since Google now has a policy against such payouts: https://www.cbsnews.com/news/andy-rubin-google-settlement-se...

◧◩◪◨⬒⬓⬔
405. codetr+9h[view] [source] [discussion] 2023-11-17 21:30:38
>>afro88+if
No.

> Many critics have called Worldcoin's business—of scanning eyeballs in exchange for crypto—dystopian and some have compared it to bribery.

https://time.com/6300522/worldcoin-sam-altman/

> market makers control 95% of the total circulating supply at launch, leading to an initial market imbalance.

https://beincrypto.com/worldcoin-wld-privacy-risk/

> Worldcoin’s use of biometric data, which is unusual in crypto, raises the stakes for regulators. Multiple agencies expressed safety concerns amid reports of the sale of Worldcoin digital identities, known as World IDs, on virtual black markets, the ability to create and profit off of fake IDs, as well as the theft of credentials for operators who sign up new users.

https://www.bloomberg.com/news/newsletters/2023-08-23/worldc...

◧◩
406. dang+ah[view] [source] [discussion] 2023-11-17 21:30:40
>>fabian+Z8
(I detached this from >>38309689 in a desperate attempt to prune the thread a bit)
◧◩
407. dnissl+bh[view] [source] [discussion] 2023-11-17 21:30:40
>>binary+pf
https://hn.algolia.com/ by default lists the most upvoted stories
408. fragme+fh[view] [source] 2023-11-17 21:31:09
>>davidb+(OP)
Discussion happening on swyx's twitter space now. https://twitter.com/i/spaces/1eaKbgMnDzoGX
411. nickle+nh[view] [source] 2023-11-17 21:31:34
>>davidb+(OP)
Pure speculation and just trying to connect dots... I wonder if they realized they are losing a lot of money on ChatGPT Plus subscriptions. Sam tweeted about pausing sign-ups just a few days ago: https://twitter.com/sama/status/1724626002595471740

Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?

◧◩◪◨
424. jbogga+Rh[view] [source] [discussion] 2023-11-17 21:33:13
>>brecke+le
https://twitter.com/growing_daniel/status/172561788305578426...

Given the sudden shift in billing terms that is quite possible.

427. turkus+1i[view] [source] 2023-11-17 21:34:02
>>davidb+(OP)
I think it could simply be a matter of vision. Sam just recently sounded more cautious and calculated than ever, possibly scaling down the expectations from the current state of his company's AI [1]. That might not have played well with the board, based potentially on his previous messaging to them.

[1] https://twitter.com/Andercot/status/1725300091450519927

432. tallda+9i[view] [source] 2023-11-17 21:34:27
>>davidb+(OP)
The dude molested his own sister. I think that's enough proof he's got moral issues and shouldn't be leading a company of this importance.

https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩◪
437. datafl+ki[view] [source] [discussion] 2023-11-17 21:35:22
>>JohnFe+ee
I'd never heard of that, but that definitely sounds shady. Thanks for mentioning it. To save people a search: https://en.wikipedia.org/wiki/Worldcoin
◧◩◪◨
443. tallda+si[view] [source] [discussion] 2023-11-17 21:35:40
>>wilg+If
https://twitter.com/phuckfilosophy/status/163570439893983232...
469. damian+mj[view] [source] 2023-11-17 21:39:23
>>davidb+(OP)
Relevant commentary:

https://www.youtube.com/watch?v=EUrOxh_0leE

◧◩◪◨⬒
475. hn_thr+xj[view] [source] [discussion] 2023-11-17 21:39:47
>>nostra+O9
> In general sexual shenanigans in your personal life will get you a quiet departure from the company under the "X has retired to spend more time with family / pursue other adventures / start a foundation".

Dude, where have you been for the past decade?

> Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.

And the colossal blowback that caused means it ain't ever happening again. Just two months ago a tech CEO was forced to resign immediately for egregious conduct, losing $100+ million in the process: https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-loses...

480. nikcub+Gj[view] [source] 2023-11-17 21:40:38
>>davidb+(OP)
Put the pieces together:

Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more

Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]

Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]

Nov 15 - OpenAI announce no new ChatGPT plus upgrades [1] but still allow regular signups (and still do)

Nov 17 - OpenAI fire Altman

Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.

This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]

Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.

Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.

edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.

[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

[1] https://twitter.com/sama/status/1724626002595471740

[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...

[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...

481. doerin+Jj[view] [source] 2023-11-17 21:40:48
>>davidb+(OP)
https://openai.com/our-structure Worth a read, in light of all this. An interesting tidbit that I bet is bouncing around his head right now:

  Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.
◧◩◪◨
494. owlnin+9k[view] [source] [discussion] 2023-11-17 21:42:18
>>davidm+Ng
She has changed it to just Tasha M now, odd!

https://www.linkedin.com/in/tasha-m-25475a54/

◧◩◪◨⬒⬓⬔⧯
499. nonfam+hk[view] [source] [discussion] 2023-11-17 21:42:44
>>samspe+0g
From https://openai.com/our-structure :

> Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

That word “directly” seems to be relevant here.

503. gzer0+nk[view] [source] 2023-11-17 21:43:13
>>davidb+(OP)
Eric Schmidt, former CEO of Google, has this to say:

https://x.com/ericschmidt/status/1725625144519909648?s=20

Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.

Making such a statement before knowing what happened (or maybe he does know what happened) makes this seem like it might not be as bad as we think?

529. kayceb+cl[view] [source] 2023-11-17 21:47:11
>>davidb+(OP)
My bet is that the last paragraph of the statement holds the key:

> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

This prompted me to actually read up on the charter: https://openai.com/charter

◧◩◪◨⬒
537. rivers+nl[view] [source] [discussion] 2023-11-17 21:48:09
>>gjsman+yi
You got me excited that Github Copilot was free. Was going to post to tell you it is, in fact, not free. I've been using Bing on Edge browser for a while now, it's super useful! Sad that they rebranded it to Copilot though, "I have been a good Bing :)" will be forever in my memory. [1] RIP Bing, you were a good chat mode.

[1] https://simonwillison.net/2023/Feb/15/bing/

541. g-w1+wl[view] [source] 2023-11-17 21:49:10
>>davidb+(OP)
There's a prediction market here about why he was fired: https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...
548. george+Nl[view] [source] 2023-11-17 21:50:45
>>davidb+(OP)
Sam Altman just tweeted: https://twitter.com/sama/status/1725631621511184771
◧◩
574. nopins+Hm[view] [source] [discussion] 2023-11-17 21:54:56
>>nikcub+Gj
OpenAI’s board previously consisted of 6 people, incl Sam Altman and Greg Brockman. Two of them are more involved in technical matters at OpenAI than Sam. Now there are only four members on the board.

At least one of those two must have jointly made this decision with the three outside board members. I'd say it's more likely to be business related. (In addition, the CTO has been appointed as the interim CEO.) (Edit: But obviously we don't really know yet. I think the whistleblower theory below is possible too.)

The announcement: https://openai.com/blog/openai-announces-leadership-transiti...

“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. …..

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“

Previous members: https://openai.com/our-structure

“Our board OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”

591. lysecr+dn[view] [source] 2023-11-17 21:57:49
>>davidb+(OP)
Comment from Eric Schmidt: https://twitter.com/ericschmidt/status/1725625144519909648
600. grapos+vn[view] [source] 2023-11-17 21:59:16
>>davidb+(OP)
Sama has posted to twitter now

See: https://twitter.com/sama/status/1725631621511184771

619. reset2+jo[view] [source] 2023-11-17 22:05:10
>>davidb+(OP)
CNBC Elon Open Ai: https://youtu.be/bWr-DA5Wjfw?feature=shared
◧◩
626. northe+zo[view] [source] [discussion] 2023-11-17 22:06:57
>>nikcub+Gj
From: https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."

So, if I were to speculate, it was because they were at odds over the profit/non-profit nature of OpenAI's future.

◧◩◪◨⬒⬓
630. nkurz+Ho[view] [source] [discussion] 2023-11-17 22:07:29
>>red-ir+Vm
>>37785072
◧◩◪◨⬒⬓
632. jug+Lo[view] [source] [discussion] 2023-11-17 22:07:38
>>red-ir+Vm
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

Sexual abuse by Sam when she was four years old and he 13.

Develops PCOS (which has seen some association with child abuse) and childhood OCD and depression. Thrown out. Begins working as sex worker for survival. It's a real grim story.

635. partia+Ro[view] [source] 2023-11-17 22:08:08
>>davidb+(OP)
A Bloomberg reporter is pointing out that his leaving YC perhaps wasn't scrutinized enough by the press, indicating this could be a pattern. https://twitter.com/EricNewcomer/status/1725633569056506282
656. ademeu+wp[view] [source] 2023-11-17 22:13:10
>>davidb+(OP)
In a panel yesterday, Sam implied OpenAI had a major breakthrough a few weeks ago:

"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".

https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s

This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.

Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and the governance structure of OpenAI certainly wouldn't be ready for it either it seems.

◧◩
660. tentac+Cp[view] [source] [discussion] 2023-11-17 22:13:59
>>nlh+x3
It's worth noting (though I'm not sure whether this is related), that Discord has announced that they're shutting down their ChatGPT-based bot[0], Clyde.

[0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-its-...

◧◩
677. mi3law+9q[view] [source] [discussion] 2023-11-17 22:16:38
>>ademeu+wp
On the contrary, the video you linked to is likely to be part of the lie that ousted Altman.

He's also said very recently that to get to AGI "we need another breakthrough" (source https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... )

To predicate a company as massive as OpenAI on a premise that you know not to be true seems like a big enough lie.

◧◩◪◨⬒⬓
693. Probio+Sq[view] [source] [discussion] 2023-11-17 22:20:55
>>red-ir+Vm
Sam Altman's sister says he sexually abused her when she was 4

https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩
699. ldjkfk+0r[view] [source] [discussion] 2023-11-17 22:21:18
>>nikcub+Gj
I'm trying to find the episode, but on the All in Podcast ~6 months ago, they made comments about how the corporate structure of OpenAI may have been a secret way for Sam Altman to hold a large stake in the company. I don't think this is privacy related, but that there was a shell game with the equity and the non profit status. If they were training on data like that, the board/people at the company would have known.

EDIT:

episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ,

"somebody has to own the residual value of the company, sam controls the non profit, and so the non profit after all equity gets paid out at lower valuations, owns the whole company. Sam altman controls all of open ai if its a trillion dollar valuation. Which if true would be a huge scandal"

◧◩◪
708. mi3law+xr[view] [source] [discussion] 2023-11-17 22:23:09
>>fallin+zd
They are completely premised on AGI, especially financially, down to their 100x capped for-profit structure: https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

That you did not know that does not give me confidence in the rest of your argument. Please do your research. There's a LOT of hype to see beyond.

◧◩◪
718. mi3law+Tr[view] [source] [discussion] 2023-11-17 22:25:12
>>paxys+dd
The statement he made about AGI needing another breakthrough was not scripted, so I don't think he was directed to make it. Watch it here: https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3705s

Altman has been at OpenAI since the beginning, and since the beginning OpenAI has been heavily premised on AGI/superintelligence.

◧◩◪◨
738. galley+zs[view] [source] [discussion] 2023-11-17 22:28:08
>>willdr+il
Here is the charter, you can read it for yourself. It's only about 500 words. https://openai.com/charter
743. gondol+Fs[view] [source] 2023-11-17 22:28:18
>>davidb+(OP)
Looks like OpenAI deleted their Privacy policy, the website returns 404: https://openai.com/de/policies/privacy-policy
◧◩
749. rockwo+Rs[view] [source] [discussion] 2023-11-17 22:29:19
>>gondol+Fs
https://openai.com/policies/privacy-policy
◧◩
762. zdenha+jt[view] [source] [discussion] 2023-11-17 22:31:22
>>gondol+Fs
That's a localization issue perhaps; see https://openai.com/policies/privacy-policy
◧◩◪◨
772. samspe+At[view] [source] [discussion] 2023-11-17 22:32:10
>>doerin+fl
> spin-off of some sketchy for-profit AI "university" deal called "Singularity University".

Wow, that university rings some bells https://en.wikipedia.org/wiki/Singularity_Group#Controversie...

"An investigative report from Bloomberg Businessweek found many issues with the organization, including an alleged sexual harassment of a student by a teacher, theft and aiding of theft by an executive, and allegations of gender and disability discrimination.[12] Several early members of Singularity University were convicted of crimes, including Bruce Klein, who was convicted in 2012 of running a credit fraud operation in Alabama, and Naveen Jain, who was convicted of insider trading in 2003.[12]

In February 2021, during the COVID-19 pandemic, MIT Technology Review reported that a group owned by Singularity, called Abundance 360, had held a "mostly maskless" event in Santa Monica ... The event, led by Singularity co-founder Peter Diamandis, charged up to $30,000 for tickets."

◧◩
773. mkl+Dt[view] [source] [discussion] 2023-11-17 22:32:21
>>gondol+Fs
https://openai.com/policies/privacy-policy

Looks like you're looking for a German one?

◧◩◪◨⬒⬓⬔⧯
775. Sebb76+Ht[view] [source] [discussion] 2023-11-17 22:32:52
>>xkqd+Dr
See the pinned comment: >>38310213
777. ddmma+Mt[view] [source] 2023-11-17 22:33:08
>>davidb+(OP)
In case you missed, Sam Altman & OpenAI | 2023 Hawking Fellow | Cambridge Union https://youtu.be/NjpNG0CJRMM?si=j-lOpQa0qbKxIvaA
778. doener+Ot[view] [source] 2023-11-17 22:33:11
>>davidb+(OP)
Wild rumors on Twitter: https://x.com/TraderLX/status/1725633352936595820?s=20
784. MKais+1u[view] [source] 2023-11-17 22:34:01
>>davidb+(OP)
The fake-it-until-you-make-it theory:

"Sam Altman was actually typing out all the chatgpt responses himself and the board just found out"

https://twitter.com/MattZeitlin/status/1725629795306774711

◧◩
819. MVisse+kv[view] [source] [discussion] 2023-11-17 22:39:53
>>epivos+je
Grok from Musk?

No lol: https://www.foxnews.com/media/elon-musk-hints-at-lawsuit-aga...

I wouldn't be surprised if Sam's leadership direction is related to the ousting.

◧◩◪
826. ademeu+Bv[view] [source] [discussion] 2023-11-17 22:40:55
>>mi3law+9q
Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could basically mean "we need XYZ relative to the previous product" or more specifically "we need another breakthrough than LLMs, and we recently made a major breakthrough unrelated to LLMs". I'm not saying that's the case but I don't think the signal-to-noise ratio in his answer is very high.

More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp

Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer, it's not his job to make the breakthrough, only to keep the lights on until they do if you take their mission literally.

Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?

As I said, I hope you're right, because the alternative is a lot scarier.

◧◩◪◨⬒⬓
841. miohta+4w[view] [source] [discussion] 2023-11-17 22:42:56
>>Sebb76+Us
Adam also managed to get almost half a billy worth of money out of SoftBank as a corporate loan for himself

https://finance.yahoo.com/news/softbank-takes-14b-hit-wework...

Adam is good at making people rich, but those people are not his investors.

◧◩◪
865. sebast+Zw[view] [source] [discussion] 2023-11-17 22:47:08
>>ldjkfk+0r
Parent comment is referring to Sept. 29th's Episode 147 [0], at 1 hour and 4 minutes in.

[0]: https://piped.video/watch?v=4spNsmlxWVQ&t=3866

◧◩
870. adl+5x[view] [source] [discussion] 2023-11-17 22:47:40
>>gkober+B2
Well, according to his sister, he used to molest her when he was 13, and she was 4, so...

https://twitter.com/phuckfilosophy/status/163570439893983232...

◧◩◪
884. DonHop+xx[view] [source] [discussion] 2023-11-17 22:49:26
>>PDSCod+ww
I can do anything I want with her - Silicon Valley S5:

https://www.youtube.com/watch?v=29MPk85tMhc

>That guy definitely fucks that robot, right?

That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.

https://en.wikipedia.org/wiki/Ben_Goertzel

https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...

>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:

VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:

https://www.youtube.com/watch?v=AKbltBLaFeI

◧◩
891. thraxs+Qx[view] [source] [discussion] 2023-11-17 22:50:38
>>weinzi+Uv
Haha I think it's a fake account: https://twitter.com/anothercohen
897. tomcam+Yx[view] [source] 2023-11-17 22:51:26
>>davidb+(OP)
The most credible proximate cause to me is his sister’s uncontested (by him) allegations of frequent sexual abuse when they were children.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

◧◩◪
918. tomcam+Oy[view] [source] [discussion] 2023-11-17 22:55:08
>>andrew+yp
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
◧◩◪
924. jefftk+7z[view] [source] [discussion] 2023-11-17 22:56:53
>>ugh123+Z4
I don't know, but she's also on the board of EVF UK [1], which is the largest effective altruism organization.

[1] https://ev.org/effective-ventures-foundation-uk/

◧◩◪◨⬒⬓⬔⧯
926. mylidl+bz[view] [source] [discussion] 2023-11-17 22:57:05
>>resolu+Xr
You can add suppression on HN to the list:

>>37785072

◧◩◪
939. h3h+Gz[view] [source] [discussion] 2023-11-17 22:59:38
>>nopins+Hm
From another post on their structure[1]

> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.

So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.

[1]: https://openai.com/blog/openai-lp

◧◩
952. I_am_t+kA[view] [source] [discussion] 2023-11-17 23:03:12
>>odood+Wm
I hope not but I guess it's not totally unrealistic, given they even attended a Bilderberg conference together. https://www.youtube.com/watch?v=iPis68U7bdo
975. doener+bB[view] [source] 2023-11-17 23:07:55
>>davidb+(OP)
Sam Altman's sister, Annie Altman, claims Sam has severely abused her

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

◧◩◪◨⬒
999. DonHop+kC[view] [source] [discussion] 2023-11-17 23:13:17
>>kristo+qt
I just posted a link to a Silicon Valley episode on youtube that implies that he fucks a robot, so let's see how that one goes... ;)

>>38311627

Quoting PDSCodes:

Turn that on it’s head - was he standing in the way of a commercial sale or agreement with Microsoft!

He may not be the villain.

But who knows, it feels like an episode of silicon valley!

◧◩◪◨
1001. jjuliu+qC[view] [source] [discussion] 2023-11-17 23:13:35
>>eigenv+ss
>He has a proven record of extreme accomplishment, in various domains, moreso than 99.9999% of people in the tech industry.

I don't really see anything[1] that suggests that this sentence is true. Now, I'm not saying that he hasn't been successful, but there's "successful" and then there's your hyperbole.

[1]https://en.wikipedia.org/wiki/Sam_Altman

◧◩
1033. mustac+ND[view] [source] [discussion] 2023-11-17 23:21:55
>>Viktor+ou
> A major event happens regarding ChatGPT related issues and the primary competitor of ChatGPT (Google Bard) already can talk to me about it in a couple hours… Meanwhile ChatGPT still thinks it’s 2021 heh

I think your assumption is misinformed. I asked ChatGPT the same question, and it looked up the news online and delivered a sparser, but accurate reply.

The GPT4 knowledge cutoff was recently updated to April 2023, btw.

https://chat.openai.com/share/66e87457-834f-422f-9b16-40902b...

◧◩◪
1034. superf+OD[view] [source] [discussion] 2023-11-17 23:21:58
>>chanks+Xx
Holy crap... Is ChatGPT just ChaCha for GenZ?

https://en.wikipedia.org/wiki/ChaCha_(search_engine)

Seriously though... I just remembered this was a thing and now I'm having crazy nostalgia.

◧◩◪◨
1040. dang+4E[view] [source] [discussion] 2023-11-17 23:23:06
>>brvsft+1l
Vouch buttons show up when a post is [dead], not when it's [flagged]. I unkilled that comment a while ago*, so it's no longer [dead], so there's no longer a vouch button.

* normally we wouldn't do that, but in threads that have a YC connection we moderate less, not more - see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

1041. schlec+5E[view] [source] 2023-11-17 23:23:07
>>davidb+(OP)
The same guy who leaked the Gobi name and some OpenAI release dates called this a month in advance:

> There’s been a vibe change at openai and we risk losing some key ride or die openai employees.

https://x.com/apples_jimmy/status/1717043210730852602?s=20

◧◩◪◨⬒
1046. dang+gE[view] [source] [discussion] 2023-11-17 23:23:56
>>kristo+qt
It's not super-banned; I specifically unkilled it. It just isn't a very good HN comment, because it's inflammatory, speculative, and doesn't contain any information.

Actually I normally would have detached it from the parent, especially because it's part of a top-heavy subthread, but I specifically didn't do that in this case because of the principle described here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

◧◩◪◨⬒⬓
1082. dragon+2G[view] [source] [discussion] 2023-11-17 23:31:10
>>golemo+iD
The parent is the nonprofit. OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors; there's also OpenAI LP, which is a for-profit limited partnership with the nonprofit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 and still exist?). OpenAI has for years been a nonprofit shell around a for-profit firm.

EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at >>38312577

◧◩◪◨⬒⬓⬔
1084. DonHop+5G[view] [source] [discussion] 2023-11-17 23:31:29
>>kristo+WE
Silicon Valley is a comedy, and that was a joke, obviously. But you can't deny there's a striking resemblance between Ariel & Fiona, and Ben & Sophia! That's why Silicon Valley was such a great show: they did their research.

The entire final storyline is about an AI trying to take over -- if you haven't watched it, you should! But many of my friends who live and work in Silicon Valley can't stand watching it, because it strikes too close to home, not because it isn't funny.

I think it's much more likely that Elon Musk fucked a robot, after having mistaken it for a human being in a robot suit.

https://www.youtube.com/watch?v=TsNc4nEX3c4

◧◩◪
1103. __jona+ZG[view] [source] [discussion] 2023-11-17 23:35:51
>>bertil+Jz
Not sure what you mean by this, it is used by the military, and not in secret:

https://scale.com/donovan

Scroll down on the page, OpenAI is listed as a model provider, with logo and everything.

Or do you mean some kind of more 'direct' deal with military?

◧◩◪
1110. kashya+hH[view] [source] [discussion] 2023-11-17 23:36:48
>>podnam+js
From where I'm sitting (not in Silicon Valley; but Western EU), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] >>35960125

◧◩◪◨
1116. enonim+CH[view] [source] [discussion] 2023-11-17 23:38:29
>>dogcom+ct
Worldcoin https://worldcoin.org/ deserves a mention
◧◩◪◨⬒
1120. enonim+1I[view] [source] [discussion] 2023-11-17 23:40:17
>>freedo+HE
He would own roughly 10% of https://worldcoin.org/ which aims to be the non-corruptible source of digital identity in the age of AI.
◧◩◪
1142. Sebb76+kJ[view] [source] [discussion] 2023-11-17 23:47:03
>>fshbbd+0H
Given the other comments in this thread, this vote was very recent, with Sam apparently not knowing of the situation yesterday. They haven't even updated their website, this page still describes Sam as CEO: https://openai.com/our-structure

With this apparent rush, I'd harbour a guess that the situation just happened to unfold on a Friday and wasn't planned as such.

◧◩
1146. Manouc+xJ[view] [source] [discussion] 2023-11-17 23:47:52
>>Sebb76+KC
Microsoft already has the GPT models, that's how Azure OpenAI is a thing from what I understand.

https://learn.microsoft.com/en-us/legal/cognitive-services/o...

◧◩
1152. ignora+IJ[view] [source] [discussion] 2023-11-17 23:48:37
>>baidif+aq
Kara Swisher just tweeted that MSFT knew about it merely minutes before the statement went out: https://twitter.com/karaswisher/status/1725657068575592617

Folks like Schmidt, Levchin, Chesky, Conrad have twitter posts up that weirdly read like obituaries.

◧◩◪◨
1166. mynega+8K[view] [source] [discussion] 2023-11-17 23:50:51
>>jaredk+MI
Really different between private and public companies. Recent hilarious piece from Matt Levine that was discussed on HN: https://www.bloomberg.com/opinion/articles/2023-11-16/hacker...
◧◩◪◨⬒⬓
1178. ChuckM+CK[view] [source] [discussion] 2023-11-17 23:53:44
>>golemo+iD
As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits[1], and a really good guide to maintaining your exemption [2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf

◧◩◪◨
1191. WillPo+dL[view] [source] [discussion] 2023-11-17 23:57:03
>>dizzyd+nE
This is his sister's Twitter:

https://twitter.com/phuckfilosophy

◧◩◪
1197. enonim+xL[view] [source] [discussion] 2023-11-17 23:58:38
>>pbadam+vz
Worldcoin deserves a look: https://worldcoin.org/
◧◩◪
1198. jy1+yL[view] [source] [discussion] 2023-11-17 23:58:41
>>pbadam+vz
Doesn't everyone at openai have "profit participation units"? https://www.levels.fyi/blog/openai-compensation.html
◧◩
1199. nikcub+zL[view] [source] [discussion] 2023-11-17 23:58:41
>>nikcub+Gj
further edit: found this comment on reddit [0][1] which also seems to line up:

> I feel compelled as someone close to the situation to share additional context about Sam and company.

> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

[0] https://www.reddit.com/r/OpenAI/comments/17xoact/sam_altman_...

[1] take it with a grain of salt

◧◩◪
1203. dragon+QL[view] [source] [discussion] 2023-11-18 00:00:32
>>JohnFe+4J
> OpenAI is two entities, one nonprofit and the other for-profit, that are owned by the same umbrella company.

According to their website, it's four entities:

1. OpenAI Global LLC (the for-profit firm that does most of the actual work), which Microsoft and #2 co-own.

2. A holding company, which #3 controls and #4 and other investors own.

3. OpenAI GP LLC, a management entity that #4 owns and which controls #2.

4. The OpenAI Nonprofit.

(There's a blog entry about OpenAI LP, a for-profit limited partnership, being founded in 2019, and I've seen information about them from earlier in 2023, but they aren't listed in the current structure. That might be the holding company, with the other investors as limited partners; it's odd, if so, that it's not named on the structure diagram and description.)

https://openai.com/our-structure

◧◩◪
1240. Wowfun+HN[view] [source] [discussion] 2023-11-18 00:09:24
>>nwoli+OK
Apple was not running laps around the competition at the time Steve Jobs was fired.

https://www.folklore.org/StoryView.py?project=Macintosh&stor...

◧◩◪◨
1244. chx+3O[view] [source] [discussion] 2023-11-18 00:11:00
>>dizzyd+nE
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
◧◩◪◨
1245. chx+4O[view] [source] [discussion] 2023-11-18 00:11:13
>>thequa+xD
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
◧◩◪◨⬒⬓
1247. chx+aO[view] [source] [discussion] 2023-11-18 00:11:47
>>enonim+1I
You need to read https://web3isgoinggreat.com/ more
1251. bluede+rO[view] [source] 2023-11-18 00:12:51
>>davidb+(OP)
Greg just quit too: https://twitter.com/gdb/status/1725667410387378559
◧◩◪◨⬒⬓⬔⧯
1254. dragon+AO[view] [source] [discussion] 2023-11-18 00:13:22
>>voisin+fN
https://www.thestreet.com/investors/sam-altman-net-worth-how...
1257. callum+GO[view] [source] 2023-11-18 00:13:38
>>davidb+(OP)
https://fxtwitter.com/ShimminyKricket/status/172562744637490...
◧◩◪
1265. SushiH+1P[view] [source] [discussion] 2023-11-18 00:15:37
>>schrod+sm
> Apple pulls its ads from X after Musk's antisemitic posts

>>38310673

1267. stolsv+dP[view] [source] 2023-11-18 00:16:03
>>davidb+(OP)
This totally business-as-usual post from Greg Brockman happened 1 hour before the one from OpenAI: https://x.com/gdb/status/1725595967045398920

https://x.com/openai/status/1725611900262588813

How crazy is that?!

(Edit 2 minutes after) .. and /there/ Greg quit!!

https://x.com/gdb/status/1725667410387378559

◧◩◪◨⬒
1270. scioli+lP[view] [source] [discussion] 2023-11-18 00:16:24
>>paxys+yM
Microsoft had no prior knowledge: https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
1271. WiSaGa+wP[view] [source] 2023-11-18 00:16:58
>>davidb+(OP)
Greg Brockman quit. https://twitter.com/gdb/status/1725667410387378559
◧◩
1287. kashya+EQ[view] [source] [discussion] 2023-11-18 00:22:15
>>baidif+aq
On lying: There's a great irony here. Altman apparently accepted[1] "Hawking Fellowship Award on behalf of OpenAI" at the University of Cambridge.

I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.

[1] https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3632s

1289. adones+XQ[view] [source] 2023-11-18 00:23:20
>>davidb+(OP)
https://twitter.com/gdb/status/1725667410387378559?t=7pBJMgg...

Greg resigned. Things are happening fr

◧◩◪
1291. Geee+fR[view] [source] [discussion] 2023-11-18 00:24:40
>>zmmmmm+8f
It might be related to this: https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.

◧◩◪◨
1294. hanzma+yR[view] [source] [discussion] 2023-11-18 00:25:59
>>outsid+Zx
Not sure if you are being sarcastic. MS has been sued for bribery and kickbacks and has paid a sizable fine to settle (including a criminal fine) with the US Justice Department.

https://www.reuters.com/article/us-microsoft-settlement/micr...

◧◩◪◨
1311. cloudk+iT[view] [source] [discussion] 2023-11-18 00:32:41
>>iandan+H9
Confirmation https://twitter.com/gdb/status/1725667410387378559
1319. orf+8U[view] [source] 2023-11-18 00:36:26
>>davidb+(OP)
https://twitter.com/FreddieRaynolds/status/17256564730808771...
◧◩◪◨⬒
1322. latexr+YU[view] [source] [discussion] 2023-11-18 00:40:58
>>enonim+CH
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

◧◩◪
1325. lannis+8V[view] [source] [discussion] 2023-11-18 00:41:44
>>asylte+xy
https://twitter.com/sama/status/1725631621511184771
◧◩◪◨⬒⬓⬔⧯▣
1328. dredmo+fV[view] [source] [discussion] 2023-11-18 00:42:11
>>mylidl+bz
Per dang, that's a consequence of user flags: >>38311933

This is hardly unexpected for profound allegations without strong supporting evidence, and yes, I'm well aware that presentation of any evidence would be difficult to validate on HN, such that a third-party assessment (as in a court of law, for example) would typically be required.

I'm not claiming that HN has a stellar record of dealing with unpleasant news or inconvenient facts. But that any such bias originates from YC rather than reader responses and general algorithmic treatments (e.g., "flamewar detector") is itself strongly unsupported, and your characterisation above really is beyond the pale.

◧◩◪
1342. cowl+GW[view] [source] [discussion] 2023-11-18 00:49:31
>>paxys+tL
The knives-out language is very unusual for any CEO dismissal, as is the urgent timing (they didn't even wait for markets to close just 30 minutes later, causing MSFT to lose billions). Anything less than massive legal or financial/regulatory risk, or a complete behind-the-back deal with someone, would have been handled with much more calm and much less adversarial language. Also, Greg Brockman has now resigned after it was announced that he would step down as chairman of the board. https://twitter.com/gdb/status/1725667410387378559
◧◩
1346. robbie+2X[view] [source] [discussion] 2023-11-18 00:51:43
>>baidif+aq
> Cant be a personal scandal, press release would be worded much more differently

I'm not sure. I agree with your point re wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

◧◩
1349. dang+hX[view] [source] [discussion] 2023-11-18 00:52:56
>>bluede+rO
Related ongoing thread:

Greg Brockman quits OpenAI - >>38312704

◧◩◪◨
1362. Nelson+7Y[view] [source] [discussion] 2023-11-18 00:57:24
>>DebtDe+Lf
Tasha McCauley has her own career and should not be characterized as "wife of an actor". https://www.linkedin.com/in/tasha-m-25475a54/
◧◩
1372. tentac+TY[view] [source] [discussion] 2023-11-18 01:02:27
>>solard+gc
> Sorry! Performance improvements are inching closer...

@dang, why have you been saying you're working on performance improvements re: pagination for three years[0]? Are there any prior architectural decisions holding you back? The "Click more" on very popular topics has turned into a bit of a meme.

[0]: https://hn.algolia.com/?dateRange=all&page=2&prefix=true&que...

◧◩◪◨
1382. latexr+901[view] [source] [discussion] 2023-11-18 01:08:55
>>mardif+pq
> Not everyone that you don't like is a fraudster.

Sam Altman in particular has precedent, with Worldcoin, that should make you wary of defending him on that particular point.

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

◧◩◪◨⬒⬓
1385. VirusN+A01[view] [source] [discussion] 2023-11-18 01:11:08
>>sersi+jX
>>34471720
1386. convex+C01[view] [source] 2023-11-18 01:11:18
>>davidb+(OP)
Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.

◧◩
1417. ssnist+j31[view] [source] [discussion] 2023-11-18 01:27:31
>>water-+As
GPT-3 had "books1" and "books2" among its training material, and "books2" never had its actual source disclosed: https://arxiv.org/pdf/2005.14165.pdf

Speculations about these source materials can be traced back as far as 2020: https://twitter.com/theshawwn/status/1320282152689336320

I don't think this issue would've flown under the radar for so long, especially with the implication that Ilya sided with the rest of the board to vote against Sam and Greg.

◧◩◪◨⬒
1421. JohnFe+A31[view] [source] [discussion] 2023-11-18 01:29:14
>>banana+PO
This was a huge deal and widely reported. You can easily find copious reporting with a web search if you don't like this link:

https://www.theverge.com/2018/3/6/17086276/google-ai-militar...

1436. fullad+K41[view] [source] 2023-11-18 01:37:17
>>davidb+(OP)
This appears to be relevant to Sam's firing: https://x.com/FreddieRaynolds/status/1725656473080877144?s=2...
◧◩◪◨⬒
1442. chubot+i51[view] [source] [discussion] 2023-11-18 01:40:21
>>lazyas+4X
Yeah, although if you read that as "I will do everything I can to raise the stock price, which executives and employees both hold", then it actually makes sense.

But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.

The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.

It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.

(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:

https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...

The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)

I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.

◧◩◪◨⬒
1450. latexr+c61[view] [source] [discussion] 2023-11-18 01:44:48
>>strike+xC
> what did he do before open ai?

Worldcoin. Which is, to put it mildly, not positive.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

◧◩◪◨
1460. thedai+H71[view] [source] [discussion] 2023-11-18 01:53:10
>>chubot+3K
In a recent profile, it was reported that he jokes in private about becoming the first trillionaire, which doesn't square with the public persona he has sought to craft. Reminds me of Zuckerberg proclaiming he would bring the world together while calling users fucking dumbshits in private chats.

https://nymag.com/intelligencer/article/sam-altman-artificia...

◧◩◪◨
1468. static+y81[view] [source] [discussion] 2023-11-18 01:59:30
>>ademeu+Vy
Altman told people on reddit OpenAI had achieved AGI and then when they reacted in surprise said he was "just meming".

https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...

I don't really get "meme" culture, but is that really how someone who believes their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?

◧◩◪◨⬒
1473. tempes+S81[view] [source] [discussion] 2023-11-18 02:01:38
>>soderf+5U
Announcement on twitter: https://twitter.com/gdb/status/1725667410387378559
◧◩◪◨
1499. chmod7+nb1[view] [source] [discussion] 2023-11-18 02:18:51
>>0xDEF+DL
Sidenote: even the name itself is typically Eastern European + Jewish.

According to Wikipedia it's "the East Slavic form of the male Hebrew name Eliyahu (Eliahu), meaning 'My God is Yahu/Jah.'"

https://en.wikipedia.org/wiki/Ilya

1513. johnwh+Uc1[view] [source] 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
◧◩◪
1515. heavys+Yc1[view] [source] [discussion] 2023-11-18 02:36:23
>>H8cril+Zp
Here's a thread titled "Sam Altman's sister, Annie Altman, claims Sam has severely abused her"[1].

[1] https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

◧◩◪
1527. ritwik+Fd1[view] [source] [discussion] 2023-11-18 02:41:08
>>airstr+PI
Military contracts are posted and solicited publicly. There's no "dark" acquisition of the type that you are suggesting. You can look up whether OpenAI has any contracts with the DoD at [0]. They do not.

[0] https://www.usaspending.gov/

◧◩
1532. convex+re1[view] [source] [discussion] 2023-11-18 02:47:56
>>convex+C01
Followup tweet by Kara: Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120

1542. josh-s+Xe1[view] [source] 2023-11-18 02:52:22
>>davidb+(OP)
Kara Swisher tweets claiming sources tell her the chief scientist was aligned with another board member against Altman and Brockman about a “move fast and pursue profit” vs “move slow and safely” divide.

https://twitter.com/karaswisher/status/1725702501435941294

https://x.com/karaswisher/status/1725682088639119857?s=20

◧◩
1568. convex+ch1[view] [source] [discussion] 2023-11-18 03:08:44
>>convex+C01
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

◧◩
1630. pauldd+0p1[view] [source] [discussion] 2023-11-18 04:06:34
>>baidif+aq
> Cant be a personal scandal

And Brockman (Chairman of the board) has resigned.

https://twitter.com/gdb/status/1725667410387378559

◧◩◪◨
1633. ivraat+lp1[view] [source] [discussion] 2023-11-18 04:09:01
>>chimer+zn1
The operations of the for-profit are subservient to those of the non-profit; the board of the non-profit controls all operations of the for-profit. They're not an "umbrella company" - while technically they are two different organizations run by the same board, one is controlled by the goals of the other. See https://openai.com/our-structure.
◧◩◪◨⬒⬓⬔
1692. Intral+bx1[view] [source] [discussion] 2023-11-18 05:04:15
>>mycolo+Q61
> A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).

Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?

Not sure if these are all the same sources I read a while ago, but e.g.:

https://www.theverge.com/features/23764584/ai-artificial-int...

https://www.marketplace.org/shows/marketplace-tech/human-lab...

https://www.technologyreview.com/2022/04/20/1050392/ai-indus...

https://time.com/6247678/openai-chatgpt-kenya-workers/

https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...

https://www.noemamag.com/the-exploited-labor-behind-artifici...

https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...

1695. drodio+Ax1[view] [source] 2023-11-18 05:07:21
>>davidb+(OP)
1723 comments are a lot to get through. I just made a SmartChat of them for anyone who wants to ask for a summary. Anyone can chat with it here: https://go.storytell.ai/sam-altman-hn-comments-smartchat

I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP

Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG

Helps to turn markdown mode on to see the list & table.

Hope that helps!

1696. momofu+Gx1[view] [source] 2023-11-18 05:07:51
>>davidb+(OP)
Update from sama:

https://x.com/sama/status/1725742088317534446

1714. gordon+LA1[view] [source] 2023-11-18 05:28:57
>>davidb+(OP)
From NYT article [1] and Greg's tweet [2]

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594

1718. singlu+nB1[view] [source] 2023-11-18 05:36:43
>>davidb+(OP)
Greg Brockman sharing the timeline on Twitter: https://twitter.com/gdb/status/1725736242137182594?s=46&t=Nn...
◧◩◪◨⬒⬓
1729. Intral+sC1[view] [source] [discussion] 2023-11-18 05:43:33
>>r7r8f7+Px1
Ah, I edited my comment right as you were writing yours.

> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]

For that specific example, Mozilla did also go with Yahoo for comparable revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corporation, and would have been the case regardless of their ownership structure.

> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.

The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:

https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation

https://foundation.mozilla.org/en/what-we-fund/

> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.

Ultimately I'm not sure what the point you're trying to argue is.

The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.

They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.

◧◩
1764. dkut+CF1[view] [source] [discussion] 2023-11-18 06:08:56
>>drodio+Ax1
Brand new to storytell but it seems your "knowledge" is open to all. Didn't know if you wanted all of this public.

http://postimg.cc/Lqv1LR3n

◧◩◪
1765. somena+HF1[view] [source] [discussion] 2023-11-18 06:09:32
>>cedws+xC1
Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...

◧◩◪◨
1774. foobie+AG1[view] [source] [discussion] 2023-11-18 06:17:37
>>outsid+4H
Yeah, Schmidt is whatever is on the opposite end of the spectrum from "mature" and "ethical."

https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...

1777. mfigui+DG1[view] [source] 2023-11-18 06:17:49
>>davidb+(OP)
Ron Conway:

>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.

https://twitter.com/RonConway/status/1725759359748309381

1779. jdprgm+LG1[view] [source] 2023-11-18 06:19:25
>>davidb+(OP)
https://www.youtube.com/watch?v=ppx06FLieeY
◧◩◪◨
1785. kmlevi+mH1[view] [source] [discussion] 2023-11-18 06:24:23
>>avindr+FG1
Masayoshi really is dumb. PG is smart, but he's a venture capitalist, and so is Sam. His strength is in helping build multi-billion-dollar ventures, and that's how he ran the company, so I can see how he could run into ideological conflict with the nonprofit true believers.

https://www.japantimes.co.jp/business/2023/11/08/companies/s...

◧◩
1790. stella+8I1[view] [source] [discussion] 2023-11-18 06:31:55
>>Michae+An1
Probably via retrieval augmented generation (RAG) https://www.promptingguide.ai/techniques/rag
◧◩◪
1792. drodio+hI1[view] [source] [discussion] 2023-11-18 06:35:24
>>dkut+CF1
Thanks for sharing! Privacy in Storytell is permissioned at the content level when you upload content. There are three privacy levels in Storytell:

- "anyone with the link"

- "only my organization" (i.e., people who have registered w/ the same biz email domain)

- "just me"

You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.

Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.

More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...

◧◩◪
1802. drodio+pJ1[view] [source] [discussion] 2023-11-18 06:47:54
>>ohblee+DC1
Cool, we're just getting started so let us know what we could build that would be helpful/valuable for you.

For example:

- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )

- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...

◧◩◪◨
1825. jacoop+SK1[view] [source] [discussion] 2023-11-18 07:00:53
>>somena+HF1
Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.

Apparently Microsoft was also blindsided by this.

https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...

◧◩◪
1831. drodio+iL1[view] [source] [discussion] 2023-11-18 07:05:07
>>Flammy+YC1
Would love to know what you'd like to see us build to make it even better for you!

You can also get to the "ground truth" data by clicking on the [x] reference footnotes, which will open up a third panel with the Story Tiles that we pull from our vector DB to construct the LLM response.

Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8

◧◩
1835. dwd+zL1[view] [source] [discussion] 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

◧◩◪◨
1838. 101011+TL1[view] [source] [discussion] 2023-11-18 07:11:03
>>somena+HF1
TY for sharing. I found this to be very enlightening, especially when reading more about the board members who were part of the ouster.

One of the board of directors that fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/) that are very much in line with safeguarding general intelligence

Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.

1840. pts_+3M1[view] [source] 2023-11-18 07:12:17
>>davidb+(OP)
His sister alleged abuse by him when they were kids https://www.timesnownews.com/world/sam-altman-sister-annie-a...
◧◩◪◨⬒
1846. anupam+jM1[view] [source] [discussion] 2023-11-18 07:14:22
>>bagels+u81
At the very least, there's no accurate way for a search engine to check for originality. It's like asking one machine to evaluate another.

Here's the top-most featured snippet when I google if programming languages had honest slogans: https://medium.com/nerd-for-tech/if-your-favourite-programmi...

Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...

◧◩◪
1895. garbth+5R1[view] [source] [discussion] 2023-11-18 08:00:26
>>lucubr+Xa1
i think this: https://youtu.be/9iqn1HhFJ6c?si=0nBNl1R1Aw37oVUH
◧◩◪◨⬒⬓
1900. ignora+yR1[view] [source] [discussion] 2023-11-18 08:04:47
>>kazama+VN1
Pachocki, Director of Research, just quit: >>38316378

Real chance of an exodus, which will be an utter shame.

◧◩
1907. painte+iS1[view] [source] [discussion] 2023-11-18 08:11:47
>>johnwh+Uc1
Elon Musk was talking about his view of OpenAI, and especially the role of Ilya, just 8 days ago on the Lex Fridman Podcast.

Listening to it again now, it feels like he might have known what was going on:

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100

Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5255

◧◩◪◨
1910. Alexan+CS1[view] [source] [discussion] 2023-11-18 08:14:20
>>MattRi+bO1
"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146

1912. Cyphas+ZS1[view] [source] 2023-11-18 08:16:57
>>davidb+(OP)
At the moment this thread is the third most highly voted ever on HN.

1. (6015) Stephen Hawking dying

2. (5771) Apple's letter related to the San Bernardino case

3. (4629) Sam Altman getting fired from OpenAI (this thread)

4. (4338) Apple's page about Steve Jobs dying

5. (4310) Bram Moolenaar dying

https://hn.algolia.com/

◧◩
1922. hitrad+hU1[view] [source] [discussion] 2023-11-18 08:29:35
>>johnwh+Uc1
This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c

Ilya clearly has a different approach to Sam

◧◩◪
1944. SXX+mY1[view] [source] [discussion] 2023-11-18 09:05:28
>>edgyqu+PM1
Microsoft only owns a minority share of the "for profit" subsidiary. The way OpenAI is structured, it would be basically impossible for Microsoft to increase its 49% share without the non-profit board's approval.

Most likely their share is this high to guarantee that no other company will compete for the share or the IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.

https://openai.com/our-structure

◧◩◪◨⬒⬓⬔
1948. logifa+EY1[view] [source] [discussion] 2023-11-18 09:07:27
>>fuzzte+kV1
> ?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai

◧◩◪◨⬒⬓
1956. aidama+pZ1[view] [source] [discussion] 2023-11-18 09:13:50
>>garden+aU1
not yet: https://arxiv.org/abs/2310.20216

that being said, it is highly intelligent, capable of reasoning as well as a human, and scores at levels like the 97th percentile on standardized tests such as the GMAT and GRE.

most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.

◧◩◪◨⬒⬓⬔
1957. happyt+tZ1[view] [source] [discussion] 2023-11-18 09:14:40
>>parent+OC1
Including the mining of the comments for ideas to publish on mainstream news.

https://techcrunch.com/2023/02/21/the-non-profits-accelerati...

1960. jrcplu+KZ1[view] [source] 2023-11-18 09:17:20
>>davidb+(OP)
Tweet from Sam, decoded by @hellokillian: “i love you all” I L Y A “one takeaway: go tell your friends how great you think they are.”

https://twitter.com/hellokillian/status/1725799674676936931

◧◩◪◨⬒⬓⬔
1963. ben_w+XZ1[view] [source] [discussion] 2023-11-18 09:19:16
>>NoOn3+kZ1
It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.

◧◩◪
1967. Alchem+c02[view] [source] [discussion] 2023-11-18 09:20:57
>>dwd+zL1
He's since reversed his call: https://twitter.com/jeremyphoward/status/1725714720400068752
1990. redbel+I12[view] [source] 2023-11-18 09:34:25
>>davidb+(OP)
I have to admit that this was a strong shock to me, not because I admire Sam but because it was extremely unexpected.

The first thing I saw this morning was this video [1] shared on Reddit, and I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer, checked HN of course, and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!

I know this feels hard. You spend years building such a successful company with an exceptional product, and then, without a hint of warning, you find yourself fired!

This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.

______________________

1. https://twitter.com/edmondyang/status/1725645504527163836

2018. Maximi+I52[view] [source] 2023-11-18 10:10:28
>>davidb+(OP)
sama on twitter: "if i start going off, the openai board should go after me for the full value of my shares"

https://twitter.com/sama/status/1725748751367852439

2019. pknerd+J52[view] [source] 2023-11-18 10:10:31
>>davidb+(OP)
"Let me repeat myself. Don’t hire hot girls before product market fit."

https://twitter.com/spakhm/status/1725750772024176976

◧◩◪◨⬒⬓⬔
2022. jwestb+i62[view] [source] [discussion] 2023-11-18 10:15:04
>>aidama+pZ1
Answers in Progress had a great video[0] where one of their presenters tested themselves against an LLM on five different types of intelligence. tl;dr, the AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.

0. https://www.youtube.com/watch?v=QrSCwxrLrRc

◧◩◪◨⬒⬓⬔⧯
2037. oska+k82[view] [source] [discussion] 2023-11-18 10:31:13
>>concor+y72
I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s

2073. falitj+9e2[view] [source] 2023-11-18 11:19:01
>>davidb+(OP)
Sama's I love you all –> I L Y A https://twitter.com/sama/status/1725742088317534446
◧◩◪◨⬒
2097. ayewo+ej2[view] [source] [discussion] 2023-11-18 11:57:07
>>croes+Fg2
Yes, along with the departure of gdb. In jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it now has.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752

2124. keepam+bo2[view] [source] 2023-11-18 12:29:57
>>davidb+(OP)
I find it fascinating how this occurred just after the big World Leader / CEO meet in SF.

Also, the paradox in the reactions to Sam Altman's firing is striking:

while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.

This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.

Another important question not discussed here: who sits on the board of OpenAI exactly and in full?

Another important aspect: the Orwellian euphemism used in the official announcement^0: "Leadership transition". Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. "OpenAI announces leadership transition" is November 2023's "Syria just had free and fair elections".

0: https://openai.com/blog/openai-announces-leadership-transiti...

◧◩◪◨⬒⬓⬔
2132. FabHK+Po2[view] [source] [discussion] 2023-11-18 12:34:05
>>foldr+6l2
Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw

2167. natebu+2z2[view] [source] 2023-11-18 13:39:48
>>davidb+(OP)
Follow the GPU.

- Sam Altman _briefly_ went on record saying that OpenAI was extremely GPU-constrained. The article was quickly redacted.

- The most recent round was literally scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...

- Plus, signups were paused.

If OpenAI needs GPUs to succeed, and can't raise any more capital to pay for them without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.

Sam & team needed more GPUs and failed to get them at OpenAI. I don't think it's any more complex than that.

◧◩◪◨⬒⬓
2203. colone+8J2[view] [source] [discussion] 2023-11-18 14:38:14
>>airstr+ul2
OpenAI has been for profit since 2019.

https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

◧◩◪◨⬒⬓⬔⧯
2214. nvm0n2+oN2[view] [source] [discussion] 2023-11-18 15:03:15
>>pmoria+wH2
Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.

◧◩◪◨⬒⬓⬔
2215. cthalu+3O2[view] [source] [discussion] 2023-11-18 15:07:25
>>colone+8J2
It is not that simple. https://openai.com/our-structure

The board is for the non-profit that ultimately owns and totally controls the for-profit company.

Everyone who works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit does not actually have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.

◧◩◪◨⬒⬓⬔⧯
2244. killer+TS2[view] [source] [discussion] 2023-11-18 15:39:39
>>criley+jD2
GPT is better than an average human at coding. GPT is worse than an average human at recognizing bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.

Is that the only way to train an AI? No. E.g. check "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?

◧◩◪◨⬒⬓
2252. mv4+FU2[view] [source] [discussion] 2023-11-18 15:49:32
>>kazama+VN1
Money attracts talent as well. Altman knows how to raise money.

2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...

2258. bicepj+6W2[view] [source] 2023-11-18 15:57:11
>>davidb+(OP)
Did you folks see this? https://x.com/thecaptain_nemo/status/1725717732518461930?s=4...

OpenAI recently updated their "company structure" page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they've reached AGI.

◧◩◪◨⬒⬓⬔
2268. Philpa+wY2[view] [source] [discussion] 2023-11-18 16:11:08
>>bl0rg+fE2
It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw
◧◩◪◨⬒
2275. dmix+XZ2[view] [source] [discussion] 2023-11-18 16:19:21
>>101011+TL1
So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always end up being) to a short-term window, betting on the predictions of a non-technical board member.

On top of that, there's far more to the world than China, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny). China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick, it plateaus. (LLMs might already be hitting a wall: https://twitter.com/HamelHusain/status/1725655686913392933)

The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted, but usually only after 10 starts/stops and half-baked non-solutions papered over as real solutions - if at all.

2304. amai+8h3[view] [source] 2023-11-18 17:48:11
>>davidb+(OP)
Somebody on HN saw this coming: >>36604501
◧◩◪◨⬒⬓
2312. sumthi+4k3[view] [source] [discussion] 2023-11-18 18:02:18
>>orwin+oX
A primer, I guess =]

https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...

◧◩◪
2315. sillys+xm3[view] [source] [discussion] 2023-11-18 18:13:49
>>dang+3t1
Publish the timestamps of all votes for the top 10 most upvoted stories. Then the community can create scatterplots showing the acceleration of each story's score:

  (def allstories ()
    "All visible loaded stories"
    (keep cansee (vals items*)))

  (def mostvoted (n (o stories (allstories)))
    "N most upvoted stories"
    (bestn n (compare > len:!votes) stories))

  (def votetimes (s)
    "The timestamp of each vote, in ascending order"
    (sort < (map car s!votes)))

  ; save vote timestamps for top 10 most upvoted stories

  ; each line contains the story id followed by a list of timestamps

  (w/outfile o "storyvotes.txt"
    (w/stdout o
      (each s (mostvoted 10)
        (apply prs s!id (votetimes s))
        (prn))))

  ; paste storyvotes.txt to https://gist.github.com/ and post the url here
Note that this prints the timestamps of all votes, whereas each story's score is the vote count minus sockpuppet votes.

If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. >>3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.

◧◩◪◨⬒⬓⬔
2320. trompe+oq3[view] [source] [discussion] 2023-11-18 18:30:28
>>krick+bK1
I mean, it's Buzzfeed, it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that had lauded a journalist for talking to school kids about his sexuality, after he was recently charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version regarding that project is that they take biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash but I think it's technically not.

https://whitepaper.worldcoin.org/technical-implementation

◧◩◪◨⬒⬓⬔⧯▣▦▧
2329. ddj231+Xw3[view] [source] [discussion] 2023-11-18 19:10:31
>>kelsey+yc3
This article gives a clear view of Marx's and Engels's view of utopianism vs. the other utopian socialists [1]: Marx was not opposed to utopianism per se, but rather to utopias whose ideas did not come from the proletariat. Yet you're right that he was opposed to the views of the other utopian socialists, and there is tension among the views of the different socialist thinkers of that time. (I do disagree with the idea that refusing to propose an ideal prevents one from having, in practice, a utopian vision.)

That said, my comment was looking mainly at the result of Marxist ideology in practice. In practice, millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin's utopian ideal [2].

[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...

[2] https://www.jstor.org/stable/3143688?seq=1

◧◩◪◨
2353. shrimp+lP3[view] [source] [discussion] 2023-11-18 20:56:56
>>Intral+7E1
Also Dang the designer from the show "Silicon Valley" https://www.youtube.com/watch?v=qyLv1dQasaY
◧◩◪◨⬒⬓⬔⧯▣▦
2355. killer+PP3[view] [source] [discussion] 2023-11-18 20:58:57
>>dcow+oV2
Chat-based AIs like ChatGPT are marketed as assistants. People expect them to answer their questions, and often they can answer even complex questions correctly. Then they can fail miserably on a basic question.

GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.

Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.

Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441

Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.

◧◩
2356. thinkc+AQ3[view] [source] [discussion] 2023-11-18 21:03:32
>>andyjo+6p
Because it's the reason he got fired.

https://www.plainsite.org/posts/aaron/r8huu7s/

◧◩◪◨
2387. justin+Lr4[view] [source] [discussion] 2023-11-19 00:22:57
>>moffka+G72
George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.

[0] https://georgelucasneck.tumblr.com/

◧◩◪◨⬒⬓⬔⧯
2388. rattra+2s4[view] [source] [discussion] 2023-11-19 00:24:56
>>rattra+xT3
seems to be Szymon Sidor – https://archive.is/Ij684#selection-595.304-595.316
◧◩◪◨⬒⬓
2397. chatma+NV4[view] [source] [discussion] 2023-11-19 03:43:03
>>dragon+sp
Random question: do you have any connection to the Dragon speech-to-text software [0] that was first released in 1997? I've always found that to be an intriguing example of software that was "ahead of its time" (along with "the mother of all demos" [1]). And if so, it's funny to see you replying to the account named after (a typo of) ChatGPT.

[0] https://en.wikipedia.org/wiki/Dragon_NaturallySpeaking

[1] https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

2411. gebt+DH5[view] [source] 2023-11-19 11:11:30
>>davidb+(OP)
Last week lcamtuf published a blog post on corporate life that fits here exactly. Whatever you do as an employee for your company, they can still fire you easily. Corporations want you to consider them family, but they don't do the same themselves.

https://lcamtuf.substack.com/p/on-corporate-life

◧◩◪◨⬒⬓
2416. jakder+GW5[view] [source] [discussion] 2023-11-19 13:29:25
>>vidarh+UY2
I understand why people fall for it. They see someone highly successful and assume he possesses prophetic insights into the world so profound that trying to explain his tweets to us mortals would be a waste of everyone's time.

Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.

The people on the following list are much wealthier than that VC guy:

https://en.wikipedia.org/wiki/List_of_Tiger_Cubs_(finance)

You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.

◧◩◪◨⬒
2421. happyt+H57[view] [source] [discussion] 2023-11-19 19:35:19
>>happyt+WJ
That didn’t take long.

https://www.bloomberg.com/news/articles/2023-11-19/altman-so...

◧◩◪◨⬒⬓⬔⧯▣▦
2429. dagaci+8c9[view] [source] [discussion] 2023-11-20 08:49:34
>>nprate+qV5
Forked:

https://twitter.com/satyanadella/status/1726509045803336122

"to lead a new advanced AI research team"

I would assume that Microsoft negotiated significant rights with regard to R&D and any IP.

◧◩
2443. omgJus+Nbk[view] [source] [discussion] 2023-11-23 00:10:58
>>omgJus+3i
Now #1 appears to be manifesting https://www.reuters.com/technology/sam-altmans-ouster-openai...
◧◩◪◨⬒⬓⬔⧯
2448. roguec+5ns[view] [source] [discussion] 2023-11-26 02:29:28
>>fennec+VN9
The Model T killed a _lot_ of people, and almost certainly should have been banned: https://www.detroitnews.com/story/news/local/michigan-histor...

If it had been, we wouldn't now be facing an extinction event.

[go to top]