zlacker

OpenAI staff threaten to quit unless board resigns

submitted by skille+(OP) on 2023-11-20 13:41:27 | 1441 points 1245 comments

NOTE: showing posts with links only
1. skille+g[view] [source] 2023-11-20 13:42:21
>>skille+(OP)
https://archive.is/RiAqC
6. JumpCr+74[view] [source] 2023-11-20 13:57:26
>>skille+(OP)
We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.

[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...

◧◩
24. lordna+G5[view] [source] [discussion] 2023-11-20 14:02:25
>>yeck+o4
For people who appreciate some vintage British comedy:

https://www.youtube.com/watch?v=Gpc5_3B5xdk

The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?

42. rsecor+D6[view] [source] 2023-11-20 14:05:21
>>skille+(OP)
Also discussed here: >>38348042
50. breadw+17[view] [source] 2023-11-20 14:06:24
>>skille+(OP)
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 billion in cash, because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition for Microsoft: they get a $90 billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

72. sesutt+88[view] [source] 2023-11-20 14:09:50
>>skille+(OP)
Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

https://twitter.com/ilyasut/status/1726590052392956028

◧◩
78. fallen+q8[view] [source] [discussion] 2023-11-20 14:10:26
>>wxw+T4
he even posted an apology: https://x.com/ilyasut/status/1726590052392956028?s=20

what the actual fuck =O

96. ekojs+a9[view] [source] 2023-11-20 14:12:37
>>skille+(OP)
From The Verge [1]:

> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.

So, 3/4 of the current board (excluding Ilya) held on despite this letter?

[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...

◧◩◪◨
99. almost+h9[view] [source] [discussion] 2023-11-20 14:12:48
>>nextwo+k7
MSFT RSUs actually have value, as opposed to OpenAI’s Profit Participation Units (PPUs).

https://www.levels.fyi/blog/openai-compensation.html

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...

132. nkcmr+xa[view] [source] 2023-11-20 14:17:02
>>skille+(OP)
A lot of people here seem to be forgetting [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor)

> Never attribute to malice that which is adequately explained by stupidity.

◧◩◪◨
148. engina+gb[view] [source] [discussion] 2023-11-20 14:19:21
>>saagar+I7
looks like he found his Twitter password: https://x.com/ilyasut/status/1726590052392956028?s=20
◧◩◪
151. dhruvd+ub[view] [source] [discussion] 2023-11-20 14:20:12
>>paulpa+Q8
They acquired Activision for $69B recently.

While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.

Some background: https://sl.bing.net/dEMu3xBWZDE

◧◩◪
188. DonHop+Tc[view] [source] [discussion] 2023-11-20 14:24:40
>>zeven7+R7
"I am NOT a BELLBOY!"

https://www.youtube.com/watch?v=d8oVTKG39U8&t=27s

◧◩◪
224. breadw+Ee[view] [source] [discussion] 2023-11-20 14:30:51
>>Myster+x9
See here: https://stratechery.com/2023/openais-misalignment-and-micros...
233. alvis+4f[view] [source] 2023-11-20 14:32:09
>>skille+(OP)
& the most drastic thing is that Ilya says he regrets what he has done and undersigned the public statement.

https://twitter.com/ilyasut/status/1726590052392956028

◧◩
275. Philpa+Hg[view] [source] [discussion] 2023-11-20 14:38:48
>>ThinkB+Vb
It looks like it's OpenAI²: https://twitter.com/satyanadella/status/1726516824597258569
◧◩◪◨
279. j-a-a-+Vg[view] [source] [discussion] 2023-11-20 14:39:35
>>JumpCr+x8
Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:

You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.

https://www.imdb.com/title/tt2039393/characters/nm0000422

◧◩
296. raphma+Ph[view] [source] [discussion] 2023-11-20 14:42:32
>>tarrud+qf
Somehow reminds me of Nokia...

>>7645482

frik on April 25, 2014:

> The Nokia fate will be remembered as hostile takeover. Everything worked out in the favor of Microsoft in the end. Though Windows Phone/Tablet have low market share, a lot lower than expected.

> * Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia

> * Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...

For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.

◧◩◪◨
302. jacque+ei[view] [source] [discussion] 2023-11-20 14:44:00
>>toomuc+B8
>>38331457
◧◩
311. ChoGGi+Gi[view] [source] [discussion] 2023-11-20 14:46:43
>>whatwh+Ob
https://twitter.com/ilyasut/status/1726590052392956028?s=20
◧◩◪◨
312. kolink+Ji[view] [source] [discussion] 2023-11-20 14:46:56
>>breadw+Ee
The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns the IP to GPT, only that they have access to its weights and code.
329. theyin+rj[view] [source] 2023-11-20 14:50:41
>>skille+(OP)
Wow, they made it into Guardian live ticker land: https://www.theguardian.com/business/live/2023/nov/20/openai...
◧◩◪◨
408. ekojs+Qm[view] [source] [discussion] 2023-11-20 15:08:01
>>derwik+Sh
They do actually, https://learn.microsoft.com/en-us/azure/ai-services/openai/w...
427. tolmas+Rn[view] [source] 2023-11-20 15:14:40
>>skille+(OP)
Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]. It certainly at the very least seems to be a fairly effective attack against HN servers.

1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...

◧◩
434. alexdu+7o[view] [source] [discussion] 2023-11-20 15:16:12
>>two_in+nf
Competitors including Quora: https://quorablog.quora.com/Poe-1
◧◩◪◨
448. hotnfr+Wo[view] [source] [discussion] 2023-11-20 15:22:25
>>toomuc+B8
>>38338096

What do I win? Hahaha.

◧◩
475. tucnak+xq[view] [source] [discussion] 2023-11-20 15:32:58
>>sesutt+88
To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and most instead chose to accommodate the spin, immediately going into "following the money", a.k.a. blaming Microsoft, et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown an exciting play as old as the world: a genius scientist being exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is referred to, by insiders, as wielding influence comparable to a USAF colonel [1]) brand themselves as a new regime that will return OpenAI to its former moral and ethical glory, so the first thing they were forced to do was get rid of the main greedy capitalist Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. So they were going to put in his place their nominal ideological leader Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? In the coming of a literal superpower, and a quite particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].

Roughly speaking, and based on my experience in this kind of analysis (please give me some leeway, as English is not my native language), what I see is all the infallible markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word they use to describe such elements: "terrorist." I was taught to look up when assessing actions of the Americans, i.e. more often than not we're expecting nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make their policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: >>38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899

◧◩◪◨
537. Terret+Ou[view] [source] [discussion] 2023-11-20 16:00:16
>>Furiou+re
> all roads lead back Adam d'Angelo

Maybe someone thinks Sam was “not consistently candid” about one of the feature bullets in the latest release: dropping d'Angelo's Poe directly into the ChatGPT app for no additional charge.

Given the dev day timing and the update releasing these "GPTs", this is an entirely plausible timeline.

https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...

◧◩
538. himara+Tu[view] [source] [discussion] 2023-11-20 16:00:46
>>breadw+17
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

◧◩◪◨
543. culi+dv[view] [source] [discussion] 2023-11-20 16:02:55
>>august+Vt
This was actually a pretty recent change from 2018. iirc it was actually Newman's Own that set the precedent for this:

https://nonprofitquarterly.org/newmans-philanthropic-excepti...

> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:

> * The business must be owned by the private foundation through 100 percent ownership of the voting stock.
> * The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
> * All profits of the business must be distributed to the foundation.
◧◩
561. giggle+Ew[view] [source] [discussion] 2023-11-20 16:10:49
>>ekojs+a9
She's also reporting that the newly anointed interim CEO already wants to investigate the board fuck-up that put him there.

https://x.com/karaswisher/status/1726626239644078365?s=20

562. zitter+cx[view] [source] 2023-11-20 16:13:47
>>skille+(OP)
I guess Microsoft now has a new division. (https://www.microsoft.com/investor/reports/ar13/financial-re...)

Supposedly, Microsoft's divisions are rumored to compete with each other to the point that they can actually have a negative impact.

◧◩◪◨
573. strunz+Ky[view] [source] [discussion] 2023-11-20 16:21:59
>>guhcam+Vs
His previous post seems pretty ironic in that light - https://twitter.com/ilyasut/status/1710462485411561808
590. r4inde+jA[view] [source] 2023-11-20 16:30:33
>>skille+(OP)
The tweet was updated five minutes later to correct 550 to 505.

https://twitter.com/karaswisher/status/1726599700961521762?s...

623. Emma_G+QE[view] [source] 2023-11-20 16:53:23
>>skille+(OP)
I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission written into OpenAI's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.

The board may have been incompetent and shortsighted. Perhaps they should even try to bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter that fails to signal where they stand on the crucial issue - the purpose of OpenAI and their collective work? Given the stakes the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.

[1] https://openai.com/charter

◧◩◪◨
626. bnralt+CF[view] [source] [discussion] 2023-11-20 16:56:07
>>vel0ci+dD
> They don't make large profits otherwise they wouldn't be nonprofits.

This is a common misunderstanding. Non-profits/501(c)(3) can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.

But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.

[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...

◧◩
656. FartyM+LI[view] [source] [discussion] 2023-11-20 17:06:54
>>Emma_G+QE
> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to make that happen [1].

If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.

[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...

◧◩
657. minima+PI[view] [source] [discussion] 2023-11-20 17:07:05
>>r4inde+jA
The tweet is now obsolete, as OpenAI employees are confirming the number is much higher now, at least 650: https://twitter.com/lilianweng/status/1726634736943280270
663. dang+HJ[view] [source] 2023-11-20 17:09:53
>>skille+(OP)
All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.

I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:

https://news.ycombinator.com/item?id=38347868&p=2

https://news.ycombinator.com/item?id=38347868&p=3

https://news.ycombinator.com/item?id=38347868&p=4

etc...

◧◩◪
665. Emma_G+PJ[view] [source] [discussion] 2023-11-20 17:10:16
>>mcny+zG
Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.

EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...

◧◩◪
702. selimt+9N[view] [source] [discussion] 2023-11-20 17:20:41
>>lordna+G5
Funny, I would’ve thought this one would have been more appropriate

https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm

Substitute with appropriate ex-Soviet doomer music as necessary

◧◩◪◨⬒
740. araes+9Q[view] [source] [discussion] 2023-11-20 17:29:21
>>bnralt+CF
501(c)(3) is also not the only form of non-profit (note the (3))

https://en.wikipedia.org/wiki/501(c)_organization

"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"

However, many other forms of organizations can be non-profit, with utterly no implied morality.

Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].

Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...

◧◩◪◨⬒⬓
745. aaronb+nQ[view] [source] [discussion] 2023-11-20 17:30:13
>>bbor+6L
https://www.nytimes.com/2023/01/25/podcasts/the-daily/nonpro...
◧◩◪
746. throwa+rQ[view] [source] [discussion] 2023-11-20 17:30:26
>>dayjah+RN
Not just Sam: since Greg stuck with Sam and immediately quit, he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic, you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.

[0] https://blog.samaltman.com/greg

◧◩◪◨⬒
762. ben_w+uR[view] [source] [discussion] 2023-11-20 17:34:07
>>crazyg+aO
Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking: https://machinelearning.apple.com/research/
770. silver+fS[view] [source] 2023-11-20 17:37:12
>>skille+(OP)
The entire history of the fiasco on X:

https://docs.google.com/document/d/1SWnabqe1PviVE3K7KIZsN4IA...

◧◩◪◨⬒⬓
779. jdminh+RS[view] [source] [discussion] 2023-11-20 17:39:33
>>belter+FQ
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.

◧◩
781. alephn+6T[view] [source] [discussion] 2023-11-20 17:40:52
>>kashya+KM
Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member of Dustin Moskovitz's Asana.

Dustin Moskovitz isn't on the board but gave OpenAI $30M in funding via his non-profit Open Philanthropy [0].

Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.

Helen Toner was probably brought in due to her past work at Open Philanthropy - a Dustin Moskovitz-funded non-profit working on building OpenAI-type initiatives - and was also close to Sam Altman. They also gave OpenAI the initial $30M [0].

Essentially, this is a donor-versus-investor battle. The donors aren't gonna make money off OpenAI's commercial endeavors that began in 2019.

It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.

[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...

785. silver+rT[view] [source] 2023-11-20 17:41:52
>>skille+(OP)
ICYMI: Timeline of all the madness >>38351214
789. ActVen+QT[view] [source] 2023-11-20 17:43:14
>>skille+(OP)
Adam has to be behind this. It is very reminiscent of the situation with Quora and Charlie. https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...
◧◩
790. kashya+ST[view] [source] [discussion] 2023-11-20 17:43:17
>>Emma_G+QE
(I can't comment on the workforce question, but one thing below on bringing SamA back.)

Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)

That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with that when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.

These sort of Silicon Valley top dogs don't think in terms of sustainability.

Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm approximately their age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.

[1] >>38312294

[2] https://news.ycombinator.com/edit?id=38350890

797. mfigui+mU[view] [source] 2023-11-20 17:44:33
>>skille+(OP)
Amir Efrati (TheInformation):

> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.

https://twitter.com/amir/status/1726656427056668884

◧◩◪
816. accrua+PV[view] [source] [discussion] 2023-11-20 17:50:17
>>calf+Mk
/r/singularity has been having a field day with this.

https://old.reddit.com/r/singularity/

◧◩◪
846. alasda+hY[view] [source] [discussion] 2023-11-20 17:57:33
>>himara+Tu
They could make ChatGPT++

https://en.wikipedia.org/wiki/Visual_J%2B%2B

◧◩◪◨
862. alephn+xZ[view] [source] [discussion] 2023-11-20 18:02:09
>>mizzao+1U
From Open Philanthropy - a Dustin Moskovitz-funded non-profit working on building OpenAI-type initiatives. They also gave OpenAI the initial $30M. She was their observer.

https://www.openphilanthropy.org/grants/openai-general-suppo...

◧◩◪◨⬒
864. ignora+KZ[view] [source] [discussion] 2023-11-20 18:03:05
>>idopms+CD
https://dictionary.cambridge.org/dictionary/english/likely
◧◩◪◨
865. himara+UZ[view] [source] [discussion] 2023-11-20 18:03:39
>>svnt+uX
I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI [1], so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?

[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

◧◩◪◨
877. Terret+011[view] [source] [discussion] 2023-11-20 18:07:30
>>svnt+uX
>>38348995
903. ekojs+C31[view] [source] 2023-11-20 18:16:26
>>skille+(OP)
Now the count is at 700/770 (https://twitter.com/ashleevance/status/1726659403124994220).
◧◩◪◨⬒⬓
925. jowea+O51[view] [source] [discussion] 2023-11-20 18:24:21
>>denton+PP
> My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".

It takes time if you're a normal employee under standard operating procedure. If you really want to you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...

◧◩◪◨
933. topspi+W61[view] [source] [discussion] 2023-11-20 18:28:52
>>dkjaud+fZ
> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...

◧◩◪◨⬒
942. Apocry+D71[view] [source] [discussion] 2023-11-20 18:30:59
>>anonym+8M
That's what happened to the German program though

https://en.wikipedia.org/wiki/German_nuclear_weapons_program

974. layer8+dc1[view] [source] 2023-11-20 18:46:21
>>skille+(OP)
700+ of 770 now: https://twitter.com/joannejang/status/1726667504133808242
◧◩◪◨
978. lrvick+zc1[view] [source] [discussion] 2023-11-20 18:47:44
>>supriy+7J
There is no moat

https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

980. Parano+Vc1[view] [source] 2023-11-20 18:48:44
>>skille+(OP)
The Fear and Tension That Led to Sam Altman's Ouster at OpenAI

https://txtify.it/https://www.nytimes.com/2023/11/18/technol...

NYT article about how AI safety concerns played into this debacle.

The world's leading AI company now has an interim CEO, Emmett Shear, who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, which is spending $50 billion a year on datacenters.

Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:

https://twitter.com/thiagovscoelho/status/172650681847663424...

And note Shear has tweeted that the Sam firing wasn't safety-related. These might be weasel words, since all players involved know the legal consequences of admitting to any safety concerns publicly.

◧◩◪
989. btown+ue1[view] [source] [discussion] 2023-11-20 18:54:03
>>himara+Tu
Archive of the WSJ article above: https://archive.is/OONbb
◧◩◪◨⬒⬓
1026. freedo+qm1[view] [source] [discussion] 2023-11-20 19:24:38
>>financ+VB
I don't actually think (2) is part of the razor[1]. If it is, then it doesn't make sense, because (1) is an absolute (i.e. "never") which always evaluates to boolean "true", so statement (2) is never actually executed and is dead code.

Nevertheless, I agree with you and think (2) is wise to always keep in mind. I love Hanlon's Razor, but people definitely shouldn't take it literally as written and/or as law.

[1]: https://en.wikipedia.org/wiki/Hanlon%27s_razor

◧◩◪◨⬒⬓
1076. incaho+RH1[view] [source] [discussion] 2023-11-20 20:44:36
>>0xNotM+1w1
Makes sense given their deal with the DoD a year or so ago

https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...

◧◩◪◨⬒⬓⬔
1077. noprom+0I1[view] [source] [discussion] 2023-11-20 20:45:21
>>nvm0n2+oa1
I'll leave this here... As a secondary response to your assertion re Ilya.

https://twitter.com/Benioff/status/1726695914105090498

1083. Parano+iL1[view] [source] 2023-11-20 20:59:22
>>skille+(OP)
https://twitter.com/thiagovscoelho/status/172650681847663424...

Here's a tweet transcribing OpenAI interim CEO Emmett Shear's views on AI safety; or see the YouTube video for the original source. Some excerpts:

Preamble on his general pro-tech stance:

"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Every technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."

On why AI would be dangerous to humanity:

"If you build something that is a lot smarter than us - not somewhat smarter, but as much smarter than we are as we are than dogs, for example, a big jump - that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."

On his path to safe AI:

"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."

On why skeptics need to stop side-stepping the debate:

"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.

"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."

◧◩◪◨⬒⬓⬔⧯▣
1085. westur+RL1[view] [source] [discussion] 2023-11-20 21:00:55
>>tempes+Gt1
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) with next word prediction just is it,

"LLMs cannot find reasoning errors, but can correct them" >>38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486

◧◩◪◨⬒⬓⬔
1122. shon+FW1[view] [source] [discussion] 2023-11-20 21:46:59
>>acje+US1
http://clippy.pro
◧◩◪◨⬒
1131. cpeter+102[view] [source] [discussion] 2023-11-20 22:03:45
>>burnte+hr1
Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.

https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...

◧◩
1194. biglyb+rH2[view] [source] [discussion] 2023-11-21 02:34:40
>>burcs+IB2
Link to latest numbers that say 95%? Last I saw was ~91% (700-of-770):

https://www.washingtonpost.com/technology/2023/11/20/microso...

◧◩◪
1197. IM6711+rJ2[view] [source] [discussion] 2023-11-21 02:48:48
>>biglyb+rH2
Here’s a tweet from Evan Morikawa, who’s been reporting numbers throughout the day.

https://twitter.com/E0M/status/1726743918023496140

◧◩◪
1199. breadw+MN2[view] [source] [discussion] 2023-11-21 03:18:37
>>himara+Tu
"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."

Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...

◧◩◪
1203. runjak+vZ2[view] [source] [discussion] 2023-11-21 04:44:42
>>himara+Tu
1. The article you posted is from June 2023.

2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.

That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.

Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.

It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.

https://twitter.com/karaswisher/status/1726782065272553835

Caveat: I don't know anything.

◧◩◪◨⬒⬓⬔
1213. nvm0n2+wr3[view] [source] [discussion] 2023-11-21 08:51:50
>>famous+rx1
Right, it was the case. Is it still? It's nearly the end of 2023, and I see three papers with his name on them this year, all with his name in last place (i.e. minor contributions):

https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...

Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known: he's become a doomer who wants to stop AI research, the exact opposite of the sort of person you want around in a fast-moving startup. The benefits? Well... unless he's doing a ton of mentoring or other behind-the-scenes soft work, it's hard to see what they'd lose.

◧◩◪◨⬒⬓⬔⧯▣
1232. razoda+Cj6[view] [source] [discussion] 2023-11-21 23:31:22
>>foobar+xr1
I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.
◧◩◪◨⬒⬓⬔
1238. Manouc+N09[view] [source] [discussion] 2023-11-22 17:52:11
>>ipaddr+d51
I was going to, but then a few weeks ago I discovered LibreChat existed. I use it way more often than ChatGPT now; it's been quite stable for me.

https://github.com/danny-avila/LibreChat

1241. caycep+ooc[view] [source] 2023-11-23 18:00:12
>>skille+(OP)
Comedy is starting to weigh in: https://www.threads.net/@thedailyshow/post/Cz8-7nLucAm
◧◩◪◨⬒⬓⬔⧯▣
1243. fsflov+pHc[view] [source] [discussion] 2023-11-23 19:43:44
>>Justsi+Vqc
Many currencies in history have lost all or almost all of their value during serious economic crises in their respective countries. It seems you wouldn't call those money either? Crypto is simply an alternative currency.

See also: https://en.m.wikipedia.org/wiki/Private_currency

[go to top]