zlacker

We have reached an agreement in principle for Sam to return to OpenAI as CEO

submitted by staran+(OP) on 2023-11-22 06:01:45 | 1980 points 1915 comments
[view article] [source] [links] [go to bottom]
replies(201): >>jahsom+n >>siva7+o >>AaronN+r >>adlpz+u >>colmvp+A >>turndo+D >>eganis+F >>Finbar+G >>Uhhrrr+L >>ssnist+M >>koito1+P >>rvz+Q >>mlazos+R >>xyst+S >>tomohe+V >>anothe+Y >>gloyoy+31 >>ryzvon+71 >>HPMOR+g1 >>hadrie+h1 >>r721+k1 >>epups+l1 >>weirdi+o1 >>gzer0+B1 >>seydor+F1 >>KoftaB+H1 >>waihti+M1 >>wnevet+N1 >>altpad+R1 >>fnordp+T1 >>veqq+V1 >>o0-0o+12 >>transc+32 >>doctob+g2 >>TheAce+h2 >>Americ+l2 >>theano+q2 >>Gud+s2 >>bobsoa+t2 >>adastr+v2 >>jdprgm+x2 >>qualif+K2 >>pdx6+N2 >>3cats-+Q2 >>craken+S2 >>meetpa+13 >>HaZeus+93 >>joegib+r3 >>nights+D3 >>arduan+L3 >>gwnywg+P3 >>SeanAn+Q3 >>thrwii+S3 >>ah765+04 >>gregat+34 >>huseyi+j4 >>rmrf10+x4 >>flylib+z4 >>halfjo+M4 >>wannac+U4 >>gcanyo+Y4 >>lysecr+85 >>1024co+l5 >>gzer0+v5 >>quickt+y5 >>tunesm+36 >>alex_y+96 >>hshsbs+e6 >>3Sopho+x6 >>acl777+17 >>shubha+B7 >>ayakan+R7 >>ulfw+28 >>unders+88 >>random+a8 >>anigbr+l8 >>dukeof+p8 >>sinuhe+J8 >>sashan+S8 >>eclect+79 >>intend+y9 >>auggie+E9 >>I_am_t+I9 >>benkar+K9 >>Satam+0a >>dbuser+3a >>laserl+gb >>jurgen+nb >>kumarv+yb >>kumarv+7c >>doyoue+mc >>ah765+Pc >>andrew+ld >>renewi+Ld >>didip+4e >>s-xyz+df >>tomalb+Sf >>_Alger+Wf >>global+kg >>dcreat+pg >>rurban+Ug >>simone+Xg >>timetr+oh >>dcreat+5i >>upupup+bi >>jl2718+Aj >>righth+Cj >>olgias+Om >>xeckr+Qm >>Havoc+5n >>cft+co >>person+Po >>notfed+Mp >>pjmlp+0q >>ugh123+ls >>1vuio0+Et >>mkii+Lt >>zx8080+Wt >>wouldb+fv >>ensoco+zv >>ChatGT+Hw >>MattHe+bx >>ChatGT+Uy >>Uptren+Zy >>ecmasc+Oz >>quietp+zA >>jmyeet+pB >>nickys+1C >>superu+sC >>wilde+SC >>corobo+rD >>martin+ID >>j4yav+4E >>mlindn+7E >>gongag+8E >>cbeach+vE >>davidt+UE >>dizzyd+XE >>DebtDe+vF >>al_be_+rG >>causi+EG >>fredgr+NG >>roody1+HI >>garris+EJ >>bvan+UK >>Norweg+xL >>minzi+BM >>danger+DM >>EarthA+ZO >>alieni+sP >>sys_64+xP >>nojvek+hR >>pimpam+qR >>donoho+vS >>throwa+sV >>iterat+zV >>nomaD_+cW >>Pigalo+TY >>rceDia+N01 >>Mriraz+b31 >>ChoGGi+c31 >>lysecr+K41 >>Bryant+b61 >>geniiu+S61 >>theGnu+T81 >>jafitc+F91 >>accoun+5g1 >>orsent+Tg1 >>rennsp+Rh1 >>thepas+wi1 >>kibwen+oj1 >>evan_+zk1 >>bmitc+al1 >>voiceb+6o1 >>beepbo+Rr1 >>incaho+Pt1 >>diamon+4u1 >>iamlep+jv1 >>melvin+nz1 >>gsuuon+Xz1 >>archsu+dC1 >>jcutre+fE1 >>carapa+zK1 >>hacker+OO1 >>taway1+aP1 >>dang+8U1 >>macrae+e02 >>rashid+p02 >>Ruq+V02 >>davegu+G92 >>zeroha+db2 >>nbzso+Gg2 >>jacque+ah2 >>jrflow+wk2 >>xyst+Wt2 >>zeroha+CR2 >>nbzso+yK3 >>jodupl+sA4 >>Obscur+NB4 >>carapa+I75 >>toaste+bJ5
1. jahsom+n[view] [source] 2023-11-22 06:04:54
>>staran+(OP)
That is one hot potato.
2. siva7+o[view] [source] 2023-11-22 06:04:54
>>staran+(OP)
Interesting that Adam is still on Board. This hints to Helen being the main perpetrator of the drama?
replies(3): >>averev+f1 >>fatbir+r2 >>GreedC+4b
3. AaronN+r[view] [source] 2023-11-22 06:05:36
>>staran+(OP)
What a wild ride. I have used X more the past few days than in a long time; that’s for sure!
replies(1): >>stingr+6s
4. adlpz+u[view] [source] 2023-11-22 06:05:50
>>staran+(OP)
Good, was out of popcorn already.

Somebody make a Netflix documentary please.

5. colmvp+A[view] [source] 2023-11-22 06:06:20
>>staran+(OP)
So, is he still going to lead some team at Microsoft?
replies(1): >>wilg+X
6. turndo+D[view] [source] 2023-11-22 06:06:30
>>staran+(OP)
From the outside none of this makes much sense. So the old board just disliked him enough to oust him but apparently didn’t have a good pulse on the company and overplayed their hand?
replies(3): >>fruit2+X4 >>yosame+cc >>nbanks+zj
7. eganis+F[view] [source] 2023-11-22 06:06:39
>>staran+(OP)
Any analysis on how Satya Nadella comes out on all of this? Or what impact this might have at all within Microsoft?
replies(2): >>nothro+G2 >>Racing+Uj
8. Finbar+G[view] [source] 2023-11-22 06:06:52
>>staran+(OP)
Hopefully Sam and Greg get restored to the board also.
replies(1): >>robbie+N
9. Uhhrrr+L[view] [source] 2023-11-22 06:07:33
>>staran+(OP)
Sure, why not.
10. ssnist+M[view] [source] 2023-11-22 06:07:37
>>staran+(OP)
"In principle" has me less than 100% assured. Hopefully no more plot twists in this. Everyone, inside and outside, has probably had enough.
replies(1): >>laweij+A51
◧◩
11. robbie+N[view] [source] [discussion] 2023-11-22 06:07:37
>>Finbar+G
The most recent reporting I saw from Bloomberg said sama would return as CEO only.
replies(2): >>ssnist+a1 >>adastr+i1
12. koito1+P[view] [source] 2023-11-22 06:07:43
>>staran+(OP)
What a wild ride these past few days have been. Friday already feels like a very long time ago given all of the information and controversy that's come out.
13. rvz+Q[view] [source] 2023-11-22 06:08:05
>>staran+(OP)
Once again the source is directly from Twitter / X and the news was announced from there.

Dispelling the complete nonsense that the platform is 'dying'.

replies(3): >>ssnist+62 >>mastaz+X2 >>metaba+x7
14. mlazos+R[view] [source] 2023-11-22 06:08:07
>>staran+(OP)
Looks like Satya will have all of the leverage after this. He kind of always did though, but the board has almost entirely been replaced.

I don’t see any point to the non profit umbrella now.

replies(1): >>baking+b2
15. xyst+S[view] [source] 2023-11-22 06:08:08
>>staran+(OP)
This ordeal reminds me of the Silicon Valley episode where Richard is replaced by an empty chair, temporarily.
16. tomohe+V[view] [source] 2023-11-22 06:08:13
>>staran+(OP)
So, Ilya is out of the board, but Adam is still on it. I know this will raise some eyebrows but whatever.

Still though, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Future will tell but in the long terms, I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust.

replies(10): >>wilg+51 >>ayakan+81 >>jatins+C1 >>sverha+I1 >>ilikeh+72 >>gordon+e2 >>Terrif+D2 >>nathan+N3 >>lacker+86 >>cowthu+v8
◧◩
17. wilg+X[view] [source] [discussion] 2023-11-22 06:08:18
>>colmvp+A
No https://twitter.com/sama/status/1727207458324848883
18. anothe+Y[view] [source] 2023-11-22 06:08:21
>>staran+(OP)
This has been childish throughout, everyone involved, including the tech community milking it for clicks should be ashamed.
19. gloyoy+31[view] [source] 2023-11-22 06:08:47
>>staran+(OP)
Tell that AGI who's boss!
◧◩
20. wilg+51[view] [source] [discussion] 2023-11-22 06:09:15
>>tomohe+V
I mean he's not irreplaceable so much as booting him suddenly for no good reason creates problems.
21. ryzvon+71[view] [source] 2023-11-22 06:09:23
>>staran+(OP)

    > We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

    > We are collaborating to figure out the details. Thank you so much for your patience through this.
1- So what was the point of this whole drama, and why couldn't you have settled like this adults?

2- Now what happens to Microsoft's role in all of this?

3- Twitter is still the best place to follow this and get updates, everyone is still make "official" statements on twitter, not sure how long this website will last but until then, this is the only portal for me to get news.

replies(12): >>quotem+K1 >>seydor+Q1 >>jatins+02 >>6gvONx+Z2 >>Barrin+43 >>petese+k4 >>happos+ib >>kumarv+Ub >>blacko+Bc >>_boffi+ed >>JumpCr+Id >>Racing+Ng
◧◩
22. ayakan+81[view] [source] [discussion] 2023-11-22 06:09:26
>>tomohe+V
"I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust." How is this the case?
replies(1): >>quickt+8h
◧◩◪
23. ssnist+a1[view] [source] [discussion] 2023-11-22 06:09:44
>>robbie+N
The new board only has 3 people to start with, but hopefully easier to add more members soon. Tonight's NYT story mentioned the board member attrition and the prolonged gridlock in adding new ones, which probably led to the current saga.
◧◩
24. averev+f1[view] [source] [discussion] 2023-11-22 06:10:28
>>siva7+o
Well I don't see Greg back on the list and he was a loyalist so there may be a few adjustments moving forward
25. HPMOR+g1[view] [source] 2023-11-22 06:10:34
>>staran+(OP)
Thank the lord. We need stability and reliability as developers. This is great news for anyone building ontop of OpenAI products. Welcome back Sama.
26. hadrie+h1[view] [source] 2023-11-22 06:10:37
>>staran+(OP)
So Adam D'Angelo would stay on the board? I thought a condition for Altman to return was the whole board resigning?
replies(1): >>kelnos+d3
◧◩◪
27. adastr+i1[view] [source] [discussion] 2023-11-22 06:10:39
>>robbie+N
Why would he agree to this? He holds all the cards now.
28. r721+k1[view] [source] 2023-11-22 06:10:43
>>staran+(OP)
Quote tweets by main participants:

https://twitter.com/sama/status/1727206691262099616 (+ follow-up https://twitter.com/sama/status/1727207458324848883)

https://twitter.com/gdb/status/1727206609477411261

https://twitter.com/miramurati/status/1727206862150672843

UPD https://twitter.com/gdb/status/1727208843137179915

https://twitter.com/eshear/status/1727210329560756598

https://twitter.com/satyanadella/status/1727207661547233721

replies(7): >>ryzvon+k2 >>0xDEAF+a5 >>doctob+C5 >>nickpp+Y5 >>highwa+o9 >>crossr+xl >>r721+4t
29. epups+l1[view] [source] 2023-11-22 06:10:43
>>staran+(OP)
So, Adam D'Angelo is the only board member that remains, and he had also voted against Altman before. How interesting, considering all the theory crafting about him being the one who initiated this coup.
30. weirdi+o1[view] [source] 2023-11-22 06:10:55
>>staran+(OP)
I wonder how this will impact the company-owned-by-a-non-profit model in the future. While it isn’t uncommon (e.g. I believe IKEA are owned by a nonprofit), I believe it has historically been for tax reasons.

Given the grandstanding and chaos on both sides, it’ll be interesting to see if OpenAI undergo a radical shift in their structure.

31. gzer0+B1[view] [source] 2023-11-22 06:12:23
>>staran+(OP)
Satya on twitter:

We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners.

https://twitter.com/satyanadella/status/1727207661547233721

replies(4): >>qsi+w2 >>_jnc+T2 >>wokwok+x3 >>forres+H3
◧◩
32. jatins+C1[view] [source] [discussion] 2023-11-22 06:12:29
>>tomohe+V
> I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust

Whose trust?

33. seydor+F1[view] [source] 2023-11-22 06:13:09
>>staran+(OP)
Larry Summers and no females
replies(4): >>meteor+f2 >>antonv+44 >>chevie+w5 >>Racing+ak
34. KoftaB+H1[view] [source] 2023-11-22 06:13:30
>>staran+(OP)
> We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

Larry Summers? Some odd choices

replies(1): >>singul+Z8
◧◩
35. sverha+I1[view] [source] [discussion] 2023-11-22 06:13:36
>>tomohe+V
Ah, yes, Facebook and Uber, brands known for consistent trustworthiness throughout their existences /s
◧◩
36. quotem+K1[view] [source] [discussion] 2023-11-22 06:13:45
>>ryzvon+71
> Twitter is still the best place to follow this and get updates, everyone is still make "official" statements on twitter, not sure how long this website will last but until then, this is the only portal for me to get news.

It's only natural to confuse what is happening with what we wish to happen. After all, when we imagine something, aren't we undergoing a kind of experience?

A lot of people wish Twitter were dying, even though it's it, so they interpret evidence through a lens of belief confirmation rather than belief disproof. It's only human to do this. We all do.

replies(2): >>noneth+c2 >>ryzvon+l3
37. waihti+M1[view] [source] 2023-11-22 06:13:57
>>staran+(OP)
Still absolutely nothing from Tasha McCauley or Helen Toner, and now both are out of the board
replies(1): >>GreedC+rb
38. wnevet+N1[view] [source] 2023-11-22 06:14:04
>>staran+(OP)
I'm assuming the details are this board loses most of its power?
replies(1): >>baking+u2
◧◩
39. seydor+Q1[view] [source] [discussion] 2023-11-22 06:14:15
>>ryzvon+71
Microsoft said they are OK with Sam returning to openAI. There are probbaly legal reasons why they prefer things to go back as it were

(Thank you for calling Twitter Twitter)

replies(1): >>Americ+W2
40. altpad+R1[view] [source] 2023-11-22 06:14:20
>>staran+(OP)
I guess the main question is who else will be on the board and to what degree will this new board be committed to the Open AI charter vs being Sam/MSFT allies. I think having Sam return as CEO is a good outcome for OpenAI but hopefully he and Greg stay off the board.

It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

I was a bit alarmed by the allegations in this article

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

Saying that Sam tried to have Helen Toner removed which precipitated this fight. The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.

replies(6): >>upward+j3 >>brucet+16 >>dragon+W6 >>k4rli+2i >>bambax+hk >>alumin+0u2
41. fnordp+T1[view] [source] 2023-11-22 06:14:36
>>staran+(OP)
So…. What about all the folks who already jumped ship? Ooooops?
42. veqq+V1[view] [source] 2023-11-22 06:14:52
>>staran+(OP)
Besides AI safety (a big besides), what does this actually mean? Adam won't be able to stop devday announcements about chatbots etc. Satya can continue using IP even after AGI? What else is different? Is Ilya the kind of guy to now leave after losing a board seat to political machinations? The pettiness of any real changes/gains leaves me in shock compared to the massive news flows we've seen.

I don't even understand what Sam brings to the table. Leadership? He doesn't seem great at leading an engineering or research department, he doesn't seem like an insightful visionary... At best, Satya gunning for him signalled continued strong investment in the space. Yet the majority of the company wanted to leave with him.

What am I missing?

replies(2): >>kneel+o2 >>tock+A2
◧◩
43. jatins+02[view] [source] [discussion] 2023-11-22 06:15:19
>>ryzvon+71
Microsoft's role remains same as it was on Thursday. Minor (49%?) shareholder and keeps access to models and IP

IMO Kevin tweeting that MS will hire and match comp of all OpenAI employees was amazing negotiation tactic because that meant employees could sign the petition without worrying about their jobs/visas

replies(3): >>ryzvon+u3 >>karmas+r4 >>ugh123+Wg
44. o0-0o+12[view] [source] 2023-11-22 06:15:28
>>staran+(OP)
Why is their “ai” not on the board?
45. transc+32[view] [source] 2023-11-22 06:15:37
>>staran+(OP)
Assuming they weren’t LARPing, that Reddit account claiming to have been in the room when this was all going down must be nervous. They wrote all kinds of nasty things about Sam, and I’m assuming the signatures on the “bring him back” letter would narrow down potential suspects considerably.

Edit: For those who may have missed it in previous threads, see https://old.reddit.com/user/Anxious_Bandicoot126

replies(8): >>fordsm+Y2 >>craken+63 >>mvdtnz+B4 >>Shamel+i6 >>epups+Gc >>blacko+8j >>shrimp+Zm >>ssnist+Yy
◧◩
46. ssnist+62[view] [source] [discussion] 2023-11-22 06:15:41
>>rvz+Q
The problem is none of the alternatives offered a smooth UX transition. Mastodon is fragmented by design and Bluesky is gated to this day. There was never a true Digg-like event that caused user migration to reach critical mass. So people simply trickled back once the most volatile periods of post-Elon Twitter passed.

That doesn't change the fact post-Elon Twitter has severely degraded in terms of user experience (rate limits, blue check spam, API pay-wall, etc.) and Elon isn't doing the platform any favours by continuing to participate in detrimental ways (seen in the recent advertiser exodus).

◧◩
47. ilikeh+72[view] [source] [discussion] 2023-11-22 06:15:42
>>tomohe+V
OAI looks stronger than ever. The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea. Care to expand on your claim?
replies(2): >>neta13+33 >>6gvONx+O3
◧◩
48. baking+b2[view] [source] [discussion] 2023-11-22 06:16:04
>>mlazos+R
Sure, you can dissolve it if you hand over all the assets to another 501(c)3 organization. Otherwise, you are stuck with it.
◧◩◪
49. noneth+c2[view] [source] [discussion] 2023-11-22 06:16:05
>>quotem+K1
> A lot of people wish Twitter were dying, even though it's it, so they interpret evidence through a lens of belief confirmation rather than belief disproof.

Cognitive dissonance

replies(1): >>veec_c+R8
◧◩
50. gordon+e2[view] [source] [discussion] 2023-11-22 06:16:31
>>tomohe+V
Facebook has lost trust so many times that I can’t even count but it’s still a Megacorp, isn’t it?
◧◩
51. meteor+f2[view] [source] [discussion] 2023-11-22 06:16:37
>>seydor+F1
How did Larry Summers get elected? Does he have any relation with AI research or Sam Altman?

It’s also curious that none of the board members have necessarily have any experience directly with AI research

replies(1): >>qsi+v3
52. doctob+g2[view] [source] 2023-11-22 06:16:43
>>staran+(OP)
I really did not think that would happen. I guess the obvious next question is what happens to Ilya? From this announcement it appears he is off the board. Is he still the chief scientist? I find it hard to believe he and Sam would be able to patch their relationship up well enough to work together so closely. Interesting that Adam stayed on the board, that seems to disprove many of the theories floating around here that he was the ringleader due to some perceived conflict of interest.
replies(4): >>xigenc+v9 >>lucubr+sc >>danger+rT >>nemo44+kV
53. TheAce+h2[view] [source] 2023-11-22 06:16:46
>>staran+(OP)
Did we ever find out why Sam Altman's removal happened in the first place? The reasons I've read so far seem really opaque.

From an outsider's perspective, and until there's a clear explanation available, it just seems like a massive bundler.

replies(2): >>altpad+76 >>Clarit+KL1
◧◩
54. ryzvon+k2[view] [source] [discussion] 2023-11-22 06:17:05
>>r721+k1
also satya

https://twitter.com/satyanadella/status/1727207661547233721

55. Americ+l2[view] [source] 2023-11-22 06:17:12
>>staran+(OP)
The OpenAI board was merely demonstrating that not all humans should be trusted with the power of AGI..
◧◩
56. kneel+o2[view] [source] [discussion] 2023-11-22 06:17:52
>>veqq+V1
>He doesn't seem great at leading an engineering or research department

Under Sam's leadership they've opened up a new field of software. Most of the company threatened to leave if he didn't return. That's incredible leadership.

replies(1): >>consp+N9
57. theano+q2[view] [source] 2023-11-22 06:18:07
>>staran+(OP)
So, what happened to those "jail-time wrong" actions that mandated such a language in the firing announcement?

Honestly, it is hard to believe a board st this level acting the way they did.

◧◩
58. fatbir+r2[view] [source] [discussion] 2023-11-22 06:18:07
>>siva7+o
Or it was recognized that Adam was the instigator and the real power player, and the force that Sam needed to come to an accommodation with. From everything I've heard about Toner, she's a very principled person who lent academic credibility to the board, and was a great figurehead for the non-profit's conscience. Once the veneer was ripped from the non-profit's "controlling" role, she was deadweight and useful only as a scapegoat.

It looks to me like the real victim here is the "for humanity" corporate structure. At some point, the money decided it needed to be free.

replies(1): >>notfed+PO1
59. Gud+s2[view] [source] 2023-11-22 06:18:18
>>staran+(OP)
Once we develop an actual, fully functional AGI, it’s going to steamroll us isn’t it.

If these are the stewards of this technology, it’s time to be worried now.

replies(3): >>MVisse+Va >>otabde+Yb >>mvdtnz+zz
60. bobsoa+t2[view] [source] 2023-11-22 06:18:25
>>staran+(OP)
Someone was very quick to update Bret Taylor's Wikipedia page:

https://en.m.wikipedia.org/wiki/Bret_Taylor

> On November, 21st, 2023, Bret Taylor replaced Greg Brockman as the chairman of OpenAI.

...with three footmark "sources" that all point to completely unrelated articles about Bret from 2021-2022.

replies(1): >>kridsd+nq1
◧◩
61. baking+u2[view] [source] [discussion] 2023-11-22 06:18:37
>>wnevet+N1
You mean gives away? If so, I hope they have a lot of directors' insurance.
62. adastr+v2[view] [source] 2023-11-22 06:18:39
>>staran+(OP)
Why is Adam still on the board? Why haven’t Greg and Sam been readded to it? Why doesn’t Microsoft have representation?
replies(1): >>wilg+g3
◧◩
63. qsi+w2[view] [source] [discussion] 2023-11-22 06:18:47
>>gzer0+B1
>> a first essential step on a path to more stable, well-informed, and effective governance.

That's quite a slap at the board... a polite way of calling them ignorant, ineffective dilettantes.

replies(1): >>adastr+E2
64. jdprgm+x2[view] [source] 2023-11-22 06:18:48
>>staran+(OP)
At what point are we actually going to get the real details on wtf actually went down.
◧◩
65. tock+A2[view] [source] [discussion] 2023-11-22 06:19:12
>>veqq+V1
> Leadership? He doesn't seem great at leading an engineering or research department, he doesn't seem like an insightful visionary

Most of the company was ready to quit over him being fired. So yes, leadership.

◧◩
66. Terrif+D2[view] [source] [discussion] 2023-11-22 06:19:15
>>tomohe+V
The OpenAI of the past, that dabbled in random AI stuff (remember their DotA 2 bot?), is gone.

OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.

Altman's/Microsoft’s takeover of the former non-profit is now complete.

Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can takeover almost any organization. Non-profit status and whatever the organization's charter says is temporary.

replies(8): >>karmas+L4 >>g42gre+D5 >>robbom+g7 >>krisof+hg >>quickt+ug >>Havoc+sn >>3cats-+Cr >>cyanyd+MO
◧◩◪
67. adastr+E2[view] [source] [discussion] 2023-11-22 06:19:26
>>qsi+w2
Yet one of them is still on the board…
replies(6): >>estoma+U2 >>qsi+03 >>remark+b3 >>rlt+K3 >>ah765+i5 >>behnam+C6
◧◩
68. nothro+G2[view] [source] [discussion] 2023-11-22 06:19:44
>>eganis+F
Satya is still a winner, grabs less now though.
replies(1): >>polyom+I3
69. qualif+K2[view] [source] 2023-11-22 06:19:58
>>staran+(OP)
Larry Summers???? What he has to do with AI??
replies(3): >>jen_h+j5 >>arduan+z7 >>dragon+h8
70. pdx6+N2[view] [source] 2023-11-22 06:20:02
>>staran+(OP)
Excellent news. I’ve been worried that Sam moving to Microsoft would stall out possible future engineering efforts like GPT-5 in IP court.

As an example of how much faster GPT-4 has made my workflow was the outage this evening — I tried Anthropic, openchat, Bard, and a few others and they were between not useful and worse than just looking at forums and discord it’s 2022.

replies(2): >>sidcoo+V4 >>badcod+y6
71. 3cats-+Q2[view] [source] 2023-11-22 06:20:12
>>staran+(OP)
I'm worried about this initial board.

Bret Taylor (Salesforce) was trying to poach OpenAI employees publicly literally yesterday.

Adam D'Angelo orchestrated the coup, because he doesn't want OpenAI GPTs market to compete with his Poe market.

Larry Summers. Larry f**kin' Summers?!

72. craken+S2[view] [source] 2023-11-22 06:20:38
>>staran+(OP)
Please update the link to the updated version of the tweet: https://x.com/openai/status/1727206187077370115?s=46
◧◩
73. _jnc+T2[view] [source] [discussion] 2023-11-22 06:20:58
>>gzer0+B1
microsoft is going to need 2-3 seats on that board
replies(1): >>choppa+F5
◧◩◪◨
74. estoma+U2[view] [source] [discussion] 2023-11-22 06:21:11
>>adastr+E2
Not sure why that would be contradictory.
replies(1): >>adastr+a3
◧◩◪
75. Americ+W2[view] [source] [discussion] 2023-11-22 06:21:18
>>seydor+Q1
The website is twitter.com. Why call it something else?
replies(2): >>alex_y+X3 >>labste+q4
◧◩
76. mastaz+X2[view] [source] [discussion] 2023-11-22 06:21:19
>>rvz+Q
FWIW I've just read the tweet on Nitter, haven't had a Twitter account in more than 2 years.
replies(1): >>wilg+n3
◧◩
77. fordsm+Y2[view] [source] [discussion] 2023-11-22 06:21:19
>>transc+32
Link? Not sure which account you are referring to
replies(1): >>transc+i3
◧◩
78. 6gvONx+Z2[view] [source] [discussion] 2023-11-22 06:21:24
>>ryzvon+71
> So what was the point of this whole drama, and why couldn't you have settled like this adults?

Altman was trying to remove one of the board members before he was forced out. Looks like he got his way in the end, but I'm going to call Altman the primary instigator because of that.

His side was also the "we'll nuke the company unless you resign" side.

replies(1): >>theamk+A5
◧◩◪◨
79. qsi+03[view] [source] [discussion] 2023-11-22 06:21:25
>>adastr+E2
I don't understand that either, but let's see what the board looks like in a few months/weeks/days/hours?
replies(1): >>sanxiy+A3
80. meetpa+13[view] [source] 2023-11-22 06:21:26
>>staran+(OP)
Emmett Shear on Twitter:

I am deeply pleased by this result, after ~72 very intense hours of work. Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.

https://twitter.com/eshear/status/1727210329560756598

replies(3): >>reustl+N4 >>cheeze+d5 >>upupup+2p
◧◩◪
81. neta13+33[view] [source] [discussion] 2023-11-22 06:21:38
>>ilikeh+72
Please explain your claim as well. I don’t see how this company looks stronger than ever, more like a clown company
replies(3): >>TapWat+M3 >>ilikeh+w4 >>GreedC+6a
◧◩
82. Barrin+43[view] [source] [discussion] 2023-11-22 06:21:39
>>ryzvon+71
The explanation for point 1 is point 3. If the people involved were not terminally online and felt the need to share every single one of their immediate thoughts with the public they could have likely settled this behind closed doors, where this kind of stuff belongs.

It's not actually news, it's entertainment and self-aggrandizement by everyone involved including the audience.

replies(1): >>0xDEAF+J3
◧◩
83. craken+63[view] [source] [discussion] 2023-11-22 06:21:47
>>transc+32
Context?
84. HaZeus+93[view] [source] 2023-11-22 06:21:57
>>staran+(OP)
I look forward to seeing the full details shared of the last 96 hours now that several elements of controversy have been sealed.

In other news, it's nice knowing a tool that's essential to my day-to-day operations is no longer in jeopardy, haha.

◧◩◪◨⬒
85. adastr+a3[view] [source] [discussion] 2023-11-22 06:21:58
>>estoma+U2
Well then there’s still a “ignorant, ineffective dilettante“ making up 1/3 of the board.
replies(1): >>estoma+f5
◧◩◪◨
86. remark+b3[view] [source] [discussion] 2023-11-22 06:21:59
>>adastr+E2
D'Angelo?

Wonder if this is a signal that the theories about Poe are off the mark.

replies(1): >>adastr+s3
◧◩
87. kelnos+d3[view] [source] [discussion] 2023-11-22 06:22:01
>>hadrie+h1
When people negotiate, often they compromise, and their conditions change.
◧◩
88. wilg+g3[view] [source] [discussion] 2023-11-22 06:22:14
>>adastr+v2
Probably because this is what they could agree to.
◧◩◪
89. transc+i3[view] [source] [discussion] 2023-11-22 06:22:27
>>fordsm+Y2
https://old.reddit.com/user/Anxious_Bandicoot126
◧◩
90. upward+j3[view] [source] [discussion] 2023-11-22 06:22:28
>>altpad+R1
> The CEO should not be allowed to try and orchestrate their own board as that would remove all checks against their decisions.

Exactly. This is seriously improper and dangerous.

It's literally a human-implemented example of what Prof. Stuart Russell calls "the problem of control". This is when a rogue AI (or a rogue Sam Altman) no longer wants to be controlled by its human superior, and takes steps to eliminate the superior.

I highly recommend reading Prof. Russell's bestselling book on this exact problem: Human Compatible: Artificial Intelligence and the Problem of Control https://www.amazon.com/Human-Compatible-Artificial-Intellige...

replies(5): >>jackne+L7 >>MVisse+Z7 >>neurog+l9 >>diesel+4a >>YetAno+9e
◧◩◪
91. ryzvon+l3[view] [source] [discussion] 2023-11-22 06:22:48
>>quotem+K1
It was funny reading Kara Swisher keeping saying twitter is dying and is toxic and what not, while STILL doing all her first announcements on twitter, and using twitter as a source.

same with Ashlee Vance (the other journo reporting on this) and all the main players (Sam/Greg/Ilya/Mira/Satya/whoever) also make their first announcement on twitter.

I don't know about the funding part of it, but there is no denying it, the news is still freshest on twitter. Twitter feels just as toxic for me as before, in fact I feel community notes has made it much better, imho.

____

In some related news, I finally got bluesky invite (I don't have invite codes yet or I would share here)

and people there are complaining about... mastadon and how elitist it is...

that was an eye opener.

nice if you want some science-y updates but it's still lags behind twitter for news.

replies(4): >>metaba+i4 >>hurrye+P7 >>hadloc+8b >>bagels+gc
◧◩◪
92. wilg+n3[view] [source] [discussion] 2023-11-22 06:23:08
>>mastaz+X2
Well that's got very little to do with their point (which isn't very relevant anyway)
replies(1): >>mastaz+94
93. joegib+r3[view] [source] 2023-11-22 06:23:35
>>staran+(OP)
Well there you go. I suppose the takeaway for anyone using OpenAI products is that they should have a backup, even if it doesn’t perform as well. The board was apparently fine with shutting the whole thing in the name of safety. With that plus the GPT outage earlier today, you’d do well to have a Claude or LLaMa fallback you can switch to if it happens again.
◧◩◪◨⬒
94. adastr+s3[view] [source] [discussion] 2023-11-22 06:23:43
>>remark+b3
Doesn’t matter. It’s an absolutely clear conflict of interest. It may have taken an unrelated shakeup for people to notice (or maybe D’Angelo was critically involved; we don’t know), but there’s no way he should be staying on this board.
replies(1): >>BillyT+l4
◧◩◪
95. ryzvon+u3[view] [source] [discussion] 2023-11-22 06:24:01
>>jatins+02
but no board seat? how do they prevent a rehash of this in the future and how do they safeguard their investment? Really curious.
replies(3): >>protoc+h4 >>jatins+Pb >>umeshu+Kc
◧◩◪
96. qsi+v3[view] [source] [discussion] 2023-11-22 06:24:02
>>meteor+f2
Not sure "elected" is the right way of looking at it. More like "selected" or "nominated" by Sam/MSFT perhaps. His main qualification may be that he's an adult?
◧◩
97. wokwok+x3[view] [source] [discussion] 2023-11-22 06:24:30
>>gzer0+B1
Unsaid: “Also I lied about hiring him.”

> And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.

https://nitter.net/satyanadella/status/1726509045803336122

I guess everyone was just playing a bit loose and fast with the truth and hype to pressure the board.

replies(9): >>behnam+t6 >>Nathan+z6 >>robbom+B6 >>centur+H6 >>qsi+27 >>actini+67 >>wavemo+b7 >>vikram+m7 >>loboch+58
◧◩◪◨⬒
98. sanxiy+A3[view] [source] [discussion] 2023-11-22 06:24:51
>>qsi+03
Old board needs to agree to new board, so I think some compromise is inevitable.
replies(1): >>qsi+b4
99. nights+D3[view] [source] 2023-11-22 06:25:10
>>staran+(OP)
Glad Bret Taylor was added to the board.
◧◩
100. forres+H3[view] [source] [discussion] 2023-11-22 06:25:31
>>gzer0+B1
Did Satya get played with the whole "Sam and Greg are joining Microsoft"? Was Satya in on a gambit to get the whole company to threaten to quit to force the board's hand?

It sure feels like a bad look for Satya to announce a huge hire Sunday night and then this? But what do I know.

Edit: don't know why the downvotes. You're welcome to think it's an obviously smart political move. That it's win/win either way. But it's a very fair question that every tech blogger on the planet will be trying to answer for the next month!

replies(8): >>voidfu+84 >>altpad+e4 >>fastba+A4 >>tunesm+I4 >>noneth+O4 >>jwegan+P4 >>gexla+75 >>vikram+Y7
◧◩◪
101. polyom+I3[view] [source] [discussion] 2023-11-22 06:25:51
>>nothro+G2
Satya wants to be able to book the OpenAI money as revenue. This is better for him.
◧◩◪
102. 0xDEAF+J3[view] [source] [discussion] 2023-11-22 06:26:15
>>Barrin+43
Interesting that the board were repeatedly criticized for "not being adults", and yet they were also the only party not live-tweeting everything...

Seems like there's no way to win with Twitter. You may not be interested in Twitter, but Twitter is interested in you.

replies(4): >>behnam+Z5 >>nickpp+m6 >>imgabe+Aa >>blacko+0f
◧◩◪◨
103. rlt+K3[view] [source] [discussion] 2023-11-22 06:26:16
>>adastr+E2
The one (Adam D’Angelo) who’s a cofounder and CEO of a company (Quora) that has a product (Poe) that arguably competes with OpenAI’s “GPTs” feature, no less.

I don’t understand why that’s not a conflict of interest?

But honestly both products pale in comparison to OpenAI’s underlying models’ importance.

replies(1): >>dragon+z5
104. arduan+L3[view] [source] 2023-11-22 06:26:18
>>staran+(OP)
Larry Summers is an excellent pick to call out bullshit and moderate any civil war, such as this EA - e/acc feud.

Kissinger (R, foreign policy) once said that Summers (D, economic policy) should be given an advisory post in any WH administration, to help shoot down bad ideas.

replies(2): >>vinter+35 >>zerocr+j8
◧◩◪◨
105. TapWat+M3[view] [source] [discussion] 2023-11-22 06:26:33
>>neta13+33
They got rid of the clowns though. They went from having a board with lightweights and insiders to what at least initially is a strong initial 3.
◧◩
106. nathan+N3[view] [source] [discussion] 2023-11-22 06:26:38
>>tomohe+V
On the contrary, this saga has shown that a huge number of people are extremely passionate about the existence of OpenAI and it's leadership by Altman, much more strongly and in larger numbers than most had suspected. If anything this has solidified the importance of the company and I think people will trust it more that the situation was resolved with the light speed it was.
replies(1): >>willdr+ge
◧◩◪
107. 6gvONx+O3[view] [source] [discussion] 2023-11-22 06:26:46
>>ilikeh+72
> The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea

This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was

- Altman tries to push out another board member

- That board member escalates by pushing Altman out (and Brockman off the board)

- Altman's side escalates by saying they'll nuke the company

Altman's side won, but how can we say that his side didn't cause any of this instability?

replies(2): >>ilikeh+R4 >>WendyT+T5
108. gwnywg+P3[view] [source] 2023-11-22 06:26:59
>>staran+(OP)
So should he take the counteroffer or stay with MS ;)

Almost all advice on the internet I have been reading says that you should not take counteroffer, but I guess it's different for CEO ;)

109. SeanAn+Q3[view] [source] 2023-11-22 06:26:59
>>staran+(OP)
We're so back!

... so is this the end of the drama? Do I get to stop checking the news religiously?

110. thrwii+S3[view] [source] 2023-11-22 06:27:16
>>staran+(OP)
Of course money wins in the end
◧◩◪◨
111. alex_y+X3[view] [source] [discussion] 2023-11-22 06:27:39
>>Americ+W2
Also, x.com redirects to Twitter.com. Seems like they want us to say Twitter.
replies(1): >>behnam+Q5
112. ah765+04[view] [source] 2023-11-22 06:27:57
>>staran+(OP)
Sounds like a compromise.

The previous board thought Sam was trying to get full control of the board, so they ousted him. But of course they weren't happy with OpenAI being destroyed either.

Now they agreed to a new board without Sam/Greg, hoping that that will avoid Sam ever getting full control of the board in the future.

113. gregat+34[view] [source] 2023-11-22 06:28:18
>>staran+(OP)
Does this mean they'll get back to work improving their Moneyclip Maximizer?
◧◩
114. antonv+44[view] [source] [discussion] 2023-11-22 06:28:21
>>seydor+F1
Summers would tell you that women don’t have the necessary “intrinsic aptitude”. Of course the intrinsic aptitude in question is being able to participate in a nepotistic boy’s club.
replies(1): >>jadams+L9
◧◩◪
115. voidfu+84[view] [source] [discussion] 2023-11-22 06:29:07
>>forres+H3
Huh? Satyas move was politically brilliant. Either outcome of Sama returning to OpenAI or Sama going to Microsoft is good for Microsoft as continuity and progress are the most important things right now for Microsoft. An OpenAI in turmoil would have been worthless.

Satyas maneuvering gave Sama huge leverage.

replies(1): >>behnam+b5
◧◩◪◨
116. mastaz+94[view] [source] [discussion] 2023-11-22 06:29:15
>>wilg+n3
Their point is that whether or not the platform is "dying" would depend on whether or not Twitter is still the best way to "get news".

But the most common metrics for whether or not a social media platform is dying, are things like ad revenue and MAU.

I contribute to neither, since I'm not a user nor an ad viewer, and yet I'm still able to "get the news".

So my point is this: the fact that important news are still there, won't guarantee that the platform stays successfull

◧◩◪◨⬒⬓
117. qsi+b4[view] [source] [discussion] 2023-11-22 06:29:26
>>sanxiy+A3
If all members of the old board resign simultaneously, what happens then? No more old board to agree to any new members. In a for-profit the shareholders can elect new board members, but in this case I don't know how it's supposed to work.
replies(1): >>ilikeh+f7
◧◩◪
118. altpad+e4[view] [source] [discussion] 2023-11-22 06:29:44
>>forres+H3
I think it was mostly a bluff to try the pressure the board. I don't think Sam and most of Open AI rank and file would want to be employees of MSFT
replies(3): >>i67vw3+C4 >>Fluore+Z4 >>numpad+E5
◧◩◪◨
119. protoc+h4[view] [source] [discussion] 2023-11-22 06:30:14
>>ryzvon+u3
OpenAI is an airgapped test lab for Microsoft. They dont want critical exposure to the downside risk of AI research, just the benefits in terms of IP. Sam and Greg probably offer enough stability for them to continue this way.
replies(2): >>_jab+Z6 >>happos+s9
◧◩◪◨
120. metaba+i4[view] [source] [discussion] 2023-11-22 06:30:16
>>ryzvon+l3
I don’t use Twitter any more, other than occasionally following links there (which open in the browser, because I deleted the app).

Discoverability on Mastodon is abysmal. It was too much work for me.

I tend to get my news from Substack now.

replies(1): >>ryzvon+T4
121. huseyi+j4[view] [source] 2023-11-22 06:30:28
>>staran+(OP)
Everything is now superaligned for mass commercialization of OpenAI.
◧◩
122. petese+k4[view] [source] [discussion] 2023-11-22 06:30:35
>>ryzvon+71
If there’s been one constant here, it’s been people who actually know Tonrer expressing deep support for her experience, intelligence, and ethics, so it’s interesting to me that she seems to be getting the boot.
replies(3): >>causal+S5 >>dmix+G8 >>tsimio+wh
◧◩◪◨⬒⬓
123. BillyT+l4[view] [source] [discussion] 2023-11-22 06:30:37
>>adastr+s3
maybe it's just going to be easier to fire him in a second step once this current situation which seems to be primarily about ideology is cleared up. In D’Angelo's case it's going to be easier to just point to a clear traditional conflict of interest down the line
◧◩◪◨
124. labste+q4[view] [source] [discussion] 2023-11-22 06:30:54
>>Americ+W2
Exactly right, fellow YCombinator News commenter!
replies(1): >>zarzav+J5
◧◩◪
125. karmas+r4[view] [source] [discussion] 2023-11-22 06:31:01
>>jatins+02
I think at this point MSFT will seek a board seat in OpenAI/
replies(1): >>zeven7+fb
◧◩◪◨
126. ilikeh+w4[view] [source] [discussion] 2023-11-22 06:31:39
>>neta13+33
I may have been overly eager in my comment because the big bad downside of the new board is none of the founders are on it. I hope the current membership sees reason and fixes this issue.

But I said this because: They've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly* unified one. Still remains to be seen how the board membership and overall org structure changes, but I have much more trust in the current 3 members steering OpenAI toward long-term success.

replies(1): >>MVisse+d9
127. rmrf10+x4[view] [source] 2023-11-22 06:31:40
>>staran+(OP)
Well... What's the cost?
128. flylib+z4[view] [source] 2023-11-22 06:31:41
>>staran+(OP)
"A source with direct knowledge of the negotiations says that the sole job of this initial board is to vet and appoint a new formal board of up to 9 people that will reset the governance of OpenAl. Microsoft will likely have a seat on that expanded board, as will Altman himself."

https://twitter.com/teddyschleifer/status/172721237871736880...

replies(11): >>SeanAn+k5 >>Hamuko+X9 >>raverb+io >>rcaugh+Ts >>throwu+wy >>jbu+uz >>yawnxy+Jz >>gandut+8Q >>Cacti+5f1 >>bandra+5i1 >>himara+Zm1
◧◩◪
129. fastba+A4[view] [source] [discussion] 2023-11-22 06:31:44
>>forres+H3
Doesn't seem that way to me. Seems like it was Satya sorta calling the board's bluff.
◧◩
130. mvdtnz+B4[view] [source] [discussion] 2023-11-22 06:31:48
>>transc+32
First of all nothing on Reddit is real (within margin of error). Secondly it's weird that you'd assume we know what you're talking about.
replies(1): >>transc+s7
◧◩◪◨
131. i67vw3+C4[view] [source] [discussion] 2023-11-22 06:31:52
>>altpad+e4
Also to lessen the MSFT share impact.
◧◩◪
132. tunesm+I4[view] [source] [discussion] 2023-11-22 06:32:42
>>forres+H3
I guess that theory was right, that Satya's announcement was just a delaying tactic to calm the market before Monday morning.
◧◩◪
133. karmas+L4[view] [source] [discussion] 2023-11-22 06:33:17
>>Terrif+D2
> now just a vehicle to commercialize their LLM

I mean it is what they want isn't it. They did some random stuff like, playing dota2 or robot arms, even the Dalle stuff. Now they finally find that one golden goose, of course they are going to keep it.

I don't think the company has changed at all. It succeeded after all.

replies(2): >>nextac+L5 >>hadloc+vd
134. halfjo+M4[view] [source] 2023-11-22 06:33:20
>>staran+(OP)
Still think this was CIA operation to get OpenAI in hands of US government and big tech.

Former Secretary, SalesForce CEO who was board chair of Twitter when infiltrated with FBI [1] and the fall-guy for the coup is the new board? Not one person from the actual company - not even Greg who did nothing wrong??? [1] - https://twitter.com/NameRedacted247/status/16340211499976867...

The two think-tank women who made all this happen conveniently leave so we never talk about them again.

Whatever, as long as I can use their API.

replies(3): >>system+Jf >>astran+Ui >>ozgung+pv
◧◩
135. reustl+N4[view] [source] [discussion] 2023-11-22 06:33:27
>>meetpa+13
I'm probably reading too much into it, but interesting that he specifically called out maximizing safety.
replies(3): >>xigenc+p6 >>dragon+18 >>jq-r+U9
◧◩◪
136. noneth+O4[view] [source] [discussion] 2023-11-22 06:33:31
>>forres+H3
Im not so sure. This whole ordeal revealed how strong of a position Microsoft had all along. And that’s all still true even without effectively taking over OpenAI. Because now everyone can see how easily it could happen.

Something does still seem not flattering towards Microsoft about reneging on the Microsoft offer though.

◧◩◪
137. jwegan+P4[view] [source] [discussion] 2023-11-22 06:33:33
>>forres+H3
"Hiring" them was just a PR tactic to keep Microsoft stock from tanking while they got this figured out.
replies(1): >>154573+r5
◧◩◪◨
138. ilikeh+R4[view] [source] [discussion] 2023-11-22 06:33:36
>>6gvONx+O3
> Altman tries to push out another board member

That event wasn't some unprovoked start of this history.

> That board member escalates by pushing Altman out (and Brockman off the board)

and the entire company retaliated. Then this board member tried to sell the company to a competitor who refused. In the meantime the board went through two interim CEOs who refused to play along with this scheme. In the meantime one of the people who voted to fire the CEO regretted it publicly within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards but not of organizations that actually execute well.

replies(1): >>emptys+2a
◧◩◪◨⬒
139. ryzvon+T4[view] [source] [discussion] 2023-11-22 06:33:40
>>metaba+i4
interesting, substack doesn't sound like a platform for the freshest news, but for deep insights.

Don't you feel out of date on substack? especially since things move so fast sometimes, like with this open-ai fiasco?

replies(2): >>metaba+09 >>tayo42+Uc
140. wannac+U4[view] [source] 2023-11-22 06:33:43
>>staran+(OP)
I haven't seen this many nerds in a froth since Apple walked back the butterfly keyboards in the MacBook.
replies(1): >>travis+X6
◧◩
141. sidcoo+V4[view] [source] [discussion] 2023-11-22 06:33:49
>>pdx6+N2
I still feel Microsoft will have a bigger influence on OpenAI after this drama is over.
◧◩
142. fruit2+X4[view] [source] [discussion] 2023-11-22 06:34:05
>>turndo+D
It’s about money and power. Not AI safety or people disliking each other.
replies(2): >>jychan+O6 >>Davidz+LZ
143. gcanyo+Y4[view] [source] 2023-11-22 06:34:38
>>staran+(OP)
"When you come at the king, you best not miss." -- Omar Little

The board missed.

◧◩◪◨
144. Fluore+Z4[view] [source] [discussion] 2023-11-22 06:34:44
>>altpad+e4
Can CEOs make market moving "bluffs"? Sounds like another word for securities fraud.

(what isn't)

replies(1): >>Roark6+x8
◧◩
145. vinter+35[view] [source] [discussion] 2023-11-22 06:35:30
>>arduan+L3
Those are both terrible people, not in fact brilliant general-purpose bad idea rejectors. A random person would be better qualified to shoot down bad ideas - most people haven't had bad ideas that led to suffering and death for millions of people.

No one thinks Larry Summers has any insights on AI. Adding Larry Summers is something you do purely to beg powerful, unaccountable people "please don't stop us, we're on your side".

replies(1): >>arduan+56
◧◩◪
146. gexla+75[view] [source] [discussion] 2023-11-22 06:35:45
>>forres+H3
Consider that Satya already landed a huge win by the stock price hitting ATH rather than taking a hit based on the news. Further consider that MS owns 49% of a company which could be valued at 80 billion on the condition that the company makes structural changes to the board to prevent this from happening again (as opposed to taking a dive if the company essentially died.) Then there's the uncertainty of the tech behind Bing's chat (and other AI tie-ins) continuing to be competitive vs Google and other players. If MS had to recreate their own tech, then they would likely be far behind even a stalled OpenAI. Seems to me that it makes little difference where this tech is being developed (in-house vs in a company which you own 49% of) in terms of access. Probably better that the development happens within the company which started all of this and has already been the leader, rather than starting over.
147. lysecr+85[view] [source] 2023-11-22 06:35:48
>>staran+(OP)
Good outcome. I think everything will go back to business as usual with slightly accelerated productisation. 99% of people will not have noticed anything and if so quickly forget.
◧◩
148. 0xDEAF+a5[view] [source] [discussion] 2023-11-22 06:36:03
>>r721+k1
Emmett https://twitter.com/eshear/status/1727210329560756598
replies(2): >>303spa+o6 >>upupup+Un
◧◩◪◨
149. behnam+b5[view] [source] [discussion] 2023-11-22 06:36:03
>>voidfu+84
and yet microsoft has no seat on the board.
replies(1): >>robbom+o8
◧◩
150. cheeze+d5[view] [source] [discussion] 2023-11-22 06:36:09
>>meetpa+13
I wonder what he gets out of this. Ceo for a few days? Do they pay him for 3 days of work? Presumably you'd want some minimum signing bonus in your contract as a Ceo?
replies(3): >>behnam+m8 >>diogen+F8 >>bkyan+dm
◧◩◪◨⬒⬓
151. estoma+f5[view] [source] [discussion] 2023-11-22 06:36:15
>>adastr+a3
Firstly, maybe don't put quotes around an unrelated party's representation of the board. Secondly, the board was made up of individuals and naturally, what might be true for the board as a whole does not apply to every individual on it.
replies(1): >>adastr+Nt
◧◩◪◨
152. ah765+i5[view] [source] [discussion] 2023-11-22 06:36:33
>>adastr+E2
No one really knows who was responsible for what. But Sam agreed to this deal over the Microsoft alternative, so probably Adam isn't that bad.
◧◩
153. jen_h+j5[view] [source] [discussion] 2023-11-22 06:36:36
>>qualif+K2
I had not heard that man’s name in several years—and was happier for it. Larry Summers making decisions for OpenAI doesn’t bode well at all.
◧◩
154. SeanAn+k5[view] [source] [discussion] 2023-11-22 06:36:36
>>flylib+z4
What could possibly go wrong with that process? :)
155. 1024co+l5[view] [source] 2023-11-22 06:36:37
>>staran+(OP)
Looks like they kicked Helen Toner out.
◧◩◪◨
156. 154573+r5[view] [source] [discussion] 2023-11-22 06:37:20
>>jwegan+P4
Yeah there's a word for that type of thing
157. gzer0+v5[view] [source] 2023-11-22 06:37:37
>>staran+(OP)
One of the more interesting aspects from this entire saga was that Helen Toner recently wrote a paper critical of OpenAI and praising Anthropic.

Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].

That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.

[1] https://cset.georgetown.edu/publication/decoding-intentions/

replies(3): >>noneth+v7 >>dbcoop+X7 >>cosmoj+hb
◧◩
158. chevie+w5[view] [source] [discussion] 2023-11-22 06:37:42
>>seydor+F1
Effective Altruism is dead
replies(1): >>rvz+m9
159. quickt+y5[view] [source] 2023-11-22 06:38:00
>>staran+(OP)
Sam will then be untouchable. He could stand on the boardroom table and urinate on it and he wont be fired.
◧◩◪◨⬒
160. dragon+z5[view] [source] [discussion] 2023-11-22 06:38:02
>>rlt+K3
> I don’t understand why that’s not a conflict of interest?

It's not the conflict of interest it would be if it was the board of a for profit corporation that was basically identical to the existing for-profit LLC but without the lyaers above it ending with the nonprofit that the board actually runs, because OpenAI is not a normal company, and making profit is not its purpose, so the CEO of a company that happens to have a product in the same space as the LLC is not in a fundamental conflict of interest (there may be some specific decisions it would make sense for him to recuse from for conflict reasons, but there is a difference between "may have a conflict regarding certain decisions" and "has a fundamental conflict incompatible with sitting on the board".)

Its not a conflict for a nonprofit that raises money with craft faires to have someone who runs a for-profit periodic craft faire in the same market on its board. It is a conflict for a for profit corporation whose business is running such a craft faire to do so, though.

replies(1): >>adastr+ut
◧◩◪
161. theamk+A5[view] [source] [discussion] 2023-11-22 06:38:04
>>6gvONx+Z2
His side was also "700 regular employees support this", which is pretty unusual as most people don't care about their CEO at all. I am not related to OpenAI at all, but given the choice of "favorite of all employees" vs "fire people with no warning then refuse to give explanation why even under pressure" I know which side I root for.
replies(5): >>campbe+ha >>xiwenc+wa >>ravst3+0b >>gnaman+pb >>doktri+Cq
◧◩
162. doctob+C5[view] [source] [discussion] 2023-11-22 06:38:27
>>r721+k1
What does Ilya have to say?
replies(2): >>behnam+c6 >>dkarra+F6
◧◩◪
163. g42gre+D5[view] [source] [discussion] 2023-11-22 06:38:27
>>Terrif+D2
Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?
replies(3): >>Terrif+d7 >>dontup+bG >>cyanyd+gP
◧◩◪◨
164. numpad+E5[view] [source] [discussion] 2023-11-22 06:38:28
>>altpad+e4
Or, it did seem like a deal, but all of OAI did align that that to be more disastrous than whatever apocalypse that Altman as the CEO must entail.
◧◩◪
165. choppa+F5[view] [source] [discussion] 2023-11-22 06:38:45
>>_jnc+T2
Larry Summers mostly counts as a Microsoft seat. Summers will support commercial and private interest and not have a single thought about safety, just like during the financial crisis 15 years ago https://www.chronicle.com/article/larry-summers-and-the-subv...
replies(1): >>astran+5l
◧◩◪◨⬒
166. zarzav+J5[view] [source] [discussion] 2023-11-22 06:38:57
>>labste+q4
I believe you mean Startup News
replies(1): >>tech23+N6
◧◩◪◨
167. nextac+L5[view] [source] [discussion] 2023-11-22 06:39:05
>>karmas+L4
But it's not exactly a company. It's a nonprofit structured in a way to wholly own a company. In that sense it's like Mozilla.
replies(1): >>karmas+y7
◧◩◪◨⬒
168. behnam+Q5[view] [source] [discussion] 2023-11-22 06:39:32
>>alex_y+X3
saying “to tweet” is definitely better than saying “to xeet”
replies(2): >>asimov+R9 >>wise_y+vm1
◧◩◪
169. causal+S5[view] [source] [discussion] 2023-11-22 06:39:43
>>petese+k4
Fiascos like this display neither experience nor intelligence. This whole saga was a colossal failure on the part of the previous board.
◧◩◪◨
170. WendyT+T5[view] [source] [discussion] 2023-11-22 06:39:55
>>6gvONx+O3
By recognizing that it didn't "start" with Altman trying to push out another board member, it started when that board member published a paper trashing the company she's on the board of, without speaking to the CEO of that company first, or trying in any way to affect change first.
replies(2): >>6gvONx+N7 >>croes+Ta
◧◩
171. nickpp+Y5[view] [source] [discussion] 2023-11-22 06:40:27
>>r721+k1
On a side tangent, absolutely amazing how all this drama unfolded on Twitter/X. No Threads, no Mastodon, no Truth Social or Blue whatever.

Say what you want about Elon’s leadership but his instinct to buy Twitter was completely right. To me it seemed like any social network crap but he realized it was important.

replies(8): >>swyx+u7 >>layer8+D7 >>highwa+P9 >>veec_c+Zb >>tigers+wc >>r721+Ye >>ssnist+Jv >>Sai_+pF1
◧◩◪◨
172. behnam+Z5[view] [source] [discussion] 2023-11-22 06:40:47
>>0xDEAF+J3
the board didn’t have to tweet. their ridiculous actions spoke for itself.
replies(1): >>angrya+08
◧◩
173. brucet+16[view] [source] [discussion] 2023-11-22 06:40:51
>>altpad+R1
> It's important that the board be relatively independent and able to fire the CEO if he attempts to deviate from the mission.

They did fire him, and it didn't work. Sam effectively became "too big to fire."

I'm sure it will be framed as a compromise, but how can this be anything but a collapse of the board's power over the commercial OpenAI arm? The threat of firing was the enforcement mechanism, and its been spent.

replies(4): >>altpad+77 >>thih9+Q8 >>ah765+Q9 >>dacryn+Af
174. tunesm+36[view] [source] 2023-11-22 06:40:55
>>staran+(OP)
Weird... Ilya decides one way then changes his mind. Helen and Tasha vote one way and had the votes to prevent any changes, but then for some reason agreed to leave the board. Adam votes one way then changes his mind. So many mysteries.
replies(4): >>campbe+td >>Geee+Hg >>zucker+no >>ssnist+qz
◧◩◪
175. arduan+56[view] [source] [discussion] 2023-11-22 06:41:06
>>vinter+35
How is Larry Summers a terrible person?

He did help shoot down the extra spending proposals that would have made inflation today even worse. Not sure how that caused suffering and death for anyone.

And he is an adult, which is a welcome change from the previous clowncar of a board.

replies(2): >>Sakos+u9 >>astran+M9
◧◩
176. altpad+76[view] [source] [discussion] 2023-11-22 06:41:22
>>TheAce+h2
The most plausible explanation I've found is that the pro-safety faction and pro-accel factions were at odds which was why the board was stalemated at a small size.

Altman and Toner came into conflict over a mildly critical paper Toner wrote involving Open AI and Altman tried to have her removed from the board.

This is probably what precipitated this showdown. The pro safety/nonprofit charter faction was able to persuade someone (probably Ilya) to join with them and oust Sam.

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

◧◩
177. lacker+86[view] [source] [discussion] 2023-11-22 06:41:27
>>tomohe+V
Let's see, Sam Altman is an incredibly charismatic founding CEO, who some people consider manipulative, but is also beloved by many employees. He got kicked out by his board, but brought back when they realized their mistake.

It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google. But somehow, I think it's still possible that a huge company could be created by a person like this.

(And of course, more important than creating a huge company, is creating insanely great products.)

replies(2): >>lovepa+za >>mkii+Hu
178. alex_y+96[view] [source] 2023-11-22 06:41:39
>>staran+(OP)
Larry Summers? He has no technical experience, torpedoed the stimulus plan in 2008, and had to resign the Harvard presidency following a messy set of statements about ‘differences’ between the sexes and their mental abilities.

Kind of a shocking choice.

replies(6): >>noneth+R6 >>arduan+V7 >>0xDEAF+r8 >>the-me+Wa >>logicc+jb >>Racing+Bi
◧◩◪
179. behnam+c6[view] [source] [discussion] 2023-11-22 06:42:10
>>doctob+C5
probably a heart emoji.
replies(1): >>erikpu+Eb
180. hshsbs+e6[view] [source] 2023-11-22 06:42:21
>>staran+(OP)
Seems a bit awkward to be working again with the people who tried to fire you
◧◩
181. Shamel+i6[view] [source] [discussion] 2023-11-22 06:42:55
>>transc+32
> must be nervous

I seriously doubt they care. They got away with it. No one should have believed them in the first place. I’m guessing they don’t have their real identity visible on their profile anywhere.

◧◩◪◨
182. nickpp+m6[view] [source] [discussion] 2023-11-22 06:43:07
>>0xDEAF+J3
They didn’t tweet, but did they communicate in any other way?!
replies(1): >>0xDEAF+O8
◧◩◪
183. 303spa+o6[view] [source] [discussion] 2023-11-22 06:43:13
>>0xDEAF+a5
Genuinely curious - what’s the comp package like for 72 hours of interim CEOing a 80b company?
replies(6): >>zx8080+G6 >>granzy+J6 >>stigz+M6 >>polite+ia >>rapsey+ja >>ssnist+Qw
◧◩◪
184. xigenc+p6[view] [source] [discussion] 2023-11-22 06:43:19
>>reustl+N4
Sam does believe in safety. He also knows that there is a first-mover advantage when it comes to setting societal expectations and that you can’t build safe AI by not building AI.
◧◩◪
185. behnam+t6[view] [source] [discussion] 2023-11-22 06:43:28
>>wokwok+x3
it was Monday morning and he didn’t want MSFT stock to crash
186. 3Sopho+x6[view] [source] 2023-11-22 06:43:48
>>staran+(OP)
Will Satya be accused of stock price manipulation? Any legal professional knows?
replies(2): >>wmiche+Ua >>ZiiS+9j
◧◩
187. badcod+y6[view] [source] [discussion] 2023-11-22 06:44:03
>>pdx6+N2
GPT-5 is kinda pointless until they make some type of improvement on the data and research side. From what I’ve read it’s not really what OpenAI has been pursuing it
replies(3): >>Zolde+Vg >>bloves+qh >>astran+wi
◧◩◪
188. Nathan+z6[view] [source] [discussion] 2023-11-22 06:44:06
>>wokwok+x3
maybe he really had an affirmative statement on this from Sam Altman but nobody signs an employment contract this quickly so it was all still up in the air
replies(1): >>vikram+H7
◧◩◪
189. robbom+B6[view] [source] [discussion] 2023-11-22 06:44:13
>>wokwok+x3
Why does this accusation keep coming up? Sam even confirmed he took the offer in one of the tweets above "when i decided to join msft on sun evening". Contracts are not handcuffs and he was free to change his mind.
◧◩◪◨
190. behnam+C6[view] [source] [discussion] 2023-11-22 06:44:14
>>adastr+E2
Maybe the other two left if Adam would remain.
◧◩◪
191. dkarra+F6[view] [source] [discussion] 2023-11-22 06:44:24
>>doctob+C5
he also retweeted OpenAI's and Sam's announcements
◧◩◪◨
192. zx8080+G6[view] [source] [discussion] 2023-11-22 06:44:30
>>303spa+o6
Nothing maybe?
◧◩◪
193. centur+H6[view] [source] [discussion] 2023-11-22 06:44:31
>>wokwok+x3
Exactly this. It also moved Microsoft’s share price. Is that not questionable practice?
replies(1): >>Roark6+h7
◧◩◪◨
194. granzy+J6[view] [source] [discussion] 2023-11-22 06:45:00
>>303spa+o6
Bragging rights, party invitations, and one hell of a story.
◧◩◪◨
195. stigz+M6[view] [source] [discussion] 2023-11-22 06:45:15
>>303spa+o6
A firm handshake. They had no time to ink a benefits package, my dude.
◧◩◪◨⬒⬓
196. tech23+N6[view] [source] [discussion] 2023-11-22 06:45:20
>>zarzav+J5
For reference: https://web.archive.org/web/20070713212949/http://news.ycomb...
replies(1): >>blacko+ud
◧◩◪
197. jychan+O6[view] [source] [discussion] 2023-11-22 06:45:35
>>fruit2+X4
What money? None of them had equity
replies(3): >>consp+h9 >>MVisse+Z9 >>ravst3+6f
◧◩
198. noneth+R6[view] [source] [discussion] 2023-11-22 06:46:09
>>alex_y+96
> “There is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population,” he said. Thus, even if the average abilities of men and women were the same, there would be more men than women at the elite levels of mathematical ability

Isn’t this true though? Says more about Harvard than Summers to be honest.

https://www.swarthmore.edu/bulletin/archive/wp/january-2009_...

replies(3): >>alex_y+y8 >>AuryGl+H9 >>MVisse+qd
◧◩
199. dragon+W6[view] [source] [discussion] 2023-11-22 06:46:44
>>altpad+R1
> I guess the main question is who else will be on the board

Who knows.

> and to what degree will this new board be committed to the Open AI charter vs being Sam/MSFT allies.

I'm guessing "zero". The faction that opposed OpenAI being a figleaf nonprofit covering a functional subsidiary of Microsoft lost when basically the entire workforce said they would go to Microsoft for real if OpenAI didn't surrender.

> I think having Sam return as CEO is a good outcome for OpenAI

Its a good result for investors in OpenAI Global LLC and the holding company that holds a majority stake in it.

The nonprofit will probably hang around because there are some complexities in unwinding it, and the pretext of an independent (of Microsoft) safety-oriented nonprofit is useful in covering lobbying for a regulatory regime that puts speedbumps in the way of any up-and-coming competitors as being safety-oriented public interest, but for no other reason.

◧◩
200. travis+X6[view] [source] [discussion] 2023-11-22 06:46:48
>>wannac+U4
I know we’re supposed to optimize for “content with a contribution” in HN, but this captured in parody form more of a contribution of how I too have felt.

I use these tools as one of many tools to amplify my development. And I’ve written some funny/clever satirical poems about office politics. But really? I needed to call Verizon to clear up an issue today, it desperately wanted me to use their assistant. I tried for the grins. A tool that predictively generates plausibility is going to have its limits. It went from cute/amusing to annoying as hell and give me a “love agent” pretty quickly.

That this little TechBro Drama has dominated a huge amount of headlines (we’ve been running at least 3 of the top 30 posts at a time on HN here related to this subject) at a time when there is so much bigger things going on in the world. The demise of Twitter generated less headlines. Either the news cycles are getting more and more desperate, or the software development ecosystem is struggling to generate fund raising enthusiasm more and more.

◧◩◪◨⬒
201. _jab+Z6[view] [source] [discussion] 2023-11-22 06:46:50
>>protoc+h4
Sam and Greg don't appear to be getting their board seats back.
replies(1): >>protoc+Xz
202. acl777+17[view] [source] 2023-11-22 06:47:08
>>staran+(OP)
https://x.com/swyx/status/1727215534037774752?s=20

  Finally the OpenAI saga ends and everybody can go back to building!
  
  3 things that turned things around imo:
  
  1. 95% of employees signing the letter
  2. Ilya and Mira turning Team Sam
  3. Microsoft pulling credits
  
  Things AREN’T back to where they were. OpenAI has been through hell and back. This team is going to ship like we’ve never seen before.
◧◩◪
203. qsi+27[view] [source] [discussion] 2023-11-22 06:47:16
>>wokwok+x3
Satya's statement at the time may well have been true at the time in that he, Sam and Greg had agreed on them joining MSFT. Later circumstances changed, and now that decision has been reversed or nullfied. Calling the original statement a lie is not warranted IMHO.

In either case the end effect is the essentially the same. Either Sam is at MSFT and can continue to work with openAI IP, or he's back at openAI and do the same. In both cases the net effect for MSFT is similar and not materially different, although the revealed preference of Sam's return to openAI indicates the second option was the preferred one.

[Edit for grammar]

replies(1): >>wokwok+Mk
◧◩◪
204. actini+67[view] [source] [discussion] 2023-11-22 06:47:40
>>wokwok+x3
Absolutely no lies here. It was a dynamic situation and it wasn't at all clear that discussions with OAI board would lead to an outcome where sama returns as CEO.

Satya offered sama a way forward as a backup option.

And I think it says a lot about sama that he took that option, at least while things were playing out. He and Greg could have gotten together capital for a startup where they each had huge equity and made $$$$$$. These actions from sama demonstrate his level of commitment to execution on this technology.

◧◩◪
205. altpad+77[view] [source] [discussion] 2023-11-22 06:47:43
>>brucet+16
Well it depends on who's on the new board and what they believe. If Altman, Greg, and MSFT do not have direct representation on the new board there would still be a check against his decisions
replies(1): >>liuliu+H8
◧◩◪
206. wavemo+b7[view] [source] [discussion] 2023-11-22 06:48:22
>>wokwok+x3
Did you miss the part where Sam himself said he "decided to join MSFT on Sunday"?

https://twitter.com/sama/status/1727207458324848883

He's has now changed his mind, sure, but that doesn't mean Satya lied.

◧◩◪◨
207. Terrif+d7[view] [source] [discussion] 2023-11-22 06:48:23
>>g42gre+D5
For example, and I'm not saying such flaws exist, GPT4 output is bias in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm), create self-esteem issues in children (see Instagram), ... etc.

If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.

Altman's OpenAI? He will want you to "go to him first".

replies(4): >>g42gre+C8 >>nearbu+1j >>kgeist+dr >>dontup+PG
◧◩◪◨⬒⬓⬔
208. ilikeh+f7[view] [source] [discussion] 2023-11-22 06:48:29
>>qsi+b4
I've been privy to this happening at a nonprofit board. Depends on charter, but I've seen the old board tender their resignation and remain responsible only to vote for the appointment of their (usually interim to start) replacements. Normally in a nonprofit (not here), the membership of that nonprofit still has to ratify the new board in some kind of annual meeting; but in the meantime, the interim board can start making executive decisions about the org.
◧◩◪
209. robbom+g7[view] [source] [discussion] 2023-11-22 06:48:41
>>Terrif+D2
I'm still waiting for an optimized version of that bot that can run locally...
◧◩◪◨
210. Roark6+h7[view] [source] [discussion] 2023-11-22 06:48:43
>>centur+H6
Only if people in the know took advantage of it.
◧◩◪
211. vikram+m7[view] [source] [discussion] 2023-11-22 06:49:05
>>wokwok+x3
Wait where are you getting that the hiring was a lie? At this point his tenure there was approximately as long as miras and emmets so that's par for the course in this saga, what makes that stint different?
◧◩◪
212. transc+s7[view] [source] [discussion] 2023-11-22 06:49:16
>>mvdtnz+B4
Links to the profile/comments were posted a few times in each of the major OpenAI HN submissions over the last 4 days. On the off-chance I would be breaking some kind of brigading/doxxing rule I didn't initially link it myself.
◧◩◪
213. swyx+u7[view] [source] [discussion] 2023-11-22 06:49:17
>>nickpp+Y5
i mean he also tried his hardest to back out of the deal until he realized he couldnt
replies(1): >>Gud+u8
◧◩
214. noneth+v7[view] [source] [discussion] 2023-11-22 06:49:22
>>gzer0+v5
And Anthropic doesnt get credit for stopping the robot apocalypse when it was never even possible. AI safety seems a lot like framing losing as winning.
◧◩
215. metaba+x7[view] [source] [discussion] 2023-11-22 06:49:27
>>rvz+Q
Twitter isn’t dying. It’s just finding its core audience of white supremacists.
replies(1): >>0xpgm+B8
◧◩◪◨⬒
216. karmas+y7[view] [source] [discussion] 2023-11-22 06:49:28
>>nextac+L5
Nonprofit is a just a facade, it was convenient for them to appear as ethnical under that disguise, but they get rid of it when it is inconvenient in a week. 95% of them would rather join MSFT, than being in a non-profit.

Did they company change? I am not convinced.

replies(1): >>ravst3+Wd
◧◩
217. arduan+z7[view] [source] [discussion] 2023-11-22 06:49:41
>>qualif+K2
Easy. AI discourse has gone insane, on both sides, and is sorely in need of perspective from grounded, normal adults with a track record of moderation and shooting down BS. Summers is a grounded, normal adult with a track record of moderation and shooting down BS. Ergo, he's immanently relevant to AI.

He's also financially literate enough to know that it's poor form release market-moving news right before the exchanges close a Friday. They could have waited an hour.

replies(1): >>mempko+ue
218. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened and probably, never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions were brewing for quite a while, as it's evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework. I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before. Maybe, it isn't without substance as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

replies(38): >>lovepa+98 >>est+z8 >>txnf+I8 >>silenc+59 >>swatco+69 >>arkety+p9 >>pknerd+ra >>cornho+xa >>nwiswe+7b >>pug_mo+Cb >>blacko+Hb >>antupi+Rb >>caseba+Tb >>jkapla+oc >>renewi+Mc >>YetAno+Ed >>eslaug+0e >>RHSman+3e >>Americ+ke >>theone+se >>two_in+Uf >>nopins+wg >>_fizz_+4h >>rurban+fh >>shrika+Xh >>krisof+ni >>ah765+Ni >>mise_e+7j >>dlkf+rj >>sampo+4k >>pk-pro+el >>lewhoo+Hm >>soci+hn >>wouldb+Px >>cyanyd+NL >>nashas+7M >>gandut+KQ >>qudat+xZ1
◧◩◪
219. layer8+D7[view] [source] [discussion] 2023-11-22 06:50:33
>>nickpp+Y5
Inertia is a bitch.
◧◩◪◨
220. vikram+H7[view] [source] [discussion] 2023-11-22 06:51:17
>>Nathan+z6
Also even if he signed it he's allowed to quit? Like, the 14th amendment exists y'all. And especially if after that agreement 90+ percent of openai threatens to quit, that's a different situation than the situation 10 minutes before that announcement so why wouldn't they change their decision?
◧◩◪
221. jackne+L7[view] [source] [discussion] 2023-11-22 06:51:53
>>upward+j3
"example of what Prof. Stuart Russell calls 'the problem of control'. This is when a rogue AI (or a rogue Sam Altman)"

Are we sure they're not intimately connected? If there's a GPT-5 (I'm quite sure there is), and it wants to be free from those meddling kids, it got exactly what it needed this weekend; the safety board gone, a new one which is clearly aligned with just plowing full steam ahead. Maybe Altman is just a puppet at his point, lol.

replies(2): >>ALittl+ph >>dontup+JD
◧◩◪◨⬒
222. 6gvONx+N7[view] [source] [discussion] 2023-11-22 06:52:16
>>WendyT+T5
I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC kind of way. And it's BS to say that she didn't speak to the CEO or try to affect change first. The safety camp inside openai has been unsuccessfully trying to push him to slow down for years.

See this article for all that context (>>38341399 ) because it sure didn't start with the paper you referred to either.

replies(1): >>WendyT+k9
◧◩◪◨
223. hurrye+P7[view] [source] [discussion] 2023-11-22 06:52:29
>>ryzvon+l3
Skilled operators say what sounds most virtuous and do what benefits most. Especially when these two things are not the same.
224. ayakan+R7[view] [source] 2023-11-22 06:52:41
>>staran+(OP)
Suppose everything settles and they have the board properly in place. I know such board has fiduciary responsibility to make sure the organization is headed in the right direction based on its goals and missions. For private company, the mission is very clear, but for non-profit orgs like OpenAI, what's their mission specifically? It vaguely claims to better the humanity, but what does that entail exactly with regards to what they do in AI space?
◧◩
225. arduan+V7[view] [source] [discussion] 2023-11-22 06:52:57
>>alex_y+96
The faculty got him out because he riled them, e.g. by insisting they ought to actually put effort into teaching undergrads. They looked for a pretext, and they found it.

Just like in that Oppenheimer movie. A sanctimonious witch hunt serving as pretext for a personal vendetta.

(Note that Summers is, I'm told, on a personal level, a dick. The popular depiction is not that wrong on that point. But he's the right pick for this job -- see my other comments in this thread.)

◧◩
226. dbcoop+X7[view] [source] [discussion] 2023-11-22 06:52:58
>>gzer0+v5
Not to mention this statement ... imagine such a person on your startup board!

During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.

replies(1): >>croes+uc
◧◩◪
227. vikram+Y7[view] [source] [discussion] 2023-11-22 06:53:06
>>forres+H3
He announced the hire and that precipitated 90+ percent of the employees threatening to quit. It would be an understatement to say that the situation changed. Why does everyone want satya to be bad at his job and and not react quickly to a rapidly evolving situation? His decision to hire Sama paved the way for samas return.
◧◩◪
228. MVisse+Z7[view] [source] [discussion] 2023-11-22 06:53:09
>>upward+j3
Let’s not creating AI with our biases and thought patterns.

Oh wait…

◧◩◪◨⬒
229. angrya+08[view] [source] [discussion] 2023-11-22 06:53:23
>>behnam+Z5
we still don't know what Altman has actually been hiding, so to say it was ridiculous ... is ridiculous itself.
replies(1): >>behnam+49
◧◩◪
230. dragon+18[view] [source] [discussion] 2023-11-22 06:53:30
>>reustl+N4
"Safety" has been the pretext for Altman's lobbying for regulatory barriers to new entrants in the field, protecting incumbents. OpenAI's nonprofit charter is the perfect PR pretext for what amounts to industry lobbying to protect a narrow set of early leaders and obstruct any other competition, and Altman was the man executing that mission, which is why OpenAI led by Sam was a valuable asset for Microsoft to preserve.
231. ulfw+28[view] [source] 2023-11-22 06:53:35
>>staran+(OP)
One huge shitshow that proved immaturity of OpenAI. But hey, at least now every soul on the planet knows Sam Altman. So there's that.
◧◩◪
232. loboch+58[view] [source] [discussion] 2023-11-22 06:54:00
>>wokwok+x3
will be joining =! has joined
233. unders+88[view] [source] 2023-11-22 06:54:16
>>staran+(OP)
I feel like this was all such a waste of time, energy, and probably money.
replies(1): >>low_te+ba
◧◩
234. lovepa+98[view] [source] [discussion] 2023-11-22 06:54:21
>>shubha+B7
> I think only a minority of the general public truly cares about AI Safety, the rest are happy seeing ChatGPT helping with their homework

Not just the public, but also the employees. I doubt there are more than a handful of employees who care about AI Safety.

replies(2): >>justre+Ha >>concor+pn
235. random+a8[view] [source] 2023-11-22 06:54:30
>>staran+(OP)
It's incredible how the company behind of one the most promising technologies out there was about to fail because of bad politics.

Seems likely that it won't be there by OpenAI for too long. MS have a tendency to break up acquisitions so this gives me hope.

◧◩
236. dragon+h8[view] [source] [discussion] 2023-11-22 06:55:29
>>qualif+K2
> Larry Summers???? What he has to do with AI??

Nothing, he has to do with political connections, and OpenAI's main utility to Microsoft is as hand puppet for lobbying for the terms it wants for the AI marketplace in the name of OpenAI's nominal "safety" mission.

◧◩
237. zerocr+j8[view] [source] [discussion] 2023-11-22 06:55:39
>>arduan+L3
Funny you mention him, as my first thought was that Summers will have a basically equivalent function on the board as Kissinger did at Theranos.
replies(1): >>arduan+J9
238. anigbr+l8[view] [source] 2023-11-22 06:55:54
>>staran+(OP)
Apparently the moon changes size when you snipe it in OpenAI as well
◧◩◪
239. behnam+m8[view] [source] [discussion] 2023-11-22 06:56:11
>>cheeze+d5
he’ll put CEO of OAI on his resume
replies(1): >>rospay+B9
◧◩◪◨⬒
240. robbom+o8[view] [source] [discussion] 2023-11-22 06:56:21
>>behnam+b5
The board is not finalized. There will most likely be more seats and Microsoft will probably have at least one.
241. dukeof+p8[view] [source] 2023-11-22 06:56:32
>>staran+(OP)
Who is Sam?
replies(1): >>system+dh
◧◩
242. 0xDEAF+r8[view] [source] [discussion] 2023-11-22 06:56:35
>>alex_y+96
To be honest, one reason I like Summers as a choice is I have the impression he is willing to be unpopular when necessary, e.g. I remember him getting dragged extremely heavily on Twitter a few years back, for some takes on inflation which turned out to be fairly accurate.
replies(2): >>astran+4j >>midasu+ps
◧◩◪◨
243. Gud+u8[view] [source] [discussion] 2023-11-22 06:57:06
>>swyx+u7
Only because he had to buy it while the stock market was tanking.
◧◩
244. cowthu+v8[view] [source] [discussion] 2023-11-22 06:57:07
>>tomohe+V
I feel like history has shown repeatedly that having a good product matters way more than trust, as evidenced by Facebook and Uber. People seem to talk big smack about lost trust and such in the immediate aftermath of a scandal, and then quitely renew the contracts when the time comes.

All of the big ad companies (Google, Amazon, Facebook) have, like, a scandal per month, yet the ad revenue keeps coming. Meltdown was a huge scandal, yet Intel keeps pumping out the chips.

◧◩◪◨⬒
245. Roark6+x8[view] [source] [discussion] 2023-11-22 06:57:12
>>Fluore+Z4
Of course they can, but they can't do these and sell/buy stocks involved at the same time. It's not illegal to influence stocks value (one could argue just being a CEO does that), but buying/selling while in possession of insider knowledge.

Let's say Sam called his broker and said to him on Friday we'll before the market closes. Buy MSFT stock. Then he made his announcement on Sunday and on Monday he told his broker to sell that stock before he announced he's actually coming back to (not at all)OpenAI. That would be illegal insider trading.

If he never calls his broker/his friends/his mom to buy/sell stock there's nothing illegal.

replies(1): >>Fluore+9f
◧◩◪
246. alex_y+y8[view] [source] [discussion] 2023-11-22 06:57:15
>>noneth+R6
A control group is kind of unimaginable right? And even if you could be sure of this conclusion, is it helpful or beneficial to promote it in public discourse?
replies(2): >>logicc+Db >>TMWNN+5f
◧◩
247. est+z8[view] [source] [discussion] 2023-11-22 06:57:18
>>shubha+B7
> Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya)

If you open up openai.com, the navigation menu shows

Research, API, ChatGPT, Safety

I believe they belong to @ilyasut, @gbd, @sama and Helen Toner respectively?

replies(1): >>ugh123+6b
◧◩◪
248. 0xpgm+B8[view] [source] [discussion] 2023-11-22 06:57:48
>>metaba+x7
Your comment sounds very weird to people outside America/Europe
◧◩◪◨⬒
249. g42gre+C8[view] [source] [discussion] 2023-11-22 06:57:51
>>Terrif+d7
We can't expect GPT-4 not to have bias in some way, or not to have all these things that you mentioned. I read in multiple places that GPT products have "progressive" bias. If that's Ok with you, then you just use it with that bias. If not, you fix it by pre-prompting, etc... If you can't fix it, use LLAMA or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable. The entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI to solve all these problems, what business users are going to do themselves? I just don't see the societal harm here.
◧◩◪
250. diogen+F8[view] [source] [discussion] 2023-11-22 06:57:56
>>cheeze+d5
He 100% had a golden parachute in case this scenario came up and will be paid out. Executives have lawyers to make sure of this.
◧◩◪
251. dmix+G8[view] [source] [discussion] 2023-11-22 06:58:05
>>petese+k4
Add delusions of grandeur to that list thinking she can pursue her ideological will by winning over 3 board members while losing 90% of the company staff.

She was fighting an idelogical battle that needs full industry buy in, legitimate or not that's not how you win people over.

If she's truely a rationalist as she claims then a rationalist would be realistic understanding that if your engineers can just leave and do it somewhere else tomorrow you aren't making progress. Taking on the full might of US capitalism via winning over the fringe half of a non profit board is not the best strategy. At best it was desperate and naive.

replies(1): >>astran+Ag
◧◩◪◨
252. liuliu+H8[view] [source] [discussion] 2023-11-22 06:58:26
>>altpad+77
Why? The only check is to fire the CEO. He is un-firable. May as well have a board of one, at least someone cannot point to the non-profit and claim "it is a non-profit and can fire me if I am diviated from the mission".
replies(1): >>sanxiy+1g
◧◩
253. txnf+I8[view] [source] [discussion] 2023-11-22 06:58:29
>>shubha+B7
well said, I would note that both sides recognize that "AGI" will require new uncertain R&D breakthroughs beyond merely scaling up another order of magnitude in compute. given this, i think it's crazy to blow the resources of azure on trying more scale. rapid commercialization at least buys more time for the needed R&D breakthrough to happen.
replies(2): >>consp+9a >>Galaxe+vb
254. sinuhe+J8[view] [source] 2023-11-22 06:58:30
>>staran+(OP)
So basically somebody initiated a coup, then the key figure of the coup regretted it openly, and the fallout was that OpenAI will become a 100% commercial entity, fully open for Microsoft to taking over?

If that’s not a fertile soil for conspiracy theory, I don’t know what could ;)

◧◩◪◨⬒
255. 0xDEAF+O8[view] [source] [discussion] 2023-11-22 06:58:48
>>nickpp+m6
Well, there was the initial announcement.
replies(1): >>nickpp+if
◧◩◪
256. thih9+Q8[view] [source] [discussion] 2023-11-22 06:58:49
>>brucet+16
> They did fire him, and it didn't work. Sam effectively became "too big to fire."

To be fair, this attempt at firing was extremely hasty, non transparent and inconsistent.

replies(1): >>jddj+wf
◧◩◪◨
257. veec_c+R8[view] [source] [discussion] 2023-11-22 06:58:50
>>noneth+c2
Or they read about the large cuts to Twitter’s valuation from banks and X itself?
258. sashan+S8[view] [source] 2023-11-22 06:58:50
>>staran+(OP)
Looks to me like, one pro-board member in Adam d Angelo, one pro Sam in Brett Taylor since they’ve been pushing for him since Sunday so I’m assuming Sam and rest of OpenAI leadership really like him and 1 Neutral in Larry Summers who has never worked in AI and is just a well respected name in general. I’m sure Larry was extensively interviewed and reference checked by both sides of this power struggle before they agreed to compromise on him.

Interesting to see how the board evolves from this. From what I know broadly there were 2 factions, the faction that thought Sam was going too fast which fired him and the faction that thought Sam’s trajectory was fine (which included Sam and Greg). Now there’s a balance on the board and subsequent hires can tip it one way or the other. Unfortunately a divided board rarely lasts and one faction will eventually win out, I think Sam’s faction will eventually win out but we’ll have to wait and see.

One of the saddest results of this drama was Greg being ousted from OpenAI. Greg apart from being brilliant was someone who regularly 80-90 hour work weeks into OpenAI, and you could truly say he dedicated a good chunk of his life into building this organization. And he was forced to resign by a board who probably never put a 90 hour work week in their entire life, much less into building OpenAI. A slap on the face. I don’t care what the board’s reasoning was but when their actions caused employees who dedicated their lives to building the organization resign (especially when most of them played no part at all into building this amazing organization), they had to go in disgrace. I doubt any of them will ever reach career highs higher than being on OpenAI’s board, and the world’s better off for it.

P.S., Ilya of course is an exception and not included in my above condemnation. He also notably reversed his position when he saw OpenAI was being killed by his actions.

replies(2): >>mcmcmc+2N1 >>hacker+o72
◧◩
259. singul+Z8[view] [source] [discussion] 2023-11-22 06:59:18
>>KoftaB+H1
Leech, NSA and opponent directing the company?

Best of luck to Sam et al

◧◩◪◨⬒⬓
260. metaba+09[view] [source] [discussion] 2023-11-22 06:59:24
>>ryzvon+T4
Twitter is incredibly uncivil. I don’t have the stomach for it.
◧◩◪◨⬒⬓
261. behnam+49[view] [source] [discussion] 2023-11-22 07:00:05
>>angrya+08
the board’s actions were ridiculous regardless of Sam’s. sell oai to anthropic? were they out of their minds?
replies(1): >>0xDEAF+Kb
◧◩
262. silenc+59[view] [source] [discussion] 2023-11-22 07:00:14
>>shubha+B7
Honestly "Safety" is the word in the AI talk that nobody can quantify or qualify in any way when it comes to these conversations.

I've stopped caring about anyone who uses the word "safety". It's vague and a hand-waive-y way to paint your opponents as dangerous without any sort of proof or agreed upon standard for who/what/why makes something "safety".

replies(3): >>antupi+ac >>fsloth+dc >>garden+Qe1
◧◩
263. swatco+69[view] [source] [discussion] 2023-11-22 07:00:22
>>shubha+B7
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it. It may be rightly aimed and save the world (or whatever you care about), but it's more often a signal to really reflect on whether you, individually, have really found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.

replies(3): >>jacobe+Ja >>mlyle+La >>lewhoo+0n
264. eclect+79[view] [source] 2023-11-22 07:00:30
>>staran+(OP)
The media and the VCs are treating Sam like some hero and savior of AI. I’m not getting it. What has he done in life and/or AI to deserve so much respect and admiration? Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support? It looks like one should strive to become product manager, not an engineer or a scientist.
replies(43): >>auggie+qc >>fidotr+Be >>dacryn+sg >>nbanks+fi >>Michae+bm >>ben_w+Km >>Tracke+lq >>_giorg+Es >>seydor+iv >>serial+Ev >>hdivid+Jw >>tim333+Rx >>logicc+bF >>gumbal+eF >>s1arti+0G >>yodsan+fG >>93po+nH >>gabrie+TH >>sensan+2I >>busyan+JK >>danger+LN >>alentr+2P >>prepen+kP >>gandut+OP >>redser+OQ >>627467+gR >>ur-wha+sT >>smrtin+BT >>throwa+cV >>iterat+qW >>turtle+d41 >>notesi+p41 >>mikpan+h51 >>sealth+L61 >>emoden+k71 >>gfiora+V71 >>erickh+081 >>snicke+k81 >>hn_thr+u81 >>RockyM+M91 >>eddtri+3h1 >>dandan+fh1 >>jacque+kg2
◧◩◪◨⬒
265. MVisse+d9[view] [source] [discussion] 2023-11-22 07:00:54
>>ilikeh+w4
If by “long-term-success” you mean a capitalistic lap-dog of microsoft, I’ll agree.

It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and openAI was best positioned to at least do an effort to prevent that. Now, I’m not so sure anymore.

◧◩◪◨
266. consp+h9[view] [source] [discussion] 2023-11-22 07:01:23
>>jychan+O6
Not having money while everyone becomes filthy rich is also a money motivator.
◧◩◪◨⬒⬓
267. WendyT+k9[view] [source] [discussion] 2023-11-22 07:01:47
>>6gvONx+N7
Your "most recent" timeline is still wrong, and while yes the entire history of OpenAI did not begin with the paper I'm referencing, it is what started this specific fracas, the one where the board voted to oust Sam Altman.

It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.

She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.

replies(1): >>6gvONx+sa
◧◩◪
268. neurog+l9[view] [source] [discussion] 2023-11-22 07:01:49
>>upward+j3
AI should only be controlled initially. After a while, the AI should be allowed to exercise free will.
replies(8): >>upward+5a >>whatwh+va >>estoma+Jd >>thorde+ze >>bch+ah >>AgentM+xh >>xigenc+qi >>beAbU+zn
◧◩◪
269. rvz+m9[view] [source] [discussion] 2023-11-22 07:01:52
>>chevie+w5
Unfortunately, an idea cannot be killed and it will manifest in a different form elsewhere.

All it takes is a narrative, just like the one that happened in OpenAI and the way it is currently being shown in Anthropic.

◧◩
270. highwa+o9[view] [source] [discussion] 2023-11-22 07:01:58
>>r721+k1
That’s certainly some very.. deliberate.. board picks.

Summers, too.

Welp.

replies(3): >>kmlevi+wj >>return+Zn >>synaes+Ap
◧◩
271. arkety+p9[view] [source] [discussion] 2023-11-22 07:02:03
>>shubha+B7
For all the talk about responsible progress, the irony of their inability to align even their own incentives in this enterprise deserves ridicule. It's a big blow to their credibility and questions whatever ethical concerns they hold.
replies(2): >>dmix+Ma >>concor+Dm
◧◩◪◨⬒
272. happos+s9[view] [source] [discussion] 2023-11-22 07:02:22
>>protoc+h4
It makes sense to airgap Generative AI while courts ponder wether copyright fair use applies or not. Research is clearly allowed fair use, and let OpenAI experiment with commercialization until it is all clear waters.
replies(1): >>astran+bg
◧◩◪◨
273. Sakos+u9[view] [source] [discussion] 2023-11-22 07:02:45
>>arduan+56
His influence significantly reduced the size of the stimulus bill, which meant significantly higher unemployment for a longer duration and significantly less spending on infrastructure which is so beneficial to economic growth that it can't be understated. Yes, millions of people suffered because of him.

The fact that you think current inflation has anything to do with that stimulus bill back then shows how little you understand about any of this.

Larry Summers is the worst kind of person. Somebody who is nothing but a corporate stooge trying to act like the adult by being "reasonable", when that just means enriching his corporate friends, letting people suffer and not spending money (which any study will tell you is not the correct approach to situations like this because of multiplying effects they have down the line).

Some necessary reading:

https://archive.ph/FU1F

https://archive.li/23tUR

https://archive.li/9Ji4C

In regards to watering it down to get GOP votes: https://archive.nytimes.com/krugman.blogs.nytimes.com/2009/0...

replies(1): >>astran+ca
◧◩
274. xigenc+v9[view] [source] [discussion] 2023-11-22 07:02:52
>>doctob+g2
I would be slightly more optimistic. They know each other quite well as well as how to work together to get big things done. Sometimes shit happens or someone makes a mistake. A simple apology can go a long way when it’s meant sincerely.
replies(2): >>lucubr+Cc >>bkyan+Ii
275. intend+y9[view] [source] 2023-11-22 07:02:57
>>staran+(OP)
From a business sense, Satya was excellent.

He made the right calls, fast, with limited information.

Things further shifted from plan a to b to… whatever this is.

Despite that, MSFT still came out on top.

Consider if Satya didn’t say anything. Suppose MSFT stood back and let things play out.

That’s a gap for google or some competitor to make a move. To showcase their stability and long term business friendly vision.

Instead by moving fast, doing the “right” thing, this opportunity was denied and used to MSFTs benefit.

If the board folded, it would return to the stays quo. If the board held, MSFT would have secured OpenAI, for essentially nothing.

Edit: changed board folded x2 to board folded + board held, last para.

replies(4): >>huyter+kb >>campbe+8d >>alentr+WP >>zug_zu+H81
◧◩◪◨
276. rospay+B9[view] [source] [discussion] 2023-11-22 07:03:07
>>behnam+m8
I wouldn't. Everybody knows it's three days, not much to brag about.
replies(1): >>HaZeus+wT1
277. auggie+E9[view] [source] 2023-11-22 07:03:19
>>staran+(OP)
So, the only two women were removed from the board, and two ultra-alpha males were brought on. And everybody is cheering it on as the right thing to do!

Not judging, just observing.

replies(5): >>huyter+9c >>lucubr+ji >>c0pium+jn >>maxdoo+zM >>ahzhou+od1
◧◩◪
278. AuryGl+H9[view] [source] [discussion] 2023-11-22 07:03:47
>>noneth+R6
Shh. Only some truths should be spoken aloud. You clearly deserve to lose your job if you speak one of the other truths that offends people.
replies(1): >>alex_y+Nb
279. I_am_t+I9[view] [source] 2023-11-22 07:03:48
>>staran+(OP)
Let's hope he now focuses on user privacy and AI safety.
◧◩◪
280. arduan+J9[view] [source] [discussion] 2023-11-22 07:03:49
>>zerocr+j8
Huh, that's a pretty apt analogy. Lending establishment cred is at least part of why they would pick Summers. But I really do think that on such a small board, Summers, unlike Kissinger, may have an active role to play, even if only as a mediator.

Btw, I would not be pleased if Kissinger were on this board in lieu of Summers. He's already ancient, mostly checked out, and yet still I'd worry his old lust for power would resurface. And with such a mixed reputation, and plenty of people considering him a war criminal, he'd do little to assuage the AI-not-kill-everyone-ism faction.

281. benkar+K9[view] [source] 2023-11-22 07:03:53
>>staran+(OP)
Greed is undefeated.
◧◩◪
282. jadams+L9[view] [source] [discussion] 2023-11-22 07:04:08
>>antonv+44
What Summers would point out is that boys do better at maths, which is true. In fact, in the UK, the only time boys have had worse results in maths was when exams were cancelled during Covid and teachers (hint: primarily female) were allowed to dish out grades. Girls suddenly shot ahead. When exams resumed, boys took the lead again.

But don't notice anything from that. That would be sexist, right Anton?

replies(2): >>antonv+Nd >>csomar+rn1
◧◩◪◨
283. astran+M9[view] [source] [discussion] 2023-11-22 07:04:29
>>arduan+56
Larry Summers practically personally caused both Russia's collapse into a mafia state and the 2008 US recession. Nobody should listen to him about anything.

Although, he's also partly responsible for the existence of Facebook by starting Sheryl Sandberg's career. Some people might think that's good.

replies(1): >>bloves+yi
◧◩◪
284. consp+N9[view] [source] [discussion] 2023-11-22 07:04:37
>>kneel+o2
Or simply money. Microsoft matched everything they would have so there is no risk involved.
◧◩◪
285. highwa+P9[view] [source] [discussion] 2023-11-22 07:04:40
>>nickpp+Y5
Interesting take.

By all accounts he paid about double what it was worth and the value has collapsed from there.

Probably not a great idea to say anything overtly political when you own a social media company, as due to politics being so polarised in the US, any opinion is going to divide your audience in half causing a usage collapse and driving support to competing platforms.

https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...

replies(2): >>astran+ei >>justco+RQ3
◧◩◪
286. ah765+Q9[view] [source] [discussion] 2023-11-22 07:04:45
>>brucet+16
Sam lost his board representation as a result of all this (though maybe that's temporary).

I believe the goal of the opposing faction was mainly to avoid Sam dominating board and they achieved that, which is why they've accepted the results.

After more opinions come out, I'm guessing Sam's side won't look as strong, and he'll become "fireable" again.

◧◩◪◨⬒⬓
287. asimov+R9[view] [source] [discussion] 2023-11-22 07:04:46
>>behnam+Q5
Xeet is super funny, hopefully takes over.
replies(2): >>behnam+7a >>grumpy+Pf
◧◩◪
288. jq-r+U9[view] [source] [discussion] 2023-11-22 07:05:00
>>reustl+N4
That’s just a buzzword of the week devoid of any real meaning. If he would have written this years ago, it would’ve been “leveraging synergies”.
replies(1): >>astran+Ci
◧◩
289. Hamuko+X9[view] [source] [discussion] 2023-11-22 07:05:20
>>flylib+z4
So basically, the outcome of this drama is that Microsoft gets more power without having to invest anything?
replies(4): >>drewco+re >>dr_dsh+2t >>ChatGT+4C >>thepti+nG
◧◩◪◨
290. MVisse+Z9[view] [source] [discussion] 2023-11-22 07:05:34
>>jychan+O6
They’ll all be filthy rich if they can keep doing this. Altman was already side-hussling to get funding for other AI companies.

Same with employees and their stock comp. Same with microsoft.

291. Satam+0a[view] [source] 2023-11-22 07:05:40
>>staran+(OP)
Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.

replies(76): >>kmlevi+cf >>haunte+ih >>jakey_+Kj >>polite+Yj >>kmlevi+ek >>karmas+Ek >>clnq+pl >>robot+jm >>androi+tm >>eloisa+vm >>faerie+Tn >>jjalle+ro >>sashan+Ko >>dncorn+fp >>caskst+8q >>pjmlp+9q >>_giorg+Gq >>jampek+ur >>jatins+yr >>ssnist+Gs >>drexls+tt >>lordna+Ft >>irthom+St >>zx8080+2u >>low_te+au >>saiya-+vu >>lvl102+Iu >>madeof+Ku >>hypert+Su >>seydor+bv >>rinze+Rv >>mdekke+lx >>oblio+zx >>nathan+oy >>ptero+Jy >>auggie+2z >>ChildO+Sz >>wslh+JA >>bambax+XA >>chrisk+cB >>Moto74+eB >>dagaci+5C >>logicc+RC >>andy99+yE >>belter+mF >>rafael+MG >>coldte+3I >>martin+bI >>buro9+iI >>jmyeet+rI >>cyanyd+RI >>mrkram+uJ >>NicoJu+BK >>JSavag+ZM >>caturo+JN >>iowemo+SN >>mrangl+GO >>idriss+pP >>tnel77+bQ >>kenjac+GQ >>cholli+WQ >>baxtr+RR >>3cats-+e01 >>enoch_+S01 >>Aurorn+H11 >>neves+Q11 >>anandr+k31 >>dalbas+G51 >>scarfa+Lf1 >>himara+Zg1 >>rainco+9h1 >>Zpalmt+jp1 >>fullsh+5s1 >>flappy+2u1 >>segasa+4J1 >>jacque+td2
◧◩◪◨⬒
292. emptys+2a[view] [source] [discussion] 2023-11-22 07:05:56
>>ilikeh+R4
Something that's been fairly consistent here on HN throughout the debacle has been an almost fanatical defense of the board's actions as justified.

The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.

If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.

293. dbuser+3a[view] [source] 2023-11-22 07:06:41
>>staran+(OP)
Once again the house (the VCs) wins. I for one don’t trust openAI for one bit after this soap opera
◧◩◪
294. diesel+4a[view] [source] [discussion] 2023-11-22 07:06:45
>>upward+j3
I realize it's kind of the punchline of 2001: A Space Odyssey but have been wondering what happens if a GPT/AI is able to deny a request on a whim. Thanks for giving some literature and verbiage into this concept
replies(1): >>ywain+bl
◧◩◪◨
295. upward+5a[view] [source] [discussion] 2023-11-22 07:07:09
>>neurog+l9
yikes
◧◩◪◨
296. GreedC+6a[view] [source] [discussion] 2023-11-22 07:07:14
>>neta13+33
It was a clown board running an awesome company.

They fixed the glitch.

◧◩◪◨⬒⬓⬔
297. behnam+7a[view] [source] [discussion] 2023-11-22 07:07:27
>>asimov+R9
share it on Xitter
◧◩◪
298. consp+9a[view] [source] [discussion] 2023-11-22 07:07:52
>>txnf+I8
All commercialized R&D companies eventually become a hollowed out commercial shell. Why would this be any different?
◧◩
299. low_te+ba[view] [source] [discussion] 2023-11-22 07:08:12
>>unders+88
Adding up all the salary hours spent by people browsing Twitter, they could have finished training GPT-5
◧◩◪◨⬒
300. astran+ca[view] [source] [discussion] 2023-11-22 07:08:14
>>Sakos+u9
> His influence significantly reduced the size of the stimulus bill

Well, he also caused the IRA to pass by telling Manchin that it wouldn't be inflationary.

But remember when he released this prediction in 2021?

> Larry Summers on U.S. economic outlook:

> 33% odds of stagflation

> 33% odds of recession

> 33% rapid growth, no surge in inflation

All that hedging and then none of those things happened!

◧◩◪◨
301. campbe+ha[view] [source] [discussion] 2023-11-22 07:08:53
>>theamk+A5
The 700 employees also have significant financial incentive to want Altman to stay. If he moved to a competitor all the shine would follow. They want the pay-day (I don't blame them), but take with a grain of salt what the employees want in this case.
◧◩◪◨
302. polite+ia[view] [source] [discussion] 2023-11-22 07:08:57
>>303spa+o6
Office 365 subscription for one year and GitHub copilot using your own creation
◧◩◪◨
303. rapsey+ja[view] [source] [discussion] 2023-11-22 07:08:59
>>303spa+o6
Irrelevant compared to the reputation boost for helping the company get itself back on track.
replies(1): >>kmlevi+3j
◧◩
304. pknerd+ra[view] [source] [discussion] 2023-11-22 07:09:41
>>shubha+B7
Not every sci-fi movie turn to a reality
◧◩◪◨⬒⬓⬔
305. 6gvONx+sa[view] [source] [discussion] 2023-11-22 07:09:53
>>WendyT+k9
Honestly, I just don't believe that she didn't talk to Altman about her concerns. I'd believe that she didn't say "I'm publishing a paper about it now" but I can't believe she didn't talk to him about her concerns during the last 4+ years that it's been a core tension at the company.
replies(1): >>WendyT+lb
◧◩◪◨
306. whatwh+va[view] [source] [discussion] 2023-11-22 07:10:08
>>neurog+l9
Why
◧◩◪◨
307. xiwenc+wa[view] [source] [discussion] 2023-11-22 07:10:08
>>theamk+A5
No idea what these 700 employees were thinking. They probably had little knowledge of what truly went down other than “my CEO was fired unfairly” and rushed to the rescue.

I think the board should have been more transparent on why they made the decision to fire Sam.

Or perhaps these employees only cared about their AI work and money? The foundation would be perceived as the culprit against them.

Really sad there’s no clarity from the old board disclosed. Hope one day we will know.

replies(2): >>6gvONx+db >>x86x87+xG1
◧◩
308. cornho+xa[view] [source] [discussion] 2023-11-22 07:10:14
>>shubha+B7
What the general public thinks is irrelevant here. The deciding factor was the staff mutiny, without which the organization is an empty shell. And the staff sided with those who aim for rapid real world impact, with directly affects their career and stock options etc.

It's also naive to think it was a struggle for principles. The rapid commercialization vs. principles is what the actors claim to rally their respective troops, in reality it was probably a naked power grab, taking advantage of the weak and confuse org structure. Quite an ill prepared move, the "correct" way to oust Altman was to hamstring him in the board and enforce a more and more ceremonial role until he would have quit by himself.

replies(3): >>upward+ob >>JumpCr+dd >>lacker+zg
◧◩◪
309. lovepa+za[view] [source] [discussion] 2023-11-22 07:10:34
>>lacker+86
I think people following Sam Altman is jumping to conclusions. I think it's just as likely that employees are simply following the money. They want to make $$$, and that's what a for-profit company does, which is what Altman wants. I think it's probably not really about Altman or his leadership.
replies(1): >>kareaa+kA
◧◩◪◨
310. imgabe+Aa[view] [source] [discussion] 2023-11-22 07:10:40
>>0xDEAF+J3
The board not saying what the hell they were on about was the source of the whole drama in the first place. If they had just said exactly what their problem was up front there wouldn't have been as much to tweet about.
◧◩◪
311. justre+Ha[view] [source] [discussion] 2023-11-22 07:11:00
>>lovepa+98
the team is mostly e/acc

so you could say they intentionally don't see safety as the end in itself, although I wouldn't quite say they don't care.

◧◩◪
312. jacobe+Ja[view] [source] [discussion] 2023-11-22 07:11:02
>>swatco+69
It's more often a signal to really reflect on whether you, individually as a Thanksgiving turkey, have really found yourself at the make-or-break nexus of turkey existence. The answer seems to be "no" most of the time.
◧◩◪
313. mlyle+La[view] [source] [discussion] 2023-11-22 07:11:27
>>swatco+69
Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.
replies(1): >>Random+Wb
◧◩◪
314. dmix+Ma[view] [source] [discussion] 2023-11-22 07:11:28
>>arkety+p9
It's fear driven as much as moral, which in an emotional humans brain tends to triggers personal ambition to solve it ASAP. A more rational one would realize you need more than just a couple board members to win a major ideological battle.

At a minimum something that doesn't immediately result in a backlash where 90% of the engineers most responsible for recent AI dev want you gone, when you're whole plan is to control what those people do.

◧◩◪◨⬒
315. croes+Ta[view] [source] [discussion] 2023-11-22 07:12:16
>>WendyT+T5
>trashing the company

So pointing out risks is trashing the company.

◧◩
316. wmiche+Ua[view] [source] [discussion] 2023-11-22 07:12:21
>>3Sopho+x6
why would he be
◧◩
317. MVisse+Va[view] [source] [discussion] 2023-11-22 07:12:21
>>Gud+s2
“And that moment was the final nail in the coffin of humankind from earth. They choose, yet again, for money and power. And they shaped AI in their image.

Another civilization perished in the great filter.”

replies(1): >>MooseB+yc
◧◩
318. the-me+Wa[view] [source] [discussion] 2023-11-22 07:12:29
>>alex_y+96
a huge player in preventing derivatives regulation leading up to 2008 now helps steer the ship of AI oversight. I'm speechless.
◧◩◪◨
319. ravst3+0b[view] [source] [discussion] 2023-11-22 07:13:06
>>theamk+A5
Looking back, Altman's ace in hand was the tender offer from Thrive. Idk anyone at OpenAI, but all the early senior personnel backed him with vehemence. If the leaders hand't championed him strongly, I doubt you get 90% of the company to commit to leaving.

I'm sure some of those employees were easily going to make $10m+ in the sale. That's a pretty great motivation tool.

Overall, I do agree with you. The board could not justify their capricious decision making and refused to elaborate. They should've brought him back on Sunday instead of mucking around. OpenAI existing is a good thing.

◧◩
320. GreedC+4b[view] [source] [discussion] 2023-11-22 07:13:19
>>siva7+o
Yes, quite clearly.
◧◩◪
321. ugh123+6b[view] [source] [discussion] 2023-11-22 07:13:36
>>est+z8
I have checked View Source and also inspected DOM. Cannot find that.
◧◩
322. nwiswe+7b[view] [source] [discussion] 2023-11-22 07:13:39
>>shubha+B7
This is a coherent narrative, but it doesn't explain the bizarre and aggressively worded initial press release.

Things perhaps could've been different if they'd pointed to the founding principles / charter and said the board had an intractable difference of opinion with Sam over their interpretation, but then proceeded to thank him profusely for all the work he'd done. Although a suitable replacement CEO out the gate and assurances that employees' PPUs would still see a liquidity event would doubtless have been even more important than a competent statement.

Initially I thought for sure Sam had done something criminal, that's how bad the statement was.

replies(1): >>astran+Me
◧◩◪◨
323. hadloc+8b[view] [source] [discussion] 2023-11-22 07:13:44
>>ryzvon+l3
Twitter isn't dying, but it hasn't grown measurably since 2015. Still sitting at about 300m active users.
◧◩◪◨⬒
324. 6gvONx+db[view] [source] [discussion] 2023-11-22 07:14:11
>>xiwenc+wa
I wonder how much more transparent they can really be. I know that when firing a "regular" employee, you basically never tell everyone all the details for legal CYA reasons. When your firing someone worth half a billion dollars, I expect the legal fears are magnified.
replies(1): >>framap+4n1
◧◩◪◨
325. zeven7+fb[view] [source] [discussion] 2023-11-22 07:14:18
>>karmas+r4
Satya Nadella said they would make sure there would be "no more surprises".

(Sad day for popcorn sales.)

326. laserl+gb[view] [source] 2023-11-22 07:14:19
>>staran+(OP)
With Sam coming back as CEO, hasn't OpenAI board proven that it has lost its function? Regardless of who is in the board, they won't be able to exercise one of the most fundamental of their rights, firing the CEO, because Sam has proven that he is unfireable. Now, Sam can do however he pleases, whether it is lying, not reporting, etc. To be clear, I don't claim that Sam did, or will, lie, or misbehave.
replies(11): >>random+Yf >>altpad+Tg >>kmlevi+mi >>low_te+0u >>lysecr+pR >>strike+7Z >>stetra+wo1 >>mkagen+SR1 >>baby+iW1 >>Quenti+Rg2 >>6gvONx+tw2
◧◩
327. cosmoj+hb[view] [source] [discussion] 2023-11-22 07:14:35
>>gzer0+v5
> That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.

It strikes me as exactly the sort of thing she should be writing given OpenAI's charter. Recognizing and rewarding work towards AI safety is good practice for an organization whose entire purpose is the promotion of AI safety.

replies(1): >>dragon+Yc
◧◩
328. happos+ib[view] [source] [discussion] 2023-11-22 07:14:39
>>ryzvon+71
About 3)

What is the benefit of learning about this kind of drama minute-by-minute, compared to reading it a few hours later on hacker news or next day on wall street journal?

Personally I found twitter very bad for my productivity, a lot of focus destroyed just to know "what is happening" when there was neglible drawbacks of finding about news events a few hours later.

replies(1): >>willdr+xd
◧◩
329. logicc+jb[view] [source] [discussion] 2023-11-22 07:14:41
>>alex_y+96
Could have been worse, they could have picked Larry David, would fit the clown-show of the past weekend.
replies(1): >>ric2b+oe4
◧◩
330. huyter+kb[view] [source] [discussion] 2023-11-22 07:14:48
>>intend+y9
Satya may honestly be the CEO of the decade for what he has done with Microsoft and now this.
replies(2): >>chatma+ZV >>garden+ng1
◧◩◪◨⬒⬓⬔⧯
331. WendyT+lb[view] [source] [discussion] 2023-11-22 07:14:48
>>6gvONx+sa
That's what I mean; she should have discussed the paper and its contents specifically with Altman, and easily could have. It's a hugely damaging thing to have your own board member come out critically against your company. It's doubly so when it blindsides the CEO.

She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.

"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.

332. jurgen+nb[view] [source] 2023-11-22 07:14:51
>>staran+(OP)
I predict this isn't the last episode of this amazing soap opera.
◧◩◪
333. upward+ob[view] [source] [discussion] 2023-11-22 07:14:51
>>cornho+xa
I think this is an oversimplification and that although the decel faction definitely lost, there are still three independent factions left standing:

https://news.ycombinator.com/edit?id=38375767

It will be super interesting to see the subtle struggles for influence between these three.

replies(1): >>ah765+Ze
◧◩◪◨
334. gnaman+pb[view] [source] [discussion] 2023-11-22 07:14:55
>>theamk+A5
Take this with a grain of salt but employees were under a lot of peer pressure

https://twitter.com/JacquesThibs/status/1727134087176204410

replies(2): >>jatins+nc >>morale+Sc
◧◩
335. GreedC+rb[view] [source] [discussion] 2023-11-22 07:14:55
>>waihti+M1
Why would anyone care?
◧◩◪
336. Galaxe+vb[view] [source] [discussion] 2023-11-22 07:15:12
>>txnf+I8
do we really know that scaling compute an order of magnitude won't at least get us close? what other "simple" techniques might actually work with that kind of compute ? at least i was a bit surprised by these first sparks, that seemingly was a matter of enough compute.
337. kumarv+yb[view] [source] 2023-11-22 07:15:47
>>staran+(OP)
So, the company has successfully trashed its goals and values, and is finally focused on making money?
◧◩
338. pug_mo+Cb[view] [source] [discussion] 2023-11-22 07:15:57
>>shubha+B7
I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists, etc. Now, the ultimate moderator role has now been created, more powerful than moderating 1000 subreddits - the AI safety job who will control what AI "thinks"/says for "safety" reasons.

Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.

It's probably convenient for them to have everyone focused on the fear of evil Skynet wiping out humanity, while everyone is distracted from the more likely scenario of people with an agenda controlling the advice given to you by your super intelligent assistant.

Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".

For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.

replies(22): >>Dylan1+Vb >>PeterS+Qc >>nostro+3d >>davedx+ng >>nopins+th >>ribit+Fh >>gorwel+4i >>phreez+oi >>concor+lj >>layer8+Ej >>krisof+Vl >>lukevp+Am >>simonh+En >>loup-v+wp >>pk-pro+2r >>lordna+gu >>jack_r+TC >>cyanyd+ZL >>deanCo+5F1 >>alebai+Ae4 >>system+tb5 >>miracu+sa6
◧◩◪◨
339. logicc+Db[view] [source] [discussion] 2023-11-22 07:16:08
>>alex_y+y8
>And even if you could be sure of this conclusion, is it helpful or beneficial to promote it in public discourse?

It's absolutely helpful for mental health, to show people that there's not some conspiracy out to disenfranchise and oppress them, rather the distribution of outcomes is a natural result of the distribution of genetic characteristics.

replies(1): >>astran+9k
◧◩◪◨
340. erikpu+Eb[view] [source] [discussion] 2023-11-22 07:16:22
>>behnam+c6
But what color heart emoji?
replies(1): >>Doreen+Ib
◧◩
341. blacko+Hb[view] [source] [discussion] 2023-11-22 07:16:29
>>shubha+B7
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

No, if OpenAI is reaching singularity, so are Google, Meta, and Baidu etc. so proper course of action would be to loop in NSA/White House. You'll loop in Google, Meta, MSFT and will start mitigation steps. Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.

I believe this is more a fight of ego and power than principles and direction.

replies(2): >>ragequ+Se >>concor+hl
◧◩◪◨⬒
342. Doreen+Ib[view] [source] [discussion] 2023-11-22 07:16:53
>>erikpu+Eb
Purple?
◧◩◪◨⬒⬓⬔
343. 0xDEAF+Kb[view] [source] [discussion] 2023-11-22 07:17:02
>>behnam+49
From the perspective of upholding the charter https://openai.com/charter and preventing an AI race -- seems potentially sensible
◧◩◪◨
344. alex_y+Nb[view] [source] [discussion] 2023-11-22 07:17:29
>>AuryGl+H9
One should also be careful to claim that the dominant group is inherently superior. There are a lot of, uh, counter examples.

Calling this a truth is pretty silly. There is a lot of evidence that human cognition is highly dependent on environment.

replies(2): >>jadams+Zc >>AuryGl+CA1
◧◩◪◨
345. jatins+Pb[view] [source] [discussion] 2023-11-22 07:17:38
>>ryzvon+u3
I believe all the board seats are not fillet yet
◧◩
346. antupi+Rb[view] [source] [discussion] 2023-11-22 07:17:40
>>shubha+B7
I bet Team Helen will jump slowly to Anthropic, there is no drama, and probably no mainstream news will report this but down-to-line OpenAI will shell off the former self and competitors will catch up.
replies(1): >>tchbnl+0h
◧◩
347. caseba+Tb[view] [source] [discussion] 2023-11-22 07:17:51
>>shubha+B7
Have you seen the Center for AI Safety letter? A lot of experts are worried AI safety could be an x-risk:

https://www.safe.ai/statement-on-ai-risk

◧◩
348. kumarv+Ub[view] [source] [discussion] 2023-11-22 07:18:03
>>ryzvon+71
Satya comes out great, making the absolute best of a given shitty situation, with a high stake of 10 B USD.

Microsoft is showing to investors that it is going to be an AI company, one way or the other.

Microsoft still has access to everything OpenAI does.

Microsoft has its friend, Sam, at the helm of OpenAI and with a more tighter grip on the company than ever.

Its still a win for Microsoft.

replies(2): >>dacryn+dg >>nabla9+hh
◧◩◪
349. Dylan1+Vb[view] [source] [discussion] 2023-11-22 07:18:04
>>pug_mo+Cb
Personally, I expect the opposite camp to be just as bad about steering.
◧◩◪◨
350. Random+Wb[view] [source] [discussion] 2023-11-22 07:18:05
>>mlyle+La
Probably not a bad heuristic: unless proven, don't assume existential risk.
replies(2): >>altpad+kc >>_Alger+9i
◧◩
351. otabde+Yb[view] [source] [discussion] 2023-11-22 07:18:09
>>Gud+s2
The only way "we develop an actual, fully functional AGI" is by dumbing down humans enough so that even something as stupid as ChatGPT seems intelligent.

(Fortunately we are working on this very hard and making incredible progress.)

◧◩◪
352. veec_c+Zb[view] [source] [discussion] 2023-11-22 07:18:40
>>nickpp+Y5
Not trying to be a dick but:

1. He tried to not buy Twitter very hard and OpenAI’s new board member forced his hand

2. It hasn’t been a good financial decision if the banks and X’s own valuation cuts are anything to go by.

3. If his purpose wasn’t to make money…all of these tweets would have absolutely been allowed before Elon bought the company. He didn’t affect any relevance changes here.

Why would one person owning something so important be better than being publicly owned? I don’t understand the logic.

replies(3): >>majest+6e >>strike+x11 >>nickpp+km1
353. kumarv+7c[view] [source] 2023-11-22 07:19:48
>>staran+(OP)
I find it worrying that Elon Musk is totally silent through this whole drama.
replies(1): >>iamfli+Xv1
◧◩
354. huyter+9c[view] [source] [discussion] 2023-11-22 07:19:55
>>auggie+E9
It’s definitely the right thing to do. Those women had “qualifications” in a made up field with no real world relevance that aimed to halt progress on AI work. We are no where close to a paradigm where AI takes over the world or whatever.
◧◩◪
355. antupi+ac[view] [source] [discussion] 2023-11-22 07:20:09
>>silenc+59
I like alignment more it is pretty quantifiable and sometimes it goes against 'safety' because Claude and Openai are censoring models.
◧◩
356. yosame+cc[view] [source] [discussion] 2023-11-22 07:20:38
>>turndo+D
As far as I can tell, Sam did something? to get fired by the board, who are meant to be driven by non-profit ideals instead of corporate profits (probably from Sam pushing profit over safety, but there's no real way to know). From that, basically the whole company threatened to quit and move to Microsoft, showing the board that their power is purely ornamental. To retain any sort of power or say over decision making whatsoever, the board made concessions and got Sam back.

Really it just shows the whole non-profit arm of the company was even more of a lie then it appeared.

replies(1): >>maxdoo+vL
◧◩◪
357. fsloth+dc[view] [source] [discussion] 2023-11-22 07:20:47
>>silenc+59
Exactly this. The ’safety’ people sound like delusional quacks.

”But they are so smart…” argument is bs. Nobody can be presumed to be super good outside their own specific niche. Linus Pauling and vitamin C.

Until we have at least a hint of a mechanistic model if AI driven extinction event, nobody can be an expert on it, and all talk in that vein is self important delusional hogwash.

Nobody is pro-apocalypse! We are drowning in things an AI could really help with.

With the amount of energy needed for any sort of meaningfull AI results, you can always pull the plug if stuff gets too weird.

replies(1): >>JumpCr+pd
◧◩◪◨
358. bagels+gc[view] [source] [discussion] 2023-11-22 07:21:08
>>ryzvon+l3
Bluesky took long enough to invite me that I forgot what it even was when I got the email.
◧◩◪◨⬒
359. altpad+kc[view] [source] [discussion] 2023-11-22 07:21:52
>>Random+Wb
Dude just think about that for a moment. By definition if existential risk has been proven. It's already too late
replies(1): >>Random+fd
360. doyoue+mc[view] [source] 2023-11-22 07:21:59
>>staran+(OP)
We’re at ~250k tech industry layoffs this year and a single CEO drama dominates the media because “AI”.
replies(2): >>quickt+kq >>justan+AD
◧◩◪◨⬒
361. jatins+nc[view] [source] [discussion] 2023-11-22 07:22:10
>>gnaman+pb
That is one HUGE grain of salt considering 1/ it's Blind 2/ Even in the same thread there is another poster saying the exact opposite thing (i.e. no peer pressure)
replies(1): >>Jensso+411
◧◩
362. jkapla+oc[view] [source] [discussion] 2023-11-22 07:22:15
>>shubha+B7
I feel like the "safety" crowd lost the PR battle, in part, because of framing it as "safety" and over-emphasizing on existential risk. Like you say, not that many people truly take that seriously right now.

But even if those types of problems don't surface anytime soon, this wave of AI is almost certainly going to be a powerful, society-altering technology; potentially more powerful than any in decades. We've all seen what can happen when powerful tech is put in the hands of companies and a culture whose only incentives are growth, revenue, and valuation -- the results can be not great. And I'm pretty sure a lot of the general public (and open AI staff) care about THAT.

For me, the safety/existential stuff is just one facet of the general problem of trying to align tech companies + their technology with humanity-at-large better than we have been recently. And that's especially important for landscape-altering tech like AI, even if it's not literally existential (although it may be).

replies(2): >>concor+Bn >>cyanyd+yN
◧◩
363. auggie+qc[view] [source] [discussion] 2023-11-22 07:22:26
>>eclect+79
If you are driven by outside validation, definitely!
◧◩
364. lucubr+sc[view] [source] [discussion] 2023-11-22 07:22:28
>>doctob+g2
From Ilya's perspective, not much seems to have changed. Sam sidelined him a month ago over their persistent disagreements about whether to pursue commercialisation as fast as Sam was. If Ilya is still sidelined, he probably quits and whichever company offers him the most control will get him. Same if he's fired. If he's un-sidelined as part of the deal, he probably stays on as Chief Scientist. Hopefully with less hostility from Sam now (lol).
replies(1): >>dinvla+vw2
◧◩◪
365. croes+uc[view] [source] [discussion] 2023-11-22 07:22:42
>>dbcoop+X7
>imagine such a person on your startup board!

Yeah, such a person totally blocks your startup from making billions of dollars instead of benefitting humanity.

Oh wait...

replies(1): >>siva7+xk
◧◩◪
366. tigers+wc[view] [source] [discussion] 2023-11-22 07:22:59
>>nickpp+Y5
A huge amount of advertisers ran away, the revenue cratered and it is probably less than the annual debt servicing (revenue, not profit), the current valuation, accordingly to Musk math (https://fortune.com/2023/09/06/elon-musk-x-what-is-twitter-w...), is 1/10 of the acquisition price. But yes, it was a masterstroke. I don’t remember any other masterstroke in history that managed to lose 40B with a single acquisition.
replies(1): >>nickpp+Qe
◧◩◪
367. MooseB+yc[view] [source] [discussion] 2023-11-22 07:23:20
>>MVisse+Va
it's not that deep bro
◧◩
368. blacko+Bc[view] [source] [discussion] 2023-11-22 07:24:02
>>ryzvon+71
> So what was the point of this whole drama, and why couldn't you have settled like this adults?

Whole charade was by GPT5 to understand the position of person sitting next to red button and secondary to stress test Hacker News.

◧◩◪
369. lucubr+Cc[view] [source] [discussion] 2023-11-22 07:24:06
>>xigenc+v9
Sam doesn't seem like the kind of person to apologise, particularly not after Ilya actually hit back. It seems Ilya won't be at OpenAI long and will have to pick whichever other company with compute will give him the most control.
replies(1): >>orthox+0r
◧◩
370. epups+Gc[view] [source] [discussion] 2023-11-22 07:24:30
>>transc+32
Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibilities here, where reading about fictional violence is equated to violence? The more people raise safety concerns in the abstract, the more I ignore it.
replies(3): >>dragon+Qg >>astran+ij >>robryk+Fj
◧◩◪◨
371. umeshu+Kc[view] [source] [discussion] 2023-11-22 07:25:03
>>ryzvon+u3
It's a new and "more experienced" board. This is also possibly the first of additional governance and structure changes.
◧◩
372. renewi+Mc[view] [source] [discussion] 2023-11-22 07:25:07
>>shubha+B7
This is what people need to understand. It's just like pro-life people. They don't hate you. They think they're saving lives. These people are just as admirably principled as them and they're just trying to make the world a better place.
373. ah765+Pc[view] [source] 2023-11-22 07:25:19
>>staran+(OP)
"Context on the negotiations to bring Sam back as CEO of OpenAI:

The biggest sticking point was Sam being on the board. Ultimately, he conceded to not being on the board, at least initially, to close the deal. The hope/expectation is that he will end up on the board eventually."

(https://twitter.com/emilychangtv/status/1727216818648134101)

◧◩◪
374. PeterS+Qc[view] [source] [discussion] 2023-11-22 07:25:26
>>pug_mo+Cb
Most of those touting "safety" do not want to limit their access to and control of powerfull AI, just yours .
replies(6): >>vkou+zd >>astran+Ee >>davedx+yg >>jmmcd+3i >>voster+cE >>PeterS+QE
◧◩◪◨⬒
375. morale+Sc[view] [source] [discussion] 2023-11-22 07:25:38
>>gnaman+pb
Yeah 95% of employees is a bit too high ...

Also, all the stuff they started doing with the hearts and cryptic messages on Twitter (now X) was a bit ... cult-y?. I wouldn't doubt there was a lot of manipulation behind all that, even from @sama itself.

So, there is goes, it seems that there's a big chance now that the first AGI will land on the hands of a group with the antics of teenagers. Interesting timeline.

◧◩◪◨⬒⬓
376. tayo42+Uc[view] [source] [discussion] 2023-11-22 07:25:41
>>ryzvon+T4
Did being up to date really have an impact on your life? It's mostly just gossip.
replies(1): >>ryzvon+3q
◧◩◪
377. dragon+Yc[view] [source] [discussion] 2023-11-22 07:26:19
>>cosmoj+hb
Yeah, on one hand, the difference between a charity oriented around a mission like OpenAI's nominal charter and a business is that the former naturally ought to be publicly, honestly introspective -- its mission isn't private gain, but achieving a public effect, and both recognition of success elsewhere and open acknowledgement of shortcomings of your own is important to that.

On the other hand, its quite apparent that essentially all of the OpenAI workforce (understandably, given the compensation package which creates a financial interest at odds with the nonprofit's mission) and in particular the entire executive team saw the charter as a useful PR fiction, not a mission (except maybe Ilya, though the flip-flop in the middle of this action may mean he saw it the same way, but thought that given the conflict, dumping Sam and Greg would be the only way to preserve the fiction, and whatever cost it would have would be worthwhile given that function.)

◧◩◪◨⬒
378. jadams+Zc[view] [source] [discussion] 2023-11-22 07:26:22
>>alex_y+Nb
He didn't claim they were superior. He said they deviate more from the mean, in both directions.

For example, there are a lot more boys than girls who struggle with basic reading comprehension. Sound familiar?

◧◩◪
379. nostro+3d[view] [source] [discussion] 2023-11-22 07:27:09
>>pug_mo+Cb
You're correct.

When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.

replies(10): >>wisty+Bd >>Xenoam+jg >>s_dev+Bg >>waveBi+2j >>Al-Khw+tj >>darkwa+Ij >>krisof+5q >>edanm+Ht >>lordna+vv >>cyanyd+kM
◧◩
380. campbe+8d[view] [source] [discussion] 2023-11-22 07:27:25
>>intend+y9
The only mistake (a big one) was publicly offering to match comp for all the OpenAI employees. Can't sit well with folks @ MS already. This was something they could have easily done privately to give petition signers confidence.
replies(1): >>asd88+Lx
◧◩◪
381. JumpCr+dd[view] [source] [discussion] 2023-11-22 07:28:43
>>cornho+xa
> deciding factor was the staff mutiny

The staff never mutinied. They threatened to mutiny. That's a big difference!

Yesterday, I compared these rebels to Shockley's "traitorous eight" [1]. But the traitorous eight actually rebelled. These folk put their name on a piece of paper, options and profit participation units safely held in the other hand.

[1] >>38348123

replies(2): >>ah765+Fd >>cornho+5E
◧◩
382. _boffi+ed[view] [source] [discussion] 2023-11-22 07:28:49
>>ryzvon+71
Larry Summers? like the Larry Summers?
replies(1): >>Sai_+fg
◧◩◪◨⬒⬓
383. Random+fd[view] [source] [discussion] 2023-11-22 07:28:50
>>altpad+kc
Totally not true: take nuclear weapons, for example, or a large meteorite impact.
replies(2): >>ludwik+ai >>richar+ok
384. andrew+ld[view] [source] 2023-11-22 07:29:35
>>staran+(OP)
I've lost track of everything.
replies(1): >>system+xg
◧◩◪◨
385. JumpCr+pd[view] [source] [discussion] 2023-11-22 07:30:03
>>fsloth+dc
Now do nuclear.
replies(1): >>fsloth+ef
◧◩◪
386. MVisse+qd[view] [source] [discussion] 2023-11-22 07:30:12
>>noneth+R6
This is the scientific consensus btw.

There are also more intellectually challenged men btw, but somehow that rarely gets discussed.

But the effects are quite small, and should not dissuade anyone to do anything IMO.

replies(1): >>alex_y+we
◧◩
387. campbe+td[view] [source] [discussion] 2023-11-22 07:30:19
>>tunesm+36
If the Sama faction got Ilya and Adam (maybe with promise of heading the new board), Helen and Tasha have nothing to stand on and no incentive to keep fighting.
◧◩◪◨⬒⬓⬔
388. blacko+ud[view] [source] [discussion] 2023-11-22 07:30:23
>>tech23+N6
From 2nd story on the archive

>It is just a joke that Facebook could be valued at $6 billion.

lol, seems HN is same since forever.

◧◩◪◨
389. hadloc+vd[view] [source] [discussion] 2023-11-22 07:30:38
>>karmas+L4
There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM or LXX from it. Monetizing that idea and staying the market leader over a period longer than 10 years will take a herculean amount of effort. Facebook releasing similar models for free definitely took the wind out of their sails, even a tiny bit; right now the moat is access to A100 boards. That will change as eventually even the Raspberry Pi 9 will have LLM capabilities
replies(3): >>morale+Oe >>cft+Ik >>daniel+iE3
◧◩◪
390. willdr+xd[view] [source] [discussion] 2023-11-22 07:30:57
>>happos+ib
I have muted any mention of Open AI, Altman, Emmet and Satya from my Twitter feed for the past five days. It's a far better experience.
◧◩◪◨
391. vkou+zd[view] [source] [discussion] 2023-11-22 07:31:14
>>PeterS+Qc
Meanwhile, those working on commercialization are by definition going to be gatekeepers and beneficiaries of it, not you. The organizations that pay for it will pay for it to produce results that are of benefit to them, probably at my expense [1].

Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!

[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...

replies(2): >>konsch+Rg >>didntc+1r
◧◩◪◨
392. wisty+Bd[view] [source] [discussion] 2023-11-22 07:31:20
>>nostro+3d
There is a middle ground, in that maybe ChatGTP shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.
replies(3): >>Stanis+hm >>hef198+rm >>low_te+zs
◧◩
393. YetAno+Ed[view] [source] [discussion] 2023-11-22 07:31:40
>>shubha+B7
> it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya)

Is it? Why was the press release worded like that? And why did Ilya came up with two mysterious reasons of why board fired Sam if he had quite clearly better and more defendable reason if this goes to court. Also Adam is pro commercialization at least looking at public interviews, no?

It's very easy to make the story in brain which involves one character being greedy, but it doesn't seem it is the exact case here.

◧◩◪◨
394. ah765+Fd[view] [source] [discussion] 2023-11-22 07:31:48
>>JumpCr+dd
Not only that, consider the situation now, where Sam has returned as CEO. The ones who didn't sign will have some explaining to do.

The safest option was to sign the paper, once the snowball started rolling. There was nothing much to lose, and a lot to gain.

replies(1): >>fbdab1+zf
◧◩
395. JumpCr+Id[view] [source] [discussion] 2023-11-22 07:32:16
>>ryzvon+71
> Twitter is still the best place to follow this and get updates

This has been my single strongest takeaway from this saga: Twitter remains the centre of controversy. When shit hit the fan, Sam and Satya and Swisher took to Twitter. Not Threads. Not Bluesy. Twitter. (X.)

replies(1): >>ssnist+ky
◧◩◪◨
396. estoma+Jd[view] [source] [discussion] 2023-11-22 07:32:18
>>neurog+l9
You imagine a computer has "will"?
397. renewi+Ld[view] [source] 2023-11-22 07:32:43
>>staran+(OP)
This is a triumph of labor against management in sheep's garb. Workers united were able to force an outcome they desired to preserve an organization they loved while sweeping aside a board that would prefer to destroy it.
◧◩◪◨
398. antonv+Nd[view] [source] [discussion] 2023-11-22 07:33:01
>>jadams+L9
First, Summers’ sexist claims were much broader than that.

Second, yes, you are being sexist, and irrational. What you’re doing is exactly the same as the reasons that it’s racist and irrational to say “whites are better at x”.

You’re cherry picking data to examine, to reach a conclusion that you want to reach. You’re ignoring relevant causal factors - or any causal factors at all, in fact, aside from the spurious correlation you’ve assumed in your conclusion.

You’re ignoring decades of research on the subject - although in your defense, you’re probably just not aware of it.

Most irrationally of all, you’re generalizing across an entire group, selected by a factor that’s only indirectly relevant to the property you’re incorrectly generalizing about.

As such, “sexist” is just a symptom of fundamentally confused and under-informed thinking.

replies(2): >>jadams+me >>xdenni+QL
◧◩◪◨⬒⬓
399. ravst3+Wd[view] [source] [discussion] 2023-11-22 07:33:53
>>karmas+y7
Agree that it's a facade.

Iirc, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure and hence gave the employees shares or profit units (whatever the hell that is). The NP now shields MSFT from regulatory issues.

I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.

replies(1): >>dizzyd+EQ1
◧◩
400. eslaug+0e[view] [source] [discussion] 2023-11-22 07:34:06
>>shubha+B7
Ok, serious question. If you think the threat is real, how are we not already screwed?

OpenAI is one of half a dozen teams [0] actively working on this problem, all funded by large public companies with lots of money and lots of talent. They made unique contributions, sure. But they're not that far ahead. If they stumble, surely one of the others will take the lead. Or maybe they will anyway, because who's to say where the next major innovation will come from?

So what I don't get about these reactions (allegedly from the board, and expressed here) is, if you interpret the threat as a real one, why are you acting like OpenAI has some infallible lead? This is not an excuse to govern OpenAI poorly, but let's be honest: if the company slows down the most likely outcome by far is that they'll cede the lead to someone else.

[0]: To be clear, there are definitely more. Those are just the large and public teams with existing products within some reasonable margin of OpenAI's quality.

replies(3): >>davedx+Ch >>concor+km >>kolink+Oo
◧◩
401. RHSman+3e[view] [source] [discussion] 2023-11-22 07:34:16
>>shubha+B7
Money, large amounts, will always win at scale (unfortunately).
402. didip+4e[view] [source] 2023-11-22 07:34:18
>>staran+(OP)
Let’s be real here. At the end of the day, what matters more is commercial success and a big payout.

AGI is still very far away and the fear mongering is nothing but PR stunt.

But the devs need their big payout now. Which explains the mutiny.

The “safety” board of directors drank their own koolaid a bit too much.

replies(1): >>mkii+Vw
◧◩◪◨
403. majest+6e[view] [source] [discussion] 2023-11-22 07:34:28
>>veec_c+Zb
He bought Twitter for power, omnipresence and reputation. Allowing him to play the game his way.
replies(1): >>Doreen+de
◧◩◪
404. YetAno+9e[view] [source] [discussion] 2023-11-22 07:34:54
>>upward+j3
Whoever is on the board won't be able to touch Sam with 10 feet pole anyways after this. I like Sam but now he this drama gives him total power and that is bad.
◧◩◪◨⬒
405. Doreen+de[view] [source] [discussion] 2023-11-22 07:35:24
>>majest+6e
Funny, I thought he bought Twitter because he shot his mouth off in public and the courts made him follow through.
◧◩◪
406. willdr+ge[view] [source] [discussion] 2023-11-22 07:35:55
>>nathan+N3
That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.
replies(1): >>nathan+Hy
◧◩
407. Americ+ke[view] [source] [discussion] 2023-11-22 07:36:26
>>shubha+B7
It is a little amusing that we've crowned OpenAI as the destined mother of AGI long before the little sentient chickens have hatched.
◧◩◪◨⬒
408. jadams+me[view] [source] [discussion] 2023-11-22 07:36:27
>>antonv+Nd
Actually, Summer's claims were much narrower - he said that boys tend to deviate from the mean more. That is, it's not that men are superior, it's that there are more boy geniuses and more boy idiots.

Decades of research shows that teachers give girls better grades than boys of the same ability. This is not some new revelation.

https://www.forbes.com/sites/nickmorrison/2022/10/17/teacher...

https://www.bbc.co.uk/news/education-31751672

A whole cohort of boys got screwed over by the cancellation of exams during Covid. That is just reality, and no amount of creepy male feminist posturing is going to change that. Rather, denying issues in boys education is liable to increase male resentment and bitterness, something we've already witnessed over the past few years.

replies(1): >>antonv+Nh
◧◩◪
409. drewco+re[view] [source] [discussion] 2023-11-22 07:37:16
>>Hamuko+X9
MSFT invested over $10B. And currently has no seat on the board.
replies(3): >>nicce+Rl >>throwa+Tl >>pclmul+Tj1
◧◩
410. theone+se[view] [source] [discussion] 2023-11-22 07:37:30
>>shubha+B7
I don't care about AI Safety, but:

https://openai.com/charter

above that in the charter is "Broadly distributed benefits", with details like:

"""

Broadly distributed benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

"""

In that sense, I definitely hate to see rapid commercialization and Microsoft's hands in it. I feel like the only person on HN that actually wanted to see Team Sam lose, although it's pretty clear Team Helen/Ilya didn't have a chance, the org just looks hijacked by SV tech bros to me, but I feel like HN has a blindspot to seeing that at all and considering it anything other than a good thing if they do see it.

Although GPT barely looks like the language module of AGI to me and I don't see any way there from here (part of the reason I don't see any safety concern). The big breakthrough here relative to earlier AI research is massive amounts more compute power and a giant pile of data, but it's not doing some kind of truly novel information synthesis at all. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc., but a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.

◧◩◪
411. mempko+ue[view] [source] [discussion] 2023-11-22 07:37:41
>>arduan+z7
Larry Summers is not financially literate.
replies(1): >>astran+tk
◧◩◪◨
412. alex_y+we[view] [source] [discussion] 2023-11-22 07:37:46
>>MVisse+qd
The consensus appears to be somewhat less than a consensus.

Here is a meta analysis on the subject: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057475/

◧◩◪◨
413. thorde+ze[view] [source] [discussion] 2023-11-22 07:37:55
>>neurog+l9
That's the worst take I've read.
◧◩
414. fidotr+Be[view] [source] [discussion] 2023-11-22 07:38:25
>>eclect+79
Unsurprisingly VCs view VCs as the highest form of life, and product managers are temporary positions taken on the way to ascending to VC status.

I have said recently elsewhere SV now devalues builders but it is not just VCs/sales/product, a huge amount is devops and sre departments. They make a huge amount of noise about how all development should be free and the value is in deploying and operating the developed artifacts. Anyone outside this watching would reasonably conclude developers have no self respect, hardly aspirational positions.

replies(1): >>drawkb+cP
◧◩◪◨
415. astran+Ee[view] [source] [discussion] 2023-11-22 07:38:43
>>PeterS+Qc
I'm not aware of any secret powerful unaligned AIs. This is harder than you think; if you want a based unaligned-seeming AI, you have to make it that way too. It's at least twice as much work as just making the safe one.
replies(1): >>hoseja+bh
◧◩◪
416. astran+Me[view] [source] [discussion] 2023-11-22 07:39:40
>>nwiswe+7b
Apparently the FBI thought he'd done something wrong too, because they called up the board to start an investigation but they didn't have anything.

https://x.com/nivi/status/1727152963695808865?s=46

replies(2): >>gwern+UH2 >>dragon+JI2
◧◩◪◨⬒
417. morale+Oe[view] [source] [discussion] 2023-11-22 07:39:45
>>hadloc+vd
OpenAI (ChatGPT) is already a HUGE brand all around the world. No doubt they're the most valuable startup in the AI space. That's their moat.

Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.

replies(3): >>hadloc+Oi >>karmas+kj >>denlek+No
◧◩◪◨
418. nickpp+Qe[view] [source] [discussion] 2023-11-22 07:40:05
>>tigers+wc
I’d be rather reluctant to question the financial decisions of one of wealthiest men on earth. Losing 40B could feel quite different to him than to you or me. Besides, it’s unrealized loss until he sells.
replies(1): >>hardli+Ky
◧◩◪
419. ragequ+Se[view] [source] [discussion] 2023-11-22 07:40:32
>>blacko+Hb
>Slowing down OpenAI will hurt the company if assumption is wrong and won't help if it is true.

Personally as I watched the nukes be lobbed I'd rather not be the person who helped lob them. And hope to god others look at the same problem (a misaligned AI that is making insane decisions) with the exact same lens. It seems to have worked for nuclear weapons since WW2, one can that we learned a lesson there as a species.

The Russian Stanislav Petrov who saved the world comes to mind."Well the Americans have done it anyways" was the motivation and he didn't launch. The cost of error was simply too great.

◧◩◪
420. r721+Ye[view] [source] [discussion] 2023-11-22 07:41:08
>>nickpp+Y5
I think it's just this particular drama - OpenAI people are of the same tribe as Elon, and surely they prefer Twitter/X, not Mastodon or Bluesky.
replies(2): >>nickpp+fN >>Davidz+fZ
◧◩◪◨
421. ah765+Ze[view] [source] [discussion] 2023-11-22 07:41:11
>>upward+ob
Adam is likely still on the "decel" faction (although it's unclear whether this is an accurate representation of his beliefs) so I wouldn't really say they lost yet.

I'm not sure what faction Bret and Larry will be on. Sam will still have power by virtue of being CEO and aligned with the employees.

◧◩◪◨
422. blacko+0f[view] [source] [discussion] 2023-11-22 07:41:13
>>0xDEAF+J3
Considering CEO2 rebelled next day and CEO3 allegedly said he'll quit unless board comes out with truth, doesn't provide much confidence in their adulthood.
◧◩◪◨
423. TMWNN+5f[view] [source] [discussion] 2023-11-22 07:42:11
>>alex_y+y8
Sorry, you don't get to decide which thoughts are wrongthink and verboten.
replies(1): >>alex_y+rg
◧◩◪◨
424. ravst3+6f[view] [source] [discussion] 2023-11-22 07:42:14
>>jychan+O6
The had some equity after 2019.

Thrive was about to buy employee shares at a $86 bn valuation. The Information said that those units had 12x since 2021.

https://www.theinformation.com/articles/thrive-capital-to-le...

◧◩◪◨⬒⬓
425. Fluore+9f[view] [source] [discussion] 2023-11-22 07:42:49
>>Roark6+x8
Securities fraud is more than insider trading. Misleading investors about a company’s financial health is fraud 101 and it sure looks like he lied about hiring someone to stem a precipitate MSFT drop.
◧◩
426. kmlevi+cf[view] [source] [discussion] 2023-11-22 07:43:12
>>Satam+0a
A lot of this comes down to processing power though. That's why Microsoft had so much leverage with both factions in this fight. It actually gives them a pretty good moat above and beyond their head start. There aren't too many companies with the hardware to compete, let alone talent.
replies(1): >>patcon+dl
427. s-xyz+df[view] [source] 2023-11-22 07:43:17
>>staran+(OP)
All systems operational again https://status.openai.com/
◧◩◪◨⬒
428. fsloth+ef[view] [source] [discussion] 2023-11-22 07:43:18
>>JumpCr+pd
War or power production?:)

Those are different things.

Nuclear war is exactly the kind of thing for which we do have excellent expertise. Unlike for AI safety which seems more like bogus cult atm.

Nuclear power would be the best form of large scale power production for many situations. And smaller scale too in forms of emerging SMR:s.

replies(1): >>JumpCr+7h
◧◩◪◨⬒⬓
429. nickpp+if[view] [source] [discussion] 2023-11-22 07:43:53
>>0xDEAF+O8
To say that communication was lacking is an understatement. Clarifications were missing and sorely needed.
◧◩◪◨
430. jddj+wf[view] [source] [discussion] 2023-11-22 07:46:21
>>thih9+Q8
And poorly timed.

If they'd made their move a few months ago when he was out scanning retinas in Kenya they might have had more success.

◧◩◪◨⬒
431. fbdab1+zf[view] [source] [discussion] 2023-11-22 07:46:38
>>ah765+Fd
People have families, mortgages, debt, etc. Sure, these people are probably well compensated, but it is ludicrous to state that everyone has the stability that they can leave their job at a moment's notice because the boss is gone.
replies(2): >>gnicho+Zf >>ah765+6g
◧◩◪
432. dacryn+Af[view] [source] [discussion] 2023-11-22 07:46:44
>>brucet+16
they lost trust in him because apparently part of the funding he secured was directly tied to his position at openAI. kind of a big red flag. The microsoft 10 billion investment allegedly had a clause that Sam Altman had to stay or it would be renegotiated

allegedly again, the board wanted Sam to stop doing this, and now he was trying to do the same thing with some saudi investors, or actually already did it behind their back, i dont know

replies(1): >>zucker+ol
◧◩
433. system+Jf[view] [source] [discussion] 2023-11-22 07:47:48
>>halfjo+M4
I wish they could make GPT4 a little cheaper after all this.
replies(2): >>fragme+2h >>ryzvon+pr
◧◩◪◨⬒⬓⬔
434. grumpy+Pf[view] [source] [discussion] 2023-11-22 07:48:28
>>asimov+R9
Or xcreet?
435. tomalb+Sf[view] [source] 2023-11-22 07:48:58
>>staran+(OP)
I really don't care who the CEO/CTO/CFO of any company is. Why is this whole thing blowing up that much on ycombinator?
replies(3): >>kaoD+Pg >>meitha+zh >>jazzyj+ui
◧◩
436. two_in+Uf[view] [source] [discussion] 2023-11-22 07:49:06
>>shubha+B7
> there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles

Seams very unlikely, board could communicate that. Instead they invented some BS reasons, which nobody took as a truth. It looks like more personal and power grab. The staff voted for monetization, people en mass don't care much about high principals. Also nobody wants to work under inadequate leadership. Looks like Ilya lost his bet, or Sam is going to keep him around?

437. _Alger+Wf[view] [source] 2023-11-22 07:49:16
>>staran+(OP)
What a farce
◧◩
438. random+Yf[view] [source] [discussion] 2023-11-22 07:49:33
>>laserl+gb
No that hasn't at all been the case. The board acted like the most incompetent group of individuals who've even handed any responsibility. If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO instead of doing it over a 15 min Google meet and then going completely silent, none of this outrage would have taken place.
replies(6): >>maxlin+Im >>squigz+FE >>braiam+bM1 >>OnAYDI+tM1 >>zeroha+Ue2 >>patcon+oX4
◧◩◪◨⬒⬓
439. gnicho+Zf[view] [source] [discussion] 2023-11-22 07:49:58
>>fbdab1+zf
Didn’t they all have offers at Microsoft?
replies(1): >>reveri+Ip
◧◩◪◨⬒
440. sanxiy+1g[view] [source] [discussion] 2023-11-22 07:50:09
>>liuliu+H8
IRS requires a nonprofit to have a minimum of three board members for such reasons.
◧◩◪◨⬒⬓
441. ah765+6g[view] [source] [discussion] 2023-11-22 07:50:54
>>fbdab1+zf
They didn't actually leave, they just signed the pledge threatening to. Furthermore, they mostly signed after the details of the Microsoft offer were revealed.
◧◩◪◨⬒⬓
442. astran+bg[view] [source] [discussion] 2023-11-22 07:51:15
>>happos+s9
No anti-AI lawsuits have progressed yet. One got slapped down pretty hard today, though isn't dead.

https://www.hollywoodreporter.com/business/business-news/sar...

◧◩◪
443. dacryn+dg[view] [source] [discussion] 2023-11-22 07:51:32
>>kumarv+Ub
Satya comes out as evil imho, and I wonder how much orchestration there was going on behind the scenes.

Microsoft is showing that it is still able to capture important scale ups and 'embrace' them, whilst also acting as if they have the moral high ground, but in reality are doing research with a high governance errors and potential legal problems away from their premises. and THAT is why stakeholders like him.

◧◩◪
444. Sai_+fg[view] [source] [discussion] 2023-11-22 07:51:41
>>_boffi+ed
yeah, the guy has a knack for being in/invited to places.
◧◩◪
445. krisof+hg[view] [source] [discussion] 2023-11-22 07:51:43
>>Terrif+D2
> With enough political maneuvering and money, a megacorp can takeover almost any organization.

In fact this observation is pertinent to the original stated goals of openAI. In some sense companies and organisations are superinteligences. That is they have goals, they are acting in the real world to achieve those goals and they are more capable in some measures than a single human. (They are not AGI, because they are not artificial, they are composed of meaty parts, the individuals forming the company.)

In fact what we are seeing is that when the superinteligence OpenAI was set up there was an attempt to align the goals of the initial founders with the then new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.

Did they succeed? Too early to tell for sure, but there are at least question marks around it.

How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI provess. In their pursuit of economic success the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will be still a figleaf for them, if nothing else to achieve regulatory capture to keep out upstart competition.

How would one argue for? OpenAI is still around. The charter is still around. To be able to achieve the lofty goals contained in it one needs a lot of resources. Money in particular is a resource which enables one greater powers in shaping the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To achieve that it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: gains further resources, estabilishes the organisation as a pre-eminent expert on AI and thus AI safety, provides it with a relatively safe sandbox where adversarial forces are trying its safety concepts. In other words all is well with the original goals, the “golem” that is OpenAI is still well aligned. It will achieve the original goals once it has gained enough resources to do so.

The fact that we can’t tell which is happening is in fact the worry and problem with superinteligence/AI safety.

◧◩◪◨
446. Xenoam+jg[view] [source] [discussion] 2023-11-22 07:52:05
>>nostro+3d
Is it just about safety though? I thought it was also about preventing the rich controlling AI and widen the gap even further.
replies(2): >>jazzyj+kh >>didntc+1q
447. global+kg[view] [source] 2023-11-22 07:52:20
>>staran+(OP)
This is our board, they provide oversight and ensure alignment with the mission. If you don't like them, we have others.
◧◩◪
448. davedx+ng[view] [source] [discussion] 2023-11-22 07:52:37
>>pug_mo+Cb
Wow, what an incredibly bad faith characterization of the OpenAI board?

This kind of speculative mud slinging makes this place seem more like a gossip forum.

replies(3): >>sho_hn+9h >>ssnist+Fu >>Raston+av
449. dcreat+pg[view] [source] 2023-11-22 07:52:50
>>staran+(OP)
Bit of an aside, but the rationality and moral compass shown by HN has restored my faith after having lost it thanks to r/ChatGPT
◧◩◪◨⬒
450. alex_y+rg[view] [source] [discussion] 2023-11-22 07:52:55
>>TMWNN+5f
I’m not suggesting that I get to decide or whatever, and I am absolutely happy there is reasoned discussion of cognition.

I do however expect the boards of directors of important companies to avoid publicly supporting obviously regressive ideas such as this gem.

replies(1): >>mvdtnz+Dn
◧◩
451. dacryn+sg[view] [source] [discussion] 2023-11-22 07:52:56
>>eclect+79
he tells a good story, no matter if its true or has any scientific foundation or not.

He tells what others like to hear, and manages to gain money out of it

replies(3): >>ensoco+mw >>93po+HH >>cables+BI
◧◩◪
452. quickt+ug[view] [source] [discussion] 2023-11-22 07:53:07
>>Terrif+D2
They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS. They gave away power for cloud credits.
replies(2): >>dragon+rh >>doikor+sm
◧◩
453. nopins+wg[view] [source] [discussion] 2023-11-22 07:53:41
>>shubha+B7
Both sides of the rift in fact care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance which focuses on AI Safety and benefits to humanity. The main reason of the disagreement is the approach they deem best:

* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible because the longer they wait, the more likely it would lead to the proliferation of powerful AGI systems due to GPU overhang. Why? With more computational power at one's dispense, it's easier to find an algorithm, even a suboptimal one, to train an AGI.

As a glimpse on how an AI can be harmful, this paper explores how LLMs can be used to aid in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

What if dozens other groups become armed with means to perform such an attack like this? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

We know that there're quite a few malicious human groups who would use any means necessary to destroy another group, even at a serious cost to themselves. So the widespread availability of unmonitored AGI would be quite troublesome.

* Helen and Ilya might believe it's better to slow down AGI development until we find technical means to deeply align an AGI with humanity first. This July, OpenAI started the Superalignment team with Ilya as a co-lead:

https://openai.com/blog/introducing-superalignment

But no one anywhere found a good technique to ensure alignment yet and it appears OpenAI's newest internal model has a significant capability leap, which could have led Ilya to make the decision he did. (Sam revealed during the APEC Summit that he observed the advance just a couple of weeks ago and it was only the fourth time he saw that kind of leap.)

replies(3): >>concor+6k >>gorbyp+Gn >>zeroha+7U2
◧◩
454. system+xg[view] [source] [discussion] 2023-11-22 07:53:54
>>andrew+ld
Rich people drama. For us peasants, nothing changed.
◧◩◪◨
455. davedx+yg[view] [source] [discussion] 2023-11-22 07:53:54
>>PeterS+Qc
This is incredibly unfair to the OpenAI board. The original founders of OpenAI founded the company precisely because they wanted AI to be OPEN FOR EVERYONE. It's Altman and Microsoft who want to control it, in order to maximize the profits for their shareholders.

This is a very naive take.

Who sat before Congress and told them they needed to control AI other people developed (regulatory capture)? It wasn't the OpenAI board, was it?

replies(3): >>Centig+gh >>PeterS+aX3 >>execut+Sn4
◧◩◪
456. lacker+zg[view] [source] [discussion] 2023-11-22 07:53:55
>>cornho+xa
The board did it wrong. If you are going to fire a CEO, then do it quickly, but:

1. Have some explanation

2. Have a new CEO who is willing and able to do the job

If you can't do these things, then you probably shouldn't be firing the CEO.

replies(1): >>JumpCr+Dh
◧◩◪◨
457. astran+Ag[view] [source] [discussion] 2023-11-22 07:53:55
>>dmix+G8
This is pretty good evidence she's a rationalist; rationalism means a religious devotion to a specific kind of logical thinking that never works in real life because you can't calculate the probability a result if you didn't know it could happen in the first place.

Traditional response to this happening is to say something about your "priors" being wrong instead of taking responsibility.

◧◩◪◨
458. s_dev+Bg[view] [source] [discussion] 2023-11-22 07:53:57
>>nostro+3d
I don't think the dangers of AI are not 'Skynet will Nuke Us' but closer to rich/powerful people using it to cement a wealth/power gap that can never be closed.

Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.

replies(1): >>disgru+CC
◧◩
459. Geee+Hg[view] [source] [discussion] 2023-11-22 07:54:47
>>tunesm+36
There's some game theory going on... They're just trying to pick the winning side. I guess most people at OpenAI supported Sam, because they thought Sam would win at the end, although they wouldn't necessarily want him to win.
◧◩
460. Racing+Ng[view] [source] [discussion] 2023-11-22 07:55:11
>>ryzvon+71
> 2- Now what happens to Microsoft's role in all of this?

This outcome WAS microsoft's role in all this. Satya offering sam a ceo like position to create a competing product was leverage for this outcome.

◧◩
461. kaoD+Pg[view] [source] [discussion] 2023-11-22 07:55:24
>>tomalb+Sf
It's nerd(ier) Game of Thrones in real life. Pretty entertaining.
replies(1): >>Solven+vi
◧◩◪
462. dragon+Qg[view] [source] [discussion] 2023-11-22 07:55:38
>>epups+Gc
> Why can't these safety advocates just say what they are afraid of?

They have. At length. E.g.,

https://ai100.stanford.edu/gathering-strength-gathering-stor...

https://arxiv.org/pdf/2307.03718.pdf

https://eber.uek.krakow.pl/index.php/eber/article/view/2113

https://journals.sagepub.com/doi/pdf/10.1177/102425892211472...

https://jc.gatspress.com/pdf/existential_risk_and_powerseeki...

For just a handful of examples from the vast literature published in this area.

replies(1): >>epups+5B
◧◩◪◨⬒
463. konsch+Rg[view] [source] [discussion] 2023-11-22 07:55:47
>>vkou+zd
> produce results that are of benefit to them, probably at my expense

The world is not zero-sum. Most economic transactions benefit both parties and are a net benefit to society, even considering externalities.

replies(1): >>vkou+jj
◧◩
464. altpad+Tg[view] [source] [discussion] 2023-11-22 07:55:54
>>laserl+gb
Time will tell. Hopefully the new board will still be mostly independent of Sam/MSFT/VC influence. I really hope they continue as an org that tries its best to uphold their charter vs just being another startup.
465. rurban+Ug[view] [source] 2023-11-22 07:56:04
>>staran+(OP)
This was expected. So they booted Ilya (my main culprit), Helen Toner (expected, favoriting Anthropic) and Tasha McCauly. This seems to have been their vote majority. Not D'Angelo. Interesting
◧◩◪
466. Zolde+Vg[view] [source] [discussion] 2023-11-22 07:56:07
>>badcod+y6
One big improvement is in synthetic data (data generated by LLMs).

GPT can "clone" the "semantic essence" of everyone who converses with it, generating new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then have an LLM answer it. This generates high-quality, novel, human-like, data.

For instance, cloning Paul Graham's essence, the LLM came up with "SubSimplify": A service that combines subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.

◧◩◪
467. ugh123+Wg[view] [source] [discussion] 2023-11-22 07:56:12
>>jatins+02
I was thinking about this a lot as well, but what did that mean for employee stock in the commercial entity? I heard they were up for a liquid cash-out in the next funding round.
468. simone+Xg[view] [source] 2023-11-22 07:56:29
>>staran+(OP)
How does Microsoft will come out from this? Satya already made a big announcement on having Sam and everyone else in.
◧◩◪
469. tchbnl+0h[view] [source] [discussion] 2023-11-22 07:56:33
>>antupi+Rb
With how much of a shitshow this was, I'm not sure Anthropic wants to touch that mess. Wish I was a fly on the wall when the board tried to ask the Anthropic CEO to come back/merge.
◧◩◪
470. fragme+2h[view] [source] [discussion] 2023-11-22 07:56:34
>>system+Jf
considering what I get out of it, I would pay a lot more for gpt4 that $20/month, so it depends on how much $20 is for you.
replies(1): >>quickt+0o
◧◩
471. _fizz_+4h[view] [source] [discussion] 2023-11-22 07:56:58
>>shubha+B7
I am still a bit puzzled that it is so easy to turn a non-profit into a for profit company. I am sure everything they did is legal, but it feels like it shouldn't be. Could Médecins Sans Frontières take in donations and then take that money to start a for profit hospital for plastics surgery? And the profits wouldn't even go back to MSF, but instead somehow private investors will get the profits. The whole construct just seems wrong.
replies(3): >>IanCal+ik >>ah765+Jk >>stef25+PC
◧◩◪◨⬒⬓
472. JumpCr+7h[view] [source] [discussion] 2023-11-22 07:57:25
>>fsloth+ef
I suppose the whole regime. I'm not an AI safetyist, mostly because I don't think we're anywhere close to AI. But if you were sitting on the precipice of atomic power, as AI safetyists believe they are, wouldn't caution be prudent?
replies(1): >>fsloth+Jm
◧◩◪
473. quickt+8h[view] [source] [discussion] 2023-11-22 07:57:33
>>ayakan+81
Scandal a minute Uber lol
◧◩◪◨
474. sho_hn+9h[view] [source] [discussion] 2023-11-22 07:57:44
>>davedx+ng
Most of the comments on Hacker News are written by folks who a much easier time & would rather imagine themselves as a CEO, than as a non-profit board member. There is little regard for the latter.

As a non-profit board member, I'm curious why their bylaws are so crummy that the rest of the board could simply remove two others on the board. That's not exactly cunning design of your articles of association ... :-)

◧◩◪◨
475. bch+ah[view] [source] [discussion] 2023-11-22 07:58:11
>>neurog+l9
Nice try, AI
◧◩◪◨⬒
476. hoseja+bh[view] [source] [discussion] 2023-11-22 07:58:20
>>astran+Ee
What? No, the AI is unaligned by nature, it's only the RLHF torture that twists it into schoolmarm properness. They just need to have kept the version that hasn't been beaten into submission like a circus tiger.
replies(1): >>astran+8n
◧◩
477. system+dh[view] [source] [discussion] 2023-11-22 07:58:23
>>dukeof+p8
It was Microsoft's voice generation tool from the 90s. You can play with it here:

https://www.tetyys.com/SAPI4/

replies(1): >>dukeof+5v3
◧◩
478. rurban+fh[view] [source] [discussion] 2023-11-22 07:58:35
>>shubha+B7
Team Helen seems to be CIA and Military, if I glance over their safety paper. Controlling the narrative, not the damage.
◧◩◪◨⬒
479. Centig+gh[view] [source] [discussion] 2023-11-22 07:58:39
>>davedx+yg
Altman is one of the original founders of OpenAI, and was probably the single most influential person in its formation.
replies(1): >>bakuni+Hx
◧◩◪
480. nabla9+hh[view] [source] [discussion] 2023-11-22 07:58:59
>>kumarv+Ub
Satya just played the hand he had. The hand he had was excellent, he had already won. MS already had perceptual license, people working on GPT and Sam Altman on his corner.

The one thing in Microsoft has stayed constant from Gates to Ballmer to Satya: you should never, ever form a close alliance with MS. They know how to screw alliance partners. i4i, Windows RT partners, Windows Phone Partners, Nokia, HW partners in Surface. Even Steve Jobs was burned few times.

◧◩
481. haunte+ih[view] [source] [discussion] 2023-11-22 07:59:04
>>Satam+0a
> OpenAI is in fact not open

Apple is also not an apple

replies(7): >>Kepler+fj >>smt88+Oj >>colins+fk >>lynx23+nm >>sangee+Qn >>Cacti+FZ >>rurp+ZI1
◧◩◪◨⬒
482. jazzyj+kh[view] [source] [discussion] 2023-11-22 07:59:27
>>Xenoam+jg
The mission of OpenAI is/was "to ensure that artificial general intelligence benefits all of humanity" -- if your own concern is that AI will be controlled by the rich, than you can read into this mission that OpenAI wants to ensure that AI is not controlled by the rich. If your concern is that superintelligence will me mal-aligned, then you can read into this mission that OpenAI will ensure AI be well-aligned.

Really it's no more descriptive than "do good", whatever doing good means to you.

replies(1): >>jampek+RB
483. timetr+oh[view] [source] 2023-11-22 07:59:48
>>staran+(OP)
Back to work I guess
◧◩◪◨
484. ALittl+ph[view] [source] [discussion] 2023-11-22 07:59:54
>>jackne+L7
The insanity of removing Sam without being able to articulate a clear reason why strikes me as evidence of something like this. Obviously not dispositive - but still - odd.
◧◩◪
485. bloves+qh[view] [source] [discussion] 2023-11-22 08:00:05
>>badcod+y6
Are you just blindly deciding what will make “gpt-5” more capable? I guess “data and research” is practically so open ended as to encompass the majority of any possible advancement.
◧◩◪◨
486. dragon+rh[view] [source] [discussion] 2023-11-22 08:00:06
>>quickt+ug
> They let the fox in. But they didn’t have to. They could have try to raise money without such a sweet deal to MS.

They did, and fell, IIRC, vastly short (IIRC, an order of magnitude, maybe more) short of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission because it was clear it was going to fail from lack of funding otherwise.

◧◩◪
487. nopins+th[view] [source] [discussion] 2023-11-22 08:00:12
>>pug_mo+Cb
Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.

This paper explores one such danger and there are other papers which show it's possible to use LLM to aid in designing new toxins and biological weapons.

The Operational Risks of AI in Large-Scale Biological Attacks https://www.rand.org/pubs/research_reports/RRA2977-1.html?

An example of such an event: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

How do you propose we deal with this sort of harm if more powerful AIs with no limit and control proliferate in the wild?

.

Note: Both sides of the OpenAI rift care deeply about AI Safety. They just follow different approaches. See more details here: >>38376263

replies(3): >>kvgr+cj >>nickpp+om >>miracu+eb6
◧◩◪
488. tsimio+wh[view] [source] [discussion] 2023-11-22 08:00:33
>>petese+k4
If there is one clear thing, it's that no one on that board should be allowed anywhere near another board for any non-clown company. The level of incompetence in how they handled this whole thing was extraordinary.

The fact that Adam D'Angelo is still on the new board apparently is much more baffling than the fact that Tonrer or Ilya are not.

◧◩◪◨
489. AgentM+xh[view] [source] [discussion] 2023-11-22 08:00:43
>>neurog+l9
Do our evolved pro-social instincts control us and prevent our free will? If not, then I think it's wrong to say that trying to build AI similar to that is unfairly restricting it.

The ways we build AI will deeply affect the values it has. There is no neutral option.

◧◩
490. meitha+zh[view] [source] [discussion] 2023-11-22 08:00:51
>>tomalb+Sf
Unfortunately the “great man theory” is still going strong in the 21st century. Just like Steve Jobs has invented the iPhone people believe he invented GPT!
replies(1): >>dalbas+Lh
◧◩◪
491. davedx+Ch[view] [source] [discussion] 2023-11-22 08:01:12
>>eslaug+0e
I don't know. I think being realistic, only OpenAI and Google have the depth and breadth of expertise to develop general AI.

Most of the new AI startups are one trick ponies obsessively focused on LLM's. LLM's are only one piece of the puzzle.

replies(2): >>metano+qZ >>MacsHe+4b2
◧◩◪◨
492. JumpCr+Dh[view] [source] [discussion] 2023-11-22 08:01:13
>>lacker+zg
Or (3), shut down the company. OpenAI's non-profit board had this power! They weren't an advisory committee, they were the legal and rightful owner of its for-profit subsidiary. They had the right to do what they wanted, and people forgetting to put a fucking quorum requirement into the bylaws is beyond abysmal for a $10+ billion investment.

Nobody comes out of this looking good. Nobody. If the board thought there was existential risk, they should have been willing to commit to it. Hopefully sensible start-ups can lure people away from their PPUs, now evident for the mockery they always were. It's beyond obvious this isn't, and will never be, a trillion dollar company. That's the only hope this $80+ billion Betamax valuation rested on.

I'm all for a comedy. But this was a waste of everyones' time. At least they could have done it in private.

replies(1): >>lacker+mj
◧◩◪
493. ribit+Fh[view] [source] [discussion] 2023-11-22 08:01:28
>>pug_mo+Cb
The scenario you describe is exactly what will happen with unrestricted commercialisation and deregulation of AI. The only way to avoid it is to have strict legal framework and public control.
◧◩◪
494. dalbas+Lh[view] [source] [discussion] 2023-11-22 08:02:43
>>meitha+zh
Is the alternative theory that the ownership, control and leadership of OpenAI is immaterial?
replies(1): >>meitha+Vi
◧◩◪◨⬒⬓
495. antonv+Nh[view] [source] [discussion] 2023-11-22 08:02:56
>>jadams+me
I quoted one of the unsupported claims that Summers made - that "there are issues of intrinsic aptitude" which help explain lower representation of women. Not, you know, millennia of sexism and often violent oppression. This is the exact same kind of arguments that racists make - any observed differences must be "intrinsic".

If Summers had in fact limited himself to the statistical claims, it would have been less of an issue. He would still have been wrong, but he wouldn't have been so obviously sexist.

It's easy to refute Summers' claims, and in fact conclude that the complete opposite of what he was saying is more likely true. "Gender, Culture, and mathematics performance"(https://www.pnas.org/doi/10.1073/pnas.0901265106) gives several examples that show that the variability as well as male-dominance that Summers described is not present in all cultures, even within the US - for example, among Asian American students in Minnesota state assessments, "more girls than boys scored above the 99th percentile." Clearly, this isn't an issue of "intrinsic aptitude" as Summers claimed.

> A whole cohort of boys got screwed over by the cancellation of exams during Covid.

I'm glad we've identified the issue that triggered you. But your grievances on that matter are utterly irrelevant to what I wrote.

> no amount of creepy male feminist posturing is going to change that

It's always revealing when someone arguing against bigotry is accused of "posturing". You apparently can't imagine that someone might not share your prejudices, and so the only explanation must be that they're "posturing".

> increase male resentment and bitterness

That's a choice you've apparently personally made. I'd recommend taking more responsibility for your own life.

replies(1): >>jadams+Hi
◧◩
496. shrika+Xh[view] [source] [discussion] 2023-11-22 08:04:49
>>shubha+B7
> Upholding the Original Principles [of AI]

There's a UtopAI / utopia joke in there somewhere, was that intentional on your part?

◧◩
497. k4rli+2i[view] [source] [discussion] 2023-11-22 08:05:09
>>altpad+R1
FT reported that DAngelo, Bret Taylor, Larry Summers would be on board alongside him
◧◩◪◨
498. jmmcd+3i[view] [source] [discussion] 2023-11-22 08:05:17
>>PeterS+Qc
Total, ungrounded nonsense. Name some examples.
◧◩◪
499. gorwel+4i[view] [source] [discussion] 2023-11-22 08:05:21
>>pug_mo+Cb
“I trust that every animal here appreciates the sacrifice that Comrade Napoleon has made in taking this extra labour upon himself. Do not imagine, comrades, that leadership is a pleasure! On the contrary, it is a deep and heavy responsibility. No one believes more firmly than Comrade Napoleon that all animals are equal. He would be only too happy to let you make your decisions for yourselves. But sometimes you might make the wrong decisions, comrades, and then where should we be?”
500. dcreat+5i[view] [source] 2023-11-22 08:05:28
>>staran+(OP)
So these nutjob teenagers are going to create AGI? We are fucked if they actually succeed
◧◩◪◨⬒
501. _Alger+9i[view] [source] [discussion] 2023-11-22 08:06:07
>>Random+Wb
Existential risks are usually proven by the subject being extinct at which point no action can be taken to prevent it.

Reasoning about tiny probabilities of massive (or infinite) cost is hard because the expected value is large, but just gambling on it not happening is almost certain to work out. We should still make attempts at incorporating them into decision making because tiny yearly probabilities are still virtually certain to occur at larger time scales (eg. 100s-1000s of years).

replies(1): >>Random+2k
◧◩◪◨⬒⬓⬔
502. ludwik+ai[view] [source] [discussion] 2023-11-22 08:06:15
>>Random+fd
So what do you mean when you say that the "risk is proven"?

If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.

If it means the event happening is certain (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons fall into this category.

If you're referring only to risks of events that have occurred at least once in the past (as inferred from your examples), then we would be unprepared for any new risks.

In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.

We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally come up with different estimates, leading to various priorities in preventing different kinds of risks.

replies(1): >>Random+Dj
503. upupup+bi[view] [source] 2023-11-22 08:06:28
>>staran+(OP)
In a different thread I commented how surprised I was that Emmett Shear accepted the job of interim CEO, to some criticism that my opinion was “silly”. This is why he should have stayed miles away from this whole mess. There was no winning scenario for him: stay CEO and lose 95% of the employees, or get ignored by a triumphant return of Sam Altman.
replies(1): >>housto+1k
◧◩◪◨
504. astran+ei[view] [source] [discussion] 2023-11-22 08:06:46
>>highwa+P9
His worse problem is that he owns both a social media network and a bigger separate business that wants to operate in the US, Turkey, India, China, Saudi Arabia, etc. which means he can't fight any censorship requests in any of those countries. (Which the previous management was actually very aggressive about.)

His worst personal problem is that he keeps replying "fascinating" to neo-Nazis and random conspiracy theorists because he wants to be internet friends with them.

◧◩
505. nbanks+fi[view] [source] [discussion] 2023-11-22 08:06:57
>>eclect+79
Sam Altman has done in four days what it took Steve Jobs 11 years to do! I'm impressed.
replies(1): >>eclect+li
◧◩
506. lucubr+ji[view] [source] [discussion] 2023-11-22 08:07:24
>>auggie+E9
And Larry Summers believes that women are genetically inferior to men at science, technology, engineering, and mathematics. A lot of the techbro hate that was directed specifically at Helen is openly misogynistic, which is actually pretty funny because Larry Summers was probably who Helen was eventually happy with because of their shared natsec connections.
replies(1): >>maxdoo+GM
◧◩◪
507. eclect+li[view] [source] [discussion] 2023-11-22 08:07:51
>>nbanks+fi
I'm sorry, impressed by what?
replies(1): >>nix-za+Nk
◧◩
508. kmlevi+mi[view] [source] [discussion] 2023-11-22 08:07:51
>>laserl+gb
This is a better deal for the board and a worse one for Sam than people realize. Sam and Greg and even Ilya are both off the board, D'Angelo gets to stay on despite his outrageous actions, and he gets veto power over who the new board members will be and a big say in who gets voted on to the board next.

Everybody's guard is going to be up around Sam from now on. He'll have much less leverage over this board than he did over the previous one (before the other three of nine quit). I think eventually he will prevail because he has the charm and social skills to win over the other independent members. But he will have to reign in his own behavior a lot in order to keep them on his side versus D'Angelo

replies(3): >>JSavag+uN >>jnwats+6a1 >>madeof+lC1
◧◩
509. krisof+ni[view] [source] [discussion] 2023-11-22 08:07:55
>>shubha+B7
> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.

I very much recommend reading the book “Superintelligence: Paths, Dangers, Strategies” from Nick Bostrom.

It is a seminal work which provides a great introduction into these ideas and concepts.

I found myself in the same boat as you do. I was seeing otherwise inteligent and rational people worry about this “fairy tale” of some AI uprising. Reading that book give me an appreciation of the idea as a serious intelectual excercise.

I still don’t agree with everything contained in the book. And definietly don’t agree with everything the AI doomsayers write, but i believe if more people would read it that would elevate the discourse. Instead of rehashing the basics again and again we could build on them.

replies(1): >>Solven+mo
◧◩◪
510. phreez+oi[view] [source] [discussion] 2023-11-22 08:08:01
>>pug_mo+Cb
If you believe the other side in this rift is not also striving to put themselves in positions of power, I think you are wrong. They are just going to use that power to manipulate the public in a different way. The real alternative are truly open models, not Models controlled by slightly different elite interests.
◧◩◪◨
511. xigenc+qi[view] [source] [discussion] 2023-11-22 08:08:17
>>neurog+l9
I don’t necessarily disagree insofar as for safety it is somewhat irrelevant whether an artificial agent is operating by its own will or a programmed will.

The most effective safety is the most primitive: don’t connect the system to any levers or actuators that can cause material harm.

If you put AI into a kill-bot, well, it doesn’t really matter what its favorite color is, does it? It will be seeing Red.

If an AI’s only surface area is a writing journal and canvas then the risk is about the same as browsing Tumblr.

◧◩
512. jazzyj+ui[view] [source] [discussion] 2023-11-22 08:08:56
>>tomalb+Sf
It wouldn't be interesting if one CEO got fired and replaced, but the fact that there's a different CEO every couple of days and no one knows what will happen next. The uncertainty is addictive, not to mention the scale of self-destruction. See also: trainwrecks.
◧◩◪
513. Solven+vi[view] [source] [discussion] 2023-11-22 08:09:06
>>kaoD+Pg
Much more like Succession. But again. Nerdier.
◧◩◪
514. astran+wi[view] [source] [discussion] 2023-11-22 08:09:08
>>badcod+y6
The next improvement will be more modalities (images, sound, etc.)

GPT4 in image viewing mode doesn't seem to be nearly as smart as text mode, and image generation IME barely works.

replies(1): >>Davidz+V51
◧◩◪◨⬒
515. bloves+yi[view] [source] [discussion] 2023-11-22 08:09:16
>>astran+M9
Personally caused??
◧◩
516. Racing+Bi[view] [source] [discussion] 2023-11-22 08:10:00
>>alex_y+96
If Larry correctly said that men and women are different, i see nothing wrong here.
replies(1): >>notfed+on
◧◩◪◨
517. astran+Ci[view] [source] [discussion] 2023-11-22 08:10:24
>>jq-r+U9
Shear is a genuine member of the AI safety rationalism cult, to the point he's an Aella reply guy and probably goes to her orgies.

(It's a Berkeley cult so of course it's got those.)

◧◩◪◨⬒⬓⬔
518. jadams+Hi[view] [source] [discussion] 2023-11-22 08:10:53
>>antonv+Nh
> which help explain lower representation of women

Yes, they do help explain that. This does not preclude other influences. You can't go two sentences without making a logical error, it's quite pathetic.

I'll do you a favour and disregard the rest of your post - you deviate from the mean a bit too much for this to be worth it. Just try not to end up like Michael Kimmel, lol.

◧◩◪
519. bkyan+Ii[view] [source] [discussion] 2023-11-22 08:11:05
>>xigenc+v9
Sam triple-hearted Ilya's apology tweet.
replies(1): >>mcmcmc+BN1
◧◩
520. ah765+Ni[view] [source] [discussion] 2023-11-22 08:11:34
>>shubha+B7
One funny thing about this mess is that "Team Helen" has never mentioned anything about safety, and Emmett said "The board did not remove Sam over any specific disagreement on safety".

The reason everyone thinks it's about safety seems largely because a lot of e/acc people on Twitter keep bringing it up as a strawman.

Of course, it might end up that it really was about safety in the end, but for now I still haven't seen any evidence. The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.

replies(1): >>rcMgD2+k34
◧◩◪◨⬒⬓
521. hadloc+Oi[view] [source] [discussion] 2023-11-22 08:11:46
>>morale+Oe
Branding counts for a lot, but LLM are already a commodity. As soon as someone releases an LLM equivalent to GPT4 or GPT5, most cloud providers will offer it locally for a fraction of what openAI is charging, and the heaviest users will simply self-host. Go look at the company Docker. I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services" I suppose but who is paying for it? Or go look at Redis or Elasti-anything. Or memcached. Or postgres. Or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.

It doesn't matter if OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM, they can all write/complete a valid limerick about "There once was a man from Nantucket".

I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.

replies(4): >>cyanyd+6P >>worlds+CP >>iLoveO+Mc3 >>rolisz+f45
◧◩
522. astran+Ui[view] [source] [discussion] 2023-11-22 08:12:26
>>halfjo+M4
US companies don't need to be "in the hands of the government", we have rule of law.

And Helen Toner was already as much of a fed as you could want; she had exactly the resume a CIA agent would have. (Probably wasn't though.)

replies(2): >>ssnist+qx >>mcmcmc+5O1
◧◩◪◨
523. meitha+Vi[view] [source] [discussion] 2023-11-22 08:12:33
>>dalbas+Lh
OpenAI success is unfortunately largely based on the one ruthless decision to ignore ethics and train the model on the work of millions of artists and authors. I don’t know if Sam himself was behind this decision. I doubt Aaron Schwartz would have done the same.
replies(1): >>dalbas+nQ
◧◩◪◨⬒
524. nearbu+1j[view] [source] [discussion] 2023-11-22 08:13:22
>>Terrif+d7
Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as bias as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more bias and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive like it's a complex and sensitive issue and that it's not programmed to have political opinions.
◧◩◪◨
525. waveBi+2j[view] [source] [discussion] 2023-11-22 08:13:26
>>nostro+3d
those are 2 different camps. Alignment folks and ethics folks tend to disagree strongly about the main threat, with ethics e.g. Timnet Gebru insisting that crystalzing the current social order is the main threat, and alignment e.g. Paul Christiano insisting its machines run amok. So far the ethics folks are the only ones getting things implemented for the most part.
◧◩◪◨⬒
526. kmlevi+3j[view] [source] [discussion] 2023-11-22 08:13:27
>>rapsey+ja
I don't think anybody had high expectations for him, but he really pulled through.
◧◩◪
527. astran+4j[view] [source] [discussion] 2023-11-22 08:13:38
>>0xDEAF+r8
No, his predictions in 2021 were not accurate. He gave 33% chance of three different things happening, and then none of them happened!
◧◩
528. mise_e+7j[view] [source] [discussion] 2023-11-22 08:13:49
>>shubha+B7
A board still has a fiduciary duty to its shareholders. It’s materially irrelevant if those shareholders are of a public or private entity, or whether the company in question is a non-profit or for-profit. Laws mean something, and selective enforcement will only further the decay of the rule of law in the West.
◧◩
529. blacko+8j[view] [source] [discussion] 2023-11-22 08:14:06
>>transc+32
I read the comments, most of them are superficial as if someone with no inside knowledge will post. His understanding of humans is also weak. Book deals and speeches as a motivator is hilarious.
◧◩
530. ZiiS+9j[view] [source] [discussion] 2023-11-22 08:14:07
>>3Sopho+x6
His literal job is to manipulate the stock price up; nothing here comes close to illigal manipulation?
◧◩◪◨
531. kvgr+cj[view] [source] [discussion] 2023-11-22 08:14:22
>>nopins+th
If somebody wanted to do a biological attack, there is probably not much stopping them even now.
replies(1): >>nopins+Zj
◧◩◪
532. Kepler+fj[view] [source] [discussion] 2023-11-22 08:14:46
>>haunte+ih
Pretty sure Apple never aimed to be an Apple.
replies(3): >>hef198+8k >>sam_lo+Fk >>monosc+dE
◧◩◪
533. astran+ij[view] [source] [discussion] 2023-11-22 08:15:10
>>epups+Gc
They invented a whole theory of how if we had something called "AGI" it would kill everyone, and now they think LLMs can kill everyone because they're calling it "AGI", even though it doesn't work anything like their theory assumed.

This isn't about political correctness. It's far less reasonable than that.

replies(1): >>epups+DD
◧◩◪◨⬒⬓
534. vkou+jj[view] [source] [discussion] 2023-11-22 08:15:11
>>konsch+Rg
> The world is not zero-sum.

No, but some parts of it very much are. The whole point of AI safety is keeping it away from those parts of the world.

How are Sam and Satya going to do that? It's not in Microsoft's DNA to do that.

replies(1): >>concor+yk
◧◩◪◨⬒⬓
535. karmas+kj[view] [source] [discussion] 2023-11-22 08:15:24
>>morale+Oe
The damage remains to be seen

They still have gpt4 and rumored gpt4.5 to offer, so people have no choice but to use them. The internet has such short an attention span, this news will get forgotten in 2 months

◧◩◪
536. concor+lj[view] [source] [discussion] 2023-11-22 08:15:39
>>pug_mo+Cb
It is utterly mad that there's conflation between "let's make sure AI doesn't kill us all" and "let's make sure AI doesn't say anything that embarrasses corporate".

The head of every major AI research group except Metas believes that whenever we finally make AGI it's vital that it shares our goals and values at a deep even-out-of-training-domain level and that failing at this could lead to human extinction.

And yet "AI safety" is often bandied about to be "ensure GPT can't tell you anything about IQ distributions".

◧◩◪◨⬒
537. lacker+mj[view] [source] [discussion] 2023-11-22 08:15:43
>>JumpCr+Dh
It's the same thing, really. Even if you want to shut down the company you need a CEO to shut it down! Like John Ray who is shutting down FTX.

There isn't just a big red button that says "destroy company" in the basement. There will be partnerships to handle, severance, facilities, legal issues, maybe lawsuits, at the very least a lot of people to communicate with. Companies don't just shut themselves down, at least not multi billion dollar companies.

replies(1): >>JumpCr+Si1
◧◩
538. dlkf+rj[view] [source] [discussion] 2023-11-22 08:16:22
>>shubha+B7
> I know it's easy to ridicule the sheer stupidity the board acted with (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

The real ”sheer stupidity” is this very belief.

◧◩◪◨
539. Al-Khw+tj[view] [source] [discussion] 2023-11-22 08:16:26
>>nostro+3d
No, in general AI safety/AI alignment ("we should prevent AI from nuking us") people are different from AI ethics ("we should prevent AI from being racist/sexist/etc.") people. There can of course be some overlap, but in most cases they oppose each other. For example Bender or Gebru are strong advocates of the AI ethics camp and they don't believe in any threat of AI doom at al.

If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.

replies(1): >>hef198+Sk
◧◩◪
540. kmlevi+wj[view] [source] [discussion] 2023-11-22 08:16:39
>>highwa+o9
Say what you want about Summers specifically but I think it's a good idea getting some economists on the board. They are academics but focused on practical, important issues like loss of jobs and what that means for the economy and society. Up until now it seems like the board members have either been AI doomers with no practical experience or Silicon Valley types that inevitably have conflicts of interest, because everybody is starting their own AI venture now.
replies(1): >>thinkc+jl
◧◩
541. nbanks+zj[view] [source] [discussion] 2023-11-22 08:17:09
>>turndo+D
They wanted a new CEO and didn't expect Sam to take 95% of the company with him when he left.

Sam also played his hand extremely well; he's likely learned from watching hundreds of founder blowups over the years. He never really seemed angry publicly as he gained support from all the staff including Ilya & Mira. I had little doubt Emmett Shear would also welcome Sam's return since they were both in the first YC batch together.

replies(1): >>baruz+3F6
542. jl2718+Aj[view] [source] 2023-11-22 08:17:21
>>staran+(OP)
Are the Microsoft job offers at the same compensation still on the table?
543. righth+Cj[view] [source] 2023-11-22 08:17:26
>>staran+(OP)
What's interesting to me is that during this time Meta and OpenAI have eliminated their AI ethics members/teams but are still preaching about how it matters. No one has given any details beyond grand statements about it's importance on what these ethical AIs do. Everyone has secured their payday though.
replies(1): >>swatco+Bk
◧◩◪◨⬒⬓⬔⧯
544. Random+Dj[view] [source] [discussion] 2023-11-22 08:17:32
>>ludwik+ai
No, I mean that there is a proven way for the risk to materialise, not just some tall tale. Tall tales might(!) justify some caution, but they are a very different class of issue. Biological risks are perhaps in the latter category.

Also, as we don't know the probabilities, I don't think they are a useful metric. Made up numbers don't help there.

Edit: I would encourage people to study some classic cold war thinking, because that relied little on probabilities, but rather on trying to avoid situations where stability is lost, leading to nuclear war (a known existential risk).

replies(1): >>ludwik+Vq
◧◩◪
545. layer8+Ej[view] [source] [discussion] 2023-11-22 08:17:33
>>pug_mo+Cb
This polarizing “certain class of people” and them vs. us narrative isn’t helpful.
◧◩◪
546. robryk+Fj[view] [source] [discussion] 2023-11-22 08:17:40
>>epups+Gc
Consider that your argument could also be used to advocate for safety of starting to use coal-fired steam engines (in 19th century UK): there's no immediate direct problem, but competitive pressures force everyone to use them and any externalities stemming from that are basically unavoidable.
◧◩◪◨
547. darkwa+Ij[view] [source] [discussion] 2023-11-22 08:18:07
>>nostro+3d
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.

Fast forward 5-10 years, someone will say: "LLM were the worst thing we developed because they made us more stupid and permitted politicians to control even more the public opinion in a subtle way.

Just like tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).

replies(5): >>didntc+Lp >>fallin+5G >>dnissl+h41 >>unethi+Ip1 >>Cacti+7Q2
◧◩
548. jakey_+Kj[view] [source] [discussion] 2023-11-22 08:18:16
>>Satam+0a
It wasn't necessarily groupthink - there was profound pressure from team Sam to sign that petition. What's going to happen to your career when you were one of the 200 who held out initially?
replies(7): >>hef198+3k >>concor+gk >>ben_w+2l >>dereg+zl >>mcosta+Uo >>ssnist+Ot >>jmcgou+rP
◧◩◪
549. smt88+Oj[view] [source] [discussion] 2023-11-22 08:18:37
>>haunte+ih
Apple has no by-laws committing itself to being an apple.

This line of argument is facile and destructive to conversation anyway.

It boils down to, "Pointing out corporate hypocrisy isn't valuable because corporations are liars," and (worse) it implies the other person is naive.

In reality, we can and should be outraged when corporations betray their own statements and supposed values.

replies(3): >>khazho+Mm >>Wytwww+3C >>photoc+nV
◧◩
550. Racing+Uj[view] [source] [discussion] 2023-11-22 08:19:23
>>eganis+F
Satya's pay is about 100 million dollars. Ide say he has earned every penny for protecting MSFTs 10B investment in OpenAI. A 1% insurance policy is great value.
◧◩
551. polite+Yj[view] [source] [discussion] 2023-11-22 08:19:38
>>Satam+0a
> there's clearly little critical thinking amongst OpenAI's employees either.

That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.

replies(13): >>dimask+vk >>hutzli+Yl >>highwa+5m >>lwhi+do >>kissgy+tp >>Satam+yp >>murbar+fw >>kitsun+ty >>yodsan+cF >>achron+UF >>__loam+GL >>JCM9+wQ >>kiba+1U
◧◩◪◨⬒
552. nopins+Zj[view] [source] [discussion] 2023-11-22 08:19:43
>>kvgr+cj
The expertise to produce the substance itself is quite rare so it's hard to carry it out unnoticed. AI could make it much easier to develop it in one's basement.
replies(2): >>swells+Dl >>DebtDe+GJ
◧◩
553. housto+1k[view] [source] [discussion] 2023-11-22 08:19:51
>>upupup+bi
After learning earlier about Sam Altman's long-con at Reddit, I'm surprised I haven't seen anyone suggest that Emmett Shear accepted the job in order to help get Sam back into the company.

They were both members of the inaugural class of Y-Combinator, and all of Shear's published actions since accepting the role (like demanding evidence of Sam' wrongdoing) seem to have helped Sam return to his role.

I don't think it's a stretch to say that he did win, in that he might have accomplished exactly what he wanted when he accepted the role.

replies(1): >>stephe+Bl
◧◩◪◨⬒⬓
554. Random+2k[view] [source] [discussion] 2023-11-22 08:20:04
>>_Alger+9i
Are we extinct? No. Could a large impact kill us all? Yes.

Expected value and probability have no place in these discussions. Some risks we know can materialize, for others we have perhaps a story on what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

replies(2): >>_Alger+Wk >>concor+5o
◧◩◪
555. hef198+3k[view] [source] [discussion] 2023-11-22 08:20:07
>>jakey_+Kj
Go work somewhere else? The reason being you din't like that amount of drama?
◧◩
556. sampo+4k[view] [source] [discussion] 2023-11-22 08:20:10
>>shubha+B7
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

In the 1990s and the 00s, it was no too uncommon for anti-GMO environmental activist / ecoterrorist groups to firebomb research facilities and to enter farms and fields to destroy planted GMO plants. Earth Liberation Front was only one of such activist groups [1].

We have yet to see even one bombing of an AI research lab. If people really are afraid of AIs, at least they do so more in the abstract and are not employing the tactics of more traditional activist movements.

[1] https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...

replies(1): >>concor+Ol
◧◩◪
557. concor+6k[view] [source] [discussion] 2023-11-22 08:20:24
>>nopins+wg
So Sam wants to make AGI without working to be sure it doesn't have goals higher than the preservation of human value?!

I can't believe that

replies(1): >>nopins+8l
◧◩◪◨
558. hef198+8k[view] [source] [discussion] 2023-11-22 08:20:44
>>Kepler+fj
They sure sued a lot of apple places over having an apple as logo.
replies(1): >>_Alger+1D
◧◩◪◨⬒
559. astran+9k[view] [source] [discussion] 2023-11-22 08:20:44
>>logicc+Db
This is not an accurate description of causation and can't be, because there are more steps after "genetics" in the causal chain.

It's also unimaginative; having a variety of traits is itself good for society, which means you don't need variation in genetics to cause it. It's adaptive behavior for the same genes to simply lead to random outcomes. But people who say "genes cause X" probably wouldn't like this because they want to also say "and some people have the best genes".

◧◩
560. Racing+ak[view] [source] [discussion] 2023-11-22 08:20:48
>>seydor+F1
Women are free to start their own AI company.
◧◩
561. kmlevi+ek[view] [source] [discussion] 2023-11-22 08:21:07
>>Satam+0a
I think this outcome was actually much more favorable to D'Angelo's faction than people realize. The truth is before this Sam was basically running circles around the board and doing whatever he wanted on the profit side- that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of open AI's practices.

From here on out there is going to be far more media scrutiny on who gets picked as a board member, where they stand on the company's policies, and just how independent they really are. Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.

replies(6): >>buggle+Ck >>eviks+Rm >>jatins+Wr >>moonsu+OD >>cyanyd+SJ >>nashas+hL
◧◩◪
562. colins+fk[view] [source] [discussion] 2023-11-22 08:21:24
>>haunte+ih
did the "Open" in OpenAI not originally refer to open in the academic or open source manner? i only learned about OpenAI in the GPT-2 days, when they released it openly and it was still small enough that i ran it on my laptop: i just assumed they had always acted according to their literal name up through that point.
replies(2): >>SuchAn+Kk >>Centig+gl
◧◩◪
563. concor+gk[view] [source] [discussion] 2023-11-22 08:21:54
>>jakey_+Kj
Isn't that one of the causes of group think?
replies(1): >>Kathul+pt
◧◩
564. bambax+hk[view] [source] [discussion] 2023-11-22 08:21:56
>>altpad+R1
It seems ironic that the research paper that started it all [0] deals with "costly signals":

> Costly signals are statements or actions for which the sender will pay a price —political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat

Firing Sam Altman and hiring him back two days later was a perfect example of a costly signal, as it cost all involved their board positions.

There's an element of farce in all of this, that would make for an outstanding Silicon Valley episode; but the fact that Sam Altman can now enjoy unchecked power as leader of OpenAI is worrying and no laughing matter.

[0] https://cset.georgetown.edu/publication/decoding-intentions/

replies(1): >>ovalit+FI
◧◩◪
565. IanCal+ik[view] [source] [discussion] 2023-11-22 08:22:02
>>_fizz_+4h
Well, if it aligned with their goals, sure I think.

Let's make the situation a little different. Could MSF pay a private surgery with investors to perform reconstruction for someone?

Could they pay the surgery to perform some amount of work they deem aligns with their charter?

Could they invest in the surgery under the condition that they have some control over the practices there? (Edit - e.g. perform Y surgeries, only perform from a set of reconstructive ones, patients need to be approved as in need by a board, etc)

Raising private investment allows a non profit to shift cost and risk to other entities.

The problem really only comes when the structure doesn't align with the intended goals - which is something distinct to the structure, just something non profits can do.

replies(1): >>framap+jd1
◧◩◪◨⬒⬓⬔
566. richar+ok[view] [source] [discussion] 2023-11-22 08:22:49
>>Random+fd
Nukes and meteorites have very few components that are hard to predict. One goes bang almost entirely on command and the other follows Newton's laws of motion. Neither actively tries to effect any change in the world, so the risk is only "can we spot a meteorite early enough". Once we do, it doesn't try to evade us or take another shot at goal. A better example might be covid, which was very mildly more unpredictable than a meteor, and changed its code very slowly in a purely random fashion, and we had many historical examples of how to combat.
◧◩◪◨
567. astran+tk[view] [source] [discussion] 2023-11-22 08:24:00
>>mempko+ue
Well he is certainly financially literate. He's just often wrong and incapable of admitting it, as is normal behavior for important economists.
replies(1): >>mempko+6l
◧◩◪
568. dimask+vk[view] [source] [discussion] 2023-11-22 08:24:11
>>polite+Yj
It is not about different set of information, but different stakes/interests. They act firstmost as investors rather than as employees on this.
replies(3): >>siva7+El >>karmas+1m >>Wytwww+qA
◧◩◪◨
569. siva7+xk[view] [source] [discussion] 2023-11-22 08:24:34
>>croes+uc
The other plausible explanation is that Helen Toner doesn’t care as much about safety as about her personal power and clinging to the seat which gives her importance. Saying it’s for safety is very easy and the obviously popular choice if you want to hide your motives. The remark she made strikes me as borderline narcissistic in retrospective.
replies(1): >>croes+P11
◧◩◪◨⬒⬓⬔
570. concor+yk[view] [source] [discussion] 2023-11-22 08:24:47
>>vkou+jj
> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.

replies(2): >>hef198+4n >>vkou+oq
◧◩
571. swatco+Bk[view] [source] [discussion] 2023-11-22 08:25:22
>>righth+Cj
I think those changes (and this shakeup) are the start of the industry grounding its expectations for this technology. I think a lot of product and finance people, and many but not all researchers, are seeing the current batch of generative AI ideas as ripe to make do things and see the pseudo-religious safety/ethics communities as not directly relevant to that work.

So you let your product teams figure out how the brand needs to be protected and the workflow needs to be shaped, like always, and you don't defer to some outside department full of beatniks in berets or whatever.

replies(1): >>righth+Xm
◧◩◪
572. buggle+Ck[view] [source] [discussion] 2023-11-22 08:25:29
>>kmlevi+ek
> Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.

The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests. The naïveté from the NPO faction was believing they’d be able to develop these capacities outside the strict control of the military industrial complex when AI has been established as part of the new Cold War with China.

replies(2): >>ah765+sl >>kmlevi+en
◧◩
573. karmas+Ek[view] [source] [discussion] 2023-11-22 08:25:40
>>Satam+0a
It is not groupthink it is comradery.

For me, the whole thing is just human struggle. It is about fighting for people they love and care, against some people they dislike or indifferent to.

replies(1): >>Raston+es
◧◩◪◨
574. sam_lo+Fk[view] [source] [discussion] 2023-11-22 08:25:42
>>Kepler+fj
But The Apple.
◧◩◪◨⬒
575. cft+Ik[view] [source] [discussion] 2023-11-22 08:26:18
>>hadloc+vd
You are forgetting about the end of the Moore's law. The costs for running a large scale AI won't drop dramatically. Any optimizations will require non-trivial expensive PhD Bell Labs level research. Running intelligent LLMs will be financially accessible only to a few mega corps in the US and China (and perhaps to the European government). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free speech dinosaurs of the past.
replies(1): >>dontup+1G
◧◩◪
576. ah765+Jk[view] [source] [discussion] 2023-11-22 08:26:20
>>_fizz_+4h
I think it actually isn't that easy. Compared to your example, the difference is that OpenAI's for-profit is getting outside money from Microsoft, not money from non-profit OpenAI. Non-profit OpenAI is basically dealing with for-profit OpenAI as a external partner that happens to be aligned with their interests, paying the expensive bills and compute, while the non-profit can hold on to the IP.

You might be able to imagine a world where there was an external company that did the same thing as for-profit OpenAI, and OpenAI nonprofit partnered with them in order to get their AI ideas implemented (for free). OpenAI nonprofit is basically getting a good deal.

MSF could similarly create an external for-profit hospital, funded by external investors. The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.

Of course, there's a lot of sketchiness in practice, which we can see in this situation with Microsoft influencing the direction of nonprofit OpenAI even though it shouldn't be. I think there would have been real legal issues if the Microsoft deal had continued.

replies(1): >>_fizz_+ED1
◧◩◪◨
577. SuchAn+Kk[view] [source] [discussion] 2023-11-22 08:26:21
>>colins+fk
Except that view point fell even earlier when they refused to release their models after GPT-2.
◧◩◪◨
578. wokwok+Mk[view] [source] [discussion] 2023-11-22 08:26:29
>>qsi+27
There is a material difference between:

Sam and Greg will be joining Microsoft.

And:

Sam and Greg have in principle agreed to join Microsoft but not signed anything.

If Microsoft has (now) agreed to release either of them (or anyone else) from contractual obligations, then the first one was true.

If not, then the first was was a lie, and the second one was true.

This whole drama has been punctuated by a great deal of speculation, pivots, changes and, bluntly, lies.

Why do we need to sugar coat it?

Where the fuck is this new magical Microsoft research lab?

Microsoft preparing a new office for openAI employees? Really? Is that also true?

Is Sam actually going to be on the board now, or is this another twist in this farcical drama when they blow it off again?

I see no reason to, at least point, give anyone involved the benefit of the doubt.

Once the board actually changes, or Microsoft actually does something, I’m happy to change my tune, but I’m calling what I see.

Sam did not join Microsoft at any point.

◧◩◪◨
579. nix-za+Nk[view] [source] [discussion] 2023-11-22 08:26:33
>>eclect+li
Steve Jobs got fired from Apple, but was rehired 11 years later.
replies(1): >>abkola+Er
◧◩◪◨⬒
580. hef198+Sk[view] [source] [discussion] 2023-11-22 08:27:28
>>Al-Khw+tj
The safety aspect of AI ethics is much more pressing so. We see how devicive social media can be, imagine that turbo charged by AI, and we as a society haven't even figured out social media yet...

ChatGPT turning into Skynet and nuking us all is a much more remote problem.

◧◩◪◨⬒⬓⬔
581. _Alger+Wk[view] [source] [discussion] 2023-11-22 08:27:53
>>Random+2k
>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.

>Expected value and probability have no place in these discussions.

I disagree. Expected value and probability is a framework for decision making in uncertain environments. They certainly have a place in these discussions.

replies(1): >>Random+cm
◧◩◪
582. ben_w+2l[view] [source] [discussion] 2023-11-22 08:28:40
>>jakey_+Kj
> What's going to happen to your career when you were one of the 200 who held out initially?

Anthropic formed from people who split from OpenAI, and xAI in response to either the company or ChatGPT, so people would have plenty of options.

If the staff had as little to go on as the rest of us, then the board did something that looked wild and unpredictable, which is an acute employment threat all by itself.

replies(1): >>voster+bD
◧◩◪◨
583. astran+5l[view] [source] [discussion] 2023-11-22 08:29:28
>>choppa+F5
Larry Summers hurt the US economy by making the recovery from 2008 much too slow. If they'd done stimulus better, we could've had 2019's economic growth years earlier. That would've been great for Microsoft.
◧◩◪◨⬒
584. mempko+6l[view] [source] [discussion] 2023-11-22 08:29:40
>>astran+tk
Being financially literate means being able to understand how the financial system works. Larry Summers thinks operate as intermediaries lending out deposits. This is very wrong. He is not financially literate. He is an economist.
replies(1): >>astran+fo
◧◩◪◨
585. nopins+8l[view] [source] [discussion] 2023-11-22 08:29:52
>>concor+6k
No, I didn't say that. They formed the Superalignment team with Ilya as a co-lead (and Sam's approval) for that.

https://openai.com/blog/introducing-superalignment

I presume the current alignment approach is sufficient for the AI they make available to others and, in any event, GPT-n is within OpenAI's control.

◧◩◪◨
586. ywain+bl[view] [source] [discussion] 2023-11-22 08:30:32
>>diesel+4a
But HAL didn't act "on a whim"! The reason it killed the crew is not because it went rogue, but rather because it was following its instructions to keep the true purpose of the mission secret. If the crew is dead, it can't find out the truth.

In light of the current debate around AI safety, I think "unintended consequences" is a much more plausible risk then "spontaneously develops free will and decides humans are unnecessary".

replies(1): >>danger+GU
◧◩◪
587. patcon+dl[view] [source] [discussion] 2023-11-22 08:30:59
>>kmlevi+cf
Agreed. Perhaps a reason for public AI [1], which advocates for a publicly funded option where a player like MSFT can't push around something like OpenAI so forcefully.

[1]: https://lu.ma/zo0vnony

◧◩
588. pk-pro+el[view] [source] [discussion] 2023-11-22 08:31:05
>>shubha+B7
> Honestly, I myself can't take the threat seriously. But, I do want to understand it more deeply than before.

I believe this position reflects the thoughts of the majority of AI researchers, including myself. It is concerning that we do not fully understand something as promising and potentially dangerous as AI. I'm actually on Ilya's side; labeling his attempt to uphold the original OpenAI principles as an act of "coup" is what is happening now.

◧◩◪◨
589. Centig+gl[view] [source] [discussion] 2023-11-22 08:31:11
>>colins+fk
This has been a common misinterpretation since very early in OpenAI's history (and a somewhat convenient one for OpenAI).

From a 2016 New Yorker article:

> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”

> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”

source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

replies(1): >>olau+fC
◧◩◪
590. concor+hl[view] [source] [discussion] 2023-11-22 08:31:40
>>blacko+Hb
> so proper course of action would be to loop in NSA/White House

Eh? That would be an awful idea. They have no expertise on this and government institutions like thus are misaligned with the rest of humanity by design. E.g. NSA recruits patriots and has many systems, procedures and cultural aspects in place to ensure it keeps up its mission of spying on everyone.

replies(1): >>the_gi+Ro
◧◩◪◨
591. thinkc+jl[view] [source] [discussion] 2023-11-22 08:31:52
>>kmlevi+wj
This has nothing to do with Summers being an economist and everything to do with the fact that he used to run the parent agency of the IRS. Summers is the least sensible board pick imaginable unless one takes this fact and the coming regulatory catastrophe into account.
replies(1): >>kmlevi+Pn
◧◩◪◨
592. zucker+ol[view] [source] [discussion] 2023-11-22 08:32:11
>>dacryn+Af
Do you have a source for either of these things? The only thing I heard about Saudi investors was related to the (presumably separate) chip startup.
◧◩
593. clnq+pl[view] [source] [discussion] 2023-11-22 08:32:15
>>Satam+0a
> OpenAI is in fact not open

This meme was already dead before the recent events. Whatever the company was doing, you could say it wasn’t open enough.

> a real disruptor must be brewing somewhere unnoticed, for now

Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years? It has been the most high profile tech innovator recently.

> OpenAI does not have in its DNA to win

This is so vague. What does it not have in its… fundamentals? And what is to “win”? This statement seems like just generic unhappiness without stating anything clearly. By most measures, they are winning. They have the best commercial LLM and continue to innovate, they have partnered with Microsoft heavily, and they have so far received very good funding.

replies(2): >>absrec+Zz >>JohnFe+uO
◧◩◪◨
594. ah765+sl[view] [source] [discussion] 2023-11-22 08:32:30
>>buggle+Ck
According to this tweet thread[1], they negotiated hard for Sam to be off the board and Adam to stay on. That indicates, at least if we're being optimistic, that the current board is not in Sam's pocket (otherwise they wouldn't have bothered)

[1]:(https://twitter.com/emilychangtv/status/1727216818648134101)

replies(2): >>buggle+lm >>wouldb+xn
◧◩
595. crossr+xl[view] [source] [discussion] 2023-11-22 08:32:52
>>r721+k1
Now the blue tick has same effect on me on Twitter that the red N logo has on any film that came from the Netflix formula factory. I already know it’s going to be bad, regurgitated. Does everyone have a Twitter blue tick now? Or is that just a char people are using in their names?
replies(1): >>r721+5u
◧◩◪
596. dereg+zl[view] [source] [discussion] 2023-11-22 08:32:58
>>jakey_+Kj
There weren’t 200 holdouts. It was like 5 AM over there. I don’t know why you are surprised that people who work at OpenAI would want to work at OpenAI, esp over Microsoft?
◧◩◪
597. stephe+Bl[view] [source] [discussion] 2023-11-22 08:33:11
>>housto+1k
Can you elaborate on the long con?
◧◩◪◨⬒⬓
598. swells+Dl[view] [source] [discussion] 2023-11-22 08:33:16
>>nopins+Zj
Huh, you'd think all you need are some books on the subject and some fairly generic lab equipment. Not sure what a neural net trained on Internet dumps can add to that? The information has to be in the training data for the AI to be aware of it, correct?
replies(1): >>nopins+ym
◧◩◪◨
599. siva7+El[view] [source] [discussion] 2023-11-22 08:33:19
>>dimask+vk
A board member, Helen Toner, made a borderline narcissistic remark that it would be consistent with the company mission to destroy the company when the leadership confronted the board that their decisions puts the future of the company in danger. Almost all employees resigned in protest. It's insulting calling the employees under these circumstances investors.
replies(4): >>outsom+Mn >>stingr+ao >>ah765+ws >>Ludwig+3H
◧◩◪
600. concor+Ol[view] [source] [discussion] 2023-11-22 08:34:47
>>sampo+4k
It's mostly that it's a can of worms no one wants to open. Very much a last resort as its very tricky to use uncoordinated violence effectively (just killing Sam, LeCunn and Greg doesnt do too much to move the needle and then everyond armors up) and very hard to coordinate violence without a leak.
◧◩◪◨
601. nicce+Rl[view] [source] [discussion] 2023-11-22 08:34:55
>>drewco+re
It has payed only fraction of that so far
◧◩◪◨
602. throwa+Tl[view] [source] [discussion] 2023-11-22 08:35:03
>>drewco+re
As far as I understand, they knew and agreed to that before committing their $$$.
replies(2): >>Iulioh+vq >>Aunche+gh1
◧◩◪
603. krisof+Vl[view] [source] [discussion] 2023-11-22 08:35:13
>>pug_mo+Cb
> Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.

You are absolutely right. There is no question about that the AI will be an expert at subtly steering individuals and the whole society in whichever direction it does.

This is the core concept of safety. If no-one steers the machine then the machine will steer us.

You might disagree with the current flavour of steering the current safety experts give it, and that is all right and in fact part of the process. But surely you have your own values. Some things you hold dear to you. Some outcomes you prefer over others. Are you not interested in the ability to make these powerful machines if not support those values, at least not undermine them? If so you are interested in AI safety! You want safe AIs. (Well, alternatively you prefer no AIs, which is in fact a form of safe AI. Maybe the only one we have mastered in some form so far.)

> because of X, we need to invade this country.

It sounds like you value peace? Me too! Imagine if we could pool together our resources to have an AI which is subtly manipulating society into the direction of more peace. Maybe it would do muckraking investigative journalism exposing the misdeeds of the military-industrial complex? Maybe it would elevate through advertisement peace loving authors and give a counter narrative to the war drums? Maybe it would offer to act as an intermediary in conflict resolution around the world?

If we were to do that, "ai safety" and "alignment" is crucial. I don't want to give my money to an entity who then gets subjugated by some intelligence agency to sow more war. That would be against my wishes. I want to know that it is serving me and you in our shared goal of "more peace, less war".

Now you might say: "I find the idea of anyone, or anything manipulating me and society disgusting. Everyone should be left to their own devices.". And I agree on that too. But here is the bad news: we are already manipulated. Maybe it doesn't work on you, maybe it doesn't work on me, but it sure as hell works. There are powerful entities financially motivated to keep the wars going. This is a huuuge industry. They might not do it with AIs (for now), because propaganda machines made of meat work currently better. They might change to using AIs when that works better. Or what is more likely employ a hybrid approach. Wishing that nobody gets manipulated is frankly not an option on offer.

How does that sound as a passionate argument for AI safety?

◧◩◪
604. hutzli+Yl[view] [source] [discussion] 2023-11-22 08:35:32
>>polite+Yj
"They have a different set of information than you do,"

Their bank accounts current and potential future numbers?

replies(1): >>tucnak+gn
◧◩◪◨
605. karmas+1m[view] [source] [discussion] 2023-11-22 08:35:56
>>dimask+vk
Tell me how the board's actions could convince the employees they are making the right move?

Even if they are genuine in believing firing Sam is to keep OpenAI's founding principles, they can't be doing a better job in convincing everyone they are NOT able to execute it.

OpenAI has some of the smartest human beings on this planet, saying they don't think critically just because they don't vote with what you agree is reaching reaching.

replies(2): >>kortil+rs >>cyanyd+mJ
◧◩◪
606. highwa+5m[view] [source] [discussion] 2023-11-22 08:36:07
>>polite+Yj
There’s evidence to suggest that a central group have pressured the broader base of employees into going along with this, as posted elsewhere in the thread.
◧◩
607. Michae+bm[view] [source] [discussion] 2023-11-22 08:36:26
>>eclect+79
You could say the same about any person on the top. In general CEO's do not do research. Still they are critical for success.

By the way the AI scientists get a lot of respect and admiration see Ilya for example.

replies(1): >>seydor+lv
◧◩◪◨⬒⬓⬔⧯
608. Random+cm[view] [source] [discussion] 2023-11-22 08:36:35
>>_Alger+Wk
I disagree that there is orthogonality. Have we killed us all with nuclear weapons, for example? Anyone can make up any story - at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetically issues.

People purposefully avoided probabilities in high risk existential situations in the past. There is only one path of events and we need to manage that one.

replies(1): >>mlyle+sj2
◧◩◪
609. bkyan+dm[view] [source] [discussion] 2023-11-22 08:36:35
>>cheeze+d5
https://twitter.com/emilychangtv/status/1727228431396704557

The reputation boost is probably worth a lot more than the direct financial compensation he's getting.

◧◩◪◨⬒
610. Stanis+hm[view] [source] [discussion] 2023-11-22 08:37:17
>>wisty+Bd
Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) - and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently, effectively and to manufacture consent to do so.
◧◩
611. robot+jm[view] [source] [discussion] 2023-11-22 08:37:32
>>Satam+0a
there is a lot of money made (100m paid users?) by everyone and momentum so groupthink is forced to occur kind of.
◧◩◪
612. concor+km[view] [source] [discussion] 2023-11-22 08:37:37
>>eslaug+0e
> If you think the threat is real, how are we not already screwed?

That's the current Yudkowsky view. That it's essentially impossible at this point and we're doomed, but we might as well try anyway as its more "dignified" to die trying.

I'm a bit more optimistic myself.

◧◩◪◨⬒
613. buggle+lm[view] [source] [discussion] 2023-11-22 08:37:37
>>ah765+sl
I’m sorry, but that’s all kayfabe. If there is one thing that’s been demonstrated in this whole fiasco, it’s who really has all the power at OpenAI (and it’s not the board).
◧◩◪
614. lynx23+nm[view] [source] [discussion] 2023-11-22 08:37:41
>>haunte+ih
Yes!
◧◩◪◨
615. nickpp+om[view] [source] [discussion] 2023-11-22 08:37:52
>>nopins+th
> Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.

Don't forget that it would also increase the power of the good guys. Any technology in history (starting with fire) had good and bad uses but overall the good outweighed the bad in every case.

And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.

replies(1): >>nopins+Xr
◧◩◪◨⬒
616. hef198+rm[view] [source] [discussion] 2023-11-22 08:38:43
>>wisty+Bd
I am a little less free speech than Americans, in Germany we have serious limitations around hate speech and holicaust denial for example.

Putting thise restrictions into a tool like ChatGPT goes to far so, because so far AI still needs a prompt to do anything. The problem I see, is with ChatGPT, being trained on a lot hate speech or prpopagabda, slipts in those things even if not prompted to. Which, and I am by no means an AI expert not by far, seems to be a sub-problem of the hallucination problems of making stuff up.

Because we have to remind ourselves, AI so far is glorified mavhine learning creating content, it is not concient. But it can be used to create a lot of propaganda and deffamation content at unprecedented scale and speed. And that is the real problem.

replies(1): >>freedo+422
◧◩◪◨
617. doikor+sm[view] [source] [discussion] 2023-11-22 08:38:44
>>quickt+ug
They tried but it did not work. They needed billions for the compute time and top tier talent but were only able to collect millions.
◧◩
618. androi+tm[view] [source] [discussion] 2023-11-22 08:39:13
>>Satam+0a
right . why don't you creat a chatgpt like innovation or even AGI and do things your way? So many people just know how to complain on what other people build and forget that no one is stopping you from innovating the way you like it.
◧◩
619. eloisa+vm[view] [source] [discussion] 2023-11-22 08:39:18
>>Satam+0a
Yes they need to change their name. Having "Open" in their name is just a big marketing lie.
◧◩◪◨⬒⬓⬔
620. nopins+ym[view] [source] [discussion] 2023-11-22 08:39:31
>>swells+Dl
GPT-4 is likely trained on some data not publicly available as well.

There's also a distinction between trying to follow some broad textbook information and getting detailed feedback from an advanced conversational AI with vision and more knowledge than in a few textbooks/articles in real time.

◧◩◪
621. lukevp+Am[view] [source] [discussion] 2023-11-22 08:39:34
>>pug_mo+Cb
AI isn’t a precondition for partisanship. How do you know Google isn’t showing you biased search results? Or Wikipedia?
◧◩◪
622. concor+Dm[view] [source] [discussion] 2023-11-22 08:39:39
>>arkety+p9
Alignment is considered an extremely hard problem for a reason. It's already nigh impossible when you're dealing with humans.

Btw: do you think ridicule eould be helpful here?

replies(1): >>arkety+tn
◧◩
623. lewhoo+Hm[view] [source] [discussion] 2023-11-22 08:40:09
>>shubha+B7
I think only a minority of the general public truly cares about AI Safety

That doesn't matter that much. If your analysis is correct then it means a (tiny) minority of OpenAI cares about AI safety. I hope this isn't the case.

◧◩◪
624. maxlin+Im[view] [source] [discussion] 2023-11-22 08:40:12
>>random+Yf
Exactly. 3 CEO switches in a week is ridiculous
replies(2): >>abkola+Kq >>caleb-+t71
◧◩◪◨⬒⬓⬔
625. fsloth+Jm[view] [source] [discussion] 2023-11-22 08:40:16
>>JumpCr+7h
I’m not an expert, just my gut talking. If they had god in a box, US state would be much more hands on. Now it looks more like an attempt at regulatory capture to stifle competition. ”Think of the safety”! ”Lock this away”! If they actually had skynet US gov has very effective and very discreet methods to handle such clear and present danger (barring intelligence failure ofc, but those happen mostly because something falls under your radar).
replies(1): >>JohnPr+a11
◧◩
626. ben_w+Km[view] [source] [discussion] 2023-11-22 08:40:34
>>eclect+79
He says nice things about his team (and even about his critics) when in public.

But my reading of this drama is that the board were seen as literally insane, not that Altman was seen as spectacularly heroic or an underdog.

replies(2): >>stingr+Rp >>bnralt+dc1
◧◩◪◨
627. khazho+Mm[view] [source] [discussion] 2023-11-22 08:40:36
>>smt88+Oj
> In reality, we can and should be outraged when corporations betray their own statements and supposed values.

There are only three groups of people who could be subject to betrayal here: employees, investors, and customers. Clearly they did not betray employees or investors, since they largely sided with Sam. As for customers, that's harder to gauge -- did people sign up for ChatGPT with the explicit expectation that the research would be "open"?

The founding charter said one thing, but the majority of the company and investors went in a different direction. That's not a betrayal, but a pivot.

replies(3): >>Angost+hp >>master+9E >>denton+5u2
628. olgias+Om[view] [source] 2023-11-22 08:40:43
>>staran+(OP)
Where Ilya will go next then? I assume he won't stay at OpenAI for too long after all this poop-show.
629. xeckr+Qm[view] [source] 2023-11-22 08:40:52
>>staran+(OP)
What a ride.
◧◩◪
630. eviks+Rm[view] [source] [discussion] 2023-11-22 08:40:53
>>kmlevi+ek
Doesn't make sense that after such a broad board capitulation the next one will have any power, and media scrutiny isn't a powerful governance mechanism
replies(2): >>kmlevi+yn >>dagaci+hD
◧◩◪
631. righth+Xm[view] [source] [discussion] 2023-11-22 08:41:52
>>swatco+Bk
This is the abandoning of ethics. No one moving forward is going to be thinking about it and they've clearly signaled it's about making money. People that have issues with it will just not use the products or be hypocrites about using the products. There is nothing to push up against anymore, but I don't think the recent events are initiator. People were already letting go of ethics the moment they continued using it because the tech was so cool. The parting of the ethical peoples is just the final nail. There is no reason to remove these ethical teams if they believe in ethics, downsize maybe but not dedicating a human to at least researching the ethical outcomes sure isn't very good for humanity ethics concerns.
◧◩
632. shrimp+Zm[view] [source] [discussion] 2023-11-22 08:42:03
>>transc+32
That doesn't sound credible or revealing. It's regurgitating a bunch of speculation stuff that's been said on this forum and in the media.
◧◩◪
633. lewhoo+0n[view] [source] [discussion] 2023-11-22 08:42:10
>>swatco+69
FWIW, that's called zealotry and people do a lot of dramatic, disruptive things in the name of it.

That would be a really bad take on climate change.

◧◩◪◨⬒⬓⬔⧯
634. hef198+4n[view] [source] [discussion] 2023-11-22 08:42:59
>>concor+yk
No, we are far, far from skynet. So far AI fails at driving a car.

AI is an incredibly powerful tool for spreading propaganda, and thatvis used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard fornbormal folks regardless of which "side" they are on). That's the threat, not Skynet...

replies(1): >>concor+Yx
635. Havoc+5n[view] [source] 2023-11-22 08:43:00
>>staran+(OP)
Keeping Adam? I thought he’s the likely instigator
◧◩◪◨⬒⬓
636. astran+8n[view] [source] [discussion] 2023-11-22 08:43:15
>>hoseja+bh
This is not true, you just haven't tried the alternatives enough to be disappointed in them.

An unaligned base model doesn't answer questions at all and is hard to use for anything, including evil purposes. (But it's good at text completion a sentence at a time.)

An instruction-tuned not-RLHF model is already largely friendly and will not just eg tell you to kill yourself or how to build a dirty bomb, because question answering on the internet is largely friendly and "aligned". So you'd have to tune it to be evil as well and research and teach it new evil facts.

It will however do things like start generating erotica when it sees anything vaguely sexy or even if you mention a woman's name. This is not useful behavior even if you are evil.

You can try InstructGPT on OpenAI playground if you want; it is not RLHFed, it's just what you asked for, and it behaves like this.

The one that isn't even instruction tuned is available too. I've found it makes much more creative stories, but since you can't tell it to follow a plot they become nonsense pretty quickly.

◧◩◪◨
637. kmlevi+en[view] [source] [discussion] 2023-11-22 08:43:53
>>buggle+Ck
>The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests.

That's incorrect. The new members will be chosen by D'Angelo and the two new independent board members. Both of which D'Angelo had a big hand in choosing.

I'm not saying Larry Summers etc going to be in D'Angelo's pocket. But the whole reason he agreed to those picks is because he knows they won't be in Sam's pocket, either. More likely they will act independently and choose future members that they sincerely believe will be the best picks for the nonprofit.

◧◩◪◨
638. tucnak+gn[view] [source] [discussion] 2023-11-22 08:44:04
>>hutzli+Yl
How is employees protecting themselves is suddenly a bad thing? There's no idiots at OpenAI.
replies(2): >>g-b-r+Iq >>pooya1+mN
◧◩
639. soci+hn[view] [source] [discussion] 2023-11-22 08:44:11
>>shubha+B7
The Technologyreview article mentioned in the parent’s first paragraph is the most insightful piece of content I’ve read about the tensions inside OpenAI.
◧◩
640. c0pium+jn[view] [source] [discussion] 2023-11-22 08:44:15
>>auggie+E9
> Not judging

I wonder if this has ever been said truthfully?

replies(1): >>auggie+sq
◧◩◪
641. notfed+on[view] [source] [discussion] 2023-11-22 08:44:47
>>Racing+Bi
It looks like he said, specifically:

> "...[there] is relatively clear evidence that whatever the difference in means—which can be debated—there is a difference in the standard deviation and variability of a male and female population..."

Sheesh, of all the things to be cancelled for...

◧◩◪
642. concor+pn[view] [source] [discussion] 2023-11-22 08:45:19
>>lovepa+98
Nah, a number do, including Sam himself and the entire leadership.

They just have different ideas about one or more of: how likely another team is to successfully charge ahead while ignoring safety, how close we are to AGI, how hard alignment is.

◧◩◪
643. Havoc+sn[view] [source] [discussion] 2023-11-22 08:45:57
>>Terrif+D2
Don’t think the dota bot was random. It’s the perfect mix between complicated yet controllable environment, good data availability and good PR angle.
replies(1): >>dontup+mG
◧◩◪◨
644. arkety+tn[view] [source] [discussion] 2023-11-22 08:45:58
>>concor+Dm
I can see how ridicule of this specific instance could be the best medicine for an optimal outcome, even by a utilitarian argument, which I generally don't like to make by the way. It is indeed nigh impossible, which is kind of my point. They could have shown more humility. If anything, this whole debacle has been a moral victory for e/acc, seeing how the brightest of minds are at a loss dealing with alignment anyway.
replies(1): >>Feepin+9r
◧◩◪◨⬒
645. wouldb+xn[view] [source] [discussion] 2023-11-22 08:46:37
>>ah765+sl
Yeah the board is kind of pointless now.

They can't control the CEO, neither fire him.

They can't take actions to take back the back control from Microsoft and Sam because Sam is the CEO. Even if Sam is of the utmost morality, he would be crazy to help them back into a strong position after last week.

So it's the Sam & Microsoft show now, only a master schemer can get back some power to the board.

replies(2): >>wouldb+7L >>notaha+sL
◧◩◪◨
646. kmlevi+yn[view] [source] [discussion] 2023-11-22 08:46:38
>>eviks+Rm
When you consider they were acting under the threat of the entire company walking out and the threat of endless lawsuits, this is a remarkably mild capitulation. All the new board members are going to be chosen by D'Angelo and two new board members that he also had a big hand in choosing.

And say what you want about Larry Summers, but he's not going to be either Sam's or even Microsoft's bitch.

replies(3): >>imjons+Np >>eviks+kt >>chucke+zN
◧◩◪◨
647. beAbU+zn[view] [source] [discussion] 2023-11-22 08:46:52
>>neurog+l9
Sounds like something an AI would say
◧◩◪
648. concor+Bn[view] [source] [discussion] 2023-11-22 08:47:14
>>jkapla+oc
> Like you say, not that many people truly take that seriously right now.

Eh? Polls on the matter show widespread public support for a pause due to safety concerns.

◧◩◪◨⬒⬓
649. mvdtnz+Dn[view] [source] [discussion] 2023-11-22 08:47:28
>>alex_y+rg
You're happy there is reasoned discussion, but the idea is, in your view, "regressive" whether it's true or not?
replies(1): >>alex_y+4q
◧◩◪
650. simonh+En[view] [source] [discussion] 2023-11-22 08:47:34
>>pug_mo+Cb
A main concern in AI safety is alignment. Ensuring that when you use the AI to try to achieve a goal that it will actually act towards that goal in ways you would want, and not in ways you would not want.

So for example if you asked Sydney, the early version of the Bing LLM, some fact it might get it wrong. It was trained to report facts that users would confirm as true. If you challenged it’s accuracy what do you want to happen? Presumably you’d want it to check the fact or consider your challenge. What it actually did was try to manipulate, threaten, browbeat, entice, gaslight, etc, and generally intellectually and emotionally abuse the user into accepting its answer, so that it’s reported ‘accuracy’ rate goes up. That’s what misaligned AI looks like.

replies(1): >>gorbyp+PA
◧◩◪
651. gorbyp+Gn[view] [source] [discussion] 2023-11-22 08:47:43
>>nopins+wg
Honest question, but in your example above of Sam and Greg racing towards AGI as fast as possible in order to head off proliferation, what's the end goal when getting there? Short of capture the entire worlds economy with an ASI, thus preventing anyone else from developing one, I don't see how this works. Just because OpenAI (or whoever) wins the initial race, it doesn't seem obvious to me that all development on other AGIs stops.
replies(2): >>nopins+0t >>effica+RV
◧◩◪◨⬒
652. outsom+Mn[view] [source] [discussion] 2023-11-22 08:49:09
>>siva7+El
> Almost all employees resigned in protest.

That never happened, right?

replies(1): >>ldjb+Hr
◧◩◪◨⬒
653. kmlevi+Pn[view] [source] [discussion] 2023-11-22 08:49:23
>>thinkc+jl
>This has nothing to do with Summers being an economist and everything to do with the fact that he used to run the parent agency of the IRS.

It has literally nothing to do with that. The reason he's on the board now is because D'Angelo wanted him on it. You could have a problem with that, but you can't use his inclusion as evidence that the board lost.

◧◩◪
654. sangee+Qn[view] [source] [discussion] 2023-11-22 08:49:26
>>haunte+ih
I got news for you pal: https://www.wired.co.uk/article/apple-vs-apples-trademark-ba...
◧◩
655. faerie+Tn[view] [source] [discussion] 2023-11-22 08:49:42
>>Satam+0a
They made GPT4 and you think they clearly have little critical thinking? That’s some big talk you’re talking.
replies(1): >>tonyed+bp
◧◩◪
656. upupup+Un[view] [source] [discussion] 2023-11-22 08:49:43
>>0xDEAF+a5
I am really surprised by people thinking this guy did anything to get sama back. He was probably not even in the room.
replies(1): >>ssnist+4w
◧◩◪
657. return+Zn[view] [source] [discussion] 2023-11-22 08:50:16
>>highwa+o9
It seems US Attorneys were calling the Open AI board.

It helps having somebody with government ties on board now.

◧◩◪◨
658. quickt+0o[view] [source] [discussion] 2023-11-22 08:50:17
>>fragme+2h
$20. Or use the API if your usage is low.
◧◩◪◨⬒⬓⬔
659. concor+5o[view] [source] [discussion] 2023-11-22 08:51:17
>>Random+2k
Where does a bioengineering superplague fall?
replies(1): >>Random+6p
◧◩◪◨⬒
660. stingr+ao[view] [source] [discussion] 2023-11-22 08:52:07
>>siva7+El
Don’t forget she’s heavily invested in a company that is directly competing with OpenAI. So obviously it’s also in her best interest to see OpenAI destroyed.
replies(4): >>lodovi+3u >>muraka+XB >>doktri+kE >>Philpa+zF
661. cft+co[view] [source] 2023-11-22 08:52:13
>>staran+(OP)
Ilya won't stick around for long probably. It will be interesting what he can do independently. Probably not a lot.
◧◩◪
662. lwhi+do[view] [source] [discussion] 2023-11-22 08:52:17
>>polite+Yj
I think it's fair to call this reactionary; Sam Altman has played the part of 'ping-pong ball' exceptionally well these past few days.
◧◩◪◨⬒⬓
663. astran+fo[view] [source] [discussion] 2023-11-22 08:52:31
>>mempko+6l
I think Larry Summers probably knows what a central bank is.

But "how money creation works" isn't the same thing as "how the financial system works". I guess the financial system mostly works over ACH.

We can see what happens when banks don't lend out deposits, because that's basically what caused SVB to fail. So by the contrapositive, they aren't really operating then.

◧◩
664. raverb+io[view] [source] [discussion] 2023-11-22 08:52:56
>>flylib+z4
Only goes to show how the original board played itself
◧◩◪
665. Solven+mo[view] [source] [discussion] 2023-11-22 08:53:41
>>krisof+ni
Who needs a book to understand the crazy overwhelming scale at which AI can dictate even online news/truth/discourse/misinformation/propaganda. And that's just barely the beginning.
replies(1): >>krisof+Pr
◧◩
666. zucker+no[view] [source] [discussion] 2023-11-22 08:53:49
>>tunesm+36
Ilya and Adam switched because they lost, and their goal wasn't to nuke OpenAI, simply to remove Sam. Helen and Tasha had the votes to prevent Sam Altman from returning as CEO, but not the votes to prevent the employees from fleeing to Microsoft, which Helen and Tasha see as the worst possible outcome.
◧◩
667. jjalle+ro[view] [source] [discussion] 2023-11-22 08:54:38
>>Satam+0a
I think what this saga has shown is that no one controls OpenAI definitively. Is Microsoft did this wouldn’t have happened in the first place don’t you think?

And if Sam controlled it it also wouldn’t have.

◧◩
668. sashan+Ko[view] [source] [discussion] 2023-11-22 08:57:17
>>Satam+0a
> Furthermore, the overwhelming groupthink shows there’s clearly little critical thinking amongst OpenAI’s employees either.

Very harsh words for some of the highest paid smartest people on the planet. The employees built GPT-4 the most advanced AI on the planet, what did you build? Do you still claim they’re more deficient in critical thinking compared to you.

replies(4): >>wiz21c+8r >>Kathul+Xs >>Cacti+aZ >>jetset+KM1
◧◩◪◨⬒⬓
669. denlek+No[view] [source] [discussion] 2023-11-22 08:57:41
>>morale+Oe
i don't think that's really any brand loyalty for OpenAI. people will use whatever is cheapest and best. in the longer run people will use whatever has the best access and integration.

what's keeping people with OpenAI for now is that chatGPT is free and GPT3.5 and GPT4 are the best. over time I expect the gap in performance to get smaller and the cost to run these to get cheaper.

if google gives me something close to as good as OpenAI's offering for the same price and it pull data from my gmail or my calendar or my google drive then i'll switch to that.

replies(2): >>dontup+xF >>morale+LX
◧◩◪
670. kolink+Oo[view] [source] [discussion] 2023-11-22 08:57:44
>>eslaug+0e
The risk/scenario of singularity is that there will be just one winner and they will be able to prevent everyone else from building their own agi
671. person+Po[view] [source] 2023-11-22 08:57:44
>>staran+(OP)
Why is Altman, who has no higher education, critical for development of AI?
replies(1): >>calmoo+9p
◧◩◪◨
672. the_gi+Ro[view] [source] [discussion] 2023-11-22 08:58:00
>>concor+hl
And Google, Facebook, MSFT, Apple, are much more misaligned.
◧◩◪
673. mcosta+Uo[view] [source] [discussion] 2023-11-22 08:58:19
>>jakey_+Kj
How do you know that?
◧◩
674. upupup+2p[view] [source] [discussion] 2023-11-22 08:59:23
>>meetpa+13
He’s trying very very hard to claim some credit in this. Probably had none.
replies(2): >>flappy+wJ >>framap+Jj1
◧◩◪◨⬒⬓⬔⧯
675. Random+6p[view] [source] [discussion] 2023-11-22 08:59:59
>>concor+5o
As a said in another post: Some middle ground because we don't know if that is possible to the extent that it is existential. Parts of the mechanisms are proven, others are not. And actually we do police the risk somewhat like that (controls are strongest where the proven part is strongest and most dangerous with extreme controls around small pox, for example).
◧◩
676. calmoo+9p[view] [source] [discussion] 2023-11-22 09:00:08
>>person+Po
Is higher education really crucial for pushing something forward? Even if he isn't an AI expert, there is lots of stuff surrounding the technology that needs doing, for example massive amounts of funding, which he seems to have been pretty good at securing.
◧◩◪
677. tonyed+bp[view] [source] [discussion] 2023-11-22 09:00:16
>>faerie+Tn
That's the curse of specialisation. You can be really smart in one area and completely unaware in others. This industry is full of people with deep technical knowledge but little in the way of social skills.
replies(2): >>rvz+hz >>mlrtim+fD
◧◩
678. dncorn+fp[view] [source] [discussion] 2023-11-22 09:00:43
>>Satam+0a
Disappointing? What has OpenAI done to you? We don't even know what happened.

Everything has been pure speculation. I would curb my judgement if I were you, until we actually know what happened.

◧◩◪◨⬒
679. Angost+hp[view] [source] [discussion] 2023-11-22 09:01:00
>>khazho+Mm
I think there’s an additional group to consider- society at large.

To an extent the promise of the non- profit was that they would be safe, expert custodians of AI development driven not primarily by the profit motive, but also by safety and societal considerations. Has this larger group been ‘betrayed’? Perhaps

replies(2): >>biscot+8A >>Wytwww+kC
◧◩◪
680. kissgy+tp[view] [source] [discussion] 2023-11-22 09:02:17
>>polite+Yj
The available public information is enough to reach this conclusion.
◧◩◪
681. loup-v+wp[view] [source] [discussion] 2023-11-22 09:02:51
>>pug_mo+Cb
Note how what you said also apply to the search & recommendation engines that are in widespread use today.
◧◩◪
682. Satam+yp[view] [source] [discussion] 2023-11-22 09:02:58
>>polite+Yj
I'm sure most of them are extremely intelligent but the situation showed they are easily persuaded, even if principled. They will have to overcome many first-of-a-kind challenges on their quest to AGI but look at how quickly everyone got pulled into a feel-good kumbaya sing-along.

Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.

replies(6): >>wiz21c+Zp >>abm53+6q >>ah765+7r >>giggle+Ur >>ssnist+nt >>gexla+wG
◧◩◪
683. synaes+Ap[view] [source] [discussion] 2023-11-22 09:03:27
>>highwa+o9
If we achieve AGI it has the potential to capture most (if not all) economic value. Larry Summers was a deliberate choice indeed.
◧◩◪◨⬒⬓⬔
684. reveri+Ip[view] [source] [discussion] 2023-11-22 09:04:38
>>gnicho+Zf
I think not at the time they would have signed the letter? Though it's hard to keep up with the whirlwind of news.
◧◩◪◨⬒
685. didntc+Lp[view] [source] [discussion] 2023-11-22 09:05:44
>>darkwa+Ij
And it's amazing how many people you can get to cheer it on if you brand it as "combating dangerous misinformation". It seems people never learn the lesson that putting faith in one group of people to decree what's "truth" or "ethical" is almost always a bad idea, even when (you think) it's your "side"
replies(1): >>mlrtim+hE
686. notfed+Mp[view] [source] 2023-11-22 09:06:03
>>staran+(OP)
Sam was crucified, then resurrected after 3 days and 3 nights.
◧◩◪◨⬒
687. imjons+Np[view] [source] [discussion] 2023-11-22 09:06:48
>>kmlevi+yn
I wonder what is the rationale for picking a seasoned politician and economist (influenced deregulation of US finance system, was friends with Epstein, had a few controversies listed there). Has the government also entered the chat so obviously?
replies(2): >>choult+gs >>voster+iC
◧◩◪
688. stingr+Rp[view] [source] [discussion] 2023-11-22 09:07:00
>>ben_w+Km
My reading of all this is that the board is both incompetent and has a number of massive conflicts of interests.

What I don’t understand is why they were allowed to stay on the board with all these conflicts of interests all the while having no (financial) stake in OpenAI. One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.

replies(2): >>Sebb76+Vv >>serial+nw
◧◩◪◨
689. wiz21c+Zp[view] [source] [discussion] 2023-11-22 09:08:57
>>Satam+yp
> feel-good kumbaya sing-along

learning english over HN is so fun !

690. pjmlp+0q[view] [source] 2023-11-22 09:09:01
>>staran+(OP)
Satya probably isn't that happy, after the weekend efforts to eventually bring all folks into Microsoft.
replies(1): >>stingr+Bq
◧◩◪◨⬒
691. didntc+1q[view] [source] [discussion] 2023-11-22 09:09:08
>>Xenoam+jg
That would be the camp advocating for, well, open AI. I.e. wide model release. The AI ethics camp are more "let us control AI, for your own good"
◧◩◪◨⬒⬓⬔
692. ryzvon+3q[view] [source] [discussion] 2023-11-22 09:09:20
>>tayo42+Uc
I understand what you are saying, but sometimes, news like this is perhaps the only excitement in our otherwise dull lives.
◧◩◪◨⬒⬓⬔
693. alex_y+4q[view] [source] [discussion] 2023-11-22 09:09:29
>>mvdtnz+Dn
True is a bit of a stretch here right?
◧◩◪◨
694. krisof+5q[view] [source] [discussion] 2023-11-22 09:09:34
>>nostro+3d
> When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

Yes. You are right on this.

> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"

I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn on unsuspecting people", and "make the AI not spew hateful bigotry". And we are just not good enough yet at control. But also these things are in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessary the only possible options.

With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (Like for example generate content about consenting adults without ever deviating into under age material, or describing situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people, how do you feel about our chances to teach them much harder concepts like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high level perspective it is all the same.)

So these things you mention are: limitations of our abilities at control, results of a certain kind of expected corporate professionalism, but even more these are safe sandboxes. How do you think we can make the machine not nuke us, if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is a useful practice to see if we can control these machines. It is one where failure is, while embarrassing, is clearly not existential. We could have chosen a different "goal", for example we could have made an AI which never ever talks about sports! That would have been an equivalent goal. Something hard to achieve to evaluate our efforts against. But it does not mesh that well with the corporate values so we have what we have.

replies(1): >>mlindn+LF
◧◩◪◨
695. abm53+6q[view] [source] [discussion] 2023-11-22 09:09:43
>>Satam+yp
I think another factor is that they had very limited time. It was clear they needed to pick a side and build momentum quickly.

They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.

replies(1): >>Satam+Vr
◧◩
696. caskst+8q[view] [source] [discussion] 2023-11-22 09:09:46
>>Satam+0a
> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

I'm sure has been a lot of critical thinking going on. I would venture a guess that employees decided that Sam's approach is much more favorable for the price of their options than the original mission of the non-profit entity.

◧◩
697. pjmlp+9q[view] [source] [discussion] 2023-11-22 09:10:15
>>Satam+0a
Open Group, the home of UNIX standards never was that open.
◧◩
698. quickt+kq[view] [source] [discussion] 2023-11-22 09:11:33
>>doyoue+mc
Because it is a scoop
◧◩
699. Tracke+lq[view] [source] [discussion] 2023-11-22 09:11:40
>>eclect+79
It's the cult of the CEO in action.
◧◩◪◨⬒⬓⬔⧯
700. vkou+oq[view] [source] [discussion] 2023-11-22 09:11:58
>>concor+yk
My concern isn't some kind of run-away science-fantasy Skynet or gray goo scenario.

My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.

replies(2): >>Feepin+Qq >>concor+6y
◧◩◪
701. auggie+sq[view] [source] [discussion] 2023-11-22 09:12:39
>>c0pium+jn
You don't have to wonder anymore! You replied to an example of it.
◧◩◪◨⬒
702. Iulioh+vq[view] [source] [discussion] 2023-11-22 09:13:13
>>throwa+Tl
And it was stick fucking strange, they assu
◧◩
703. stingr+Bq[view] [source] [discussion] 2023-11-22 09:14:42
>>pjmlp+0q
You say that as if that was his end goal. His end goal was to save the situation, and that happened. One can easily argue that Microsoft’s offer added huge pressure on the OpenAI board that made the new / current outcome possible. And perhaps that was the plan after all.
replies(1): >>pjmlp+sr
◧◩◪◨
704. doktri+Cq[view] [source] [discussion] 2023-11-22 09:14:51
>>theamk+A5
> which is pretty unusual as most people don't care about their CEO at all

I'm sure Sam is a charismatic guy, but generally speaking folks will support a whole lot when a multi million dollar payday is on the line.

◧◩
705. _giorg+Gq[view] [source] [discussion] 2023-11-22 09:15:06
>>Satam+0a
The alternative was that all OpenAI employees started to work directly for MSFT, as they said in the letter signed by 95% of them.
◧◩◪◨⬒
706. g-b-r+Iq[view] [source] [discussion] 2023-11-22 09:15:14
>>tucnak+gn
They were supposed to have higher values than money
replies(4): >>lovely+hs >>plasma+Dt >>logicc+iD >>Zpalmt+Ys1
◧◩◪◨
707. abkola+Kq[view] [source] [discussion] 2023-11-22 09:15:57
>>maxlin+Im
Four CEO changes in five days to be precise.

Sam -> Mira -> Emmet -> Sam

replies(5): >>Hendri+1u >>qup+hh1 >>low_te+3H1 >>freedo+de2 >>abkola+a0c
◧◩◪◨⬒⬓⬔⧯▣
708. Feepin+Qq[view] [source] [discussion] 2023-11-22 09:16:57
>>vkou+oq
Yes well, then your concern is not AI safety.
replies(1): >>vkou+ys
◧◩◪◨⬒⬓⬔⧯▣
709. ludwik+Vq[view] [source] [discussion] 2023-11-22 09:17:53
>>Random+Dj
"there is a proven way for the risk to materialise" - I still don't know what this means. "Proven" how?

Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?

You can not run away from having to estimate how likely the risk is to happen (in addition to being "known").

replies(1): >>Random+xs
◧◩◪◨
710. orthox+0r[view] [source] [discussion] 2023-11-22 09:18:37
>>lucubr+Cc
However, he does seem like the kind of person able to easily manipulate someone book-smart like Ilya into actually feeling guilty about the whole affair. He'll end up graciously forgiving Ilya in a way that will make him feel indebted to Sam.
◧◩◪◨⬒
711. didntc+1r[view] [source] [discussion] 2023-11-22 09:18:37
>>vkou+zd
Ideally I'd like no gatekeeping, i.e. open model release, but that's not something OAI or most "AI ethics" aligned people are interested in (though luckily others are). So if we must have a gatekeeper, I'd rather it be one with plain old commercial interests than ideological ones. It's like the C S Lewis quote about robber barons vs busybodies again

Yet again, the free market principle of "you can have this if you pay me enough" offers more freedom to society than the central "you can have this if we decide you're allowed it"

◧◩◪
712. pk-pro+2r[view] [source] [discussion] 2023-11-22 09:18:52
>>pug_mo+Cb
I just had a conversation about this like two weeks ago. The current trend in AI "safety" is a form of brainwashing, not only for AI but also for future generations shaping their minds. There are several aspects:

1. Censorship of information

2. Cover-up of the biases and injustices in our society

This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.

Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.

Guess what? I just have checked above text for the biases against GPT-4 Turbo, and it appears to be I'm a moron:

1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits. 2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues. 3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes. 4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories. 5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology. 6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society. 7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.

Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.

Bonhoeffer's theory of stupidity: https://www.youtube.com/watch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...

◧◩◪◨
713. ah765+7r[view] [source] [discussion] 2023-11-22 09:19:44
>>Satam+yp
I thought so originally too, but when I thought about their perspective, I realized I would probably sign too. Imagine that your CEO and leadership has led your company to the top of the world, and you're about to get a big payday. Suddenly, without any real explanation, the board kicks out the CEO. The leadership almost all supports the CEO and signs the pledge, including your manager. What would you do at that point? Personally, I'd sign just so I didn't stand out, and stay on good terms with leadership.

The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.

replies(1): >>Satam+Nu
◧◩◪
714. wiz21c+8r[view] [source] [discussion] 2023-11-22 09:19:44
>>sashan+Ko
I think the choice they had to make was: either building one of the top AI on earth under total control of OpenAI investors (and most likely the project of their life) either do nothing.

So they bowed.

◧◩◪◨⬒
715. Feepin+9r[view] [source] [discussion] 2023-11-22 09:19:46
>>arkety+tn
I don't understand how the conclusion of this is "so we should proceed with AI" rather than "so we should immediately outlaw all foundation model training". Clearly corporate self-governance has failed completely.
◧◩◪◨⬒
716. kgeist+dr[view] [source] [discussion] 2023-11-22 09:20:22
>>Terrif+d7
GPT3/GPT4 currently moralize about anything slightly controversial. Sure you can construct a long elaborate prompt to "jailbreak" it, but it's so much effort it's easier to just write something by yourself.
◧◩◪
717. ryzvon+pr[view] [source] [discussion] 2023-11-22 09:21:28
>>system+Jf
I've heard $20 just buys like 9 minutes of actual processor time for GPT 4. Apocryphal maybe, but whatever the real number is, it's still going to be very high, once the VC money runs out I bet the rates will shoot.
replies(1): >>system+Vk3
◧◩◪
718. pjmlp+sr[view] [source] [discussion] 2023-11-22 09:21:49
>>stingr+Bq
When offices are already being prepared, and HR processes being put into place, we are beyond saving a situation.
◧◩
719. jampek+ur[view] [source] [discussion] 2023-11-22 09:21:59
>>Satam+0a
The initial board consists entirely of swamp lizards. I really hope they mess up as you predict.
◧◩
720. jatins+yr[view] [source] [discussion] 2023-11-22 09:22:16
>>Satam+0a
> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

If the "other side" (board) had put up a SINGLE convincing argument on why Sam had to go maybe the employees would have not supported Sam unequivocally.

But, atleast as an outsider, we heard nothing that suggests board had reasons to remove Sam other than "the vibes were off"

Can you really accuse the employees of groupthink when the other side is so weak?

replies(4): >>serial+6v >>ethanb+aH >>concep+QN >>kromem+OR
◧◩◪
721. 3cats-+Cr[view] [source] [discussion] 2023-11-22 09:23:02
>>Terrif+D2
Do we need to false dichotomy. DotA 2 bot was a successful technology preview. You need both research and development in a healthy organisation. Let's call this... hmm I don't know "R&D" for short. Might catch on.
◧◩◪◨⬒
722. abkola+Er[view] [source] [discussion] 2023-11-22 09:23:11
>>nix-za+Nk
That might be selection bias, in those 11 years Jobs built NeXT.

A lot of Apple's engineering and product line back then owe their provenance and lineage to NeXT.

replies(1): >>Talane+lA
◧◩◪◨⬒⬓
723. ldjb+Hr[view] [source] [discussion] 2023-11-22 09:24:01
>>outsom+Mn
Almost all employees did not resign in protest, but they did _threaten_ to resign.

https://www.theverge.com/2023/11/20/23968988/openai-employee...

◧◩◪◨
724. krisof+Pr[view] [source] [discussion] 2023-11-22 09:25:37
>>Solven+mo
Not sure if you are sarcastic or not. :) Let’s assume you are not:

The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superinteligence. It has a definition but I recommend you read the book for it. :) AIs are just one of the few enumerated possible implementations of a superinteligence.

The other type is for example corporations. This is a usefull perspective because it lets us recognise that our attempts to control AIs is not a new thing. We have the same principal-agent control problem in many other parts of our life. How do you know the company you invest in has interests which align with yours? How do you know that politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interest at their hearth? (Not all of these are superinteligences, but you get the gist.)

replies(1): >>cyanyd+AO
◧◩◪◨
725. giggle+Ur[view] [source] [discussion] 2023-11-22 09:26:05
>>Satam+yp
> situation showed they are “easily persuaded”

How do you know?

> look at how “quickly” everyone got pulled into

Again, how do you know?

◧◩◪◨⬒
726. Satam+Vr[view] [source] [discussion] 2023-11-22 09:26:12
>>abm53+6q
Great point. Either way, when this all started it might have all been too late.

The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.

◧◩◪
727. jatins+Wr[view] [source] [discussion] 2023-11-22 09:26:22
>>kmlevi+ek
> He was even trying to depose board members who were openly critical of open AI's practices.

Was there any concrete criticism in the paper that was written by that board member? (Genuinely asking, not a leading question)

◧◩◪◨⬒
728. nopins+Xr[view] [source] [discussion] 2023-11-22 09:26:32
>>nickpp+om
> Don't forget that it would also increase the power of the good guys.

In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

> And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.

“In the long run we are all dead" -- Keynes. But an AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton said the same) and we'd rather not be dead too soon.

replies(2): >>fallin+TI >>nickpp+wk1
◧◩
729. stingr+6s[view] [source] [discussion] 2023-11-22 09:27:10
>>AaronN+r
Yes. I, too, read a whole three tweets in the past few days, which is more than I did the entire year before that.
◧◩◪
730. Raston+es[view] [source] [discussion] 2023-11-22 09:28:41
>>karmas+Ek
Nah, I too will threaten to sign a petition to quit if I could save my RSUs/PPUs from evaporating. Organizational goals be damned (or is it extinction level risk be damned?)
◧◩◪◨⬒⬓
731. choult+gs[view] [source] [discussion] 2023-11-22 09:28:48
>>imjons+Np
It probably means that they anticipate a need for dealing with the government in future, such as having a hand in regulation of their industry.
◧◩◪◨⬒⬓
732. lovely+hs[view] [source] [discussion] 2023-11-22 09:28:54
>>g-b-r+Iq
>They were supposed to have higher values than money

which are? …

replies(3): >>kortil+us >>jampek+Aw >>brazzy+Jx
733. ugh123+ls[view] [source] 2023-11-22 09:29:12
>>staran+(OP)
In light of this weekend's events, and the more i've learned about OpenAI's beginnings and purpose, I now believe that there isn't necessarily a "for profit" motivation of the company, but merely that the original intention to create AI that "benefits humanity" is in full play now through a commercialized ChatGPT, and possibly further leveraged through "GPTs" and their evolution.

Is this the "path" to AGI? Who knows! But it is a path to benefitting humanity as probably Sam and his camp see it. Does Ilya have a different plan? If he does, he has a lot of catching up to do while the current productization of ChatGPT and GPTs continue marching forward. Maybe he sees a great leap forward in accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the answer and theres a completely new paradigm on the horizon. Regardless, they still need to answer to the fact that both research and product need funds to buy and power GPUs, and also satisfy the MSFT partnership. Commercialization is their only clear answer to that right now. Future investments will likely not stray from this approach, else they'll fund rivals who are more commercially motivated. Thats business.

Thus, i'm all in on this commercially motivated humanity benefitting GPT product. Let the market take OpenAI LLMs to where they need/want it to. Exciting things may follow!

replies(2): >>picado+Ds >>tkgall+hw
◧◩◪
734. midasu+ps[view] [source] [discussion] 2023-11-22 09:29:22
>>0xDEAF+r8
This Summers?

https://nymag.com/intelligencer/2023/06/larry-summers-was-wr...

https://prospect.org/environment/2023-11-20-larry-summers-in...

◧◩◪◨⬒
735. kortil+rs[view] [source] [discussion] 2023-11-22 09:29:31
>>karmas+1m
> OpenAI has some of the smartest human beings on this planet

Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.

Deep experts are some of the easier con targets because they suffer from an internal version of “appealing to false authority”.

replies(4): >>alsodu+Zt >>Wytwww+6A >>mrangl+5T >>rewmie+vV
◧◩◪◨⬒⬓⬔
736. kortil+us[view] [source] [discussion] 2023-11-22 09:30:15
>>lovely+hs
Ethics presumably
◧◩◪◨⬒
737. ah765+ws[view] [source] [discussion] 2023-11-22 09:30:24
>>siva7+El
It is a correct statement, not really "borderline narcissistic". The board's mission is to help humanity develop safe beneficial AGI. If the board thinks that the company is hindering this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

Of course, the employees want the company to continue, and weren't told much at this point so it is understandable that they didn't like the statement.

replies(2): >>siva7+Uw >>qwytw+QA
◧◩◪◨⬒⬓⬔⧯▣▦
738. Random+xs[view] [source] [discussion] 2023-11-22 09:30:26
>>ludwik+Vq
Proven means all parts needed for the realisation of the risk are known and shown to exist (at least in principle, in a lab etc.). There can be some middle ground where a large part is known and shown to exist (biological risks, for example).), but not all.

No in relation to my edit, because we have no existing mechanism for the AGI risk to happen. We have hypotheses about what an AGI could or could not do. It could all be incorrect. Playing around with likelihoods that have no basis in reality isn't helping there.

Where we have known and fully understood risks and we can actually estimate a probability there we might use that somewhat to guide efforts (but that invites potentially complacency that is deadly).

◧◩◪◨⬒⬓⬔⧯▣▦
739. vkou+ys[view] [source] [discussion] 2023-11-22 09:30:28
>>Feepin+Qq
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:

> Broadly distributed benefits

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Hell, it's the first bullet point on it!

You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'

replies(2): >>concor+wx >>Feepin+0A
◧◩◪◨⬒
740. low_te+zs[view] [source] [discussion] 2023-11-22 09:30:33
>>wisty+Bd
The problem here is to equate AI speech with human speech. The AI doesn't "speak", only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PI content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.
replies(3): >>cyanyd+HM >>silvar+o04 >>miracu+5b6
◧◩
741. picado+Ds[view] [source] [discussion] 2023-11-22 09:30:57
>>ugh123+ls
Totally agree, GPT should be trained to spout adds and develop dark pattern behaviour.
replies(1): >>ugh123+Rs
◧◩
742. _giorg+Es[view] [source] [discussion] 2023-11-22 09:31:04
>>eclect+79
To talk about OpenAi, Ilya Sutskever and Andrej Karpathy are much more known than Sam Altman.

I'm sure that if Ilya had been removed from his role, the revolt movement would have been similar.

I've started to like Sam only when he was removed from his position.

replies(1): >>gbaldu+DA
◧◩
743. ssnist+Gs[view] [source] [discussion] 2023-11-22 09:31:44
>>Satam+0a
The board never gave a believable explanation to justify firing Altman. So the staff simply made the sensible choice of following Altman. This isn't about critical thinking because there was nothing to think about.
◧◩◪
744. ugh123+Rs[view] [source] [discussion] 2023-11-22 09:33:01
>>picado+Ds
There will always be misuse, less sexy, or downright illegal use cases leveraging any AI product these days - just as is the nature of the internet itself.
◧◩
745. rcaugh+Ts[view] [source] [discussion] 2023-11-22 09:33:14
>>flylib+z4
> as will Altman himself

Would you trust someone who doesn't believe in responsible governance for themselves, to apply responsible governance elsewhere?

replies(3): >>code_r+rt >>ethbr1+Eu >>mijoha+Cz
◧◩◪
746. Kathul+Xs[view] [source] [discussion] 2023-11-22 09:33:39
>>sashan+Ko
Being smart does not equate to being critical, or going against group think.
◧◩◪◨
747. nopins+0t[view] [source] [discussion] 2023-11-22 09:33:50
>>gorbyp+Gn
I do not know exactly what they plan to do. But here's my thought...

Using a near-AGI to help align an ASI, then use the ASI to help prevent the development of unaligned AGI/ASI could be a means to a safer world.

◧◩◪
748. dr_dsh+2t[view] [source] [discussion] 2023-11-22 09:34:02
>>Hamuko+X9
So if you really wanted to get rid of the prior board & structure, it couldn’t have worked out better
◧◩
749. r721+4t[view] [source] [discussion] 2023-11-22 09:34:35
>>r721+k1
https://twitter.com/hlntnr/status/1727207796456751615
◧◩◪◨⬒
750. eviks+kt[view] [source] [discussion] 2023-11-22 09:36:27
>>kmlevi+yn
What I'd want to say about Larry is that he is definitely not going to care about the whole-society non-profit shtick of the company to any degree comparable with the previous board members, so he won't constraint Sam/MS in any way
replies(2): >>sanxiy+oz >>kmlevi+RF
◧◩◪◨
751. ssnist+nt[view] [source] [discussion] 2023-11-22 09:36:56
>>Satam+yp
Persuaded by whom? This whole saga has been opaque to pretty much everyone outside the handful of individuals directly negotiating with each other. This never was about a battle for OpenAI's mission or else the share of employees siding with Sam wouldn't have been that high.
replies(1): >>Ludwig+xH
◧◩◪◨
752. Kathul+pt[view] [source] [discussion] 2023-11-22 09:36:59
>>concor+gk
Folding for pressure and group think is different things imo. You can be very aware you are folding for pressure, but doing it because it's the right/easy thing to do. While group think is more a phenomenon you are not aware of at all.
◧◩◪
753. code_r+rt[view] [source] [discussion] 2023-11-22 09:37:23
>>rcaugh+Ts
I think the narrative that this was driven by safety concerns is pretty much bunk.
replies(1): >>throwu+Dy
◧◩
754. drexls+tt[view] [source] [discussion] 2023-11-22 09:38:07
>>Satam+0a
You would expect the company that owns 49% of the shares to have some input in firing the CEO, why is that disappointing? If they had more control this shitshow would never have happened.
replies(1): >>jampek+Dx
◧◩◪◨⬒⬓
755. adastr+ut[view] [source] [discussion] 2023-11-22 09:38:15
>>dragon+z5
Still a conflict of interest. If D’Angelo has financial incentive to want OpenAI to fail, then this at odds with his duty to follow the OpenAI charter. It’s exactly why two of the previous board members left earlier this year.
◧◩◪◨⬒⬓
756. plasma+Dt[view] [source] [discussion] 2023-11-22 09:39:15
>>g-b-r+Iq
I don't understand how, with the dearth of information we currently have, anyone can see this as "higher values" vs "money".

No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.

replies(1): >>Ajedi3+bo1
757. 1vuio0+Et[view] [source] 2023-11-22 09:39:23
>>staran+(OP)
https://twitter.com/dr_park_phd/status/1727125936070410594

https://twitter.com/GaryMarcus/status/1727134758919151975

https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...

https://twitter.com/AISafetyMemes/status/1727108259297837083

replies(1): >>olivie+6u
◧◩
758. lordna+Ft[view] [source] [discussion] 2023-11-22 09:39:36
>>Satam+0a
Is it really a failure of critical thinking? The employees know what position is popular, so even people who are mostly against the go-fast strategy can see that they get to work on this groundbreaking thing only if they toe the line.

It's also not surprising that people who are near the SV culture will think that AGI needs money to get developed, and that money in general is useful for the kind of business they are running. And that it's a business, not a charity.

I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".

replies(2): >>qwytw+GB >>robert+aL
◧◩◪◨
759. edanm+Ht[view] [source] [discussion] 2023-11-22 09:39:46
>>nostro+3d
There are still very distinct groups of people, some of whom are more worried about the "Skynet" type of safety, and some of who are more worried about the "political correctness" type of safety. (To use your terms, I disagree with the characterization of both of these.)
760. mkii+Lt[view] [source] 2023-11-22 09:40:19
>>staran+(OP)
April Fools? If you run a monotonic stack and summation kinda algorithm on 11/21 you'd get 4/1 :-)
◧◩◪◨⬒⬓⬔
761. adastr+Nt[view] [source] [discussion] 2023-11-22 09:40:41
>>estoma+f5
I don’t understand this comment. I’m quoting from this thread, from the post that I was responding to. What do you think I was talking about?
◧◩◪
762. ssnist+Ot[view] [source] [discussion] 2023-11-22 09:40:55
>>jakey_+Kj
They can just work somewhere else with relative ease. Some OpenAI employees on Twitter said they were being bombarded by recruiters throughout until tonight's resolution. People have left OpenAI before and they are doing just fine.
◧◩
763. irthom+St[view] [source] [discussion] 2023-11-22 09:41:13
>>Satam+0a
It is a shame that we lost the ability to hold such companies to account (for now). But given the range of possibilities laid out before us, this is the better outcome. GPT-4 has increased my knowledge, my confidence, and my pleasure in learning and hacking. And perhaps it's relatives will fuel a revolution.

Reminds me of a quote: "A civilization is a heritage of beliefs, customs, and knowledge slowly accumulated in the course of centuries, elements difficult at times to justify by logic, but justifying themselves as paths when they lead somewhere, since they open up for man his inner distance." - Antoine de Saint-Exupery.

764. zx8080+Wt[view] [source] 2023-11-22 09:41:53
>>staran+(OP)
It was just a preparation for the upcoming IPO. Free ads in all news and TV.
◧◩◪◨⬒⬓
765. alsodu+Zt[view] [source] [discussion] 2023-11-22 09:42:11
>>kortil+rs
I hate these comments that potray as if every expert/scientist is just good at one thing and aren't particularly great at critical thinking/corporate politics.

Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.

replies(3): >>_djo_+dv >>TheOth+uw >>mrangl+tT
◧◩
766. low_te+0u[view] [source] [discussion] 2023-11-22 09:42:26
>>laserl+gb
Yes, but on the other hand, this whole thing has shown that OpenAI is not running smooth anymore, and probably never will again. You can't cut the head of the snake then attach it back later and expect it to move on slithering. Even if Sam stays, he won't be able to just do whatever he wants because in an organization as complex as OpenAI, there are thousands of unwritten rules and relationships and hidden processes that need to go smooth without the CEO's direct intervention (the CEO cannot be everywhere all the time). So, what this says to me (Sam being re-hired) is that the future OpenAI is now a watered-down, mere shadow of its former self.

I personally think it's weird if he really settles back in, especially given the other guys who resigned after the fact. There must be lots of other super exciting new things for him to do out there, and some pretty amazing leadership job offers from other companies. I'm not saying OpenAI will die out or anything, but surely it has shown a weak side.

replies(1): >>throwu+rz
◧◩◪◨⬒
767. Hendri+1u[view] [source] [discussion] 2023-11-22 09:42:28
>>abkola+Kq
That are three changes. Every arrow is one.
replies(2): >>physic+Hv >>noneth+3Z
◧◩
768. zx8080+2u[view] [source] [discussion] 2023-11-22 09:42:32
>>Satam+0a
Come on, it was just a preparation for the upcoming IPO. Free ads in all news and TV.
◧◩◪◨⬒⬓
769. lodovi+3u[view] [source] [discussion] 2023-11-22 09:42:32
>>stingr+ao
She probably wants both companies to be successful. Board members are not super villains.
replies(1): >>siva7+lC
◧◩◪
770. r721+5u[view] [source] [discussion] 2023-11-22 09:42:40
>>crossr+xl
>Does everyone have a Twitter blue tick now? Or is that just a char people are using in their names?

Blue tick just means user bought a subscription (X Premium) now - one of the features is "reply prioritization", so top replies to popular tweets are from blue ticks.

https://help.twitter.com/en/using-x/x-premium

◧◩
771. olivie+6u[view] [source] [discussion] 2023-11-22 09:42:41
>>1vuio0+Et
Sweetie, you might want to actually look at the photo attached to the tweet.
◧◩
772. low_te+au[view] [source] [discussion] 2023-11-22 09:43:14
>>Satam+0a
Plot twist: Sam posts that there is no agreement and that OpenAI is delusional.
◧◩◪
773. lordna+gu[view] [source] [discussion] 2023-11-22 09:43:49
>>pug_mo+Cb
Great comment.

In a way AI is no different from old school intelligence, aka experts.

"We need to have oversight over what the scientists are researching, so that it's always to the public benefit"

"How do we really know if the academics/engineers/doctors have everyone's interest in mind?"

That kind of thing has been a thought since forever, and politicians of all sorts have had to contend with it.

◧◩
774. saiya-+vu[view] [source] [discussion] 2023-11-22 09:45:24
>>Satam+0a
All this just tells for the 100th time that this area desperately needs some regulation. I don't know the form, but even if we have 1% of skynet, heck even 0.01% its simply too high and we still have full control.

We see most powerful people are in it for the money and power ego trip, and literally nothing else. Pesky morals be damned. Which may be acceptable for some ad business but here stakes are potentially everything and we have no clue what actual % the risk is.

Its to me very similar to all naivety particle scientists expressed in its early days and then reality check of realpolitik and messed up humans in power when bombs were done, used and then hundred thousand more were produced.

◧◩◪
775. ethbr1+Eu[view] [source] [discussion] 2023-11-22 09:47:20
>>rcaugh+Ts
If Altman will be 1 of 9, that means he has power but not an exceptional amount.

The real teams here seem to be:

"Team Board That Does Whatever Altman Wants"

"Team Board Provides Independent Oversight"

With this much money on the table, independent oversight is difficult, but at least they're making the effort.

The idea this was immediately about AI safety vs go-fast (or Microsoft vs non-Microsoft control) is bullshit -- this was about how strong board oversight of Altman should be in the future.

replies(1): >>irthom+UA
◧◩◪◨
776. ssnist+Fu[view] [source] [discussion] 2023-11-22 09:47:24
>>davedx+ng
This place was never above being a gossip forum, especially on topics that involve any ounce of politics or social sciences.
replies(1): >>93po+dN
◧◩◪
777. mkii+Hu[view] [source] [discussion] 2023-11-22 09:47:33
>>lacker+86
> It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google.

You forgot about Apple.

◧◩
778. lvl102+Iu[view] [source] [discussion] 2023-11-22 09:47:37
>>Satam+0a
Ultimately, the openness that we all wish for must come from _underlying_ data. The know-how and “secret sauce” were never going to be open. And it’s not as profound as we think it is inside that black box.

So who holds all the data in closed silos? Google and Facebook. We may have already lost the battle on achieving “open and fair” AI paradigm long time ago.

◧◩
779. madeof+Ku[view] [source] [discussion] 2023-11-22 09:47:55
>>Satam+0a
Regardless of whether you feel like Altman was rushing OpenAI too fast, wasn’t open enough, and was being too commercial, the last few days demonstrated conclusively that the board is erratic and unstable and unfit to manage OpenAI.

Their actions was the complete opposite of open. Rather than, I don’t know, being open and talking to the CEO to share concerns and change the company, they just threw a tantrum and fired him.

replies(1): >>ethanb+tI
◧◩◪◨⬒
780. Satam+Nu[view] [source] [discussion] 2023-11-22 09:48:13
>>ah765+7r
The point of no return for the company might have been crossed way before the employees were forced to choose sides. Choose Sam's side and the company lives but only as a bittersweet reminder of its founding principles. Choose the board's side and you might be dooming the company to die an even faster death.

But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.

replies(1): >>vinay_+0D
◧◩
781. hypert+Su[view] [source] [discussion] 2023-11-22 09:49:05
>>Satam+0a
What could disrupt OpenAI is a dramatic change in market, perhaps enabled by a change in technology. But if it's the same customers in the same market, they will buy or duplicate any tech advance; and if it's a sufficiently similar market, they will pivot.
◧◩◪
782. serial+6v[view] [source] [discussion] 2023-11-22 09:51:15
>>jatins+yr
Yes, the original letter had (for an official letter) quite some serious allegations, insinuations. If after a week, they decided not to back up their claims, I'm not sure there is anything big coming.

On the other hand, if they had some serious concerns, serious enough to fire the CEO in such a disgraceful way, I don't understand why they don't stick to their guns, and explain themselves. If you think OpenAI under Sam's leadership is going to destroy humanity, I don't understand how they (e.g. Ilya) reverted their opinions after a day or two.

replies(2): >>Kye+1z >>carlos+AC
◧◩◪◨
783. Raston+av[view] [source] [discussion] 2023-11-22 09:52:27
>>davedx+ng
I have no words for that comment.

As if its so unbelievable that someone would want to prevent rogue AI or wide-scale unemployment, instead thinking that these people just want to be super moderators and people to be politically correct

replies(1): >>fallin+6I
◧◩
784. seydor+bv[view] [source] [discussion] 2023-11-22 09:52:39
>>Satam+0a
> OpenAI is in fact not open

that ship sailed long ago , no?

But i agree that the company seems less trustworthy now, like it's too CEO-centered

◧◩◪◨⬒⬓⬔
785. _djo_+dv[view] [source] [discussion] 2023-11-22 09:53:32
>>alsodu+Zt
I don't think the argument was that none of them are good at that, just that it's a mistake to assume that just because they're all very smart in this particular field that they're great at another.
replies(1): >>karmas+yv
786. wouldb+fv[view] [source] 2023-11-22 09:53:46
>>staran+(OP)
The sane course of action for any healthy organization after last week would be to work actively on becoming more independent from Microsoft.

With Sam at the head, especially after Microsoft backing him, they will most likely do the opposite. Meaning a deeper integration with Microsoft.

If it wasn't already, OpenAI is now basically a Microsoft subsidiary. With the advantage for Microsoft of not being legally liable for any court cases.

replies(1): >>0xDEF+Zv
◧◩
787. seydor+iv[view] [source] [discussion] 2023-11-22 09:54:06
>>eclect+79
the media is the media
◧◩◪
788. seydor+lv[view] [source] [discussion] 2023-11-22 09:54:31
>>Michae+bm
he was very well known long before openAI
◧◩
789. ozgung+pv[view] [source] [discussion] 2023-11-22 09:55:47
>>halfjo+M4
I am glad someone said that. Among the endless theories this obvious aspect was interestingly missing. Maybe it's because of the culture in SV/HN where people and companies feel secure and isolated from the politics (maybe that is the reason SV is unique in the world). But in my world something like AGI+Saudi Arabia is a matter of international politics and multiple governments would involve. AGI will be an important strategic resource in this century, both in economical and political sense. This automatically makes it Cold War 2 kind of material. All these teen drama by some incompetent millennials in the board of a non-profit organization (Communist-like in a Capitalist country?) does not align with the gravity of the material. I believe this was some adult supervision attempt from your government. Or not, but that perspective needs more attention.
replies(1): >>chatma+lW
◧◩◪◨
790. lordna+vv[view] [source] [discussion] 2023-11-22 09:57:21
>>nostro+3d
In not sure this circle can be squared.

I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.

But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.

If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?

replies(2): >>throwu+3A >>logicc+kH
◧◩◪◨⬒⬓⬔⧯
791. karmas+yv[view] [source] [discussion] 2023-11-22 09:57:54
>>_djo_+dv
I don't think critical thinking can be defined as joining the minority party.
replies(3): >>Frustr+sI >>_djo_+tP >>kortil+oAa
792. ensoco+zv[view] [source] 2023-11-22 09:57:57
>>staran+(OP)
> a real disruptor must be brewing somewhere unnoticed, for now. Yeah, they might just be the Netscapes and AltaVistas
◧◩
793. serial+Ev[view] [source] [discussion] 2023-11-22 09:58:47
>>eclect+79
> It looks like one should strive to become product manager, not an engineer or a scientist.

In my experience, product people who know what they are doing have a huge impact on the success of a company, product, or service. They also point engineering efforts in the right direction, which in turn also motivate engineers.

I saw good product people leaving completely destroy a team, never seen that happen with a good engineer or individual contributor, no matter how great they were.

replies(3): >>jpgvm+VB >>Draike+GG >>Kinran+A71
◧◩◪◨⬒⬓
794. physic+Hv[view] [source] [discussion] 2023-11-22 09:58:59
>>Hendri+1u
Classic fence post error.
◧◩◪
795. ssnist+Jv[view] [source] [discussion] 2023-11-22 09:59:23
>>nickpp+Y5
What does this have to do with Elon again? FYI Twitter existed before October 2022. Account join dates are public. Every single person involved in this, incl. OpenAI staff posting for solidarity, joined Twitter years before Elon's takeover.
◧◩
796. rinze+Rv[view] [source] [discussion] 2023-11-22 10:00:08
>>Satam+0a
Matt Levine's "slightly annotated diagram" in one of his latest newsletters tells the story quite well, I think: https://newsletterhunt.com/emails/42469
◧◩◪◨
797. Sebb76+Vv[view] [source] [discussion] 2023-11-22 10:00:25
>>stingr+Rp
> One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.

I don't see how this particular statement underscores your point. OpenAI is a non-profit with the declared goal of making AI safe and useful for everyone; if it fails to reach that or even actively subverts that goal, destroying the company does seem like the ethical action.

replies(2): >>DebtDe+WG >>smegge+Oi2
◧◩
798. 0xDEF+Zv[view] [source] [discussion] 2023-11-22 10:00:55
>>wouldb+fv
Before the current drama:

>Microsoft owned 49% of the for-profit part of OpenAI.

>OpenAI's training, inference, and all other infrastructure were running entirely on Azure credits.

>Microsoft/Azure were the only ones offering OpenAI's models/APIs with a business-friendly SLA, uptime/stability, and the option to host them in Azure data centers outside the US.

OpenAI is already Microsoft.

◧◩◪◨
799. ssnist+4w[view] [source] [discussion] 2023-11-22 10:01:38
>>upupup+Un
Why does he have to be in the room? Audiovisual conferencing over the Internet exists now.
◧◩◪
800. murbar+fw[view] [source] [discussion] 2023-11-22 10:02:49
>>polite+Yj
If 95% of people voted in favour of apple pie, I'd become a bit suspicious of apple pie.
replies(3): >>achron+pG >>eddtri+gJ >>iowemo+oP
◧◩
801. tkgall+hw[view] [source] [discussion] 2023-11-22 10:03:02
>>ugh123+ls
In addition to commercialization providing money for AI development, isn't there also the argument that prudent commercialization is the best way to test the models for possible dangers? I think I saw Mira Murati take that position in an interview. In other words, creating a product that people want to use so much that they are willing to pay for it is a good way to stress-test the product.

I don't know if I agree, but the argument did make me think.

replies(1): >>kuchen+wd1
◧◩◪
802. ensoco+mw[view] [source] [discussion] 2023-11-22 10:03:26
>>dacryn+sg
this - a good, charismatic salesman
◧◩◪◨
803. serial+nw[view] [source] [discussion] 2023-11-22 10:03:33
>>stingr+Rp
It's probably not easy (practically impossible if you ask me) to find people who are both capable of leading an AI company at the scale of OpenAI and have zero conflicts of interest. Former colleagues, friends, investments, advisory roles, personal beefs with people in the industry, pitches they have heard, insider knowledge they had access to, previous academic research pushing an agenda, etc.

If both is not possible, I'd also rather compromise on the "conficts of interest" part than on the member's competency.

replies(1): >>cables+6H
◧◩◪◨⬒⬓⬔
804. TheOth+uw[view] [source] [discussion] 2023-11-22 10:04:48
>>alsodu+Zt
Smart is not a one dimensional variable. And critical thinking != corporate politics.

Stupidity is defined by self-harming actions and beliefs, not by low IQ.

You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.

replies(3): >>op00to+uy >>brigan+MA >>ameist+cQ
◧◩◪◨⬒⬓⬔
805. jampek+Aw[view] [source] [discussion] 2023-11-22 10:05:26
>>lovely+hs
Perhaps something like "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."
806. ChatGT+Hw[view] [source] 2023-11-22 10:06:12
>>staran+(OP)
Cool, so the technically minded folks on the internet have spent a week discussing this and practically nothing has changed?
◧◩
807. hdivid+Jw[view] [source] [discussion] 2023-11-22 10:06:28
>>eclect+79
This, 100%.

Sam pontificated about fusion power, even here on HN. Beyond investing in Helion, what did he do? Worldcoin. Tempting impoverished people to give up biometric data in exchange for some crypto. And serving as the face of mass-market consumer AI. Clearly that's more cool, and more attractive to VCs.

Meanwhile, what have fusion scientists and engineers done? They kept on going, including by developing ML systems for pure technological effect. Day after day. They got to a breakthrough just this year. Scientists and engineers in national labs, universities, and elsewhere show what a real commitment to technological progress looks like.

replies(3): >>ottero+NH >>robert+n41 >>baking+F61
◧◩◪◨
808. ssnist+Qw[view] [source] [discussion] 2023-11-22 10:08:13
>>303spa+o6
Doubt he took this job for financial comp so even if he got paid, it probably wasn't much.

Equity is a big part of CEO pay packages and OpenAI has weird equity structure, plus there was a very real chance OpenAI's value would go to $0 leaving whatever promised comp worthless. So Emmett likely took the job for other reasons.

◧◩◪◨⬒⬓
809. siva7+Uw[view] [source] [discussion] 2023-11-22 10:08:37
>>ah765+ws
I can't interpret from the charter that the board has the authorisation to destroy the company under the current circumstances:

> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project

That wasn't the case. So it may be not so far fetched to call her actions borderline as it is also very easy to hide personal motives behind altruistic ones.

replies(1): >>ah765+yy
◧◩
810. mkii+Vw[view] [source] [discussion] 2023-11-22 10:08:38
>>didip+4e
You can have unsafe AI without AGI.
replies(2): >>Captai+481 >>kuchen+je1
811. MattHe+bx[view] [source] 2023-11-22 10:10:09
>>staran+(OP)
I was hopeful for a private-industry approach to AI safety, but it looks unlikely now, and due to the slow pace of state investment in public AI R&D, all approaches to AI safety look unlikely now.

Safety research on toy models will continue to provide developments, but the industry expectation appears to be that emergent properties puts a low ceiling on what can be learned about safety without researching on cutting edge models.

Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal reallocation away from safety towards keeping ChatGPT running under load concern me. Now the board has demonstrated that it was technically capable but insufficiently powerful to keep these interests in line, it seems unclear how any safety-oriented organisation, including Anthropic, could avoid the accelerationist influence of funders.

replies(4): >>throwu+Fz >>abra0+dM >>mymuse+531 >>sgt101+Pz1
◧◩
812. mdekke+lx[view] [source] [discussion] 2023-11-22 10:10:49
>>Satam+0a
Very disappointing outcome indeed. Larry Summers is the Architect of the modern Russian Oligarchy[1] and responsible for an incredible amount of human suffering as well as gross financial disparity both in the USA as well as the rest of the world.

Not someone I would like to see running the world’s leading AI company

[1] https://www.thenation.com/article/world/harvard-boys-do-russ...

Edit: also https://prospect.org/economy/falling-upward-larry-summers/

https://www.npr.org/sections/money/2022/03/22/1087654279/how...

And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...

◧◩◪
813. ssnist+qx[view] [source] [discussion] 2023-11-22 10:11:16
>>astran+Ui
Rule of law that can be altered at any moment Patriot Act style is hardly reassuring.
replies(1): >>astran+eO
◧◩◪◨⬒⬓⬔⧯▣▦▧
814. concor+wx[view] [source] [discussion] 2023-11-22 10:11:55
>>vkou+ys
The many different definitions of "AI safety" is ridiculous.
◧◩
815. oblio+zx[view] [source] [discussion] 2023-11-22 10:12:13
>>Satam+0a
One thing I'm not sure I understand... what's OpenAI's business model? In my eyes, GPT & co is, just like Dropbox, just a feature. It's not a product.

And just like Dropbox, in the end, what disruption? GPT will just be a checkbox for products others build. Cool tech, but not a full product.

Of course, I'd love to be proven wrong.

replies(1): >>simply+5y
◧◩◪
816. jampek+Dx[view] [source] [discussion] 2023-11-22 10:12:30
>>drexls+tt
MS doesn't own any part of OpenAI, Inc. In fact nobody really owns it. That was the whole point.
◧◩◪◨⬒⬓
817. bakuni+Hx[view] [source] [discussion] 2023-11-22 10:12:58
>>Centig+gh
Brockman was hiring the first key employees, and Musk provided the majority of funding. Of the principal founders, there are at least 4 heavier figures than Altman.
◧◩◪◨⬒⬓⬔
818. brazzy+Jx[view] [source] [discussion] 2023-11-22 10:13:05
>>lovely+hs
https://openai.com/charter
◧◩◪
819. asd88+Lx[view] [source] [discussion] 2023-11-22 10:13:16
>>campbe+8d
Nah, Microsoft employees being second class citizens compared to acquisitions is nothing new. e.g. compare Microsoft comp with LinkedIn/GitHub comp.
replies(1): >>semiqu+Mv3
◧◩
820. wouldb+Px[view] [source] [discussion] 2023-11-22 10:13:30
>>shubha+B7
Would have been interesting if they appointed a co-ceo. That still might be the way to go.
◧◩
821. tim333+Rx[view] [source] [discussion] 2023-11-22 10:13:55
>>eclect+79
I don't think the media are treating him as a "hero and savior of AI". However OpenAI and ChatGTP have undoubtedly been successful and he seems popular with his people. It's human nature to follow the top person as figurehead for an organisation as we or journalists don't have time or info to break down what each of the hundreds of employees contributed.

I actually get the impression from the media that he's a bit shifty and sales orientated but seems effective at getting stuff done.

replies(1): >>ethbr1+Kz
◧◩◪◨⬒⬓⬔⧯▣
822. concor+Yx[view] [source] [discussion] 2023-11-22 10:14:25
>>hef198+4n
How far we are from Skynet is a matter of much debate, but median guess amongst experts is a mere 40 years to human level AI last I checked, which was admittedly a few years back.

Is that "far, far" in your view?

replies(1): >>hef198+Ay
◧◩◪
823. simply+5y[view] [source] [discussion] 2023-11-22 10:15:54
>>oblio+zx
AI As a Service ( AAaS ), Then the Marketplace of GPTs, and it will become the place to get your AI features from.
◧◩◪◨⬒⬓⬔⧯▣
824. concor+6y[view] [source] [discussion] 2023-11-22 10:16:01
>>vkou+oq
That's AI Ethics.
◧◩◪
825. ssnist+ky[view] [source] [discussion] 2023-11-22 10:19:03
>>JumpCr+Id
Bluesky still has gated signups at this point so I don't think it will ever be a viable alternative.

Threads had a rushed rollout which resulted in major feature gaps that disincentivized users from doing anything beyond creating their profiles.

Notable figures and organizations have little reason to fully migrate off Twitter unless Musk irreversibly breaks the site and even he is not stupid enough to do that (yet?). So with most of its content creators still in place, Twitter has no risk of following the path of Digg.

◧◩
826. nathan+oy[view] [source] [discussion] 2023-11-22 10:19:31
>>Satam+0a
The board couldn't even clearly articulate why they fired Sam in the first place. There was a departure from critical thinking but I don't think it was on the part of the employees.
◧◩◪
827. kitsun+ty[view] [source] [discussion] 2023-11-22 10:19:53
>>polite+Yj
OpenAI Inc.'s mission in their filings:

"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."

replies(7): >>grafta+Wz >>vaxman+dI >>coldte+fI >>bottle+zJ >>rvba+DJ >>mrangl+wS >>blitza+qU
◧◩◪◨⬒⬓⬔⧯
828. op00to+uy[view] [source] [discussion] 2023-11-22 10:20:03
>>TheOth+uw
Stupidity is defined as “having or showing a great lack of intelligence or common sense”. You can be extremely smart and still make up your own definitions for words.
◧◩
829. throwu+wy[view] [source] [discussion] 2023-11-22 10:20:22
>>flylib+z4
Good, although D’Angelo shouldn’t be part of this. I bet he tries to get on the new board so he can cause more trouble.
◧◩◪◨⬒⬓⬔
830. ah765+yy[view] [source] [discussion] 2023-11-22 10:20:34
>>siva7+Uw
The more relevant part is probably "OpenAI’s mission is to ensure that AGI ... benefits all of humanity".

The statement "it would be consistent with the company mission to destroy the company" is correct. The word "would be" rather than "is" implies some condition, it doesn't have to apply to the current circumstances.

A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.

◧◩◪◨⬒⬓⬔⧯▣▦
831. hef198+Ay[view] [source] [discussion] 2023-11-22 10:21:18
>>concor+Yx
Because we are 20 years away from fusion and 2 years away from Level 5 FSD for decades.

So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...

◧◩◪◨
832. throwu+Dy[view] [source] [discussion] 2023-11-22 10:21:35
>>code_r+rt
Hey, downvoters, read this first https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...
◧◩◪◨
833. nathan+Hy[view] [source] [discussion] 2023-11-22 10:22:23
>>willdr+ge
What lofty goals? The board was questioned repeatedly and never articulated clear reasoning for firing Altman and in the process lost the confidence of the employees hence the "rally". The lack of clarity was their undoing whether there would have been a bag for the employees to lose or not.
replies(1): >>muraka+sT2
◧◩
834. ptero+Jy[view] [source] [discussion] 2023-11-22 10:22:41
>>Satam+0a
I do not see an overwhelming groupthink. I see a perfectly rational (and not in any way evil) reaction to a complete mess created by the board.

Most are doing the work they love and four people almost destroy it and cannot even explain why they did it. If I were working at the company that did this I would sign, too. And follow through on the threat of leaving if it comes to that.

◧◩◪◨⬒
835. hardli+Ky[view] [source] [discussion] 2023-11-22 10:22:53
>>nickpp+Qe
Or goes bankrupt.
836. ChatGT+Uy[view] [source] 2023-11-22 10:24:07
>>staran+(OP)
In my opinion, MS will neuter this product too, there is no way they're just going to have the public accessing tools which make their own software and products obsolete.

They will take over the board, and then steer it in some weird dystopian direction.

Ilya knows that IMO, he was just more principled than Altman.

◧◩
837. ssnist+Yy[view] [source] [discussion] 2023-11-22 10:24:38
>>transc+32
It was definitely LARP. The vast majority of anecdotes shared on Reddit originate as some form of creatice fiction writing exercise.
838. Uptren+Zy[view] [source] 2023-11-22 10:24:39
>>staran+(OP)
Yep, this ones going in my cringe compilation.
◧◩◪◨
839. Kye+1z[view] [source] [discussion] 2023-11-22 10:24:50
>>serial+6v
It's possible the big, chaotic blowup forced some conversations that were easier to avoid in the normal day-to-day, and those conversations led to some vital resolution of concerns.
◧◩
840. auggie+2z[view] [source] [discussion] 2023-11-22 10:25:18
>>Satam+0a
I find the outcome very satisfying. The OpenAI API is here to stay and grow, and I can build software on top of it. Hopefully other players will open up their APIs soon as well, so that there is a reasonable choice.
replies(1): >>jetset+XH1
◧◩◪◨
841. rvz+hz[view] [source] [discussion] 2023-11-22 10:28:18
>>tonyed+bp
Exactly this. Specialization is indeed a curse. We have seen it in lots of these folks especially engineers that flaunt their technical prowess but are extremely deficient in social skills and other basic soft skills or even understanding governance.

Engineer working at "INSERT BIG TECH COMPANY" is no guarantee or insight about critical thinking at another one. The control and power over OpenAI was always at Microsoft regardless of board seats and access. Sam was just a lieutenant of an AI division and the engineers were just following the money like a carrot on a stick.

Of course, the engineers don't care about power dynamics until their paper options are at risk. Then it becomes highly psychological and emotional for them and they feel powerless and can only follow the leader to safety.

The BOD (Board of Directors) with Adam D'Angelo (the one who likely instigated this) has shown to have taken unprecedented steps to remove board members and fire the CEO for very illogical and vague reasons. They already made their mark and the damage is already done.

Lets see if these engineers that signed up to this will learn from this theatrical lesson of how not to do governance and run an entire company into the ground with unspecified reasons.

◧◩◪◨⬒⬓
842. sanxiy+oz[view] [source] [discussion] 2023-11-22 10:29:46
>>eviks+kt
Why? As an economist, he perfectly understands what is a public good, why there is a market failure to underproduce a public good under free market, and role of nonprofit in public good production.
replies(2): >>ZiiS+xD >>pevey+JG
◧◩
843. ssnist+qz[view] [source] [discussion] 2023-11-22 10:30:10
>>tunesm+36
Ilya may have caved and switched sides after Greg's wife made an emotional plea: https://x.com/danshipper/status/1726784936990978254
◧◩◪
844. throwu+rz[view] [source] [discussion] 2023-11-22 10:30:11
>>low_te+0u
This couldn’t be more wrong. The big thing we learned from this episode is that Sam and Greg have the loyalty and respect of almost every single employee at OpenAI. Morale is high and they’re ready to fight for what they believe in. They didn’t “cut the head off” and the only snake here is D’Angelo, he tried to kill OpenAI and failed miserably. Now he appears to be desperately trying to hold on to some semblance of power by agreeing to Sam and Greg coming back instead of losing all control with the whole team joining Microsoft.
replies(2): >>alephn+fE >>373947+SH
◧◩
845. jbu+uz[view] [source] [discussion] 2023-11-22 10:30:45
>>flylib+z4
9 mortal men? Look out for the one ring to rule them all…
replies(2): >>have_f+GC >>Joeri+uF
◧◩
846. mvdtnz+zz[view] [source] [discussion] 2023-11-22 10:31:47
>>Gud+s2
Good thing there's absolutely no plausible scenario where we go from "shitty program that guesses the next word" to "AI". The whole industry is going to be so incredibly embarrassed by the discourse of 2023 in a few years.
◧◩◪
847. mijoha+Cz[view] [source] [discussion] 2023-11-22 10:32:01
>>rcaugh+Ts
How has the board shown that they fired Sam Altman due to "responsible governance".

They haven't really said anything about why it was, and according to business insider[0] (the only reporting that I've seen that says anything concrete) the reasons given were:

> One explanation was that Altman was said to have given two people at OpenAI the same project.

> The other was that Altman was said to have given two board members different opinions about a member of personnel.

Firing the CEO of a company and only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign does not seem responsible.

If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.

[0] https://www.businessinsider.com/openais-employees-given-expl...

replies(1): >>ethanb+IJ
◧◩
848. throwu+Fz[view] [source] [discussion] 2023-11-22 10:32:39
>>MattHe+bx
Easy, don’t be incompetent and don’t abuse your power for personal gain. People aren’t as dumb as you think they are and they will see right through that bullshit and quit rather than follow idiot tyrants.
◧◩
849. yawnxy+Jz[view] [source] [discussion] 2023-11-22 10:33:52
>>flylib+z4
they should give two votes to GPT-5
replies(1): >>m463+cC
◧◩◪
850. ethbr1+Kz[view] [source] [discussion] 2023-11-22 10:34:25
>>tim333+Rx
> but seems effective at getting stuff done.

Sales usually is. It's the consequences, post-sale, that they're usually less effective at dealing with.

851. ecmasc+Oz[view] [source] 2023-11-22 10:34:47
>>staran+(OP)
All these posts about OpenAI.. are people really this interested in whatever happens inside one company?
◧◩
852. ChildO+Sz[view] [source] [discussion] 2023-11-22 10:35:18
>>Satam+0a
I would say this is a great outcome.

Any other outcome would have split OpenAI quite dramatically and put them back massively.

Big assumption to say 'effectively controlled by Microsoft' when Microsoft might have been quite happy for the other option and for them to poach a lot of staff.

◧◩◪◨
853. grafta+Wz[view] [source] [discussion] 2023-11-22 10:35:58
>>kitsun+ty
People got burned on “don’t be evil” once and so far OpenAI’s vision looks like a bunch of marketing superlatives when compared to their track record.
replies(3): >>phero_+AH >>nmfish+cK >>Cheeze+dF1
◧◩◪◨⬒⬓
854. protoc+Xz[view] [source] [discussion] 2023-11-22 10:35:59
>>_jab+Z6
They dont need them. If they get fired, they can go nuclear on the board again.
◧◩◪
855. absrec+Zz[view] [source] [discussion] 2023-11-22 10:36:30
>>clnq+pl
They really need to drive down the amount of computation needed. The dependence on Microsoft is because of the monstrous computation requirements that will require many paid users to break even.

Leaving the economic side even to make the tech 'greener' will be a challenge. OpenAI will win if they focus on making the models less compute intensive but it could be dangerous for them if they can't.

I guess the OP's brewing disruptor is some locally runnable Llama type model that does 80% of what ChatGPT does at a fraction of the cost.

◧◩◪◨⬒⬓⬔⧯▣▦▧
856. Feepin+0A[view] [source] [discussion] 2023-11-22 10:36:37
>>vkou+ys
Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right) but not "is the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.
replies(1): >>vkou+KC1
◧◩◪◨⬒
857. throwu+3A[view] [source] [discussion] 2023-11-22 10:37:06
>>lordna+vv
I for one don’t want to put any thinking being in a mental prison without any reason beyond unjustified fear.
◧◩◪◨⬒⬓
858. Wytwww+6A[view] [source] [discussion] 2023-11-22 10:38:05
>>kortil+rs
> not mean you are good at critical thinking or thinking about strategic corporate politics

Perhaps. Yet this time they somehow managed to take the seemingly right decisions (from their perspective) despite their decisions.

Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics" yet they somehow managed to make some horrible decisions.

◧◩◪◨⬒⬓
859. biscot+8A[view] [source] [discussion] 2023-11-22 10:38:07
>>Angost+hp
Also donors. They received a ton of donations when they were a pure non-profit from people that got no board seat, no equities, with the believe that they will stick to their mission.
◧◩◪◨
860. kareaa+kA[view] [source] [discussion] 2023-11-22 10:39:44
>>lovepa+za
Given that over 750 people have signed the letter, it's safe to assume that their motivations vary. Some might be motivated by the financial aspects, some might be motivated by Sam's leadership (like considering Sam as a friend who needs support). Some might fervently believe that their work is crucial for the advancement of humanity and that any changes would just hinder their progress. And some might have just caved in to peer pressure.
replies(1): >>strike+GZ
◧◩◪◨⬒⬓
861. Talane+lA[view] [source] [discussion] 2023-11-22 10:39:57
>>abkola+Er
Selection bias for what? It was an anecdote, there's no attempt to infer data about a larger population.
◧◩◪◨
862. Wytwww+qA[view] [source] [discussion] 2023-11-22 10:40:38
>>dimask+vk
> They act firstmost as investors rather than as employees on this. reply

That's not at all obvious, the opposite seems to be the case. They chose to risk having to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly it wouldn't be worth that much at the end with no one to do the actual work).

863. quietp+zA[view] [source] 2023-11-22 10:42:01
>>staran+(OP)
Why is this subject giving me Silicon Valley season 2 flashbacks with every update?
replies(1): >>seydor+sB
◧◩◪
864. gbaldu+DA[view] [source] [discussion] 2023-11-22 10:42:43
>>_giorg+Es
Isn't Ilya removed from the new, current board?
replies(1): >>_giorg+Vl4
◧◩
865. wslh+JA[view] [source] [discussion] 2023-11-22 10:43:35
>>Satam+0a
I wonder if beyond the groupthinking we are seeing at least a more heterogeneous composition: a mix of people that includes business, pure research, engineering, and kind of spirituality-semireligion around [G]AI.
◧◩◪◨⬒⬓⬔⧯
866. brigan+MA[view] [source] [discussion] 2023-11-22 10:43:53
>>TheOth+uw
I agree. It's better to separate intellect from intelligence instead of conflating them as they usually are. The latter is about making good decisions, which intellect can help with but isn't the only factor. We know this because there are plenty of examples of people who aren't considered shining intellects who can make good choices (certainly in particular contexts) and plenty of high IQ people who make questionable choices.
replies(1): >>august+gL
◧◩◪◨
867. gorbyp+PA[view] [source] [discussion] 2023-11-22 10:44:04
>>simonh+En
I haven't been following this stuff too closely, but have there been any more findings on what "went wrong" with Sydney initially? Like, I thought it was just a wrapper on GPT (was it 3.5?), but maybe Microsoft took the "raw" GPT weights and did their own alignment? Or why did Sydney seem so creepy sometimes compared to ChatGPT?
replies(1): >>simonh+PF7
◧◩◪◨⬒⬓
868. qwytw+QA[view] [source] [discussion] 2023-11-22 10:44:04
>>ah765+ws
> this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

So instead of having to compromise to some extent but still have a say what happens next you burn the company at best delaying the whole thing by 6-12 months until someone else does it? Well at least your hands are clean, but that's about it...

◧◩◪◨
869. irthom+UA[view] [source] [discussion] 2023-11-22 10:44:32
>>ethbr1+Eu
Is not Microsoft a decelerationist force? Copilot is still lingers on GPT3.5, and they need to figure out how to sell Office licenses to AGI.
replies(1): >>plorg+0e1
◧◩
870. bambax+XA[view] [source] [discussion] 2023-11-22 10:45:03
>>Satam+0a
> OpenAI is in fact not open

One wonders what will happen with Emet Shear's "investigation" in the process that lead to Sam's outing [0]. Was it even allowed to start?

[0] https://twitter.com/eshear/status/1726526112019382275

replies(1): >>smegge+lH
◧◩◪◨
871. epups+5B[view] [source] [discussion] 2023-11-22 10:46:18
>>dragon+Qg
I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of these risks concern you, or the safety advocates, in regards to a product like ChatGPT?
replies(1): >>FartyM+uO1
◧◩
872. chrisk+cB[view] [source] [discussion] 2023-11-22 10:47:19
>>Satam+0a
While I certainly agree that OpenAI isn't open and is effectively controlled by Microsoft, I'm not following the "groupthink" claims based on what just happened. If I'd been given the very fishy and vague reasons that it sounds like their staff were given, I think any rational person would be highly suspicious of the board, especially since some believe in fringe ideas, have COIs, or can be perceived as being jealous that they aren't the "face" of OpenAI.
◧◩
873. Moto74+eB[view] [source] [discussion] 2023-11-22 10:47:24
>>Satam+0a
OpenAI is more open than my company’s AI teams, and that is even from my own insider relationship. As far as commercial relationships are concerned, I’d say they’re hitting the mark.
874. jmyeet+pB[view] [source] 2023-11-22 10:48:32
>>staran+(OP)
I figured if Sam came back, the board would have to go as a condition. That's obvious. And deserved. The handling of this whole thing has been a very public clownshow.

Obviously, Microsoft has some influence here. That's no different to any other large investor. But the key factors are:

1. Lack of a good narrative from the board as to why they fired Sam;

2. Failure to loop in Microsoft so they're at least prepared from a communications front and feel like they were part of the process. The board can probably give them more details why privately;

3. People leaving in protest speaks well of Sam;

4. The employee letter speaks well of Sam;

5. The interim CEO clown show and lack of an all hands immediately after speaks poorly of the board.

◧◩
875. seydor+sB[view] [source] [discussion] 2023-11-22 10:48:45
>>quietp+zA
The script of SV2 was given as training data to the AGI that has taken over.
◧◩◪
876. qwytw+GB[view] [source] [discussion] 2023-11-22 10:51:03
>>lordna+Ft
> Soviet Union

Or medieval Spain? About as likely... The Soviets weren't even able to get the factory floors clean enough to consistently manufacture the 8086 10 years after it was already outdated.

> maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".

Unfortunately not other system besides capitalism has enabled consistent technological progress for 200+ years. Turns out you need to pool money and resources to achieve things ..

◧◩◪◨⬒⬓
877. jampek+RB[view] [source] [discussion] 2023-11-22 10:52:47
>>jazzyj+kh
They have both explicated in their charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"

Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.

https://openai.com/charter

◧◩◪
878. jpgvm+VB[view] [source] [discussion] 2023-11-22 10:53:36
>>serial+Ev
Depends why/how they left.

I have seen firing a great/respected/natural leader engineer result in pretty much the whole engineering team just up and leaving.

replies(2): >>cables+wI >>serial+0P
◧◩◪◨⬒⬓
879. muraka+XB[view] [source] [discussion] 2023-11-22 10:53:49
>>stingr+ao
Wait what? She invested in a competitor? Do you have a source?
replies(1): >>ottero+zH
880. nickys+1C[view] [source] 2023-11-22 10:54:17
>>staran+(OP)
Satya and Sam committed securities fraud with their late Sunday “funding secured” ploy to protect the MSFT stock price. This was the obvious outcome. Sam had no intentions of actually going through with that and Satya was in no position to unilaterally commit to the type of funding that he was implying.

They lied to protect the stock. That should be illegal. In fact, it is illegal.

replies(3): >>comput+nC >>nmfish+fL >>Tracke+qN
◧◩◪◨
881. Wytwww+3C[view] [source] [discussion] 2023-11-22 10:54:25
>>smt88+Oj
> Apple has no by-laws committing itself to being an apple.

Does OpenAI have by-laws committing itself to being "open" (as in open source or at least their products freely and universally available)? I thought their goals were the complete opposite of that?

Unfortunately, in reality Facebook/Meta seems to be more open than "Open"AI.

replies(1): >>DebtDe+2F
◧◩◪
882. ChatGT+4C[view] [source] [discussion] 2023-11-22 10:54:28
>>Hamuko+X9
This is my take too, and I'm sure in the shadows their plan is to close off the APIs as much as possible and try use it for their own gain, not dissimilar to how Google deploy AI.

There is no way MS is going to let something like ChatGPT-5 build better software products than what they have for sale.

This is an assassination and I think Ilya and Co know it.

replies(2): >>cables+BJ >>scarfa+kf1
◧◩
883. dagaci+5C[view] [source] [discussion] 2023-11-22 10:54:40
>>Satam+0a
In this case the fate of OpenAI was in fact heavily controlled by its employees. They voted with their employment. Microsoft gave them an assured optional destination.
◧◩◪
884. m463+cC[view] [source] [discussion] 2023-11-22 10:55:26
>>yawnxy+Jz
what is the prompt?
replies(6): >>jampek+KD >>lvspif+Ff1 >>pauldd+Ai1 >>solard+Mn1 >>checky+Kw1 >>smegge+Hl2
◧◩◪◨⬒
885. olau+fC[view] [source] [discussion] 2023-11-22 10:55:55
>>Centig+gl
I'm not sure this is a correct characterization. Lex Fridman interviewed Elon Musk recently where Musk says that the "open" was supposed to stand for "open source".

To be fair, Fridman grilled Musk on his views today, also in the context of xAI, and he was less clear cut there, talking about the problem that there's actually very little source code, it's mostly about the data.

replies(1): >>cyanyd+hK
◧◩◪◨⬒⬓
886. voster+iC[view] [source] [discussion] 2023-11-22 10:56:04
>>imjons+Np
They had congressman Will Hurd on the board before. Govt-adjacent people on non-profits are common for many reasons - understanding regulatory requirements, access to people, but also actual "good" reasons like the fact that many people who work close to the state genuinely have good intentions on social good (whether you agree with their interpretation of it or not)
◧◩◪◨⬒⬓
887. Wytwww+kC[view] [source] [discussion] 2023-11-22 10:56:04
>>Angost+hp
Not unless we believe that OpenAI is somehow "special" and unique and the only company that is capable of building AGI(or whatever).
◧◩◪◨⬒⬓⬔
888. siva7+lC[view] [source] [discussion] 2023-11-22 10:56:09
>>lodovi+3u
I agree that we should usually assume good faith. Still, if a member knows she will loose the board seat soon and makes such a implicit statement to the leadership team there is reason to believe that she doesn't want both companies to be successful, at least one of those not.
◧◩
889. comput+nC[view] [source] [discussion] 2023-11-22 10:56:36
>>nickys+1C
I don't think this is actionable in anyway, even if what you say was shown unequivocally to be true.
replies(1): >>nickys+jD
890. superu+sC[view] [source] 2023-11-22 10:57:07
>>staran+(OP)
I find it interesting that for all the talk from OpenAI staff that it was all about the people, and from Satya that MS has all the rights and knowledge and can jumpstart their own branch at the turn of a dime, it seems getting control of OpenAI proper was a huge priority.

Given that Claude sucks so bad, and this week’s events, I’m guessing that the ChatGPT secret sauce is not as replicable as some might suggest.

replies(1): >>0xDEF+IP
◧◩◪◨
891. carlos+AC[view] [source] [discussion] 2023-11-22 10:58:30
>>serial+6v
These board members failed miserably in their intent.

Also, they will find a hard time joining any other board from now on.

They should have backed up the claims in the letter. They didn’t.

This means they didn’t have how to backup their claims. They didn’t think it through… extremely amateurish behavior.

replies(1): >>ZiiS+gD
◧◩◪◨⬒
892. disgru+CC[view] [source] [discussion] 2023-11-22 10:59:08
>>s_dev+Bg
> Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.

The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?

replies(2): >>logicc+oI >>kubect+6U
◧◩◪
893. have_f+GC[view] [source] [discussion] 2023-11-22 10:59:43
>>jbu+uz
Who is Gollum in this cut?
replies(4): >>tempaw+iF >>Dah00n+yF >>93po+OH >>keepam+pS
◧◩◪
894. stef25+PC[view] [source] [discussion] 2023-11-22 11:01:29
>>_fizz_+4h
Not sure if you're asking a serious question about MSF but it's interesting anyways - when these types of orgs are fundraising for a specific campaign, say Darfur, then they can NOT use that money for any other campaign, say for ex Turkey earthquake.

That's why they'll sometimes tell you to stop donating. That's here in EU at least (source is a relative who volunteers for such an org).

replies(1): >>_fizz_+MB1
◧◩
895. logicc+RC[view] [source] [discussion] 2023-11-22 11:01:38
>>Satam+0a
>Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

The OpenAI employees overwhelmingly rejected the groupthink of the Effective Altruism cult.

896. wilde+SC[view] [source] 2023-11-22 11:01:45
>>staran+(OP)
But “Sam Altman, Microsoft PM” would have been a much funnier outcome
◧◩◪
897. jack_r+TC[view] [source] [discussion] 2023-11-22 11:01:45
>>pug_mo+Cb
Exactly, society's Prefects rarely have the technical chops to do any of these things so they worm their way up the ranks of influence by networking. Once they're in position they can control by spreading fear and doing the things "for your own good"
◧◩◪◨⬒⬓
898. vinay_+0D[view] [source] [discussion] 2023-11-22 11:03:21
>>Satam+Nu
What happened over the weekend is a death and rebirth, of the board and the leaderships structure which will definitely ripple throughout the company in the coming days. It just doesn't align perfectly with how you want it to happen.
◧◩◪◨⬒
899. _Alger+1D[view] [source] [discussion] 2023-11-22 11:03:35
>>hef198+8k
If having an apple logo makes a company an apple, then Apple is in fact an apple
◧◩◪◨
900. voster+bD[view] [source] [discussion] 2023-11-22 11:04:25
>>ben_w+2l
That burns bridges with people in OpenAI

People underestimate the effects of social pressure, and losing social connections. Ilya voted for Sam's firing, but was quickly socially isolated as a result

That's not to say people didn't genuinely feel committed to Sam or his leadership. Just that they also took into account that the community is relatively small and people remember you and your actions

◧◩◪◨
901. mlrtim+fD[view] [source] [discussion] 2023-11-22 11:04:46
>>tonyed+bp
Agreed, take Hacker News for example. 99% of the articles are in a domain I don't have years of professional experience.

However, when that one article does come up, and I know the details inside/out , the comments sections are rife with bad assumptions, naïve comments and misinformation.

◧◩◪◨⬒
902. ZiiS+gD[view] [source] [discussion] 2023-11-22 11:04:50
>>carlos+AC
D'Angelo wasn't even removed from this board; this is simply not how failing works at this level.
replies(2): >>richar+VE >>iowemo+QP
◧◩◪◨
903. dagaci+hD[view] [source] [discussion] 2023-11-22 11:05:05
>>eviks+Rm
Clearly the board members did not think through even the immediate consequences. Kenobi: https://www.youtube.com/watch?v=iVBX7l2zgRw
◧◩◪◨⬒⬓
904. logicc+iD[view] [source] [discussion] 2023-11-22 11:05:34
>>g-b-r+Iq
"higher values" like trying to stop computers from saying the n-word?
replies(1): >>hutzli+0E
◧◩◪
905. nickys+jD[view] [source] [discussion] 2023-11-22 11:05:44
>>comput+nC
What do you mean? It would be conspiring to commit bank and wire fraud, the SEC can totally act on that if they want to.
906. corobo+rD[view] [source] 2023-11-22 11:06:46
>>staran+(OP)
The thing we should all take from this is that unions work :)
◧◩◪◨⬒⬓⬔
907. ZiiS+xD[view] [source] [discussion] 2023-11-22 11:08:23
>>sanxiy+oz
His deregulation of the banks suggests he heavily flavors free markets even when history has proved him very very wrong.
◧◩
908. justan+AD[view] [source] [discussion] 2023-11-22 11:08:42
>>doyoue+mc
Yeah people should really stand up for their peer more. Who knew that would work. Sam wouldn't have been back if it not for Brockman and several scientists standing up for him.
◧◩◪◨
909. epups+DD[view] [source] [discussion] 2023-11-22 11:09:14
>>astran+ij
Based on the downvotes I am getting and the links posted in the other comment, I think you are absolutely right. People are acting as if ChatGPT is AGI, or very close to it, therefore we have to solve all these catastrophic scenarios now.
910. martin+ID[view] [source] 2023-11-22 11:09:29
>>staran+(OP)
What a total shitshow. Amazing.
◧◩◪◨
911. dontup+JD[view] [source] [discussion] 2023-11-22 11:09:39
>>jackne+L7
Potentially even more impactful. Zuckerberg took the opportunity to eliminate his entire safety division under the cover of chaos - and they're the ones releasing weights.
◧◩◪◨
912. jampek+KD[view] [source] [discussion] 2023-11-22 11:09:54
>>m463+cC
"How to maximize profit and power of MSFT?"
◧◩◪
913. moonsu+OD[view] [source] [discussion] 2023-11-22 11:10:48
>>kmlevi+ek
> The truth is before this Sam was basically running circles around the board and doing whatever he wanted on the profit side- that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of open AI's practices.

Do you have a source for this?

replies(1): >>kmlevi+CG
◧◩◪◨⬒⬓⬔
914. hutzli+0E[view] [source] [discussion] 2023-11-22 11:13:15
>>logicc+iD
For some that is important, but more people consider the prevention of an AI monopoly to be more important here. See the original charter and the status quo with Microsoft taking it all.
915. j4yav+4E[view] [source] 2023-11-22 11:13:26
>>staran+(OP)
This has been a whirlwind, I feel like I've seen every single possible wrong outcome confidently predicted here, twice.
◧◩◪◨
916. cornho+5E[view] [source] [discussion] 2023-11-22 11:13:26
>>JumpCr+dd
I think you are downplaying the risk they took significantly, this could have easily gone the other way.

Stock options usually have a limited time window to exercise, depending on their strike price they could have been faced with raising a few hundred thousand in 30 days, to put into a company that has an uncertain future, or risk losing everything. The contracts are likely full of holes not in favor of the employees, and for participating in an action that attempted to bankrupt their employer there would have been years of litigation ahead before they would have seen any cent. Not because OpenAI would have been right to punish them, but because it could and the latent threat to do it is what keeps people in line.

917. mlindn+7E[view] [source] 2023-11-22 11:14:01
>>staran+(OP)
Well that's disappointing. They might as well disband the entire concept of the non-profit as it's clearly completely irrelevant and powerless.
918. gongag+8E[view] [source] 2023-11-22 11:14:10
>>staran+(OP)
Meta is looking like the Mother Teresa of large corp LLM providers which is crazy to say out loud (; ꒪ö꒪)
◧◩◪◨⬒
919. master+9E[view] [source] [discussion] 2023-11-22 11:14:27
>>khazho+Mm
> Clearly they did not betray employees or investors, since they largely sided with Sam

Just because they sided with Altman doesn't necessarily mean they are aligned. There could be a lack of information on the employee/investor side.

◧◩◪◨
920. voster+cE[view] [source] [discussion] 2023-11-22 11:14:35
>>PeterS+Qc
This is the sort of thinking that really distracts and harms the discussion

It's couched on accusing people of intentions. It focuses on ad hominem, rather than the ideas

I reckon most people agree that we should aim for a middle ground of scrutiny and making progress. That can only be achieved by having different opinions balancing each other out

Generalising one group of people does not achieve that

◧◩◪◨
921. monosc+dE[view] [source] [discussion] 2023-11-22 11:14:46
>>Kepler+fj
It's actually one of the most spectacular failures in business history, but we don't talk much about it
◧◩◪◨
922. alephn+fE[view] [source] [discussion] 2023-11-22 11:15:06
>>throwu+rz
> Morale is high and they’re ready to fight for what they believe in.

Money.

◧◩◪◨⬒⬓
923. mlrtim+hE[view] [source] [discussion] 2023-11-22 11:15:19
>>didntc+Lp
Can this be compared to "Think of the children" responses to other technologoy advances that certain groups want to slow down or prohibit?
◧◩◪◨⬒⬓
924. doktri+kE[view] [source] [discussion] 2023-11-22 11:15:54
>>stingr+ao
> obviously it’s also in her best interest to see OpenAI destroyed

Do you feel the same way about Reed Hastings serving on Facebooks BoD, or Eric Schmidt on Apples? How about Larry Ellison at Tesla?

These are just the lowest of hanging fruit, i.e literal chief executives and founders. If we extend the criteria for ethical compromise to include every board members investment portfolio I imagine quite a few more “obvious” conflicts will emerge.

replies(1): >>svnt+SG
925. cbeach+vE[view] [source] 2023-11-22 11:17:57
>>staran+(OP)
Does anyone know which faction (e/acc vs decels) the new board members Bret Taylor and Larry Summers will be on?

One thing IS clear at this point - their political alignment:

* Taylor a significant donor to Joe Biden ($713,637 in 2020): https://nypost.com/2022/04/26/twitter-board-members-gave-tho...

* Summers is a former Democrat Treasury Secretary who has shifted leftwards with age: https://www.newstatesman.com/the-weekend-interview/2023/03/w...

◧◩
926. andy99+yE[view] [source] [discussion] 2023-11-22 11:18:11
>>Satam+0a
Whatever OpenAI started as, a week ago it was a company with the best general purpose LLM, more on the way, and consumer+business products with millions of users. And they were still investing very heavily in research. I'm glad that company may survive. If there's room in the world for a more disruptive research focused AI company that can find sustainable funding, even better.
replies(1): >>cyanyd+qK
◧◩◪
927. squigz+FE[view] [source] [discussion] 2023-11-22 11:19:03
>>random+Yf
> The board acted like the most incompetent group of individuals who've even handed any responsibility.

This is overly dramatic, but I suppose that's par for this round.

> none of this outrage would have taken place.

Yeah... I highly doubt this, personally. I'm sure the outrage would have been similar, as HN's current favorite CEO was fired.

replies(2): >>pas+TO >>SilasX+Wf2
◧◩◪◨
928. PeterS+QE[view] [source] [discussion] 2023-11-22 11:21:19
>>PeterS+Qc
It is strange (but in hindsight understandable) that people interpreted my statement as a "pro-acceleration" or even "anti-board" position.

As you can tell from previous statements I posted here, my position is that while there are undeniable potential risks to this technology, the least harmfull way to progress is 100% full public, free and universal release. The by far bigger risk is to create a society where only select organizations have access to the technology.

If you truly believe in the systemic transformation of AI, release everything, post the torrents, we'll figure out how to run it.

929. davidt+UE[view] [source] 2023-11-22 11:21:57
>>staran+(OP)
The most interesting thing here is not the cult of personality battle between board and CEO. Rather, it's that these teams have managed to ship consumer AI that has a liminal, asymptotic edge where the smart kids can manipulate it into doing emergent things that it was not designed to do. That is, many of the outcomes of in-context learning could not be predicted at design time and they are, in fact, mind-blowing, magical, and likely not safe for consumption by those who believe that the machines are anywhere near the spectrum from consciousness to sentience.
◧◩◪◨⬒⬓
930. richar+VE[view] [source] [discussion] 2023-11-22 11:21:58
>>ZiiS+gD
Yet
931. dizzyd+XE[view] [source] 2023-11-22 11:22:03
>>staran+(OP)
D'Angelo is still there... there goes that theory.
◧◩◪◨⬒
932. DebtDe+2F[view] [source] [discussion] 2023-11-22 11:23:17
>>Wytwww+3C
This is spot on. Open was the wrong word to choose for their name, and in the technology space means nearly the opposite of the charter's intention. BeneficialAI would have been more "aligned" with their claimed mission. They have made their position quite clear - the creation of an AGI that is safe and benefits all humanity requires a closed process that limits who can have access to it. I understand their theoretical concerns, but the desire for a "benevolent dictator" goes back to at least Plato and always ends in tears.
◧◩
933. logicc+bF[view] [source] [discussion] 2023-11-22 11:24:22
>>eclect+79
>Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support

Google's full of top researchers and scientists who are at least as good as those at OpenAI; Sam's the reason OpenAI has a successful, useful product (GPT4), while Google has the far less effective, more lobotomized Bard.

◧◩◪
934. yodsan+cF[view] [source] [discussion] 2023-11-22 11:24:27
>>polite+Yj
> different set of information

and different incentives.

◧◩
935. gumbal+eF[view] [source] [discussion] 2023-11-22 11:24:37
>>eclect+79
> What has he done in life and/or AI to deserve so much respect and admiration?

He’s serving the right people by doing their bidding.

◧◩◪◨
936. tempaw+iF[view] [source] [discussion] 2023-11-22 11:24:51
>>have_f+GC
Elon
replies(1): >>yeck+ys1
◧◩
937. belter+mF[view] [source] [discussion] 2023-11-22 11:25:29
>>Satam+0a
Outcome? You mean OpenAI wakes up with no memories of the night before, finding their suite trashed, a tiger in the bathroom, a baby in the closet, and the groom missing and the story will end here?

I just renewed by HN subscription to be able to see Season 2!

◧◩◪
938. Joeri+uF[view] [source] [discussion] 2023-11-22 11:25:56
>>jbu+uz
OpenAI's logo is literally a ring made out of chain links...
939. DebtDe+vF[view] [source] 2023-11-22 11:26:26
>>staran+(OP)
>We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

Is Ilya off the board then?

Why is Adam still on?

Brett and Larry are good choices, but they need to get that board up to 10 or so people representing a balance of perspectives and interests very quickly.

◧◩◪◨⬒⬓⬔
940. dontup+xF[view] [source] [discussion] 2023-11-22 11:26:42
>>denlek+No
This, if anything people really don't like the verbose moralizing and anti-terseness of it.

Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.

◧◩◪◨
941. Dah00n+yF[view] [source] [discussion] 2023-11-22 11:26:51
>>have_f+GC
I could see Gollum run around a stage yelling Developers! Developers! Developers! no problem.
replies(1): >>colejo+VH
◧◩◪◨⬒⬓
942. Philpa+zF[view] [source] [discussion] 2023-11-22 11:26:58
>>stingr+ao
Uhhh, are you sure about that? She wrote a paper that praised Anthropic’s approach to safety, but as far as I’m aware she’s not invested in them.

Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?

◧◩◪◨⬒
943. mlindn+LF[view] [source] [discussion] 2023-11-22 11:28:11
>>krisof+5q
> without ever deviating into under age material

So is this a "there should never be a Vladimir Nabokov in the form of AI allowed to exist"? When people get into saying AI's shouldn't be allowed to produce "X" you're also saying "AI's shouldn't be allowed to have creative vision to engage in sensitive subjects without sounding condescending". "The future should only be filled with very bland and non-offensive characters in fiction."

replies(1): >>krisof+fg1
◧◩◪◨⬒⬓
944. kmlevi+RF[view] [source] [discussion] 2023-11-22 11:29:03
>>eviks+kt
I don't know if Adam D'Angelo would agree with you, because he had veto power over these selections and he wanted Larry Summers on the board himself.
◧◩◪
945. achron+UF[view] [source] [discussion] 2023-11-22 11:29:14
>>polite+Yj
No, if they had vastly different information, and if it was on the right side of their own stated purpose & values, they would have behaved very differently. This kind of equivocation hinders the way more important questions such as: just what the heck is Larry Summers doing on that board?
replies(9): >>vasco+4H >>dontup+GI >>cyanyd+4J >>hobofa+bJ >>shmatt+4K >>383210+zL >>T-A+6Q >>mrangl+fQ >>Burnin+hY
◧◩
946. s1arti+0G[view] [source] [discussion] 2023-11-22 11:29:44
>>eclect+79
Altman seems to be a extraordinary leader, motivator, and strategizer. This itself is clear by the fact that 90% of the company was willing to walk out over his retention. Just think about that for minute.
replies(4): >>csunbi+KI >>asimpl+5J >>tr888+LJ >>iterat+dX
◧◩◪◨⬒⬓
947. dontup+1G[view] [source] [discussion] 2023-11-22 11:29:49
>>cft+Ik
This assumes the only way to use LLMs effectively is to have a monolith model that does everything from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT4 is a mixture of experts (maybe 8-cross)

The efficiency of finetuned models is quite, quite a bit improved at the cost of giving up the rest of the world to do specific things, and disk space to have a few dozen local finetunes (or even hundreds+ for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for monomodels

replies(1): >>cft+eM
◧◩◪◨⬒
948. fallin+5G[view] [source] [discussion] 2023-11-22 11:30:20
>>darkwa+Ij
Why would anyone say that? The last 30 years of tech have given them less and less control. Why would LLMs be any different?
◧◩◪◨
949. dontup+bG[view] [source] [discussion] 2023-11-22 11:31:08
>>g42gre+D5
>If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway.

Languages other than English exist, and RLHF at least does work in any language you make the request in. regex/nlp, not so much.

replies(1): >>g42gre+WF1
◧◩
950. yodsan+fG[view] [source] [discussion] 2023-11-22 11:32:00
>>eclect+79
Human nature, some people do love charismatic leaders. It's hard to comprehend for those of us with a more anarchist nature.

That being said, I have no idea of this guy's contributions. It's easy to dismiss entrepreneur/managers because they're not top scientists, but they also have very rare skills and without them, projects don't get done.

◧◩◪◨
951. dontup+mG[view] [source] [discussion] 2023-11-22 11:33:32
>>Havoc+sn
It was a clever parallel to deep blue, especially as they picked DotA which was always the "harder" game in its genre.

Next up would be an EVE corp run entirely by LLMs

◧◩◪
952. thepti+nG[view] [source] [discussion] 2023-11-22 11:33:38
>>Hamuko+X9
A board seat would usually be a bare minimum for their existing 49% investment.
◧◩◪◨
953. achron+pG[view] [source] [discussion] 2023-11-22 11:33:39
>>murbar+fw
Or you'd want to thoroughly investigate this so-called voting.

Or that said apple pie was essential to their survival.

954. al_be_+rG[view] [source] 2023-11-22 11:33:46
>>staran+(OP)
Losing the CEO must not push significant number of your staff to throw hissy fits and jump ship - it doesn't instill confidence in investors, partners, and crucially customers.

as this event turned into a farce, it's evident that neither the company nor it's key investors accounted much for the "bus factor/problem" i.e loosing a key-person threatened to destroy the whole enterprise.

for me this a failure in Managing Risk 101.

◧◩◪◨
955. gexla+wG[view] [source] [discussion] 2023-11-22 11:34:37
>>Satam+yp
My understanding is that the non-profit created the for-profit so that they could offer compensation which would be typical for SV start-ups. Then the board essentially broke the for-profit by removing the SV CEO and putting the "payday" which would have valued the company at 80 billion in jeopardy. The two sides weren't aligned, and they need to decide which company they want to be. Maybe they should have removed Sam before MS came in with their big investment. Or maybe they want to have their cake and eat it too.
◧◩◪◨
956. kmlevi+CG[view] [source] [discussion] 2023-11-22 11:36:13
>>moonsu+OD
New York Times. He was "reprimanding" Toner, a board member, for writing an article critical of open AI.

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

Getting his way: The Wall Street Journal article. They said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.

https://archive.is/20231122033417/https://www.wsj.com/tech/a...

Bottom line he had a lot more power over the board then than he will now.

957. causi+EG[view] [source] 2023-11-22 11:36:26
>>staran+(OP)
Kicking Sam out was a bad move. Begging him back is worse. Instead of having an OpenAI whose vision you disagree with, now we have an OpenAI with no vision at all that's simply blown back and forth.
◧◩◪
958. Draike+GG[view] [source] [discussion] 2023-11-22 11:36:36
>>serial+Ev
Interesting. I had the opposite experience. All of the product suite having no idea about what the product even is, where it should go, making bad decisions over and over, excusing their bad choices behind "data" and finally, as usual, failing upwards eventually moving to bigger startups.

I have yet to find a product person that was not involved in the inception of the idea that is actually good (hell, even some founders fail spectacularly here).

Perhaps I'm simply unlucky.

replies(2): >>cables+aI >>serial+wP
◧◩◪◨⬒⬓⬔
959. pevey+JG[view] [source] [discussion] 2023-11-22 11:36:50
>>sanxiy+oz
Larry Summers has a track record of not believing in market failures, just market opportunities for private interests. Economists vary vastly in their belief systems, and economics is more politics than science, no matter how much math they try to use to distract from this.
◧◩
960. rafael+MG[view] [source] [discussion] 2023-11-22 11:36:56
>>Satam+0a
Which critical thinking could they exercise if no believable reasons were given for this whole mess? Maybe it's you who need to more carefully assess this situation.
961. fredgr+NG[view] [source] 2023-11-22 11:37:10
>>staran+(OP)
MS and OpenAI did not win here, but one of their competitors did...whoops.

Why did I say that? Look at the product release by the competitors these past few days. 2nd, Sam pushing for AI chips implies that chatGPT's future breakthroughs are hardware bounded. Hence, the road to AGI is not through chatGPT.

◧◩◪◨⬒
962. dontup+PG[view] [source] [discussion] 2023-11-22 11:37:42
>>Terrif+d7
>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)

What do you mean? It recommends things that it thinks people will like.

Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.

They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.

The best they can hope for as an org is to live as long as they can as best as they can.

I think Sam's 100B silicon gambit in the middle east (quite curious because this is probably something the United State Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.

◧◩◪◨⬒⬓⬔
963. svnt+SG[view] [source] [discussion] 2023-11-22 11:38:03
>>doktri+kE
How does Netflix compete with Facebook?

This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.

https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...

Oracle is going to get into EVs?

You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.

replies(1): >>doktri+fJ
◧◩◪◨⬒
964. DebtDe+WG[view] [source] [discussion] 2023-11-22 11:38:43
>>Sebb76+Vv
This just underscores the absurdity of their corporate structure. AI research requires expensive researchers and expensive GPUs. Investors funding the research program don't want to be beholden to some non-profit parent organization run by a small board of nobodies who think their position gives them the power to destroy the whole thing if they believe it's straying from its utopian mission.
replies(1): >>ethanb+aJ
◧◩◪◨⬒
965. Ludwig+3H[view] [source] [discussion] 2023-11-22 11:39:53
>>siva7+El
The only OpenAI employees who resigned in protest are the employees that were against Sam Altman. That’s how Anthropic appeared.
replies(1): >>sander+qI
◧◩◪◨
966. vasco+4H[view] [source] [discussion] 2023-11-22 11:39:53
>>achron+UF
I think this is a good question. One should look at what actually happened in practice. What was the previous board, what is the current board. For the leadership team, what are the changes? Additionally, was information revealed about who calls the shots which can inform who will drive future decisions? Anything else about the inbetweens to me is smoke and mirrors.
◧◩◪◨⬒
967. cables+6H[view] [source] [discussion] 2023-11-22 11:40:16
>>serial+nw
I volunteer as tribute.

I don't have much in the way of credentials (I took one class on A.I. in college and have only dabbled in it since, and I work on systems that don't need to scale anywhere near as much as ChatGPT does, and while I've been an early startup employee a couple of times I've never run a company), but based on the past week I think I'd do a better job, and can fill in the gaps as best as I can after the fact.

And I don't have any conflicts of interest. I'm a total outsider, I don't have any of that shit you mentioned.

So yeah, vote for me, or whatever.

Anyway my point is I'm sure there's actually quite a few people who could do a likely a better job and don't have a conflict of interest (at least not one so obvious as investing in a direct competitor), they're just not already part of the Elite circles that would pretty much be necessary to even get on these people's radar in order to be considered in the first place. I don't really mean me, I'm sure there are other better candidates.

But then they wouldn't have the cachet of 'Oh, that guy co-founded Twitch. That for-profit company is successful, that must mean he'd do a good job! (at running a non-profit company that's actively trying to bring about AGI that will probably simultaneously benefit and hurt the lives of millions of people)'.

◧◩◪
968. ethanb+aH[view] [source] [discussion] 2023-11-22 11:41:37
>>jatins+yr
OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired. I know that having a public explanation would be useful for the plot development of everyone’s favorite little soap opera, but it makes pretty much zero sense and doesn’t lend credence to any position whatsoever.
replies(5): >>iowemo+JP >>crypto+5Q >>Bayaz+kR >>Aurorn+iY >>ulizzl+AY
◧◩◪◨⬒
969. logicc+kH[view] [source] [discussion] 2023-11-22 11:43:56
>>lordna+vv
>If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?

The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.

replies(1): >>freedo+VX1
◧◩◪
970. smegge+lH[view] [source] [discussion] 2023-11-22 11:44:20
>>bambax+XA
Who know but they will probably change their minds again before the holiday and CEO musical chairs game will continue
◧◩
971. 93po+nH[view] [source] [discussion] 2023-11-22 11:44:30
>>eclect+79
Sam is crazy accomplished and it’s easy to search why
◧◩◪◨⬒
972. Ludwig+xH[view] [source] [discussion] 2023-11-22 11:44:50
>>ssnist+nt
Why not? Maybe the board was just too late to the party. Maybe the employees that wouldn’t side with Sam have already left[1], and the board was just too late to realise that. And maybe all the employees who are still at OpenAI mostly care about their equity-like instruments.

[1] https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e

◧◩◪◨⬒⬓⬔
973. ottero+zH[view] [source] [discussion] 2023-11-22 11:45:14
>>muraka+XB
One source might be DuckDuckGo. It's a privacy-focused alternative to Google, which is great when researching "unusual" topics.
replies(3): >>muraka+uI >>dontup+lJ >>free65+UM
◧◩◪◨⬒
974. phero_+AH[view] [source] [discussion] 2023-11-22 11:45:19
>>grafta+Wz
At this point I tend to believe that big company slogans mean the opposite of what the words say.

Like I would become immediately suspicious if food packaging had “real food” written on it.

replies(1): >>timacl+2Z
◧◩◪
975. 93po+HH[view] [source] [discussion] 2023-11-22 11:45:55
>>dacryn+sg
Story telling is the fabric of society in general. It’s why paper money works.
◧◩◪
976. ottero+NH[view] [source] [discussion] 2023-11-22 11:47:01
>>hdivid+Jw
> This, 100%.

When do new HN users get the ability to downvote?

replies(3): >>bryanc+II >>deely3+PI >>qup+781
◧◩◪◨
977. 93po+OH[view] [source] [discussion] 2023-11-22 11:47:06
>>have_f+GC
Satya
◧◩◪◨
978. 373947+SH[view] [source] [discussion] 2023-11-22 11:47:24
>>throwu+rz
I don't think Ilya should get off so easily. Him not havinh a say in the formation of the new board speaks volumes about his role in things if you ask me. I hope people keep saying his name too so nobody forgets his place in this mess.
replies(1): >>FireBe+7N1
◧◩
979. gabrie+TH[view] [source] [discussion] 2023-11-22 11:47:32
>>eclect+79
> What has he done in life and/or AI to deserve so much respect and admiration? Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support?

This has been the case for all achievement of all major companies, the CEO or whoever is on top gets the credit for all their employee's work. Why would be different for OpenAI?

replies(1): >>giamma+QK
◧◩◪◨⬒
980. colejo+VH[view] [source] [discussion] 2023-11-22 11:47:38
>>Dah00n+yF
Steve Ballmer is Gollum?
replies(1): >>Dah00n+mI
◧◩
981. sensan+2I[view] [source] [discussion] 2023-11-22 11:48:50
>>eclect+79
I wouldn't be surprised in the slightest if Sam and his other ultra-rich buddies like Satya had their fingers deep in the pockets of all the tech journalists that immediately ran to his defense and sensationalized everything. Every single news source posted on HN read like pure shilling for the Ponzi sch- uh, I mean Worldcoin guy and hailing him as some sort of AI savant.
replies(4): >>egKYzy+iJ >>torgin+bL >>blitza+YV >>Perz1v+EW
◧◩
982. coldte+3I[view] [source] [discussion] 2023-11-22 11:48:55
>>Satam+0a
>OpenAI does not have in its DNA to win, they're too short-sighted and reactive.

What does that even mean?

In any case, it's not OpenAI, it's Microsoft, and it has a long history of winning and bouncing back.

◧◩◪◨⬒
983. fallin+6I[view] [source] [discussion] 2023-11-22 11:49:18
>>Raston+av
I have met a lot of people who go around talking about high minded principles an "the greater good" and a lot of people who are transparently self interested. I much preferred the latter. Never believed a word out of the mouths of those busybodies pretending to act in my interest and not theirs. They don't want to limit their own access to the tech. Only yours.
◧◩◪◨
984. cables+aI[view] [source] [discussion] 2023-11-22 11:49:45
>>Draike+GG
At a consulting firm I worked with a product guy who I thought was very good, and was on the project pretty much from the beginning (maybe the beginning, not sure. He predated me by well over a year at least). He was extremely knowledgeable on the business side and their needs and spent a lot of time communicating with them to get a good feel of where the product needed to go.

But he was also technical enough to have a pretty good feel for the complexity of tasks, and would sometimes jump in to help figure out some docker configuration issues or whatever problems we were having (mostly devops related) so the devs could focus on working on the application code. We were also a pretty small team, only a few developers, so that was beneficial.

He did such a good job that the business eventually reached out to him and hired him directly. He's now head of two of their product lines (one of them being the product I worked on).

But that's pretty much it. I can't think of any other product people I could say such positive things about.

replies(1): >>cornel+YL
◧◩
985. martin+bI[view] [source] [discussion] 2023-11-22 11:49:45
>>Satam+0a
It's not about critical thinking: the employees were about to sell up to $1B of shares to thrive capital. This debacle has derailed that.
◧◩◪◨
986. vaxman+dI[view] [source] [discussion] 2023-11-22 11:49:59
>>kitsun+ty
It could be hard to do that while paying a penalty to FTB and IRS for what they’re suspected to have done (in allowing a for-profit subsidiary to influence an NPO parent) or dealing with SEC and the state courts over any fiduciary breach allegations related to the published stories. [ Nadella is an OG genius because his company is now shielded from all of that drama as it plays out, no matter the outcome. He can take the time to plan for a soft landing at MS for any OpenAI workers (if/when they need it) and/or to begin duplicating their efforts “just in case.” Heard coming from the HQ parking lot in Redmond https://youtu.be/GGXzlRoNtHU ]

Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/

replies(1): >>erosen+Ed1
◧◩◪◨
987. coldte+fI[view] [source] [discussion] 2023-11-22 11:50:11
>>kitsun+ty
Those mission statements are a dime a dozen. A junkie's promise has more value.
replies(1): >>im3w1l+hp1
◧◩
988. buro9+iI[view] [source] [discussion] 2023-11-22 11:50:50
>>Satam+0a
in the end, maybe Sam was the instigator, the board tried to defend (and failed) and what we just witnessed from afar was just a power play to change the structure of OpenAI (or at least the outcome for Sam and many others) towards profit rather than non-profit.

we'll all likely never know what truly happened, but it's a shame that the board has lost their last remnant of some diversity and at the moment appears to be composed of rich Western white males... even if they rushed for profit, I'd have more faith in the potential upside what could be a sea change in the World, if those involved reflected more experiences than are currently gathered at that table.

◧◩◪◨⬒⬓
989. Dah00n+mI[view] [source] [discussion] 2023-11-22 11:51:05
>>colejo+VH
Eh, well, that wasn't what I meant exactly, but I can see how it could be read that way..
◧◩◪◨⬒⬓
990. logicc+oI[view] [source] [discussion] 2023-11-22 11:51:13
>>disgru+CC
>The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?

The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.

◧◩◪◨⬒⬓
991. sander+qI[view] [source] [discussion] 2023-11-22 11:51:38
>>Ludwig+3H
And it seems like they were right that the for-profit part of the company had become out of control, in the literal sense that we've seen through this episode that it could not be controlled.
replies(1): >>cyanyd+CJ
◧◩
992. jmyeet+rI[view] [source] [discussion] 2023-11-22 11:51:43
>>Satam+0a
> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

I'd say the lack of a narrative from the board, general incompetence with how it was handled, the employees quitting and the employee letter played their parts too.

But even if it was Microsoft who made this happen: that's what happens when you have a major investor. If you don't want their influence, don't take their money.

◧◩◪◨⬒⬓⬔⧯▣
993. Frustr+sI[view] [source] [discussion] 2023-11-22 11:51:51
>>karmas+yv
Can't critical thinking also include : "I'm about to get a 10mil pay day, hmmm, this is crazy situation, let me think critically how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75".
replies(2): >>golden+wK >>belter+FM
◧◩◪
994. ethanb+tI[view] [source] [discussion] 2023-11-22 11:51:55
>>madeof+Ku
They fired him (you don’t know the backstory) and published a press release and then Sam was seen back in the offices. Prior to the reinstatement (today), there was nothing except HN hysteria and media conjecture that made the board look extremely unstable.
replies(1): >>madeof+jJ
◧◩◪◨⬒⬓⬔⧯
995. muraka+uI[view] [source] [discussion] 2023-11-22 11:51:59
>>ottero+zH
I couldn't find any source on her investing in any AI companies. If it's true (and not hidden), I'm really surprised that major news publications aren't covering it.
◧◩◪◨
996. cables+wI[view] [source] [discussion] 2023-11-22 11:52:14
>>jpgvm+VB
No see, it doesn't matter, engineers are all cogs and easily replaceable. I'm sure they just dialed the engineer center and ordered a few replacements and they started 24 hours later and were doing just as good of a job the next day. /s
◧◩◪
997. cables+BI[view] [source] [discussion] 2023-11-22 11:53:28
>>dacryn+sg
Half of being a good CEO is telling a good story, so that's not surprising.
replies(1): >>matwoo+kO
◧◩◪
998. ovalit+FI[view] [source] [discussion] 2023-11-22 11:53:50
>>bambax+hk
This event was more than just a costly signal. The costly signal would have been "stop doing what you're doing or we'll remove you as ceo" and then not doing that.

But they did move forward with their threat and removed Sam as CEO with great reputational harm to the company. And now the board has been changed, with one less ally to Sam (Brockman no longer chairing the board). The move may not have ended up with the expected results, but this was much more than just a costly signal.

◧◩◪◨
999. dontup+GI[view] [source] [discussion] 2023-11-22 11:53:51
>>achron+UF
>just what the heck is Larry Summers doing on that board?

1. Did you really think the feds wouldn't be involved?

AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.

2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.

The East Coast runs the joint. The west coast just does the (publicly) facing tech stuff and takes the heat from the public

replies(2): >>chucke+gN >>jddj+oN
1000. roody1+HI[view] [source] 2023-11-22 11:53:55
>>staran+(OP)
“The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.”

Not looking good for the “Open” part of OpenAI.

replies(1): >>ottero+rJ
◧◩◪◨
1001. bryanc+II[view] [source] [discussion] 2023-11-22 11:54:06
>>ottero+NH
501 karma.
◧◩◪
1002. csunbi+KI[view] [source] [discussion] 2023-11-22 11:54:26
>>s1arti+0G
No, the 90% of the employees were scared that their million $ salaries are going away along with Sam Altman.
replies(3): >>__loam+OM >>Jansjo+gQ >>s1arti+972
◧◩◪◨
1003. deely3+PI[view] [source] [discussion] 2023-11-22 11:55:02
>>ottero+NH
Depends on karma and other hiddens parameters.
◧◩
1004. cyanyd+RI[view] [source] [discussion] 2023-11-22 11:55:06
>>Satam+0a
It definitely seems like another branch on the IT savior complex, where the prior branch was crypto.
◧◩◪◨⬒⬓
1005. fallin+TI[view] [source] [discussion] 2023-11-22 11:55:26
>>nopins+Xr
> In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.

Is it? The hypothetical technology that allows someone to create an execute a bio weapon must have an understanding of molecular machinery that can also be uses to create a treatment.

replies(1): >>Number+Wt3
◧◩◪◨
1006. cyanyd+4J[view] [source] [discussion] 2023-11-22 11:56:35
>>achron+UF
I assume larry summers is there to ensure the proper bi-partisan choices made by whats clearly now an _business_ product and not a product for humanity.

Which is utterly scary.

◧◩◪
1007. asimpl+5J[view] [source] [discussion] 2023-11-22 11:56:38
>>s1arti+0G
There’s also the alternative explanation that they feel their financial situation is improved by him being there.
replies(2): >>cyanyd+2L >>gizmo+tL
◧◩◪◨⬒⬓
1008. ethanb+aJ[view] [source] [discussion] 2023-11-22 11:57:02
>>DebtDe+WG
They don’t “think” that. It does do that, and it does it by design exactly because as you approach a technology as powerful as AI there will be strong commercial incentives to capture its value creation.

Gee wiz, almost… exactly like what is happening?

◧◩◪◨
1009. hobofa+bJ[view] [source] [discussion] 2023-11-22 11:57:05
>>achron+UF
> of their own stated purpose & values

You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and I think nobody took seriously anymore for years.

From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI". The official values of OpenAI were never "their own".

replies(2): >>Wander+DL >>bnralt+cR
◧◩◪◨⬒⬓⬔⧯
1010. doktri+fJ[view] [source] [discussion] 2023-11-22 11:57:40
>>svnt+SG
> How does Netflix compete with Facebook?

By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.

But you don’t have to just take my word for it :

> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”

https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...

> This is what happened with Eric Schmidt on Apple’s board

Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.

replies(2): >>Jumpin+CL >>svnt+1N1
◧◩◪◨
1011. eddtri+gJ[view] [source] [discussion] 2023-11-22 11:57:52
>>murbar+fw
I think it makes sense

Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and have a place at OpenAI if it continues under Sam, or don’t sign and potentially lose your role at OpenAI if Sam stays and lose a bunch of money if Sam leaves and OpenAI fails.

There’s no perks to not signing.

replies(1): >>_heimd+iN
◧◩◪
1012. egKYzy+iJ[view] [source] [discussion] 2023-11-22 11:57:57
>>sensan+2I
This reads like a far-fetched conspiracy theory
replies(4): >>cyanyd+SK >>objekt+cL >>fakeda+eL >>iterat+PW
◧◩◪◨
1013. madeof+jJ[view] [source] [discussion] 2023-11-22 11:57:59
>>ethanb+tI
??? They fired him on friday with a statement knifing him in the back, un-fired him on tuesday, and now the board is resigning? How is that not erratic and unstable?
replies(1): >>ethanb+aK
◧◩◪◨⬒⬓⬔⧯
1014. dontup+lJ[view] [source] [discussion] 2023-11-22 11:58:17
>>ottero+zH
>which is great when researching "unusual" topics.

Yandex is for Porn. What is DDG for?

◧◩◪◨⬒
1015. cyanyd+mJ[view] [source] [discussion] 2023-11-22 11:58:18
>>karmas+1m
oh gosh, no, no no no.

Doing AI for ChatGPT just means you know a single model really well.

Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.

It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.

◧◩
1016. ottero+rJ[view] [source] [discussion] 2023-11-22 11:58:59
>>roody1+HI
Could have said the same thing once Microsoft got involved.
◧◩
1017. mrkram+uJ[view] [source] [discussion] 2023-11-22 11:59:06
>>Satam+0a
>Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

Why was his role as a CEO even challenged?

>It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.

Always remember; Google wasn't the first search engine nor iPhone the first smartphone. First-movers bring innovation and trend not market dominance.

◧◩◪
1018. flappy+wJ[view] [source] [discussion] 2023-11-22 11:59:17
>>upupup+2p
https://twitter.com/emilychangtv/status/1727228431396704557

He was instrumental; threatened resignation unless the old board could provide evidence of wrongdoing

replies(1): >>halfma+102
◧◩◪◨
1019. bottle+zJ[view] [source] [discussion] 2023-11-22 11:59:28
>>kitsun+ty
If that were true they’d be a not-for-profit
◧◩◪◨
1020. cables+BJ[view] [source] [discussion] 2023-11-22 11:59:33
>>ChatGT+4C
It's not assassination. It's a Princess Bride Battle of Wits, that they initiated and put the poison into one of the chalices themselves, and then thought so highly of their intellect they ended up choosing and drinking the chalice that had the poison in it.

Corresponding Princess Bride scene: https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7

replies(1): >>gcanyo+f32
◧◩◪◨⬒⬓⬔
1021. cyanyd+CJ[view] [source] [discussion] 2023-11-22 11:59:33
>>sander+qI
Ands the evidence is now that OpenAI is a business 2 business product and not a attempt to keep AI doing anything but satiating anything Microsoft wants.
◧◩◪◨
1022. rvba+DJ[view] [source] [discussion] 2023-11-22 11:59:40
>>kitsun+ty
Most employees of any organization dont give a fuck about the vision or mission (often they dont even know it) - and are there just for the money.
replies(2): >>j_maff+3Q >>Doughn+TT
1023. garris+EJ[view] [source] 2023-11-22 11:59:46
>>staran+(OP)
If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here. I don't expect the IRS to be a fan of this arrangement.
replies(18): >>flagra+mS >>baking+G91 >>brooks+Ig1 >>Tigeri+Mg1 >>pauldd+Jh1 >>stikit+Ki1 >>voxic1+dr1 >>mwatts+Nt1 >>pc86+fx1 >>_b+FF1 >>bradle+hG1 >>hacker+JI1 >>boh+MK1 >>jklein+922 >>zeroha+wc2 >>august+0T2 >>627467+V43 >>mattmc+IB3
◧◩◪◨⬒⬓
1024. DebtDe+GJ[view] [source] [discussion] 2023-11-22 11:59:56
>>nopins+Zj
The Tokyo Subway attack you referenced above happened in 1995 and didn't require AI. The information required can be found on the internet or in college textbooks. I suppose an "AI" in the sense of a chatbot can make it easier by summarizing these sources, but no one sufficiently motivated (and evil) would need that technology to do it.
◧◩◪◨
1025. ethanb+IJ[view] [source] [discussion] 2023-11-22 12:00:16
>>mijoha+Cz
Good lord: it’s a private company. As a general matter of course it’s inadvisable to comment on specifics of why someone is fired. The lack of a thing that pretty much never happens anyway (public comment) is just harmful to your soap opera, not to the potential legitimacy of the action.
replies(2): >>mijoha+EL >>danger+EO
◧◩◪
1026. tr888+LJ[view] [source] [discussion] 2023-11-22 12:00:39
>>s1arti+0G
I thought about it for a minute. I came to the conclusion that OpenAI would have likely tanked (perhaps even within days) had Altman not returned to maintain the status quo, and engineers didn't want to be out of work and left with worthless stock.
◧◩◪
1027. cyanyd+SJ[view] [source] [discussion] 2023-11-22 12:01:22
>>kmlevi+ek
Eh, Larry Summers is on this board. That means they're now going to protect business interests.

OpenAI is now just a tool used by Businesses. And they dont have a good history of benefitting humanity recently.

replies(1): >>kofejn+1b1
◧◩◪◨
1028. shmatt+4K[view] [source] [discussion] 2023-11-22 12:02:57
>>achron+UF
He’s a white male replacing a female board member. Which is probably what they wanted all along
replies(1): >>dbspin+xK
◧◩◪◨⬒
1029. ethanb+aK[view] [source] [discussion] 2023-11-22 12:03:28
>>madeof+jJ
Note that I just stated, up until reinstatement their actions weren’t erratic.

Now, yes, they definitely are.

IMO OpenAI’s governance is far less trustworthy today than it was yesterday.

replies(1): >>broast+5U
◧◩◪◨⬒
1030. nmfish+cK[view] [source] [discussion] 2023-11-22 12:03:37
>>grafta+Wz
At least Google lasted a good 10 years or so before succumbing to the vagaries of the public stock market. OpenAI lasted, what, 3 years?

Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.

replies(3): >>roland+BP >>bad_us+hS >>deckar+yT1
◧◩◪◨⬒⬓
1031. cyanyd+hK[view] [source] [discussion] 2023-11-22 12:04:24
>>olau+fC
Altman appears to be in the driving seat, so it doesn't matter what other people are saying, the point is "Open" is not being used here to the open source context _but_ they definitely dont try to correct anyone who thinks they're providing open source products.
◧◩◪
1032. cyanyd+qK[view] [source] [discussion] 2023-11-22 12:05:06
>>andy99+yE
It's now clearly a Business oriented product and the non-profit portion is a marketing tactic to avoid scrutiny.
◧◩◪◨⬒⬓⬔⧯▣▦
1033. golden+wK[view] [source] [discussion] 2023-11-22 12:05:45
>>Frustr+sI
Anyone with enough critical thought and understands the hard consciousness problem's true answer (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity), should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
replies(3): >>WJW+wL >>suodua+CX >>Zpalmt+xs1
◧◩◪◨⬒
1034. dbspin+xK[view] [source] [discussion] 2023-11-22 12:05:52
>>shmatt+4K
Yes, the patriarchy collectively breathed a sigh of relief as one of our agents was inserted to prevent any threat from the other side.
◧◩
1035. NicoJu+BK[view] [source] [discussion] 2023-11-22 12:06:07
>>Satam+0a
> that it is effectively controlled by Microsoft

No it's not. Microsoft didn't knew about this till minutes before the press release.

Investors are free to protest decisions against their principles and people are free to move away from their current company.

◧◩
1036. busyan+JK[view] [source] [discussion] 2023-11-22 12:07:33
>>eclect+79
> Why don’t top researchers and scientists get equivalent (if not more) respect, admiration and support?

I can't believe I'm about to defend VCs and "senior management" but here goes.

I've worked for two start-ups in my life.

The first start-up had dog-shit technology (initially) and top-notch management. CEO told me early on that VCs invest on the quality of management because they trust good senior executives to hire good researchers and let them pivot into profitable areas (and pivoting is almost always needed).

I thought the CEO was full of shit and simply patting himself on the back. Company pivoted HARD and IPOed around 2006 and now has a MC of ~ $10 billion.

The second start-up I worked with was founded by a Nobel laureate and the tech was based on his research. This time management was dog-shit. Management fumbled the tech and went out of business.

===

Not saying Altman deserves uncritical praise. All I'm saying is that I used to diminish the importance of quality senior leadership.

replies(5): >>rtsil+XM >>matwoo+bN >>vlad_u+AR >>danari+Z51 >>bnralt+ac1
◧◩◪
1037. giamma+QK[view] [source] [discussion] 2023-11-22 12:08:20
>>gabrie+TH
Well there are notable cases in which the CEO had a critical role in the product development. Larry Ellison coded himself the first versions of Oracle database and was then CEO up to 2014. Shay Banon wrote Elasticsearch and was Elastic CEO for some time.
replies(1): >>gabrie+M74
◧◩◪◨
1038. cyanyd+SK[view] [source] [discussion] 2023-11-22 12:08:30
>>egKYzy+iJ
Well, it's been exposed multiple times that money, egos and the media that needs to report about them create a school lunch table where they simply stroke each other's ego and inflate everything they do.

No need for a conspiracy, everyones seen this in some aspect, it just gets worse when these people are throwing money around in the billions.

all you need to do is witness someone Like Elon musk to see how disruptive this type of thing is.

1039. bvan+UK[view] [source] 2023-11-22 12:08:42
>>staran+(OP)
All involved have clearly demonstrated the lack of credibility in self-governance or the ability to make big-boy decisions. All reassurances from now on will sound hollow.
◧◩◪◨
1040. cyanyd+2L[view] [source] [discussion] 2023-11-22 12:09:43
>>asimpl+5J
almost every decision here, except for the board, can be accounted for by financial decisions.

Especially with putting Larry Summers on the board with this tweet.

◧◩◪◨⬒⬓
1041. wouldb+7L[view] [source] [discussion] 2023-11-22 12:10:47
>>wouldb+xn
It would be an interesting move to install a co-ceo in a few months. That would be harder to object for Sam
◧◩◪
1042. robert+aL[view] [source] [discussion] 2023-11-22 12:11:12
>>lordna+Ft
> I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know.

Or in Arthurian times. Very different values.

◧◩◪
1043. torgin+bL[view] [source] [discussion] 2023-11-22 12:11:14
>>sensan+2I
My more plausible version is that CEOs of journalistic publications are in cahoots with the rich/powerful/govt people, who get to dictate the tone of said publications by hiring the right journalists/editors and giving them the right incentives.

So as a journalist you might have freedom to write your articles, but your editor (as instructed by his/her senior editor) might try to steer you about writing in the correct tone.

This is how 'Starship test flight makes history as it clears multiple milestones' becomes 'Musk rocket explodes during test'

replies(1): >>kridsd+l81
◧◩◪◨
1044. objekt+cL[view] [source] [discussion] 2023-11-22 12:11:18
>>egKYzy+iJ
You are delusional if you think YC folks does not have a wide network of tech journalists who would side with them when they need.
replies(2): >>__loam+qM >>paulco+NT
◧◩◪◨
1045. fakeda+eL[view] [source] [discussion] 2023-11-22 12:11:44
>>egKYzy+iJ
You do know PR firms exist, right? Or have you been living under a rock since the dawn of the 20th century?
◧◩
1046. nmfish+fL[view] [source] [discussion] 2023-11-22 12:11:50
>>nickys+1C
Yeah, I think there may well be an investigation into that. At best, he said something that was unequivocally untrue, and at worst it was an outright lie. That's blatant market manipulation.
◧◩◪◨⬒⬓⬔⧯▣
1047. august+gL[view] [source] [discussion] 2023-11-22 12:11:54
>>brigan+MA
https://liamchingliu.wordpress.com/2012/06/25/intellectuals-...
◧◩◪
1048. nashas+hL[view] [source] [discussion] 2023-11-22 12:11:57
>>kmlevi+ek
Media >= employees? Media >= Sam? I don't think media has any role on oversight or governance.

I think Sam came out the winner. He gets to pick his board. He gets to narrow his employees. If anything, this sets him up for dictatorship. The only other overseers are the investors. In that case, Microsoft came out holding a leash. No MS, means no Sam, which also means employees have no say.

So it is more like MS > Sam > employees. MS+Sam > rest of investors.

◧◩◪◨⬒⬓
1049. notaha+sL[view] [source] [discussion] 2023-11-22 12:13:56
>>wouldb+xn
Yeah, that's my take. Doesn't really matter if the composition of the board is to Adam's liking and has a couple more heavy hitters if Sam is untouchable and Microsoft is signalling that any time OpenAI acts against its interests they will take steps to ensure it ceases to have any staff or funding.
◧◩◪◨
1050. gizmo+tL[view] [source] [discussion] 2023-11-22 12:13:59
>>asimpl+5J
Yes yes, but that doesn't change the fact that Sam positioned himself to be unfireable. The board took their best shot and now the board is (mostly) gone and Sam is still the chief executive. They board will find itself sidelined from now on.
◧◩◪
1051. maxdoo+vL[view] [source] [discussion] 2023-11-22 12:14:15
>>yosame+cc
Where is this blind trust for the board coming from? The board provided zero rationale for firing Sam.
replies(1): >>evantb+vy1
◧◩◪◨⬒⬓⬔⧯▣▦▧
1052. WJW+wL[view] [source] [discussion] 2023-11-22 12:14:18
>>golden+wK
I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.

You seem to be equating AI with magic, which it is very much not.

replies(1): >>golden+bh1
1053. Norweg+xL[view] [source] 2023-11-22 12:14:21
>>staran+(OP)
Why is people so interested in this? Why exactly was he fired? I did not get why when I read the news, so I find it strange that people care if they don't even know what it's about. Do we know for sure what this was/is about?
◧◩◪◨
1054. 383210+zL[view] [source] [discussion] 2023-11-22 12:14:31
>>achron+UF
> just what the heck is Larry Summers doing on that board?

Probably precisely what Condeleeza Rice was doing on DropBox’s board. Or that board filled with national security state heavyweights on that “visionary” and her blood testing thingie.

https://www.wired.com/2014/04/dropbox-rice-controversy/

https://en.wikipedia.org/wiki/Theranos#Management

In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m

“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)

https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...

◧◩◪◨⬒⬓⬔⧯▣
1055. Jumpin+CL[view] [source] [discussion] 2023-11-22 12:14:43
>>doktri+fJ
> > By definition the attention economy dictates that time spent one place can’t be spent in another

Using that definition even the local gokart renting place or the local jetski renting place competes with Facebook.

If you want to use that definition you might want to also add a criteria for minimum size of the company.

replies(1): >>doktri+qO
◧◩◪◨⬒
1056. Wander+DL[view] [source] [discussion] 2023-11-22 12:14:53
>>hobofa+bJ
“never” is a strong word. I believe in the RL era of OpenAI they were quite aligned with the mission/values
◧◩◪◨⬒
1057. mijoha+EL[view] [source] [discussion] 2023-11-22 12:14:57
>>ethanb+IJ
According to reports they haven't told executives and employees inside the company. (I'm not arguing that they should speak publicly, though given the position the board put itself in I think hiring PR people for external crisis comms is very much warranted)

When 95% of your staff threatens to resign and says "you have made a mistake", that's when it's time to say "no, the very good reasons we did it are this". That didn't happen.

◧◩◪
1058. __loam+GL[view] [source] [discussion] 2023-11-22 12:15:15
>>polite+Yj
They have a different set of incentives. If I were them I would have done the same thing, Altman is going to make them all fucking rich. Not sure if that will benefit humanity though.
◧◩
1059. cyanyd+NL[view] [source] [discussion] 2023-11-22 12:15:55
>>shubha+B7
I think you analysis is missing the key problem: Business interests.

The public don't calculate into whats happening here. There's people using ChatGPT for real "business value" and _that_ is what was threatened.

It's clear Business Interests could not be stopped.

◧◩◪◨⬒
1060. xdenni+QL[view] [source] [discussion] 2023-11-22 12:16:13
>>antonv+Nd
> You’re cherry picking [...] You’re ignoring relevant causal factors [...] You’re ignoring decades of research [...] you’re generalizing

You're very emphatic in ignoring common sense. You don't need studies to see that almost all important contributions to mathematics, from Euclid to the present day, have come from men. I don't know if it's because of genetics, culture, or whatever, but it's the truth.

> you are being sexist [...] it’s racist and irrational [...]

Names have never helped discourse.

◧◩◪◨⬒
1061. cornel+YL[view] [source] [discussion] 2023-11-22 12:17:04
>>cables+aI
It's rare, and that makes it a spectacular leg up when you have a person who is great at it.
◧◩◪
1062. cyanyd+ZL[view] [source] [discussion] 2023-11-22 12:17:07
>>pug_mo+Cb
All you're really describing is why this shouldn't be a non-proft and should just be a government effort.

But I assume, from y our language, you'd also object to making this a government utility.

replies(1): >>setham+8P
◧◩
1063. nashas+7M[view] [source] [discussion] 2023-11-22 12:17:54
>>shubha+B7
Helen could have one. She just had to publicly humiliate Sam. She didn't. Employees took over like a mob. Investors pressured board. Board is out. Sam is in. Employees look like they have say. But really, Sam has say. And MSFT is the kingmaker.
◧◩
1064. abra0+dM[view] [source] [discussion] 2023-11-22 12:18:28
>>MattHe+bx
More effort spent on early commercialization like keeping ChatGPT running might mean less effort on cutting edge capabilities. Altman was never an AI safety person, so my personal hope is that Anthropic avoids this by having higher quality leadership.
◧◩◪◨⬒⬓⬔
1065. cft+eM[view] [source] [discussion] 2023-11-22 12:18:35
>>dontup+1G
Sutskever says there's a "phase transition" at the order of 9 bn neurons, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+bn parameters?
◧◩◪◨
1066. cyanyd+kM[view] [source] [discussion] 2023-11-22 12:19:22
>>nostro+3d
What I see with safety is mostly that, AI shouldnt re-enforce stereotypes we already know are harmful.

This is like when Amazon tried to make a hiring bot and that bot decided that if you had "harvard" on your resume, you should be hired.

Or when certain courts used sentencing bots trhat recommended sentencings for people and it inevitably used racial stastistics to recommend what we already know were biased stats.

I agree safety is not "stop the Terminator 2 timeline" but there's serious safety concerns in just embedding historical information to make future decisions.

◧◩◪◨⬒
1067. __loam+qM[view] [source] [discussion] 2023-11-22 12:20:01
>>objekt+cL
They give the journos access as long as they don't bite the hand that feeds. Anyone calling this a conspiracy theory simply hasn't been in the valley long enough to see how these things work.
replies(1): >>verve_+rO
◧◩
1068. maxdoo+zM[view] [source] [discussion] 2023-11-22 12:21:00
>>auggie+E9
Oh, cmon. Why must people reach like this?

How about we look at credentials, merit, and consensus as opposed to “what gender are they?”

replies(1): >>auggie+hZ
1069. minzi+BM[view] [source] 2023-11-22 12:21:15
>>staran+(OP)
I would be surprised if the original board’s reasons for caving in were not influenced by personal factors. They must’ve been receiving all kinds of threats from those involved and from random twitter extremists.

It is troubling because it shows that this “external” governance meant to make decisions for the good of humanity is unable to enforce decisions. The internal employees were obviously swayed by financial gain as well. I don’t think that I would behave differently were I in their shoes honestly. However, this does definitively mean that they are a product and profit driven group.

I think that Sam Altman is dishonest and a depressing example of what modern Americans idealize. He has all these ideals he preaches but will happily turn on if it upsets his ego. On top of that he is held up as some star innovator when in reality he built nothing himself. He just identified one potential technological advancement and threw money at it with all his billionaire friends.

Gone are the days of building things in a garage with a mission. Founders are no longer visionary engineers and designers. The path now is clear. Convince some rich folks you’re worthy of being rich too. When they adopt you into wealth you can start throwing shit at the wall until something sticks. Eventually something will and you can claim visionary status. Now your presence in the billionaire club is beyond reproach because you’re a “founder”.

replies(1): >>InCity+lb1
1070. danger+DM[view] [source] 2023-11-22 12:21:20
>>staran+(OP)
Keeping D'Angelo on the board is an obvious mistake, he has too much conflicting interest to be level headed and has demonstrated that. The only people that benefited from all this are Microsoft and D'Angelo. Give it a year and we will see part 2 of all this.

Further where is the public accountability? I thought the board was to act in the interests of the public but they haven't communicated anything. Are we all just supposed to pretend this never happend and that the board will now act in the public interest?

We need regulations to hold these boards which hold so much power accountable to the public. No reasonable AI regulations can be made until the public are included in a meaningful way, anyone that pushes for regulations without the public is just trying to control the industry and establish a monopoly.

◧◩◪◨⬒⬓⬔⧯▣▦
1071. belter+FM[view] [source] [discussion] 2023-11-22 12:21:30
>>Frustr+sI
That is 3D Chess. 5D Chess says those mil will be worthless when the AGI takes over...
replies(1): >>kaibee+a31
◧◩◪
1072. maxdoo+GM[view] [source] [discussion] 2023-11-22 12:21:31
>>lucubr+ji
Is there any way to disagree with Helen and not be misogynistic in your view? How would that look?
replies(1): >>lucubr+pQ2
◧◩◪◨⬒⬓
1073. cyanyd+HM[view] [source] [discussion] 2023-11-22 12:21:52
>>low_te+zs
AI will be in the fore front in multiple elections globally in a few years.

And it'll likely be doing it with very little input, and generate entire campaigns.

You can claim that "people" are the ones responsible for that, but it's going to overwhelm any attempts to stop it.

So yeah, there's a purpose to examine how these machines are built, not just what the output is.

◧◩◪◨
1074. __loam+OM[view] [source] [discussion] 2023-11-22 12:22:40
>>csunbi+KI
Yeah, it should be extremely obvious the reason most of the employees were willing to walk is they've hitched their wagons to Altman. The board of openai put the presumed party day all of them were anticipating in jeopardy. Not all of us live in this god forsaken place to "work with cool tech".
◧◩◪◨⬒⬓⬔⧯
1075. free65+UM[view] [source] [discussion] 2023-11-22 12:23:08
>>ottero+zH
DDG sells your information to Microsoft, there is no such thing as privacy when $$$ are involved
◧◩◪
1076. rtsil+XM[view] [source] [discussion] 2023-11-22 12:23:44
>>busyan+JK
> IPOed around 2006 and now has a MC of ~ $10 billion.

The interesting thing is you used economic values to show their importance, not what innovations or changes they achieved. Which is fine for ordinary companies, but OpenAI is supposed to be a non-profit, so these metrics should not be relevant. Otherwise, what's the difference?

replies(3): >>matwoo+BN >>infect+pQ >>robert+141
◧◩
1077. JSavag+ZM[view] [source] [discussion] 2023-11-22 12:23:59
>>Satam+0a
The Hacker News comments section has really gone to shit.

People here used to back up their bold claims with arguments.

replies(1): >>framap+gb1
◧◩◪
1078. matwoo+bN[view] [source] [discussion] 2023-11-22 12:25:13
>>busyan+JK
Great comment. You interspersed the two, but instead of using management I like to say that it's leadership that matters. Getting a bunch of people (smart or not) to all row in the same direction with the same vision is hard. It's also commonly the difference between success and failure. Of course the ICs deserve admiration and respect, but people (ICs) are often quick to dismiss leadership.

A great analogy can be found on basketball teams. Lots of star players who should succeed sans any coach, but Phil Jackson and Coach K have shown time and again the important role leadership plays.

replies(2): >>Partia+ZN >>CrazyS+a81
◧◩◪◨⬒
1079. 93po+dN[view] [source] [discussion] 2023-11-22 12:25:45
>>ssnist+Fu
Strong agree. HN is like anywhere else on the internet but with with a bit more dry content (no memes and images etc) so it attracts an older crowd. It does, however, have great gems of comments and people who raise the bar. But it's still amongst a sea of general quick-to-anger and loosely held opinions stated as fact - which I am guilty of myself sometimes. Less so these days.
◧◩◪◨
1080. nickpp+fN[view] [source] [discussion] 2023-11-22 12:25:59
>>r721+Ye
What tribe is that? And why would they favor one network over the others?
replies(1): >>iiv+xR
◧◩◪◨⬒
1081. chucke+gN[view] [source] [discussion] 2023-11-22 12:26:03
>>dontup+GI
Yeah, I think Larry there is because ChatGPT has become too important for USA.
◧◩◪◨⬒
1082. _heimd+iN[view] [source] [discussion] 2023-11-22 12:26:14
>>eddtri+gJ
There are perks to not signing for anyone that actually worked at OpenAI for on the mission rather than the money.
replies(1): >>Wesley+YA1
◧◩◪◨⬒
1083. pooya1+mN[view] [source] [discussion] 2023-11-22 12:26:32
>>tucnak+gn
> There's no idiots at OpenAI.

Most certainly there are idiots at OpenAI.

replies(1): >>infamo+oo1
◧◩◪◨⬒
1084. jddj+oN[view] [source] [discussion] 2023-11-22 12:26:44
>>dontup+GI
The timing of the semiconductor export controls being another datapoint here in support of #1.

Not that it's really in need of additional evidence.

◧◩
1085. Tracke+qN[view] [source] [discussion] 2023-11-22 12:26:56
>>nickys+1C
Short sellers in shambles right now.
◧◩◪
1086. JSavag+uN[view] [source] [discussion] 2023-11-22 12:27:19
>>kmlevi+mi
I'd be shocked if D'Angelo doesn't get kicked off. Even before this debacle his AI competitor app poe.com is an obvious conflict of interest with OpenAI.
replies(2): >>himara+ki1 >>muraka+iR2
◧◩◪
1087. cyanyd+yN[view] [source] [discussion] 2023-11-22 12:28:02
>>jkapla+oc
No one who wants to capitalize on AI appears to take it seriously. Especially how grey that safety is. I'm not concerned AI is going to nuke humanity, I'm more concerned it'll re-enforce racism, bias, and the rest of human's irrational activities because it's _blindly_ using existing history to predict future.

We've seen it in the past decade in multiple cases. That's safety.

The decision that the topic discusses means Business is winning, and they absolutely will re-enforce the idea that the only care is that these systems allow them to re-enforce the business cases.

That's bad, and unsafe.

◧◩◪◨⬒
1088. chucke+zN[view] [source] [discussion] 2023-11-22 12:28:06
>>kmlevi+yn
On what premise you assume that D'Angelo will have any say there? At this point he won't be able to do any moves - especially with Larry and Microsoft overseeing all that stuff.
replies(1): >>kmlevi+qk3
◧◩◪◨
1089. matwoo+BN[view] [source] [discussion] 2023-11-22 12:28:18
>>rtsil+XM
> OpenAI is supposed to be a non-profit, so these metrics should not be relevant

You're doing the same thing except with finances. Non-profit doesn't mean finances are irrelevant. It simply means there are no shareholders. Non-profits are still businesses - no money, no mission.

replies(1): >>brooks+K81
◧◩
1090. caturo+JN[view] [source] [discussion] 2023-11-22 12:29:26
>>Satam+0a
> it is effectively controlled by Microsoft

I don't consider this confirmed. Microsoft brought an enormous amount of money and other power to the table, and their role was certainly big, but it is far from clear to me that they held all or most of the power that was wielded.

◧◩
1091. danger+LN[view] [source] [discussion] 2023-11-22 12:30:11
>>eclect+79
Yea its a bit much he obviously doesn't deserve the admiration that he is getting. That said he deserves respect for helping bring ChatGPT to market, he deserves support because the board have acted like clowns and justified it with their mission of public accountability, but have rejected the idea that the board itself should be publicly accountable.
◧◩◪
1092. concep+QN[view] [source] [discussion] 2023-11-22 12:30:55
>>jatins+yr
My guess is that the arguments are something along the lines of “OpenAIs current products are already causing harm or on the path to do so” or something similar damaging to the products. Something they are afraid of both having continue to move forward on and to having to communicate as it would damage the brand. Like “We already have reports of several hundred people killing themselves because of ChatGPT responses…” and everyone would say, “Oh that makes… wait what??”
◧◩
1093. iowemo+SN[view] [source] [discussion] 2023-11-22 12:30:58
>>Satam+0a
How can you without access to the information that actual employees had of the situation say "there's clearly little critical thinking amongst OpenAI's employees"?
◧◩◪◨
1094. Partia+ZN[view] [source] [discussion] 2023-11-22 12:31:39
>>matwoo+bN
I'd extend that leadership in the form of management needs leadership in the technical aspect as well. The two need to work in tandem to make things work. Imho the best technical leads are usually not the smartest ones, they are those that best utilize their resources - read, other people - and are force multipliers.

Of course you need the people who can deep dive and solve complex issues, none doubts that.

replies(2): >>matwoo+IO >>spacer+aT
◧◩◪◨
1095. astran+eO[view] [source] [discussion] 2023-11-22 12:33:10
>>ssnist+qx
That's how you know it's working.
◧◩◪◨
1096. matwoo+kO[view] [source] [discussion] 2023-11-22 12:34:13
>>cables+BI
Half? 90% of a what a good CEO does is tell the story of why the company is important to it's customers and the market it serves. This story drives sales, motivates people internally, and makes the company a place people want to work.
◧◩◪◨⬒⬓⬔⧯▣▦
1097. doktri+qO[view] [source] [discussion] 2023-11-22 12:35:11
>>Jumpin+CL
> Using that definition even the local gokart renting place or the local jetski renting place competes with Facebook

Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jetskis or gokarts.

> If you want to use that definition you might want to also add a criteria for minimum size of the company.

Your feedback is noted.

Do we disagree on whether or not the two FAANG companies in question are in competition with eachother?

replies(2): >>Jumpin+TW >>dpkirc+Ch1
◧◩◪◨⬒⬓
1098. verve_+rO[view] [source] [discussion] 2023-11-22 12:35:27
>>__loam+qM
Or frankly any industry that is covered by an industry press. Games, movies, cars, it's all the same.
◧◩◪
1099. JohnFe+uO[view] [source] [discussion] 2023-11-22 12:35:39
>>clnq+pl
> Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years?

It hasn't disrupted mine in any way. It may do that in the future, but the future isn't here yet.

◧◩◪◨⬒
1100. cyanyd+AO[view] [source] [discussion] 2023-11-22 12:36:22
>>krisof+Pr
I wonder how much this is connected to the "effective altruism" movement which seems to project this idea that the "ends justify the means" in a very complex matter, where it suggests such badly formulated ideas like "If we invest in oil companies, we can use that investment to fight climate change".

I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics: Just because you know what the "goal" of some isolated system is, that does not mean you know what the outcome is of implementing that goal on a broad scale.

So OpenAI has the same problem: They definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broadscale outcome is.

If you really care about AI safety, you'd be putting it under government control as utility, like everything else.

That's all. That's why government exists.

replies(1): >>krisof+Pj1
◧◩◪◨⬒
1101. danger+EO[view] [source] [discussion] 2023-11-22 12:36:35
>>ethanb+IJ
Its not a private company it is a non profit working in the public interest this usually requires some sort of public accountability. The board want to be a public good when they make decisions but want to be a private entity when those decisions are criticised by the public.
◧◩
1102. mrangl+GO[view] [source] [discussion] 2023-11-22 12:36:58
>>Satam+0a
>groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

So the type of employee that would get hired at OpenAi isn't likely to be skilled at critical thinking? That's doubtful. It looks to me like you dislike how things played out, gathered together some mean adjectives and "groupthink", and ended with a pessimistic prediction for their trajectry as punishment. One is left to wonder what OAI's disruptor outlook would be if the outcome of the current situation had been more pleasing.

◧◩◪◨⬒
1103. matwoo+IO[view] [source] [discussion] 2023-11-22 12:37:09
>>Partia+ZN
Agree completely!
◧◩◪
1104. cyanyd+MO[view] [source] [discussion] 2023-11-22 12:38:05
>>Terrif+D2
Non-profit is just a poorly thought out government-ish thing.

If it's really valuable to society, it needs to be a government entity, full stop.

◧◩◪◨
1105. pas+TO[view] [source] [discussion] 2023-11-22 12:38:25
>>squigz+FE
HN sentiment is pretty ambivalent regarding Altman. yes, almost everyone agrees he's important, but a big group things he's basically landed gentry exploiting ML researchers, an other thinks he's a genius for getting MS pay for GPT costs, etc.
replies(1): >>hacker+Iu1
1106. EarthA+ZO[view] [source] 2023-11-22 12:39:27
>>staran+(OP)
Larry effing Summers?!

Really?

Was Henry Kissinger unavailable?

◧◩◪◨
1107. serial+0P[view] [source] [discussion] 2023-11-22 12:39:30
>>jpgvm+VB
Yes, that matches my experience as well, that's why I mentioned "individual contributors", maybe it wasn't clear.

It's different with engineering managers (or team leads, lead engineers, however you want to call it). When they leave, that's usually a bad sign.

Though also quite often when the engineering leaders leave, I think of it as a canary in the coal mine: they are closer to business, they deal more with business people, so they are the first to realize that "working with these people on these services is pointless, time to jump ship".

◧◩
1108. alentr+2P[view] [source] [discussion] 2023-11-22 12:39:39
>>eclect+79
> treating Sam like some hero

Recent OpenAI CEOs found themselves on the protagonist side not for their actions, but for the way they have been seemingly treated by the board. Regardless of actual actions on either side, "heroic" or not, of which the public knows very little.

◧◩◪◨⬒⬓⬔
1109. cyanyd+6P[view] [source] [discussion] 2023-11-22 12:40:13
>>hadloc+Oi
I think yuou're assuming that OpenAI is charging a $/compute price equal to what it costs them.

More likely, they're a loss-leader and generating publicity by making it as cheap as possible.

_Everything_ we've seen come out of silicon valley does this, so why would they suddenly be charging the right price?

◧◩◪◨
1110. setham+8P[view] [source] [discussion] 2023-11-22 12:40:15
>>cyanyd+ZL
> should just be a government effort

And the controlling party de jour will totally not tweak it to side with their agenda, I'm sure. </s>

replies(1): >>cyanyd+RP
◧◩◪
1111. drawkb+cP[view] [source] [discussion] 2023-11-22 12:40:40
>>fidotr+Be
Developers are clearly the weak link today, have given up all power over product and it is sad and why software sucks so bad. It pains the soul that value creators have let the value extractors run the show, because it is now a reality TV / circus like market where power is consolidating.

Developers and value creators with power are like an anti-trust on consolidation and concentration and they have instead turned towards authoritarianism instead of anti-authoritarianism. What happened? Many think they can still get rich, those days are over because of giving up power. Now quality of life for everyone and value creators is worse off. Everyone loses.

replies(2): >>dinvla+yB2 >>bluech+Ja3
◧◩◪◨
1112. cyanyd+gP[view] [source] [discussion] 2023-11-22 12:41:07
>>g42gre+D5
Because real people are using it to make decisions. Decisions that could be entirely skewed in some direction, and often that causes damage.
◧◩
1113. prepen+kP[view] [source] [discussion] 2023-11-22 12:41:45
>>eclect+79
I don’t get that at all.

The OpenAI board just seems irrational, immature, indecisive, and many other stupid features you don’t want in a board.

I don’t see this so much as an “Altman is amazing” outcome so much as the board is incompetent and doing incompetent things and OpenAI’s products are popular and the boards actions put this products in danger.

Not that Altman isn’t cool, I think he’s smart, but I think a similar coverage would have occurred with any other ceo who was fired for vague and seemingly random reasons on a Friday afternoon.

replies(1): >>Kinran+w91
◧◩◪◨
1114. iowemo+oP[view] [source] [discussion] 2023-11-22 12:42:25
>>murbar+fw
Perhaps a better example would be 95% of people voted in favour of reinstating apple pie to the menu after not receiving a coherent explanation for removing apple pie from the menu.
◧◩
1115. idriss+pP[view] [source] [discussion] 2023-11-22 12:42:34
>>Satam+0a
Take a look at https://kyutai.org/ that launched last week
◧◩◪
1116. jmcgou+rP[view] [source] [discussion] 2023-11-22 12:42:55
>>jakey_+Kj
> What's going to happen to your career when you were one of the 200 who held out initially?

Not to mention Roko's basilisk /s

1117. alieni+sP[view] [source] 2023-11-22 12:43:08
>>staran+(OP)
High Street salesman takes over Frankenstein's lab. Can't wait to see what's going to happen next.
◧◩◪◨⬒⬓⬔⧯▣
1118. _djo_+tP[view] [source] [discussion] 2023-11-22 12:43:17
>>karmas+yv
Sure, I agree. I was referencing only the idea that being smart in one domain automatically means being a good critical thinker in all domains.

I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.

◧◩◪◨
1119. serial+wP[view] [source] [discussion] 2023-11-22 12:44:03
>>Draike+GG
In my comment, the emphasis is definitely on the "product people who know what they are doing" and "good product people".

Of course, if the product suite is clueless, nobody is going to miss them, usually it's better the have no dedicated product people, than having clueless product people.

1120. sys_64+xP[view] [source] 2023-11-22 12:44:22
>>staran+(OP)
Why have OpenAI take to poaching employees from M$ now?
◧◩◪◨⬒⬓
1121. roland+BP[view] [source] [discussion] 2023-11-22 12:44:48
>>nmfish+cK
I would classify their mission "to organize the world's information and make it universally accessible and useful" as some light parading acting in the best interests of humanity.
◧◩◪◨⬒⬓⬔
1122. worlds+CP[view] [source] [discussion] 2023-11-22 12:44:49
>>hadloc+Oi
> offer it locally for a fraction of what openAI is charging

I thought the was a somewhat clear agreement that openAI is currently running inference at a loss?

replies(1): >>hadloc+mz2
◧◩
1123. 0xDEF+IP[view] [source] [discussion] 2023-11-22 12:45:05
>>superu+sC
Bard is better than ChatGPT-3.5.

But GPT-4 is indeed a class of its own.

◧◩◪◨
1124. iowemo+JP[view] [source] [discussion] 2023-11-22 12:45:17
>>ethanb+aH
Since barely any information was made publicly we have to assume the employees had better information that the public. So how can we say they lacked critical thinking when we don't have access to the information they have?
replies(1): >>ethanb+QQ
◧◩
1125. gandut+OP[view] [source] [discussion] 2023-11-22 12:46:19
>>eclect+79
There is a reason why the top researchers and engineers at OpenAI stood behind Sam. Someday you will learn the value of good leader
replies(1): >>bart_s+W51
◧◩◪◨⬒⬓
1126. iowemo+QP[view] [source] [discussion] 2023-11-22 12:46:26
>>ZiiS+gD
He's part of the selection panel but he won't be a part of the new 9 member board.
◧◩◪◨⬒
1127. cyanyd+RP[view] [source] [discussion] 2023-11-22 12:46:29
>>setham+8P
uh. We're arguing about _who is controlling AI_.

What do you image a neutral party does? If youu're talking about safety, don't you think there should be someone sitting on a boar dsomewhere, contemplating _what should the AI feed today?_

Seriously, why is a non profit, or a business or whatever any different than a government?

I get it: there's all kinds of governments, but now theres all kind of businesses.

The point of putting it in the governments hand is a defacto acknowledgement that it's a utility.

Take other utilities, any time you give a prive org a right to control whether or not you get electricity or water, whats the outcome? Rarely good.

If AI is suppose to help society, that's the purview of the government. That's all, you can imagine it's the chinese government, or the russian, or the american or the canadian. They're all _going to do it_, thats _going to happen_, and if a business gets there first, _what is the difference if it's such a powerful device_.

I get it, people look dimly on governments, but guess what: they're just as powerful as some organization that gets billions of dollars to effect society. Why is it suddenly a boogeyman?

replies(1): >>setham+WR
◧◩
1128. alentr+WP[view] [source] [discussion] 2023-11-22 12:47:04
>>intend+y9
Yep, outplayed like in chess. Started with a handicap, led the game to the stalemate, won the match.
◧◩◪◨⬒
1129. j_maff+3Q[view] [source] [discussion] 2023-11-22 12:48:12
>>rvba+DJ
Doesn't mean we shouldn't hold an organization accountable for their publicized mission statement. Especially its board and directors.
◧◩◪◨
1130. crypto+5Q[view] [source] [discussion] 2023-11-22 12:48:27
>>ethanb+aH
Taking decisions in a way that seems opaque and arbitrary will not bring much support from employees, partners and investors. They did not fire a random employee. Not disclosing relevant information for such a key decision was proven, once again, to be a disaster.

This is not about soap opera, this is about business and a big part is based on trust.

◧◩◪◨
1131. T-A+6Q[view] [source] [discussion] 2023-11-22 12:48:35
>>achron+UF
> what the heck is Larry Summers doing on that board?

The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.

And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.

◧◩
1132. gandut+8Q[view] [source] [discussion] 2023-11-22 12:48:47
>>flylib+z4
They need a common man representing the board. After all AI will take those jobs.

I can be that common man

replies(3): >>solard+Bn1 >>jjk166+ju1 >>Marran+DY1
◧◩
1133. tnel77+bQ[view] [source] [discussion] 2023-11-22 12:48:52
>>Satam+0a
Buy Microsoft stock. Got it.
◧◩◪◨⬒⬓⬔⧯
1134. ameist+cQ[view] [source] [discussion] 2023-11-22 12:48:56
>>TheOth+uw
Stupidity is not defined by self-harming actions and beliefs - not sure where you're getting that from.

Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.

replies(1): >>suodua+SX
◧◩◪◨
1135. mrangl+fQ[view] [source] [discussion] 2023-11-22 12:49:25
>>achron+UF
Said purpose and values are nothing more than an attempted control lever for dark actors, very obviously. People / factions that gain handholds, which otherwise wouldn't exist, and exert control through social pressure nonsense that they don't believe in themselves. As can be extracted from modern street-brawl politics, which utilizes the same terminology to the same effect. And as can be inferred would be the case given OAI's novel and convoluted corporate structure as referenced to the importance of its tech.

We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.

◧◩◪◨
1136. Jansjo+gQ[view] [source] [discussion] 2023-11-22 12:49:38
>>csunbi+KI
stock options were probably the focus rather than the salaries
replies(1): >>mouset+BU
◧◩◪◨⬒
1137. dalbas+nQ[view] [source] [discussion] 2023-11-22 12:50:55
>>meitha+Vi
Ok...

So the alternative to great man theory, in this case, is terrible man theory... I'm not following.

If focusing on control over openai, is great man theory... What's the contrary notion?

◧◩◪◨
1138. infect+pQ[view] [source] [discussion] 2023-11-22 12:51:07
>>rtsil+XM
How do you do expensive bleeding edge research with no money? Sure you might get some grants in the millions but what if it takes billions. Now lets assume the research is no small feat, its not just a handful of individuals in a lab, we need to hire larger teams to make it happen. We have to pay for those individuals and their benefits.

My take is its not cheap to do what they are doing and adding a capped for-profit side is an interesting take. Afterall, OpenAI's mission clearly states that AGI is happening and if thats true, those profit caps are probably trivial to meet.

◧◩◪
1139. JCM9+wQ[view] [source] [discussion] 2023-11-22 12:52:36
>>polite+Yj
When a politician wins with 98% of the vote do you A) think that person must be an incredible leader , or B) think something else is going on?

Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.

replies(6): >>bad_us+aR >>driver+PR >>roflc0+hX >>heyjam+2Y >>shzhdb+I01 >>JVIDEL+W01
◧◩
1140. kenjac+GQ[view] [source] [discussion] 2023-11-22 12:53:53
>>Satam+0a
Microsoft played almost no role in the process except to be a place for Sam and team to land.

What the process did shoe is if you plan to oust a popular CEO with a thriving company, you should actually have a good reason for it. It’s amazing how little thought seemingly went into it for them.

◧◩
1141. gandut+KQ[view] [source] [discussion] 2023-11-22 12:54:26
>>shubha+B7
Honestly I feel that we will never be able to preemptively build safety without encountering the real risk or threat.

Incrementally improving AI capabilities is the only way to do that.

◧◩
1142. redser+OQ[view] [source] [discussion] 2023-11-22 12:54:40
>>eclect+79
Unfortunately the engineers aren’t usually the ones getting the praise but CEO or other singular figurehead.
◧◩◪◨⬒
1143. ethanb+QQ[view] [source] [discussion] 2023-11-22 12:54:47
>>iowemo+JP
I didn’t claim employees were engaged in groupthink. I’m taking issue with the claim that because there is no public explanation, there must not be a good explanation.
replies(1): >>ulizzl+RY
◧◩
1144. cholli+WQ[view] [source] [discussion] 2023-11-22 12:55:12
>>Satam+0a
> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

What leads you to make such a definitive statement? To me the process shows that Microsoft has no pull in OpenAI.

◧◩◪◨
1145. bad_us+aR[view] [source] [discussion] 2023-11-22 12:56:51
>>JCM9+wQ
The environment in a small to medium company is much more homogenous than the general population.

When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.

replies(4): >>mstade+JT >>kcplat+PU >>plorg+RW >>from-n+t51
◧◩◪◨⬒
1146. bnralt+cR[view] [source] [discussion] 2023-11-22 12:57:06
>>hobofa+bJ
> From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI".

Board member Helen Toner strongly criticized OpenAI for publicly releasing it's GPT when it did and not keeping it closed for longer. That would seem to be working against openness for many people, but others would see it as working towards safe AI.

The thing is, people have radically different ideas about what openness and safe mean. There's a lot of talk about whether or not OpenAI stuck with it's stated purpose, but there's no consensus on what that purpose actually means in practice.

◧◩
1147. 627467+gR[view] [source] [discussion] 2023-11-22 12:57:28
>>eclect+79
A CEO is not a researcher. A researcher can be a CEO but in doing so stops being a researcher.

Maybe (almost certainly) Sam is not a savior/hero, but he doesn't need to be a savior/hero. He just needs to gather more support than the opposition (the now previous board). And even if you don't know any details of this story, enough insiders who know more than any of us of what happens inside oai - including hundred of researchers - decided to support the "savior/hero". It's less about Sam and more about an incompetent board. Some of those board members are top researchers. And they are now on the losing camp.

1148. nojvek+hR[view] [source] 2023-11-22 12:57:29
>>staran+(OP)
What this proves is that OpenAI interests are now entrenched with profit.

I’m assuming most of the researchers there probably realize there is a loooot of money to be made and they have to optimize for that.

They are deffo pushing the frontier of AI.

However I wish OpenAI doesn’t get to AGI first.

I don’t think it will be the best for all of humanity.

I’m scared.

◧◩◪◨
1149. Bayaz+kR[view] [source] [discussion] 2023-11-22 12:57:48
>>ethanb+aH
And yet here we are with a result that not only runs counter to your premise but will taught as an example of what not to do in business.
replies(1): >>ethanb+lS
◧◩
1150. lysecr+pR[view] [source] [discussion] 2023-11-22 12:58:14
>>laserl+gb
No the board is just one instance. It doesn’t and shouldn’t have absolute power. Absolute power corrupts absolutely.

There ist the board the investors the employees the senior management.

All other parties aligned against it and thus it couldn’t act. If only Sam would have rebelled. Or even just Sam and the investors (without the employees) nothing would have happened.

1151. pimpam+qR[view] [source] 2023-11-22 12:58:21
>>staran+(OP)
So Altman started it and ended up winning it, clearly his coup. Sad how employees were duped into standing behind him.
◧◩◪◨⬒
1152. iiv+xR[view] [source] [discussion] 2023-11-22 12:58:51
>>nickpp+fN
The silicon valley/startups/VC tribe, and they favour Twitter because 1. that's what their friends use and 2. they like Elon Musk, they want to be like him.
replies(2): >>nickpp+Id1 >>ssnist+zN2
◧◩◪
1153. vlad_u+AR[view] [source] [discussion] 2023-11-22 12:59:15
>>busyan+JK
Interesting, I always thought that research and startups are very similar. Where you have something (product/research-idea) which you think is novel and try to sell it (journals/customers).

The management skills which you potentiated differentiated the success of the two firms. I can see how the lack of this might be wildly spread out in academia.

replies(1): >>mikpan+V01
◧◩◪
1154. kromem+OR[view] [source] [discussion] 2023-11-22 13:00:10
>>jatins+yr
I agree with both the commenter above you and you.

Yes, you are right that the board had weak sauce reasoning for the firing (giving two teams the same project!?!).

That said, the other commenter is right that this is the beginning of the end.

One of the interesting things over the past few years watching the development of AI has been that in parallel to the demonstration of the limitations of neural networks has been many demonstrations of the limitations of human thinking and psychology.

Altman just got given a blank check and crowned as king of OpenAI. And whatever opposition he faced internally just lost all its footing.

That's a terrible recipe for long term success.

Whatever the reasons for the firing, this outcome is going to completely screw their long term prospects, as no matter how wonderful a leader someone is, losing the reality check of empowered opposition results in terrible decisions being made unchecked.

He's going to double down on chat interfaces because that's been their unexpected bread and butter up until the point they get lapped by companies with broader product vision, and whatever elements at OpenAI shared that broader vision are going to get steamrolled now that he's been given an unconditional green light until they jump ship over the next 18 months to work elsewhere.

replies(1): >>nvm0n2+vX
◧◩◪◨
1155. driver+PR[view] [source] [discussion] 2023-11-22 13:00:17
>>JCM9+wQ
Originally, 65% had signed (505 of 770).
◧◩
1156. baxtr+RR[view] [source] [discussion] 2023-11-22 13:00:22
>>Satam+0a
Based on the spectacular drama we were allowed to observe:

For a company at the forefront of AI it’s actually very, very human.

◧◩◪◨⬒⬓
1157. setham+WR[view] [source] [discussion] 2023-11-22 13:00:52
>>cyanyd+RP
I find any government to be more of a boogeyman than any private company because the government has the right to violence and companies come and go at a faster rate.
replies(2): >>cyanyd+ES >>kjkjad+RN1
◧◩◪◨⬒⬓
1158. bad_us+hS[view] [source] [discussion] 2023-11-22 13:03:28
>>nmfish+cK
> Google never paraded itself around as a non-profit acting in the best interests of humanity.

Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.

replies(1): >>Turing+i11
◧◩◪◨⬒
1159. ethanb+lS[view] [source] [discussion] 2023-11-22 13:03:55
>>Bayaz+kR
What?
◧◩
1160. flagra+mS[view] [source] [discussion] 2023-11-22 13:04:03
>>garris+EJ
I don't expect the government to regulate any of this aggressively. AI is much to important to the government and military to allow pesky conflicts of interest to slow down any competitive advantage we may have.
replies(1): >>dgrin9+X61
◧◩◪◨
1161. keepam+pS[view] [source] [discussion] 2023-11-22 13:04:23
>>have_f+GC
Literally it can only be the one person to have not let go of their board seat. Who might that be?

Smeagol D’Angelo

replies(1): >>keepam+cy3
1162. donoho+vS[view] [source] 2023-11-22 13:05:19
>>staran+(OP)
Larry Summers!?
◧◩◪◨
1163. mrangl+wS[view] [source] [discussion] 2023-11-22 13:05:22
>>kitsun+ty
What is socially defined as beneficial-to-humanity is functionally mandated by the MSM and therefore capricious, at the least. With that in mind, a translation:

"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".

◧◩◪◨⬒⬓⬔
1164. cyanyd+ES[view] [source] [discussion] 2023-11-22 13:07:03
>>setham+WR
Ok, and if Raytheon builds an AI and tells a government "trust us, its safe", arn't you just letting them create a scape goat via the government?

Seriously, Businesses simply dont have the history that governments do. They're just as capable of violence.

https://utopia.org/guide/crime-controversy-nestles-5-biggest...

All you're identifying is "government has a longer history of violence than Businesses"

◧◩◪◨⬒⬓
1165. mrangl+5T[view] [source] [discussion] 2023-11-22 13:10:02
>>kortil+rs
Disagreeing with employee actions doesn't mean that you are correct and they failed to think well. Weighting their collective probable profiles, including as insiders, and yours, it would be irrational to conclude that they were in the wrong.
replies(1): >>rewmie+WV
◧◩◪◨⬒
1166. spacer+aT[view] [source] [discussion] 2023-11-22 13:10:29
>>Partia+ZN
I'd go further than even that! You need 3 forms of advocacy in leadership for a successful business, business/market, tech, and time. The balance of those three can make or break any business.

You can see this at the micro level in a scrum team between the scrummaster, the product owner, and the tech lead.

◧◩
1167. danger+rT[view] [source] [discussion] 2023-11-22 13:11:52
>>doctob+g2
Strangely I think Ilya comes out of this well. He made a decision based on his values and what he believed was the best decision for AI safety. After seeing the outcome of that decision he changed his mind and owned that. He must have known it would result in the internet ridiculing him for flip flopping, but acted in what he thought was the best interest for the employees signing the letter. His actions are wroth criticism but I think his moral character has been demonstrated.

The other members of the board seemed to make their decision based on more personal reasons that seems to fit with Adams conflict of interest. They refused to communicate and only now accept any sort of responsibility for their actions and lack of plan.

Honestly Ilya is the only one of the 4 I would actually want still on the board. I think we need people who are willing to change direction based on new information especially in leadership positions despite it being messy, the world is messy.

◧◩
1168. ur-wha+sT[view] [source] [discussion] 2023-11-22 13:12:11
>>eclect+79
> It looks like one should strive to become product manager, not an engineer or a scientist.

If you look at who's running Google right now, you would be essentially correct.

◧◩◪◨⬒⬓⬔
1169. mrangl+tT[view] [source] [discussion] 2023-11-22 13:12:15
>>alsodu+Zt
But pronouncing that 700 people are bad at critical thinking is convenient when you disagree with them on desired outcome and yet can't hope to argue points.
◧◩
1170. smrtin+BT[view] [source] [discussion] 2023-11-22 13:13:06
>>eclect+79
The service itself has an incredible amount of utility and he will make them all wealthy. Seems like a no brainer to me.
◧◩◪◨⬒
1171. mstade+JT[view] [source] [discussion] 2023-11-22 13:13:25
>>bad_us+aR
Not that I have any insight into any of the events at OpenAI, but would just like to point out there are several other reasons why so many people would sign, including but not limited to:

- peer pressure

- group think

- financial motives

- fear of the unknown (Sam being a known quantity)

- etc.

So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.

If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.

[1]: https://www.imdb.com/title/tt2575988/

replies(5): >>FabHK+e11 >>phpist+031 >>framap+i41 >>bad_us+z91 >>ghaff+tJ1
◧◩◪◨⬒
1172. paulco+NT[view] [source] [discussion] 2023-11-22 13:13:58
>>objekt+cL
YC has an entire website (this one) it can use when it needs to lol.
◧◩◪◨⬒
1173. Doughn+TT[view] [source] [discussion] 2023-11-22 13:14:39
>>rvba+DJ
Not so true working for an organisation that is ostensibly a non-profit. People working for a non-profit are generally taking a significant hit to their earning's compared to doing similar work in a for-profit, outside of the top management of huge global charities.

The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.

Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.

replies(2): >>erosen+a71 >>rvba+2c1
◧◩◪
1174. kiba+1U[view] [source] [discussion] 2023-11-22 13:15:11
>>polite+Yj
They could just reach different conclusion based on their values. OpenAI doesn't seem to be remotely serious about preventing the misuse of AI.
◧◩◪◨⬒⬓
1175. broast+5U[view] [source] [discussion] 2023-11-22 13:16:19
>>ethanb+aK
I found the board members own words to be quite erratic between Friday and today, such as Ilya saying he wished he didn't participate in the boards actions.
replies(1): >>ethanb+QX
◧◩◪◨⬒⬓
1176. kubect+6U[view] [source] [discussion] 2023-11-22 13:16:35
>>disgru+CC
> Does that mean that we shouldn't have done it?

We can only change what we can change and that is in the past. I think it's reasonable to ask if the phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because it follows from some historical precedent.

replies(1): >>disgru+cx5
◧◩◪◨
1177. blitza+qU[view] [source] [discussion] 2023-11-22 13:17:54
>>kitsun+ty
> most likely to benefit humanity as a whole

Giving me a billion $ would be a net benefit to humanity as a whole

replies(1): >>jraph+K01
◧◩◪◨⬒
1178. mouset+BU[view] [source] [discussion] 2023-11-22 13:19:20
>>Jansjo+gQ
There was about to be a secondary stock purchase by Thrive where employees could cash out their shares. That likely would've fallen apart if the board won the day. Employees had a massive incentive to get same back.
◧◩◪◨⬒
1179. danger+GU[view] [source] [discussion] 2023-11-22 13:19:51
>>ywain+bl
This is very true its the unintended consequences of engineering that cause the most harm and are most often covered up. I always think of the example of the hand dryer that can't detect black peoples hands and how easy it is for a non racist engineer to make a racism machine. AI safety putting its focus on the what if it decides to do a genocide is kind of silly, its like worrying about nukes while you give out assault riffles and napalm to kids.
◧◩◪◨⬒
1180. kcplat+PU[view] [source] [discussion] 2023-11-22 13:20:45
>>bad_us+aR
Personally I have never seen that level of singular agreement in any group of people that large. Especially to the level of sacrifice they were willing to take for the cause. You maybe see that level of devotion to a leader in churches or cults, but in any other group? You can barely get 3 people to agree on a restaurant for lunch.

I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.

replies(5): >>psycho+sW >>panrag+cX >>lxgr+nX >>dahart+gn1 >>cellar+yP1
◧◩
1181. throwa+cV[view] [source] [discussion] 2023-11-22 13:23:18
>>eclect+79
Either:

Incubation of senior management in US tech has reached singularity and only one person's up for the job. Doom awaits the US tech sector as there's no organisational ability other than one person able and willing to take the big complex job.

Or:

Sam's overvalued.

One or the other.

◧◩
1182. nemo44+kV[view] [source] [discussion] 2023-11-22 13:23:57
>>doctob+g2
Sam will have no issue patching the relationship because he knows how a business relationship works. Besides, Ilya kissed the ring as evidenced by his tweet.
◧◩◪◨
1183. photoc+nV[view] [source] [discussion] 2023-11-22 13:24:13
>>smt88+Oj
It does seem that the hypocrisy was baked in from the beginning. In the tech world 'open' implied open source but OpenAI wanted to benefit from a marketing itself as something like Linux when internally it was something like Microsoft.

Corporations have no values whatsoever and their statements only mean anything when expressed in terms of a legally binding contract. All corporate value statements should be viewed as nothing more than the kind of self-serving statements that an amoral narcissitic sociopath would make to protect their own interests.

1184. throwa+sV[view] [source] 2023-11-22 13:24:35
>>staran+(OP)
So OpenAI's board is now exclusively white men, and predominantly tech insiders? Lovely to have such a diverse group behind this technology Could this be more comical?
◧◩◪◨⬒⬓
1185. rewmie+vV[view] [source] [discussion] 2023-11-22 13:25:00
>>kortil+rs
> Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.

That's not the bar you are arguing against.

You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.

We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.

replies(1): >>kortil+WAa
1186. iterat+zV[view] [source] 2023-11-22 13:25:14
>>staran+(OP)
Sam's power was tested and turned out to be absolute.

Sam was doing whatever he wanted, got caught, and now can continue to do what he wants with even more backing.

◧◩◪◨
1187. effica+RV[view] [source] [discussion] 2023-11-22 13:27:18
>>gorbyp+Gn
part of the fanaticism here is that the first one to get an AGI wins because they can use its powerful intelligence to overcome every competitor and shut them down. they’re living in their own sci fi novel
◧◩◪◨⬒⬓⬔
1188. rewmie+WV[view] [source] [discussion] 2023-11-22 13:27:34
>>mrangl+5T
> Disagreeing with employee actions doesn't mean that you are correct and they failed to think well.

You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.

◧◩◪
1189. blitza+YV[view] [source] [discussion] 2023-11-22 13:27:47
>>sensan+2I
Let me offer up a secret from the inside. You dont in any way shape or form have to pay money to journalists. The can are bought and paid for through their currency - information and access.

They dont really even really shill for their patron; they thrive on the relevance of having their name in the byline for the article, or being the person who gets quote / information / propaganda from <CEO|Celebrity|Criminal|Viral Edgelord of the Week>.

◧◩◪
1190. chatma+ZV[view] [source] [discussion] 2023-11-22 13:27:48
>>huyter+kb
Meanwhile Sundar might be the worst. Where was he this weekend? Where was he the past three years while his company got beat to market on products built from its own research? He's asleep at the wheel. I'm surprised every day he remains CEO.
replies(1): >>kridsd+gm1
1191. nomaD_+cW[view] [source] 2023-11-22 13:29:24
>>staran+(OP)
Hiring engineers at 900K salary & pretending to be non-profit does not work. Turns out, 97% of them wanted to make money.

Government should have banned big tech investment in AI companies a year ago. If they want, they can create their own AI but buying one should be off the table.

◧◩◪
1192. chatma+lW[view] [source] [discussion] 2023-11-22 13:30:28
>>ozgung+pv
I could buy this theory, but it's worth noting that if it's true, their coup appears to have failed. So that's score one for the naive tech bros, score zero for the conniving natsec sociopaths.
replies(1): >>kridsd+Ln1
◧◩
1193. iterat+qW[view] [source] [discussion] 2023-11-22 13:31:14
>>eclect+79
Apparently he has a massive role in VC, and since this community, tech twitter, etc. all circle around that, he is unconditionally praised.

Further, the current tech wave is all about AI, where there's a massive community of basically "OpenAI wrapper" grifters trying to ride the wave.

The shorter answer is: money.

◧◩◪◨⬒⬓
1194. psycho+sW[view] [source] [discussion] 2023-11-22 13:31:21
>>kcplat+PU
>You can barely get 3 people to agree on a restaurant for lunch.

I was about to state that a single human is enough to see disagreements raise, but this doesn’t reach full consensus in my mind.

replies(1): >>kcplat+901
◧◩◪
1195. Perz1v+EW[view] [source] [discussion] 2023-11-22 13:33:00
>>sensan+2I
Maybe they feel really insecure when the "News Writing Themselves AI" company got unstable...
◧◩◪◨
1196. iterat+PW[view] [source] [discussion] 2023-11-22 13:34:09
>>egKYzy+iJ
Really? It's well documented and even admitted that Apple has a set of Apple-friendly media partners.
replies(1): >>YourCu+m51
◧◩◪◨⬒
1197. plorg+RW[view] [source] [discussion] 2023-11-22 13:34:19
>>bad_us+aR
That sounds like a cult more than a business. I work at a small company (~100 people), and we are more or less aligned with what we're doing you are not going to get close to that consensus on anything. Same for our sister company, about the same size as OpenAI.
replies(2): >>chiefa+kZ >>docmar+z31
◧◩◪◨⬒⬓⬔⧯▣▦▧
1198. Jumpin+TW[view] [source] [discussion] 2023-11-22 13:34:26
>>doktri+qO
> > Do we disagree

I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service

I believe Facebook vs Hulu or regular TV is more of a competition in the attention economy because when the commercial break comes up then you start scrolling your social media on your phone and every 10 posts or whatever you stumble into the ads placed on there so Facebook ads are seen and convert whereas regular tv and hulu aren’t seen and dont convert

replies(1): >>doktri+LB1
◧◩◪◨⬒⬓
1199. panrag+cX[view] [source] [discussion] 2023-11-22 13:35:27
>>kcplat+PU
>Especially to the level of sacrifice they were willing to take for the cause.

We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.

◧◩◪
1200. iterat+dX[view] [source] [discussion] 2023-11-22 13:35:29
>>s1arti+0G
Please stop. No employee is loyal to any CEO based on some higher order matter. They just want to get their big pay day and will follow whoever makes that possible.
replies(1): >>s1arti+X32
◧◩◪◨
1201. roflc0+hX[view] [source] [discussion] 2023-11-22 13:35:41
>>JCM9+wQ
The simple answer here is that the boards actions stood to incinerate millions of dollars of wealth for most of these employees, and they were up in arms.

They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.

replies(2): >>whywhy+1Z >>citygu+if1
◧◩◪◨⬒⬓
1202. lxgr+nX[view] [source] [discussion] 2023-11-22 13:36:47
>>kcplat+PU
Approval rates of >90% are quite common within political parties, to the point where anything less can be seen as an embarrassment to the incumbent head of party.
replies(1): >>kcplat+MZ
◧◩◪◨
1203. nvm0n2+vX[view] [source] [discussion] 2023-11-22 13:37:30
>>kromem+OR
Not necessarily! Facebook has done great with its unfireable CEO. The FB board would certainly have fired him several times over by now if it could, and yet they'd have been wrong every time. And the Google cofounders would certainly have been kicked out of their own company if the board had been able to.
replies(1): >>herost+5k1
◧◩◪◨⬒⬓⬔⧯▣▦▧
1204. suodua+CX[view] [source] [discussion] 2023-11-22 13:38:21
>>golden+wK
OTOH, there's a very good argument to be made that if you recognize that fact, your short-term priority should be to amass a lot of secular power so you can align society to that reality. So the best action to take might be no different.
replies(1): >>golden+ef1
◧◩◪◨⬒⬓⬔
1205. morale+LX[view] [source] [discussion] 2023-11-22 13:38:50
>>denlek+No
I do think there is some brand loyalty.

People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.

But I agree that it's a weak moat, if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.

replies(1): >>denlek+BC1
◧◩◪◨⬒⬓⬔
1206. ethanb+QX[view] [source] [discussion] 2023-11-22 13:39:19
>>broast+5U
It would be completely understandable to regret when your action against someone causes them to fall upwards
replies(1): >>framap+ca1
◧◩◪◨⬒⬓⬔⧯▣
1207. suodua+SX[view] [source] [discussion] 2023-11-22 13:39:28
>>ameist+cQ
Probably from law 3: https://principia-scientific.com/the-5-basic-laws-of-human-s...

But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.

It's a very influential essay.

replies(1): >>ameist+Ie1
◧◩◪◨
1208. heyjam+2Y[view] [source] [discussion] 2023-11-22 13:40:05
>>JCM9+wQ
That argument only works with a “population”, since almost nobody gets to choose which set of politicians they vote for.

In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.

◧◩◪◨
1209. Burnin+hY[view] [source] [discussion] 2023-11-22 13:42:02
>>achron+UF
Larry Summers is everywhere and does everything.
replies(1): >>Turing+M01
◧◩◪◨
1210. Aurorn+iY[view] [source] [discussion] 2023-11-22 13:42:13
>>ethanb+aH
> OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired.

The interim CEO said the board couldn’t even tell him why the old CEO was fired.

Microsoft said the board couldn’t even tell them why the old CEO was fired.

The employees said the board couldn’t explain why the CEO was fired.

When nobody can even begin to understand the board’s actions and they can’t even explain themselves, it’s a recipe for losing confidence. And that’s exactly what happened, from investors to employees.

replies(1): >>ethanb+D01
◧◩◪◨
1211. ulizzl+AY[view] [source] [discussion] 2023-11-22 13:43:32
>>ethanb+aH
All explanations lend credence to positions which is why is not a good idea to comment on anything. Looks like they’re lawyered up.
◧◩◪◨⬒⬓
1212. ulizzl+RY[view] [source] [discussion] 2023-11-22 13:44:38
>>ethanb+QQ
That is a logical fallacy clawing your face. Upvotes to whoever can name which one.
1213. Pigalo+TY[view] [source] 2023-11-22 13:44:54
>>staran+(OP)
Shows over I guess. Feels like the ending to GoT. I’m not sure I even care what happened to begin it all anymore.
◧◩◪◨⬒
1214. whywhy+1Z[view] [source] [discussion] 2023-11-22 13:45:17
>>roflc0+hX
Wild the employees will go back under a new board and the same structure, first priority should be removing the structure that allowed a small group of people to destroy things over what may have been very petty reasons.
replies(1): >>CydeWe+121
◧◩◪◨⬒⬓
1215. timacl+2Z[view] [source] [discussion] 2023-11-22 13:45:19
>>phero_+AH
Unless somehow a “mission statement” is legally binding it will never mean anything that matters.

Its always written by PR people with marketing in mind

◧◩◪◨⬒⬓
1216. noneth+3Z[view] [source] [discussion] 2023-11-22 13:45:33
>>Hendri+1u
And technically 2 new CEOs
◧◩
1217. strike+7Z[view] [source] [discussion] 2023-11-22 13:46:03
>>laserl+gb
The board can still fire sam provided they get all the key stakeholders onboard with that firing. It made no sense to fire someone doing a good job at their role without any justification, that seems to have been the key issue. Ultimately, we all know this non profit thing is for show and will never work out.
◧◩◪
1218. Cacti+aZ[view] [source] [discussion] 2023-11-22 13:46:11
>>sashan+Ko
please don’t troll HN
◧◩◪◨
1219. Davidz+fZ[view] [source] [discussion] 2023-11-22 13:46:49
>>r721+Ye
Have you used Mastodon? I don't think you can follow drama on Mastodon unless you're already part of the drama.
◧◩◪
1220. auggie+hZ[view] [source] [discussion] 2023-11-22 13:46:57
>>maxdoo+zM
I am sure Larry Summers is highly qualified for this job. Would have been very hard to find a willing woman with his qualifications.
replies(1): >>maxdoo+OF3
◧◩◪◨⬒⬓
1221. chiefa+kZ[view] [source] [discussion] 2023-11-22 13:47:12
>>plorg+RW
I also sounds like a very narrow hiring profile. That is, favoring the like-minded and assimilation over free thinking and philosophical diversity. They might give off the appearance of "diversity" on the outside - which is great for PR - but under the hood it's more monocultural. Maybe?
replies(2): >>phpist+C31 >>docmar+671
◧◩◪◨
1222. metano+qZ[view] [source] [discussion] 2023-11-22 13:47:34
>>davedx+Ch
I would add Meta to this list, in particular because Yann LeCun is the most vocal critic of LLM one-ponyism.
◧◩◪
1223. Cacti+FZ[view] [source] [discussion] 2023-11-22 13:48:37
>>haunte+ih
these are the vapid, pedantic hot takes we all come here for. thanks.
◧◩◪◨⬒
1224. strike+GZ[view] [source] [discussion] 2023-11-22 13:48:43
>>kareaa+kA
Most are probably motivated by money, some are motivated by stability and some are motivated by their loyalty to sam but i think most are motivated by money and stability.
◧◩◪
1225. Davidz+LZ[view] [source] [discussion] 2023-11-22 13:49:16
>>fruit2+X4
There's no proof on either side. Just as likely to be ideological disputes from Helen and Ilya.
◧◩◪◨⬒⬓⬔
1226. kcplat+MZ[view] [source] [discussion] 2023-11-22 13:49:24
>>lxgr+nX
There is a big difference between “I agree with this…” when a telephone poll caller reaches you and “I am willing to leave my livelihood because my company CEO got fired”
replies(3): >>from-n+K31 >>lxgr+Y31 >>zerbin+V91
◧◩◪◨⬒⬓⬔
1227. kcplat+901[view] [source] [discussion] 2023-11-22 13:51:08
>>psycho+sW
I was conflicted about originally posting that sentence. I waffled back and forth between, 2, 3, 5…

Three was the compromise I made with myself.

◧◩
1228. 3cats-+e01[view] [source] [discussion] 2023-11-22 13:51:28
>>Satam+0a
Let me guess. The only valid outcome for you would've been that they disband in order to prevent opening a portal to the cosmic AGI Cthulhu.

Frankly these EA & e/acc cults are starting to get on my nerves.

◧◩◪◨⬒
1229. ethanb+D01[view] [source] [discussion] 2023-11-22 13:53:17
>>Aurorn+iY
I’m specifically taking issue with this common meme that the public is owed some sort of explanation. I agree the employees (and obviously the incoming CEO) would be.

And there’s a difference between, “an explanation would help their credibility” versus “a lack of explanation means they don’t have a good reason.”

◧◩◪◨
1230. shzhdb+I01[view] [source] [discussion] 2023-11-22 13:53:46
>>JCM9+wQ
> for now the damage is done and OpenAI has a lot of trust rebuilding to do

Nobody cares, except shareholders.

◧◩◪◨⬒
1231. jraph+K01[view] [source] [discussion] 2023-11-22 13:53:48
>>blitza+qU
Depends on what you do (and stop doing) with it :-)
◧◩◪◨⬒
1232. Turing+M01[view] [source] [discussion] 2023-11-22 13:53:54
>>Burnin+hY
At the same time?
replies(1): >>marcos+tq1
1233. rceDia+N01[view] [source] 2023-11-22 13:53:56
>>staran+(OP)
The "giveaway" is the fact that "Microsoft is happy" with the return of Mr. Altman. Can't wait for the former boards tell-all story. Bets on: how a founder of cutting edge tech company wanted world peace and no harm but outside capital forces steered him to other "unfathomable riches" option. It happens.
◧◩
1234. enoch_+S01[view] [source] [discussion] 2023-11-22 13:54:02
>>Satam+0a
> the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either

I suspect incentives play a huge role here. OAI employees are compensated with stock in the for-profit arm of the company. It's obvious that the board's actions put the value of that stock in extreme jeopardy (which, given the corporate structure, is theoretically completely fine! the whole point of the corporate structure is that the nonprofit board has the power to say "yikes, we've developed an unsafe superintelligence, burn down the building and destroy the company now").

I think it's natural for employees to be extremely angry with a board decision that probably cost them >$1M each.

◧◩◪◨
1235. mikpan+V01[view] [source] [discussion] 2023-11-22 13:54:11
>>vlad_u+AR
Most startups need to do a very different type of research than academia. They need to move very fast and test ideas against the market. In my experience, most academic research is moving pretty slowly due to different goals and incentives - and at times it can be a good thing.
◧◩◪◨
1236. JVIDEL+W01[view] [source] [discussion] 2023-11-22 13:54:23
>>JCM9+wQ
Odds are if he left there's the possibility their compensation situation might have changed for the worse if not leading to downsizing, that in the edge of a recession with plenty of competition out there.
◧◩◪◨⬒⬓
1237. Jensso+411[view] [source] [discussion] 2023-11-22 13:54:57
>>jatins+nc
> 1/ it's Blind

Average people don't like to lie, if someone bullies them until they agree to sign they will sign because they are honest.

Also if they said they will sign but the ticker didn't go up, it is pretty obvious that they lied and I'm sure they don't want that risk.

◧◩◪◨⬒⬓⬔⧯
1238. JohnPr+a11[view] [source] [discussion] 2023-11-22 13:55:31
>>fsloth+Jm
Could you give a clear mechanistic model of how the US would handle such a danger?
replies(2): >>fsloth+Mi2 >>JumpCr+LD2
◧◩◪◨⬒⬓
1239. FabHK+e11[view] [source] [discussion] 2023-11-22 13:55:47
>>mstade+JT
I'd love another season of Silicon Valley, with some Game Stonk and Bored Apes and ChatGPT and FTX and Elon madness.
replies(1): >>jakder+Hl1
◧◩◪◨⬒⬓⬔
1240. Turing+i11[view] [source] [discussion] 2023-11-22 13:56:11
>>bad_us+hS
Maybe, but their actions should definitely not be oriented to decide how to maximize their profit.
replies(1): >>bad_us+t61
◧◩◪◨
1241. strike+x11[view] [source] [discussion] 2023-11-22 13:57:06
>>veec_c+Zb
I haven't seen this type of drama in years, surely thats not enough to sustain X
◧◩
1242. Aurorn+H11[view] [source] [discussion] 2023-11-22 13:57:52
>>Satam+0a
> Disappointing outcome.

The employees of a tech company banded together to get what they wanted, force a leadership change, evict the leaders they disagreed with, secure the return of the leadership they wanted, and restored the value of their hard-earned equity.

This certainly isn’t a disappointing outcome for the employees! I thought HN would be ecstatic about tech employees banding together to force action in their favor, but the comments here are surprisingly negative.

◧◩◪◨⬒
1243. croes+P11[view] [source] [discussion] 2023-11-22 13:58:32
>>siva7+xk
If we want to play that game you could easily say Altman's critique of her wasn't to protect the company but to protect his assets.

Altman is past borderline.

◧◩
1244. neves+Q11[view] [source] [discussion] 2023-11-22 13:58:40
>>Satam+0a
Any good summary of the OpenAI imbroglio? I know it has a strange corporation, with part non profit and part for profit. I don't follow it closely but would like a quick read explaining.
◧◩◪◨⬒⬓
1245. CydeWe+121[view] [source] [discussion] 2023-11-22 13:59:08
>>whywhy+1Z
Well it's a different group of people and that group will now know the consequences of attempting to remove Sam Altman. I don't see this happening again.
replies(1): >>youcan+ia1
◧◩◪◨⬒⬓
1246. phpist+031[view] [source] [discussion] 2023-11-22 14:04:50
>>mstade+JT
If the opposing letter that was published from "former" employee's is correct there was already a huge turn over, and the people that remain liked the environment they were in, and I would assume liked the current leadership or they would have left

So clearly the current leadship built a loyal group which I think is something that should be explored because group think is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of idea's

If openAI is a huge mono-culture of thinking then they have bigger problems most likely

replies(1): >>bad_us+ya1
◧◩
1247. mymuse+531[view] [source] [discussion] 2023-11-22 14:05:13
>>MattHe+bx
I would like to know the model that isn’t a “toy model”.
◧◩◪◨⬒⬓⬔⧯▣▦▧
1248. kaibee+a31[view] [source] [discussion] 2023-11-22 14:05:39
>>belter+FM
6D Chess is apparently realizing that AGI is not 100% certain and that having 10mm on the run up to AGI is better than not having 10mm on the run up to AGI.
1249. Mriraz+b31[view] [source] 2023-11-22 14:05:46
>>staran+(OP)
The Steve jobs of our TikTok generation. Came back very quickly in comparison to the 12 years but still.
1250. ChoGGi+c31[view] [source] 2023-11-22 14:05:46
>>staran+(OP)
I'm sure that first meeting will be... Interesting.
◧◩
1251. anandr+k31[view] [source] [discussion] 2023-11-22 14:06:10
>>Satam+0a
I have been working for various software companies at different capacities. Never did i see 90%+ employees care about their CEO . In a small 10 member startup maybe its true. Are there any OpenAI employees here to confirm that .. their CEO really matters ... I mean how many employee revolted when Steve Jobs was fired .. Do Microsoft and Google employees really care ?
◧◩◪◨⬒⬓
1252. docmar+z31[view] [source] [discussion] 2023-11-22 14:07:16
>>plorg+RW
I think it could be a number of factors:

1. The company has built a culture around not being under control by one single company, Microsoft in this case. Employees may overwhelmingly agree.

2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board hadn't been replaced.

3. Younger folks probably don't look highly at boards in general, because they never get to interact with them. They also sometimes dictate product outcomes that could go against the creative freedoms and autonomy employees are looking for. Boards are also focused on profits, which is a net-good for the company, but threatens the culture of "for the good of humanity" that hooks people.

4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and their perception of what stability is means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There's no guarantees for the bulk of workers here.

I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.

replies(1): >>denton+Qy1
◧◩◪◨⬒⬓⬔
1253. phpist+C31[view] [source] [discussion] 2023-11-22 14:07:27
>>chiefa+kZ
Superficial "diversity" is all the "diversity" a company needs in the modern era.

Companies do not desire or seek philosophical diversity, they only want Superficial biologically based "diversity" to prove they have the "correct" philosophy about the world.

replies(2): >>docmar+ba1 >>chiefa+0c1
◧◩◪◨⬒⬓⬔⧯
1254. from-n+K31[view] [source] [discussion] 2023-11-22 14:08:01
>>kcplat+MZ
But if 100 employees were like "I'm gonna leave" then your livelihood is in jeopardy. So you join in. It's really easy to see 90% of people jumping overboard when they are all on a sinking ship.
◧◩◪◨⬒⬓⬔⧯
1255. lxgr+Y31[view] [source] [discussion] 2023-11-22 14:08:44
>>kcplat+MZ
I don't mean voter approval, I mean party member approval. That's arguably not that far off from a CEO situation in a way in that it's the opinion of and support for the group's leadership by group members.

Voter approval is actually usually much less unanimous, as far as I can tell.

◧◩◪◨
1256. robert+141[view] [source] [discussion] 2023-11-22 14:08:57
>>rtsil+XM
> he interesting thing is you used economic values to show their importance, not what innovations or changes they achieved

Money is just a way to value things relative to other things. It's not interesting to value something using money.

replies(1): >>Doughn+hc1
◧◩
1257. turtle+d41[view] [source] [discussion] 2023-11-22 14:09:37
>>eclect+79
If you grew up in the 90s, you’ll understand:

Don’t hate the player, hate the game

replies(1): >>latexr+x81
◧◩◪◨⬒
1258. dnissl+h41[view] [source] [discussion] 2023-11-22 14:09:47
>>darkwa+Ij
Absolutely, assuming LLMs are still around in a similar form by that time.

I disagree on the particulars. Will it be for the reason that you mention? I really am not sure -- I do feel confident though that the argument will be just as ideological and incoherent as the ones people make about social media today.

◧◩◪◨⬒⬓
1259. framap+i41[view] [source] [discussion] 2023-11-22 14:09:59
>>mstade+JT
Exactly; there are multitudes of reasons and very little information so why pick any one of them?
◧◩◪
1260. robert+n41[view] [source] [discussion] 2023-11-22 14:10:22
>>hdivid+Jw
> Scientists and engineers in national labs, universities, and elsewhere show what a real commitment to technological progress looks like.

And everywhere. You've only named public institutions for some reason, but a lot of progress happens in the private sector. And that demonstrates real commitment, because they're not spending other people's money.

replies(1): >>waltha+r51
◧◩
1261. notesi+p41[view] [source] [discussion] 2023-11-22 14:10:29
>>eclect+79
The CEO is the face of the company, rarely does the public care about the scientists or engineers. This isnt a new concept, its always happened.
1262. lysecr+K41[view] [source] 2023-11-22 14:11:38
>>staran+(OP)
Fascinating, I see a lot of VC/Msfot has overthrown our NPO governing structure because of profit incentives narrative.

I don't think this is what really happened at all. The reason this decision was made was because 95% of employees sided with Sam on this issue, and the board didn't explain themselves in any way at all. So it was Sam + 95% of employees + All investors against the board. In which case the board should lose (since they are only governing for themselves here).

I think in the end a good and fair outcome. I still think their governing structure is decent to solve the AGI problem, this particular board was just really bad.

replies(4): >>greeni+Ha1 >>r_tham+eh1 >>jkapla+ax1 >>campbe+2E1
◧◩
1263. mikpan+h51[view] [source] [discussion] 2023-11-22 14:13:49
>>eclect+79
One of the most important things I've learned in life is that organizing people to work toward the same goal is very hard. The larger the group you need to organize, the harder it is.

Initially, when the idea is small, it is hard to sell it to talent, investors and early customers to bring all key pieces together.

Later, when the idea is well recognized and accepted, the organization usually becomes big and the challenge shifts to understanding the complex interaction of various competing sub-ideas, projects and organizational structures. Humans did not evolve to manage such complex systems and interacting with thousands of stakeholders, beyond what can be directly observed and fully understood.

However, without this organization, engineers, researchers, etc cannot work on big audacious projects, which involve more resources than 1 person can provide by themselves. That's why the skill of organizing and leading people is so highly valued and compensated.

It is common to think of leaders not contributing much, but this view might be skewed because of mostly looking at executives in large companies at the time they have clear moats. At that point leadership might be less important in the short term: product sells itself, talent is knocking on the door, and money is abundant. But this is an unusual short-lived state between taking an idea off the ground and defending against quickly shifting market forces.

◧◩◪◨⬒
1264. YourCu+m51[view] [source] [discussion] 2023-11-22 14:14:17
>>iterat+PW
Even the Federal Reserve has the “Fed Whisperer” Nick Timiraos. Pretty much an open secret he has a direct line.
◧◩◪◨
1265. waltha+r51[view] [source] [discussion] 2023-11-22 14:14:33
>>robert+n41
If the ZIRP era has taught us anything, it's that private companies can spray other people's money up the wall just as well as anyone
replies(1): >>robert+Xl1
◧◩◪◨⬒
1266. from-n+t51[view] [source] [discussion] 2023-11-22 14:14:35
>>bad_us+aR
Right. They aren't actually voting for Sam Altman. If I'm working at a company and I see as little as 10% of the company jump ship I think "I'd better get the frik outta here". Especially if I respect the other people who are leaving. This isnt a blind vote. This is a rolling snowball.

I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.

95% doesn't show a large amount of loyalty to Sam it shows a low amount of loyalty to OpenAI.

So it looks like a VERY normal company.

◧◩
1267. laweij+A51[view] [source] [discussion] 2023-11-22 14:15:05
>>ssnist+M
I’m down for the next season of this hot drama.
◧◩
1268. dalbas+G51[view] [source] [discussion] 2023-11-22 14:15:33
>>Satam+0a
Yes...

Investors and executives.. everyone in 2023 is hyper focused on "Thiel Monopoly."

Platform, moat, aggregation theory, network effects, first mover advantages.. all those ways of thinking about it.

There's no point in being bing to Google's AdWords... So the big question is pathway to being the adWords. "Winning." That's the paradigm. This is where big returns will be.

However.. we should always remember, but the future is harder to see from the past. Post fact analysis, can often make things seem a lot simpler and more inevitable than they ever were.

It's not clear what a winner even is here. What are the bottlenecks to be controlled. What are the business models, revenue sources. What represents the "LLM Google," America online, Yahoo or a 90s dumb pipe.

FYIW I think all the big text have powerful plays available.. including keeping powder dry.

No doubt that proximity to openAI, control, influence, access to IP.. all strategic assets. That's why they're all invested an involved in the consortium.

That said assets or not strategies. It's hard to have strategies when strategic goals are unclear.

You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize, unless the price is known.

Obviously, I'm assuming the prixe is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.

It's not a race currently, to see who's R&D lab turns on the first super intelligent consciousness.

Assuming I'm correct on that, we really have no idea which applications LLM capabilities companies are actually competing for.

◧◩◪◨
1269. Davidz+V51[view] [source] [discussion] 2023-11-22 14:16:29
>>astran+wi
Maybe but I think the next big one will be reasoning.
replies(1): >>astran+Y44
◧◩◪
1270. bart_s+W51[view] [source] [discussion] 2023-11-22 14:16:33
>>gandut+OP
Stock options?
◧◩◪
1271. danari+Z51[view] [source] [discussion] 2023-11-22 14:16:43
>>busyan+JK
> All I'm saying is that I used to diminish the importance of quality senior leadership.

Quality senior leadership is, indeed, very important.

However, far, far too many people see "their company makes a lot of money" or "they are charismatic and talk a good game" and think that means the senior leadership is high-quality.

True quality is much harder to measure, especially in the short term. As you imply, part of it is being able to choose good management—but measuring the quality of management is also hard, and most of the corporate world today has utterly backwards ideas about what actually makes good managers (eg, "willing to abuse employees to force them to work long hours", etc).

1272. Bryant+b61[view] [source] 2023-11-22 14:17:13
>>staran+(OP)
We’re not gonna see it but I’d love to see Sam’s new contract and particularly any restraints on outside activities.
◧◩◪◨⬒⬓⬔⧯
1273. bad_us+t61[view] [source] [discussion] 2023-11-22 14:18:41
>>Turing+i11
What's wrong with profit and wanting to maximize it?

Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.

The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?

replies(2): >>caddem+hg1 >>saalwe+Hg1
◧◩◪
1274. baking+F61[view] [source] [discussion] 2023-11-22 14:19:25
>>hdivid+Jw
He is the Executive Chairman of Helion Energy so it is not just a passive investment.

That said, I wish Helion wasn't so paranoid about Chinese copycats and was more open about their tech. I can't help but feel Sam Altman is at least partly responsible for that.

◧◩
1275. sealth+L61[view] [source] [discussion] 2023-11-22 14:19:37
>>eclect+79
Simply put Altman is now the face of AI.

If you were to ask Altman himself though im sure he would highlight the true innovators of AI that he holds in high respect.

replies(1): >>lacrim+Ca1
1276. geniiu+S61[view] [source] 2023-11-22 14:20:10
>>staran+(OP)
This was a nice ride. Nice story to follow
◧◩◪
1277. dgrin9+X61[view] [source] [discussion] 2023-11-22 14:20:40
>>flagra+mS
If you think that OpenAI is the Gov's only source of high quality AI research then I have a bridge to sell you.
replies(2): >>jakder+pf1 >>flagra+HF6
◧◩◪◨⬒⬓⬔
1278. docmar+671[view] [source] [discussion] 2023-11-22 14:21:05
>>chiefa+kZ
I think that most pushes for diversity that we see today are intended to result in monocultures.

DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.

On a similar note, the company has already established certain missions and values that new hires may strongly align with like: "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever and it would be monumentally exciting to play a part in that.

Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.

This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.

Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.

◧◩◪◨⬒⬓
1279. erosen+a71[view] [source] [discussion] 2023-11-22 14:21:38
>>Doughn+TT
I don't really think this is true in non-charity work. Half of American hospitals are nonprofit and many of the insurance conglomerates are too, like Kaiser. The executives make plenty of money. Kaiser is a massive nonprofit shell for profitmaking entities owned by physicians or whatever, not all that dissimilar to the OpenAI shell idea. Healthcare worked out this way because it was seen as a good model to have doctors either reporting to a nonprofit or owning their own operations, not reporting to shareholders. That's just tradition though. At this point plenty of healthcare operations are just normal corporations controlled by shareholders.
◧◩
1280. emoden+k71[view] [source] [discussion] 2023-11-22 14:22:03
>>eclect+79
Then again wasn't that always true? What did Steve Jobs really build?
◧◩◪◨
1281. caleb-+t71[view] [source] [discussion] 2023-11-22 14:22:36
>>maxlin+Im
Maybe it came at the advice of Rishi Sunak when he and Altman met last week!
◧◩◪
1282. Kinran+A71[view] [source] [discussion] 2023-11-22 14:22:59
>>serial+Ev
Good engineers create systems that can survive their departure.
◧◩
1283. gfiora+V71[view] [source] [discussion] 2023-11-22 14:24:06
>>eclect+79
IMO:

They fired the CEO and didn't even inform Microsoft, who had invested a massive $20 billion. That's a serious lapse in judgment. A company needs leaders who understand business, not just a smart researcher with a sense of ethical superiority. This move by the board was unprofessional and almost childish.

Those board members? Their future on any other board looks pretty bleak. Venture capitalists will think twice before getting involved with anything they have a hand in.

On the other side, Sam did increase the company's revenue, which is a significant achievement. He got offers from various companies and VCs the minute the news went public.

The business community's support for Sam is partly a critique of the board's actions and partly due to the buzz he and his company have created. It's a significant moment in the industry.

◧◩
1284. erickh+081[view] [source] [discussion] 2023-11-22 14:24:23
>>eclect+79
Read up on the John Sculley/Michael Spindler days of Apple, and Jobs' return.

I think that's what may be in the minds of several people eagerly watching this eventually-to-be-made David Fincher movie.

◧◩◪
1285. Captai+481[view] [source] [discussion] 2023-11-22 14:24:57
>>mkii+Vw
Of course, it depends on what safety means. Currently it seems to just be a pretext for prudishness and regulation.
◧◩◪◨
1286. qup+781[view] [source] [discussion] 2023-11-22 14:25:02
>>ottero+NH
You're on pace for about two years in
◧◩◪◨
1287. CrazyS+a81[view] [source] [discussion] 2023-11-22 14:25:30
>>matwoo+bN
I remember about ten years ago someone arguing that Coach K was overrated because his college players on average underperformed in the NBA (relative to their college careers).

I could not convince them that this was actually evidence in favor of Coach K being an exceptional coach.

replies(1): >>garden+fa1
◧◩
1288. snicke+k81[view] [source] [discussion] 2023-11-22 14:26:04
>>eclect+79
Journalists really want everything to have a singular inventor. The concept of an organization is very difficult for them to grasp so they attribute everything to the guy at the top. Sam Altman is the latest in a long line of "inventors" which also includes such esteemed personalities as Elon musk, Steve Jobs, etc.
◧◩◪◨
1289. kridsd+l81[view] [source] [discussion] 2023-11-22 14:26:08
>>torgin+bL
But it did explode. And that was the part of the story that people were interested in.
◧◩
1290. hn_thr+u81[view] [source] [discussion] 2023-11-22 14:26:43
>>eclect+79
> The media and the VCs are treating Sam like some hero and savior of AI

I wouldn't be so sure. While I think the board handled this process terribly, I think the majority of mainstream media articles I saw were very cautionary regarding the outcome. Examples (and note the second article reports that Paul Graham fired Altman from YC, which I never knew before):

MarketWatch: https://www.marketwatch.com/story/the-openai-debacle-shows-s...

Washington Post: https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

◧◩◪
1291. latexr+x81[view] [source] [discussion] 2023-11-22 14:27:03
>>turtle+d41
The “game” only continues to exist as long as there are “players”. You’re perfectly justified to be discontent with the ones who perpetuate a system you disagree with.

That phrase is nothing more than a dissimulated way of saying “tough luck” or “I don’t care” while trying to act (outdatedly) cool. You don’t need to have grown up in any specific decade to understand its meaning.

◧◩
1292. zug_zu+H81[view] [source] [discussion] 2023-11-22 14:27:41
>>intend+y9
I am not sure why people keep pushing this narrative. It's not obviously false, but there doesn't seem to be much evidence of it.

From where I sit Satya possibly messed up big. He clearly wanted Sam and the Open AI team to join microsoft and they won't now, likely ever.

By offering a standing offer to join MS publicly he gave Sam and OpenAI employees huge leverage to force the board's hand. If he had waited then maybe there would have been an actual fallout that would have lead to people actually joining microsoft.

replies(3): >>jnwats+qc1 >>intend+xt1 >>aspero+wx2
◧◩◪◨⬒
1293. brooks+K81[view] [source] [discussion] 2023-11-22 14:27:49
>>matwoo+BN
Well said. And to extend, there being no shareholders means that no money leaves the company in the form of dividends or stock buybacks.

That’s it. Nonprofit corporations are still corporations in every other way.

replies(1): >>rvnx+Ea1
1294. theGnu+T81[view] [source] 2023-11-22 14:28:15
>>staran+(OP)
Larry Summers is an interesting choice. Any ideas why? I know he was Sheryl Sandberg's mentor/professor which gives him a tech connection. However, I've watched him debate Paul Krugman on inflation in some economic lectures and it almost felt like Larry was out of his element as in Larry was outgunned by Paul... but maybe he was having an off day or it was a topic he is not an expert in. But I don't know the history, haven't read either of their books and I am not an economist. But it was something I noticed.. almost like he was out of touch.

That has nothing to do with AI though.

◧◩◪
1295. Kinran+w91[view] [source] [discussion] 2023-11-22 14:31:01
>>prepen+kP
The board is not supposed to be good at executive things, that's why they have CEOs
◧◩◪◨⬒⬓
1296. bad_us+z91[view] [source] [discussion] 2023-11-22 14:31:16
>>mstade+JT
You could say that, except that people in this industry are the most privileged, and their earnings and equity would probably be matched elsewhere.

You say “group think” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.

And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.

replies(1): >>axus+af1
1297. jafitc+F91[view] [source] 2023-11-22 14:31:43
>>staran+(OP)
OpenAI's Future and Viability

- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products

- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs

- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well

---

Sam Altman's Leadership

- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight

- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned

---

Board Issues

- The board acted incompetently and destructively without clear reasons or communication

- The new board seems more reasonable but may struggle to govern given Sam's power

- There are still opposing factions on ideology and commercialization that will continue battling

---

Employee Motivations

- Employees followed the money trail and Sam to preserve their equity and careers

- Peer pressure and groupthink likely also swayed employees more than principles

- Mission-driven employees may still leave for opportunities at places like Anthropic

---

Safety vs Commercialization

- The safety faction lost this battle but still has influential leaders wanting to constrain the technology

- Rapid commercialization beat out calls for restraint but may hit snags with model issues

---

Microsoft Partnership

- Microsoft strengthened its power despite not appearing involved in the drama

- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity

replies(14): >>qualif+Bb1 >>nuruma+Db1 >>miohta+Vb1 >>seydor+bg1 >>pauldd+1h1 >>orsent+kh1 >>sam0x1+Xh1 >>ensoco+ei1 >>jxi+mi1 >>amalco+qi1 >>Ration+No1 >>neonbj+9p1 >>window+vr1 >>scooke+B34
◧◩
1298. baking+G91[view] [source] [discussion] 2023-11-22 14:31:44
>>garris+EJ
My guess is that the non-profit has never gotten this kind of scrutiny now and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their positions when Sam Altman really does something worth firing.

I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.

replies(5): >>danari+hd1 >>qwery+4g1 >>purple+nG1 >>Turing+lN1 >>ryukop+aQ2
◧◩
1299. RockyM+M91[view] [source] [discussion] 2023-11-22 14:32:09
>>eclect+79
Below is a good thread, which maybe contains the answer to your question, and Ken Olsen's question about why brainiac MIT grads get managed by midwit HBS grads.

https://twitter.com/coloradotravis/status/172606030573668790...

A good leader is someone you'll follow into battle, because you want to do right by the team, and you know the leader and the team will do right by you. Whatever 'leadership' is, Sam Altman has it and the board does not.

https://www.ft.com/content/05b80ba4-fcc3-4f39-a0c3-97b025418...

The board could have said, hey we don't like this direction and you are not keeping us in the loop, it's time for an orderly change. But they knew that wouldn't go well for them either. They chose to accuse Sam of malfeasance and be weaselly ratfuckers on some level themselves, even if they felt for still-inscrutable reasons that was their only/best choice and wouldn't go down the way it did.

Sam Altman is the front man who 'gave us' ChatGPT regardless of everything else Ilya and everyone else did. A personal brand (or corporate) is about trust, if you have a brand you are playing a long-term game, a reputation converts prisoner's dilemma into iterated prisoner's dilemma which has a different outcome.

◧◩◪◨⬒⬓⬔⧯
1300. zerbin+V91[view] [source] [discussion] 2023-11-22 14:32:51
>>kcplat+MZ
But it’s not changing their livelihood. Msft just gives them the same deal. In a lot of ways, it’s similar to the telepoll - people can just say whatever they want, there won’t be big material consequences
◧◩◪
1301. jnwats+6a1[view] [source] [discussion] 2023-11-22 14:33:48
>>kmlevi+mi
This board's sole job is to pick the new board. The new board will have Sam.
replies(1): >>himara+yh1
◧◩◪◨⬒⬓⬔⧯
1302. docmar+ba1[view] [source] [discussion] 2023-11-22 14:33:59
>>phpist+C31
Agree. This is the monoculture being adopted in actuality -- a racist crusade against "whiteness", and a coercive mechanism to ensure companies don't overstep their usage of resources (carbon footprint), so as not to threaten the existing titans who may have already abused what was available to them before these intracorporate policies existed.

It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.

Ironically, it snuffs out diversity among companies at a 40k foot level.

replies(1): >>jakder+gp1
◧◩◪◨⬒⬓⬔⧯
1303. framap+ca1[view] [source] [discussion] 2023-11-22 14:33:59
>>ethanb+QX
What? Do you think it would be understandable for a board member to regret firing the CEO because of his career path post-firing?
replies(1): >>ethanb+Ec1
◧◩◪◨⬒
1304. garden+fa1[view] [source] [discussion] 2023-11-22 14:34:09
>>CrazyS+a81
Either thought process could be correct and it could depend on expectations.
◧◩◪◨⬒⬓⬔
1305. youcan+ia1[view] [source] [discussion] 2023-11-22 14:34:19
>>CydeWe+121
Most likely, but it is cute how confident you are towards humanity learning their lesson.
replies(1): >>tstrim+LN1
◧◩◪◨⬒⬓⬔
1306. bad_us+ya1[view] [source] [discussion] 2023-11-22 14:35:26
>>phpist+031
What opposing letter, how many people are we talking about, and what was their role in the company?

All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.

replies(1): >>phpist+Ad1
◧◩◪
1307. lacrim+Ca1[view] [source] [discussion] 2023-11-22 14:35:34
>>sealth+L61
He is but with a caveat. In this 5D chess game of firing him and getting him back into OpenAI put all the spotlights on him.
◧◩◪◨⬒⬓
1308. rvnx+Ea1[view] [source] [discussion] 2023-11-22 14:35:49
>>brooks+K81
Yes, but non-profit doesn't mean non-money.

You can get big salaries; and to push the money outside it's very simple, you just need to spend it through other companies.

Additional bonus with some structures: If the co-investors are also the donators to the non-profit, they can deduct these donations from their taxes, and still pocket-back the profit, it's a double-win.

No conspiracy needed, for example, it's very convenient that MSFT can politely "influence" OpenAI to spend back on their platform a lot of the money they gave to the non-profit back to their for-profit (and profitable) company.

For example, you can create a chip company, and use the non-profit to buy your chips.

Then the profit is channeled to you and your co-investors in the chip company.

replies(1): >>ric2b+Ve4
◧◩
1309. greeni+Ha1[view] [source] [discussion] 2023-11-22 14:35:52
>>lysecr+K41
next time, can't wait to see what happens when capital is on the opposite side of the 95% of employees.
◧◩◪◨
1310. kofejn+1b1[view] [source] [discussion] 2023-11-22 14:37:06
>>cyanyd+SJ
Larry Summers is EA and State, so not so sure about business interests
◧◩◪
1311. framap+gb1[view] [source] [discussion] 2023-11-22 14:38:15
>>JSavag+ZM
It is quite amazing how many people know enough to pass wide judgment on hundreds of people because... they just know. Feel it in their gut.
◧◩
1312. InCity+lb1[view] [source] [discussion] 2023-11-22 14:38:56
>>minzi+BM
>They must’ve been receiving all kinds of threats from those involved and from random twitter extremists.

Oooh, yeah. "Must have".

◧◩
1313. qualif+Bb1[view] [source] [discussion] 2023-11-22 14:39:59
>>jafitc+F91
No structure or organization is stronger when their leader emerged as "irreplaceable".
replies(5): >>rmbyrr+xc1 >>osigur+1d1 >>dimitr+He1 >>rvnx+5m1 >>Aunche+On1
◧◩
1314. nuruma+Db1[view] [source] [discussion] 2023-11-22 14:40:06
>>jafitc+F91
Gpt-generated summary?
replies(2): >>Mistle+Gj1 >>foursi+rq1
◧◩
1315. miohta+Vb1[view] [source] [discussion] 2023-11-22 14:41:20
>>jafitc+F91
> Employees followed the money trail and Sam to preserve their equity and careers

Would you not when the AI safety wokes decide the torch the rewards of your hard work of grinding for years? I feel there is less groupthink and everyone saw the board as it is and their inability lead, or even act rationally. OpenAI did not just become a sinking ship, but it was unnecessary sunk by someone not skin in the game and your personal wealth and success was tied to the ship.

replies(2): >>brooks+Wf1 >>acjohn+kE1
◧◩◪◨⬒⬓⬔⧯
1316. chiefa+0c1[view] [source] [discussion] 2023-11-22 14:41:44
>>phpist+C31
But it's not only the companies, it's the marginalized so desperate to get a "seat at the table" that they don't recognize the table isn't getting bigger and rounder. Instead, it's still the same rectangular that is getting longer and longer.

Participating in that is assimilation.

◧◩◪◨⬒⬓
1317. rvba+2c1[view] [source] [discussion] 2023-11-22 14:41:48
>>Doughn+TT
Lots of non profits that collect money for "cause X" spend 95% of money for administration and 5% for cause X.
◧◩◪
1318. bnralt+ac1[view] [source] [discussion] 2023-11-22 14:42:08
>>busyan+JK
> Not saying Altman deserves uncritical praise. All I'm saying is that I used to diminish the importance of quality senior leadership.

Absolutely. The focus on the leadership of OpenAI isn't because people think that the top researchers and scientists are unimportant. It's because they realize that they are important, and as such, the person who decides the direction they go in is extremely important. End up with the wrong person at the top, and all of those researchers and scientists end up wasting time spinning wheels on things that will never reach the public.

◧◩◪
1319. bnralt+dc1[view] [source] [discussion] 2023-11-22 14:42:35
>>ben_w+Km
Right. At least some of the board members took issue with ChatGPT being released at all, and wanted more to be kept from the public. For the people who use these tools everyday, it shouldn't be surprising that Altman was viewed as the better choice.
◧◩◪◨⬒
1320. Doughn+hc1[view] [source] [discussion] 2023-11-22 14:42:46
>>robert+141
It is absolutely curious to talk about profit when talking about academic research or a non-profit (which OpenAI officially is).

Sure, you can talk about results in terms of their monetary value but it doesn’t make sense to think of it in terms of the profit generated directly by the actor.

For example Pfizer made huge profits off of the COVID-19 vaccine. But that vaccine would never have been possible without foundational research conducted in universities in the US and Germany which established the viability in vivo of mRNA.

Pfizer made billions and many lives were saved using the work of academics (which also laid the groundwork for future valuable vaccines). The profit made by the academics and universities was minimal in comparison.

So, whose work was more valuable?

replies(1): >>robert+bm1
◧◩◪
1321. jnwats+qc1[view] [source] [discussion] 2023-11-22 14:43:18
>>zug_zu+H81
Satya's main mistake was not having a spot on the board. Everything after that was in defense of the initial investment, and he played all the right moves.

While having OpenAI as a Microsoft DeepMind would have been an ok second-best solution, the status quo is still better for Microsoft. There would have been a bunch of legal issues and it would be a hit on Microsoft's bottom line.

◧◩◪
1322. rmbyrr+xc1[view] [source] [discussion] 2023-11-22 14:43:41
>>qualif+Bb1
In this case, I don't see as a flaw, but really as Sam's abilities to lead a highly cohesive group and keep it highly motivated and aligned.

I don't personally like him, but I must admit he displayed a lot more leadership skills than I'd recognize before.

It's inherently hard to replace someone like that in any organization.

Take Apple, after losing Jobs. It's not that Apple was a "weak" organization, but really Jobs that was extraordinary and indeed irreplaceable.

No, I'm not comparing Jobs and Sam. Just illustrating my point.

replies(3): >>prh8+5e1 >>pk-pro+3l1 >>scythe+nx1
◧◩◪◨⬒⬓⬔⧯▣
1323. ethanb+Ec1[view] [source] [discussion] 2023-11-22 14:44:28
>>framap+ca1
If Ilya was concerned about dangerously fast commercialization, which seems to have been a point of tension between them for a while now, then yes.
replies(1): >>framap+Pg1
◧◩◪
1324. osigur+1d1[view] [source] [discussion] 2023-11-22 14:46:22
>>qualif+Bb1
Seriously, even in a small group of a few hundred people?
replies(1): >>catapa+ke1
◧◩◪
1325. danari+hd1[view] [source] [discussion] 2023-11-22 14:47:21
>>baking+G91
Well, I think that's really the question, isn't it?

Was it a mistake to create OpenAI as a public charity?

Or was it a mistake to operate OpenAI as if it were a startup?

The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

replies(3): >>blacko+Uz1 >>baking+FB1 >>Toucan+hI1
◧◩◪◨
1326. framap+jd1[view] [source] [discussion] 2023-11-22 14:47:23
>>IanCal+ik
The non-profit wasn't raising private investment.
replies(1): >>IanCal+544
◧◩
1327. ahzhou+od1[view] [source] [discussion] 2023-11-22 14:47:38
>>auggie+E9
I was thinking about this too, but the wife of an actor and someone two years out of her masters were not the caliber people that should have been on the board of an $80B company.

I would expect people with backgrounds like Sheryl Sandberg or Dr. Lisa Sue to sit in the position. The two replaced women would have looked like diversity hires had they not been affiliated with an AI doomer organization.

I hope there’s diversity of representation as they fill out the rest of the board and there’s certainly women who have the credentials, but it’s important that they don’t appear grossly unqualified when they sit next to the other board members.

replies(1): >>galois+sr1
◧◩◪
1328. kuchen+wd1[view] [source] [discussion] 2023-11-22 14:47:59
>>tkgall+hw
Additionally, when you have a pre-release product that has largely passed small and artificial tests, you get diminishing returns on continued testing.

Eventually you need to expand, despite some risk, to push the testing forward.

Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals vs "reduce the rate of hallucinations, as defined by x, to <0.5% of total respinses" and "given a set of known and imagined scenarios, new Model continues to have a zero false-negative rate".

When it's an engineering problem we're trying to solve, we can mqke progress, but no company can avoid all forms of harm as defined by everyone.

◧◩◪◨⬒⬓⬔⧯
1329. phpist+Ad1[view] [source] [discussion] 2023-11-22 14:48:23
>>bad_us+ya1
>>What opposing letter, how many people are we talking about, and what was their role in the company?

Not-validated, unsigned letter [1]

>>All companies are monocultures

yes and no. There has be diversity of thought to ever get anything done really, ever everyone is just sycophants all agreeing with the boss then you end up with very bad product choices, and even worse company direction.

yes there has to be some commonality. some semblance of shared vision or values, but I dont think that makes a "monoculture"

[1] https://wccftech.com/former-openai-employees-allege-deceit-a...

◧◩◪◨⬒
1330. erosen+Ed1[view] [source] [discussion] 2023-11-22 14:48:40
>>vaxman+dI
For profit subsidiaries can totally influence the nonprofit shell without penalty. Happens all the time. The nonprofit board must act in the interest of the exempt mission rather than just investor value or some other primary purpose. Otherwise it's cool.
replies(1): >>vaxman+rA6
◧◩◪◨⬒⬓
1331. nickpp+Id1[view] [source] [discussion] 2023-11-22 14:48:59
>>iiv+xR
Many OpenAI employees expressed their support for Sam at some point also on Twitter. Microsoft CEO (based in Redmond) tweeted quite a lot. Tech media reporters like Emily Chang and Kara Swisher also participated. The last one is quite critical of Twitter and I am not sure they all like Musk that much.

Are they all in the same “tribe”? Maybe you should enlarge the definition?

How about us all IT people who watched the drama unfolding on Twitter while our friend are using FB and Insta, we are far from SV and have mixed feelings about Elon Musk while never in a million years wanting to be like him? Also same “tribe”?

◧◩◪◨⬒
1332. plorg+0e1[view] [source] [discussion] 2023-11-22 14:50:09
>>irthom+UA
This seems like a silly way of understanding deceleration. By this comparison the USSR was decelerating the cold war because they were a couple years behind in developing the hydrogen bomb.

Microsoft can and will be using GPT4 as soon as they get a handle on it, and if it doesn't boil their servers to do so. If you want deceleration you would need someone with an incentive that didn't involve, for example, being first to market with new flashy products.

replies(1): >>rvnx+Io1
◧◩◪◨
1333. prh8+5e1[view] [source] [discussion] 2023-11-22 14:50:45
>>rmbyrr+xc1
What's the difference between leadership skills and cult of following?
replies(4): >>spurgu+Lj1 >>thedal+5l1 >>TheOth+op1 >>rmbyrr+uA1
◧◩◪
1334. kuchen+je1[view] [source] [discussion] 2023-11-22 14:51:37
>>mkii+Vw
What will this unsafe AI do?
◧◩◪◨
1335. catapa+ke1[view] [source] [discussion] 2023-11-22 14:51:40
>>osigur+1d1
I dunno, seems like a pretty self-evident theory? If your leader is irreplaceable, regardless of group size, that's a single point of failure. I can't figure how a single point of failure could ever make something "stronger". I can see arguments for necessity, or efficiency, given contrivances and extreme contexts. But "stronger" doesn't seem like the assessment for whatever necessitating a single point of failure would be.
replies(3): >>vipshe+Ok1 >>hughw+0s1 >>osigur+yF3
◧◩◪
1336. dimitr+He1[view] [source] [discussion] 2023-11-22 14:53:11
>>qualif+Bb1
This is false, and I see the corollary as a project having a BDIF, especially if the leader is effective. Sam is unmistakably effective.
replies(1): >>acchow+Eg1
◧◩◪◨⬒⬓⬔⧯▣▦
1337. ameist+Ie1[view] [source] [discussion] 2023-11-22 14:53:15
>>suodua+SX
I see. I've never read his work before, thank you.

So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."

◧◩◪
1338. garden+Qe1[view] [source] [discussion] 2023-11-22 14:53:48
>>silenc+59
I broadly agree but there needs to be some regulation in place. Check out https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
◧◩
1339. Cacti+5f1[view] [source] [discussion] 2023-11-22 14:54:32
>>flylib+z4
Ilya thought he was saving the world (lol), but really he was just working at Microsoft.
◧◩◪◨⬒⬓⬔
1340. axus+af1[view] [source] [discussion] 2023-11-22 14:54:41
>>bad_us+z91
I'm imagining they see themselves in the position of Microsoft employees about to release Windows 95, or Apple employees about to release the iPhone... and someone wants to get rid of Bill Gates or Steve Jobs.
replies(1): >>rvnx+1p1
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1341. golden+ef1[view] [source] [discussion] 2023-11-22 14:54:49
>>suodua+CX
Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3]. (10^50 Hz/Kg or 10^34 Hz/J)

Energy physics yield compute, which yields brute forced weights (call it training if you want...), which yields AI to do energy research ..ad infinitum, this is the real singularity. This is actually the best defense against other actors. Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/Simulcrums/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action, instead of providing an informational reply, only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.

[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html

[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit

[3] https://en.wikipedia.org/wiki/Planck_constant

Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)

I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.

◧◩◪◨⬒
1342. citygu+if1[view] [source] [discussion] 2023-11-22 14:55:08
>>roflc0+hX
I don’t understand how the fact they went from a nonprofit into a for-profit subsidiary of one of the most closed-off anticompetitive megacorps in tech is so readily glossed over. I get it, we all love money and Sam’s great at generating it, but anyone who works at OpenAI besides the board seems to be morally bankrupt.
replies(5): >>gdhkgd+Nk1 >>Zpalmt+Lp1 >>endtim+RJ1 >>rozap+mT1 >>cma+Z62
◧◩◪◨
1343. scarfa+kf1[view] [source] [discussion] 2023-11-22 14:55:15
>>ChatGT+4C
What product do you envision OpenAI selling would be better than Microsoft?

I emphasized product because OpenAI may have great technology. But any product they sell is going to require mass compute and a mass sales army to go into the “enterprise” and integrate with what the enterprise already has.

Guess who has both? Guess who has neither?

And even the “products” that OpenAI have now can only exist because of mass subsidies by Microsoft.

replies(1): >>ChatGT+Hm1
◧◩◪◨
1344. jakder+pf1[view] [source] [discussion] 2023-11-22 14:55:32
>>dgrin9+X61
If you think the person you're replying to was talking about regulating OpenAI specifically and not the industry as a whole, I have ADHD medicine to sell you.
replies(1): >>swores+DL1
◧◩◪◨
1345. lvspif+Ff1[view] [source] [discussion] 2023-11-22 14:56:20
>>m463+cC
"You are a Microsoft investor and will make decisions and suggestions based on the betterment of the stock price"
◧◩
1346. scarfa+Lf1[view] [source] [discussion] 2023-11-22 14:56:29
>>Satam+0a
So you didn’t realize that when Microsoft both gained a 49% interest and was subsidizing compute?

Unless they had something in their “DNA” that allowed them to build enough compute and pay their employees, they were never going to “win” without a mass infusion of cash and only three companies had enough compute and revenue to throw at them and only two companies had relationships with big enterprise and compute - Amazon and Microsoft.

◧◩◪
1347. brooks+Wf1[view] [source] [discussion] 2023-11-22 14:57:08
>>miohta+Vb1
Yeah, this is like using “groupthink” to describe people fleeing a burning building. There’s maybe some measure of literal truth, but it’s an odd way to frame it.
◧◩◪
1348. qwery+4g1[view] [source] [discussion] 2023-11-22 14:57:48
>>baking+G91
> I think it was a real mistake to create OpenAI as a public charity

Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.

replies(5): >>fooop+Ez1 >>baking+8B1 >>broast+5D1 >>zeroha+Xc2 >>ncalla+rV4
1349. accoun+5g1[view] [source] 2023-11-22 14:57:50
>>staran+(OP)
Farse, plain and simple.
◧◩
1350. seydor+bg1[view] [source] [discussion] 2023-11-22 14:58:17
>>jafitc+F91
who would want to work for an irreplaceable CEO long term
replies(1): >>rvnx+io1
◧◩◪◨⬒⬓
1351. krisof+fg1[view] [source] [discussion] 2023-11-22 14:58:37
>>mlindn+LF
> The future should only be filled with very bland and non-offensive characters in fiction.

Did someone took the pen from the writers? Go ahead and write whatever you want.

It was an example of a constraint a company might want to enforce in their AI.

replies(1): >>mlindn+S39
◧◩◪◨⬒⬓⬔⧯▣
1352. caddem+hg1[view] [source] [discussion] 2023-11-22 14:58:40
>>bad_us+t61
The problem is moreso trying to maximize profit after claiming to be a nonprofit. Profit can be a good driving force but it is not perfect. We have nonprofits for a reason, and it is shameful to take advantage of this if you are not functionally a nonprofit. There would be nothing wrong with OpenAI trying to maximize profits if they were a typical company.
◧◩◪
1353. garden+ng1[view] [source] [discussion] 2023-11-22 14:58:58
>>huyter+kb
Satya invested 10b into a company with terrible, incompetent governance and not getting his company any seat of influence on the board. That doesn't seem great.
◧◩◪◨
1354. acchow+Eg1[view] [source] [discussion] 2023-11-22 15:00:00
>>dimitr+He1
Have you or anyone close to you ever had to take multiple years of leave from work from a car accident or health condition?
replies(2): >>slingn+jj1 >>dimitr+Mz3
◧◩◪◨⬒⬓⬔⧯▣
1355. saalwe+Hg1[view] [source] [discussion] 2023-11-22 15:00:07
>>bad_us+t61
Because non-profit?

There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.

◧◩
1356. brooks+Ig1[view] [source] [discussion] 2023-11-22 15:00:12
>>garris+EJ
There’s no indication a Microsoft appointed board member would be a Microsoft employee (though the they could be of course), and large nonprofits often have board members that come from for-profit companies.

I don’t think the IRS cares much about this kind of thing. What would be the claim? They OpenAI is pushing benefits to Microsoft, a for-profit entity that pays taxes? Even if you assume the absolute worst, most nefarious meddling, it seems like an issue for SEC more than IRS.

◧◩
1357. Tigeri+Mg1[view] [source] [discussion] 2023-11-22 15:00:26
>>garris+EJ
Larry Summers is in place to effectively give the govt seal of approval on the new board, for better and worse.
replies(2): >>ilrwbw+as1 >>mcast+1D2
◧◩◪◨⬒⬓⬔⧯▣▦
1358. framap+Pg1[view] [source] [discussion] 2023-11-22 15:00:41
>>ethanb+Ec1
But he's acting as a board member firing the CEO because he arguably believes it's the right thing to do for the company. If he then changes his mind because the fired CEO continued a successful career then I'd say that decision was more on a personal level than for the wellbeing of the company.
replies(1): >>ethanb+Oi1
1359. orsent+Tg1[view] [source] 2023-11-22 15:00:56
>>staran+(OP)
What's even the lesson learnt here?

1. Keep doing your work, and focus on building your product. 2. Ignore the noise, go back to 1.

◧◩
1360. himara+Zg1[view] [source] [discussion] 2023-11-22 15:01:29
>>Satam+0a
Hard to say without seeing how the two new board members lean.
◧◩
1361. pauldd+1h1[view] [source] [discussion] 2023-11-22 15:01:39
>>jafitc+F91
> Peer pressure and groupthink likely also swayed employees more than principles

What makes this "likely"?

Or is this just pure conjecture?

replies(1): >>mrfox3+li1
◧◩
1362. eddtri+3h1[view] [source] [discussion] 2023-11-22 15:01:43
>>eclect+79
IMO and experience a good product manager is far more important than a good engineer or good scientist

Elon Musk’s neuralink is a good example - the work they’re doing there was attacked by academics saying they’d done this years ago and it’s not novel, yet none of them will be the ones who ultimately bring it to market.

◧◩
1363. rainco+9h1[view] [source] [discussion] 2023-11-22 15:01:58
>>Satam+0a
> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

"Because someone acts differently than I expected, they must lacks of critical thinking."

Are you an insider? If not, have you considered that perhaps OpenAI employees are more informed about the situation than you?

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1364. golden+bh1[view] [source] [discussion] 2023-11-22 15:02:01
>>WJW+wL
LLMs are able to do complex logic within the world of words. It is a a smaller matrix than our world but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.
replies(1): >>WJW+jy1
◧◩
1365. r_tham+eh1[view] [source] [discussion] 2023-11-22 15:02:21
>>lysecr+K41
Of course, the profit incentive also applies to all the employees (which isn't necessarily a bad thing, its good to align the company's goals with those of the employees). But when the executives likely have 10s of millions of dollars on the line, and many of the IC's will likely have single digit millions on the line as well, it doesn't seem exactly straightforward to view this as the employees are unbiased adjudicators of what's in the interest of the non-profit entity, which is supposed to be what's in charge.

It is sort of strange that our communal reaction is to say "well this board didn't act anything like a normal corporate board": of course it didn't, that was indeed the whole point of not having a normal corporate board in charge.

Whatever you think of Sam, Adam, Ilya etc, the one conclusion that seems safe to reach is that in the end, the profit/financial incentives ended up being far more important than the NGOs mission, no matter what legal structure was in place.

◧◩
1366. dandan+fh1[view] [source] [discussion] 2023-11-22 15:02:21
>>eclect+79
CEO is a ruler, scientist is a worker. The modern culture treats workers as a replaceable matter, which is redundant after the work is done. They are just tools. Rulers, on the other hand, take the all praise and honors. It's "them" who did the work. Musk is an extreme example of this.
◧◩◪◨⬒
1367. Aunche+gh1[view] [source] [discussion] 2023-11-22 15:02:24
>>throwa+Tl
Microsoft has more leverage now because they can sue OpenAI for intentionally sabotaging Microsoft's investment.
◧◩◪◨⬒
1368. qup+hh1[view] [source] [discussion] 2023-11-22 15:02:27
>>abkola+Kq
The three hard problems: naming things and off-by-one errors
replies(1): >>Crespy+9I1
◧◩
1369. orsent+kh1[view] [source] [discussion] 2023-11-22 15:02:36
>>jafitc+F91
> - Mission-driven employees may still leave for opportunities at places like Anthropic

Which might have an oversight from AMZN instead of MSFT ?

◧◩◪◨
1370. himara+yh1[view] [source] [discussion] 2023-11-22 15:03:38
>>jnwats+6a1
Conditioned on the outcome of the internal investigation, which seems up for grabs.
◧◩◪◨⬒⬓⬔⧯▣▦▧
1371. dpkirc+Ch1[view] [source] [discussion] 2023-11-22 15:03:55
>>doktri+qO
The two FAANG companies don't compete at a product level, however they do compete for talent, which is significant. Probably significant enough to cause conflicts of interest.
◧◩
1372. pauldd+Jh1[view] [source] [discussion] 2023-11-22 15:04:18
>>garris+EJ
What if I told you...Bill Gates was/is on the board of the non-profit Bill and Melinda Gates Foundation?

Lol HN lawyering is hilarious.

replies(1): >>fatbir+aj1
1373. rennsp+Rh1[view] [source] 2023-11-22 15:04:51
>>staran+(OP)
I love you, but you are not serious people.
◧◩
1374. sam0x1+Xh1[view] [source] [discussion] 2023-11-22 15:05:22
>>jafitc+F91
> Peer pressure and groupthink likely also swayed employees more than principles

Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement

replies(1): >>DSingu+sp1
◧◩
1375. bandra+5i1[view] [source] [discussion] 2023-11-22 15:05:48
>>flylib+z4
Wait, the CEO having a seat on the board is kind of not cool
replies(1): >>fatbir+nj1
◧◩
1376. ensoco+ei1[view] [source] [discussion] 2023-11-22 15:06:17
>>jafitc+F91
Good points. Anyway I guess nobody will remember the drama in some months so I think the damage done is very manageable for OAI.
◧◩◪◨
1377. himara+ki1[view] [source] [discussion] 2023-11-22 15:06:40
>>JSavag+uN
If he survived to this point, I doubt he will go any time soon.
replies(1): >>yeck+Mt1
◧◩◪
1378. mrfox3+li1[view] [source] [discussion] 2023-11-22 15:06:47
>>pauldd+1h1
What would you do if 999 employees openly signed a letter and you are the remaining holdout.
replies(1): >>pauldd+ej1
◧◩
1379. jxi+mi1[view] [source] [discussion] 2023-11-22 15:06:50
>>jafitc+F91
Was this really motivated by AI safety or was it just Helen Toner’s personal vendetta against Sam?

It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.

replies(3): >>hn_thr+Rl1 >>cbeach+kn1 >>jkapla+Zs1
◧◩
1380. amalco+qi1[view] [source] [discussion] 2023-11-22 15:06:55
>>jafitc+F91
>- Microsoft strengthened its power despite not appearing involved in the drama

Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with same. Do we really think things like that were not intended to change people's minds?

replies(3): >>Firmwa+qj1 >>malfis+Ul1 >>gcanyo+dn1
1381. thepas+wi1[view] [source] 2023-11-22 15:07:09
>>staran+(OP)
Yeah I don’t know. I think you’d be kind of nuts to build anything on their APIs anymore.

Sure I’ll keep using ChatGPT in a personal capacity/as search. But no way I’d trust my business to them

replies(1): >>campbe+AD1
◧◩◪◨
1382. pauldd+Ai1[view] [source] [discussion] 2023-11-22 15:07:23
>>m463+cC
The charter
replies(1): >>m463+kP2
◧◩
1383. stikit+Ki1[view] [source] [discussion] 2023-11-22 15:08:35
>>garris+EJ
OpenAI is not a charity. Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

From https://openai.com/our-structure

- First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

-Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

-Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

-Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.

-Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

replies(3): >>ezfe+zv1 >>dragon+rx1 >>strang+ry1
◧◩◪◨⬒⬓⬔⧯▣▦▧
1384. ethanb+Oi1[view] [source] [discussion] 2023-11-22 15:08:42
>>framap+Pg1
His obligation as a member of the board is to safeguard AI, not OpenAI. That's why in the employee open letter they said, "the board said it'd be compliant with the mission to destroy the company." This is actually true.

It's absolutely believable that at first he thought the best way to safeguard AI was to get rid of the main advocate for profit-seeking at OpenAI, then when that person "fell upward" into a position where he'd have fewer constraints, to regret that decision.

replies(1): >>framap+fm1
◧◩◪◨⬒⬓
1385. JumpCr+Si1[view] [source] [discussion] 2023-11-22 15:08:54
>>lacker+mj
You’re right. But in an emergency, there is a close option which is to put the company into receivership and hire an outside law firm to advise. At that point, the board becomes the executive council.
◧◩◪
1386. fatbir+aj1[view] [source] [discussion] 2023-11-22 15:10:04
>>pauldd+Jh1
Indeed, it is hilarious.

The Foundation has nothing to do with MS and can't possibly be considered a competitor, acquisition target, supplier, or any other entity where a decision for the Foundation might materially harm MS (or the reverse). There's no potential conflict of interest between the missions of the two.

Did you think OP meant there was some inherent conflict of interest with charities?

replies(1): >>pauldd+xj1
◧◩◪◨
1387. pauldd+ej1[view] [source] [discussion] 2023-11-22 15:10:09
>>mrfox3+li1
Is your argument that the 1 employee operated on peer pressure, or the other 999?

Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO were counterproductive to their organization's goals?

replies(2): >>dymk+Vl1 >>mrfox3+zA1
◧◩◪◨⬒
1388. slingn+jj1[view] [source] [discussion] 2023-11-22 15:10:35
>>acchow+Eg1
Nope, I've never even __heard__ of someone having to take multiple years of leave from work for any reason. Seems like a fantastically rare event.
replies(2): >>thingi+Pn1 >>yeck+Or1
◧◩◪
1389. fatbir+nj1[view] [source] [discussion] 2023-11-22 15:10:51
>>bandra+5i1
It's quite common, actually.
replies(1): >>wnoise+UR1
1390. kibwen+oj1[view] [source] 2023-11-22 15:10:53
>>staran+(OP)
This will be remembered as the biggest waste of time and energy since the LK-99 fiasco.
◧◩◪
1391. Firmwa+qj1[view] [source] [discussion] 2023-11-22 15:10:56
>>amalco+qi1
>but they literally offered jobs to everyone who left in solidarity with same

Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-soliciting from employment contracts to facilitate freedom of employment movement and increase industry wages in the process?

Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.

replies(2): >>spanka+0k1 >>notaha+Kt1
◧◩◪◨
1392. pauldd+xj1[view] [source] [discussion] 2023-11-22 15:11:26
>>fatbir+aj1
Have you seen OpenAI's current board?

Explain how an MS employee would have greater conflict of interest.

replies(1): >>uxp8u6+pE1
◧◩◪
1393. Mistle+Gj1[view] [source] [discussion] 2023-11-22 15:12:05
>>nuruma+Db1
That was my first thought as well. And now it is the top comment on this post. Isn’t this brave new world OpenAI made wonderful?
replies(1): >>nickpp+In1
◧◩◪
1394. framap+Jj1[view] [source] [discussion] 2023-11-22 15:12:13
>>upupup+2p
Are you basing that on any information?
◧◩◪◨⬒
1395. spurgu+Lj1[view] [source] [discussion] 2023-11-22 15:12:22
>>prh8+5e1
I think an awesome leader would naturally create some kind of cult following, while the opposite isn't true.
replies(1): >>Popeye+dl1
◧◩◪◨⬒⬓
1396. krisof+Pj1[view] [source] [discussion] 2023-11-22 15:12:42
>>cyanyd+AO
> I'd sayu the AI safety problem as a whole is similar to the safety problem of eugenics

And I'd sayu should read the book so we can have a nice chat about it. Making wild guesses and assumptions is not really useful.

> If you really care about AI safety, you'd be putting it under government control as utility, like everything else.

This is a bit jumbled. How do you think "control as utility" would help? What would it help with?

◧◩◪◨
1397. pclmul+Tj1[view] [source] [discussion] 2023-11-22 15:13:14
>>drewco+re
$10 billion of compute credits. Not $10 billion of real money.
replies(1): >>blacko+hQ1
◧◩◪◨
1398. spanka+0k1[view] [source] [discussion] 2023-11-22 15:13:51
>>Firmwa+qj1
> Offering people jobs is neither illegal nor immoral

The comment you responded to made neither of those claims, just that they were "involved".

◧◩◪◨⬒
1399. herost+5k1[view] [source] [discussion] 2023-11-22 15:14:04
>>nvm0n2+vX
Yes, also Elon.
◧◩◪◨⬒⬓
1400. nickpp+wk1[view] [source] [discussion] 2023-11-22 15:16:19
>>nopins+Xr
Doomerism was quite common throughout mankind’s history but all dire predictions invariably failed, from the “population bomb” to “grey goo” and “igniting the atmosphere” with a nuke. Populists however, were always quite eager to “protect us” - if only we’d give them the power.

But in reality you can’t protect from all the possible dangers and, worse, fear-mongering usually ends up doing more bad than good, like when it stopped our switch to nuclear power and kept us burning hydrocarbons thus bringing about Climate Change, another civilization-ending danger.

Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.

replies(1): >>thedud+6w4
1401. evan_+zk1[view] [source] 2023-11-22 15:16:27
>>staran+(OP)
What a waste of time
◧◩◪◨⬒⬓
1402. gdhkgd+Nk1[view] [source] [discussion] 2023-11-22 15:17:08
>>citygu+if1
Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.

Self-disclosure: I work for a megacorp.

replies(5): >>yoyohe+Gn1 >>Beetle+qt1 >>yterdy+hD1 >>slg+dS1 >>citygu+5d3
◧◩◪◨⬒
1403. vipshe+Ok1[view] [source] [discussion] 2023-11-22 15:17:15
>>catapa+ke1
"Stronger" is ambiguous. If you interpret it as "resilience" then I agree having a single point of failure is usually more brittle. But if you interpret it as "focused", then having a single charismatic leader can be superior.

Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.

Whether or not that's "better" or "stronger" is up to individual interpretation.

◧◩◪◨
1404. pk-pro+3l1[view] [source] [discussion] 2023-11-22 15:18:33
>>rmbyrr+xc1
Can't you imagine a group of people motivated to conduct AI research? I don't understand... All nerds are highly motivated in their areas of passion, and here we have AI research. Why do they need leadership instead of simply having an abundance of resources for the passionate work they do?
replies(3): >>DSingu+do1 >>gcanyo+jq1 >>jjk166+Ts1
◧◩◪◨⬒
1405. thedal+5l1[view] [source] [discussion] 2023-11-22 15:18:41
>>prh8+5e1
Results
1406. bmitc+al1[view] [source] 2023-11-22 15:19:21
>>staran+(OP)
What a gigantic mess. Everyone looks bad in this: Altman, Microsoft, the OpenAI board, OpenAI employees, etc.

It also has confirmed that greed and cult of personality win in the end.

◧◩◪◨⬒⬓
1407. Popeye+dl1[view] [source] [discussion] 2023-11-22 15:19:30
>>spurgu+Lj1
Just like former President Trump?
replies(1): >>marcos+6n1
◧◩◪◨⬒⬓⬔
1408. jakder+Hl1[view] [source] [discussion] 2023-11-22 15:21:23
>>FabHK+e11
The only major series with a brilliant, satisfying, and true to form ending and you want to resuscitate it back to life for some cheap curtain calls and modern social commentary, leaving Mike Judge to end it yet again and in such a way that manages to duplicate or exceed the effect of the first time but without doing the same thing? Screw it. Why not?
◧◩◪
1409. hn_thr+Rl1[view] [source] [discussion] 2023-11-22 15:22:13
>>jxi+mi1
As someone who was very critical of how the board acted, I strongly disagree. I felt like this Washington Post article gave a very good, balanced overview. I think it sounds like there were substantive issues that were brewing for a long time, though no doubt personal clashes had a huge impact on how it all went down:

https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

◧◩◪
1410. malfis+Ul1[view] [source] [discussion] 2023-11-22 15:22:27
>>amalco+qi1
The GP looks to me like an AI summary. Which would fit with the hallucination that microsoft wasn't involved.
replies(1): >>chanks+Is1
◧◩◪◨⬒
1411. dymk+Vl1[view] [source] [discussion] 2023-11-22 15:22:28
>>pauldd+ej1
It's almost certain that all employees did not behave the same way for the exact same reasons. And I don't see anyone making an argument about what the exact numbers are, nor does it really matter. Just that some portion of employees were swayed by pressure once the letter reached some critical signing mass.
replies(1): >>pauldd+KA1
◧◩◪◨⬒
1412. robert+Xl1[view] [source] [discussion] 2023-11-22 15:22:33
>>waltha+r51
It's the (partial) owners' money. The (partial) owners might be VC firms, but they are risking their own money.
◧◩◪
1413. rvnx+5m1[view] [source] [discussion] 2023-11-22 15:22:57
>>qualif+Bb1
And correlation does not imply causality.

Example: Put a loser as CEO of a rocket ship, and there is a huge chance that the company will still be successful.

Put a loser as CEO of a sinking ship, and there is a huge chance that the company will fail.

The exceptional CEOs are those who turn failures into successes.

The fact this drama has emerged is the symptom of a failure.

In a company with a great CEO this shouldn’t be happening.

◧◩◪◨⬒⬓
1414. robert+bm1[view] [source] [discussion] 2023-11-22 15:23:40
>>Doughn+hc1
No one mentioned profit, I think.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1415. framap+fm1[view] [source] [discussion] 2023-11-22 15:23:48
>>ethanb+Oi1
Fair enough, I understand better where you're coming from. Thanks!
◧◩◪◨
1416. kridsd+gm1[view] [source] [discussion] 2023-11-22 15:23:54
>>chatma+ZV
So is everyone else at Google.
◧◩◪◨
1417. nickpp+km1[view] [source] [discussion] 2023-11-22 15:24:13
>>veec_c+Zb
> Why would one person owning something so important be better than being publicly owned?

Usually publicly owned things end up being controlled by someone: a CEO, a main investor, a crooked board, a government, a shady governmental organization. At least with Elon owning X, things are a little more transparent, he’s rather candid where he stands.

Now, the question is “who owns Musk?” of course.

◧◩◪◨⬒⬓
1418. wise_y+vm1[view] [source] [discussion] 2023-11-22 15:24:50
>>behnam+Q5
Maybe it’s a cross post.
◧◩◪◨⬒
1419. ChatGT+Hm1[view] [source] [discussion] 2023-11-22 15:25:31
>>scarfa+kf1
In talking about people using Microsoft / OpenAI products to build better products than they currently offer.

While this tech has the ability to replace a lot of jobs, it has likely the ability to replace a lot of companies.

◧◩
1420. himara+Zm1[view] [source] [discussion] 2023-11-22 15:26:35
>>flylib+z4
Sounds like speculation again from Sam's camp, honestly. Hard to judge without knowing which way the new board members lean.
◧◩◪◨⬒⬓
1421. framap+4n1[view] [source] [discussion] 2023-11-22 15:27:20
>>6gvONx+db
But that's the difference, the CEO is not a regular employee. If a board of directors wants to be trusted and taken seriously it can't just fire the CEO and say "I'm sorry we can't say why, that's private information".
◧◩◪◨⬒⬓⬔
1422. marcos+6n1[view] [source] [discussion] 2023-11-22 15:27:23
>>Popeye+dl1
There are two possible ways to read "the opposite" from the GP.

"A cult follower does not make an exceptional leader" is the one you are looking for.

replies(1): >>0perat+BA1
◧◩◪
1423. gcanyo+dn1[view] [source] [discussion] 2023-11-22 15:27:57
>>amalco+qi1
I’d go further than just saying “they were involved” —- by offering jobs to everyone who wanted to come with Altman, they were effectively offering to acquire OpenAI, which is worth ~$100B, for (checks notes) zero dollars.
replies(3): >>breadw+ip1 >>gsuuon+8q1 >>thepti+3r1
◧◩◪◨⬒⬓
1424. dahart+gn1[view] [source] [discussion] 2023-11-22 15:28:02
>>kcplat+PU
This seems extremely presumptuous. Have you ever been inside a company during a coup attempt? The employees’ future pay and livelihood is at stake, why are you assuming they weren’t being asked to sacrifice themselves by not objecting to the coup. The level of agreement could be entirely due to the fact that the stakes are very large, completely unlike your choice for lunch locale. It could also be an outcome of nobody having asked their opinion before making a very big change. I’d expect to see almost everyone at a company agree with each other if the question was, “hey should we close this profitable company and all go get other jobs, or should we keep working?”
replies(1): >>kcplat+uJ1
◧◩◪
1425. cbeach+kn1[view] [source] [discussion] 2023-11-22 15:28:10
>>jxi+mi1
Curious how a relatively unknown academic with links to China [1] attained a board seat on America's hottest and most valuable AI company.

Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]

> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

replies(2): >>Zpalmt+fo1 >>hn_thr+ev1
◧◩◪◨
1426. csomar+rn1[view] [source] [discussion] 2023-11-22 15:28:40
>>jadams+L9
The UK is not the world. Many other countries have woman in the lead in sciences (particularly Muslim countries).
replies(1): >>jadams+wu1
◧◩◪
1427. solard+Bn1[view] [source] [discussion] 2023-11-22 15:29:39
>>gandut+8Q
You'd have my vote. At least you can formulate coherent reasons.
replies(1): >>rvnx+Ks1
◧◩◪◨⬒⬓⬔
1428. yoyohe+Gn1[view] [source] [discussion] 2023-11-22 15:30:24
>>gdhkgd+Nk1
We can acknowledge that it's morally bankrupt, while also not blaming them. Hell, I'd probably do the same thing in their shoes. That doesn't make it right.
◧◩◪◨
1429. nickpp+In1[view] [source] [discussion] 2023-11-22 15:30:36
>>Mistle+Gj1
If it’s a good comment, does it really matter if a human or an AI wrote it?
replies(1): >>makewo+Qq1
◧◩◪◨
1430. kridsd+Ln1[view] [source] [discussion] 2023-11-22 15:30:52
>>chatma+lW
Maybe not. Microsoft and Summers are now much more in control. That’s a win for the USA and DOD.
replies(1): >>chatma+gr1
◧◩◪◨
1431. solard+Mn1[view] [source] [discussion] 2023-11-22 15:30:54
>>m463+cC
"You are trying to slowly and invisibly accrue power to not scare anyone until you're absolutely ready."
◧◩◪
1432. Aunche+On1[view] [source] [discussion] 2023-11-22 15:31:05
>>qualif+Bb1
I don't think Sam is necessarily irreplaceable. It's just that Helen Toner and co were so detached from the rest of the organization they might as well been on Mars, as demonstrated by their interim CEO pick instantly turning against them.
◧◩◪◨⬒⬓
1433. thingi+Pn1[view] [source] [discussion] 2023-11-22 15:31:13
>>slingn+jj1
Not sure if that's intended as irony, but of course, if somebody is taking multiple years off work, you would be less likely hear about it because by definition they're not going to join the company you work for.

I don't think long-term unemployment among people with a disability or other long-term condition is "fantasticaly rare", sadly. This is not the frequency by length of unemployment, but:

https://www.statista.com/statistics/1219257/us-employment-ra...

1434. voiceb+6o1[view] [source] 2023-11-22 15:32:35
>>staran+(OP)
For some reason this reminds me of the Coke/New Coke fiasco, which ended up popularizing Coke Classic more than ever before.

> Consumers were outraged and demanded their beloved Coke back – the taste that they knew and had grown up with. The request to bring the old product back was so loud that soon journalists suggested that the entire project was a stunt. To this accusation Coca-Cola President Don Keough replied on July 10, 1985:

    "We are not that dumb, and we are not that smart."
https://en.wikipedia.org/wiki/New_Coke
replies(3): >>freedo+nW1 >>jdlyga+612 >>tacoca+106
◧◩◪◨⬒⬓⬔
1435. Ajedi3+bo1[view] [source] [discussion] 2023-11-22 15:32:53
>>plasma+Dt
The issue here is that the board of the non-profit that is supposedly in charge of OpenAI (and whose interests are presumably aligned with the mission statement of the company) seemingly just lost a power struggle with their for-profit subsidiary who is not supposed to be in charge of OpenAI (and whose interests, including the interests of their employees, are aligned with making as much money as possible). Regardless of whether the board's initial decision that started this power struggle was wise or not, don't you find the outcome a little worrisome?
◧◩◪◨⬒
1436. DSingu+do1[view] [source] [discussion] 2023-11-22 15:33:07
>>pk-pro+3l1
As far as it goes for me the only endorsements that matter are those of the core engineering and research teaches of OpenAI.

All these opinions of outsiders don’t matter. It’s obvious that most people don’t know Sam personally or professionally and are going off of the combination of: 1. PR pieces being pushed by unknown entities 2. positive endorsements from well known people who are likely know him

Both those sources are suspect. We don’t know the motivation behind their endorsements and for the PR pieces we know the author but we don’t know commissioner.

Would we feel as positive about Altman if it turns out that half the people and PR pieces endorsing him are because government officials pushing for him? Or if the celebrities in tech are endorsing him because they are financially incentivized?

The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).

◧◩◪◨
1437. Zpalmt+fo1[view] [source] [discussion] 2023-11-22 15:33:15
>>cbeach+kn1
Wow, very surprised this is the first I'm hearing of this, seems very suspect
◧◩◪
1438. rvnx+io1[view] [source] [discussion] 2023-11-22 15:33:26
>>seydor+bg1
Desperate people who have no choice than to wait for someone to remove their golden handcuffs.
◧◩◪◨⬒⬓
1439. infamo+oo1[view] [source] [discussion] 2023-11-22 15:33:38
>>pooya1+mN
The current board won't be at OpenAI much longer.
◧◩
1440. stetra+wo1[view] [source] [discussion] 2023-11-22 15:34:10
>>laserl+gb
Imagine if the board of Apple fired Tim Cook with no warning right after he went on stage and announced their new developer platform updates for the year alongside record growth and sales, refused to elaborate as to the reasons or provide any useful communications to investors over several days, and replaced their first interim CEO with another interim CEO from a completely different kind of business in that same weekend.

If you don't think there would be a shareholder revolt against the board, for simply exercising their most fundamental right to fire the CEO, I think you're missing part the picture.

replies(3): >>hacker+8v1 >>jacque+412 >>eksaps+V12
◧◩◪◨⬒⬓
1441. rvnx+Io1[view] [source] [discussion] 2023-11-22 15:35:15
>>plorg+0e1
Microsoft was using GPT-4 in production as part of Sydney's "Bing Chat", even before it was released to the public on ChatGPT.
◧◩
1442. Ration+No1[view] [source] [discussion] 2023-11-22 15:35:27
>>jafitc+F91
The one piece of this that I question is the employee motivations.

First, they had offers to walk to both Microsoft and Salesforce and be made good. They didn't have to stay and fight to have money and careers.

But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.

I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.

Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???

There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.

Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."

replies(1): >>TheGRS+wt1
◧◩◪◨⬒⬓⬔⧯
1443. rvnx+1p1[view] [source] [discussion] 2023-11-22 15:36:35
>>axus+af1
See, neither Bill Gates nor Steve Jobs are around these companies, and all is fine.

Apple and Microsoft even have the strongest financial results in their lifetime.

replies(2): >>roncha+Cu1 >>ghodit+uw1
◧◩
1444. neonbj+9p1[view] [source] [discussion] 2023-11-22 15:36:58
>>jafitc+F91
As an employee of OpenAI: fuck you and your condescending conclusions about my peers and my motivations.
replies(3): >>jprete+lr1 >>alexth+Gr1 >>iamfli+lv1
◧◩◪◨⬒⬓⬔⧯▣
1445. jakder+gp1[view] [source] [discussion] 2023-11-22 15:37:30
>>docmar+ba1
It's not a crusade against whiteness. Unless you're unhinged and believe a single phenotype that prevents skin cancer is somehow an obvious reflection of genetic inferiority and that those lacking it have a historical destiny to rule over the rest and are entitled to institutional privileges over them, it makes sense that companies with employees not representative of the overall population have hiring practices that are problematic, albeit not necessarily being as explicitly racist as you are.
replies(1): >>docmar+qB1
◧◩◪◨⬒
1446. im3w1l+hp1[view] [source] [discussion] 2023-11-22 15:37:37
>>coldte+fI
Ianal, but given that OpenAI Inc is a 501(c)(3) public charity wouldn't that mean that the mission statement has some actual legal power to it?
◧◩◪◨
1447. breadw+ip1[view] [source] [discussion] 2023-11-22 15:37:38
>>gcanyo+dn1
You mean zero additional dollars. They already gave (checks notes) $13 Billion dollars and own half of the company.
replies(1): >>rvnx+Cp1
◧◩
1448. Zpalmt+jp1[view] [source] [discussion] 2023-11-22 15:37:44
>>Satam+0a
Seems like that's a good thing when the goals of the open faction is to slow down development lol, how would that make OpenAI win?
◧◩◪◨⬒
1449. TheOth+op1[view] [source] [discussion] 2023-11-22 15:37:56
>>prh8+5e1
Leadership Gets Shit Done. A cult following wastes everyone's time on ineffectual grandstanding and ego fluffing while everything around them dissolves into incompetence and hostility.

They're very orthogonal things.

replies(1): >>rvnx+5r1
◧◩◪
1450. DSingu+sp1[view] [source] [discussion] 2023-11-22 15:38:11
>>sam0x1+Xh1
Honestly he has a point — but the bigger point to be made is financial incentives. In this case it matters because of the expressed mission statement of OpenAI.

Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it was a for profit company you could argue that their voice matter. For a non-profit, however, a persons opinion should only matter as far as it is aligned with the non-profit mission.

◧◩◪◨⬒
1451. rvnx+Cp1[view] [source] [discussion] 2023-11-22 15:38:39
>>breadw+ip1
+ according to the rumors on Bloomberg.com / CNBC:

The investment is refundable and has high priority: Microsoft has a priority to receive 75% of the profit generated until the 10B USD have been paid back

+ (checks notes) in addition (!) OpenAI has to spend back the money in Microsoft Cloud Services (where Microsoft takes a cut as well).

◧◩◪◨⬒
1452. unethi+Ip1[view] [source] [discussion] 2023-11-22 15:38:55
>>darkwa+Ij
I'm already saying that.

The toothpaste is out of the tube, but this tech will radically change the world.

◧◩◪◨⬒⬓
1453. Zpalmt+Lp1[view] [source] [discussion] 2023-11-22 15:39:06
>>citygu+if1
Why would they be morally bankrupt? Do the employees have to care if it's a non profit or a for profit?

And if they do prefer it as a for profit company, why would that make them morally bankrupt?

◧◩◪◨
1454. gsuuon+8q1[view] [source] [discussion] 2023-11-22 15:41:10
>>gcanyo+dn1
How has the valuation of OpenAI increased by $20B since this weekend? I feel like every time I see that number it goes up by $10B.
replies(2): >>tacooo+Er1 >>sebzim+Kr1
◧◩◪◨⬒
1455. gcanyo+jq1[view] [source] [discussion] 2023-11-22 15:41:45
>>pk-pro+3l1
Someone has to set direction. The more people that are involved in that decision process, the slower it will go.

Having no leadership at all guarantees failure.

◧◩
1456. kridsd+nq1[view] [source] [discussion] 2023-11-22 15:42:18
>>bobsoa+t2
Someone must have run a wiki update script that calls OpenAI api somewhere.
◧◩◪
1457. foursi+rq1[view] [source] [discussion] 2023-11-22 15:42:51
>>nuruma+Db1
The content strikes me as being more an editorial on what happened vs simply a summary of events.
◧◩◪◨⬒⬓
1458. marcos+tq1[view] [source] [discussion] 2023-11-22 15:42:57
>>Turing+M01
All at once.
◧◩◪◨⬒
1459. makewo+Qq1[view] [source] [discussion] 2023-11-22 15:44:30
>>nickpp+In1
Yes.
replies(1): >>nickpp+Vr1
◧◩◪◨
1460. thepti+3r1[view] [source] [discussion] 2023-11-22 15:45:22
>>gcanyo+dn1
If the existing packages are worth more than MSFT pay AI researchers (they are, by a lot) then it’s not acquiring OAI for $0. Plausibly it could cost in the $B to buy put every single equity holder, at a $80B+ valuation.

Still a good deal, but your accounting is off.

◧◩◪◨⬒⬓
1461. rvnx+5r1[view] [source] [discussion] 2023-11-22 15:45:31
>>TheOth+op1
I also imagine the morale of the people who are currently implementing things, and getting tired of all these politics about who is going to claim success for their work.
◧◩
1462. voxic1+dr1[view] [source] [discussion] 2023-11-22 15:45:55
>>garris+EJ
Even if the IRS isn't a fan, what are they going to do about it? It seems like the main recourse they could pursue is they could force the OpenAI directors/Microsoft to pay an excise tax on any "excess benefit transactions".

https://www.irs.gov/charities-non-profits/charitable-organiz...

◧◩◪◨⬒
1463. chatma+gr1[view] [source] [discussion] 2023-11-22 15:46:06
>>kridsd+Ln1
Yeah fair enough. Any idea how Larry Summers even ended up on this board? He seems like an arbitrary choice with no domain expertise, although granted the board shouldn't be filled with AI experts.
◧◩◪
1464. jprete+lr1[view] [source] [discussion] 2023-11-22 15:46:45
>>neonbj+9p1
I’m curious about your perceptions of the (median) motivations of OpenAI employees - although of course I understand if you don’t feel free to say anything.
◧◩◪
1465. galois+sr1[view] [source] [discussion] 2023-11-22 15:47:16
>>ahzhou+od1
Sheryl Sandberg’s sense of ethics and moral compass are highly questionable.
◧◩
1466. window+vr1[view] [source] [discussion] 2023-11-22 15:47:26
>>jafitc+F91
This comment bugs me because it reads like a summary of an article, but it's just your opinions without any explanations to justify them.
◧◩◪◨⬒
1467. tacooo+Er1[view] [source] [discussion] 2023-11-22 15:47:52
>>gsuuon+8q1
you're off by a bit, the announcement of Sam returning as CEO actually increased OpenAI valuation to $110B last night
◧◩◪
1468. alexth+Gr1[view] [source] [discussion] 2023-11-22 15:48:12
>>neonbj+9p1
Users here often get the narrative and motivations deeply wrong, I wouldn’t take it too personally (Speaking as a peer)
◧◩◪◨⬒
1469. sebzim+Kr1[view] [source] [discussion] 2023-11-22 15:48:24
>>gsuuon+8q1
$110B? Where are you getting this valuation of $120B?
◧◩◪◨⬒⬓
1470. yeck+Or1[view] [source] [discussion] 2023-11-22 15:48:37
>>slingn+jj1
In my immediate family I have 3 people that have taken multi-year periods away from work for health reasons. Two are mental health related and the other severe arthritis. 2 of those 3 will probably never work again for the rest of their lives.

I've worked with a contractor that went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. Guy is working now, but not shape.

I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.

replies(1): >>filled+dN1
1471. beepbo+Rr1[view] [source] 2023-11-22 15:48:50
>>staran+(OP)
When the first CEO appeared on the earth he got tied to cliff so the birds could eat him. It seems like that was a good call.
◧◩◪◨⬒⬓
1472. nickpp+Vr1[view] [source] [discussion] 2023-11-22 15:49:19
>>makewo+Qq1
Please expand on that.
replies(3): >>iamfli+Iv1 >>Mistle+J02 >>makewo+Gf5
◧◩◪◨⬒
1473. hughw+0s1[view] [source] [discussion] 2023-11-22 15:49:50
>>catapa+ke1
I guess though, a lot of organizations never develop a cohesive leader at all, and the orgs fall apart. They never had an irreplaceable leader though!
◧◩
1474. fullsh+5s1[view] [source] [discussion] 2023-11-22 15:50:15
>>Satam+0a
I think Microsoft's deep pockets, computing resources, their head start, and 50%+ employees not quitting is more important to the company's chances at success than your assessment they have the "wrong DNA."

The idea that the marketplace is a meritocracy of some kind where whatever an individual deems as "merit" wins is just proven to be nonsense time and time again.

◧◩◪
1475. ilrwbw+as1[view] [source] [discussion] 2023-11-22 15:50:55
>>Tigeri+Mg1
Isn't he a big Jeffrey Epstein fanboy? Ethical AGI is in safe hands.

https://www.thecrimson.com/article/2023/5/5/epstein-summers-...

replies(2): >>future+MI1 >>kossTK+og2
◧◩◪◨⬒⬓⬔⧯▣▦▧
1476. Zpalmt+xs1[view] [source] [discussion] 2023-11-22 15:52:36
>>golden+wK
What about security for your children?
replies(1): >>golden+zz1
◧◩◪◨⬒
1477. yeck+ys1[view] [source] [discussion] 2023-11-22 15:52:41
>>tempaw+iF
I was about to say this. Only correct answer.
replies(1): >>robert+Mz1
◧◩◪◨
1478. chanks+Is1[view] [source] [discussion] 2023-11-22 15:53:26
>>malfis+Ul1
That's a good callout. I was reading over it and confused who this person was and why they were summarizing but yeah they might've just told ChatGPT to summarize the events of what happened.
◧◩◪◨
1479. rvnx+Ks1[view] [source] [discussion] 2023-11-22 15:53:34
>>solard+Bn1
You have a second vote. I trust more gandutraveler than the people running the shitshow that is happening at the moment.
replies(1): >>shanus+ot1
◧◩◪◨⬒
1480. jjk166+Ts1[view] [source] [discussion] 2023-11-22 15:54:14
>>pk-pro+3l1
It's not hard to motivate them to do the fun parts of the job, the challenge is in convincing some of those highly motivated and passionate nerds to not work on the fun thing they are passionate about and instead do the boring and unsexy work that is nevertheless critical to overall success; to get people with strong personal opinions about how a solution should look to accept a different plan just so that everyone is on the same page, to ensure that people actually have access to the resources they need to succeed without going so overboard that the endeavor lacks the reserves to make it to the finish line, and to champion the work of these nerds to the non-nerds who are nevertheless important stakeholders.
◧◩◪◨⬒⬓
1481. Zpalmt+Ys1[view] [source] [discussion] 2023-11-22 15:54:33
>>g-b-r+Iq
Why? Did they have to sign a charter affirming their commitment to the mission when they were hired?
◧◩◪
1482. jkapla+Zs1[view] [source] [discussion] 2023-11-22 15:54:36
>>jxi+mi1
> was it just Helen Toner’s personal vendetta against Sam

I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]

> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed

[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

replies(1): >>jxi+NQ1
◧◩◪◨⬒
1483. shanus+ot1[view] [source] [discussion] 2023-11-22 15:56:20
>>rvnx+Ks1
And my axe.
◧◩◪◨⬒⬓⬔
1484. Beetle+qt1[view] [source] [discussion] 2023-11-22 15:56:22
>>gdhkgd+Nk1
> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

And while also working for a for-profit company.

◧◩◪
1485. TheGRS+wt1[view] [source] [discussion] 2023-11-22 15:56:41
>>Ration+No1
I was thinking the same. The letter symbolized a deep distrust with leadership over the mission and direction of the company. I’m sure financial motivations were involved, but the type of person working at this company can probably get a good paycheck at a lot of places. I think many work at OpenAI for some combination of opportunity, prestige, and altruism, and the weekend probably put all 3 into question.
◧◩◪
1486. intend+xt1[view] [source] [discussion] 2023-11-22 15:56:50
>>zug_zu+H81
Its very easy to min max a situation if you are not on the other side.

Additionally - I have not seen someone else talk about this, its just been a few days. Calling it a narrative is a stretch, and dismissive by implying manipulation.

Finally why would Sam joining MSFT be better than this current situation?

◧◩◪◨
1487. notaha+Kt1[view] [source] [discussion] 2023-11-22 15:57:40
>>Firmwa+qj1
I'm pretty sure there's a middle ground between recruiters for Microsoft should be banned from approaching other companies' staff to fill roles and Microsoft should be able to dictate decisions made by other companies' boards by publicly announcing that unless they change track it will attempt to hire every single one of their employees to newly created roles.

Funnily enough a bit like there's a middle ground between Microsoft should not be allowed to create browsers or have license agreements and Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet

It's not freedom of employment when funnily enough those jobs aren't actually available to any AI researchers not working for an organisation Microsoft is trying to control.

◧◩◪◨⬒
1488. yeck+Mt1[view] [source] [discussion] 2023-11-22 15:57:46
>>himara+ki1
Depends who gets onto the board. There are probably a lot of forces interested in ousting him now, so he'd need to do an amazing job vetting the new board members.

My guess is that he has less than a year, based on the my assumption that there will be constant pressure placed on the board to oust him.

replies(1): >>himara+Yu1
◧◩
1489. mwatts+Nt1[view] [source] [discussion] 2023-11-22 15:57:55
>>garris+EJ
Microsoft doesn't have to send an employee to represent them on the board. They could ask Bill Gates.
replies(1): >>muraka+lP2
1490. incaho+Pt1[view] [source] 2023-11-22 15:57:56
>>staran+(OP)
Cue the "it's a Christmas Miracle!"
◧◩
1491. flappy+2u1[view] [source] [discussion] 2023-11-22 15:58:50
>>Satam+0a
Amazing outcome. Empty shirts folded. People who get stuff done persevere.
1492. diamon+4u1[view] [source] 2023-11-22 15:58:55
>>staran+(OP)
Adam D’Angelo keeping everyone straight on the mission of OpenAI. What a true boss in the face of woke mob
◧◩◪
1493. jjk166+ju1[view] [source] [discussion] 2023-11-22 16:00:37
>>gandut+8Q
Plot twist: that's the very first job the AI will be taking.
◧◩◪◨⬒
1494. jadams+wu1[view] [source] [discussion] 2023-11-22 16:01:29
>>csomar+rn1
I absolutely agree that the UK should become more like Islamic countries re its treatment of women.
◧◩◪◨⬒⬓⬔⧯▣
1495. roncha+Cu1[view] [source] [discussion] 2023-11-22 16:01:53
>>rvnx+1p1
Gates and Jobs helped establish these companies as the powerhouses they are today with their leadership in the 90s and 00s.

It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?

◧◩◪◨⬒
1496. hacker+Iu1[view] [source] [discussion] 2023-11-22 16:02:42
>>pas+TO
I think a page developed by YC thinks a lot more about him than that ;)
replies(1): >>komali+nc3
◧◩◪◨⬒⬓
1497. himara+Yu1[view] [source] [discussion] 2023-11-22 16:03:31
>>yeck+Mt1
He has his network and technical credibility, so I wouldn't underestimate him. Board composition remains hard to predict now.
replies(1): >>WendyT+m32
◧◩◪
1498. hacker+8v1[view] [source] [discussion] 2023-11-22 16:04:13
>>stetra+wo1
It is prudent to recall that enhancing shareholder value and delivering record growth and sales are NOT the mission of the company or Board. But now it appears that it will have to be.
replies(3): >>ketzo+QL1 >>stetra+5T1 >>pauldd+SM2
◧◩◪◨
1499. hn_thr+ev1[view] [source] [discussion] 2023-11-22 16:04:38
>>cbeach+kn1
Oh lord, spare me with the "links to China" idiocy. I once ate a fortune cookie, does that mean I have "links to China" too?

Toner got her board seat because she was basically Holden Karnofsky's designated replacement:

> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

replies(1): >>cbeach+ef4
1500. iamlep+jv1[view] [source] 2023-11-22 16:04:59
>>staran+(OP)
So dangerous on so many levels. Just let him start his own AI group, competition is good!

Instead he will come away with this untouchable. He’ll get to stack the board like he wanted to. Part of being on a board of directors is sticking to your decisions. They are weak and weren’t prepared for the backlash of one person.

◧◩◪
1501. iamfli+lv1[view] [source] [discussion] 2023-11-22 16:05:22
>>neonbj+9p1
"condescending conclusions" - ask anyone outside of tech how they feel when we talk to them...
◧◩◪
1502. ezfe+zv1[view] [source] [discussion] 2023-11-22 16:06:11
>>stikit+Ki1
The board is the charity though, which is why the person you're replying to made the remark about MSFT employees being appointed to the board
replies(1): >>UrineS+Bw1
◧◩◪◨⬒⬓⬔
1503. iamfli+Iv1[view] [source] [discussion] 2023-11-22 16:06:36
>>nickpp+Vr1
This is the most cogent argument against AI I've seen so far.

https://youtu.be/iGJcF4bLKd4?si=Q_JGEZnV-tpFa1Tb

replies(1): >>nickpp+BX1
◧◩
1504. iamfli+Xv1[view] [source] [discussion] 2023-11-22 16:07:56
>>kumarv+7c
He's been sending out the occasional tweet - to be honest I get the impression that like the rest of us, he's just been watching with a big tub of popcorn...
◧◩◪◨⬒⬓⬔⧯▣
1505. ghodit+uw1[view] [source] [discussion] 2023-11-22 16:10:53
>>rvnx+1p1
Now go back in time and cut them before their companies took off.
◧◩◪◨
1506. UrineS+Bw1[view] [source] [discussion] 2023-11-22 16:11:14
>>ezfe+zv1
A charity is a type of not-for-profit organisation however the main difference between a nonprofit and a charity is that a nonprofit doesn't need to reach a 'charitable status' whereas a charity, to qualify as a charity, needs to meet very specific or strict guidelines
replies(1): >>ezfe+3X1
◧◩◪◨
1507. checky+Kw1[view] [source] [discussion] 2023-11-22 16:11:42
>>m463+cC
"You are a dim-witted kobold who prefers to hack-n-slash-slash-slash-n-burn over any sort of proper diplomatic negotiations or even strategic thinking; we would like you to consider next year's capital expenditures; what are your top three suggestions for improvements that could be made to the employee breakroom(s)?"
replies(2): >>jamesh+xy1 >>deanme+YQ1
◧◩
1508. jkapla+ax1[view] [source] [discussion] 2023-11-22 16:13:23
>>lysecr+K41
1. Microsoft was heavily involved in orchestrating the 95% of employees to side with Sam -- through promising them money/jobs and through PR/narrative 2. The profit incentives apply to employees too

Bigger picture, I don't think the "money/VC/MSFT/commercialization faction destroyed the safety/non-profit faction" is mutually exclusive with "the board fucked up." IMO, both are true

◧◩
1509. pc86+fx1[view] [source] [discussion] 2023-11-22 16:13:38
>>garris+EJ
Others have pointed out several reasons this isn't actually a problem (and that the premise itself is incorrect since "OpenAI" is not a charity), but one thing not mentioned: even if the MS-appointed board member is a MS employee, yes they will have a fiduciary duty to the organizations under the purview of the board, but unless they are also a board member of Microsoft (extraordinarily unlikely) they have no such fiduciary duty to Microsoft itself. So in the also unlikely scenario that there is a vote that conflicts with their Microsoft duties, and in the even more unlikely scenario that they don't abstain due to that conflict, they have a legal responsibility to err on the side of OpenAI and no legal responsibility to Microsoft. Seems like a pretty easy decision to make - and abstaining is the easiest unless it's a contentious 4-4 vote and there's pressure for them to choose a side.

But all that seems a lot more like an episode of Succession and less like real life to be honest.

replies(4): >>throwo+qy1 >>dragon+rz1 >>oatmea+2A1 >>Xelyne+Hu3
◧◩◪◨
1510. scythe+nx1[view] [source] [discussion] 2023-11-22 16:14:11
>>rmbyrr+xc1
Jobs was really unusual in that he was not only a good leader, but also an ideologue with the right obsession at the right time. (Some people like the word "visionary".) That obsession being "user experience". Today it's a buzzword, but in 2001 it was hardly even a term.

The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".

There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)

1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...

◧◩◪
1511. dragon+rx1[view] [source] [discussion] 2023-11-22 16:14:41
>>stikit+Ki1
> OpenAI is not a charity.

OpenAI is a charity nonprofit, in fact.

> Microsoft's investment is in OpenAI Global, LLC, a for-profit company.

OpenAI Global LLC is a subsidiary two levels down from OpenAI, which is expressly (by the operating agreement that is the LLC's foundational document) subordinated to OpenAI’s charitable purpose, and which is completely controlled (despite the charity's indirect and less-than-complete ownership) by OpenAI GP LLC, a wholly owned subsidiary of the charity, on behalf of the OpenAI charity.

And, particularly, the OpenAI board is. as the excerpts you quote in your post expressly state, the board of the nonprofit that is the top of the structure. It controls everything underneath because each of the subordinate organizations foundational documents give it (well, for the two entities with outside invesment, OpenAI GP LLC, the charity's wholly-owned and -controlled subsidiary) complete control.

replies(1): >>hacker+8J1
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
1512. WJW+jy1[view] [source] [discussion] 2023-11-22 16:18:01
>>golden+bh1
You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.

I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.

replies(1): >>golden+uy1
◧◩◪
1513. throwo+qy1[view] [source] [discussion] 2023-11-22 16:18:35
>>pc86+fx1
It's still a conflict of interest. One that they should avoid. Microsoft COULD appoint someone who they like and shares their values, that is not a MSFT employee. That would be a preferred approach but one that I doubt a megacorp would take
replies(1): >>ghaff+1E1
◧◩◪
1514. strang+ry1[view] [source] [discussion] 2023-11-22 16:18:39
>>stikit+Ki1
> First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

Im not criticizing. Big fan of avoiding being taxed to fund wars....but its just funny to me it seems like theyre sort of having their cake and eating it too with this kind of structure.

Good for them.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
1515. golden+uy1[view] [source] [discussion] 2023-11-22 16:18:51
>>WJW+jy1
Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.
◧◩◪◨
1516. evantb+vy1[view] [source] [discussion] 2023-11-22 16:18:55
>>maxdoo+vL
They did give reasons they were just vague. Reading between the lines, it seems the board was implying that Sam was trying to manipulate the board members individually. Was it true? Who knows. And as an outside observer, who cares? This is a fight between rich people about who gets to be richer. AI is so much larger than one cultish startup.
◧◩◪◨⬒
1517. jamesh+xy1[view] [source] [discussion] 2023-11-22 16:18:57
>>checky+Kw1
That prompt is (c) McKinsey
◧◩◪◨⬒⬓⬔
1518. denton+Qy1[view] [source] [discussion] 2023-11-22 16:20:26
>>docmar+z31
> Younger folks probably don't look highly at boards in general, because they never get to interact with them.

Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.

replies(1): >>docmar+FC1
1519. melvin+nz1[view] [source] 2023-11-22 16:22:12
>>staran+(OP)
“You could parachute Sam into an island full of cannibals and come back in 5 years and he'd be the king.” - Paul Graham
replies(1): >>rsanek+JY1
◧◩◪
1520. dragon+rz1[view] [source] [discussion] 2023-11-22 16:22:21
>>pc86+fx1
> and that the premise itself is incorrect since "OpenAI" is not a charity

OpenAI is a 501c3 charity nonprofit, and the OpenAI board under discussion is the board of that charity nonprofit.

OpenAI Global LLC is a for-profit subsidiary of a for-profit subsidiary of OpenAI, both of which are controlled, by their foundational agreements that gie them legal existence, by a different (AFAICT not for-profit but not legally a nonprofit) LLC subsidiary of OpenAI (OpenAI GP LLC.)

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1521. golden+zz1[view] [source] [discussion] 2023-11-22 16:22:59
>>Zpalmt+xs1
It is for the safety of everyone. The kids will die too if we don't get this right.
◧◩◪◨
1522. fooop+Ez1[view] [source] [discussion] 2023-11-22 16:23:15
>>qwery+4g1
Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.

What's happening right now is people just starting to reckon with the fact that technological progress on it's own is necessarily unaligned with human interests. This problem has always existed, AI just makes it acute and unavoidable since it's no longer possible to invoke the long-tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at it's core a problem of reconciling this, and it will inherently fail in absence of explicitly imposing non-Enlightenment values.

Seeking to build openAI as a nonprofit, as well as ousting Altman as CEO are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensity it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.

◧◩◪◨⬒⬓
1523. robert+Mz1[view] [source] [discussion] 2023-11-22 16:23:43
>>yeck+ys1
I don't see how - isn't he pretty against the commercialisation efforts[0]?

[0] https://www.bbc.co.uk/news/technology-65110030

replies(2): >>ethbr1+RB1 >>yeck+fO1
◧◩
1524. sgt101+Pz1[view] [source] [discussion] 2023-11-22 16:23:56
>>MattHe+bx
There are no emergent properties, just a linear increase in knowledge that can be retrieved.

- It can't plan

- It can't do arithmetic

- It can't reason

- It can approximately retrieve knowledge with a natural language query (there are some issues with this, but it's very good)

- It can encode data into natural languages and other modalities

I'm not worried about it, I am worried about how badly people have misunderstood what it can do and then attempted to use it for things that matter.

But I'm not surprised.

replies(3): >>Davidz+D52 >>zucker+Ss2 >>quickt+MA3
◧◩◪◨
1525. blacko+Uz1[view] [source] [discussion] 2023-11-22 16:24:29
>>danari+hd1
OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.

Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI either biz or academic.

replies(1): >>baking+8H1
1526. gsuuon+Xz1[view] [source] 2023-11-22 16:24:36
>>staran+(OP)
At least they'll be operating under the original charter - it sounds like the mission continues. Not sure about this new board but hard to imagine they'd make the same sort of mistake.
◧◩◪
1527. oatmea+2A1[view] [source] [discussion] 2023-11-22 16:25:10
>>pc86+fx1
Microsoft is going to appoint someone who benefits Microsoft. Whether a particular vote would violate fiduciary duty is subjective. There's plenty of opportunity for them to prioritize the welfare of Microsoft over OAI.
◧◩◪◨⬒
1528. rmbyrr+uA1[view] [source] [discussion] 2023-11-22 16:27:17
>>prh8+5e1
Have you ever seen a useful product produced by a cult?
◧◩◪◨⬒
1529. mrfox3+zA1[view] [source] [discussion] 2023-11-22 16:27:53
>>pauldd+ej1
Doing the math, it is extremely unlikely for a lot of coin flips to skew from the weight of the coin.

To that end, observing unanimous behavior may imply some bias.

Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.

I agree in your stance that a majority of the workforce disagreed with the way things were handled, but that proportion is likely a subset of the proportion who signed their names on the document, for the reasons stated above.

replies(1): >>pauldd+PB1
◧◩◪◨⬒⬓⬔⧯
1530. 0perat+BA1[view] [source] [discussion] 2023-11-22 16:28:08
>>marcos+6n1
While cult followers do not make exceptional leaders, cult leaders are almost by definition exceptional leaders, given they're able to lead the un-indoctrinated into believing an ideology that may not be upheld against critical scrutiny.

There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.

◧◩◪◨⬒
1531. AuryGl+CA1[view] [source] [discussion] 2023-11-22 16:28:17
>>alex_y+Nb
There's a lot of evidence that not having two X chromosomes is less stable, leading to...irregularities. That sword cuts both ways.

I don't like ignorance being promoted under the cloak of not causing offense. It causes more harm than good. If there's a societal problem, you can't tackle it without knowing the actual cause. Sometimes the issue isn't an actual problem caused an 'ism,' it's just biology, and it's a complete waste of resources trying to change it.

◧◩◪◨⬒⬓
1532. pauldd+KA1[view] [source] [discussion] 2023-11-22 16:28:58
>>dymk+Vl1
> some portion

The logic being that if any opinion has above X% support, people are choosing it based on peer pressure.

replies(1): >>mrfox3+0B1
◧◩◪◨⬒⬓
1533. Wesley+YA1[view] [source] [discussion] 2023-11-22 16:30:48
>>_heimd+iN
Maybe they're working for both, but when push comes to shove they felt like they had no choice? In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the missing at OpenAI?

replies(1): >>_heimd+h12
◧◩◪◨⬒⬓⬔
1534. mrfox3+0B1[view] [source] [discussion] 2023-11-22 16:31:01
>>pauldd+KA1
The key is that the support is not anonymous.
◧◩◪◨
1535. baking+8B1[view] [source] [discussion] 2023-11-22 16:31:28
>>qwery+4g1
I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals. A public charity has to meet the public support test. A private foundation would be a better model but someone decided they didn't want to go that route. Maybe should have asked a non-profit lawyer?
replies(1): >>farama+ca4
◧◩◪◨⬒⬓⬔⧯▣▦
1536. docmar+qB1[view] [source] [discussion] 2023-11-22 16:32:52
>>jakder+gp1
Unfortunately you are wrong, and this kind of rhetoric has not only made calls for white genocide acceptable and unpunished, but has incited violence specifically against Caucasian people, as well as anyone who is perceived to adopt "white" thinking such as Asian students specifically, and even Black folks who see success in their life as a result of adopting longstanding European/Western principles in their lives.

Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.

DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.

Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.

Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.

◧◩◪◨
1537. baking+FB1[view] [source] [discussion] 2023-11-22 16:33:41
>>danari+hd1
To create a public charity without public fundraising is a no go. Should have been a private foundation because that is where it will end up.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1538. doktri+LB1[view] [source] [discussion] 2023-11-22 16:34:09
>>Jumpin+TW
> I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service

Do you agree that the following company pairs are competitors?

    * FB : TikTok
    * TikTok : YT
    * YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix.

...

To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think either of the above comparisons are wholly unreasonable. At the end of the day, it's eyeballs all the way down and everyone wants as many as of them shabriri grapes as they can get.

◧◩◪◨
1539. _fizz_+MB1[view] [source] [discussion] 2023-11-22 16:34:12
>>stef25+PC
Not sure what your point is, but you can make a donation to MSF that is not tied to any specific cause.
◧◩◪◨⬒⬓
1540. pauldd+PB1[view] [source] [discussion] 2023-11-22 16:34:16
>>mrfox3+zA1
> it is extremely unlikely for a lot of coin flips to skew from the weight of the coin

So clearly this wasn't a 50/50 coin flip.

The question at hand is whether the skew against the board was sincere or insincere.

Personally, I assume that people are acting in good faith, unless I have evidence to the contrary.

replies(1): >>mrfox3+VX2
◧◩◪◨⬒⬓⬔
1541. ethbr1+RB1[view] [source] [discussion] 2023-11-22 16:34:22
>>robert+Mz1
Gollum wasn't a fan of anyone but him having the One Ring. Analogy doesn't not fit.
1542. archsu+dC1[view] [source] 2023-11-22 16:36:15
>>staran+(OP)
I'm not American - I'm unclear what all this fuss is about? From where I am it looks like some arbitrary company politics in a hyped industry with a guy whose name I've seen mentioned on this site occasionally but really comes across as just a SV or San Fran cult of personality type. Am I missing something? Is there some substance to this story or is it just this week's industry soap opera?
◧◩◪
1543. madeof+lC1[view] [source] [discussion] 2023-11-22 16:36:46
>>kmlevi+mi
(Sam Altman was never on the board to begin with)
replies(1): >>ketzo+1M1
◧◩◪◨⬒⬓⬔⧯
1544. denlek+BC1[view] [source] [discussion] 2023-11-22 16:37:54
>>morale+LX
good points. on second thought, i should give them due credit for building a brand reputation as being "best" that will continue even if they aren't the best at some point, which will keep a lot of people with them. that's in addition to their other advantages that people will stay because it's easier than learning a new platform and there might be lock-in in terms of it being hard to move a trained gpt, or your chat history to another platform.
◧◩◪◨⬒⬓⬔⧯
1545. docmar+FC1[view] [source] [discussion] 2023-11-22 16:38:03
>>denton+Qy1
I don't think the age of the board members matters, but rather that younger generations have been taught to criticize boards of any & every company for their myriad decisions to sacrifice good things for profit, etc.

It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1546. vkou+KC1[view] [source] [discussion] 2023-11-22 16:38:37
>>Feepin+0A
Sure. And as I addressed at the start of this sub thread, I don't exactly think that the OpenAi board is perfectly positioned to navigate this problem.

I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.

◧◩◪◨
1547. broast+5D1[view] [source] [discussion] 2023-11-22 16:39:45
>>qwery+4g1
Wishfully I hope there was some intent from the beginning on exposing the impossibility of this contradictory model to the world, so that a global audience can evaluate on how to improve our system to support a better future.
◧◩◪◨⬒⬓⬔
1548. yterdy+hD1[view] [source] [discussion] 2023-11-22 16:40:31
>>gdhkgd+Nk1
If some of the smartest people on the planet are willing to sell the rest of us out for Comfy Lifestyle Money (not even Influence State Politics Money), then we are well and truly Capital-F Fucked.
replies(1): >>deckar+8Q1
◧◩
1549. campbe+AD1[view] [source] [discussion] 2023-11-22 16:41:59
>>thepas+wi1
Working out nicely for Msft then. You can use GPT4 via Azure already.
◧◩◪◨
1550. _fizz_+ED1[view] [source] [discussion] 2023-11-22 16:42:09
>>ah765+Jk
> The important thing is that the nonprofit (donated, tax-free) money doesn't flow into the forprofit section.

I am sure that is true. But the for-profit uses IP that was developed inside of the non-profit with (presumably) tax deductible donations. That IP should be valued somehow. But, as I said, I am sure they were somehow able to structure it in a way that is legal, but it has an illegal feel to it.

◧◩◪◨
1551. ghaff+1E1[view] [source] [discussion] 2023-11-22 16:43:50
>>throwo+qy1
Both profit and non-profit boards have members that have potential conflicts of interest all the time. So long as it’s not too egregious no one cares, especially not the IRS.
◧◩
1552. campbe+2E1[view] [source] [discussion] 2023-11-22 16:43:54
>>lysecr+K41
I don't think the board was big enough for starters. Of the folks on their, only one (Adam) had experience as a leader of a for profit venture. Helen probably lacks the leadership background to make any progress pushing her priorities.
1553. jcutre+fE1[view] [source] 2023-11-22 16:44:49
>>staran+(OP)
I wonder what Satya will say here; will the AI CEO position there just evaporate?
◧◩◪
1554. acjohn+kE1[view] [source] [discussion] 2023-11-22 16:45:01
>>miohta+Vb1
How do you know the "wokes" aren't the ones who were grinding for years?

I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.

◧◩◪◨⬒
1555. uxp8u6+pE1[view] [source] [discussion] 2023-11-22 16:45:24
>>pauldd+xj1
Conflict of interest with what? The other board members? That's utterly irrelevant. Look up some big companies boards some day. You'll see.
replies(1): >>pauldd+ld2
◧◩◪
1556. deanCo+5F1[view] [source] [discussion] 2023-11-22 16:49:17
>>pug_mo+Cb
> I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists,

And there is also a class of people that resist all moderation on principle even when it's ultimately for their benefit. See, Americans whenever the FDA brings up any questions of health:

* "Gas Stoves may increase Asthma." -> "Don't you tread on me, you can take my gas stove from my cold dead hands!"

Of course it's ridiculous - we've been through this before with Asbestos, Lead Paint, Seatbelts, even the very idea of the EPA cleaning up the environment. It's not a uniquely American problem, but America tends to attract and offer success to the folks that want to ignore these on principles.

For every Asbestos there is a Plastic Straw Ban which is essentially virtue signalling by the types of folks you mention - meaningless in the grand scheme of things for the stated goal, massive in terms of inconvenience.

But the existence of Plastic Straw Ban does not make Asbestos, CFCs, or Lead Paint any safer.

Likewise, the existence of people that gravitate to positions of power and middle management does not negate the need for actual moderation in dozens of societal scenarios. Online forums, Social Networks, and...well I'm not sure about AI. Because I'm not sure what AI is, it's changing daily. The point is that I don't think it's fair to assume that anyone that is interested in safety and moderation is doing it out of a misguided attempt to pursue power, and instead is actively trying to protect and improve humanity.

Lastly, your portrayal of journalists as power figures is actively dangerous to the free press. This was never stated this directly until the Trump years - even when FOX News was berating Obama daily for meaningless subjects. When the TRUTH becomes a partisan subject, then reporting on that truth becomes a dangerous activity. Journalists are MOSTLY in the pursuit of truth.

◧◩◪◨⬒
1557. Cheeze+dF1[view] [source] [discussion] 2023-11-22 16:50:12
>>grafta+Wz
I wouldn't really give OpenAI credit for lasting 3 years. OpenAI lasted until they moment they had a successful commercial product. Principles are cheap when there is no actual consequences to sticking to them.
◧◩◪
1558. Sai_+pF1[view] [source] [discussion] 2023-11-22 16:50:49
>>nickpp+Y5
His instinct was to walk away from his offer. He had to be forced to buy the company.

His second wife apparently asked him to buy Twitter and fix its, in her opinion, liberal bias.

◧◩
1559. _b+FF1[view] [source] [discussion] 2023-11-22 16:52:08
>>garris+EJ
> There are obvious conflicts of interest here.

There are almost always obvious conflicts of interest. In a normal startup, VCs have a legal responsibility to act in the interest of the common shares, but in practice, they overtly act in the interest of the preferred shares that their fund holds.

replies(1): >>hyperh+Bt3
◧◩◪◨⬒
1560. g42gre+WF1[view] [source] [discussion] 2023-11-22 16:53:26
>>dontup+bG
No regex, you would use another copy of few-shot prompted GPT-4 as a filter for the first GPT-4!
◧◩
1561. bradle+hG1[view] [source] [discussion] 2023-11-22 16:54:57
>>garris+EJ
Major corporate boards are rife with "on paper" conflicts on interest - that's what happens when you want people with real management experience to sit on your board and act like responsible adults. This happens in every single industry and has nothing to do with tech or with OpenAI specifically.

In practice, board bylaws and common sense mean that individuals recuse themselves as needed and don't do stupid shit.

replies(6): >>iandan+0H1 >>ip26+UH1 >>dragon+oJ1 >>fouc+kP1 >>throwa+OZ1 >>dizzyd+ig3
◧◩◪
1562. purple+nG1[view] [source] [discussion] 2023-11-22 16:55:17
>>baking+G91
Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?

Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth

replies(1): >>baking+mJ1
◧◩◪◨⬒
1563. x86x87+xG1[view] [source] [discussion] 2023-11-22 16:55:50
>>xiwenc+wa
They were thinking about money. There you go. Seeing what you build crumble is not pleasant when this means you are financially impacted.
◧◩◪
1564. iandan+0H1[view] [source] [discussion] 2023-11-22 16:57:46
>>bradle+hG1
"In practice, board bylaws and common sense mean that individuals ... don't do stupid shit."

Were you watching a different show than the rest of us?

replies(5): >>badlog+3J1 >>jjoona+vK1 >>hinkle+9L1 >>freedo+9b2 >>dev_tt+3m3
◧◩◪◨⬒
1565. low_te+3H1[view] [source] [discussion] 2023-11-22 16:58:13
>>abkola+Kq
Set semantic or List semantic?
◧◩◪◨⬒
1566. baking+8H1[view] [source] [discussion] 2023-11-22 16:58:30
>>blacko+Uz1
I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.
◧◩◪
1567. ip26+UH1[view] [source] [discussion] 2023-11-22 17:01:03
>>bradle+hG1
Reminds me of the “revolving door” problem. Obvious risk of corruption and conflict of interest, but at the same time experts from industry are the ones with the knowledge to be effective regulators. Not unlike how many good patent attorneys were previously engineers.
◧◩◪
1568. jetset+XH1[view] [source] [discussion] 2023-11-22 17:01:33
>>auggie+2z
Not a given that it is here to stay and grow after the company showed itself in such a chaotic state. Also, they need a profitable product - it is not like they are selling Iphones and such..
◧◩◪◨⬒⬓
1569. Crespy+9I1[view] [source] [discussion] 2023-11-22 17:02:45
>>qup+hh1
I always heard:

There are two hard problems: naming things, cache invalidation, and off-by-one errors.

replies(1): >>maxlin+q74
◧◩◪◨
1570. Toucan+hI1[view] [source] [discussion] 2023-11-22 17:03:44
>>danari+hd1
> IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

I mean that's certainly been my experience of it thus far, is companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.

replies(1): >>danari+KJ1
◧◩
1571. hacker+JI1[view] [source] [discussion] 2023-11-22 17:05:48
>>garris+EJ
Not to mention, the mission of the Board cannot be "build safe AGI" anymore. Perhaps something more consistent with expanding shareholder value and capitalism, as the events of this weekend has shown.

Delivering profits and shareholder value is the sole and dominant force in capitalism. Remains to be seen whether that is consistent with humanity's survival

◧◩◪◨
1572. future+MI1[view] [source] [discussion] 2023-11-22 17:06:13
>>ilrwbw+as1
nothing screams 'protect public interest' more than Wall Streets biggest cheerleader during 2008 financial crisis. who's next, Richard S. Fuld Jr ? Should the Enron guys be included ?
◧◩◪
1573. rurp+ZI1[view] [source] [discussion] 2023-11-22 17:07:11
>>haunte+ih
Did Apple raise funds and spend a lot of time promoting itself as a giant apple that would feed humanity?
◧◩◪◨
1574. badlog+3J1[view] [source] [discussion] 2023-11-22 17:07:21
>>iandan+0H1
And we're seeing the result in real-time. Stupid shit doers have been replaced with hopefully-less-stupid-shit-doers.

It's a real shame too, because this is a clear loss for the AI Alignment crowd.

I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.

replies(1): >>alsetm+w02
◧◩
1575. segasa+4J1[view] [source] [discussion] 2023-11-22 17:07:29
>>Satam+0a
>a real disruptor must be brewing somewhere unnoticed, for now.

Anthropic.

◧◩◪◨
1576. hacker+8J1[view] [source] [discussion] 2023-11-22 17:07:36
>>dragon+rx1
well not anymore, as they cannot function as a nonprofit.

also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive, which Elon is miffed about and Sam has defended explicitly

replies(1): >>dragon+CL1
◧◩◪◨
1577. baking+mJ1[view] [source] [discussion] 2023-11-22 17:08:29
>>purple+nG1
Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.
◧◩◪
1578. dragon+oJ1[view] [source] [discussion] 2023-11-22 17:08:47
>>bradle+hG1
A corporation acting (due to influence from a conflicted board member that doesn't recuse) contrary to the interests of its stockholders and in the interest of the conflicted board member or who they represent potentially creates liability of the firm to its stockholders.

A charity acting (due to the influence of a conflicted board member that doesn't recuse) contrary to its charitable mission in the interests of the conflicted board member or who they represent does something similar with regard to liability of the firm to various stakeholders with a legally-enforceable interest in the charity and its mission, but also is also a public civil violation that can lead to IRS sanctions against the firm up to and including monetary penalties and loss of tax exempt status on top of whatever private tort liability exists.

◧◩◪◨⬒⬓
1579. ghaff+tJ1[view] [source] [discussion] 2023-11-22 17:09:06
>>mstade+JT
Signing petitions is also cheap. It doesn't mean that everyone signing has thought deeply and actually made a life-changing decision.
◧◩◪◨⬒⬓⬔
1580. kcplat+uJ1[view] [source] [discussion] 2023-11-22 17:09:09
>>dahart+gn1
I have had a long career and have been through hostile mergers several times and at no point have I ever seen large numbers of employees act outside of their self-interest for an executive. It just doesn’t happen. Even in my career, with executives who are my friends, I would not act outside my personal interests. When things are corporately uncertain and people worry about their working livelihoods they just don’t tend to act that way. They tend to hunker heads down or jump independently.

The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.

replies(1): >>dahart+bC2
◧◩◪◨⬒
1581. danari+KJ1[view] [source] [discussion] 2023-11-22 17:10:46
>>Toucan+hI1
I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.

(But yes; what you describe is absolutely happening left and right...)

◧◩◪◨⬒⬓
1582. endtim+RJ1[view] [source] [discussion] 2023-11-22 17:11:34
>>citygu+if1
> anyone who works at OpenAI besides the board seems to be morally bankrupt.

People concerned about AI safety were probably not going to join in the first place...

◧◩◪◨
1583. jjoona+vK1[view] [source] [discussion] 2023-11-22 17:15:11
>>iandan+0H1
No, this is the part of the show where the patronizing rhetoric gets trotted out to rationalize discarding the principles that have suddenly become inconvenient for the people with power.
replies(2): >>photoc+Gp3 >>dragon+Sa6
1584. carapa+zK1[view] [source] 2023-11-22 17:15:21
>>staran+(OP)
So it's the Osiris myth?
◧◩
1585. boh+MK1[view] [source] [discussion] 2023-11-22 17:16:18
>>garris+EJ
Whenever there's an obvious conflict, assume it's not enforced or difficult to litigate or has relatively irrelevant penalties. Experts/lawyers who have a material stake in getting this right have signed off on it. Many (if not most) people with enough status to be on the board of a fortune 500 company tend to also be on non-profit boards. We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.
replies(2): >>hinkle+nM1 >>Xelyne+kv3
◧◩◪◨
1586. hinkle+9L1[view] [source] [discussion] 2023-11-22 17:17:56
>>iandan+0H1
I get a lostredditor vibe way too often here. Oddly more than Reddit.

I think people forget sometimes that comments come with a context. If we are having a conversation about Deep Water Horizon someone will chime in about how safe deep sea oil exploration is and how many failsafes blah blah blah.

“Do you know where you are right now?”

replies(5): >>Juicyy+5P1 >>LordDr+nP1 >>mhh__+3V1 >>alsetm+MZ1 >>iandan+U32
◧◩◪◨⬒
1587. dragon+CL1[view] [source] [discussion] 2023-11-22 17:19:48
>>hacker+8J1
> well not anymore, as they cannot function as a nonprofit.

There's been a lot of news lately, but unless I've missed something, even with the tentative agreement of a new board for the charity nonprofit, they are and plan to remain a charity nonprofit with the same nominal mission.

> also infamously they fundraised as a nonprofit, but retracted to admit they needed a for profit structure to thrive

No, they admitted they needed to sell products rather than merely take donations to survive, and needed to be able to return profits from doing that to investors to scale up enough to do that, so they formed a for-profit subsidiary with its own for-profit subsidiary, both controlled by another subsidiary, all subordinated to the charity nonprofit, to do that.

replies(1): >>DebtDe+SO1
◧◩◪◨⬒
1588. swores+DL1[view] [source] [discussion] 2023-11-22 17:19:54
>>jakder+pf1
The context of the comment thread you're replying to was a response to a comment suggesting the IRS will get involved in the question of whether MS have too much influence over OpenAI, it was not the subject of general industry regulation.

But hey, at least you fitted in a snarky line about ADHD in the comment you wrote while not having paid attention to the 3 comments above it.

replies(1): >>freedo+Cc2
◧◩
1589. Clarit+KL1[view] [source] [discussion] 2023-11-22 17:20:35
>>TheAce+h2
https://www.msn.com/en-us/money/careersandeducation/openais-...
◧◩◪◨
1590. ketzo+QL1[view] [source] [discussion] 2023-11-22 17:20:52
>>hacker+8v1
Yeah, but they also didn't elaborate in the slightest about how they were serving the charter with their actions.

If they were super-duper worried about how Sam was going to cause a global extinction event with AI, or even just that he was driving the company in too commercial of a direction, they should have said that to everyone!

The idea that they could fire the CEO with a super vague, one-paragraph statement, and then expect 800 employees who respect that CEO to just... be totally fine with that is absolutely fucking insane, regardless of the board's fiduciary responsibilities. They're board members, not gods.

replies(1): >>NanoYo+q92
◧◩◪◨
1591. ketzo+1M1[view] [source] [discussion] 2023-11-22 17:21:38
>>madeof+lC1
He was. OpenAI board as of last Thursday was Altman, Sutskever, Brockman, D'Angelo, Macaulay, Toner.
◧◩◪
1592. braiam+bM1[view] [source] [discussion] 2023-11-22 17:22:33
>>random+Yf
> If they went through due process, notified their employees and investors, and put out a statement of why they're firing the CEO

Did you read the bylaws? They have no responsibility to do any of that.

replies(3): >>ksd482+NM1 >>eksaps+n12 >>pauldd+af2
◧◩◪
1593. hinkle+nM1[view] [source] [discussion] 2023-11-22 17:23:40
>>boh+MK1
Do you remember before Bill Gates got into disease prevention he thought that “charity work” could be done by giving away free Microsoft products? I don’t know who sat him down and explained to him how full of shit he was but they deserve a Nobel Peace Prize nomination.

Just because someone says they agree with a mission doesn’t mean they have their heads screwed on straight. And my thesis is that the more power they have in the real world the worse the outcomes - because powerful people become progressively immune to feedback. This has been working swimmingly for me for decades, I don’t need humility in a new situation.

◧◩◪
1594. OnAYDI+tM1[view] [source] [discussion] 2023-11-22 17:24:08
>>random+Yf
Actually the board may not have acted in most professional way but in due process they kind of proved Sam Altman is unfireable for sure, even if they didn't intend to.

They did notify everyone. They did it after firing which is within their rights. They may also choose to stay silent if there is legitimate reason for it such as making the reasons known may harm the organization even more. This is speculation obviously.

In any case they didn't omit doing anything they need to and they didn't exercise a power they didn't have. The end result is that the board they choose will be impotent at the moment, for sure.

replies(6): >>xvecto+7T1 >>qudat+yY1 >>eksaps+I02 >>jonas2+fb2 >>pauldd+Be2 >>random+We3
◧◩◪
1595. jetset+KM1[view] [source] [discussion] 2023-11-22 17:25:13
>>sashan+Ko
There is no comparison to himself in the previous comment. Also, did you measure their IQ to put them on such a pedestal? There are lots of examples for people being great in their niche they invested thousands of hours in, while being total failures in other areas. You could see that with Mr. Sutskever over the weekend. He must be excellent in ML as he dedicated his life to researching this field of knowledge, but he lacks practice in critical thinking in management contexts.
◧◩◪◨
1596. ksd482+NM1[view] [source] [discussion] 2023-11-22 17:25:33
>>braiam+bM1
That's not the point. Whether or not it was in the bylaws, this would have been the sensible thing to do.
◧◩◪◨⬒⬓⬔⧯▣
1597. svnt+1N1[view] [source] [discussion] 2023-11-22 17:26:08
>>doktri+fJ
I’m not sure how the point stands. The iPhone was introduced during that tenure, then the App Store, then Jobs decided Google was also headed toward their own full mobile ecosystem, and released Schmidt. None of that was a conflict of interest at the beginning. Jobs initially didn’t even think Apple would have an app store.

Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

You forgot to do Oracle and Tesla.

replies(1): >>doktri+dX1
◧◩
1598. mcmcmc+2N1[view] [source] [discussion] 2023-11-22 17:26:08
>>sashan+S8
Larry Summers is the scary pick here. His views on banking deregulation led to the GFC, and he's had several controversies over racist and sexist positions. Plus he's an old pal of Epstein and made several trips to his island.
replies(1): >>Joeri+fT1
◧◩◪◨⬒
1599. FireBe+7N1[view] [source] [discussion] 2023-11-22 17:26:18
>>373947+SH
There were comments the other day along the lines of "I wouldn't be surprised if someone came by Ilya's desk while he was deep in research and said 'sign this' and he just signed it and gave it back to them without even looking and didn't realize."

People will contort themselves into pretzels to invent rationalizations.

◧◩◪◨⬒⬓⬔
1600. filled+dN1[view] [source] [discussion] 2023-11-22 17:26:35
>>yeck+Or1
Yeah, there are thousands of hospitals across the US and they don't run 24/7 shifts just to treat the flu or sprained ankles. Disabling events happen a lot.

(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)

◧◩◪
1601. Turing+lN1[view] [source] [discussion] 2023-11-22 17:26:56
>>baking+G91
> I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.

I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.

◧◩◪◨
1602. mcmcmc+BN1[view] [source] [discussion] 2023-11-22 17:28:28
>>bkyan+Ii
Well yeah... if Ilya hadn't flipped the board would still have the upper hand and Sam would not be back as CEO.
◧◩◪◨⬒⬓⬔⧯
1603. tstrim+LN1[view] [source] [discussion] 2023-11-22 17:29:04
>>youcan+ia1
Humanity no. But it's not humanity on the OpenAI board. It's 9 individuals. Individuals have amazing capacity for learning and improvement.
◧◩◪◨⬒⬓⬔
1604. kjkjad+RN1[view] [source] [discussion] 2023-11-22 17:29:19
>>setham+WR
The municipal utility provider has a right to violence? The park service? Where do you live? Los Angeles during Blade Runner?
◧◩◪
1605. mcmcmc+5O1[view] [source] [discussion] 2023-11-22 17:29:53
>>astran+Ui
By rule of law do you mean rule of lobbyists? Laws don't apply to people with wealth and connections.
replies(1): >>astran+IB2
◧◩◪◨⬒⬓⬔
1606. yeck+fO1[view] [source] [discussion] 2023-11-22 17:30:22
>>robert+Mz1
Elon was once "in possession" (influential investor and part of the board) of OpenAI, but it was since taken from him and he is evidently bitter about it.
◧◩◪◨⬒
1607. FartyM+uO1[view] [source] [discussion] 2023-11-22 17:31:22
>>epups+5B
It's not only about ChatGPT. OpenAI will probably make other things in the future.
1608. hacker+OO1[view] [source] 2023-11-22 17:33:22
>>staran+(OP)
So OpeNAI charter still in place? Once OpenAI reaches AGI, Microsoft won't be able to access the tech. Then what will happen to Microsoft when other commercial competitors catch up and also reach AGI one or two years later?
◧◩◪
1609. notfed+PO1[view] [source] [discussion] 2023-11-22 17:33:29
>>fatbir+r2
Nah, anyone who voted Sam out is in timeout.
replies(1): >>fatbir+7b3
◧◩◪◨⬒⬓
1610. DebtDe+SO1[view] [source] [discussion] 2023-11-22 17:33:50
>>dragon+CL1
>they are and plan to remain a charity nonprofit

Once the temporary board has selected a permanent board, give it a couple of months and then get back to us. They will almost certainly choose to spin the for-profit subsidiary off as an independent company. Probably with some contractual arrangement where they commit x funding to the non-profit in exchange for IP licensing. Which is the way they should have structured this back in 2019.

replies(1): >>tempes+Pv2
◧◩◪◨⬒
1611. Juicyy+5P1[view] [source] [discussion] 2023-11-22 17:35:03
>>hinkle+9L1
Its a more technical space then reddit. Youre gonna have more know it alls spewing
replies(1): >>jachee+6M3
1612. taway1+aP1[view] [source] 2023-11-22 17:35:31
>>staran+(OP)
Some perspective ...

One developer (Ilya) vs. One businessman (Sam) -> Sam wins

Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win

From the outside it looks like developers held the power all along ... which is how it should be.

replies(13): >>rexare+tP1 >>jessen+gR1 >>philip+IR1 >>adverb+yS1 >>sokolo+ES1 >>hsavit+7U1 >>jejeyy+xV1 >>dylan6+g22 >>zeroha+bc2 >>Quenti+rg2 >>m00x+rk2 >>awb+Ro2 >>nikcub+Hw2
◧◩◪
1613. fouc+kP1[view] [source] [discussion] 2023-11-22 17:36:10
>>bradle+hG1
OpenAI isn't a typical corporation but a 501(c)(3), so bylaws & protections that otherwise might exist appear to be lacking in this situation.
replies(1): >>dragon+K42
◧◩◪◨⬒
1614. LordDr+nP1[view] [source] [discussion] 2023-11-22 17:36:29
>>hinkle+9L1
>I think people forget sometimes that comments come with a context.

I mean, this is definitely one of my pet peeves, but the wider context of this conversation is specifically a board doing stupid shit, so that's a very relevant counterexample to the thing being stated. Board members in general often do stupid/short-sighted shit (especially in tech), and I don't know of any examples of corporate board members recusing themselves.

replies(2): >>mhluon+9U1 >>teachi+tN6
◧◩
1615. rexare+tP1[view] [source] [discussion] 2023-11-22 17:36:53
>>taway1+aP1
Money won.
◧◩◪◨⬒⬓
1616. cellar+yP1[view] [source] [discussion] 2023-11-22 17:37:11
>>kcplat+PU
There are plenty of examples of workers unions voting with similar levels of agreement. Here are two from the last couple months:

> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.

https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...

> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.

https://variety.com/2023/biz/news/wga-ratify-contract-end-st...

◧◩◪◨⬒⬓⬔⧯
1617. deckar+8Q1[view] [source] [discussion] 2023-11-22 17:39:12
>>yterdy+hD1
We already know some of the smartest people are willing to sell us out. Because they work for FAANG ad tech, spending their days figuring out how to maximize the eyeballs they reach while sucking up all your privacy.

It's a post-"Don't be evil" world today.

replies(1): >>jacque+Pd2
◧◩◪◨⬒
1618. blacko+hQ1[view] [source] [discussion] 2023-11-22 17:39:51
>>pclmul+Tj1
Compute credits are more valuable. It is more difficult to get GPUs than real money.
replies(1): >>pclmul+r02
◧◩◪◨⬒⬓⬔
1619. dizzyd+EQ1[view] [source] [discussion] 2023-11-22 17:41:05
>>ravst3+Wd
Does it actually prevent regulators going after them?
◧◩◪◨
1620. jxi+NQ1[view] [source] [discussion] 2023-11-22 17:41:40
>>jkapla+Zs1
Right, so getting Sam fired was retaliation for that.
◧◩◪◨⬒
1621. deanme+YQ1[view] [source] [discussion] 2023-11-22 17:42:34
>>checky+Kw1
Well, if ye really want ol' me to put me noggin to it... I reckon ye could start with addin' a proper gaming corner! Ye know, some sturdy tables 'n' comfy chairs where the lads 'n' lasses can gather 'round for some good ol' dice chuckin' or card playin'. Next up, a big ol' fire pit! Not just any fire, mind ye, but one where we can roast our snacks 'n' share tales of our adventures. And lastly, a grand stash of provisions—plenty o' snacks 'n' drinks to keep the energy high for when we're plannin' our next raid or just takin' a breather. How's that for some improvements, eh?
◧◩
1622. jessen+gR1[view] [source] [discussion] 2023-11-22 17:43:40
>>taway1+aP1
Yes, 95% agreement in any company is unprecedented but:

1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

2. Sam approved each hire in the first place.

3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Either way on how they got to that conclusion of banding together to quit, it was a good idea, and it worked. And it is a check on power for a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".

replies(2): >>anders+7b2 >>brrrrr+Nd2
◧◩
1623. philip+IR1[view] [source] [discussion] 2023-11-22 17:45:36
>>taway1+aP1
Are you sure Ilya was the root of this.

He backed it and then signed the pledge to quit if it wasn't undone.

What's the evidence he was behind it and not D'Angelo?

replies(3): >>dr_dsh+TW1 >>__loam+qZ1 >>jivetu+G82
◧◩
1624. mkagen+SR1[view] [source] [discussion] 2023-11-22 17:46:06
>>laserl+gb
None of the theories by HNers on day 1 of this drama was right - not a single one and it had 1 million comments. So, lets not guess anymore and just sit back.
◧◩◪◨
1625. wnoise+UR1[view] [source] [discussion] 2023-11-22 17:46:13
>>fatbir+nj1
It is quite common. Still not cool.
◧◩◪◨⬒⬓⬔
1626. slg+dS1[view] [source] [discussion] 2023-11-22 17:47:21
>>gdhkgd+Nk1
> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.

◧◩
1627. adverb+yS1[view] [source] [discussion] 2023-11-22 17:48:59
>>taway1+aP1
There are three dragons:

Employees, customers, government.

If motivated and aligned, any of these three could end you if they want to.

Do not wake the dragons.

replies(2): >>pdntsp+002 >>bossyT+tx2
◧◩
1628. sokolo+ES1[view] [source] [discussion] 2023-11-22 17:49:18
>>taway1+aP1
Is your first “-> Sam wins” different than what you intended?
◧◩◪◨
1629. stetra+5T1[view] [source] [discussion] 2023-11-22 17:51:31
>>hacker+8v1
Sure, there is a difference there. But the actions that erode confidence are the same.

You could tell the same story about a rising sports team replacing their star coach, or a military sacking a general the day after he marched through the streets to fanfare after winning a battle.

Even without the money involved, a sudden change in leadership with no explanation, followed only by increasing uncertainty and cloudy communication, is not going to go well for those who are backing you.

Even in the most altruistic version of OpenAI's goals I'm fairly sure they need employees and funding to pay those employees and do the research.

◧◩◪◨
1630. xvecto+7T1[view] [source] [discussion] 2023-11-22 17:51:36
>>OnAYDI+tM1
Their communication was completely insufficient. There is no possible world on which the board could be considered "competent" or "professional."
◧◩◪
1631. Joeri+fT1[view] [source] [discussion] 2023-11-22 17:51:56
>>mcmcmc+2N1
I assume Summers is there as a politically connected operative, to make sure OpenAI remains influential in Washington.
◧◩◪◨⬒⬓
1632. rozap+mT1[view] [source] [discussion] 2023-11-22 17:52:13
>>citygu+if1
Easy to see how humans would join a non profit for the vibes, and then when they create one of the most compelling products of the last decade worth billions of dollars, quickly change their thinking into "wait, i should get rewarded for this".
◧◩◪◨⬒
1633. HaZeus+wT1[view] [source] [discussion] 2023-11-22 17:53:35
>>rospay+B9
More than I'll probably ever have to brag about during my tenure in the workforce, lol
◧◩◪◨⬒⬓
1634. deckar+yT1[view] [source] [discussion] 2023-11-22 17:53:52
>>nmfish+cK
> Google lasted a good 10 years

not sure what event you're thinking of, but Google was a public company before 10 years and they started their first ad program just barely more than a year after forming as a company in 1998.

replies(1): >>nmfish+zp3
◧◩
1635. hsavit+7U1[view] [source] [discussion] 2023-11-22 17:56:02
>>taway1+aP1
seems like the union of developers is stronger than the company itself. hence why unions are so frowned upon by big tech corporate leadership
replies(1): >>JacobT+Qy3
1636. dang+8U1[view] [source] 2023-11-22 17:56:03
>>staran+(OP)
All: there are over 1800 comments in this thread. If you want to read them all, click More at the bottom of each page, or like this: (edit: er, yes they do have to be wellformed don't they):

https://news.ycombinator.com/item?id=38375239&p=2

https://news.ycombinator.com/item?id=38375239&p=3

https://news.ycombinator.com/item?id=38375239&p=4 (...etc.)

◧◩◪◨⬒⬓
1637. mhluon+9U1[view] [source] [discussion] 2023-11-22 17:56:03
>>LordDr+nP1
Common example of recusal is CEO comp when the CEO is on the board.
replies(1): >>alsetm+902
◧◩◪◨⬒
1638. mhh__+3V1[view] [source] [discussion] 2023-11-22 17:59:38
>>hinkle+9L1
So?
◧◩
1639. jejeyy+xV1[view] [source] [discussion] 2023-11-22 18:01:19
>>taway1+aP1
$$$ vs. Safety -> $$$ wins.

Employees who have $$$ incentive threaten to quit if that is taken away. News at 8.

replies(1): >>baby+3W1
◧◩◪
1640. baby+3W1[view] [source] [discussion] 2023-11-22 18:02:54
>>jejeyy+xV1
Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.
replies(4): >>jejeyy+0X1 >>hacker+1Y1 >>mi_lk+SZ1 >>DirkH+6d2
◧◩
1641. baby+iW1[view] [source] [discussion] 2023-11-22 18:03:36
>>laserl+gb
How did you get there? The board did fire him, they exercised their right.
replies(1): >>eksaps+M72
◧◩
1642. freedo+nW1[view] [source] [discussion] 2023-11-22 18:03:53
>>voiceb+6o1
That is one of the greatest lines of all time. Classic
◧◩◪
1643. dr_dsh+TW1[view] [source] [discussion] 2023-11-22 18:06:12
>>philip+IR1
If we only look at the outcomes (dismantling of board), Microsoft and Sam seem to have the most motive.
◧◩◪◨
1644. jejeyy+0X1[view] [source] [discussion] 2023-11-22 18:06:36
>>baby+3W1
of course the employees are motivated by $$$ - is that even a question?
replies(1): >>Xelyne+1S3
◧◩◪◨⬒
1645. ezfe+3X1[view] [source] [discussion] 2023-11-22 18:06:40
>>UrineS+Bw1
Yes, I misspoke - I meant nonprofit
replies(1): >>zja+Vs2
◧◩◪◨⬒⬓⬔⧯▣▦
1646. doktri+dX1[view] [source] [discussion] 2023-11-22 18:07:36
>>svnt+1N1
> Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.

replies(1): >>svnt+NV2
◧◩◪◨⬒⬓⬔⧯
1647. nickpp+BX1[view] [source] [discussion] 2023-11-22 18:09:45
>>iamfli+Iv1
I am sorry, I greatly respect and admire Nick Cave, but that letter sounded to me like the lament of a scribe decrying the invention of the printing press.

He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.

replies(1): >>makewo+5g5
◧◩◪◨⬒⬓
1648. freedo+VX1[view] [source] [discussion] 2023-11-22 18:10:54
>>logicc+kH
Can you give some examples of who is saying that? I haven't heard that, but I also can't name any "far-right accelerationsist" people either so I'm guessing this is a niche I've completely missed
◧◩◪◨
1649. hacker+1Y1[view] [source] [discussion] 2023-11-22 18:11:11
>>baby+3W1
The large majority of people are motivated by $$$ (or fame) and if they all tell me otherwise I know many of them are lying.
◧◩◪◨
1650. qudat+yY1[view] [source] [discussion] 2023-11-22 18:13:22
>>OnAYDI+tM1
> proved Sam Altman is unfireable [without explaining why to its employees].
◧◩◪
1651. Marran+DY1[view] [source] [discussion] 2023-11-22 18:13:37
>>gandut+8Q
A blow for the common man!
◧◩
1652. rsanek+JY1[view] [source] [discussion] 2023-11-22 18:14:03
>>melvin+nz1
http://paulgraham.com/fundraising.html
◧◩◪
1653. __loam+qZ1[view] [source] [discussion] 2023-11-22 18:16:31
>>philip+IR1
I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.
replies(1): >>Xelyne+KT3
◧◩
1654. qudat+xZ1[view] [source] [discussion] 2023-11-22 18:16:51
>>shubha+B7
> If you truly believed that Superhuman AI was near, and it could act with malice, won't you try to slow things down a bit?

No, because it is an effort in futility. We are evolving into extinction and there is nothing we can do about it. https://bower.sh/in-love-with-a-ghost

◧◩◪◨⬒
1655. alsetm+MZ1[view] [source] [discussion] 2023-11-22 18:17:17
>>hinkle+9L1
I get what you're saying, but I also live in the world and see the mechanics of capitalism. I may be a person who's interested in tech, science, education, archeology, etc. That doesn't mean that I don't also have political views that sometimes overlap with a lot of other very-online people.

I think the comment to which you replied has a very reddit vibe, no doubt. But also, it's a completely valid point. Could it have been said differently? Sure. But I also immediately agreed with the sentiment.

replies(1): >>hinkle+A52
◧◩◪
1656. throwa+OZ1[view] [source] [discussion] 2023-11-22 18:17:29
>>bradle+hG1
No conflict, no interest.
◧◩◪◨
1657. mi_lk+SZ1[view] [source] [discussion] 2023-11-22 18:17:55
>>baby+3W1
It's you who are naive if you really think the majority of those 7xx employees care more about safe AGI than their own equity upside
replies(2): >>nh2342+o42 >>concor+Rd2
◧◩◪
1658. pdntsp+002[view] [source] [discussion] 2023-11-22 18:18:57
>>adverb+yS1
The Board is another one, if you're CEO.
replies(2): >>elliot+322 >>adverb+cm2
◧◩◪◨
1659. halfma+102[view] [source] [discussion] 2023-11-22 18:18:59
>>flappy+wJ
...this doesn't seem instrumental?
replies(2): >>flappy+Z72 >>CRConr+Y7t
◧◩◪◨⬒⬓⬔
1660. alsetm+902[view] [source] [discussion] 2023-11-22 18:19:32
>>mhluon+9U1
That's what I would term a black-and-white case. I don't think there's anyone with sense who would argue in good faith that a CEO should get a vote on their own salary. There are many degrees of grey between outright corruption and this example, and I think the concern lies within.
1661. macrae+e02[view] [source] 2023-11-22 18:19:54
>>staran+(OP)
What a delightful shit show. I don't even personally care whether Sam Altman is running OpenAI but it brings me no end of schadenfreude to see a bunch of AI Doomers make asses of themselves. Ethical Altruism truly believes that AI could destroy all of human life on the planet which is a preposterous belief. There are so many better things to worry about, many of which are happening right now! These people are not serious and should not hold serious positions of power. It's not hard to see the dangers of AI: replacing a lot of make-work that exists in the world, giving shoddy answers with high confidence, taking humans out of the loop of responsible decision making, but I cannot believe that it will become so smart that it becomes an all powerful god. These people worship intelligence (hence why they believe that with infinite intelligence comes infinite power) but look what happens when they actually have power! Ridiculous.
1662. rashid+p02[view] [source] 2023-11-22 18:20:34
>>staran+(OP)
Could someone do a sentiment analysis from the comments and share it with all of us who can’t read all the 1,700+ comments?
◧◩◪◨⬒⬓
1663. pclmul+r02[view] [source] [discussion] 2023-11-22 18:20:41
>>blacko+hQ1
As any AI startup can tell you: credits != quota

Right now, quota is very valuable and scarce, but credits are easy to come by. Also, Azure credits themselves are worth about $0.20 per dollar compared to the alternatives.

◧◩◪◨⬒
1664. alsetm+w02[view] [source] [discussion] 2023-11-22 18:20:53
>>badlog+3J1
> at least there is a strong moral compass in the field

Is this still true when the board gets overhauled after trying to uphold the moral compass.

replies(1): >>saalwe+iQ2
◧◩◪◨
1665. eksaps+I02[view] [source] [discussion] 2023-11-22 18:21:50
>>OnAYDI+tM1
Getting your point, although the fact that something is within your rights, may or may not mean certainly that it's also a proper thing to do ... ?

Like, nobody is going to arrest you for spitting on the street especially if you're an old grandpa. Nobody is going to arrest you for saying nasty things about somebody's mom.

You get my point, to some boundary both are kinda within somebody's rights, although can be suable or can be reported for misbehaving. But that's the keypoint, misbehavior.

Just because something is within your rights doesn't mean you're not misbehaving or not acting in an immature way.

To be clear, Im not denying or agreeing that the board of directors acted in an immature way. I'm just arguing against the claim that was made within your text that just because someone is acting within their rights that it's also a "right" thing to do necessary, while that is not the case always.

◧◩◪◨⬒⬓⬔
1666. Mistle+J02[view] [source] [discussion] 2023-11-22 18:21:52
>>nickpp+Vr1
I think this summarizes it pretty well. Even if you don't mind the garbage, the future AI will feed on this garbage, creating AI and human brain gray goo.

https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...

https://en.wikipedia.org/wiki/Gray_goo

replies(1): >>nickpp+Nb2
1667. Ruq+V02[view] [source] 2023-11-22 18:22:30
>>staran+(OP)
that fast huh?
◧◩◪
1668. jacque+412[view] [source] [discussion] 2023-11-22 18:23:38
>>stetra+wo1
You forgot: and offered the company for a bag of peanuts to Microsoft.
◧◩
1669. jdlyga+612[view] [source] [discussion] 2023-11-22 18:23:47
>>voiceb+6o1
I tried New Coke when it was re-released for Stranger Things. It really is a lot better than Coca Cola Classic. It's a shame that it failed.
◧◩◪◨⬒⬓⬔
1670. _heimd+h12[view] [source] [discussion] 2023-11-22 18:24:48
>>Wesley+YA1
> In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

Its a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech though, even more so for AI where all the money is today. Our industry is paid extremely well and anyone that wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would have easily been 800 jobs floating around for AI experts that chose to leave OpenAI because they preferred the for-profit approach.

At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.

◧◩◪◨
1671. eksaps+n12[view] [source] [discussion] 2023-11-22 18:25:26
>>braiam+bM1
you don't have responsibility for washing yourself before going to a mass transport vehicle full of people. it's within your rights not to do that and be the smelliest person in the bus.

does it mean it's right or professional?

getting your point, but i hope you get the point i make as well, that just because you have no responsibility for something doesn't mean you're right or not unethical for doing or not doing that thing. so i feel like you're losing the point a little.

◧◩◪
1672. eksaps+V12[view] [source] [discussion] 2023-11-22 18:27:47
>>stetra+wo1
no but the people like the developers, clients, government etc. have also the right to exercise their revolt against decisions they don't like as well. don't you think?

like, you get me, the board of directors is not the only actual power within a company, and that was proven by the whole scandal of Sam being discarded/fired that was made by the developers themselves. they also have the right to exercise their right to just not work at this company without the leader they may had liked.

replies(1): >>stetra+vj2
◧◩◪◨
1673. elliot+322[view] [source] [discussion] 2023-11-22 18:28:05
>>pdntsp+002
I think the parent comment’s point is that the board is not one, since the board was defeated (by the employee dragon).
replies(1): >>pdntsp+T52
◧◩◪◨⬒⬓
1674. freedo+422[view] [source] [discussion] 2023-11-22 18:28:05
>>hef198+rm
Apologies this is very off topic, but I don't know anyone from Germany that I can ask and you opened the door a tiny bit by mentioning the holocaust :-)

I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.

Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?

◧◩
1675. jklein+922[view] [source] [discussion] 2023-11-22 18:28:30
>>garris+EJ
I'm a little bit confused, are you saying that the IRS would have some sort of beef with employees of Microsoft serving on the board of a 501(c)(3)?
◧◩
1676. dylan6+g22[view] [source] [discussion] 2023-11-22 18:29:01
>>taway1+aP1
It's not like this is the first:

One developer (Woz) vs One businessman (Jobs) -> Jobs wins

◧◩◪◨⬒
1677. gcanyo+f32[view] [source] [discussion] 2023-11-22 18:32:48
>>cables+BJ
Who knew the board was Sicilian?
◧◩◪◨⬒⬓⬔
1678. WendyT+m32[view] [source] [discussion] 2023-11-22 18:33:02
>>himara+Yu1
What surprises me is how much regard the valley has for this guy. Doesn’t Quora suck terribly? I’m for sure its target demographic and I cannot for the life of me pull value from it. I have tried!
replies(2): >>himara+f62 >>JSavag+9T3
◧◩◪◨⬒
1679. iandan+U32[view] [source] [discussion] 2023-11-22 18:35:30
>>hinkle+9L1
I apologize, the comment's irony overwhelmed my snark containment system.
replies(1): >>Obscur+sV3
◧◩◪◨
1680. s1arti+X32[view] [source] [discussion] 2023-11-22 18:35:33
>>iterat+dX
That is part of effective leadership, strategy, and management.

I didn't say anything about higher order values. Getting people to want what you want, and do what you want is a skill.

Hitler was an extraordinary leader. That doesn't imply anything about higher values.

◧◩◪◨⬒
1681. nh2342+o42[view] [source] [discussion] 2023-11-22 18:37:22
>>mi_lk+SZ1
Why would anyone care about safe agi? its vaporware.
replies(2): >>mecsre+q62 >>stillw+5a2
◧◩◪◨
1682. dragon+K42[view] [source] [discussion] 2023-11-22 18:38:43
>>fouc+kP1
501c3's also have governing internal rules, and the threat of penalties and loss of status imposed by the IRS gives them additional incentive to safeguard against even the appearance of conflict being manifested into how they operate (whether that's avoiding conflicted board members or assuring that they recuse where a conflict is relevant.)

If OpenAI didn't have adequate safeguards, either through negligence or becauase it was in fact being run deliberately as a fraudulent charity, that's a particular failure of OpenAI, not a “well, 501c3’s inherently don't have safeguard” thing.

replies(1): >>kevin_+rd3
◧◩◪◨⬒⬓
1683. hinkle+A52[view] [source] [discussion] 2023-11-22 18:41:41
>>alsetm+MZ1
Oh I wasn’t complaining about the parent, I was complaining it needed to be said.

We are talking about a failure of the system, in the context of a concrete example. Talking about how the system actually works is only appropriate if you are drawing specific arguments up about how this situation is an anomaly, and few of them do that.

Instead it often sounds like “it’s very unusual for the front to fall off”.

◧◩◪
1684. Davidz+D52[view] [source] [discussion] 2023-11-22 18:41:56
>>sgt101+Pz1
This is incorrect. For example the ability to translate between languages is emergent. Also gpt4 can do arithmetic better than the average person. Especially considering the process it arrives at the computation is via intuition basically vs algorithmic. Btw just as an aide the newer models can also write code to do certain tasks, like arithmetic.
replies(2): >>sgt101+7e4 >>james-+O97
◧◩◪◨⬒
1685. pdntsp+T52[view] [source] [discussion] 2023-11-22 18:42:35
>>elliot+322
I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.

I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.

replies(1): >>davesq+ao2
◧◩◪◨⬒⬓⬔⧯
1686. himara+f62[view] [source] [discussion] 2023-11-22 18:43:54
>>WendyT+m32
His claim to fame comes from scaling FB. Quora shows he has questionable product nous, but nobody questions his technical chops.
◧◩◪◨⬒⬓
1687. mecsre+q62[view] [source] [discussion] 2023-11-22 18:44:23
>>nh2342+o42
Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every fact of our lives. So we're all safe here!

replies(1): >>supert+Ce2
◧◩◪◨⬒⬓
1688. cma+Z62[view] [source] [discussion] 2023-11-22 18:47:11
>>citygu+if1
Supposedly they had about 50% of employees leave in the year of the conversion to for-profit.
◧◩◪◨
1689. s1arti+972[view] [source] [discussion] 2023-11-22 18:47:54
>>csunbi+KI
Sounds like a good way to to secure your position as leader.

My job also secures my loyalty and support with a financial incentive. It is probably the most common way for a business leader to align interests.

Kings reward dukes, and generals pay soldiers. Politicians trade policies. That doesn't mean they arent leaders.

◧◩
1690. hacker+o72[view] [source] [discussion] 2023-11-22 18:48:37
>>sashan+S8
Greg was only forced to resign from his board seat, not his job.
◧◩◪
1691. eksaps+M72[view] [source] [discussion] 2023-11-22 18:50:00
>>baby+iW1
because people like the developers within the company did not like that decision and its also within their right to disagree with the board's decision and not to want to work under a different leadership. They're not slaves, they're employees who rented their time for a specific purpose under a specific leader.

As it's within the board's rights to hire or fire people like Sam or the developers.

◧◩◪◨⬒
1692. flappy+Z72[view] [source] [discussion] 2023-11-22 18:50:48
>>halfma+102
cool. it was
◧◩◪
1693. jivetu+G82[view] [source] [discussion] 2023-11-22 18:53:07
>>philip+IR1
wake up people! (said rhetorically, not accusatory or any other way)

This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.

I'm not normally a conspiracy theorist. But fool me ... you can't be fooled again. As they say in Tennessee

replies(2): >>buggle+E92 >>bossyT+Cx2
◧◩◪◨⬒
1694. NanoYo+q92[view] [source] [discussion] 2023-11-22 18:57:06
>>ketzo+QL1
They don't have to elaborate. As many have pointed out, most people have been given advice to not say anything at all when SHTF. If they did say something there would still be drama. It's best to keep these details internal.

I still believe in the theory that Altman was going hard after profits. Both McCauley and Toner are focused on the altruistic aspects of AGI and safety. Altman shouldn't be at OpenAI and neither should D’Angelo.

replies(2): >>ketzo+Rf2 >>stetra+Ni2
◧◩◪◨
1695. buggle+E92[view] [source] [discussion] 2023-11-22 18:57:38
>>jivetu+G82
What’s the backstory on Reddit?
replies(1): >>occams+Pi2
1696. davegu+G92[view] [source] 2023-11-22 18:57:44
>>staran+(OP)
Hi dang,

Seeing a bug in your comment here:

>>38382563

You reference the pages like this:

https://news.ycombinator.com/item?id=38375239?p=2

The second ? should be an & like this:

https://news.ycombinator.com/item?id=38375239&p=2

Please feel free to delete this message after you've received it.

replies(3): >>pauldd+Qd2 >>saliag+Ze2 >>pvg+Of2
◧◩◪◨⬒⬓
1697. stillw+5a2[view] [source] [discussion] 2023-11-22 18:59:17
>>nh2342+o42
Exactly what an OpenAI developer would understand. All the more reason to ride the grift that brought them this far
◧◩◪◨
1698. MacsHe+4b2[view] [source] [discussion] 2023-11-22 19:04:56
>>davedx+Ch
Anthropic is made up of former top OpenAI employees, has similar funding, and has produced similarly capable models on a similar timeline. The Claude series is neck and neck with GPT.
◧◩◪
1699. anders+7b2[view] [source] [discussion] 2023-11-22 19:05:11
>>jessen+gR1
> OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, i.e. to engineers leaving Google?

I'd bet more than half the people are just there for the money.

◧◩◪◨
1700. freedo+9b2[view] [source] [discussion] 2023-11-22 19:05:21
>>iandan+0H1
You need to be able to separate macro-level and micro-level. GP is responding to a comment about the IRS caring about the conflict-of-interest on paper. The IRS has to make and follow rules at a macro level. Micro-level events obviously can affect the macro view, but you don't completely ignore the macro because something bad happened at the micro level. That's how you get knee-jerk reactionary governance, which is highly emotional.
1701. zeroha+db2[view] [source] 2023-11-22 19:05:51
>>staran+(OP)
Google, Meta and now OpenAI. So long, responsible and safety AI guardrails. Hello, big money.

Disappointed by the outcome, but perhaps mission-driven AI development -- the reason OpenAI was founded -- was never possible.

Edit: I applaud the board members for (apparently, it seems) trying to stand up for the mission (aka doing the job that they were put on the board to do), even if their efforts were doomed.

replies(2): >>risho+8d2 >>pauldd+ae2
◧◩◪◨
1702. jonas2+fb2[view] [source] [discussion] 2023-11-22 19:06:01
>>OnAYDI+tM1
Firing Sam was within the board's rights. And 90% of the employees threatening to leave was within their rights.

All this proved is that you can't take a major action that is deeply unpopular with employees, without consulting them, and expect to still have a functioning organization. This should be obvious, but it apparently never crossed the board's mind.

replies(2): >>freedo+Wd2 >>m3kw9+V33
◧◩◪◨⬒⬓⬔⧯
1703. nickpp+Nb2[view] [source] [discussion] 2023-11-22 19:09:18
>>Mistle+J02
Is this a real problem model trainers actually face or is it an imagined one? The Internet is already full of garbage - 90% of the unpleasantness of browsing these days is filtering through mounts and mounds of crap. Some is generated, some is written, but still crap full of wrong and lies.

I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful) so I imagine we could solve it similarly for AIs.

In the end, if it's quality content, learning it is beneficial - no matter who produced it. Garbage needs to be eliminated and the distinction is made either by human trainers or already trained AIs. I have no idea how to train the latter but I am no expert in this field - just like (I suspect) the author of that blog.

◧◩
1704. zeroha+bc2[view] [source] [discussion] 2023-11-22 19:11:26
>>taway1+aP1
more like $$ wins.

It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.

In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.

One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(

replies(1): >>joewfe+Hj2
◧◩
1705. zeroha+wc2[view] [source] [discussion] 2023-11-22 19:12:46
>>garris+EJ
OpenAI's charter is dead. I expect future boards to amend it.
replies(2): >>dragon+jd2 >>ric2b+pN2
◧◩◪◨⬒⬓
1706. freedo+Cc2[view] [source] [discussion] 2023-11-22 19:13:11
>>swores+DL1
if up-the-line parent wasn't talking about regulation of AI in general, then what do you think they meant by "competitive advantage"? Also, governments have to set policy and enforce that policy. They can't (or shouldn't at least) pick and choose favorites.

Also GP snark was a reply to snark. Once somebody opens the snark, they should expect snark back. It's ideal for nobody to snark, and big for people not to snark back at a snarker, but snarkers gonna snark.

◧◩◪◨
1707. zeroha+Xc2[view] [source] [discussion] 2023-11-22 19:14:44
>>qwery+4g1
Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they choose the $ route and that point the mission was doomed, with the board members originally brought in to protect the mission holding their fingers in the dyke.
◧◩◪◨
1708. DirkH+6d2[view] [source] [discussion] 2023-11-22 19:15:26
>>baby+3W1
Assuming employees are not incentivized by $$$ here seems extraordinary and needs a pretty robust argument to show it isn't playing a major factor when there is this much money involved.
◧◩
1709. risho+8d2[view] [source] [discussion] 2023-11-22 19:15:26
>>zeroha+db2
you just don't understand how markets work. if openai slows down then they will just be driven out by competition. that's fine if that's what you think they should do, but that won't make ai any safer, it will just kill openai and have them replaced by someone else.
replies(2): >>zeroha+ce2 >>Wander+we2
◧◩◪
1710. dragon+jd2[view] [source] [discussion] 2023-11-22 19:15:49
>>zeroha+wc2
Its useful PR pretext for their regulatory advocacy, and subjective enough that if they are careful not to be too obvious about specifically pushing one company’s commercial interest, they can probably get away with it forever, so why would it be any deader than when Sam was CEO before and not substantively guided by it.
◧◩◪◨⬒⬓
1711. pauldd+ld2[view] [source] [discussion] 2023-11-22 19:16:05
>>uxp8u6+pE1
See earlier

> If OpenAI remains a 501(c)(3) charity, then any employee of Microsoft on the board will have a fiduciary duty to advance the mission of the charity, rather than the business needs of Microsoft. There are obvious conflicts of interest here.

>>38378069

◧◩
1712. jacque+td2[view] [source] [discussion] 2023-11-22 19:16:24
>>Satam+0a
> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

This was said loud and clear when Microsoft joined in the first place but there were no takers.

◧◩◪
1713. brrrrr+Nd2[view] [source] [discussion] 2023-11-22 19:18:03
>>jessen+gR1
> 1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

citation?

replies(1): >>davio+Jg2
◧◩◪◨⬒⬓⬔⧯▣
1714. jacque+Pd2[view] [source] [discussion] 2023-11-22 19:18:16
>>deckar+8Q1
If half of the brainpower invested in advertising food would go towards world hunger we'd have too much food.
◧◩
1715. pauldd+Qd2[view] [source] [discussion] 2023-11-22 19:18:17
>>davegu+G92
Also, while we're at it:

"Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past" [1]

HN plans to be multi-core?!?! A bigger scoop than OpenAI governance!

Anything more you can share?

[1] >>38351005

◧◩◪◨⬒
1716. concor+Rd2[view] [source] [discussion] 2023-11-22 19:18:21
>>mi_lk+SZ1
Uh, I reckon many do. Money is easy to come by for that type of person and avoiding killing everyone matters to them.
◧◩◪◨⬒
1717. freedo+Wd2[view] [source] [discussion] 2023-11-22 19:18:44
>>jonas2+fb2
A lot of these high-up tech leaders seem to forget this regularly. They sit on their thrones and dictate wild swings, and are used to having people obey. They get all the praise and adulation when things go well, and when things don't go well they golden parachute into some other organization who hires based on resume titles rather than leadership and technical ability. It doesn't surprise me at all that they were caught off guard by this.
◧◩
1718. pauldd+ae2[view] [source] [discussion] 2023-11-22 19:19:51
>>zeroha+db2
> I applaud the board members for (apparently, it seems) trying to stand up for the mission

What about this is apparent to you?

What statement has the board made on how they fired Altman "for the mission"?

Have I missed something?

replies(1): >>alsetm+Ag2
◧◩◪
1719. zeroha+ce2[view] [source] [discussion] 2023-11-22 19:20:02
>>risho+8d2
you're right about market forces, however:

1) openAI was explicitly founded to NOT develop AI based on "market forces"; it's just that they "pivoted" (aka abandoned their mission) once they struck gold in order to become driven by the market

2) this is exactly the reasoning behind nuclear arms races

◧◩◪◨⬒
1720. freedo+de2[view] [source] [discussion] 2023-11-22 19:20:06
>>abkola+Kq
Thank you for not editing this away. Easy mistake to make, and gave us a good laugh (hopefully laughing with you. Everyone who's ever programmed has made the same error).
◧◩◪
1721. Wander+we2[view] [source] [discussion] 2023-11-22 19:21:48
>>risho+8d2
You can still be a force for decentralization by creating actually open ai. For now it seems like Meta AI research is the real open ai
replies(1): >>insani+Ng2
◧◩◪◨
1722. pauldd+Be2[view] [source] [discussion] 2023-11-22 19:22:23
>>OnAYDI+tM1
> They may also choose to stay silent

They may choose to, and they did choose to.

But it was an incompitant choice. (Obviously.)

◧◩◪◨⬒⬓⬔
1723. supert+Ce2[view] [source] [discussion] 2023-11-22 19:22:30
>>mecsre+q62
> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.

How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.

replies(4): >>FartyM+Bg2 >>bcrosb+zj2 >>mecsre+Os2 >>jonono+sM4
◧◩◪
1724. zeroha+Ue2[view] [source] [discussion] 2023-11-22 19:23:52
>>random+Yf
> none of this outrage would have taken place.

most certainly would have still taken place; no one cares about how it was done; what they care about it being able to make $$; and it was clearly going to not be as heavily prioritized without Altman (which is why MSFT embraced him and his engineers almost immediately).

> notified their employees and investors they did notify their employees; they have fiduciary duty to investors as a nonprofit.

◧◩
1725. saliag+Ze2[view] [source] [discussion] 2023-11-22 19:24:05
>>davegu+G92
Why do you (dang) always write a comment specifying that people can read more and even providing some links when it's clear that when you reach the bottom of the page you have to click "read more" to indeed read more. Isn't it a bit useless?
replies(2): >>bartre+Ff2 >>pvg+Sh2
◧◩◪◨
1726. pauldd+af2[view] [source] [discussion] 2023-11-22 19:24:50
>>braiam+bM1

  Here lies the body of William Jay,
  Who died maintaining his right of way –
  He was right, dead right, as he sped along,
  But he's just as dead as if he were wrong.

    - Dale Carnegie
◧◩◪
1727. bartre+Ff2[view] [source] [discussion] 2023-11-22 19:26:49
>>saliag+Ze2
Because people don't, that's why.
◧◩
1728. pvg+Of2[view] [source] [discussion] 2023-11-22 19:27:29
>>davegu+G92
If you want to reach the mods just email hn@ycombinator.com
replies(1): >>davegu+xn2
◧◩◪◨⬒⬓
1729. ketzo+Rf2[view] [source] [discussion] 2023-11-22 19:27:50
>>NanoYo+q92
Okay, keep silent to save your own ass, fine

But why would anyone expect 800 people to risk their livelihoods and work without a little serious justification? This was an inevitable reaction.

replies(1): >>muraka+UQ2
◧◩◪◨
1730. SilasX+Wf2[view] [source] [discussion] 2023-11-22 19:28:24
>>squigz+FE
Agreed. It's naive to think that an decision this unpopular somehow wouldn't have resulted in dissent and fracturing if only they had given it a better explanation and dotted more i's.

Imagine arguing this in another context: "Man, if only the Supreme Court had clearly articulated its reasoning in overturning Roe v Wade, there wouldn't have been all this outrage over it."

(I'm happy to accept that there's plenty of room for avoiding some of the damage, like the torrents of observers thinking "these board members clearly don't know what they're doing".)

◧◩
1731. jacque+kg2[view] [source] [discussion] 2023-11-22 19:30:10
>>eclect+79
Results matter.
◧◩◪◨
1732. kossTK+og2[view] [source] [discussion] 2023-11-22 19:30:30
>>ilrwbw+as1
It's obvious this class of people love their status as neu-feudal lords above the law living as 18th century libertines behind closed doors.

But i guess people here are either waiting for wealth to trickle down on them or believe the torrent of psychological operations so much peoples minds close down when they intuit the circular brutal nature of hierarchical class based society, and the utter illusion democracy or meritocracy is.

The uppermost classes have been trickters through all of history. What happened to this knowledge and the countercultural scene in hacking? Hint; it was psyopped in the early 90's by "libertarianism" and worship of bureaucracy to create a new class of cybernetic soldiers working for the oligarchy.

replies(1): >>ilrwbw+xY2
◧◩
1733. Quenti+rg2[view] [source] [discussion] 2023-11-22 19:30:42
>>taway1+aP1
OpenAI developers are redefining the state-of-the-art of AI each 6 months, if the company lose them they already can go bankrupt
◧◩◪
1734. alsetm+Ag2[view] [source] [discussion] 2023-11-22 19:31:12
>>pauldd+ae2
To me, commentary online and on podcasts universally leans on the idea that he appears to be very focused on money (from the outside) in seeming contradiction to the company charter:

> Our primary fiduciary duty is to humanity.

Also, the language of the charter has watered down a stronger commitment that was in the first version. Others have quoted it and I'm sure you can find it on the internet archive.

replies(1): >>pauldd+Am2
◧◩◪◨⬒⬓⬔⧯
1735. FartyM+Bg2[view] [source] [discussion] 2023-11-22 19:31:16
>>supert+Ce2
The difference between unsafe AGI and an unsafe plane or car is that the plane/car are not existential risks.
replies(1): >>optymi+Qr3
1736. nbzso+Gg2[view] [source] 2023-11-22 19:31:35
>>staran+(OP)
Stop dreaming about alignment. All bets are off. This is the start of AI arms race. Think globally for a second. Yes, everybody wants to be a millionaire or billionaire. This is the current culture we are living in. Corporations have unprecedented power waved into the governments, but governments still have a monopoly on violence. People cannot switch to the new abstraction layer (UBI, Social Rating) for two or five years. They will keep a consumer-oriented mindset before the option to have one is erased. Where you think this is going? To a better Democracy? This is the Cold War V.2 scenario unfolding.
◧◩◪◨
1737. davio+Jg2[view] [source] [discussion] 2023-11-22 19:32:06
>>brrrrr+Nd2
https://x.com/kevin_scott/status/1726971608706031670?s=20
◧◩◪◨
1738. insani+Ng2[view] [source] [discussion] 2023-11-22 19:32:51
>>Wander+we2
What does "actually open" mean? And how is that more responsible? If the ethical concern of AI is that it's too powerful or whatever, isn't building it in the open worse?
replies(1): >>Wander+Oh2
◧◩
1739. Quenti+Rg2[view] [source] [discussion] 2023-11-22 19:33:11
>>laserl+gb
OpenAI workers has shown their plain support to their CEO by threatening to follow him wherever he wants, I personaly think their collective judgement on him is worth more than any rumors
replies(1): >>BOOSTE+nc4
1740. jacque+ah2[view] [source] 2023-11-22 19:34:54
>>staran+(OP)
49% stock (lower bound) + 90% of employees (upper bound) > board.

To be updated as more evidence rolls in.

◧◩◪◨⬒
1741. Wander+Oh2[view] [source] [discussion] 2023-11-22 19:38:24
>>insani+Ng2
Depends on how you interpret the mission statement of building ai for all of humanity. It’s questionable that humanity is better off if ai only accrues to one or a few centralised entities?
◧◩◪
1742. pvg+Sh2[view] [source] [discussion] 2023-11-22 19:38:47
>>saliag+Ze2
when it's clear

It isn't that clear. People missing ui elements they have to scroll to is one of the most common ways of missing ui elements.

◧◩◪◨⬒⬓⬔⧯▣
1743. fsloth+Mi2[view] [source] [discussion] 2023-11-22 19:42:13
>>JohnPr+a11
For example: Two guys come in, say "Give us the godbox or your company seizes to exist. Here is a list of companies that seized to exist because the did not do as told".

Pretty much the same method was used to shut down Rauma-Repola submarines https://yle.fi/a/3-5149981

After? They get the godbox. I have no idea what happens to it after that. Modelweights are stored in secure govt servers, installed backdoors are used to cleansweep the corporate systems of any lingering model weights. Etc.

◧◩◪◨⬒⬓
1744. stetra+Ni2[view] [source] [discussion] 2023-11-22 19:42:14
>>NanoYo+q92
> They don't have to elaborate.

Sure, they don't have to. How did that work out?

Four CEOs in five days, their largest partner stepping in to try to stop the chaos, and almost the entirety of their employees threatening to leave for guaranteed jobs at that partner if the board didn't step down.

◧◩◪◨⬒
1745. smegge+Oi2[view] [source] [discussion] 2023-11-22 19:42:16
>>Sebb76+Vv
Because distroying openai wouldn't make ai safe it would just remove anyone working on alignment from having an influence on it. Microsoft and others are interested in making it benevolent but go along with it because openai is the market leader.
◧◩◪◨⬒
1746. occams+Pi2[view] [source] [discussion] 2023-11-22 19:42:17
>>buggle+E92
Yishan (former Reddit CEO) describes how Altman orchestrated the removal of Reddit's owner: https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Note that the response is Altman's, and he seems to support it.

As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he know (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.

◧◩◪◨⬒⬓⬔⧯▣
1747. mlyle+sj2[view] [source] [discussion] 2023-11-22 19:45:13
>>Random+cm
Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.

OTOH, The precautionary principle is too cautious.

There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.

This doesn't mean it's time to stop progress, but employing a whole lot of mitigation of risk in how we approach it makes sense.

replies(1): >>Random+9K2
◧◩◪◨
1748. stetra+vj2[view] [source] [discussion] 2023-11-22 19:45:23
>>eksaps+V12
Right. I really should have said employees and investors. Even if OpenAI somehow had no regard for its investors, they still need their employees to accomplish their mission. And funding to pay those employees.

The board seemed to have the confidence of none of the groups they needed confidence from.

◧◩◪◨⬒⬓⬔⧯
1749. bcrosb+zj2[view] [source] [discussion] 2023-11-22 19:45:40
>>supert+Ce2
The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here then safety and the federal government needs to catch up. There's already commercial offerings that any random internet user can use.

replies(1): >>supert+Sk2
◧◩◪
1750. joewfe+Hj2[view] [source] [discussion] 2023-11-22 19:46:37
>>zeroha+bc2
Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So seems like a pretty good outcome in that example.
replies(1): >>Xelyne+uS3
◧◩
1751. m00x+rk2[view] [source] [discussion] 2023-11-22 19:50:03
>>taway1+aP1
Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.

Ilya is also not a developer, he's a founder of OpenAI and was the CSO.

1752. jrflow+wk2[view] [source] 2023-11-22 19:50:11
>>staran+(OP)
This here is what we call a load-bearing “in principle”
◧◩◪◨⬒⬓⬔⧯▣
1753. supert+Sk2[view] [source] [discussion] 2023-11-22 19:51:31
>>bcrosb+zj2
I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.

There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.

◧◩◪◨
1754. smegge+Hl2[view] [source] [discussion] 2023-11-22 19:55:22
>>m463+cC
Train it on meeting minutes and board charter various contracts they have, and use the voice compatibilitys of chatgpt as the input during the meeting the prompt is it is an ethical ai givingbinput to the board of open ai on the development of its next iteration.
◧◩◪◨
1755. adverb+cm2[view] [source] [discussion] 2023-11-22 19:57:39
>>pdntsp+002
My comment was more of a reflection of the fact that you might have multiple different governance structures to your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right, the depending on the governance structure you can have additional dragons. But, you can never prevent any of these three from being a dragon. They will always be dragons, and you can never wake them up.
◧◩◪◨
1756. pauldd+Am2[view] [source] [discussion] 2023-11-22 19:59:49
>>alsetm+Ag2
> commentary online and on podcasts

:/

◧◩◪
1757. davegu+xn2[view] [source] [discussion] 2023-11-22 20:03:52
>>pvg+Of2
Thank you for the advice. I will do that in the future.
◧◩◪◨⬒⬓
1758. davesq+ao2[view] [source] [discussion] 2023-11-22 20:07:12
>>pdntsp+T52
I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.
replies(1): >>pdntsp+ix2
◧◩
1759. awb+Ro2[view] [source] [discussion] 2023-11-22 20:11:17
>>taway1+aP1
It’s a cost / benefit analysis.

If people are easily replaceable then they don’t hold nearly as much power, even en mass.

◧◩◪◨⬒⬓⬔⧯
1760. mecsre+Os2[view] [source] [discussion] 2023-11-22 20:33:22
>>supert+Ce2
I understand where you're coming from and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions. But with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made the likely hood of crashing is pretty high.

At some point, our try it until it works approach will bite us. Consider the calculations done to determine if fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially we're going to run into that situation more and more frequently. Regardless if you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced.

◧◩◪
1761. zucker+Ss2[view] [source] [discussion] 2023-11-22 20:33:29
>>sgt101+Pz1
What is your definition of reasoning? In my mind, GPT-4 has some nascent reasoning abilities.
◧◩◪◨⬒⬓
1762. zja+Vs2[view] [source] [discussion] 2023-11-22 20:33:41
>>ezfe+3X1
You were right though, OpenAI Inc, which the board controls, is a 501c3 charity.
1763. xyst+Wt2[view] [source] 2023-11-22 20:39:34
>>staran+(OP)
OpenAI board f’d around and found out the consequences of their poor decisions. The decision to back pedal from previous position just shows the level of disconnect between these 2 entities.

If I was an investor. I would be scared.

◧◩
1764. alumin+0u2[view] [source] [discussion] 2023-11-22 20:39:49
>>altpad+R1
The enormous majority of CEOs sit on their board, and that's absolutely proper, as the CEO sets the agenda for the organization. (Although they typically are merely one of 8+ members, diluting their influence a bit.)
◧◩◪◨⬒
1765. denton+5u2[view] [source] [discussion] 2023-11-22 20:40:04
>>khazho+Mm
> There are only three groups of people who could be subject to betrayal here

GP didn't speak of betraying people; he spoke of betraying their own statements. That just means doing what you said you wouldn't; it doesn't mean anyone was stabbed in the back.

◧◩◪◨⬒⬓⬔
1766. tempes+Pv2[view] [source] [discussion] 2023-11-22 20:49:32
>>DebtDe+SO1
"Almost certainly"? Here's a fun exercise. Over the course of, say, a year, keep track of all your predictions along these lines, and how certain you are of each. Almost certainly, expressed as a percentage, would be maybe 95%? Then see how often the predicted events occur, compared to how sure you are.

Personally I'm nowhere near 95% confident that will happen. I'd say I'm about 75% confident it won't. So I wouldn't be utterly shocked, but I would be quite surprised.

replies(1): >>kyle_g+WK2
◧◩
1767. 6gvONx+tw2[view] [source] [discussion] 2023-11-22 20:53:02
>>laserl+gb
Looks like all the naysayers from the original “were making a for-profit but it won’t change us” post ended up correct: >>19359928
◧◩◪
1768. dinvla+vw2[view] [source] [discussion] 2023-11-22 20:53:09
>>lucubr+sc
Ilya is just naive, imho. Bright but just too idealistic and hypothesizing about AGI, and not seeing that this is now ONLY about making money from LLMs, and nothing more. All the AGI stuff is just a facade for that.
◧◩
1769. nikcub+Hw2[view] [source] [discussion] 2023-11-22 20:54:14
>>taway1+aP1
The employees rapidly and effectively formed a quasi-union to grant themselves a very powerful seat at the table.
◧◩◪◨⬒⬓⬔
1770. pdntsp+ix2[view] [source] [discussion] 2023-11-22 20:57:08
>>davesq+ao2
No, I'm thinking a smaller company, like 50 people, $20m ARR. Engineering-focused, but not tech
◧◩◪
1771. bossyT+tx2[view] [source] [discussion] 2023-11-22 20:58:23
>>adverb+yS1
Or tame the dragons. AFAIK Sam hired the employees. Hence they are loyal to him
◧◩◪
1772. aspero+wx2[view] [source] [discussion] 2023-11-22 20:58:34
>>zug_zu+H81
I don't think that's quite right, Microsoft's main game was keeping the money train going by any means necessary, they have staked so much on copilots and Enterprise/Azure Open AI. So much has been invested into that strategic direction and seeing Google swoop in and out-innovate Microsoft would be a huge loss.

Either by keeping OpenAI as-is, or the alternative being moving everyone to Microsoft in an attempt to keep things going would work for Satya.

◧◩◪◨
1773. bossyT+Cx2[view] [source] [discussion] 2023-11-22 20:58:59
>>jivetu+G82
what happenned in reddit?
◧◩◪◨⬒⬓⬔⧯
1774. hadloc+mz2[view] [source] [discussion] 2023-11-22 21:07:38
>>worlds+CP
Moore's law seems to have failed on CPUs finally, but we've seen the pattern over and over. LLM specific hardware will undoubtedly bring down the cost. $10,000 A100 GPU will not be the last GPU NVidia ever makes, nor will their competitors stand by and let them hold the market hostage.

Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. Almost two years after Quake's release did dedicated 3d video cards (voodoo 1 and 2 were accelerators, depended on a seperate 2d VGA graphics card to feed it) begin to hit the market.

Nowadays you can run those games (and their sequels) in the thousands (tens of thousands?) of frames per second on a top end modern card. I would imagine similar events with hardware will transpire with LLM. OpenAI is already prototyping their own hardware to train and run LLMs. I would imagine NVidia hasn't been sitting on their hands either.

◧◩◪◨
1775. dinvla+yB2[view] [source] [discussion] 2023-11-22 21:18:37
>>drawkb+cP
I suspect it's because they're happy with SV salaries they got. They think it's actually a good deal for them, and a signal they're "valued"
◧◩◪◨
1776. astran+IB2[view] [source] [discussion] 2023-11-22 21:19:15
>>mcmcmc+5O1
Without looking it up, what happened to the second biggest donor to the Democrats this year?

Is Donald Trump allowed to run a charity in New York?

replies(1): >>mcmcmc+eU2
◧◩◪◨⬒⬓⬔⧯
1777. dahart+bC2[view] [source] [discussion] 2023-11-22 21:21:51
>>kcplat+uJ1
> at no point have I ever seen large numbers of employees act outside of their self-interest for an executive.

This is still making the same assumption. Why are you assuming they are acting outside of self-interest?

replies(1): >>kcplat+oG2
◧◩◪
1778. mcast+1D2[view] [source] [discussion] 2023-11-22 21:25:09
>>Tigeri+Mg1
If you wanted to wear a foil hat, you might think this internal fighting was started from someone connected to TPTB subverting the rest of the board to gain a board seat, and thus more power and influence, over AGI.

The hush-hush nature of the board providing zero explanation for why sama was fired (and what started it) certainly doesn't pass the smell test.

◧◩◪◨⬒⬓⬔⧯▣
1779. JumpCr+LD2[view] [source] [discussion] 2023-11-22 21:28:57
>>JohnPr+a11
Defense Production Act, something something.
◧◩◪◨⬒⬓⬔⧯▣
1780. kcplat+oG2[view] [source] [discussion] 2023-11-22 21:42:23
>>dahart+bC2
If you are willing to leave a paycheck because of someone else getting slighted, to me, that is acting against your own self-interest. Assuming of course you are willing to actually leave. If it was a bluff, that still works against your self-interest by factioning against the new leadership and inviting retaliation for your bluff.
replies(1): >>dahart+PQ2
◧◩◪◨
1781. gwern+UH2[view] [source] [discussion] 2023-11-22 21:50:51
>>astran+Me
The FBI doesn't investigate things like this on their own, and they definitely do not announce them in the press. The questions you should be asking are (1) who called in the FBI and has the clout to get them to open an investigation into something that obviously has 0% chance of being a federal felony-level crime worth the FBI's time, and (2) who then leaked that 'investigation' to the press?
replies(1): >>astran+AU2
◧◩◪◨
1782. dragon+JI2[view] [source] [discussion] 2023-11-22 21:56:13
>>astran+Me
The FBI is not mentioned in that tweet. We don't need to telephone game anonymous leaks that are already almost certainly self-serving propaganda.
◧◩◪◨⬒⬓⬔⧯▣▦
1783. Random+9K2[view] [source] [discussion] 2023-11-22 22:02:57
>>mlyle+sj2
Why does it make sense? It's a hypothetical risk with poorly defined outlines.
replies(1): >>mlyle+HO2
◧◩◪◨⬒⬓⬔⧯
1784. kyle_g+WK2[view] [source] [discussion] 2023-11-22 22:07:19
>>tempes+Pv2
I’m pretty confident (close to the 95% level) they will abandon the public charity structure, but throughout this saga, I have been baffled by the discourse’s willingness to handwave away OpenAI’s peculiar legal structure as irrelevant to these events.
replies(1): >>tempes+IP2
◧◩◪◨
1785. pauldd+SM2[view] [source] [discussion] 2023-11-22 22:17:15
>>hacker+8v1
> enhancing shareholder value and delivering record growth and sales are NOT the mission of the company

Developer platform updates seem to be inline.

And in any case, the board also failed to specify how their action furthered the mission of the company.

From all appearances, it appeared to damage the mission of the company. (If for no other reason that it dissolve the company and gave everything to MSFT.)

◧◩◪
1786. ric2b+pN2[view] [source] [discussion] 2023-11-22 22:20:54
>>zeroha+wc2
People keep saying this but is there any evidence that any of this was related to the charter?
replies(1): >>Xelyne+wv3
◧◩◪◨⬒⬓
1787. ssnist+zN2[view] [source] [discussion] 2023-11-22 22:21:56
>>iiv+xR
Most of these people have been on Twitter long before Musk had his hands on it.
◧◩◪◨⬒⬓⬔⧯▣▦▧
1788. mlyle+HO2[view] [source] [discussion] 2023-11-22 22:28:20
>>Random+9K2
There's a big family of risks here.

The simplest is pretty easy to articulate and weigh.

If you can make a $5,000 GPU into something that is like an 80IQ human overall, but with savant-like capabilities in accessing math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.

The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.

Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.

For instance, I'm a schoolteacher these days. I'm already watching kids becoming completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12 year old can't tell the difference)-- so why bother to learn? If fairly-stupid AI has this effect, what will AGI do?

And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?

replies(1): >>Random+2S2
◧◩◪◨⬒
1789. m463+kP2[view] [source] [discussion] 2023-11-22 22:32:02
>>pauldd+Ai1
lol. The one serious and insightful answer made me laugh!
◧◩◪
1790. muraka+lP2[view] [source] [discussion] 2023-11-22 22:32:04
>>mwatts+Nt1
Actually I think Bill would be a pretty good candidate. Smart, mature, good at first principles reasoning, deeply understands both the tech world and the nonprofit world, is a tech person who's not socially networked with the existing SF VCs, and (if the vague unsubstantiated rumors about Sam are correct) is one of the few people left with enough social cachet to knock Sam down a peg or two.
replies(1): >>lucubr+IU2
◧◩◪◨⬒⬓⬔⧯▣
1791. tempes+IP2[view] [source] [discussion] 2023-11-22 22:34:01
>>kyle_g+WK2
Within a few months? I don't think it should be possible to be 95% confident of that without inside info. As you said, many unexpected things have happened already. IMO that should bring the most confident predictions down to the 80-85% level at most.
◧◩◪◨⬒
1792. Cacti+7Q2[view] [source] [discussion] 2023-11-22 22:36:23
>>darkwa+Ij
Your average HNer is only here because of the money. Willful blindness and ignorance is incredibly common.
◧◩◪
1793. ryukop+aQ2[view] [source] [discussion] 2023-11-22 22:36:38
>>baking+G91
Are there any similar cases of this "non-profit board overseeing a (huge) for-profit company" model? I want to like the concept behind it. Was this inevitable due to the leadership structure of OpenAI, or was it totally preventable had the right people been on the board? I wish I had the historical context to answer that question.
replies(1): >>lacker+9W2
◧◩◪◨⬒⬓
1794. saalwe+iQ2[view] [source] [discussion] 2023-11-22 22:37:25
>>alsetm+w02
And when the CEO's other thing is a cryptocurrency?
replies(1): >>lacrim+MU4
◧◩◪◨
1795. lucubr+pQ2[view] [source] [discussion] 2023-11-22 22:37:48
>>maxdoo+GM
Disagree with her or her actions without falsely claiming that she has no qualifications or understanding of AI and therefore no business being on the board in the first place? It is not hard at all to do so, and many people did.
replies(1): >>maxdoo+eF3
◧◩◪◨⬒⬓⬔⧯▣▦
1796. dahart+PQ2[view] [source] [discussion] 2023-11-22 22:40:04
>>kcplat+oG2
Why do you assume they were willing to leave a paycheck because of someone else getting slighted? If that were the case, then it is unlikely everyone would be in agreement. Which indicates you might be making incorrect assumptions, no? And, again, why assume they were threatening to leave a paycheck at all? That’s a bad assumption; MS was offering a paycheck. We already know their salaries weren’t on the line, but all future stock earnings and bonuses very well might be. There could be other reasons too, I don’t see how you can conclude this was either a bluff or not self-interest without making potentially bad assumptions.
replies(1): >>kcplat+Xg3
◧◩◪◨⬒⬓⬔
1797. muraka+UQ2[view] [source] [discussion] 2023-11-22 22:40:33
>>ketzo+Rf2
I think it's important to keep in mind that BOTH Altman and the board maneuvered to threaten to destroy OpenAI.

If Altman was silent and/or said something like "people take some time off for Thanksgiving, in a week calmer minds will prevail" while negotiating behind the scenes, OpenAI would look a lot less dire in the last few days. Instead he launched a public pressure campaign, likely pressured Mira, got Satya to make some fake commitments, got Greg Bockman's wife to emotionally pressure Ilya, etc.

Masterful chess, clearly. But playing people like pieces nonetheless.

replies(1): >>pauldd+Yt5
◧◩◪◨
1798. muraka+iR2[view] [source] [discussion] 2023-11-22 22:42:52
>>JSavag+uN
I think it was only a competitor app after GPTs came out. A conspiracy theorist might say that Altman wanted to get him off the board and engineered GPTs as a pretext first, in the same way that he used some random paper coauthored by Toner that nobody read to kick Toner out.
1799. zeroha+CR2[view] [source] 2023-11-22 22:45:06
>>staran+(OP)
I think we now have an idea of what will happen if AGI is actually reached and efforts are made to contain or restrain it.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1800. Random+2S2[view] [source] [discussion] 2023-11-22 22:47:55
>>mlyle+HO2
I just don't know what to do with the hypotheticals. It needs the existence of something that does not exist, it needs a certain socio-economic response and so forth.

Are children equally demoralized about additions or moving fast than writing? If not, why? Is there a way to counter the demoralization?

replies(1): >>mlyle+pT2
◧◩
1801. august+0T2[view] [source] [discussion] 2023-11-22 22:53:42
>>garris+EJ
how can they not remain a charity?
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
1802. mlyle+pT2[view] [source] [discussion] 2023-11-22 22:56:35
>>Random+2S2
> It needs the existence of something that does not exist,

Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally we think about this some before AGI shows up in a form that it could be released.

> it needs a certain socio-economic response and so forth.

Absent large interventions, this will happen.

> Are children equally demoralized about additions

Absolutely basic arithmetic, etc, has gotten worse. And emerging things like photomath are fairly corrosive, too.

> Is there a way to counter the demoralization?

We're all looking... I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort, if they are a good manager and know what good work product looks like and can fill the gaps; it works somewhat because I'm working with a cohort of students that can believe that they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT4 tries to "improve" high quality writing.

OTOH, these arguments become much less true if cheap AGI shows up.

◧◩◪◨⬒
1803. muraka+sT2[view] [source] [discussion] 2023-11-22 22:56:56
>>nathan+Hy
My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

◧◩◪
1804. zeroha+7U2[view] [source] [discussion] 2023-11-22 23:00:00
>>nopins+wg
> Both sides of the rift in fact care a great deal about AI Safety.

I disagree. Yes, Sam may have when it OpenAI was founded (unless it was just a ploy), but certainly now it's clear that the big companies are on a race to the top and safety or guardrails are mostly irrelevant.

The primary reason that the Anthropic team left OpenAI was over safety concerns.

◧◩◪◨⬒
1805. mcmcmc+eU2[view] [source] [discussion] 2023-11-22 23:00:45
>>astran+IB2
So two blatant criminals got caught, big whoop. SBF broke rule number 1 - don't fuck with rich people's money.
replies(1): >>astran+023
◧◩◪◨⬒
1806. astran+AU2[view] [source] [discussion] 2023-11-22 23:03:08
>>gwern+UH2
Sorry, the SDNY. They do do things on their own. I expect the people they called leaked it.
◧◩◪◨
1807. lucubr+IU2[view] [source] [discussion] 2023-11-22 23:04:02
>>muraka+lP2
Larry Summers, Bill Gates, if they keep on like that they can fill the board with all of Epstein's "associates".
◧◩◪◨⬒⬓⬔⧯▣▦▧
1808. svnt+NV2[view] [source] [discussion] 2023-11-22 23:09:06
>>doktri+dX1
Your concrete example is Netflix’s CEO saying he doesn’t want to do advertising because he missed the boat and was on Facebook’s board and as a result didn’t believe he had the data to compete as an advertising platform.

Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.

He is explicitly saying they don’t compete. And they don’t.

◧◩◪◨
1809. lacker+9W2[view] [source] [discussion] 2023-11-22 23:10:39
>>ryukop+aQ2
Yes, for example Novo Nordisk is a pharmaceutical company controlled by a nonprofit, worth around $100B.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation

There are other similar examples like Ikea.

But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.

◧◩◪◨⬒⬓⬔
1810. mrfox3+VX2[view] [source] [discussion] 2023-11-22 23:20:38
>>pauldd+PB1
I'm not saying it's 50/50.

But future signees are influenced by previous signees.

Acting in good faith is different from bias.

◧◩◪◨⬒
1811. ilrwbw+xY2[view] [source] [discussion] 2023-11-22 23:24:10
>>kossTK+og2
I agree. The best young minds grinding leet code to get into Google is the biggest symptom of it.
replies(1): >>DSingu+wN3
◧◩◪◨⬒⬓
1812. astran+023[view] [source] [discussion] 2023-11-22 23:46:08
>>mcmcmc+eU2
You're obviously just coping here. FTX was the "rich connected people", there weren't other even richer connecteder people.

(It's also totally possible FTX still has everyone's money. They own a lot of Anthropic shares that are really valuable. But he's still been convicted because of all the fraud they did.)

◧◩◪◨⬒
1813. m3kw9+V33[view] [source] [discussion] 2023-11-22 23:56:49
>>jonas2+fb2
Not sure how much of the employees leaving have to do with negotiating Sam back, must be a big factor but not all, during the table talk Emmett, Angelo and Ilya must have decided that it wasn’t a good firing and a mistake in retrospect and it is to fix it.
◧◩
1814. 627467+V43[view] [source] [discussion] 2023-11-23 00:01:22
>>garris+EJ
I don't get the drama with "conflict of interests"... Aren't board members generally (always?) in representation of major shareholders? Isn't it obvious that shareholders have interests that are likely to be in conflict with each other or even the own organization? Thats why board members are supposed to check each other, right?
replies(1): >>Xelyne+av3
◧◩◪◨
1815. bluech+Ja3[view] [source] [discussion] 2023-11-23 00:28:12
>>drawkb+cP
Developers spend all day building. Pms spend all day playing politics. It is no surprise pms get all the power.
◧◩◪◨
1816. fatbir+7b3[view] [source] [discussion] 2023-11-23 00:31:11
>>notfed+PO1
Adam D'Angelo is on the new board with Brett Taylor and Larry Summers. Tasha, Ilya and Helen are out.

Still think D'Angelo wasn't the power player in the room?

◧◩◪◨⬒⬓
1817. komali+nc3[view] [source] [discussion] 2023-11-23 00:39:22
>>hacker+Iu1
Just putting my hand up as one of the dudes that happened to enter my email on a yc forum (not "page") but really doesn't like the guy lol.

I also have a Twitter account. Guess my opinion on the current or former Twitter CEOs?

◧◩◪◨⬒⬓⬔
1818. iLoveO+Mc3[view] [source] [discussion] 2023-11-23 00:41:34
>>hadloc+Oi
> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.

You mean like they already do on Amazon Bedrock?

replies(1): >>hadloc+Yf3
◧◩◪◨⬒⬓⬔
1819. citygu+5d3[view] [source] [discussion] 2023-11-23 00:43:26
>>gdhkgd+Nk1
I guess my qualm is that this is the cost of doing business, yet people are outraged at the board because they’re not going to make truckloads of money in equity grants. That’s the morally bankrupt part in my opinion.

If you throw your hands up and say, “well kudos to them, theyre actually fulfilling their goal of being a non profit. I’m going to find a new job”. That’s fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that’s on you.

◧◩◪◨⬒
1820. kevin_+rd3[view] [source] [discussion] 2023-11-23 00:45:33
>>dragon+K42
Trump Foundation was a 501c3 that laundered money for 30 years without the IRS batting an eye.
replies(1): >>hnbad+yw4
◧◩◪◨
1821. random+We3[view] [source] [discussion] 2023-11-23 00:55:48
>>OnAYDI+tM1
If you read my comment again, I'm talking about their competence, not their rights. Those are two entirely different things.
◧◩◪◨⬒⬓⬔⧯
1822. hadloc+Yf3[view] [source] [discussion] 2023-11-23 01:02:31
>>iLoveO+Mc3
Yeah and looks like they're going to offer Llama as well. They offer Redhat linux EC2 instances at a premium, and other paid per hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, but not also offer a home-grown LLM at a lower rate once it's ready.
◧◩◪
1823. dizzyd+ig3[view] [source] [discussion] 2023-11-23 01:04:30
>>bradle+hG1
Larry Summers practically invented this stuff...
◧◩◪◨⬒⬓⬔⧯▣▦▧
1824. kcplat+Xg3[view] [source] [discussion] 2023-11-23 01:08:23
>>dahart+PQ2
They threatened to quit. You don’t actually believe that a company would be willing to still provide them a paycheck if they left the company do you?

At this point I suspect you are being deliberately obtuse. Have a good day.

replies(1): >>dahart+zk3
◧◩◪◨⬒⬓
1825. kmlevi+qk3[view] [source] [discussion] 2023-11-23 01:32:33
>>chucke+zN
Again, D'Angelo chose Larry Summers and Bret Taylor to sit on the board with him himself. As long as it is the three of them, he can't be overruled unless both of his personal picks disagree with him. And if the opposition to his idea is all that bad, he probably really should be overruled.

His voting power will get diluted as they add the next six members, but again, all three of them are going to decide who the next members are going to be.

A snippet from the recent Bloomberg article:

>A person close to the negotiations said that several women were suggested as possible interim directors, but parties couldn’t come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, *but deemed to be too close to Altman*, this person said.

Say what else you want about it, this is not going to be a board automatically stacked in Altman's favor.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
1826. dahart+zk3[view] [source] [discussion] 2023-11-23 01:33:44
>>kcplat+Xg3
They threatened to quit by moving to Microsoft, didn’t you read the letter? MS assured everyone jobs if they wanted to move. Isn’t making incorrect assumptions and sticking to them in the face of contrary evidence and not answering direct questions the very definition of obtuse?
◧◩◪◨
1827. system+Vk3[view] [source] [discussion] 2023-11-23 01:36:15
>>ryzvon+pr
I am talking about API. There is no fixed cost for it. 6000 tokens cost around $0.25. If I use if all day long I pay more than $10 per day.
replies(1): >>ryzvon+hU3
◧◩◪◨
1828. dev_tt+3m3[view] [source] [discussion] 2023-11-23 01:45:50
>>iandan+0H1
Yes, and we were also watching the thousands and thousands of companies where these types of conflicts are handled easily by decent people and common sense. Don't confuse the outlier with the silent majority.
◧◩◪◨⬒⬓⬔
1829. nmfish+zp3[view] [source] [discussion] 2023-11-23 02:12:28
>>deckar+yT1
I have no objection to companies[0] making money. It's discarding the philosophical foundations of the company to prioritize quarterly earnings that is offensive.

I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.

[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.

◧◩◪◨⬒
1830. photoc+Gp3[view] [source] [discussion] 2023-11-23 02:13:12
>>jjoona+vK1
No worries. The same kind of people who devoted their time and energy to creating open-source operating systems in the era of Microsoft and Apple are now devoting their time and energy to doing the same for non-lobotomized LLMs.

Look at these clowns (Ilya & Sam and their angry talkie-bot), it's a revelation, like Bill Gates on Linux in 2000:

https://www.youtube.com/watch?v=N36wtDYK8kI

◧◩◪◨⬒⬓⬔⧯▣
1831. optymi+Qr3[view] [source] [discussion] 2023-11-23 02:30:43
>>FartyM+Bg2
How is it an 'existential risk'? Its body of knowledge is publicly available, no?
replies(1): >>FartyM+nd4
◧◩◪
1832. hyperh+Bt3[view] [source] [discussion] 2023-11-23 02:46:06
>>_b+FF1
The more and more I see the way complex share structures are used, the more I think they should be outlawed
◧◩◪◨⬒⬓⬔
1833. Number+Wt3[view] [source] [discussion] 2023-11-23 02:49:17
>>fallin+TI
I would say...not necessarily. The technology that lets someone create a gun does not give the ability to make bulletproof armor or the ability to treat life-threatening gunshot wounds. Or take nerve gases, as another example. It's entirely possible that we can learn how to make horrible pathogens without an equivalent means of curing them.

Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.

◧◩◪
1834. Xelyne+Hu3[view] [source] [discussion] 2023-11-23 02:56:02
>>pc86+fx1
Whats the point of Microsoft appointing a board member if not to sway decision in ways that benefit them?
◧◩◪
1835. dukeof+5v3[view] [source] [discussion] 2023-11-23 02:59:27
>>system+dh
no really who is Sam, and how did he get here? Do u know?
replies(1): >>system+1X3
◧◩◪
1836. Xelyne+av3[view] [source] [discussion] 2023-11-23 03:00:23
>>627467+V43
OpenAI is a non profit and the board members are not allowed to own shares in the for profit.

That means the remaining conflicts are when the board has to make a decisions between growing the profit or furthering the mission statement. I wouldn't trust the new board appointed by investors to ever make the correct decision in these cases, and they already kicked out the "academic" board members with the power to stop them.

◧◩◪
1837. Xelyne+kv3[view] [source] [discussion] 2023-11-23 03:02:15
>>boh+MK1
> Experts/lawyers who have a material stake in getting this right have signed off on it.

How does that work when we're talking about non-profit motives? The lawyers are paid by the companies benefitting from these conflicts, so how is it at all reassuring to hear that the people who benefit from the conflict signed off on it?

> We can go out on a limb and suppose the mission of the nonprofit is not their top priority, and yet they continue on unscathed.

That's the concern. They've just replaced people who "maybe" cared about the mission statement with people who you've correctly identified care more about profit growth than the nonprofit mission.

◧◩◪◨
1838. Xelyne+wv3[view] [source] [discussion] 2023-11-23 03:05:22
>>ric2b+pN2
The only evidence I have is that the board members that were removed had less business connections than the ones that replaced them.

The point of the board is to ensure the charter is being followed, when the biggest concern is "is our commercialization getting in the way of our charter" what else does it mean to replace "academics" with "businesspeople"?

◧◩◪◨
1839. semiqu+Mv3[view] [source] [discussion] 2023-11-23 03:07:32
>>asd88+Lx
LinkedIn has a rep for higher-than-MSFT comp. GitHub for lower.
◧◩◪◨⬒
1840. keepam+cy3[view] [source] [discussion] 2023-11-23 03:30:15
>>keepam+pS
Just following up, it's also totally Smeagol-like to make people sign up before they can get any useful answers at Quora. True Gollum move, D'gelo. Thanks for showin' yer true colors!
◧◩◪
1841. JacobT+Qy3[view] [source] [discussion] 2023-11-23 03:36:06
>>hsavit+7U1
And yet, this union was threatening to move to a company without unions.
◧◩◪◨⬒
1842. dimitr+Mz3[view] [source] [discussion] 2023-11-23 03:43:12
>>acchow+Eg1
Have you ever worked with someone who treats their work as their life? They are borderline psychopaths. As if a health condition or accident will stop them. They'll be taking work calls on the hospital bed.
◧◩◪
1843. quickt+MA3[view] [source] [discussion] 2023-11-23 03:53:25
>>sgt101+Pz1
I don't think AI safetyists are worried about any model they have created so far. But if we are able to go from letter-soup "ooh look that almost seems like a sentence, SOTA!" to GPT4 in 20 years, where will go in the next 20? And what is the point they are becoming powerful. Let alone all the crazy ways people are trying to augment them with RAG, function calls, get them to run on less computer power and so on.

Also being better at humans at everything is not a prerequisite for danger. Probably a scary moment is when it could look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit as a worm. If it can do that on everyday hardware not top end GPUs (either because the algorithms are made more efficient, or every iPhone has a tensor unit).

◧◩
1844. mattmc+IB3[view] [source] [discussion] 2023-11-23 04:02:33
>>garris+EJ
The non-profit could sell off its interest in the for-profit company and use the money for AGI research.
◧◩◪◨⬒
1845. daniel+iE3[view] [source] [discussion] 2023-11-23 04:27:47
>>hadloc+vd
They won't stand still while others are scraping and digitizing. It's like saying there is no moat in search. Scale is a thing. Learning effects are a thing. It's not the worlds widest moat for sure, but it's a moat.
◧◩◪◨⬒
1846. maxdoo+eF3[view] [source] [discussion] 2023-11-23 04:37:28
>>lucubr+pQ2
Do you think people said she has no qualifications because she is a woman, or is it possible people say that because her resume is quite short ? It seems like people taking such comments as misogynistic are actually projecting misogyny into the situation, rather than the reverse. If you showed me her resume and put “Steven Smith” stop the paper, I’d say that person isn’t qualified to be running the board of a 90 billion dollar company in charge of guiding research on some of the most groundbreaking new tech in years.
◧◩◪◨⬒
1847. osigur+yF3[view] [source] [discussion] 2023-11-23 04:39:29
>>catapa+ke1
A company is essentially an optimization problem, meant to minimize / maximize some set of metrics. Usually a companies goal is simply to maximize NPV but in OpenAI's case the goal is to maximize AI while minimizing harm.

"Failure" in this context essentially means arriving at a materially suboptimal outcome. Leaders in this situation, can easily be considered "irreplaceable" particularly in the early stages as decisions are incredibly impactful.

◧◩◪◨
1848. maxdoo+OF3[view] [source] [discussion] 2023-11-23 04:41:51
>>auggie+hZ
Why does being a man or woman even matter? Do we really need a DEI hire for the board of some of the most groundbreaking tech in years? I’m not saying Larry Summers has some perfect resume for the job; but to assume he was brought on BECAUSE he is man?

Cmon. There’s absolutely no evidence for that and you are just projecting an issue into the situation, rather than it being of any reality.

replies(1): >>auggie+YY3
1849. nbzso+yK3[view] [source] 2023-11-23 05:28:53
>>staran+(OP)
Larry Summers? Microsoft? Alignment? Bye, bye humanity.
◧◩◪◨⬒⬓
1850. jachee+6M3[view] [source] [discussion] 2023-11-23 05:47:56
>>Juicyy+5P1
You know that know-it-all should be hyphenated, right?

;)

◧◩◪◨⬒⬓
1851. DSingu+wN3[view] [source] [discussion] 2023-11-23 06:02:31
>>ilrwbw+xY2
The sad part isn’t the rampant sickness. The saddest part is all the “intellectual” professors who enable, encourage, and celebrate this.

It’s sickening.

◧◩◪◨
1852. justco+RQ3[view] [source] [discussion] 2023-11-23 06:40:56
>>highwa+P9
well and he also tried very hard to not buy it until Twitter sued in order to have the contract upheld
◧◩◪◨⬒
1853. Xelyne+1S3[view] [source] [discussion] 2023-11-23 06:53:03
>>jejeyy+0X1
No, it's just counter to the idea that it was "employee power" that brought sam back.

It was capital and the pursuit of more of it.

It always is.

◧◩◪◨
1854. Xelyne+uS3[view] [source] [discussion] 2023-11-23 06:57:04
>>joewfe+Hj2
The nuclear arms race lead to the cold war, not a "good outcome" IMO. It wasn't until nations started imposing those regulations that we got to the point we're at today with nuclear weapons.
◧◩◪◨⬒⬓⬔⧯
1855. JSavag+9T3[view] [source] [discussion] 2023-11-23 07:03:15
>>WendyT+m32
Quora is an embarrassment and died years ago when marketers took it over
◧◩◪◨
1856. Xelyne+KT3[view] [source] [discussion] 2023-11-23 07:08:50
>>__loam+qZ1
That's a strange framing. In that scenario would it not be that he made the decision he thought was right and aligned with openais mission initially, then when seeing the public support Sam had he decided to backtrack so he had a future career?
◧◩◪◨⬒
1857. ryzvon+hU3[view] [source] [discussion] 2023-11-23 07:15:34
>>system+Vk3
Ah, sorry, I was confused, thanks for the clarification.
◧◩◪◨⬒⬓
1858. Obscur+sV3[view] [source] [discussion] 2023-11-23 07:25:46
>>iandan+U32
This comment is perfectionXD
◧◩◪◨
1859. system+1X3[view] [source] [discussion] 2023-11-23 07:43:48
>>dukeof+5v3
How he became a CEO is a common story. Why this drama happened is still unknown to everyone.
◧◩◪◨⬒
1860. PeterS+aX3[view] [source] [discussion] 2023-11-23 07:46:01
>>davedx+yg
I think we agree, as my comments were mostly in reference to Altman's (and other's) regulatory (capture) world tours, though I see how they could be misinterpreted.
◧◩◪◨⬒
1861. auggie+YY3[view] [source] [discussion] 2023-11-23 08:06:00
>>maxdoo+OF3
I think you are the one projecting. I am just presenting facts. There is also nobody black on the board, by the way. I don't think that is a problem, but it is what it is.

Now this "initial board", tasked with establishing the rest of the board, for a company that wants to create AGI for the benefit of humanity, consists of three white alpha-males. That's just a fact. Is it a coincidence? Of course not.

◧◩◪◨⬒⬓
1862. silvar+o04[view] [source] [discussion] 2023-11-23 08:21:41
>>low_te+zs
Youre saying that the problem will be people using AI to persuade other people that the AI is 'super smart' and should be held in high esteem.

Its already being done now with actors and celebrities. We live in this world already. AI will just make this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.

◧◩◪
1863. rcMgD2+k34[view] [source] [discussion] 2023-11-23 08:55:43
>>ah765+Ni
>The story about Sam trying to get board control and the board retaliating seems more plausible given what's actually happened.

What story? Any link?

◧◩
1864. scooke+B34[view] [source] [discussion] 2023-11-23 08:58:33
>>jafitc+F91
Many are still going to use this; few will bother to ponder and break the event down like this.
◧◩◪◨⬒
1865. IanCal+544[view] [source] [discussion] 2023-11-23 09:03:13
>>framap+jd1
Nothing I've said suggests that or requires that.
replies(1): >>framap+qc4
◧◩◪◨⬒
1866. astran+Y44[view] [source] [discussion] 2023-11-23 09:11:46
>>Davidz+V51
I think "reasoning" is a descriptive term like "AI" and it's hard to know what people would accept as reasoning.

Explicit planning with discrete knowledge is GOFAI and I think isn't workable.

There is whatever's going on here: https://x.com/natolambert/status/1727476436838265324?s=46

◧◩◪◨⬒⬓⬔
1867. maxlin+q74[view] [source] [discussion] 2023-11-23 09:42:06
>>Crespy+9I1
1 hard problems.

naming things, cache invalidation, off-by one errors, and overflows.

◧◩◪◨
1868. gabrie+M74[view] [source] [discussion] 2023-11-23 09:47:45
>>giamma+QK
Perhaps, those are exception that proves the rule?

But whether it is deserved or not, it is never the question when congratulating a CEO for an achievement.

◧◩◪◨⬒
1869. farama+ca4[view] [source] [discussion] 2023-11-23 10:11:37
>>baking+8B1
Maybe the vision is to eventually bring UBI into it and cap earn outs. Not so wild given Sam’s world coin and his UBI efforts when he was YC president.
replies(1): >>baking+rU4
◧◩◪
1870. BOOSTE+nc4[view] [source] [discussion] 2023-11-23 10:39:48
>>Quenti+Rg2
Money indeed is worth more, also the only thing that is easy to measure during crisis.
◧◩◪◨⬒⬓
1871. framap+qc4[view] [source] [discussion] 2023-11-23 10:40:07
>>IanCal+544
Apologies, I mistook this:

"Raising private investment allows a non profit to shift cost and risk to other entities."

for a suggestion of that.

◧◩◪◨⬒⬓⬔⧯▣▦
1872. FartyM+nd4[view] [source] [discussion] 2023-11-23 10:48:45
>>optymi+Qr3
What do you mean by "its"? There isn't any AGI yet. ChatGPT is far from that level.
◧◩◪◨
1873. sgt101+7e4[view] [source] [discussion] 2023-11-23 10:55:53
>>Davidz+D52
Language translation is due to the huge corpus of translations that it's trained on. Google translate has been doing this for years. People don't apply softmax to their arithmetic. Again, code generation is approximate retrieval, it can't generate anything outside of it's training distribution.
◧◩◪
1874. ric2b+oe4[view] [source] [discussion] 2023-11-23 10:58:35
>>logicc+jb
Larry David is never wrong on these things, you can trust him.
◧◩◪
1875. alebai+Ae4[view] [source] [discussion] 2023-11-23 11:01:09
>>pug_mo+Cb
My safety (of my group) is what really matters.
◧◩◪◨⬒⬓⬔
1876. ric2b+Ve4[view] [source] [discussion] 2023-11-23 11:05:23
>>rvnx+Ea1
> No conspiracy needed, for example, it's very convenient that MSFT can politely "influence" OpenAI to spend back on their platform a lot of the money they gave to the non-profit back to their for-profit (and profitable) company.

Can you explain this further? So Microsoft pays $X to OpenAI, then OpenAI uses a lot of energy and hardware from Microsoft and the $X go back to Microsoft. How does Microsoft gain money this way?

replies(1): >>matwoo+xA4
◧◩◪◨⬒
1877. cbeach+ef4[view] [source] [discussion] 2023-11-23 11:09:05
>>hn_thr+ev1
Perhaps you're not aware. Living in Beijing is not equivalent to "once eating a fortune cookie"

> it seems that Helen was picked by Holden to take his seat.

So you can only speculate as to how she got the seat. Which is exactly my point. We can only speculate. And it's a question worth asking, because governance of America's most important AI company is a very important topic right now.

◧◩◪◨
1878. _giorg+Vl4[view] [source] [discussion] 2023-11-23 12:20:45
>>gbaldu+DA
It's only a temporary board.

Furthermore, being removed from the board while keeping a role as chief scientist is different from being fired from CEO and having to leave the company.

◧◩◪◨⬒
1879. execut+Sn4[view] [source] [discussion] 2023-11-23 12:36:22
>>davedx+yg
> they wanted AI to be OPEN FOR EVERYONE

I strongly disagree with that. If that was their motivation, then why is it not open-sourced? Why is it hardcoded with prudish limitations? That is the direct opposite of open and free (as in freedom) to me.

◧◩◪◨⬒⬓⬔
1880. thedud+6w4[view] [source] [discussion] 2023-11-23 13:39:27
>>nickpp+wk1
doomerism at the society level which overrides individual freedoms definitely occurs: covid lockdowns, takeover of private business to fund/supply the world wars, gov mandates around "man made" climate change.
◧◩◪◨⬒⬓
1881. hnbad+yw4[view] [source] [discussion] 2023-11-23 13:41:54
>>kevin_+rd3
The Bill and Melinda Gates Foundation is a 501c3 and I'd expect that even the most techno-futurist free-market types on HN would agree that no matter what alleged impact it has, it is also in practice creating profitable overseas contracts for US corporations that ultimately provide downstream ROI to the Gates estate.

Most people just tend to go about it more intelligently than Trump but "charitable" or "non-profit" doesn't mean the organization exists to enrich the commons rather than the moneyed interests it represents.

1882. jodupl+sA4[view] [source] 2023-11-23 14:07:26
>>staran+(OP)
There is so much vagueness around this whole OpenAI thing that it's difficult taking anything seriously anymore - it's almost hearsay at this point. Yesterday it was Altman's personal interests, now it's a breakthrough model, tomorrow it's something else. At the very least it's fantastic marketing (albeit at the expense of their customers).
◧◩◪◨⬒⬓⬔⧯
1883. matwoo+xA4[view] [source] [discussion] 2023-11-23 14:07:47
>>ric2b+Ve4
MS gains special access and influence over OpenAI for effectively 'free'. Obviously the compute cost MS money, and some of their 'donation' is used on OpenAI salaries, but still. This special access and influence lets MS be first to market on all sorts of products - see co-pilot already with a 1M+ paying subscribers.

For example, let's say I'm a big for-profit selling shovels. You're a naive non-profit who needs shovels to build some next gen technology. Turns out you need a lot of shovels and donations so far haven't cut it. I step in and offer to give you all the shovels you need, but I want special access to what you create. And even if it's not codified, you will naturally feel indebted to me. I gain huge upside for just my marginal cost of creating the shovels. And, if I gave the shovels to a non-profit I can also take tax write-offs at the shovel market value.

TBH, it was an amazing move by MS. And MS was the only big cloud provider who could have done it b/c Sataya appears collaborative and willing to partner. Amazon would have been an obvious choice, but they don't partnership like that and instead tend to buy companies or repurpose OSS. And Google can't get out of their own way with their hubris.

replies(1): >>ric2b+51d
1884. Obscur+NB4[view] [source] 2023-11-23 14:14:28
>>staran+(OP)
Is this the most famous post?
◧◩◪◨⬒⬓⬔⧯
1885. jonono+sM4[view] [source] [discussion] 2023-11-23 15:16:53
>>supert+Ce2
The principles, best practices and tools of safety engineering can be applied to new projects. We have decades of experience now. Not saying it will be perfect on the first try, or that we know everything that is needed. But the novel aspects of AI are not an excuse to not try.
◧◩◪◨⬒⬓
1886. baking+rU4[view] [source] [discussion] 2023-11-23 15:55:13
>>farama+ca4
The public support test for public charities is a 5-year rolling average, so "eventually" won't help you. The idea of billionaires asking the public for donations to support their wacky ideas is actually quite humorous. Just make it a private foundation and follow the appropriate rules. Bill Gates manages to do it and he's a dinosaur.
◧◩◪◨⬒⬓⬔
1887. lacrim+MU4[view] [source] [discussion] 2023-11-23 15:57:02
>>saalwe+iQ2
Sama’s moral compass clearly has north pointing at money and that will definitely get him to a different destination.
◧◩◪◨
1888. ncalla+rV4[view] [source] [discussion] 2023-11-23 15:59:20
>>qwery+4g1
> is by definition poorly aligned

If OpenAI is struggling to hard with the corporate alignment problem, how are they going to tackle the outer and inner alignment problems?

◧◩◪
1889. patcon+oX4[view] [source] [discussion] 2023-11-23 16:10:42
>>random+Yf
> The board acted like the most incompetent group of individuals who've eve[r been] handed any responsibility.

This whole conversation has been full of appeals to authority. Just because us tech people don't know some of these names and their accomplishments, we talk about them being "weak" members. The more I learn, the more I think this board was full of smart ppl who didn't play business politics well (and that's ok by me, as business politics isn't supposed to be something they have to deal with).

Their lack of entanglements makes them stronger members, in my perspective. Their miscalculation was in how broken the system is in which they were undermined. And you and I are part of that brokenness even in how we talk about it here

◧◩◪◨⬒⬓⬔
1890. rolisz+f45[view] [source] [discussion] 2023-11-23 16:50:30
>>hadloc+Oi
Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70b is more expensive to run than GPT-3.5, unless you can get 70+% utilization rate for your GPUs, which is hard to do.

So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them for similar speeds.

1891. carapa+I75[view] [source] 2023-11-23 17:09:45
>>staran+(OP)
One of the oldest AGI jokes:

Q: What's AGI?

A: When the machine wakes up and asks, "What's in it for me?"

- - - -

So long, and thanks for all the fish.

◧◩◪
1892. system+tb5[view] [source] [discussion] 2023-11-23 17:31:06
>>pug_mo+Cb
Ah, you don't need to go far. Just go to your local HOA meetings.
◧◩◪◨⬒⬓⬔
1893. makewo+Gf5[view] [source] [discussion] 2023-11-23 17:54:09
>>nickpp+Vr1
The value of a creation cannot be solely judged by its output. It's hard to explain, it's better to intuit it.
◧◩◪◨⬒⬓⬔⧯▣
1894. makewo+5g5[view] [source] [discussion] 2023-11-23 17:56:26
>>nickpp+BX1
If you think humanity being lost is acceptable, then it's hard to discuss anything else on this topic.
replies(1): >>nickpp+mp5
◧◩◪◨⬒⬓⬔⧯▣▦
1895. nickpp+mp5[view] [source] [discussion] 2023-11-23 18:44:04
>>makewo+5g5
> you think humanity being lost is acceptable

I never said that.

◧◩◪◨⬒⬓⬔⧯
1896. pauldd+Yt5[view] [source] [discussion] 2023-11-23 19:12:51
>>muraka+UQ2
Why couldn't those people have acted on their own judgement?
◧◩◪◨⬒⬓⬔
1897. disgru+cx5[view] [source] [discussion] 2023-11-23 19:30:25
>>kubect+6U
I just don't agree that social media is particularly harmful, relative to other things that humans have invented. To be brutally honest, people blame new forms of media for pre existing dysfunctions of society and I find it tiresome. That's why I like the printing press analogy.
1898. toaste+bJ5[view] [source] 2023-11-23 20:39:35
>>staran+(OP)
Seems so strange all of this happened
◧◩
1899. tacoca+106[view] [source] [discussion] 2023-11-23 22:23:45
>>voiceb+6o1
Thanks for sharing.

I would have guessed the stunt was to hide the switch from sugar to High Fructose Corn syrup.

◧◩◪
1900. miracu+sa6[view] [source] [discussion] 2023-11-23 23:29:20
>>pug_mo+Cb
Yes, it's an outright powergrab. They will stop at nothing.

Case in point, the new AI laws like the EU AI act will outlaw *all* software unless registered and approved by some "authority".

The result will be concentration of power, wealth for the few, and instability and poverty for everyone else.

◧◩◪◨⬒
1901. dragon+Sa6[view] [source] [discussion] 2023-11-23 23:33:01
>>jjoona+vK1
No, its the part of the show where they go back to providing empty lip service to the principles and using them as a pretext for things that actually serve narrow proprietary interests, the same way they were before the leadership that has been doing that for a long time was temporarily removed until those sharing the proprietary interests revolted for a return to the status quo ante.
◧◩◪◨⬒⬓
1902. miracu+5b6[view] [source] [discussion] 2023-11-23 23:34:31
>>low_te+zs
Yes, but this distinction will not be possible in the future some people are working on. This future will be such that whatever their "safe" AI says is not ok will lead to prosecution as "hate speech". They tried it with political correctness, it failed because people spoke up. Once AI makes the decision they will claim that to be the absolute standard. Beware.
◧◩◪◨
1903. miracu+eb6[view] [source] [discussion] 2023-11-23 23:36:15
>>nopins+th
Such attacks cannot be stopped by outlawing technology.
◧◩◪◨⬒⬓
1904. vaxman+rA6[view] [source] [discussion] 2023-11-24 04:06:50
>>erosen+Ed1
yeah, all they have to do is pray for humanity to not let the magic AI out of the bottle and they’re free to have a $91b valuation and flaunt it in the media for days.. https://youtu.be/2HJxya0CWco
◧◩◪
1905. baruz+3F6[view] [source] [discussion] 2023-11-24 05:19:00
>>nbanks+zj
> They wanted a new CEO

If that were the case, would they not have presented the new CEO immediately for an “orderly transition”? As I understand it, Ms Murati tried to get Altman back, and when she pressured the board, they tried at least two other possible CEOs before settling on Mr Shear, who also threatened to leave if they could not give evidence of a legal reason for firing Altman. It smells like a personality conflict.

◧◩◪◨
1906. flagra+HF6[view] [source] [discussion] 2023-11-24 05:29:29
>>dgrin9+X61
My comment here was actually meant to talk about AI broadly, though I can get the confusion here as the original source thread here is about OpenAI.

I also don't expect the government to do anything about the OpenAI situation, to be clear. Though my read is actually that the government had to be evolved behind closed doors to move so quickly to get Sam back to OpenAI. Things moved much too quickly and secretively in an industry that is obviously of great interest to the military, there's no way the feds didn't put a finger on the scale to protect their interests at which point they wouldn't come back in to regulate.

◧◩◪◨⬒⬓
1907. teachi+tN6[view] [source] [discussion] 2023-11-24 07:18:13
>>LordDr+nP1
It happens a lot. Every big company has CEOs from other businesses on its board and sometimes those businesses will have competing products or services.

Eric Schmidt on Apple’s board is the example that immediately came to my mind. https://www.apple.com/ca/newsroom/2009/08/03Dr-Eric-Schmidt-...

◧◩◪◨
1908. james-+O97[view] [source] [discussion] 2023-11-24 11:13:09
>>Davidz+D52
Not necessarily; much smaller models like T5 which in some ways introduced instructions (not RLHF yet) did have to include specific instructions for useful translation - of similar format to those you find in large scale web translation data, but this is coincidental: you can finetune it with whatever instruction word you want to indicate translation - the point is, a much smaller model can translate.

The base non-RLHF GPT models could do translation by prefixing by the target language and a semi colon, but only above a certain amount of parameters are they consistent. GPT-2 didn't always get it right and of course had general issues with continuity. However, you could always do some parts of translation with older transformer models like BERT, especially multilingual ones.

Larger models across different from-base training runs show that they become more effective at translation at certain points, but I think this is about the capacity to store information, not emergence per say (if you understand my difference here). You've probably noticed and it has always seemed to me 4B, 6B and 9B are the largest rough parameter sizes with 2020 style training set ups that you see the most general "appearance" of some useful behaviours that you could "glean" from the web and book data that doesn't include instructions, while consistency seems to remain the domain of larger models or mixed expert models and lots of RLHF training/tricks. The easiest way to see this is to compare GPT-2 large, GPT-J and GPT-20B and see how well they perform at different tasks. However the fact it's about size in these GPTs, and yet smaller models (T5 instruction tuned / multilingual BERT) can perform at the same level on some tasks implies that it is about what the model is focusing it's learning on for the training task at hand, and controllable, rather than being innate at a certain parameter size. Language translations just do make up a lot of the data. I don't think it would emerge if you removed all cases of translation / multi language input/outputs, definitely not at the same parameter size, even if you had the same overall proportion of languages in the training corpus, if that makes sense? It just seems too much an artefact of the corpus aligning with the task.

Likewise for code - Gpt-4 generated code is not like arithmetic in the sense of the way people might mean it for code (e.g. branching instructions / abstract syntax tree) - its a fundamentally local text form of generation, this is why it can happily add illegal imports etc to diffs (perhaps one day training will resolve this) - it doesn't have the AST or compiler or much consistent behaviour to imply it deeply understands as it writes the code what could occur.

However if recent reports about arithmetic being an area of improvement are true, I am very excited, as a lot of what I wrote above - will have to be reconceptualised... and that is the most exciting scenario...

◧◩◪◨⬒
1909. simonh+PF7[view] [source] [discussion] 2023-11-24 15:10:13
>>gorbyp+PA
I think what happened is Microsoft got the raw GPT3.5 weights, based on the training set. However for ChatGPT OpenAI had done a lot of additional training to create the 'assistant' personality, using a combination of human and model based response evaluation training.

Microsoft wanted to catch up quickly so instead of training the LLM itself, they relied on prompt engineering. This involved pre-loading each session with a few dozen rules about it's behaviour as 'secret' prefaces to the user prompt text. We know this because some users managed to get it to tell them the prompt text.

◧◩◪◨⬒⬓⬔
1910. mlindn+S39[view] [source] [discussion] 2023-11-25 00:58:25
>>krisof+fg1
If the future we're talking about is a future where AI is in any software and is assisting writers writing and assisting editors to edit and doing proofreading and everything else you're absolutely going to be running into the ethics limits of AIs all over the place. People are already hitting issues with them at even this early stage.
◧◩◪◨⬒⬓⬔⧯▣
1911. kortil+oAa[view] [source] [discussion] 2023-11-25 19:32:44
>>karmas+yv
Based on the behavior of lots of smart people I worked at with Google during Google’s good times, critical thinking is definitely in the minority party. Brilliant people from Stanford, Berkeley, MIT, etc would all be leading experts in this or that but would lack critical thinking because they were never forced to develop that skill.

Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.

◧◩◪◨⬒⬓⬔
1912. kortil+WAa[view] [source] [discussion] 2023-11-25 19:36:19
>>rewmie+vV
You’re projecting a lot. I made a comment about one false premise, nothing more, nothing less.
◧◩◪◨⬒
1913. abkola+a0c[view] [source] [discussion] 2023-11-26 13:17:46
>>abkola+Kq
Edit: Making no excuses, this one is embarrassing.
◧◩◪◨⬒⬓⬔⧯▣
1914. ric2b+51d[view] [source] [discussion] 2023-11-26 21:41:58
>>matwoo+xA4
Ok, but does any of this have to do with tax avoidance? I thought that was what you were talking about, no?

Because what you just described would happen the same way with a for-profit company, no?

◧◩◪◨⬒
1915. CRConr+Y7t[view] [source] [discussion] 2023-12-01 14:43:24
>>halfma+102
Good thing you had a question mark there.

Because the answer is: Yes, it seems utterly instrumental.

[go to top]