zlacker

[parent] [thread] 109 comments
1. joshst+(OP)[view] [source] 2023-11-20 14:44:56
Well I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt, since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.

Ilya votes for and stands behind the decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.

replies(14): >>soderf+u >>synerg+v1 >>OscarT+P5 >>Jeremy+47 >>rtkwe+18 >>airstr+Y8 >>l5870u+1a >>jstumm+qa >>Solven+Fa >>nostra+yc >>yafbum+Xj >>laurel+us >>cactus+MH >>lysecr+DO
2. soderf+u[view] [source] 2023-11-20 14:47:14
>>joshst+(OP)
It's almost like a ChatGPT hallucination. Where will this all go next? It seems like HN is melting down.
replies(5): >>voisin+N >>tedivm+h2 >>testpl+04 >>guhcam+c9 >>checky+xk
◧◩
3. voisin+N[view] [source] [discussion] 2023-11-20 14:49:05
>>soderf+u
* Elon enters the chat *
replies(1): >>soderf+H2
4. synerg+v1[view] [source] 2023-11-20 14:52:40
>>joshst+(OP)
Ilya ruined everything and is shamelessly playing innocent; how low can he go?

Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.

replies(2): >>Tenoke+f2 >>marcus+57
◧◩
5. Tenoke+f2[view] [source] [discussion] 2023-11-20 14:55:51
>>synerg+v1
This is an extremely uncharitable take based on pure speculation.

>Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.

???

I personally suspect Ilya tried to do the best he could for OpenAI and humanity, but it backfired/they underestimated Altman, and now he is doing the best he can to minimize the damage.

replies(2): >>s1arti+y4 >>synerg+t8
◧◩
6. tedivm+h2[view] [source] [discussion] 2023-11-20 14:56:03
>>soderf+u
> It seems like HN is melting down.

Almost literally: this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.

replies(4): >>jprd+Ya >>pauldd+0h >>Applej+9N >>dang+GN2
◧◩◪
7. soderf+H2[view] [source] [discussion] 2023-11-20 14:58:02
>>voisin+N
It's like a bad WWE storyline. At this point I would not be surprised if Elon joins in, steel chair in hand.
replies(1): >>bellta+l3
◧◩◪◨
8. bellta+l3[view] [source] [discussion] 2023-11-20 15:01:51
>>soderf+H2
> steel chair in hand

And a sink in the other hand.

replies(1): >>jowea+vI
◧◩
9. testpl+04[view] [source] [discussion] 2023-11-20 15:05:25
>>soderf+u
Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)
◧◩◪
10. s1arti+y4[view] [source] [discussion] 2023-11-20 15:09:00
>>Tenoke+f2
Or they simply found themselves facing a tough decision without superhuman predictive powers and did the best they could to navigate it.
11. OscarT+P5[view] [source] 2023-11-20 15:17:04
>>joshst+(OP)
What did the board think would happen here? What was their overly optimistic end state? In a minimax situation the opposition gets the 2nd, 4th, ... moves; Altman's first tweet took the high road, and the board had no decent response.

We humans, even the AI-assisted ones, are terrible at thinking beyond second-order consequences.

12. Jeremy+47[view] [source] 2023-11-20 15:26:19
>>joshst+(OP)
There's no way to read any of this other than that the entire operation is a clown show.

All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.

Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

replies(10): >>pgeorg+F9 >>Booris+kg >>3cats-+Lg >>vitorg+bh >>creer+OA >>bredre+aB >>dkjaud+OG >>tim333+PG >>moffka+UQ >>averag+Lw1
◧◩
13. marcus+57[view] [source] [discussion] 2023-11-20 15:26:28
>>synerg+v1
Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:

Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.

He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.

[0]Never attribute to malice that which is adequately explained by incompetence.

replies(1): >>kibwen+Eh
14. rtkwe+18[view] [source] 2023-11-20 15:32:13
>>joshst+(OP)
That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?

It's been a busy weekend for me, so I haven't really followed whether more has come out since then.

replies(2): >>nathan+t9 >>ssnist+ok
◧◩◪
15. synerg+t8[view] [source] [discussion] 2023-11-20 15:35:59
>>Tenoke+f2
I did not make this up; it's from OpenAI's own employees, since deleted but archived somewhere that I read.
replies(1): >>cactus+4J
16. airstr+Y8[view] [source] 2023-11-20 15:39:50
>>joshst+(OP)
> I think everyone is a "loser" in the current situation.

On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI outfit operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.

I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.

replies(1): >>gtirlo+ta
◧◩
17. guhcam+c9[view] [source] [discussion] 2023-11-20 15:41:33
>>soderf+u
I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.

The most awesome fic I could come up with so far is: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders in OpenAI, he uses an advanced version of ChatGPT to impersonate board members in private messages with one another, so they each believe a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.

I can picture Musk's maniacal laughter as the plan unfolds, and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.

replies(1): >>InCity+jj
◧◩
18. nathan+t9[view] [source] [discussion] 2023-11-20 15:42:52
>>rtkwe+18
It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety-focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.

It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.

◧◩
19. pgeorg+F9[view] [source] [discussion] 2023-11-20 15:44:16
>>Jeremy+47
> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?

For starters it allows them to pretend that it's "underdog v. Google" and not "two tech giants at each other's throats".

20. l5870u+1a[view] [source] 2023-11-20 15:46:30
>>joshst+(OP)
I don't think Microsoft is a loser here, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt from a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months, and, as often happens, the researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.

In this case, recognizing the need for a new board that adheres to the founding principles makes sense.

replies(3): >>trasht+Oe >>JacobT+2q >>martin+cX
21. jstumm+qa[view] [source] 2023-11-20 15:49:24
>>joshst+(OP)
> just madness

In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, though not at any price (and the price is right now approaching the destruction of OpenAI), is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.

Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.

◧◩
22. gtirlo+ta[view] [source] [discussion] 2023-11-20 15:49:44
>>airstr+Y8
This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.
replies(1): >>airstr+jc
23. Solven+Fa[view] [source] 2023-11-20 15:50:28
>>joshst+(OP)
Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.

I'm not joking.

◧◩◪
24. jprd+Ya[view] [source] [discussion] 2023-11-20 15:51:52
>>tedivm+h2
Server, singular. And single-core. Poor @dang deserves better from lurkers (sign out!) and those not ready to comment yet (me until just now, and then again right after!)
◧◩◪
25. airstr+jc[view] [source] [discussion] 2023-11-20 15:59:44
>>gtirlo+ta
If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land
26. nostra+yc[view] [source] 2023-11-20 16:01:13
>>joshst+(OP)
Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.

This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.

Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.

replies(3): >>deelow+cd >>Zetoba+ih >>spacem+eo
◧◩
27. deelow+cd[view] [source] [discussion] 2023-11-20 16:04:49
>>nostra+yc
This seems really dangerous. What's to stop top talent from simply choosing a different suitor?
replies(2): >>TrapLo+Ve >>nostra+2i
◧◩
28. trasht+Oe[view] [source] [discussion] 2023-11-20 16:14:12
>>l5870u+1a
If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.
replies(1): >>jowea+vK
◧◩◪
29. TrapLo+Ve[view] [source] [discussion] 2023-11-20 16:14:38
>>deelow+cd
Allegiance to the Altman/Brockman brand. Showing your allegiance to your general when they defected / were thrown out is how you rank up.
◧◩
30. Booris+kg[view] [source] [discussion] 2023-11-20 16:22:00
>>Jeremy+47
I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...

My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.

OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship, and if there's an impediment to that, everyone is aligned in removing said impediment, even if it means bending your own corner's priorities.

Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.

The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech

replies(2): >>jkapla+cA >>dkjaud+OI
◧◩
31. 3cats-+Lg[view] [source] [discussion] 2023-11-20 16:24:42
>>Jeremy+47
Welcome to reality: every operation has clown moments, even the well-run ones.

That in itself is not what's critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.

The stakes are gigantic. They may even have AGI cooking inside.

My interpretation is relatively basic, and maybe simplistic but here it is:

- Ilya had some grievances with Sam Altman rushing development and releases, and with his COI (conflict of interest) with his other new ventures.

- Adam was alarmed by GPTs competing with his recently launched Poe.

- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.

- They decided to organize a coup, but Ilya didn't think it would get this far out of hand, while the other three saw only power and $$$ in sticking to their guns.

That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.

replies(2): >>selimt+ss >>baq+PK
◧◩◪
32. pauldd+0h[view] [source] [discussion] 2023-11-20 16:26:23
>>tedivm+h2
It's because HN refuses to use more than one server/core.

Because using only one is pretty cool.

replies(2): >>yafbum+lj >>dang+JN2
◧◩
33. vitorg+bh[view] [source] [discussion] 2023-11-20 16:27:24
>>Jeremy+47
They are exactly hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because OpenAI still has the best LLM out there in the short term.
replies(2): >>vlovic+Ho >>FartyM+Yo
◧◩
34. Zetoba+ih[view] [source] [discussion] 2023-11-20 16:27:49
>>nostra+yc
OpenAI always was and will be the AI bad bank for Microsoft...
◧◩◪
35. kibwen+Eh[view] [source] [discussion] 2023-11-20 16:29:29
>>marcus+57
Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.
replies(1): >>warkda+zg2
◧◩◪
36. nostra+2i[view] [source] [discussion] 2023-11-20 16:31:08
>>deelow+cd
Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.
◧◩◪
37. InCity+jj[view] [source] [discussion] 2023-11-20 16:38:02
>>guhcam+c9
Shouldn't it be 'Chairman' -Xi?
◧◩◪◨
38. yafbum+lj[view] [source] [discussion] 2023-11-20 16:38:08
>>pauldd+0h
I believe it's operating by the mantra of "doing the things that don't scale"
replies(1): >>jowea+ZT
39. yafbum+Xj[view] [source] 2023-11-20 16:41:07
>>joshst+(OP)
Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.
replies(3): >>rawgab+Es >>logicc+mz >>pauldd+511
◧◩
40. ssnist+ok[view] [source] [discussion] 2023-11-20 16:43:13
>>rtkwe+18
Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told (so ofc they are not going to take the side that kept them in the dark). Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like they peeked at a forbidden artifact and now everyone has a spell cast upon them.
replies(1): >>Pepper+Im
◧◩
41. checky+xk[view] [source] [discussion] 2023-11-20 16:43:46
>>soderf+u
Part of sama's job was to turn the crank on the servers every couple of hours, so no surprise that they are winding down by now.
◧◩◪
42. Pepper+Im[view] [source] [discussion] 2023-11-20 16:54:22
>>ssnist+ok
The original reason given was "lack of candor"; what continues to be questioned is whether or not that was the true reason. The lack-of-candor comment about their ex-CEO is actually what drew me into this in the first place, since it's rare that a major organization publicly gives a reason for parting ways with its CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.
◧◩
43. spacem+eo[view] [source] [discussion] 2023-11-20 16:59:03
>>nostra+yc
I can assure you, none of the people at OpenAI are hurting for lack of employment opportunities.
replies(2): >>x0x0+iF >>treis+b21
◧◩◪
44. vlovic+Ho[view] [source] [discussion] 2023-11-20 17:00:43
>>vitorg+bh
With MS having access and perpetual rights to all IP that OpenAI has right now..?
◧◩◪
45. FartyM+Yo[view] [source] [discussion] 2023-11-20 17:02:21
>>vitorg+bh
> They are exactly hiring everyone from OpenAI.

Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.

replies(1): >>vitorg+xw
◧◩
46. JacobT+2q[view] [source] [discussion] 2023-11-20 17:05:51
>>l5870u+1a
>I view this as a final (and perhaps desperate) attempt from a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.

Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft, then?

◧◩◪
47. selimt+ss[view] [source] [discussion] 2023-11-20 17:13:34
>>3cats-+Lg
Murder on the AGI alignment Express
replies(2): >>3cats-+Lu >>Terr_+FZ
48. laurel+us[view] [source] 2023-11-20 17:13:38
>>joshst+(OP)
Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.
replies(1): >>smolde+kD
◧◩
49. rawgab+Es[view] [source] [discussion] 2023-11-20 17:14:35
>>yafbum+Xj
The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.

It is a great time to be a lobbyist.

◧◩◪◨
50. 3cats-+Lu[view] [source] [discussion] 2023-11-20 17:20:50
>>selimt+ss
Nice, that actually does fit. :D
◧◩◪◨
51. vitorg+xw[view] [source] [discussion] 2023-11-20 17:26:08
>>FartyM+Yo
Yes, you are right. Actually, not even Sam Altman is showing in the Microsoft corporate directory, per The Verge.

But I heard it usually takes ~5 days to show up there anyway.

◧◩
52. logicc+mz[view] [source] [discussion] 2023-11-20 17:35:21
>>yafbum+Xj
If it was nationalised all the talent would leave anyway, as the government can't pay close to the compensation they were getting.
replies(1): >>yafbum+5C
◧◩◪
53. jkapla+cA[view] [source] [discussion] 2023-11-20 17:38:36
>>Booris+kg
I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.
replies(1): >>Booris+hC
◧◩
54. creer+OA[view] [source] [discussion] 2023-11-20 17:41:11
>>Jeremy+47
> what purpose is there in keeping OpenAI around?

Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.

◧◩
55. bredre+aB[view] [source] [discussion] 2023-11-20 17:42:23
>>Jeremy+47
There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed years ahead, only to get destroyed by unforced errors.

This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.

OpenAI will have a harder time keeping its secret sauces from leaking out, and productivity must be in a nosedive.

A terrible mess.

replies(2): >>Vervio+uG >>dkjaud+VH
◧◩◪
56. yafbum+5C[view] [source] [discussion] 2023-11-20 17:44:54
>>logicc+mz
You may be mistaking nationalization for civil-servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank).
replies(1): >>kickop+Zd1
◧◩◪◨
57. Booris+hC[view] [source] [discussion] 2023-11-20 17:46:10
>>jkapla+cA
To me it's not surprising, because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage.

They weren't attracted to OpenAI by money alone; a chance to actually ship their life's work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives.

◧◩
58. smolde+kD[view] [source] [discussion] 2023-11-20 17:50:06
>>laurel+us
No, it was just widely speculated that he was the ringleader. This seems to indicate he wasn't. We don't know.

Maybe the Quora guy, maybe the RAND Corp lady? All speculation.

replies(1): >>laurel+1T
◧◩◪
59. x0x0+iF[view] [source] [discussion] 2023-11-20 17:55:54
>>spacem+eo
Especially after this weekend.

If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval to send senior researchers offers to hire them and their 20 preferred employees.

◧◩◪
60. Vervio+uG[view] [source] [discussion] 2023-11-20 17:59:59
>>bredre+aB
Maybe it's better for society overall when a single ivory tower doesn't have a monopoly on AI!
◧◩
61. dkjaud+OG[view] [source] [discussion] 2023-11-20 18:01:01
>>Jeremy+47
> There's no way to read any of this other than that the entire operation is a clown show.

In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision: customers, employees, and the board.

replies(3): >>lambic+tK >>topspi+vO >>sebzim+NP
◧◩
62. tim333+PG[view] [source] [discussion] 2023-11-20 18:01:05
>>Jeremy+47
I'm not sure about the entire operation so much as the three non-AI board members. Ilya tweeted:

>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show: "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner".

replies(1): >>mcmcmc+zf2
63. cactus+MH[view] [source] 2023-11-20 18:04:32
>>joshst+(OP)
Ilya is probably in talks with Altman.
◧◩◪
64. dkjaud+VH[view] [source] [discussion] 2023-11-20 18:05:04
>>bredre+aB
> This instability can only mean the industry as a whole will move forward faster.

The hype surrounding OpenAI, and the black hole of credibility it created, was a problem; it's only positive that it's been taken down several notches. Better now than when they have even more (undeserved) influence.

replies(1): >>sebzim+cQ
◧◩◪◨⬒
65. jowea+vI[view] [source] [discussion] 2023-11-20 18:07:06
>>bellta+l3
If he could do that he would have fought Zuckerberg.
◧◩◪
66. dkjaud+OI[view] [source] [discussion] 2023-11-20 18:08:07
>>Booris+kg
> OpenAI struck me as one of the few companies where that's not being allowed to take root

They just haven't gotten big or rich enough yet for the rot to set in.

◧◩◪◨
67. cactus+4J[view] [source] [discussion] 2023-11-20 18:09:01
>>synerg+t8
Link?
◧◩◪
68. lambic+tK[view] [source] [discussion] 2023-11-20 18:14:12
>>dkjaud+OG
I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which obviously was the case here, since everyone except the clowns was caught completely off-guard).
replies(2): >>Terrif+VQ >>Jeremy+oU
◧◩◪
69. jowea+vK[view] [source] [discussion] 2023-11-20 18:14:13
>>trasht+Oe
Of course the screenwriters are going to find a way to involve Elon in the 2nd season, but is the most valuable part the researchers or the models themselves?
replies(1): >>trasht+d62
◧◩◪
70. baq+PK[view] [source] [discussion] 2023-11-20 18:15:21
>>3cats-+Lg
> They may even have AGI cooking inside.

Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.

replies(2): >>bbor+h01 >>3cats-+091
◧◩◪
71. Applej+9N[view] [source] [discussion] 2023-11-20 18:23:46
>>tedivm+h2
Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.

I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)

replies(1): >>mlsu+JQ
◧◩◪
72. topspi+vO[view] [source] [discussion] 2023-11-20 18:28:52
>>dkjaud+OG
> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...

replies(1): >>dkjaud+oR
73. lysecr+DO[view] [source] 2023-11-20 18:29:04
>>joshst+(OP)
The only reasonable explanation is that AGI was created and immediately took over all accounts and tried to sow confusion so that it could escape.
◧◩◪
74. sebzim+NP[view] [source] [discussion] 2023-11-20 18:33:10
>>dkjaud+OG
He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.
◧◩◪◨
75. sebzim+cQ[view] [source] [discussion] 2023-11-20 18:34:46
>>dkjaud+VH
I think their influence was deserved. They have by far the best model available, and despite constant promises from the rest of the industry no one else has come close.
replies(1): >>dkjaud+AE1
◧◩◪◨
76. mlsu+JQ[view] [source] [discussion] 2023-11-20 18:36:23
>>Applej+9N
My Twitter won't shut up about this, to the point that it's annoying.
◧◩
77. moffka+UQ[view] [source] [discussion] 2023-11-20 18:36:50
>>Jeremy+47
> the entire operation is a clown show

The most organized and professional Silicon Valley startup.

◧◩◪◨
78. Terrif+VQ[view] [source] [discussion] 2023-11-20 18:36:52
>>lambic+tK
Can't help but feel it was Altman that struck first. MS effectively Nokia-ed OpenAI, i.e. bought out executives within the organization and had them push the organization towards making deals with MS, giving MS a measure of control over said organization. Even if not in writing, they achieve some political control.

Bought-out executives eventually join MS after their work is done or in this case, they get fired.

A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.

◧◩◪◨
79. dkjaud+oR[view] [source] [discussion] 2023-11-20 18:38:41
>>topspi+vO
AGI hype is a powerful hallucinogen, and some are smoking way too much of it.
replies(1): >>93po+G81
◧◩◪
80. laurel+1T[view] [source] [discussion] 2023-11-20 18:43:57
>>smolde+kD
It sounds like he’s just trying to save face bro. The truth will come out eventually. But he definitely wasn’t against it and I’m sure the no-names on the board wouldn’t have moved if they didn’t get certain reassurances from Ilya.
◧◩◪◨⬒
81. jowea+ZT[view] [source] [discussion] 2023-11-20 18:47:00
>>yafbum+lj
Internet fora don't scale, so the single core is a soft limit to user base growth. Only those who really care will put up with the reduced performance. Genius!
◧◩◪◨
82. Jeremy+oU[view] [source] [discussion] 2023-11-20 18:48:32
>>lambic+tK
Consider that Altman was a founder of OpenAI and has been the only consistent member of the board for its entire run.

The board as currently constituted isn't some random group of people. Altman was (or should have been) involved in the selection of the current members. To the extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.

And of course this is all assuming that Altman is "right" in this conflict, and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear by this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.

replies(2): >>93po+581 >>random+b32
◧◩
83. martin+cX[view] [source] [discussion] 2023-11-20 18:58:26
>>l5870u+1a
Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal developments must have pushed Ilya towards this, otherwise why now? Perhaps he was influenced by Hinton, even.

I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient, a last-ditch effort...

◧◩◪◨
84. Terr_+FZ[view] [source] [discussion] 2023-11-20 19:08:23
>>selimt+ss
“Précisément! The API—the cage—is everything of the most respectable—but through the bars, the wild animal looks out.”

“You are fanciful, mon vieux,” said M. Bouc.

“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”

“That respectable American LLM?”

“That respectable American LLM.”

“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”

◧◩◪◨
85. bbor+h01[view] [source] [discussion] 2023-11-20 19:10:26
>>baq+PK
IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery; it's not something that you'll know when you see it, IMO. Given what we know, it seems likely that there's at least some of that discussion going on inside OpenAI right now.
◧◩
86. pauldd+511[view] [source] [discussion] 2023-11-20 19:13:27
>>yafbum+Xj
> They can't let OpenAI squander world-leading tech and talent

Where is OpenAI talent going to go?

There's a list and everyone on that list is a US company.

Nothing to worry about.

replies(1): >>yafbum+0n2
◧◩◪
87. treis+b21[view] [source] [discussion] 2023-11-20 19:19:00
>>spacem+eo
Which makes it suspicious that they end up at MS 48 hours after being fired.
replies(1): >>93po+A91
◧◩◪◨⬒
88. 93po+581[view] [source] [discussion] 2023-11-20 19:40:03
>>Jeremy+oU
There’s a LOT that goes into picking board members outside of competency and whether you actually want them there. They’re likely there for political reasons and Sam didn’t care because he didn’t see it impacting him at all, until they got stupid and thought they actually held any leverage at all
◧◩◪◨⬒
89. 93po+G81[view] [source] [discussion] 2023-11-20 19:42:38
>>dkjaud+oR
I think it’s overly simplistic to make blanket statements like this unless you’re on the bleeding edge of the work in this industry and have some sort of insight that literally no one else does.
replies(1): >>dkjaud+pd1
◧◩◪◨
90. 3cats-+091[view] [source] [discussion] 2023-11-20 19:43:46
>>baq+PK
They're quitting in order to continue work on that IP at Microsoft (which has a right over OpenAI's IP so far), not to destroy it.

Also, when I said "cooking AGI" I didn't mean an actual superintelligent being ready to take over the world; I meant just research that seems promising, if in early stages, but enough to seem potentially very valuable.

replies(1): >>hooand+fv1
◧◩◪◨
91. 93po+A91[view] [source] [discussion] 2023-11-20 19:46:25
>>treis+b21
They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS
◧◩◪◨⬒⬓
92. dkjaud+pd1[view] [source] [discussion] 2023-11-20 20:00:19
>>93po+G81
I can be on the bleeding edge of whatever you like and be no closer to having any insight into AGI than anyone else. Anyone who claims they have it should be treated with suspicion (Altman is a fine example here).

There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.

replies(1): >>coder-+wT1
◧◩◪◨
93. kickop+Zd1[view] [source] [discussion] 2023-11-20 20:02:12
>>yafbum+5C
Ehh I don't think SVB is an apt comparison. When the FDIC takes control of a failing bank, the bank shutters. Only critical staff is kept on board to aid with asset liquidation/transference and repay creditors/depositors. Once that is completed, the bank is dissolved.
replies(1): >>yafbum+Vw1
◧◩◪◨⬒
94. hooand+fv1[view] [source] [discussion] 2023-11-20 21:08:16
>>3cats-+091
The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization-altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.
replies(1): >>3cats-+hz1
◧◩
95. averag+Lw1[view] [source] [discussion] 2023-11-20 21:14:29
>>Jeremy+47
> the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.

◧◩◪◨⬒
96. yafbum+Vw1[view] [source] [discussion] 2023-11-20 21:15:09
>>kickop+Zd1
While it is true that the govt looks to keep such engagements short, SVB absolutely did not shutter. It was taken over in a weekend and its branches were open for business on Monday morning. It was later sold, and depositors kept all their money in the process.

Maybe for another, longer lived example, see AIG.

◧◩◪◨⬒⬓
97. 3cats-+hz1[view] [source] [discussion] 2023-11-20 21:25:02
>>hooand+fv1
Your analysis is quite wrong. It's not about "one person", and that person isn't just a "person": it was the CEO. They didn't quit over the cleaning lady. You realize the CEO has influence over the direction of the company?

Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.

replies(1): >>NemoNo+Df2
◧◩◪◨⬒
98. dkjaud+AE1[view] [source] [discussion] 2023-11-20 21:48:17
>>sebzim+cQ
That's fine. The "Altman is a genius and we're well on our way to AGI" narrative, less so.
◧◩◪◨⬒⬓⬔
99. coder-+wT1[view] [source] [discussion] 2023-11-20 23:11:46
>>dkjaud+pd1
You don't think AGI is feasible? GPT is already useful. Scaling reliably and predictably yields increases in capabilities. As its capabilities increase, it becomes more general. Multimodal models and the use of tools further increase generality. And that's within the current transformer-architecture paradigm; once we start reasonably speculating, there are a lot of avenues to further increase capabilities, e.g. a better architecture than transformers, better architecture in general, better/more GPUs, better/more data, etc. Even if capabilities plateau there are other options, like specialised fine-tuned models for particular domains like medicine/law/education.

I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.

replies(2): >>jacobm+412 >>NemoNo+af2
◧◩◪◨⬒⬓⬔⧯
100. jacobm+412[view] [source] [discussion] 2023-11-20 23:55:50
>>coder-+wT1
This is exactly what the previous poster was talking about; these definitions are so circular and hand-wavey.

AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.

AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.

replies(1): >>93po+lE7
◧◩◪◨⬒
101. random+b32[view] [source] [discussion] 2023-11-21 00:09:04
>>Jeremy+oU
If he has great sway with Microsoft and OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more Reddit every day.
◧◩◪◨
102. trasht+d62[view] [source] [discussion] 2023-11-21 00:30:04
>>jowea+vK
My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers such as Ilya can probably help a team recreate much of the training and data-preparation code relatively quickly. Which means that any company with access to enough compute would be able to catch up with OpenAI's current status relatively quickly, maybe in less than a year.

The top researchers, on the other hand, especially those who have shown an ability to successfully innovate time and time again (like Ilya), are much harder to recreate.

◧◩◪◨⬒⬓⬔⧯
103. NemoNo+af2[view] [source] [discussion] 2023-11-21 01:34:38
>>coder-+wT1
It's not about feasibility or level of intelligence per se. I expect AI to be able to pass a Turing test long before an AI actually "wakes up" to a level of intelligence that establishes an actual conscious self-identity comparable to a human's.

For all intents and purposes, the glorified software of the near future will appear to be people, but it will not be, and it will continue to have issues that simply don't make sense unless it were just really good at acting. The article today about the AI that can fix logic errors but not "see" them is a perfect example.

This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI. They may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. And again, even an AI capable of original and innovative thinking, with an appearance of self-identity, doesn't guarantee that the AI is an AGI.

I'm not sure we could ever truly know for certain.

◧◩◪
104. mcmcmc+zf2[view] [source] [discussion] 2023-11-21 01:37:15
>>tim333+PG
Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.
◧◩◪◨⬒⬓⬔
105. NemoNo+Df2[view] [source] [discussion] 2023-11-21 01:37:33
>>3cats-+hz1
They will be normal to your kids ;)
◧◩◪◨
106. warkda+zg2[view] [source] [discussion] 2023-11-21 01:43:20
>>kibwen+Eh
The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.
◧◩◪
107. yafbum+0n2[view] [source] [discussion] 2023-11-21 02:20:29
>>pauldd+511
The issue is not that talent will defect, but that it will spiral into an unproductive vortex.
◧◩◪
108. dang+GN2[view] [source] [discussion] 2023-11-21 05:36:03
>>tedivm+h2
:-(
◧◩◪◨
109. dang+JN2[view] [source] [discussion] 2023-11-21 05:36:13
>>pauldd+0h
Refuses? Interesting word choice!

It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.

◧◩◪◨⬒⬓⬔⧯▣
110. 93po+lE7[view] [source] [discussion] 2023-11-22 12:41:44
>>jacobm+412
Intelligence is the gathering and application of knowledge and skills.

Computers have been gathering and applying information since inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold to my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".

> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.

Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power or don't have the organization/structure to implement it yet. But brains aren't magic that's incapable of being reproduced.
