zlacker

[parent] [thread] 45 comments
1. reduce+(OP)[view] [source] 2023-11-18 02:52:52
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

replies(8): >>morale+A >>cactus+r1 >>gnulin+F2 >>drcode+O2 >>swatco+63 >>1lette+Y3 >>LZ_Kha+J4 >>alisto+j6
2. morale+A[view] [source] 2023-11-18 02:56:38
>>reduce+(OP)
Bull. Shit.

OpenAI and its people are there to maximize shareholder value.

This is the same company that went from "non-profit" to "jk, lol, we are actually for-profit now". I still think that move wasn't even legal, but rules for thee, not for me.

They ousted sama because it was bad for business. Why? We may never know, or we may know next week, who knows? Literally.

replies(2): >>reduce+A1 >>sainez+I3
3. cactus+r1[view] [source] 2023-11-18 03:02:49
>>reduce+(OP)
They aren’t anywhere close to AGI. It’s a joke at this point.
replies(1): >>mcmcmc+X1
4. reduce+A1[view] [source] [discussion] 2023-11-18 03:04:09
>>morale+A
> OpenAI and its people are there to maximize shareholder value

Clearly not: Sama has no equity, and a board of four people with little, if any, equity just unilaterally decided to upend the status quo and an assured money printer, to the bewilderment of Microsoft, their $2.5T, 49% owner.

replies(1): >>morale+B4
5. mcmcmc+X1[view] [source] [discussion] 2023-11-18 03:07:27
>>cactus+r1
An ego battle, really.
6. gnulin+F2[view] [source] 2023-11-18 03:12:54
>>reduce+(OP)
Haha, yeah, no, I don't believe this. They're nowhere near AGI, even if it's possible at all to get there with the current tech we have, which I'm not convinced of. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.
replies(4): >>aidama+y5 >>cm2012+X5 >>spacem+Y8 >>skwirl+89
7. drcode+O2[view] [source] 2023-11-18 03:13:37
>>reduce+(OP)
Let's not put the cart before the horse

Even if they say this was for safety reasons, let's not blindly believe them. I'm on the pro-safety side, but I'm gonna wait till the dust settles before I come to any conclusions on this matter.

replies(1): >>Exoris+l7
8. swatco+63[view] [source] 2023-11-18 03:15:10
>>reduce+(OP)
From what's come out so far, it reads more like he thinks they're pushing too hard too fast on commercialization, not AGI. They're chasing profit opportunities at market instead of fulfilling the board's non-profit mission to build safe AGI.
9. sainez+I3[view] [source] [discussion] 2023-11-18 03:18:58
>>morale+A
It seems you are conflating OpenAI the non-profit with OpenAI the LLC: https://openai.com/our-structure
replies(1): >>morale+l5
10. 1lette+Y3[view] [source] 2023-11-18 03:21:03
>>reduce+(OP)
This is hand-wringing moral panic by nontechnical people who fear the unknown and don't understand AI/DL/ML/LLMs. It's shriekingly obvious no one sane will intentionally build "SkyNet", nor can they for decades.
11. morale+B4[view] [source] [discussion] 2023-11-18 03:24:49
>>reduce+A1
>as Sama has no equity

Yeah and he got sacked.

12. LZ_Kha+J4[view] [source] 2023-11-18 03:25:19
>>reduce+(OP)
I trust Ilya a hell of a lot more than Altman. Ilya is a scientist through and through, not a grifter.
13. morale+l5[view] [source] [discussion] 2023-11-18 03:29:59
>>sainez+I3
No, that's the whole point, "AI for the benefit of humanity" and whatnot turned out to be a marketing strategy (if you could call it that).
replies(1): >>lucubr+oa
14. aidama+y5[view] [source] [discussion] 2023-11-18 03:31:57
>>gnulin+F2
GPT-4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way humans do. If you provide the steps to reason through any concept, it is able to understand at human capability.

GPT-4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT-4 is human-level intelligence.

replies(6): >>SkyPun+g6 >>cscurm+T6 >>static+M7 >>haolez+58 >>lossol+f9 >>morsec+n9
15. cm2012+X5[view] [source] [discussion] 2023-11-18 03:34:13
>>gnulin+F2
It's like a religion for these people.
16. SkyPun+g6[view] [source] [discussion] 2023-11-18 03:36:32
>>aidama+y5
The only thing GPT 4 is missing is the ability to recognize it needs to ask more questions before it jumps into a problem.

When you compare it to an entry level data entry role, it's absolutely AGI. You loosely tell it what it needs to do, step-by-step, and it does it.

replies(1): >>dekhn+Tf
17. alisto+j6[view] [source] 2023-11-18 03:36:58
>>reduce+(OP)
ChatGPT blew the doors open on the AI arms race. Without Sam leading the charge, we wouldn't have an AI boom. We wouldn't have Google scrambling to launch catch-up features. We wouldn't have startups raising hundreds of millions, people talking about a new industrial revolution, Llama (2), all the models on Hugging Face, or any of the other crazy stuff that has come about in the past year.

Was the original launch of ChatGPT "safe?" Of course not, but it moved the industry forward immensely.

Swisher's follow-up is even more eyebrow-raising: "The developer day and how the store was introduced was an inflection moment of Altman pushing too far, too fast. My bet: He'll have a new company up by Monday."

What exactly from the demo day was "pushing too far"? We got a DALL-E API, a larger context window, and some cool stuff to fine-tune GPT. I don't really see anything there that is too crazy... I also don't get the sense that Sam was cavalier about AI safety. That's why I am so surprised that the apparent reason for his ousting appears to be a boring old political turf war.

My sense is that there is either more to the story, or Sam is absolutely about to have his Steve Jobs moment. He's also likely got a large percentage of the OpenAI researchers on his side.

replies(1): >>m_ke+y7
18. cscurm+T6[view] [source] [discussion] 2023-11-18 03:42:16
>>aidama+y5
Sorry. Robust research says no. Remember, people thought Eliza was AGI too.

https://arxiv.org/abs/2308.03762

If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.

replies(2): >>iamnot+L9 >>Camper+Mb
19. Exoris+l7[view] [source] [discussion] 2023-11-18 03:46:19
>>drcode+O2
Let's not confuse ChatGPT "safety" with the meaning of the word in most contexts. This is about safely keeping major media of all kinds aligned with the messaging of the neoliberal political party of your locale.
20. m_ke+y7[view] [source] [discussion] 2023-11-18 03:47:49
>>alisto+j6
ChatGPT was definitely not some visionary project led by Sam. They had a great LLM in GPT-3 that was hard to use because it wasn't instruction tuned, so the research team did InstructGPT and then took it even further and added RLHF to turn it into a proper conversational bot. The UI was a hacky interface on top of it that definitely got way more popular than they expected.
replies(1): >>alisto+n8
21. static+M7[view] [source] [discussion] 2023-11-18 03:49:12
>>aidama+y5
Fascinating. What do you make of the fact that GPT-4 says you have no clue what you are talking about?
replies(1): >>postal+Z8
22. haolez+58[view] [source] [discussion] 2023-11-18 03:51:43
>>aidama+y5
I kind of agree, but at the same time we can't be sure of what's going on behind the scenes. It seems that GPT-4 is a combination of several huge models with some logic to route the requests to the most apt models. Maybe an AGI would make more sense as a single, more cohesive structure?

Also, the fact that it can't incorporate knowledge at the same time as it interacts with us kind of limits the idea of an AGI.

But regardless, it's absurdly impressive what it can do today.
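
The "several huge models plus routing logic" idea described above is essentially a mixture-of-experts architecture. A minimal sketch of the gating mechanism in Python; the expert set, sizes, and gate here are illustrative assumptions, not anything confirmed about GPT-4:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    class MixtureOfExperts:
        # Route each request to the most apt experts via a learned gate.
        def __init__(self, experts, gate_weights):
            self.experts = experts    # callables: hidden_state -> output vector
            self.gate = gate_weights  # (hidden_dim, n_experts) gating matrix

        def forward(self, hidden_state, top_k=2):
            scores = softmax(hidden_state @ self.gate)  # affinity per expert
            top = np.argsort(scores)[-top_k:]           # keep only the top-k
            # output is the gate-weighted sum of the selected experts' outputs
            return sum(scores[i] * self.experts[i](hidden_state) for i in top)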

23. alisto+n8[view] [source] [discussion] 2023-11-18 03:54:21
>>m_ke+y7
I don't know if it was led by Sam, and don't dispute that it may have been "hacky," but there is no denying it was a visionary project.

Yes, other companies had similar models. I know Google, in particular, already had similar LLMs, but explicitly chose not to incorporate them into its products. Sam / OpenAI had the gumption to take the state of the art and package it in a way that the masses could interact with.

In fact, thinking about it more, the parallels with Steve Jobs are uncanny. Google is Xerox. ChatGPT is the graphical OS. Sam is...

24. spacem+Y8[view] [source] [discussion] 2023-11-18 03:58:26
>>gnulin+F2
We barely understand the human brain, but sure we’re super close to AGI because we made chat bots that don’t completely suck anymore. It’s such hubris. Are the tools cool? Undoubtedly. But come down to earth for a second. People have lost all objectivity.
replies(2): >>MVisse+Ad >>totall+he
25. postal+Z8[view] [source] [discussion] 2023-11-18 03:58:32
>>static+M7
How does it feel, knowing you are arguing against a GPT-4 bot?
26. skwirl+89[view] [source] [discussion] 2023-11-18 03:59:47
>>gnulin+F2
>I don't believe professionals who work in biggest AI labs are spooked by GPT.

Then you haven't been paying any attention to them.

27. lossol+f9[view] [source] [discussion] 2023-11-18 04:00:56
>>aidama+y5
Well, if it's so smart, then maybe someday it will finally learn to count.

https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...

28. morsec+n9[view] [source] [discussion] 2023-11-18 04:01:27
>>aidama+y5
These models can't even form new memories beyond the length of their context windows. It's impressive but it is clearly not AGI.
replies(1): >>MVisse+Yd
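
The limitation described here is concrete: a chat model only "remembers" what gets re-sent inside a fixed token budget, and the usual workaround is to silently drop the oldest turns. A rough sketch, with token counts crudely approximated by word counts and an assumed 8192-token budget:

    def fit_to_context(messages, max_tokens=8192):
        # Keep the newest messages that fit the window; anything older is
        # simply gone: the model has no memory of it.
        kept, used = [], 0
        for msg in reversed(messages):          # walk from newest to oldest
            cost = len(msg["content"].split())  # crude token estimate
            if used + cost > max_tokens:
                break
            kept.append(msg)
            used += cost
        return list(reversed(kept))             # restore chronological order
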
29. iamnot+L9[view] [source] [discussion] 2023-11-18 04:05:38
>>cscurm+T6
It’s not AGI. But I’m not convinced we need a single model that can reason to make super-powerful, general-purpose AI. If you can have a model detect where it can’t reason and pass off tasks appropriately to better methods or domain-specific models, you can get very powerful results. OpenAI is already on the path to doing this with GPT.
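
A minimal sketch of the handoff pattern this comment describes: detect a task class the model is unreliable at and route it to an exact method instead. The matcher and tools below are stand-ins for illustration, not any real OpenAI machinery:

    def dispatch(task, llm, tools):
        # Send tasks the LLM is unreliable at to exact, domain-specific methods.
        for looks_like, run in tools:
            if looks_like(task):  # e.g. pure arithmetic goes to a calculator
                return run(task)
        return llm(task)          # everything else falls back to the LLM

    tools = [(
        lambda t: all(c in "0123456789+-*/(). " for c in t),  # arithmetic?
        lambda t: eval(t),  # toy calculator; never eval untrusted input
    )]
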
30. lucubr+oa[view] [source] [discussion] 2023-11-18 04:09:44
>>morale+l5
That is what Ilya Sutskever and the board of the non-profit have effectively accused Sam Altman of in firing him, yes.
replies(1): >>morale+Fb
31. morale+Fb[view] [source] [discussion] 2023-11-18 04:19:56
>>lucubr+oa
???

Source?

replies(1): >>lucubr+ee
32. Camper+Mb[view] [source] [discussion] 2023-11-18 04:21:19
>>cscurm+T6
As if most humans would do any better on those exercises.

This thing is two years old. Be patient.

replies(2): >>cscurm+Xp >>smolde+CC
33. MVisse+Ad[view] [source] [discussion] 2023-11-18 04:34:02
>>spacem+Y8
Objectively speaking, we’re talking exponential growth in both compute and capabilities year over year.

Do you have any data that shows that we’ll plateau any time soon?

Because if this trend continues, we’ll have superhuman levels of compute within 5 years.

replies(1): >>spacem+Bl5
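
For what it's worth, the arithmetic behind claims like this is plain compounding. The comment gives no growth rate, so the six-month doubling below is an assumed, oft-cited figure for frontier training compute, not data from the thread:

    # Compounding compute growth, assuming one doubling every 6 months
    doublings_per_year = 2
    years = 5
    growth = 2 ** (doublings_per_year * years)
    print(f"{growth}x")  # 1024x over 5 years, if (and only if) the trend holds
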
34. MVisse+Yd[view] [source] [discussion] 2023-11-18 04:36:36
>>morsec+n9
Neither can you without your short-term memory system. Or your long-term memory system in your hippocampus.

People who have lost those abilities still have a human level of intelligence.

replies(1): >>morsec+yi
35. lucubr+ee[view] [source] [discussion] 2023-11-18 04:38:27
>>morale+Fb
Kara's reporting on motive:

https://twitter.com/karaswisher/status/1725678074333635028?t...

Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...

Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...

Ilya felt that Sam was taking the company too far in the direction of profit-seeking, more than was necessary just to get the resources to build AGI. Every bit of selling out puts more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small, powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him; I suspect Dev Day is the source of the board's accusation that Sam did not share with complete candour.

Ilya may also care more about AGI safety specifically than Sam does. That's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve that mission. Expelling him and re-affirming their commitment to the OpenAI charter is effectively accusing him of selling out.

For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

replies(1): >>morale+xf
36. totall+he[view] [source] [discussion] 2023-11-18 04:38:29
>>spacem+Y8
I've been watching this whole hype cycle completely horrified from the sidelines. Those early debates right here on HN with people genuinely worried about an LLM developing consciousness and taking control of the world. Senior SWEs fearing for their jobs. And now we're just throwing the term AGI around like it's imminent.
37. morale+xf[view] [source] [discussion] 2023-11-18 04:46:52
>>lucubr+ee
I saw those tweets as well. Those are rumours at this point.

The only thing that is real is the PR from OpenAI, and the "candid" line is quite ominous.

sama brought the company to where it is today; you don't kick someone out that way just because of misaligned interests.

I'm on the side that thinks sama screwed up badly, putting OpenAI in a (big?) pickle, and that breaking ties with him asap is how they're trying to cover their ass.

replies(1): >>lucubr+Oh
38. dekhn+Tf[view] [source] [discussion] 2023-11-18 04:48:54
>>SkyPun+g6
This sort of property ("loosely tell it what it needs to do, step-by-step, and it does it.") is definitely very exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say instead it's more an emergent property of language models trained on extremely large corpora that contain many examples that, in embedding space, aren't that far from what you're asking it to do.

I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as being able to solve truly novel problems outside their training corpora. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental model of ML researchers before we have systems that can truly reason in a way that demonstrates generalized intelligence.
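
The "not that far in embedding space" intuition can be made concrete with cosine similarity: a prompt whose embedding sits close to many training examples is, on this view, being pattern-matched rather than reasoned about. A toy illustration with made-up vectors:

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    prompt = np.array([0.9, 0.1, 0.3])
    train = [np.array([0.8, 0.2, 0.3]),   # near-duplicate of the prompt
             np.array([0.1, 0.9, 0.7])]   # genuinely unrelated example
    print([round(cosine(prompt, t), 2) for t in train])  # [0.99, 0.36]
    # a high first score suggests the "novel" prompt has familiar neighbors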

39. lucubr+Oh[view] [source] [discussion] 2023-11-18 05:02:05
>>morale+xf
They're not rumours; they are reporting from the most well-known and credible tech journalist in the world. The whole point of journalism and her multi-decade journalistic career is that when she reports something like that, we can trust that she has verified, with sources who would have actual knowledge of events, that it is the case. We should always consider the possibility that her sources were wrong, but that's incredibly unlikely now that Ilya gave an all-hands meeting (that I linked you to) which confirmed a majority of this reporting.
40. morsec+yi[view] [source] [discussion] 2023-11-18 05:07:30
>>MVisse+Yd
Sure, people with aphasia lose the ability to form speech at all, but if ChatGPT responded unintelligibly every time, you wouldn't characterize it as intelligent.
41. cscurm+Xp[view] [source] [discussion] 2023-11-18 06:02:40
>>Camper+Mb
This comparison again lol.

> As if most humans would do any better on those exercises.

That's not the point. If you claim you have a machine that can fly, you can't get around proving it by saying "mOsT hUmAns cAnt fly" and concluding that this machine not flying is irrelevant.

This thing either objectively reasons or it doesn't. It is irrelevant how well humans do on those tests.

> This thing is two years old. Be patient.

Nobody is writing off the future. We are debating the current technology. AI has been around for 70 years; just open any history book on AI.

At various points since 1950, the gullible masses have claimed AGI.

replies(1): >>Camper+cE
42. smolde+CC[view] [source] [discussion] 2023-11-18 08:05:53
>>Camper+Mb
Transformer-based LLMs are almost a half-decade old at this point, and GPT-4 is the least efficient model of its kind ever produced (that I am aware of).

OpenAI's performance is not and has never been proportional to the size of their models. Their big advantage is scale, which lets them ship unrealistically large models by leveraging subsidized cloud costs. They win by playing a more destructive and wasteful game, and their competitors can beat them by shipping a cheaper competitive alternative.

What exactly are we holding out for, at this point? A miracle?

43. Camper+cE[view] [source] [discussion] 2023-11-18 08:20:21
>>cscurm+Xp
> At various points since 1950, the gullible masses have claimed AGI.

Who's claiming it now? All I see is a paper slagging GPT-4 for struggling in tests that no one ever claimed it could pass.

In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.

(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)

replies(1): >>cscurm+n12
44. cscurm+n12[view] [source] [discussion] 2023-11-18 17:44:39
>>Camper+cE
The guy I replied to is claiming AGI:

>>aidama+y5

"GPT-4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT-4 is human-level intelligence."

replies(1): >>Camper+I72
45. Camper+I72[view] [source] [discussion] 2023-11-18 18:14:39
>>cscurm+n12
Fair enough, that seems premature. Transformers are clearly already exceeding human intelligence in some specific ways, going back to AlphaGo. It's almost as clear that related techniques are capable of approaching AGI in the 'G' (general) sense. What's needed now is refinement rather than revolution.

Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
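
The "emit code" move is mechanically simple: ask the model for a program instead of an answer, then run the program. A sketch under the assumption of some completion callable llm (a placeholder, not a specific API); letter-counting is the classic case (see the counting complaint upthread) where generated code succeeds while direct generation fails:

    def solve_by_code(llm, task):
        # Ask the model to write a solver, then execute what it wrote.
        source = llm(f"Write a Python function solve() that returns: {task}")
        namespace = {}
        exec(source, namespace)  # assumes a trusted sandbox; model output is untrusted
        return namespace["solve"]()

    # e.g. solve_by_code(llm, "the number of vowels in 'parrot'")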

46. spacem+Bl5[view] [source] [discussion] 2023-11-19 17:23:50
>>MVisse+Ad
I’m pretty sure you have no data showing we’re heading to AGI, because “compute and capabilities” is about as nebulous as it gets. You can’t just throw CPU cycles at a problem you barely understand to begin with and strong-arm your way to a solution.