zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. rcpt+a1[view] [source] 2023-11-18 02:50:39
>>apsec1+(OP)
Wait, this is just a corporate turf war? That's boring, I already have those at work
◧◩
2. reduce+s1[view] [source] 2023-11-18 02:52:52
>>rcpt+a1
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩◪
3. gnulin+74[view] [source] 2023-11-18 03:12:54
>>reduce+s1
Haha, yeah, no, I don't believe this. They're nowhere near AGI, even if it's possible at all to get there with the current tech we have, which I'm not convinced of. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.
◧◩◪◨
4. aidama+07[view] [source] 2023-11-18 03:31:57
>>gnulin+74
GPT 4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way as humans. If you provide the steps to reason through any concept, it is able to understand it at human-level capability.

GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.

◧◩◪◨⬒
5. SkyPun+I7[view] [source] 2023-11-18 03:36:32
>>aidama+07
The only thing GPT 4 is missing is the ability to recognize it needs to ask more questions before it jumps into a problem.

When you compare it to an entry-level data entry role, it's absolutely AGI. You loosely tell it what it needs to do, step-by-step, and it does it.

◧◩◪◨⬒⬓
6. dekhn+lh[view] [source] 2023-11-18 04:48:54
>>SkyPun+I7
This sort of property ("loosely tell it what it needs to do, step-by-step, and it does it.") is definitely very exciting and remarkable, but I don't think it necessarily constitutes AGI. I would say instead it's more an emergent property of language models trained on extremely large corpora that contain many examples that, in embedding space, aren't that far from what you're asking it to do.

I don't think LLMs have really demonstrated anything interesting around generalized intelligence, which, although a fairly abstract concept, can be thought of as being able to solve truly novel problems outside their training corpora. I suspect there still needs to be a fair amount of work improving the model design itself, the training data, and even the mental models of ML researchers before we have systems that can truly reason in a way that demonstrates generalized intelligence.
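To make the "not that far in embedding space" point above concrete, here is a rough sketch of ranking example tasks by similarity to a prompt. It assumes the sentence-transformers package and an off-the-shelf embedding model (all-MiniLM-L6-v2), both chosen purely for illustration; it says nothing about how GPT-4 itself is built.

    # Embed a prompt and a few candidate "training-style" examples, then rank the
    # examples by cosine similarity. Higher similarity = "closer" in embedding space.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    prompt = "Extract the invoice number and total from this email and put them in a spreadsheet row."
    examples = [
        "Parse the order ID and amount from the message and write them out as CSV fields.",
        "Summarize the plot of a nineteenth-century novel in two sentences.",
        "Given a receipt, pull out the vendor name and the total cost.",
    ]

    prompt_emb = model.encode(prompt, convert_to_tensor=True)
    example_embs = model.encode(examples, convert_to_tensor=True)

    # cos_sim returns a 1 x len(examples) tensor of similarity scores.
    scores = util.cos_sim(prompt_emb, example_embs)[0]
    for example, score in sorted(zip(examples, scores.tolist()), key=lambda p: -p[1]):
        print(f"{score:.3f}  {example}")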

[go to top]