zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. rcpt+a1[view] [source] 2023-11-18 02:50:39
>>apsec1+(OP)
Wait, this is just a corporate turf war? That's boring, I already have those at work
◧◩
2. reduce+s1[view] [source] 2023-11-18 02:52:52
>>rcpt+a1
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩◪
3. gnulin+74[view] [source] 2023-11-18 03:12:54
>>reduce+s1
Haha, yeah, no, I don't believe this. They're nowhere near AGI, even assuming it's possible at all to get there with the current tech, which I'm not convinced of. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.
◧◩◪◨
4. aidama+07[view] [source] 2023-11-18 03:31:57
>>gnulin+74
GPT 4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way as humans. If you provide the steps to reason through any concept, it is able to understand at human capability.

GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.

◧◩◪◨⬒
5. haolez+x9[view] [source] 2023-11-18 03:51:43
>>aidama+07
I kind of agree, but at the same time we can't be sure of what's going on behind the scenes. It seems that GPT-4 is a combination of several huge models with some logic to route requests to the most apt model. Maybe an AGI would make more sense as a single, more cohesive structure?

Also, the fact that it can't incorporate knowledge at the same time as it interacts with us kind of limits the idea of an AGI.

But regardless, it's absurdly impressive what it can do today.
