zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. rcpt+a1[view] [source] 2023-11-18 02:50:39
>>apsec1+(OP)
Wait, this is just a corporate turf war? That's boring, I already have those at work
◧◩
2. reduce+s1[view] [source] 2023-11-18 02:52:52
>>rcpt+a1
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩◪
3. gnulin+74[view] [source] 2023-11-18 03:12:54
>>reduce+s1
Haha yeah no, I don't believe this. They're nowhere near AGI, even if it's possible at all to get there with the current tech we have, which I'm not convinced of. I don't believe professionals who work in the biggest AI labs are spooked by GPT. I need more evidence to believe something like that, sorry. It sounds a lot more like Sam Altman lied to the board.
◧◩◪◨
4. aidama+07[view] [source] 2023-11-18 03:31:57
>>gnulin+74
GPT 4 is not remotely unconvincing. It is clearly more intelligent than the average human, and is able to reason in the exact same way as humans. If you provide the steps to reason through any concept, it is able to understand at human capability.

GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence.

◧◩◪◨⬒
5. cscurm+l8[view] [source] 2023-11-18 03:42:16
>>aidama+07
Sorry. Robust research says no. Remember, people thought Eliza was AGI too.

https://arxiv.org/abs/2308.03762

If it were really AGI, there wouldn't even be ambiguity and room for comments like mine.

◧◩◪◨⬒⬓
6. Camper+ed[view] [source] 2023-11-18 04:21:19
>>cscurm+l8
As if most humans would do any better on those exercises.

This thing is two years old. Be patient.

◧◩◪◨⬒⬓⬔
7. cscurm+pr[view] [source] 2023-11-18 06:02:40
>>Camper+ed
This comparison again lol.

> As if most humans would do any better on those exercises.

That's not the point. If you claim you have a machine that can fly, you can't get around proving that by saying "mOsT hUmAns cAnt fly", and therefore this machine not flying is irrelevant.

This thing either objectively reasons or it doesn't. It is irrelevant how well humans do on those tests.

> This thing is two years old. Be patient.

Nobody is cutting off the future. We are debating the current technology. AI has been around for 70 years. Just open any history book on AI.

At various points from 1950, the gullible mass claimed AGI.

◧◩◪◨⬒⬓⬔⧯
8. Camper+EF[view] [source] 2023-11-18 08:20:21
>>cscurm+pr
> At various points from 1950, the gullible mass claimed AGI.

Who's claiming it now? All I see is a paper slagging GPT4 for struggling in tests that no one ever claimed it could pass.

In any case, if it were possible to bet $1000 that 90%+ of those tests will be passed within 10 years, I'd be up for that.

(I guess I should read the paper more carefully first, though, to make sure he's not feeding it unsolved Hilbert problems or some other crap that smart humans wouldn't be able to deal with. My experience with these sweeping pronouncements is that they're all about moving the goalposts as far as necessary to prove that nothing interesting is happening.)

◧◩◪◨⬒⬓⬔⧯▣
9. cscurm+P22[view] [source] 2023-11-18 17:44:39
>>Camper+EF
The guy I replied to is claiming AGI:

>>38314733

"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence."

◧◩◪◨⬒⬓⬔⧯▣▦
10. Camper+a92[view] [source] 2023-11-18 18:14:39
>>cscurm+P22
Fair enough, that seems premature. Transformers are clearly already exceeding human intelligence in some specific ways, going back to AlphaGo. It's almost as clear that related techniques are capable of approaching AGI in the 'G' (general) sense. What's needed now is refinement rather than revolution.

Being able to emit code to solve problems it couldn't otherwise handle is a huge deal, maybe an adequate definition of intelligence in itself. Parrots don't write Python.
