zlacker

[parent] [thread] 7 comments
1. malwar+(OP)[view] [source] 2023-11-18 04:13:42
Ilya Sutskever really seems to think AGI's birth is impending and likely to be delivered at OpenAI.

From that perspective it makes sense to keep capital at arm's length.

Is there anything to his certainty? It doesn't feel like it's anywhere close.

replies(4): >>lucubr+81 >>kcb+92 >>Eji170+Q2 >>bart_s+kg
2. lucubr+81[view] [source] 2023-11-18 04:22:18
>>malwar+(OP)
We can't see inside, so we don't know. Their Chief Scientist, probably the best living and active ML scientist, has better visibility into that question than we do, but like any scientist he could easily fall into the trap of believing too strongly in his own theories and work. That said... in a dispute between a Silicon Valley crypto/venture-capitalist guy and the Chief Scientist about anything technical, I'm going to give a lot more weight to Ilya than to Sam.
replies(1): >>manyos+X3
3. kcb+92[view] [source] 2023-11-18 04:29:54
>>malwar+(OP)
What I don't understand though is, doesn't that birth require an extreme amount of capital?
4. Eji170+Q2[view] [source] 2023-11-18 04:34:20
>>malwar+(OP)
My money, based on my hobby knowledge and talking to a few people in the field, is on "no fucking way".

Maybe he believes his own hype or is like that guy who thought ChatGPT was alive.

Maybe he's legitimately worried and has good reason to believe he's on the corporate Manhattan Project.

Honestly though... if they were even that close, I would find it super hard to believe that the DoD wouldn't shut down EVERYTHING public-facing and take it over from there. If someone had just stumbled onto nuclear fission, it wouldn't have sat in the public sphere. It'd still be a top-secret thing (at least certain details).

replies(2): >>manyos+L3 >>lucubr+U7
5. manyos+L3[view] [source] [discussion] 2023-11-18 04:40:37
>>Eji170+Q2
I think there is good reason for you to be skeptical, and I too am skeptical. But if there were a top-five list of people in the world able to really gauge the state of the art in AI and how advanced it is behind closed doors, Ilya Sutskever would be on it.
6. manyos+X3[view] [source] [discussion] 2023-11-18 04:41:54
>>lucubr+81
Well said. I work in AI as an engineer on LLMs, and I'm very skeptical in general that we're anywhere close to AGI, but I would listen to what Ilya Sutskever had to say with eager ears.
7. lucubr+U7[view] [source] [discussion] 2023-11-18 05:09:03
>>Eji170+Q2
One of the board members who was closely aligned with Ilya in this whole thing was Helen Toner, who's a NatSec person. Frankly, this action by the board could be the US government making its preferences felt with a white glove, rather than causing global panic and an arms race by pulling a 1939 Germany and shutting down all public research and nationalising the companies and scientists involved. If they can achieve control without the giant commotion, they would obviously try to do that.
8. bart_s+kg[view] [source] 2023-11-18 06:13:47
>>malwar+(OP)
It may not feel close, but the rate of acceleration may mean that by the time it “feels” close it’s already here. It was barely a year ago that ChatGPT was released. Compare GPT-4 with the state of the art 2 years prior to its release, and the rate of progress is quite remarkable. I also think he has a better idea of what is coming down the pipeline than the average person on the outside of OpenAI does.