zlacker

[parent] [thread] 19 comments
1. refulg+(OP)[view] [source] 2024-05-17 23:29:49
Adding a disclaimer for people unaware of the context (I feel the same as you):

OpenAI made a large commitment to superalignment in the not-so-distant past, I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.

Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.

replies(2): >>jasonf+K1 >>N0b8ez+52
2. jasonf+K1[view] [source] 2024-05-17 23:45:35
>>refulg+(OP)
To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.

What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.

replies(2): >>refulg+T1 >>NomDeP+S3
3. refulg+T1[view] [source] [discussion] 2024-05-17 23:47:30
>>jasonf+K1
honestly, I hadn't heard of him until 24-48 hours ago :x (he's also the new superalignment lead; I can't remember if I heard that first or the podcast stuff first. Dwarkesh Patel podcast, for anyone curious. Only saw a clip of it)
4. N0b8ez+52[view] [source] 2024-05-17 23:48:50
>>refulg+(OP)
>To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by the end of the decade.

Link? Is the ~2 year timeline a common estimate in the field?

replies(4): >>dboreh+n2 >>ctoth+E2 >>Curiou+r3 >>heavys+Y5
5. dboreh+n2[view] [source] [discussion] 2024-05-17 23:51:51
>>N0b8ez+52
It's the "fusion in 20 years" of AI?
replies(1): >>dinvla+In
6. ctoth+E2[view] [source] [discussion] 2024-05-17 23:54:14
>>N0b8ez+52
https://www.dwarkeshpatel.com/p/john-schulman
replies(1): >>N0b8ez+45
7. Curiou+r3[view] [source] [discussion] 2024-05-18 00:01:52
>>N0b8ez+52
They can't even clearly define a test of "AGI"; I seriously doubt they're going to reach it in two years. Alternatively, they could define a fairly trivial test and claim to have reached it last year.
replies(1): >>jfenge+oc
8. NomDeP+S3[view] [source] [discussion] 2024-05-18 00:07:12
>>jasonf+K1
As a child I used to watch a TV programme called Tomorrow's World. It predicted these very same things on similar timeframes.

That programme aired in the 1980s. Vested promises aside, is there much to indicate it's close at all? I don't see any real indication that it's likely.

replies(2): >>zdragn+67 >>Davidz+V8
9. N0b8ez+45[view] [source] [discussion] 2024-05-18 00:20:33
>>ctoth+E2
Is the quote you're thinking of the one at 19:11?

> I don't think it's going to happen next year, it's still useful to have the conversation and maybe it's like two or three years instead.

This doesn't seem like a super definite prediction. The "two or three" might have just been a hypothetical.

replies(1): >>HarHar+VV
10. heavys+Y5[view] [source] [discussion] 2024-05-18 00:28:55
>>N0b8ez+52
We can't even get self-driving down in 2 years, we're nowhere near reaching general AI.

AI experts who aren't riding the hype train and getting high off of its fumes acknowledge that true AI is something we'll likely not see in our lifetimes.

replies(2): >>N0b8ez+27 >>daniel+PG
11. N0b8ez+27[view] [source] [discussion] 2024-05-18 00:40:21
>>heavys+Y5
Can you give some examples of experts saying we won't see it in our lifetime?
12. zdragn+67[view] [source] [discussion] 2024-05-18 00:41:07
>>NomDeP+S3
In the early 1980s we were just coming out of the first AI winter and everyone was getting optimistic again.

I suspect there will be at least continued commercial use of the current tech, though I still think this crop is another dead end in the hunt for AGI.

replies(1): >>NomDeP+RB
13. Davidz+V8[view] [source] [discussion] 2024-05-18 01:03:48
>>NomDeP+S3
are we living in the same world?????
replies(2): >>NomDeP+EB >>refulg+yP1
14. jfenge+oc[view] [source] [discussion] 2024-05-18 01:48:30
>>Curiou+r3
I feel like we'll know it when we see it. Or at least, significant changes will happen even if people still claim it isn't really The Thing.

Personally I'm not seeing that the path we're on leads to whatever that is, either. But I think/hope I'll know if I'm wrong when it's in front of me.

15. dinvla+In[view] [source] [discussion] 2024-05-18 05:21:25
>>dboreh+n2
Just like Tesla "FSD" :-)
16. NomDeP+EB[view] [source] [discussion] 2024-05-18 08:51:04
>>Davidz+V8
I would assume so. I've spent some time looking into AI for software development and general use, and I'm slightly impressed but at the same time don't really get the hype.

At present it amounts to better, quicker search in the area I specialise in.

It's not currently even close to being a 2x multiplier for me; it's possibly even a negative impact (probably not, but I'm still exploring). That feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy inefficient and therefore cost heavy, which I feel will likely cripple a lot of use cases.

What's your take?

17. NomDeP+RB[view] [source] [discussion] 2024-05-18 08:55:13
>>zdragn+67
I'd agree on the commercial use element. It will definitely find areas where it can be applied. But currently its general application by much of the user base feels more like early Facebook apps, or a subjectively better Lotus Notes, than an actual leap forward of any sort.
18. daniel+PG[view] [source] [discussion] 2024-05-18 10:20:07
>>heavys+Y5
Is true AI the new true Scotsman?
19. HarHar+VV[view] [source] [discussion] 2024-05-18 13:08:24
>>N0b8ez+45
Right at the end of the interview, Schulman says he expects AGI to be able to replace him within 5 years. He seemed a bit sheepish saying it, so it's hard to tell whether he really believed it or was just saying what he'd been told to say (I can't believe Altman allows employees to be interviewed like this without telling them what they can and can't say).
20. refulg+yP1[view] [source] [discussion] 2024-05-18 21:15:40
>>Davidz+V8
Yes

Incredulous reactions don't aid whatever you intend to communicate. There's a reason everyone has heard about AI over the last 12 months - it's not made up, nor a monoculture. It would be very odd to expect commercial use to be discontinued without a black swan event.
