zlacker

[parent] [thread] 14 comments
1. ignora+(OP)[view] [source] 2023-11-20 05:59:22
> Did he really fire Sam over "AI safety" concerns? How is that remotely rational.

Not rational iff (unlike Sutskever, Hinton, and Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.

(like LeCun, I am not a doomer; but I am also no Hinton, to know any better)

replies(3): >>mcpack+z1 >>sgregn+r3 >>esjeon+14
2. mcpack+z1[view] [source] 2023-11-20 06:09:20
>>ignora+(OP)
> "[Artificial General Intelligence] in a very narrow domain."

Which is it?

replies(2): >>ignora+S3 >>maxlin+x4
3. sgregn+r3[view] [source] 2023-11-20 06:19:09
>>ignora+(OP)
Can you please share the sources for Ilya's views?
replies(1): >>ignora+I3
4. ignora+I3[view] [source] [discussion] 2023-11-20 06:20:43
>>sgregn+r3
https://archive.is/yjOmt
replies(1): >>zxexz+78
5. ignora+S3[view] [source] [discussion] 2023-11-20 06:21:52
>>mcpack+z1
Read the paper linked above, and if you don't agree, that's okay. There are many who don't.
replies(2): >>maxlin+45 >>calf+I6
6. esjeon+14[view] [source] 2023-11-20 06:22:29
>>ignora+(OP)
> AGI in a very narrow domain

The definition of AGI always puzzles me, because the "G" in AGI stands for "general", and that word certainly doesn't play well with "narrow". AGI is a new buzzword, I guess.

replies(1): >>famous+85
7. maxlin+x4[view] [source] [discussion] 2023-11-20 06:26:21
>>mcpack+z1
I think the guy read the paper he linked the wrong way. The paper explicitly separates "narrow" and "general" types: AlphaGo is in the "virtuoso" bracket for narrow AI, and ChatGPT is in the "emerging" bracket for general AI. The only thing it labels AGI is a few levels up from "virtuoso", but on the "general" side.
8. maxlin+45[view] [source] [discussion] 2023-11-20 06:30:02
>>ignora+S3
Check it again; I think you might have misread it. It categorizes things in a way that clearly separates AlphaGo from even shooting towards "AGI". The "general" part of AGI can't really be skipped, or the words don't make any sense anymore.
replies(1): >>ignora+r5
9. famous+85[view] [source] [discussion] 2023-11-20 06:30:30
>>esjeon+14
Well, there's nothing narrow about SotA LLMs. The main hinge is just competence.

I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.

10. ignora+r5[view] [source] [discussion] 2023-11-20 06:32:40
>>maxlin+45
Ah, gotcha; I meant "superintelligence" (which is ASI and not AGI).
11. calf+I6[view] [source] [discussion] 2023-11-20 06:41:32
>>ignora+S3
Has anyone written a response to this paper? Its main gist is to try to define AGI empirically, using only what is measurable.
12. zxexz+78[view] [source] [discussion] 2023-11-20 06:50:41
>>ignora+I3
For what it's worth, the MIT Technology Review these days is considered closer to a "tech tabloid" than to an actual news source. I personally would find it hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models. And I'm of the 'faction' that thinks AGI can't come soon enough.
replies(1): >>ignora+1b
13. ignora+1b[view] [source] [discussion] 2023-11-20 07:10:44
>>zxexz+78
> hard to believe (on gut instinct, nothing empirical) that AGI has been achieved before we have a general playbook for domain-specific SotA models

Ilya is pretty serious about alignment, perhaps precisely because of his gut instinct: https://www.youtube.com/watch?v=Ft0gTO2K85A (2 Nov 2023)

replies(1): >>zxexz+zOj
14. zxexz+zOj[view] [source] [discussion] 2023-11-26 04:49:27
>>ignora+1b
I don't doubt he's serious in what he believes, and I respect Ilya greatly as a researcher. Do you have notes or a time-point in that podcast for me to listen to? I bemoan the trend toward podcasts-as-references; even a time-point reference (or even multiple!) into the transcript would be greatly appreciated!