zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. endisn+uc 2024-01-08 22:23:55
>>treebr+(OP)
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do with technology that does not include the AGI itself.*

* By this definition I also mean that humanity will no longer be able to contribute meaningfully, in any qualitative way, to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally, I believe AGI will never happen without a body. I believe intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing seems neat, but I doubt it would be as smart, since it lacks any constraints to drive its growth.

2. murder+Cd 2024-01-08 22:29:52
>>endisn+uc
> I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do with technology that does not include the AGI itself.

That bar is insane. By that logic, humans aren't intelligent.

3. endisn+ce 2024-01-08 22:32:11
>>murder+Cd
What do you mean? By that same logic, humans have definitionally already done everything they can or will do with technology.

I believe AGI must be definitionally superior. Anything else and you could argue it has existed for a while; e.g. computers have been superior at adding numbers for basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed that reasoning to be specified in English, but you could have easily written a formally verified program in the 90s that exhibits better reasoning, in the form of correctness, for discrete tasks.

Even in game playing, Go and Chess, games that require moderate to high planning skill, are all but solved by computers, but I don't consider those systems AGI.

I would not consider N entities that can each beat humanity at some of the Y tasks humans are capable of to be AGI, unless some system X is capable of picking the right one of the N for each task in Y as necessary, without explicit prompting. It would need to be a single system. That being said, I could see one disagreeing, haha.

I am curious whether anyone has a different definition of AGI that cannot already be met today.

4. dogpre+Fg 2024-01-08 22:43:40
>>endisn+ce
Comparing the accomplishments of one entity against the entirety of humanity sets the bar needlessly high. Imagine if we could duplicate everything humans can do, but it required specialized AIs (airplane pilot AI, software engineer AI, chemist AI, etc.). That world would be radically different from the one we know, and it doesn't reach your bar. So, in that sense, it's a misplaced benchmark.
5. OJFord+fv 2024-01-08 23:55:49
>>dogpre+Fg
I think GP's point is that those would be AIs, yes, but an A *General* I would be able to do them all, like a hypothetical human GI would.

I'm not saying I agree; I'm not really sure how useful it is as a term. It seems to me any definition would be arbitrary: we'll always want more intelligence, and it doesn't really matter whether it's reached a level we can call 'general' or not.

(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)
