zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. endisn+uc 2024-01-08 22:23:55
>>treebr+(OP)
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence that can do anything any human has ever done or will do, using technology that does not include the AGI itself.*

* By this definition I also mean that humans will no longer be able to contribute meaningfully, in any qualitative way, to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally, I believe AGI will never happen without a body. Intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing seems neat, but I doubt it will be as smart, since it lacks any constraints to drive its growth.

2. paxys+Ck 2024-01-08 23:00:52
>>endisn+uc
AGI doesn't have to mean superintelligence/singularity (which seems to be what you are describing).
3. endisn+Sm 2024-01-08 23:12:24
>>paxys+Ck
What is your definition of AGI that isn't already met?
4. paxys+Ho 2024-01-08 23:21:11
>>endisn+Sm
Intelligence involves self-learning and self-correction. AIs today are trained for specific tasks on specific data sets and cannot expand beyond that. If you give an LLM a question it cannot answer, and it goes and figures out how to answer it without additional help, that behavior would qualify it as AGI.