zlacker

Thousands of AI Authors on the Future of AI
1. endisn+uc 2024-01-08 22:23:55
>>treebr+(OP)
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do, using any technology that does not include the AGI itself*.

* By this definition, humans would no longer be able to contribute meaningfully, in any qualitative way, to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally, I believe AGI will never happen without a body. Intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing sounds neat, but I doubt it would be as smart, since it would lack any constraints to drive its growth.

2. buffer+Qi 2024-01-08 22:52:41
>>endisn+uc
That's an unreasonable metric for AGI.

You're basically requiring AGI to be smarter/better than the smartest/best humans in every single field.

What you're describing is ASI (artificial superintelligence).

If we have AGI on the level of an average human (which is pretty dumb), it's already very useful. That gives you a robotic paradise where robots do ALL the mundane tasks.
