zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. endisn+uc 2024-01-08 22:23:55
>>treebr+(OP)
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence able to do anything any human has ever done or will ever do with any technology short of the AGI itself.*

* It goes without saying that by this definition I mean humanity would no longer be able to contribute in any meaningful, qualitative way to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally, I believe AGI will never happen without a body. Intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing sounds neat, but I doubt it would be as smart, since it would lack any constraint driving it to grow.

2. golol+Dk 2024-01-08 23:00:58
>>endisn+uc
Why define AGI like that? General intelligence is supposed to be something like human intelligence. You are talking about ASI.
3. endisn+6n 2024-01-08 23:13:15
>>golol+Dk
I'm curious to hear your definition of AGI that hasn't already been met, given that computers have been superior to humans at a wide variety of tasks since the '90s.
4. golol+io 2024-01-08 23:19:07
>>endisn+6n
- passing a hard Turing test: adversarial, lasting a few weeks, and judged against 10th-percentile humans.

- being a roughly human-equivalent remote worker.

- having robust common sense on language tasks.

- having robust common sense on video, audio, and robotics tasks, i.e. housework androids (robotics is no longer the difficulty).

Just to name a few. There is a huge gap between what LLMs can do and what you describe!
