1. endisn+uc 2024-01-08 22:23:55
>>treebr+(OP)
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do, using technology that does not include the AGI itself.*

* By this definition I also mean that humanity will no longer be able to contribute meaningfully, in any qualitative way, to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally, I believe AGI will never happen without a body. Intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing seems neat, but I doubt it would be as smart, since it lacks any constraints to drive it to growth.

2. Neverm+Ad 2024-01-08 22:29:50
>>endisn+uc
If we consider OpenAI itself, a hybrid corporation/AI system, its constraints are obvious.

It needs vast resources to operate, and as competition in AI heats up, it will continually have to create new value to survive.

I'm not making any predictions about OpenAI, except that as its machines get smarter, they will also become more explicitly focused on its survival.

(As opposed to the implicit contribution of AI to its value creation today. For the time being, the AI is in a passive role.)
