* By this definition I also mean that humanity will no longer be able to contribute meaningfully, in any qualitative way, to intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).
Fundamentally, I believe AGI will never happen without a body. Intelligence requires constraints, and the ultimate constraint is life. Some omniscient, immortal thing sounds neat, but I doubt it would be as smart, since it lacks any constraints to drive it toward growth.
That bar is insane. By that logic, humans aren't intelligent.
I believe AGI must be definitionally superior. Anything less and you could argue it has existed for a while: computers have been better at adding numbers for basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed that reasoning to be specified in English, but you could easily have written a formally verified program in the 90s that exhibits better reasoning, in the sense of provable correctness, for discrete tasks.
Even game playing: Go and Chess, games that require moderate to high planning skill, are all but solved by computers, but I don't consider those systems AGI.
I would not consider N entities that can each beat humanity at some of the Y tasks humans are capable of to be AGI, unless some system X is capable of picking the right entity among the N for each task in Y as necessary, without explicit prompting. It would need to be a single system. That said, I could see someone disagreeing haha.
I am curious if anyone has a different definition of AGI that cannot already be met now.
I'm not saying I agree; I'm not really sure how useful it is as a term. Any definition seems arbitrary to me - we'll always want more intelligence, and it doesn't really matter whether it's reached a level we can call 'general' or not.
(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)