* It also goes without saying that by this definition I mean humanity will no longer be able to meaningfully help in any qualitative way with intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).
Fundamentally, I believe AGI will never happen without a body. I believe intelligence requires constraints, and the ultimate constraint is life. Some omniscient immortal thing seems neat, but I doubt it'll be as smart, since it lacks any constraints to drive its growth.
It needs vast resources to operate. As the competition in AI heats up, it will continually have to create new levels of value to survive.
Not making any predictions about OpenAI, except that as its machines get smarter, they will also get more explicitly focused on its survival.
(As opposed to the implicit contribution of AI to its creation of value today. The AI is in a passive role for the time being.)
That bar is insane. By that logic, humans aren't intelligent.
I believe AGI must be definitionally superior. Anything else and you could argue it’s existed for a while, e.g. computers have been superior at adding numbers basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed for that reasoning to be specified in English, but you could’ve easily written a formally verified program in the 90s that exhibits better reasoning in the form of correctness for discrete tasks.
Even with game playing: Go and Chess, games that require moderate to high planning skill, are all but solved with computers, but I don't consider them AGI.
I would not consider N entities that can each beat humanity at the Y tasks humans are capable of to be AGI, unless some system X is capable of picking the right entity for Y as necessary without explicit prompting. It would need to be a single system. That being said, I could see one disagreeing, haha.
I am curious if anyone has a different definition of AGI that cannot already be met now.
Or a group of millions of such AGI instances in a similar time frame?
I'm inclined to believe this as well, but rather than "it won't happen", I take it to mean that AI and robotics just need to unify. That's already starting to happen.
You're basically requiring AGI to be smarter/better than the smartest/best humans in every single field.
What you're describing is ASI.
If we have AGI that is on the level of an average human (which is pretty dumb), it's already very useful. That gives you a robotic paradise where robots do ALL mundane tasks.
I think this is very plausible: that AI won't really be AGI until it has a way to physically grow, free from the umbilical cord that is the chip fab supply chain.
So it might take Brainoids/Brain-on-chip technology to get a lot more advanced before that happens. However, if there are some breakthroughs in that tech, so that a digital AI could interact with in vitro tissue, utilize it, and grow it, the takeoff could be really fast.
- being a roughly human-equivalent remote worker.
- having robust common sense on language tasks
- having robust common sense on video, audio and robotics tasks, basically housework androids (robotics is not the difficulty anymore).
Just to name a few. There is a huge gap between what LLMs can do and what you describe!
I assure you computers are already superior to a human remote worker whose job is to reliably categorize items or to add numbers. Look no further than the Duolingo post that's ironically on the front page alongside this very post at the time of this writing.
Computers have been on par with human translators for some languages since the 2010s. A hypothetical AGI is not a god; it would still need exposure, similar to training for LLMs. We're already near the peak with respect to that problem.
I'm not familiar with a "hard turing test." What is that?
As I mention in another post, this is why I do not make any distinction between AGI and superintelligence; I believe they are the same thing. A thought experiment: what would it mean for a human to be superintelligent? Presumably it would mean learning things with the least possible amount of exposure (not omniscience, necessarily).
• driving (at human level safety)
• folding clothes with two robotic hands
• writing mostly correct code at large scale (not just leetcode problems) and fixing bugs after testing
• reasoning beyond simple riddles
• performing simple surgeries unassisted
• looking at a recipe and cooking a meal
• most importantly, learning new skills at average human level: figuring out what it needs to learn to solve a given problem, watching some tutorials, and learning from that.
I'm not saying I agree; I'm not really sure how useful it is as a term. It seems to me any definition would be arbitrary: we'll always want more intelligence, and it doesn't really matter whether it has reached a level we can call 'general' or not.
(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)
Francis Fukuyama wrote in "The Last Man":
> The life of the last man is one of physical security and material plenty, precisely what Western politicians are fond of promising their electorates. Is this really what the human story has been "all about" these past few millennia? Should we fear that we will be both happy and satisfied with our situation, no longer human beings but animals of the genus homo sapiens?
It's a fantastic essay (really, the second half of his seminal book) that I think everyone should read.
Then those AIs aren't general intelligences; as you said, they are specialized.
Note that a set of AIs is still an AI, so an AI should always be compared to groups of humans and not a single human. The AI needs to replace groups of humans rather than individuals, since very few workplaces have individual humans doing tasks alone without talking to coworkers.
Happiness is always fleeting. Aren't our lives a bit dystopian already if we need to work, and for what reason? So that we can feel meaningful, while hoping we don't lose our ability to be useful.
- Go on LinkedIn or Fiverr and look at the kinds of remote jobs being offered right now: developer, HR, bureaucrat, therapist, editor, artist, etc. Current AI agents cannot do the large majority of these jobs just like that, without supervision. Yes, they can perform certain aspects of a job, but not the actual job; people wouldn't hire them.
A hard Turing test is a proper Turing test that's long and not just small talk. Intelligence can't be "faked" then. Even harder is when it is performed adversarially, i.e. a team of humans plans which questions to ask and really digs deep. For example: commonsense reasoning and long-term memory are two purely textual tasks where LLMs still fail. Yes, they do amazingly well compared to what we had previously, which was nothing, but if you think they are human-equivalent then imo you need to play with LLMs more.
Another hard Turing test would be: can this agent be a fulfilling long-distance partner? And I'm not talking about fulfilling in the way people currently have relationships with crude agents. I am talking about really giving you the sense of being understood, learning you, enriching your life, etc. We can't do that yet.
Give me an agent and 1 week and I can absolutely figure out whether it is a human or AI.