zlacker

[parent] [thread] 29 comments
1. endisn+(OP)[view] [source] 2024-01-08 22:23:55
Maybe I'm too pessimistic, but I doubt we will have AGI even by 2100. I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do with technology that does not include the AGI itself*.

* It goes without saying that by this definition I mean that humanity will no longer be able to meaningfully help in any qualitative way with intellectual tasks (e.g. AGI > human; AGI > human + computer; AGI > human + internet; AGI > human + LLM).

Fundamentally I believe AGI will never happen without a body. I believe intelligence requires constraints and the ultimate constraint is life. Some omniscient immortal thing seems neat, but I doubt it'll be as smart since it lacks any constraints to drive it to growth.

replies(10): >>Neverm+61 >>murder+81 >>novaga+x2 >>albert+W2 >>alanbe+16 >>buffer+m6 >>breck+e7 >>paxys+88 >>golol+98 >>abeppu+Te
2. Neverm+61[view] [source] 2024-01-08 22:29:50
>>endisn+(OP)
If we consider OpenAI itself, a hybrid corporation/AI system, its constraints are obvious.

It needs vast resources to operate. As the competition in AI heats up, it will continually have to create new levels of value to survive.

Not making any predictions about OpenAI, except that as its machines get smarter, they will also get more explicitly focused on its survival.

(As opposed to the implicit contribution of AI to its creation of value today. The AI is in a passive role for the time being.)

3. murder+81[view] [source] 2024-01-08 22:29:52
>>endisn+(OP)
> I define AGI as a non-human intelligence that can do anything any human has ever done or will ever do with technology that does not include the AGI itself*.

That bar is insane. By that logic, humans aren't intelligent.

replies(1): >>endisn+I1
4. endisn+I1[view] [source] [discussion] 2024-01-08 22:32:11
>>murder+81
What do you mean? By that same logic, humans have definitionally already done everything they can or will do with technology.

I believe AGI must be definitionally superior. Anything less and you could argue it has existed for a while; e.g. computers have been superior at adding numbers for basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed that reasoning to be specified in English, but you could easily have written a formally verified program in the 90s that exhibits better reasoning, in the form of correctness, for discrete tasks.
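
(To make "correctness for discrete tasks" concrete, a toy sketch; this assumes nothing about any real 90s verification toolchain, it just checks a tiny program exhaustively against its spec over a finite domain, a kind of reliability no human worker matches:)

    # Toy illustration: machine-checked correctness on a discrete task.
    # A real 90s approach would use a theorem prover or model checker;
    # exhaustively checking a finite domain makes the same point.

    def sat_add8(a, b):
        # 8-bit saturating addition: the result is capped at 255.
        return min(a + b, 255)

    # Verify the spec for every possible pair of 8-bit inputs.
    for a in range(256):
        for b in range(256):
            r = sat_add8(a, b)
            assert 0 <= r <= 255           # result stays in range
            assert r == min(a + b, 255)    # result matches the spec
    print("verified for all 65,536 input pairs")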

Even with game playing: Go and Chess, games that require moderate to high planning skill, are all but solved by computers, but I don't consider them AGI.

I would not consider N entities that can each beat humanity at the Y tasks humans are capable of to be AGI, unless some system X is capable of picking among the N for a given Y as necessary, without explicit prompting. It would need to be a single system. That being said, I could see one disagreeing haha.

I am curious if anyone has a different definition of AGI that cannot already be met now.

replies(1): >>dogpre+b4
5. novaga+x2[view] [source] 2024-01-08 22:36:15
>>endisn+(OP)
I'm optimistic, in that I hope we don't have AGI by 2100, because it sounds like a truly dystopian future even in the best-case scenario.
replies(1): >>ehsanu+tc
6. albert+W2[view] [source] 2024-01-08 22:37:36
>>endisn+(OP)
You're saying that a single AGI model should be capable of doing everything the whole of humanity has done in the last 100k years or so?

Or a group of millions of such AGI instances in a similar time frame?

replies(1): >>endisn+a5
7. dogpre+b4[view] [source] [discussion] 2024-01-08 22:43:40
>>endisn+I1
Comparing the accomplishments of one entity against the entirety of humanity sets the bar needlessly high. Imagine if we could duplicate everything humans could do but it required specialized AIs (an airplane-pilot AI, a software-engineer AI, a chemist AI, etc.). That world would be radically different from the one we know, and it doesn't reach your bar. So, in that sense, it's a misplaced benchmark.
replies(3): >>endisn+X4 >>OJFord+Li >>Jensso+zr
8. endisn+X4[view] [source] [discussion] 2024-01-08 22:47:06
>>dogpre+b4
I imagine AGI would be implemented as something similar to MoE (mixture of experts), so it seems fair to me.
9. endisn+a5[view] [source] [discussion] 2024-01-08 22:47:59
>>albert+W2
No, not a single model. A single system. Based on nothing in particular, I expect AGI to basically be implemented like MoE.
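
(For anyone unfamiliar, a minimal sketch of the MoE idea; the experts and the keyword router below are made up for illustration, standing in for a learned gating network, not describing any real system:)

    # Mixture-of-experts sketch (illustrative only). A router scores
    # each expert for an input and dispatches to the highest scorer;
    # real MoE layers route per token and blend several experts.

    def chess_expert(task):
        return "chess move for: " + task

    def translation_expert(task):
        return "translation of: " + task

    EXPERTS = {"chess": chess_expert, "translate": translation_expert}

    def router_score(task, name):
        # Stand-in for a learned gating network: crude keyword match.
        return 1.0 if name in task else 0.0

    def moe_system(task):
        # Dispatch the task to the highest-scoring expert.
        best = max(EXPERTS, key=lambda name: router_score(task, name))
        return EXPERTS[best](task)

    print(moe_system("translate this sentence"))
    # -> translation of: translate this sentence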
10. alanbe+16[view] [source] 2024-01-08 22:51:16
>>endisn+(OP)
> Fundamentally I believe AGI will never happen without a body

I'm inclined to believe this as well, but rather than "it won't happen", I take it to mean that AI and robotics just need to unify. That's already starting to happen.

11. buffer+m6[view] [source] 2024-01-08 22:52:41
>>endisn+(OP)
That's an unreasonable metric for AGI.

You're basically requiring AGI to be smarter/better than the smartest/best humans in every single field.

What you're describing is ASI.

If we have AGI that is on the level of an average human (which is pretty dumb), it's already very useful. That gives you a robotic paradise where robots do ALL mundane tasks.

replies(1): >>endisn+ka
12. breck+e7[view] [source] 2024-01-08 22:57:06
>>endisn+(OP)
> I doubt we will have AGI even by 2100... Fundamentally I believe AGI will never happen without a body.

I think this is very plausible--that AI won't really be AGI until it has a way to physically grow, free from the umbilical cord that is the chip fab supply chain.

So it might take Brainoid/brain-on-chip technology getting a lot more advanced before that happens. However, if there are breakthroughs in that tech, such that a digital AI could interact with in vitro tissue, utilize it, and grow it, it seems like the takeoff could be really fast.

13. paxys+88[view] [source] 2024-01-08 23:00:52
>>endisn+(OP)
AGI doesn't have to mean superintelligence/singularity (which seems to be what you are describing).
replies(1): >>endisn+oa
14. golol+98[view] [source] 2024-01-08 23:00:58
>>endisn+(OP)
Why define AGI like that? General intelligence is supposed to be something like human intelligence. You are talking about ASI.
replies(1): >>endisn+Ca
15. endisn+ka[view] [source] [discussion] 2024-01-08 23:11:55
>>buffer+m6
What is your definition of AGI that isn't already met? Computers have already been superior to average humans in a variety of fields since the 90s. If we consider intelligence as the ability to acquire knowledge, then any "AGI" will be "ASI" in short order; therefore I make no distinction between the two.
replies(1): >>buffer+wh
16. endisn+oa[view] [source] [discussion] 2024-01-08 23:12:24
>>paxys+88
What is your definition of AGI that isn't already met?
replies(1): >>paxys+dc
17. endisn+Ca[view] [source] [discussion] 2024-01-08 23:13:15
>>golol+98
I'm curious to hear your definition of AGI that hasn't already been met, given computers have been superior to humans at a large variety of tasks since the 90s.
replies(1): >>golol+Ob
18. golol+Ob[view] [source] [discussion] 2024-01-08 23:19:07
>>endisn+Ca
- passing a hard Turing test: adversarial, lasting a few weeks, and comparing against 10th-percentile humans.

- being a roughly human-equivalent remote worker.

- having robust common sense on language tasks.

- having robust common sense on video, audio, and robotics tasks; basically housework androids (robotics is not the difficulty anymore).

Just to name a few. There is a huge gap between what LLMs can do and what you describe!

replies(1): >>endisn+Gc
19. paxys+dc[view] [source] [discussion] 2024-01-08 23:21:11
>>endisn+oa
Intelligence involves self-learning and self-correction. AIs today are trained for specific tasks on specific data sets and cannot expand beyond that. If you give an LLM a question it cannot answer, and it goes and figures out how to answer it without additional help, that behavior would qualify it as AGI.
replies(1): >>endisn+Dd
20. ehsanu+tc[view] [source] [discussion] 2024-01-08 23:22:44
>>novaga+x2
What kind of best case are you imagining? I don't quite understand why the very best case would be dystopian.
replies(1): >>novaga+fp
21. endisn+Gc[view] [source] [discussion] 2024-01-08 23:23:52
>>golol+Ob
I'm not sure these are good examples. What are the actual tasks involved? These are just as nebulous as "AGI".

I assure you computers are already superior to a human remote worker whose job is to reliably categorize items or to add numbers. Look no further than the Duolingo post that's ironically on the front page at the time of this writing, alongside this very post.

Computers have been on par with human translators for some languages since the 2010s. A hypothetical AGI is not a god; it would still need exposure, similar to training for LLMs. We're already near the peak with respect to that problem.

I'm not familiar with a "hard Turing test." What is that?

replies(1): >>golol+Q21
22. endisn+Dd[view] [source] [discussion] 2024-01-08 23:27:59
>>paxys+dc
By that definition, what you realize is that it's the same as what I said, since it can easily be reduced down to anything any human can do, and your definition says AGI can go figure out how to do it. You extrapolate this onto future tasks and voilà.

As I mention in another post, this is why I make no distinction between AGI and superintelligence; I believe they are the same thing. A thought experiment: what would it mean for a human to be superintelligent? Presumably it would mean learning things with the least possible amount of exposure (not omniscience, necessarily).

23. abeppu+Te[view] [source] 2024-01-08 23:34:02
>>endisn+(OP)
There's a lot of cool work being done on embodied intelligence -- what makes you think that 76 years wouldn't be enough to create an embodied agent with relevant in-built constraints?
24. buffer+wh[view] [source] [discussion] 2024-01-08 23:49:33
>>endisn+ka
AGI must be comparable to humans' capabilities in most fields. That includes things like:

• driving (at human-level safety)

• folding clothes with two robotic hands

• writing mostly correct code at large scale (not just leetcode problems), and fixing bugs after testing

• reasoning beyond simple riddles

• performing simple surgeries unassisted

• looking at a recipe and cooking a meal

• most importantly, learning new skills at an average human level: figuring out what it needs to learn to solve a given problem, watching some tutorials, and learning from that.

25. OJFord+Li[view] [source] [discussion] 2024-01-08 23:55:49
>>dogpre+b4
I think GP is thinking that those would be AIs, yes, but an A-General-I would be able to do them all, like a hypothetical human general intelligence would.

I'm not saying I agree; I'm not really sure how useful it is as a term. It seems to me any definition would be arbitrary - we'll always want more intelligence, and it doesn't really matter whether it's reached a level we can call 'general' or not.

(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)

26. novaga+fp[view] [source] [discussion] 2024-01-09 00:43:37
>>ehsanu+tc
I believe the best-case scenario is one where humans have all of our needs met and all jobs are replaced with AI. Money becomes pointless and we live in a post-scarcity society. The world is powered by clean energy and we become net-zero carbon. Life becomes pointless, with nothing to strive toward or struggle against. We spend our lives consuming media and entertaining ourselves like the people in Wall-E. A truly meaningless existence.

Francis Fukuyama wrote in "The Last Man":

> The life of the last man is one of physical security and material plenty, precisely what Western politicians are fond of promising their electorates. Is this really what the human story has been "all about" these past few millennia? Should we fear that we will be both happy and satisfied with our situation, no longer human beings but animals of the genus homo sapiens?

It's a fantastic essay (really, the second half of his seminal book "The End of History and the Last Man") that I think everyone should read.

replies(1): >>mewpme+ny
27. Jensso+zr[view] [source] [discussion] 2024-01-09 01:01:39
>>dogpre+b4
> Imagine if we could duplicate everything humans could do but it required specialized AIs

Then those AIs aren't general intelligences; as you said, they are specialized.

Note that a set of AIs is still an AI, so AI should always be compared to groups of humans and not a single human. The AI needs to replace groups of humans rather than individuals, since very few workplaces have individual humans doing tasks alone without talking to coworkers.

28. mewpme+ny[view] [source] [discussion] 2024-01-09 02:00:10
>>novaga+fp
But are we really happy right now, or ever?

Happiness is always fleeting. Aren't our lives a bit dystopian already if we need to work? And for what reason? So that we can possibly feel meaningful, hoping we don't lose our ability to be useful.

replies(1): >>novaga+V22
29. golol+Q21[view] [source] [discussion] 2024-01-09 06:47:04
>>endisn+Gc
More specific tasks:

- Go on LinkedIn or Fiverr and look at the kinds of jobs being offered remotely right now: developer, HR, bureaucrat, therapist, editor, artist, etc. Current AI agents cannot do the large majority of these jobs just like that, without supervision. Yes, they can perform certain aspects of the job, but not the actual job; people wouldn't hire them.

A hard Turing test is a proper Turing test that's long and not just smalltalk; intelligence can't be "faked" then. Even harder is when it is performed adversarially, i.e. there is a team of humans that plans which questions it will ask and really digs deep. For example: commonsense reasoning and long-term memory are two purely textual tasks where LLMs still fail. Yes, they do amazingly well in comparison to what we had previously, which was nothing, but if you think they are human-equivalent then imo you need to play with LLMs more.

Another hard Turing test would be: can this agent be a fulfilling long-distance partner? And I'm not talking about fulfilling in the way people currently have relationships with crude agents. I am talking about really giving you the sense of being understood, learning you, enriching your life, etc. We can't do that yet.

Give me an agent and 1 week and I can absolutely figure out whether it is a human or AI.

30. novaga+V22[view] [source] [discussion] 2024-01-09 15:08:53
>>mewpme+ny
Our lives are imperfect, but that doesn't make them dystopian. Some people hate their jobs, but I strongly believe that most people, especially men, would be utterly miserable if we felt unnecessary. You see this even today, with many young men displaced from the economy and unable to find jobs or start families. A world in which humans are no longer needed in the economy will be inherently fragile, as I believe most people would go out of their way to destroy it.