zlacker

[parent] [thread] 4 comments
1. endisn+(OP)[view] [source] 2024-01-08 22:32:11
What do you mean? By that same logic, humans have definitionally already done everything they can or will do with technology.

I believe AGI must be definitionally superior. Anything else and you could argue it’s existed for a while, e.g. computers have been superior at adding numbers for basically their entire existence. Even with reasoning, computers have been better for a while. Language models have allowed for that reasoning to be specified in English, but you could’ve easily written a formally verified program in the 90s that exhibits better reasoning, in the form of correctness, for discrete tasks.

Even game playing: Go and Chess, games that require moderate to high planning skill, are all but solved by computers, but I don’t consider those systems AGI.

I would not consider N entities that can each beat humanity at the Y tasks humans are capable of to be AGI, unless some system X is capable of picking the right entity from N for each task in Y as necessary, without explicit prompting. It would need to be a single system. That being said, I could see one disagreeing haha.

I am curious if anyone has a different definition of AGI that cannot already be met now.

replies(1): >>dogpre+t2
2. dogpre+t2[view] [source] 2024-01-08 22:43:40
>>endisn+(OP)
The comparison of the accomplishments of one entity versus the entirety of humanity is a needlessly high bar. Imagine if we could duplicate everything humans could do, but it required specialized AIs (airplane pilot AI, software engineer AI, chemist AI, etc.). That world would be radically different from the one we know, and it doesn't reach your bar. So, in that sense, it's a misplaced benchmark.
replies(3): >>endisn+f3 >>OJFord+3h >>Jensso+Rp
3. endisn+f3[view] [source] [discussion] 2024-01-08 22:47:06
>>dogpre+t2
I imagine AGI would be implemented as something similar to MoE (mixture of experts), so it seems fair to me.
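For what it's worth, here's a toy sketch of what MoE-style routing looks like: a gate scores each specialized expert for a task and dispatches to the best one, so the ensemble presents as a single system without explicit prompting. All names are made up, and the keyword gate stands in for what would really be a learned gating network.

```python
# Hypothetical experts: each handles one narrow task type.
EXPERTS = {
    "arithmetic": lambda task: sum(task["operands"]),
    "reverse":    lambda task: task["text"][::-1],
}

def gate(task):
    """Score each expert for the task; a trivial keyword match stands in
    for a learned gating network."""
    scores = {name: (1.0 if task["kind"] == name else 0.0) for name in EXPERTS}
    return max(scores, key=scores.get)

def route(task):
    """Single entry point: pick the best-scoring expert, then run it."""
    return EXPERTS[gate(task)](task)

print(route({"kind": "arithmetic", "operands": [2, 3, 4]}))  # 9
print(route({"kind": "reverse", "text": "agi"}))             # iga
```

The point is just that the caller only ever talks to `route`, even though the competence lives in specialists underneath.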
4. OJFord+3h[view] [source] [discussion] 2024-01-08 23:55:49
>>dogpre+t2
I think GP's point is that those would be AIs, yes, but an Artificial *General* Intelligence would be able to do them all, like a hypothetical human general intelligence would.

I'm not saying I agree, I'm not really sure how useful it is as a term, seems to me any definition would be arbitrary - we'll always want more intelligence, it doesn't really matter if it's reached a level we can call 'general' or not.

(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)

5. Jensso+Rp[view] [source] [discussion] 2024-01-09 01:01:39
>>dogpre+t2
> Imagine if we could duplicate everything humans could do but it required specialized AIs

Then those AIs aren't general intelligences; as you said, they are specialized.

Note that a set of AIs is still an AI, so an AI should always be compared to groups of humans and not a single human. The AI needs to replace groups of humans, not individuals: very few workplaces have individual humans doing tasks alone without talking to coworkers.
