zlacker

[parent] [thread] 3 comments
1. dogpre+(OP)[view] [source] 2024-01-08 22:43:40
Comparing the accomplishments of one entity against those of the entirety of humanity sets the bar needlessly high. Imagine if we could duplicate everything humans can do, but it required specialized AIs (airplane pilot AI, software engineer AI, chemist AI, etc.). That world would be radically different from the one we know, and yet it wouldn't reach your bar. In that sense it's a misplaced benchmark.
replies(3): >>endisn+M >>OJFord+Ae >>Jensso+on
2. endisn+M[view] [source] 2024-01-08 22:47:06
>>dogpre+(OP)
I imagine AGI would be implemented as something similar to MoE (a mixture of experts), so the comparison seems fair to me.
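
Roughly what I have in mind, as a toy sketch (the expert names and keyword routing here are invented for illustration, nothing like a real learned MoE gating network):

    # Toy "mixture of experts": one system that routes each prompt to a
    # specialized model. Experts and routing rule are made up for illustration.
    from typing import Callable, Dict

    EXPERTS: Dict[str, Callable[[str], str]] = {
        "aviation":  lambda p: f"[pilot expert] {p}",
        "software":  lambda p: f"[engineer expert] {p}",
        "chemistry": lambda p: f"[chemist expert] {p}",
    }

    def route(prompt: str) -> str:
        """Crude keyword gating; a real MoE learns this routing."""
        for domain, expert in EXPERTS.items():
            if domain in prompt.lower():
                return expert(prompt)
        return "[generalist fallback] " + prompt

    print(route("Review this software patch"))
    print(route("Explain this chemistry reaction"))

The point is just that, seen from the outside, the router plus its specialized experts behaves as one system.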
3. OJFord+Ae[view] [source] 2024-01-08 23:55:49
>>dogpre+(OP)
I think GP is thinking that those would be AIs, yes, but an A *General* I would be able to do them all, like a hypothetical human GI would.

I'm not saying I agree; I'm not really sure how useful it is as a term. It seems to me any definition would be arbitrary: we'll always want more intelligence, and it doesn't really matter whether it's reached a level we can call 'general' or not.

(More useful in specialised roles perhaps, like the 'levels' of self-driving capability.)

4. Jensso+on[view] [source] 2024-01-09 01:01:39
>>dogpre+(OP)
> Imagine if we could duplicate everything humans could do but it required specialized AIs

Then those AIs aren't general intelligences; as you said, they are specialized.

Note that a set of AIs is still an AI, so an AI should always be compared to a group of humans rather than a single human. The AI needs to replace groups of humans, not individuals, since very few workplaces have individual humans doing tasks alone without talking to coworkers.
