
1. buffer+ (OP) 2024-01-08 22:52:41
That's an unreasonable metric for AGI.

You're basically requiring AGI to be smarter/better than the smartest/best humans in every single field.

What you're describing is ASI.

If we have AGI on the level of an average human (which is pretty dumb), it's already very useful. That gives you a robotic paradise where robots do ALL mundane tasks.

2. endisn+Y3 2024-01-08 23:11:55
>>buffer+(OP)
What is your definition of AGI that isn't already met? Computers have been superior to average humans in a variety of fields since the 90s. If we consider intelligence to be the ability to acquire knowledge, then any "AGI" will become "ASI" in short order, so I make no distinction between the two.
3. buffer+ab 2024-01-08 23:49:33
>>endisn+Y3
AGI must be comparable to human capabilities in most fields. That includes things like

• driving at human-level safety

• folding clothes with two robotic hands

• writing mostly correct code at large scale (not just leetcode problems) and fixing bugs after testing

• reasoning beyond simple riddles

• performing simple surgeries unassisted

• looking at a recipe and cooking a meal

• most importantly, learning new skills at an average human level: figuring out what it needs to learn to solve a given problem, watching some tutorials, and learning from them.
