zlacker

1. mlyle+(OP) 2023-11-20 07:32:45
> it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains.

I'm assuming you meant "aren't" here.

> That would imply there was some arbitrary physical limit to intelligence

All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.
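
A toy illustration of what that means in practice (the exponent and compute figures below are made up for the sketch, not fitted to anything): with capability ~ compute**alpha and alpha < 1, each doubling of compute buys a smaller and smaller capability gain.

    # Sub-linear scaling sketch: capability ~ compute**ALPHA, ALPHA < 1.
    # ALPHA and the FLOP counts are invented purely for illustration.
    ALPHA = 0.3

    def capability(compute_flops: float) -> float:
        return compute_flops ** ALPHA

    base = 1e20
    for doublings in range(5):
        c = base * 2 ** doublings
        print(f"{c:.1e} FLOPs -> {capability(c) / capability(base):.2f}x baseline capability")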

Also there's no guarantee the amount of raw computation is going to increase quickly.

In any case, the kind of exponential runaway you mention (years) isn't the "pandemic at the speed of light" mentioned in the grandparent.

I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface for running native computer code for math and data processing) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them would be employed in ways that just completely trash the signal-to-noise ratio of written text, etc.
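
Back-of-the-envelope on that (every number below is an assumption for illustration: accelerator shipments, instances per card, duty cycle), just to show how fast the head count adds up:

    # Rough count of always-on "savant workers" added per year.
    # All inputs are assumed figures, not real shipment or deployment data.
    accelerators_per_year = 1_500_000   # assumed A100-class cards shipped per year
    workers_per_card = 1                # assume one model instance fits per card
    duty_cycle = 0.9                    # fraction of the year each instance is working

    workers = accelerators_per_year * workers_per_card * duty_cycle
    work_year_equiv = workers * 24 * 365 / 2000   # vs. a ~2000-hour human work year
    print(f"~{workers:,.0f} new always-on workers per year")
    print(f"~{work_year_equiv:,.0f} human work-year equivalents per year")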

replies(1): >>MrScru+So1
2. MrScru+So1 2023-11-20 15:16:27
>>mlyle+(OP)
I think it depends on what is meant by fast takeoff. If we created AGIs that are superhuman at ML and architecture design, you could see a significantly more rapid rate of progress in hardware and software at the same time. It might not be overnight, but it could still be fast enough that we wouldn’t have the global political structures in place to effectively manage it.
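
A minimal sketch of that feedback loop, with an arbitrary growth constant: if research output scales with current capability, progress compounds instead of staying linear, which is what makes "fast enough to outrun institutions" plausible.

    # Toy compounding model: capability grows at a rate proportional to itself,
    # since more capable systems speed up their own R&D. The gain is arbitrary.
    cap = 1.0     # capability in "human-researcher equivalents"
    gain = 0.5    # assumed fractional improvement per year per unit of capability

    for year in range(1, 11):
        cap *= 1 + gain
        print(f"year {year}: {cap:.1f}x baseline capability")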

I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.
