It's a problem whose existence we haven't even seen yet. It's like saying no one has solved the problem of alien invasions.
So less like an alien invasion.
And more like a pandemic at the speed of light.
If many of the smartest human minds together make an AGI that only exceeds a mediocre human, why assume it can make itself more efficient or bigger? Even if it's smarter than the collective effort of the scientists who made it, there's no real guarantee there's a lot of low-hanging fruit left for it to self-improve.
I think the near-term problem with AGI isn't a potential tech singularity, but simply its potential to be societally destabilizing.
The main problems stopping it are:
- no intelligent agent is motivated to improve itself, because the new, improved thing would be someone else and not it;
- that costs money, and you're just pretending everything is free.
But if we're seeing the existence of an unaligned superintelligence, surely it's far too late to do something about it.
I'm assuming you meant "aren't" here.
> That would imply there was some arbitrary physical limit to intelligence
All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation, and there are good reasons to think that's the case.
Also there's no guarantee the amount of raw computation is going to increase quickly.
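To make the sub-linear point concrete, here's a toy sketch. The logarithmic form, the constant k, and the FLOP counts are all illustrative assumptions, not an empirical law:

```python
import math

def peak_intelligence(compute_flops: float, k: float = 10.0) -> float:
    """Toy sub-linear scaling: 'intelligence' grows with the log of raw compute."""
    return k * math.log10(compute_flops)

for exponent in (18, 21, 24, 27):
    c = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOPs -> toy 'intelligence' ~ {peak_intelligence(c):.0f}")

# Each 1000x increase in compute adds a fixed 30 to the toy metric (180 -> 270),
# so even exponential hardware growth only buys linear gains under this assumption.
```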
In any case, the kind of exponential runaway you mention (years) isn't the "pandemic at the speed of light" mentioned in the grandparent.
I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface for running native computer code for math and data processing) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year, even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that completely trash the signal-to-noise ratio of written text, etc.
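A rough back-of-envelope version of that worry, where every input is a placeholder assumption rather than a sourced figure:

```python
# Back-of-envelope sketch of the "millions of cheap savant workers" claim above.
# Every number here is a placeholder assumption, not a sourced figure.
accelerators_shipped_per_year = 1_000_000   # hypothetical annual accelerator supply
workers_per_accelerator = 1                 # assume one always-on agent fits per card
hours_per_year = 24 * 365                   # the agent plugs away around the clock

worker_hours = accelerators_shipped_per_year * workers_per_accelerator * hours_per_year
human_equivalents = worker_hours / 2_000    # ~2,000 working hours per human per year

print(f"~{human_equivalents:,.0f} full-time human-equivalents added per year")
```

Even with these placeholder inputs, one always-on agent per card works out to roughly four million full-time human-equivalents per year, which is the sense in which "not very smart and not very fast" can still be destabilizing.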
It's like saying don't worry about global thermonuclear war because we haven't seen it yet.
The Neanderthals, on the other hand, did encounter a super-intelligence.
I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.