> If this were true, intelligent people would have taken over society by now
The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "smarter".
You're probably smarter than a guy who recreationally huffs spray paint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of a higher "class" of intelligence is magical thinking, as it implies a superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.
It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
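To put rough numbers on this, here's a toy back-of-the-envelope sketch (mine, not the parent's; the 2**n brute-force cost and the budget figures are illustrative assumptions, not tied to any real solver):

```python
import math

# Assume a brute-force search over n boolean variables costs 2**n steps
# (the textbook worst case for problems like SAT). Then the largest
# solvable n grows only logarithmically with the compute budget.
def max_solvable_vars(ops_budget: int) -> int:
    """Largest n with 2**n <= ops_budget."""
    return int(math.log2(ops_budget))

base_budget = 10**12  # hypothetical number of operations we can afford
for factor in (1, 2, 10, 1000):
    n = max_solvable_vars(base_budget * factor)
    print(f"{factor:>5}x compute -> n = {n}")
```

A 1000x compute increase only moves the solvable instance size from 39 variables to 49. That's the sense in which even large multiplicative gains in raw power buy very little on exponential problems.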
What's more, human intelligence is tied to the weakness of our flesh, and it's balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and then your intelligence ceases to exist.
Since we don't yet have the level of AGI we're discussing here, it's hard to say what its implementation will look like, but I find it hard to believe it would mimic the human model of intelligence tied to a single body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of superintelligent bees.
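If it helps make that picture concrete, here's a deliberately crude sketch of the hub-and-spoke idea (every name here is hypothetical, invented purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class IntelligenceNode:
    """Central processing center: aggregates reports, broadcasts updates."""
    knowledge: dict = field(default_factory=dict)

    def ingest(self, report: dict) -> None:
        self.knowledge.update(report)

    def broadcast(self) -> dict:
        return dict(self.knowledge)

@dataclass
class EmbodiedAgent:
    """One 'bee': senses locally, shares everything with the hive."""
    name: str
    knowledge: dict = field(default_factory=dict)

    def observe(self) -> dict:
        return {self.name: "local sensor reading"}

    def sync(self, update: dict) -> None:
        self.knowledge.update(update)

node = IntelligenceNode()
hive = [EmbodiedAgent(f"bee-{i}") for i in range(3)]
for bee in hive:
    node.ingest(bee.observe())   # data flows in from many bodies
for bee in hive:
    bee.sync(node.broadcast())   # updates flow back out; losing any one
                                 # body doesn't destroy the intelligence
```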
In lots of real-world problems you don't necessarily hit the worst case, and it often doesn't matter whether the solution is the absolute optimum.
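As a concrete toy example (mine, not the parent's): finding the optimal travelling-salesman tour is NP-hard, but a trivial nearest-neighbour heuristic often lands close to the optimum on small random instances:

```python
import itertools, math, random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points):
    """Greedy heuristic: always walk to the closest unvisited point."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]

# Exact answer by brute force over all 8! tours (only feasible because
# the instance is tiny -- exactly the exponential wall described above).
best = min(itertools.permutations(range(1, len(pts))),
           key=lambda p: tour_length(pts, (0,) + p))

print("optimal:", round(tour_length(pts, (0,) + best), 3))
print("greedy :", round(tour_length(pts, nearest_neighbour(pts)), 3))
```

On typical random instances the greedy tour is within a modest factor of the exact one, at a tiny fraction of the cost. That's the "good enough in practice" gap that worst-case bounds don't capture.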
That's not to discredit computational complexity theory at all. It's interesting, and I think proofs about the limits of the information processing required to solve computational problems have real philosophical value; the theory may well bear on the limits of intelligence. But the fact that some problems are intractable, in the sense of provably always finding correct or optimal answers, doesn't mean we're anywhere near the limits of intelligence or problem-solving ability in the fuzzy territory of finding practically useful solutions to lots of real-world cases.