Myself, I think the probability of a human-level-capable intelligence/intellect/reasoning AI is near 100% in the next decade or so, maybe two decades if things slow down. I'm talking about taking information from a wide range of sensors, being able to use it in short-term thinking, and then being able to reincorporate it into long-term memory the way humans do.
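To make that concrete, here's a toy sketch of the loop I mean (purely illustrative; the class and method names are mine, not any real system):

```python
# Toy sketch of the perceive -> short-term thinking -> long-term memory loop
# described above. Entirely illustrative, not an implementation of any real system.

class Agent:
    def __init__(self):
        self.working_memory = []    # short-term buffer of recent observations/thoughts
        self.long_term_memory = []  # consolidated experiences the agent can recall later

    def perceive(self, sensor_readings):
        # Take in information from a wide range of sensors.
        self.working_memory.extend(sensor_readings)

    def think(self):
        # Use short-term contents to decide what to do (stubbed out here).
        return f"act on {len(self.working_memory)} recent observations"

    def consolidate(self):
        # Reincorporate short-term experience into long-term memory, the way
        # humans do during sleep; here it is just an append and a buffer reset.
        self.long_term_memory.extend(self.working_memory)
        self.working_memory.clear()


agent = Agent()
agent.perceive(["camera frame", "audio clip", "temperature reading"])
print(agent.think())
agent.consolidate()
print(len(agent.long_term_memory), "items in long-term memory")
```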
So that gets us human-level AGI, but why would it be capped there? As far as I know, science hasn't come up with a theorem that says once you're as smart as a human you hit some limit and it doesn't get any better than that. So now you have to ask: by producing AGI in computer form, have we actually created an ASI? A machine with vast reasoning capability, but also sub-microsecond access to a vast array of data sources, for example every police camera. How many companies will let such an AI into their transaction systems for optimization? Will government-controlled AIs come with laws requiring that your data be accessible to and monitored by them? Already you can see how this can spiral into dystopia...
But that is not the limit. If an AI can learn and reason like a human, and humans can build and test smarter and smarter machines, why can't the AI? Don't think of an AI as just the software running on a chip somewhere; think of it as also including every peripheral it controls. If we can have an AI create another AI (hardware plus software), the idea of AI alignment is gone (and it's already pretty busted as it is).
Anyway, I've already written half a book here and haven't even touched on any number of the other arguments. Maybe reading something by Ray Kurzweil (pie in the sky, but interestingly we're following that trend) or Nick Bostrom would be a good place to start, just to make sure you haven't missed the arguments that are already out there.
Also, if you're a video watcher, check out Robert Miles' YouTube channel.
How is this supposed to work, as we approach the limit of how small transistors can get? Fully simulating a human brain would take a vast amount of computing power, far more than is necessary to train a large language model. Maybe we don't need to fully simulate a brain for human-level artificial intelligence, but even if a tenth of the brain's complexity is enough, that's still a giant, inaccessible amount of compute.
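To put very rough numbers on that, here's a minimal back-of-envelope sketch; every figure in it is a ballpark public estimate I'm assuming, not anything established in this thread:

```python
# Back-of-envelope estimate of the compute needed to simulate a brain in real time.
# Every number here is a rough, commonly cited public estimate, not a measurement;
# the answer swings by many orders of magnitude depending on how much biological
# detail you assume matters.

neurons = 86e9                # ~86 billion neurons
synapses_per_neuron = 1e4     # order-of-magnitude estimate (~10^14-10^15 synapses total)
mean_firing_rate_hz = 1.0     # assumed average rate; estimates range roughly 0.1-10 Hz
ops_per_synaptic_event = 10   # assumed cost per event for a simple point-neuron model

events_per_sec = neurons * synapses_per_neuron * mean_firing_rate_hz
ops_per_sec = events_per_sec * ops_per_synaptic_event

print(f"synaptic events/s: {events_per_sec:.1e}")   # ~8.6e14
print(f"ops/s, simple model: {ops_per_sec:.1e}")    # ~8.6e15

# A biophysically detailed simulation (compartmental neurons, ion channels,
# plasticity) is commonly estimated at 3-6 orders of magnitude above this,
# i.e. sustained throughput well beyond today's largest supercomputers, which
# is where the "far more than an LLM training run" intuition comes from.
```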
For general, reason-capable AI we'll need a fundamentally different approach to computing, and there's nothing out there that'll be production-ready in a decade.
I suspect we'll have plenty of intermediate time between those two steps, during which bad corporations will try to abuse the power of mediocre-to-powerful AI technology. They will overstep, ultimately forcing regulators to pay attention. At some point before it becomes too powerful to stop, it will be regulated.