How is this supposed to work as we reach the limit of how small transistors can get? Fully simulating a human brain would take a vast amount of computing power, far more than it takes to train a large language model. Maybe human-level artificial intelligence doesn't require simulating a whole brain, but even a tenth of a brain is still a giant, inaccessible amount of compute.
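To put rough numbers on that gap, here's a back-of-envelope sketch. Every constant in it is an assumed, commonly cited order-of-magnitude figure (synapse count, integration timestep, cost per synaptic update, frontier-training FLOPs), not a measurement, and estimates for detailed biophysical simulation vary by several orders of magnitude:

```python
# Back-of-envelope: compute needed to simulate a brain in real time vs. the
# average throughput of a frontier LLM training run. Every constant below is
# an assumed order-of-magnitude estimate, not a measurement.

SYNAPSES = 1e14               # commonly cited range: 1e14-1e15 synapses
STEPS_PER_SEC = 1e4           # 0.1 ms integration step for a biophysical model
FLOPS_PER_SYNAPSE_STEP = 100  # assumed cost per synapse per step; detailed
                              # models vary by orders of magnitude here

brain_flops_per_sec = SYNAPSES * STEPS_PER_SEC * FLOPS_PER_SYNAPSE_STEP

# Frontier LLM training runs are often estimated at around 1e25 FLOPs total,
# spread over roughly three months of wall-clock time.
LLM_TRAIN_FLOPS = 1e25
TRAIN_SECONDS = 90 * 24 * 3600

llm_flops_per_sec = LLM_TRAIN_FLOPS / TRAIN_SECONDS

print(f"brain sim (sustained):  ~{brain_flops_per_sec:.0e} FLOP/s")      # ~1e+20
print(f"LLM training (average): ~{llm_flops_per_sec:.0e} FLOP/s")        # ~1e+18
print(f"tenth of a brain:       ~{brain_flops_per_sec / 10:.0e} FLOP/s") # ~1e+19
```

Under these assumptions, real-time simulation demands sustained throughput roughly a hundred times the average throughput of a frontier training run, and even a tenth of a brain still sits an order of magnitude above it.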
For general, reason-capable AI, we'll need a fundamentally different approach to computing, and nothing out there will be production-ready within a decade.