1. Can the human brain be simulated?
2. Can such a simulation recursively self-improve on such a rapid timescale that it becomes so intelligent we can't control it?
What we have in contemporary LLMs is something that appears to approximate the behavior of a small part of the brain, with some major differences that force us to re-evaluate our very definition of intelligence. So maybe you could argue the brain is already being simulated, for some sufficiently broad definition of simulation.
But there's no sign of any recursive self-improvement, no sign of LLMs gaining agency and self-directed goals, and not even a plan for how to get there. That remains hypothetical sci-fi. Whilst there are experiments at the edges with using AI to improve AI, like RLHF and Constitutional AI, these are neither recursive nor about upgrading mental abilities. They're about upgrading control instead, and in fact RLHF appears to degrade models' mental abilities!
So what fools like Hinton are talking about isn't even on the radar right now. The gap between where we are today and a Singularity is just as big as it always was. GPT-4 is not only incapable of taking over the world for multiple fundamental reasons, it's incapable of even wanting to.
Yet this nonsense scenario is proving nearly impossible to kill with basic facts like those outlined above. Close inspection reveals belief in the Singularity to be unfalsifiable, and thus ultimately religious; indeed, it's suspiciously similar to the Christian Second Coming. Literally any practical objection to the idea can be answered with some variant of "because this AI will be so intelligent it will be unknowable and all-powerful". You can no more meaningfully debate the existence of such an entity than you can debate the existence of God.