86 billion neurons, 100 trillion connections, each connection modulated by dozens of different neurotransmitters, action potential levels, and uncounted timing sequences (and that's just what I remember off the top of my head from undergrad neuroscience courses decades ago).
It hasn't even been done for a single pair of neurons, because not all the variables are yet understood. All the neural nets use only the most oversimplified version of what a neuron does: essentially a binary fire/don't-fire algorithm with training-adjusted weights.
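For comparison, the entire model of a "neuron" in a typical artificial neural net is roughly the following sketch (a sigmoid unit shown for concreteness; the plain fire/don't-fire version just thresholds the weighted sum instead):

    import math

    def artificial_neuron(inputs, weights, bias):
        """The oversimplified 'neuron' used in neural nets: a weighted sum of
        inputs pushed through an activation function. Only the weights and
        bias are adjusted during training."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

    # Example: one neuron with three inputs
    print(artificial_neuron([0.2, 0.9, 0.1], [1.5, -0.7, 0.3], bias=0.05))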
Even assuming all the neurotransmitters, action potentials, timing sequences, and internal biochemistry of each neuron type (and all the neuron-supporting cells) were understood and simulatable: if every one of the roughly 250 million GPUs shipped in 2024 [0] were dedicated to simulating a single neuron with all its connections, neurotransmitters, and timings, it would take about 344 years of shipments at that rate to accumulate the 86 billion GPUs needed to simulate one brain.
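A quick back-of-the-envelope check of that figure (one GPU per neuron is, of course, a wildly generous assumption):

    neurons = 86e9           # neurons in a human brain
    gpus_per_year = 250e6    # roughly the GPUs shipped in 2024 [0]
    print(neurons / gpus_per_year)   # -> 344.0 years of shipments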
If the average connection between neurons were one foot long, simulating 100 trillion connections would mean about 18 billion miles of wire. Even if the average connection were only 0.3 mm, that's still about 18 million miles of wire.
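Checking those wire-length numbers the same way:

    connections = 100e12                   # synapses, roughly
    mile_m = 1609.34                       # meters per mile
    print(connections * 0.3048 / mile_m)   # 1 ft average   -> ~1.9e10 miles (~18.9 billion)
    print(connections * 0.0003 / mile_m)   # 0.3 mm average -> ~1.9e7 miles (~18.6 million)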
I'm not even going to bother back-of-the-envelope calculating the power to run all that.
The point is it is not even close to happening until we achieve many orders of magnitude greater computation density.
Will many useful things be achieved before that level of integration? Absolutely; even these oversimplified neural nets are producing useful things.
But just as we can conceptually imagine faster-than-light travel, we can imagine full-fidelity human brain simulation (which is not the same as good-enough-to-be-useful or good-enough-to-fool-many-people); it is perhaps only a bit closer to reality.
[0] https://www.tomshardware.com/tech-industry/more-than-251-mil...
Some intuition:
1. If the universe contains an uncomputable thing, then you could use it to build a super-Turing computer (see the sketch after this list). That would only make CS more interesting.
2. If the universe extends beyond the observable universe, is infinite, exists in its entirety on some level, and all moves forward in some way (not necessarily time, which is uneven), then it contains an infinite amount of information, which can never all be stepped forward at once, so it is not computable. The paper itself touches on this, requiring that time not break down. Then again, it may be that the universe does not "step" infinitely much information at once.
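For point 1, a minimal sketch of what "using" an uncomputable physical phenomenon might look like; physical_halting_oracle is purely hypothetical, since no known physics provides anything of the sort:

    def physical_halting_oracle(program_source: str) -> bool:
        """Hypothetical: consult some uncomputable physical process to learn
        whether the given program halts. No Turing machine can compute this."""
        raise NotImplementedError("no known physical process answers this")

    def halts_on_first_n_inputs(program_source: str, n: int) -> bool:
        """With such an oracle, an ordinary computer could decide questions
        that are undecidable for any Turing machine on its own."""
        return all(
            physical_halting_oracle(program_source + f"\nmain({k})")
            for k in range(n)
        )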
One quick aside: this paper's proof uses model theory. I stumbled upon this subfield of mathematics a few weeks ago, and I deeply regret not learning about it during my time studying formal systems/type theory. If you're interested in CS or math, make sure you know the compactness theorem.
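For reference, the standard statement of the compactness theorem for first-order logic:

    % Compactness theorem (first-order logic)
    A set $\Sigma$ of first-order sentences has a model
    if and only if every finite subset of $\Sigma$ has a model.
    Equivalently, $\Sigma \models \varphi$ implies that
    $\Sigma_0 \models \varphi$ for some finite $\Sigma_0 \subseteq \Sigma$.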
Paper direct:
https://jhap.du.ac.ir/article_488.html
I enjoyed some commentary here:
https://www.reddit.com/r/badmathematics/comments/1om3u47/pub...
See also:
https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...
On the other hand, looking at the state of the world, some may have their doubts.