The approach was tremendously simple and totally naive, but it was still interesting. At the time, a supercomputer could simulate the full brain of a flatworm. We then simply applied a Moore's-law-esque approach: assume simulation capacity doubles every 1.5-2 years (I forget the exact period we used), and map out the date at which we'd have the capacity to simulate each successively larger animal brain. We showed years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
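For what it's worth, the whole exercise boils down to a few lines of arithmetic. Here's a minimal sketch of that kind of extrapolation; the neuron counts, the doubling period, the baseline year, and the use of neuron count as a proxy for simulation cost are all illustrative assumptions of mine, not the figures we actually used (so don't expect it to reproduce 2047):

```python
import math

# A minimal sketch of the extrapolation described above.
# Every number here is an illustrative assumption, not the original input.
DOUBLING_PERIOD_YEARS = 1.75   # assumed; somewhere in the 1.5-2 year range
BASELINE_YEAR = 2014           # assumed year a supercomputer could handle a flatworm

# Very rough neuron counts, used as a crude proxy for simulation cost.
NEURON_COUNTS = {
    "flatworm":    1e4,
    "field mouse": 7e7,
    "corvid":      1.5e9,
    "chimp":       2.8e10,
    "human":       8.6e10,
}

baseline = NEURON_COUNTS["flatworm"]

for species, n in NEURON_COUNTS.items():
    # How many capacity doublings separate this brain from the flatworm baseline?
    doublings = math.log2(n / baseline)
    year = BASELINE_YEAR + doublings * DOUBLING_PERIOD_YEARS
    print(f"{species:>11}: ~{year:.0f}")
```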
There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.
We can't even simulate all of the chemical processes inside a single cell. We don't even know what all of those processes are, and we don't know the function of most proteins.
I suspect that if you just want an automaton that provides the utility of a human brain, we'll be fine using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has certainly moved the needle in that direction, although there's enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. But even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.
We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.