I suspect that if you just want an automaton that provides the utility of a human brain, we'll be fine using statistical approximations of what we see biological neurons doing. The utility of LLMs so far has moved the needle in that direction, although there's still enough we don't know about cognition that we could hit a surprise brick wall when we start trying to build GPT-6 or whatever. Even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.
We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.
Just off the top of my head, in my lifetime I have seen the discovery of new neuropeptides/neurotransmitters such as orexin, the beginnings of an understanding of glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I may be only a layman observing, but I have a hard time believing anyone could think we've made almost no progress.