The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then simply applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used), and mapped out which animals we'd have the capability to simulate by each date. We showed years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
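The extrapolation above can be sketched in a few lines. Everything here is illustrative: the neuron counts are rough public estimates, and the baseline year and doubling period are placeholders, not the figures we actually used (so don't expect it to reproduce 2047).

```python
import math

# Rough, illustrative neuron counts (not the original figures).
NEURONS = {
    "worm": 3e2,         # C. elegans-scale nervous system
    "field mouse": 7e7,
    "corvid": 1e9,
    "chimp": 3e10,
    "human": 9e10,
}

def year_simulable(target_neurons, base_neurons=3e2,
                   base_year=2013, doubling_years=2.0):
    """Year when simulation capacity, doubling every `doubling_years`,
    first reaches `target_neurons`, starting from a machine that can
    simulate `base_neurons` neurons in `base_year`."""
    doublings = math.log2(target_neurons / base_neurons)
    return base_year + doublings * doubling_years

for animal, count in NEURONS.items():
    print(f"{animal}: ~{year_simulable(count):.0f}")
```

The whole model is one `log2` and a multiply, which is exactly why it's so easy to do and so easy to get wrong: every objection to the method lives outside the math, in the assumption that "neurons simulated" is the right unit at all.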
There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.
It seems clear at this point that although computers can be made to model physical systems to a great degree, this is not the area where they naturally excel. Think of modeling the temperature of a room: you could try to recreate a physically accurate simulation of every particle and its velocity, then write better software and buy ever more powerful, specialized hardware to model bigger and bigger rooms. But all that effort buys you a single number you could have gotten from a statistical description in the first place.
Just as thermodynamics makes more sense to model statistically than particle by particle, I think intelligence is not best modeled at the synapse layer.
I think the much more interesting question is what would the equivalent of a worm brain be for a digital intelligence?