This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of 2,632 synapses per neuron. The adult human brain has 100 (±20) billion, or 1e11, neurons, so assuming that synapse-to-neuron ratio holds, that's 2.6e14 total synapses.
Assuming 1 parameter per synapse, that'd make the minimum viable model roughly 150 times larger than state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters). I don't think that's granular enough, though: assume 10-100 ion channels per synapse and at least 10 parameters per ion channel, and the number lands closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.
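The napkin math above can be sketched in a few lines of Python. The GPT-4 parameter count is the rumored figure from the post, not a confirmed number:

```python
# Scale the sampled synapse density up to a whole human brain.
synapses = 150e6        # synapses in the sampled tissue
neurons = 57_000        # neurons in the same sample
syn_per_neuron = synapses / neurons               # ~2,632

human_neurons = 1e11                              # ~100 (±20) billion
total_synapses = syn_per_neuron * human_neurons   # ~2.6e14

gpt4_params = 1.8e12    # rumored GPT-4 size; unverified assumption
print(f"{syn_per_neuron:.0f} synapses/neuron")
print(f"{total_synapses:.1e} synapses, "
      f"~{total_synapses / gpt4_params:.0f}x the rumored GPT-4")

# Finer granularity: 10-100 ion channels per synapse, ~10 params each
low = total_synapses * 10 * 10     # ~2.6e16
high = total_synapses * 100 * 10   # ~2.6e17
print(f"with ion channels: {low:.1e} to {high:.1e} parameters")
```

The per-synapse and per-ion-channel parameter counts are guesses, so treat the final range as order-of-magnitude only.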
There are other problems, of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: >>38919548
We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.
Currently, TSMC has a 3nm chip. Let's halve that until we hit the atomic radius of silicon, 0.132nm. That's not a great cutoff because it ignores crystal lattice spacing, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we get more than about 3 more generations out of silicon. That's a max of 4.5 years of Moore's law left to squeeze out, which means we will not make it past 2030 with this kind of improvement.
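The shrink sequence above can be generated mechanically. Note the "3nm" node name is a marketing label rather than a literal feature size, and the ~1.5 years per halving is my own rough cadence assumption:

```python
# Halve the quoted node size until the next halving would dip below
# the atomic radius of silicon (a loose lower bound, as noted above).
feature = 3.0        # nm, TSMC's current node (marketing label)
si_radius = 0.132    # nm, atomic radius of silicon

steps = []
while feature / 2 > si_radius:
    feature /= 2
    steps.append(feature)
print(steps)  # [1.5, 0.75, 0.375, 0.1875]

# The last step is barely above one atomic radius, so ~3 realistic
# halvings remain; at ~1.5 years each, roughly 4.5 years of shrinks.
```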
I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.