The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used) and mapped out which animal brains we'd have the capacity to simulate by each date. We showed years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
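The back-of-the-envelope version looks something like the sketch below. To be clear, the start year, doubling period, and per-animal compute ratios in it are placeholders picked only to illustrate the method, not the figures we actually used.

    # A rough sketch of that kind of extrapolation (NOT the original model).
    # Start year, doubling period, and per-animal compute ratios are
    # placeholders chosen to illustrate the method, not real estimates.
    import math

    START_YEAR = 2012        # assumed: when a supercomputer could do a flatworm
    DOUBLING_YEARS = 1.5     # assumed doubling period for simulation capacity
    FLATWORM = 1.0           # normalize: flatworm brain = 1 unit of compute

    targets = {              # hypothetical compute needed, relative to flatworm
        "field mouse": 1e3,
        "corvid":      1e4,
        "chimp":       1e6,
        "human":       1e7,
    }

    for animal, compute in targets.items():
        doublings = math.log2(compute / FLATWORM)
        print(f"{animal:12s} ~{START_YEAR + doublings * DOUBLING_YEARS:.0f}")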
There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.
This paper might be helpful for understanding the nervous system in particular:
https://royalsocietypublishing.org/doi/10.1098/rstb.2017.037...
We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.
log(10^17/10^12)/log(2) = 16.61, so assuming 1.5 years per doubling, that'll be another 24.9 years - December 2048 - before 8x X100s can simulate the human brain.
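The same arithmetic in Python, using the 10^12 and 10^17 capacity figures assumed above:

    import math

    doublings = math.log(1e17 / 1e12) / math.log(2)   # ~16.61 doublings needed
    years = doublings * 1.5                           # at 1.5 years per doubling
    print(round(doublings, 2), round(years, 1))       # 16.61 24.9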
And "brain in a jar" is different from "AGI"
Brain > Cell > Molecules (DNA and otherwise) > Atoms > Sub-atomic particles...
Potentially dumb question, but how deeply do we need to understand the underlying components to simulate a flatworm brain?
There may be (almost certainly is) a more optimized way a general intelligence could be implemented, but we can't confidently say what that requires.
You might be right, but this is the kind of hubris that is often embarrassing in hindsight. Like when Aristotle thought the brain was a radiator.
That's because we define "general intelligence" circularly as "something the human brain does."
I suspect if you just want an automaton that provides the utility of a human brain, we'll be fine just using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has moved the needle in that direction for sure, although there's enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. But even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.
Citation needed?
As long as it's modern scientific evidence and not a 2,300 year old anecdote, of course.
So it is not unreasonable to expect I can have an Ana de Armas AI in 2049?
I hope you AI people are better than the flying car people.
We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.
I think current AI research has shown that simply representing a brain as a neural network (e.g. fully connected, simple neurons) is not sufficient for AGI.
It seems clear at this point that although computers can be made to model physical systems to a great degree, this is not the area where they naturally excel. Think of modeling the temperature of a room: you could try to recreate a physically accurate simulation of every particle and its velocity. You could then write better software to model the particles on ever more powerful and specialized hardware, simulating bigger and bigger rooms.
Just as thermodynamics makes more sense to model statistically than particle by particle, I think intelligence is not best modeled at the level of individual synapses.
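As a toy illustration of the thermodynamics analogy (the numbers are made up and the physics is cartoonish): the same "temperature" proxy can be computed by tracking every particle, or read straight off the distribution's parameters.

    # Toy contrast: per-particle simulation vs. a statistical model.
    # Speeds and constants are made up; this is not real kinetic theory.
    import random

    random.seed(0)
    N = 100_000
    MU, SIGMA = 500.0, 100.0    # assumed particle speed distribution (m/s)

    # (a) Particle-level: track every particle, then average over all of them.
    speeds = [random.gauss(MU, SIGMA) for _ in range(N)]
    temp_proxy_particles = sum(v * v for v in speeds) / N

    # (b) Statistical: the same quantity falls out of the distribution's
    # parameters directly, with no per-particle bookkeeping.
    temp_proxy_stats = MU**2 + SIGMA**2

    print(temp_proxy_particles)   # ~260000
    print(temp_proxy_stats)       # 260000.0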
I think the much more interesting question is what would the equivalent of a worm brain be for a digital intelligence?
> Neurons do not work alone. Instead, they depend heavily on non-neuronal or “glia” cells for many important services including access to nutrition and oxygen, waste clearance, and regulation of the ions such as calcium that help them build up or disperse electric charge.
That's exactly what homeostasis is, but we don't simulate astrocyte mitochondria to understand what effect they have on another neuron's activation. They are independent. Otherwise, biochemistry wouldn't function at all.
> they showed in live, behaving animals that they could enhance the response of visual cortex neurons to visual stimulation by directly controlling the activity of astrocytes.
Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.
I was only referring to the internal processes of a cell. We don't need to simulate 90+% of the biochemical processes in a neuron to get an accurate simulation of that neuron - if we did, it'd pretty much fuck up our understanding of every other cell, because most cells share the same metabolic machinery.
The characteristics of the larger network and which cells are involved are open questions in neuroscience, and largely intractable ones at this time.
Just off the top of my head, in my lifetime, I have seen discoveries regarding new neuropeptides/neurotransmitters such as orexin, starting to understand glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I might only be a layman observing, but I have a hard time believing anyone can think we've made almost no progress.