
1. jjcm+ne[view] [source] 2024-01-08 22:32:47
>>treebr+(OP)
A really simple approach we took for predicting when AGI would land, back when I was working on a research team at Microsoft, was estimating at what point we could run a full simulation of all of the chemical processes and synapses inside a human brain.

The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity could double every 1.5-2 years (I forget the exact period we used), and mapped out which animals we'd have the capability to simulate by each date. We showed years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
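Roughly, the arithmetic looks like the sketch below (a minimal Python version using rough public neuron counts, plus a guessed baseline year and doubling period; these are illustrative assumptions, not the actual figures we used):

    # Back-of-envelope extrapolation: how many capacity doublings separate
    # a flatworm simulation from a human one, and when do we get there?
    # Neuron counts are rough public figures; BASELINE_YEAR and
    # DOUBLING_PERIOD_YEARS are illustrative assumptions.
    import math

    NEURONS = {
        "flatworm (C. elegans)": 302,
        "field mouse": 7.1e7,
        "corvid": 2.2e9,
        "chimp": 2.8e10,
        "human": 8.6e10,
    }

    BASELINE_YEAR = 2005          # assumed year of the flatworm simulation
    DOUBLING_PERIOD_YEARS = 1.5   # Moore's-law-esque assumption

    baseline = NEURONS["flatworm (C. elegans)"]
    for animal, neurons in NEURONS.items():
        doublings = math.log2(neurons / baseline)
        year = BASELINE_YEAR + doublings * DOUBLING_PERIOD_YEARS
        print(f"{animal}: ~{year:.0f}")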

There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.

2. shpong+Lh[view] [source] 2024-01-08 22:48:41
>>jjcm+ne
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.

We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

3. consum+pj[view] [source] 2024-01-08 22:55:38
>>shpong+Lh
> We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

Brain > Cell > Molecules (DNA and otherwise) > Atoms > Sub-atomic particles...

Potentially dumb question, but how deeply do we need to understand the underlying components to simulate a flatworm brain?

4. shpong+7n[view] [source] 2024-01-08 23:13:21
>>consum+pj
Who knows! I'm sure it depends on how accurately you want to simulate a flatworm brain.

I think current AI research has shown that simply representing a brain as a neural network (e.g. fully connected, simple neurons) is not sufficient for AGI.
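By "simple neurons" here I mean roughly the point-neuron abstraction sketched below (a toy Python example, not any particular model): one weighted sum and a nonlinearity, with none of the cell chemistry the parent comments are talking about.

    import math

    def simple_neuron(inputs, weights, bias):
        # Point-neuron abstraction: everything a real neuron does is
        # collapsed into a dot product plus an activation function.
        return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

    # Toy usage with made-up values
    print(simple_neuron([0.2, -0.5, 1.0], [0.8, 0.1, -0.3], 0.05))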
