
[return to "Thousands of AI Authors on the Future of AI"]
1. jjcm+ne 2024-01-08 22:32:47
>>treebr+(OP)
A really simple approach we took to predicting when AGI would land, back when I was on a research team at Microsoft, was estimating at what point we could run a full simulation of all of the chemical processes and synapses inside a human brain.

The approach was tremendously simple and totally naive, but it was still interesting. At the time, a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used) and mapped out which animals' brains we'd have the capacity to simulate at each date. We marked years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.
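For concreteness, here's a back-of-the-envelope sketch of that kind of extrapolation. The neuron counts are rough published figures; the baseline year, the doubling period, and the choice of neuron count as the cost proxy are illustrative assumptions (not our actual parameters), though with these particular guesses the human line happens to land near 2047:

    # Naive Moore's-law extrapolation: assume simulation cost scales with
    # neuron count and that simulation capacity doubles on a fixed schedule.
    # Baseline year and doubling period are assumptions for illustration.
    import math

    BASELINE_NEURONS = 302   # assumed worm-scale baseline (C. elegans has 302)
    BASELINE_YEAR = 2005     # assumed year of the worm-scale milestone
    DOUBLING_YEARS = 1.5     # low end of the 1.5-2 year range above

    targets = {
        "field mouse": 71_000_000,
        "corvid": 1_500_000_000,     # rough figure for a large corvid
        "chimp": 28_000_000_000,
        "human": 86_000_000_000,
    }

    for animal, neurons in targets.items():
        # Number of capacity doublings needed to scale up from the baseline.
        doublings = math.log2(neurons / BASELINE_NEURONS)
        year = BASELINE_YEAR + doublings * DOUBLING_YEARS
        print(f"{animal:>11}: ~{year:.0f}")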

There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.

2. shpong+Lh 2024-01-08 22:48:41
>>jjcm+ne
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.

We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

3. gary_0+Wk 2024-01-08 23:02:32
>>shpong+Lh
It depends on what kind of simulation you're trying to run, though. You don't need to perfectly model the physically moving heads and magnetic oscillations of a hard drive to emulate an old PC; it may be enough to just store the bytes.

I suspect that if you just want an automaton that provides the utility of a human brain, we'll be fine using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has moved the needle in that direction for sure, although there's enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. Even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
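As a concrete illustration of that kind of statistical approximation, here's the textbook leaky integrate-and-fire model, which collapses all of a neuron's chemistry into a single voltage variable, much like emulating a disk by just storing the bytes. This is a generic sketch with standard textbook parameters, not anyone's actual brain-simulation code:

    # Leaky integrate-and-fire (LIF) neuron: a phenomenological stand-in for
    # a biological neuron. The membrane voltage leaks toward rest, integrates
    # input current, and emits a spike whenever it crosses a threshold.
    def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                     v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Forward-Euler step: leak toward rest plus driven input.
            v += (-(v - v_rest) + resistance * i_in) * dt / tau
            if v >= v_thresh:            # threshold crossed: spike, then reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # A constant 2 nA drive for 100 ms produces a regular spike train.
    spikes = simulate_lif([2e-9] * 1000)
    print(f"{len(spikes)} spikes, first at {spikes[0] * 1000:.1f} ms")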

On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.

4. dmd+4m 2024-01-08 23:08:22
>>gary_0+Wk
> based on what we see biological neurons doing

We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.

5. logtem+Fv 2024-01-08 23:59:26
>>dmd+4m
The field has made a big step forward: imaging is more powerful now, and some people are starting to grow organoids made of neurons. There's still a lot to learn, but as soon as we can get good data, AI will step in and digest it, I guess.