zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. jjcm+ne[view] [source] 2024-01-08 22:32:47
>>treebr+(OP)
A really simple approach we took, while I was on a research team at Microsoft working on predicting when AGI would land, was estimating at what point we could run a full simulation of all of the chemical processes and synapses inside a human brain.

The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used) and mapped out which animals we'd have the capacity to simulate by each date: a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on for a human was 2047.
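
For fun, here's roughly what that back-of-the-envelope extrapolation looks like in Python. The neuron counts are public ballpark figures, and the base year and doubling period are illustrative stand-ins (I don't remember our exact inputs), so it won't land exactly on 2047:

    import math

    # Capacity proxy: neuron count (a huge simplification that ignores
    # synapse counts and chemistry entirely). Ballpark public figures.
    NEURONS = {
        "flatworm (C. elegans)": 302,
        "field mouse": 7.1e7,
        "corvid": 1.2e9,
        "chimp": 2.8e10,
        "human": 8.6e10,
    }

    BASE_YEAR = 2014        # assumed year a flatworm-scale simulation was feasible
    DOUBLING_YEARS = 1.75   # assumed doubling period for simulation capacity
    baseline = NEURONS["flatworm (C. elegans)"]

    for animal, count in NEURONS.items():
        doublings = math.log2(count / baseline)
        print(f"{animal}: ~{BASE_YEAR + doublings * DOUBLING_YEARS:.0f}")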

There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.

◧◩
2. shpong+Lh[view] [source] 2024-01-08 22:48:41
>>jjcm+ne
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.

We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

◧◩◪
3. minroo+Oi[view] [source] 2024-01-08 22:52:35
>>shpong+Lh
Does AGI even need to be brain-like?
◧◩◪◨
4. dragon+Uj[view] [source] 2024-01-08 22:58:22
>>minroo+Oi
The human brain is the only thing we can conclusively say runs a general intelligence, so it's the level of complexity at which we can confidently say it's just a software/architecture problem.

There may be (almost certainly is) a more optimized way a general intelligence could be implemented, but we can't confidently say what that would require.

◧◩◪◨⬒
5. glial+pk[view] [source] 2024-01-08 23:00:15
>>dragon+Uj
> The human brain is the only thing we can conclusively say does run a general intelligence

That's because we define "general intelligence" circularly as "something the human brain does."

◧◩◪◨⬒⬓
6. Jensso+BD[view] [source] 2024-01-09 00:58:31
>>glial+pk
If something else could replace humans at intellectual tasks, we would say it is generally intelligent as well. Currently there is no such thing; we still need humans to perform intellectual tasks.
◧◩◪◨⬒⬓⬔
7. glial+lY[view] [source] 2024-01-09 03:55:19
>>Jensso+BD
The definition of an 'intellectual task' used to mean 'abstract from experience' (Aristotle) or 'do symbolic processing' (Leibniz). Computers can now do these things: they can integrate better than Feynman, distinguish 'cat' from 'dog' pictures by looking at examples, and outscore most students on the MCAT and LSAT, not to mention perform billions of calculations per second. And we have moved the goalposts accordingly.