zlacker

[parent] [thread] 35 comments
1. jjcm+(OP)[view] [source] 2024-01-08 22:32:47
A really simple approach we took while I was working on a research team at Microsoft, trying to predict when AGI would land, was to estimate at what point we could run a full simulation of all of the chemical processes and synapses inside a human brain.

The approach was tremendously simple and totally naive, but it was still interesting. At the time a supercomputer could simulate the full brain of a flatworm. We then applied a Moore's-law-esque assumption that simulation capacity doubles every 1.5-2 years (I forget the exact period we used), and mapped out which animals we'd have the capability to simulate by each date. We showed years for a field mouse, a corvid, a chimp, and eventually a human brain. The date we landed on was 2047.

There are so many things wrong with that approach I can't even count, but I'd be kinda smitten if it ended up being correct.
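
If anyone wants to play with the shape of that extrapolation, here's a rough Python sketch. The neuron counts, the baseline year, and the doubling period are stand-in guesses of mine rather than the figures we actually used, so don't expect it to reproduce 2047 exactly:

    import math

    # Approximate neuron counts per animal (stand-in values, not the original figures)
    neurons = {
        "flatworm (C. elegans)": 302,
        "field mouse": 7.1e7,
        "corvid": 2.2e9,
        "chimp": 2.8e10,
        "human": 8.6e10,
    }

    base_year = 2014           # assume a supercomputer handled the flatworm around then
    years_per_doubling = 1.75  # midpoint of the 1.5-2 year range

    for animal, count in neurons.items():
        doublings_needed = math.log2(count / neurons["flatworm (C. elegans)"])
        print(f"{animal}: ~{base_year + doublings_needed * years_per_doubling:.0f}")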

replies(7): >>tayo42+f1 >>Keloo+L2 >>shpong+o3 >>throwu+H4 >>davegu+J6 >>Shamel+R6 >>realit+49
2. tayo42+f1[view] [source] 2024-01-08 22:38:12
>>jjcm+(OP)
Is there something to read about simulating a worm brain? Neurons aren't just simply on and off - they grow and adapt physically along with their chemical signals. Curious how a computer accounts for all of that.
replies(1): >>jncrat+H1
3. jncrat+H1[view] [source] [discussion] 2024-01-08 22:40:17
>>tayo42+f1
You might be interested in OpenWorm:

https://openworm.org/

This paper might be helpful for understanding the nervous system in particular:

https://royalsocietypublishing.org/doi/10.1098/rstb.2017.037...

4. Keloo+L2[view] [source] 2024-01-08 22:45:56
>>jjcm+(OP)
How does that prediction compare to actual progress so far? Has progress been faster or slower?

Any links to read?

5. shpong+o3[view] [source] 2024-01-08 22:48:41
>>jjcm+(OP)
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.

We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

replies(5): >>minroo+r4 >>consum+25 >>throwu+q5 >>gary_0+z6 >>jjcm+E6
6. minroo+r4[view] [source] [discussion] 2024-01-08 22:52:35
>>shpong+o3
Does AGI need to be brain-like?
replies(2): >>parl_m+05 >>dragon+x5
7. throwu+H4[view] [source] 2024-01-08 22:53:41
>>jjcm+(OP)
Another approach: the adult human brain has 100 (+/- 20) billion, or 10^11, neurons. Each neuron has 10^3 synapses and each synapse has 10^2 ion channels, which amounts to 10^16 total channels. Assuming 10 parameters is enough to represent each channel (unlikely), that's about 10^17 (100 quadrillion) total parameters. Compare that to GPT-4, which is rumored to be about 1.7*10^12 parameters on 8x 80GB A100s.

log(10^17/10^12)/log(2) = 16.61 so assuming 1.5 years per doubling, that'll be another 24.9 years - December, 2048 - before 8x X100s can simulate the human brain.
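
Or, as a couple of lines of Python (GPT-4 rounded down to 10^12 parameters to match the ratio above; the 1.5-year doubling period is the same assumption):

    import math

    brain_params = 1e11 * 1e3 * 1e2 * 10  # neurons * synapses * channels * params/channel = 1e17
    gpt4_params = 1e12                     # ~1.7e12, rounded down as above

    doublings = math.log2(brain_params / gpt4_params)  # ~16.6
    print(2024 + doublings * 1.5)                       # ~2048.9, i.e. late 2048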

replies(2): >>Whitne+z5 >>redavn+r7
8. parl_m+05[view] [source] [discussion] 2024-01-08 22:55:36
>>minroo+r4
No, but the simple approach here was "full simulation".

And "brain in a jar" is different from "AGI"

9. consum+25[view] [source] [discussion] 2024-01-08 22:55:38
>>shpong+o3
> We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

Brain > Cell > Molecules(DNA and otherwise) > Atoms > Sub-atomic particles...

Potentially dumb question, but how deeply do we need to understand the underlying components to simulate a flatworm brain?

replies(2): >>BlarfM+n6 >>shpong+K8
10. throwu+q5[view] [source] [discussion] 2024-01-08 22:57:30
>>shpong+o3
The vast majority of the chemical processes in a single cell are concerned with maintaining homeostasis for that cell - just keeping it alive, well fed with ATP, and repairing the cell membrane. We don't need to simulate them.
replies(1): >>glial+O5
11. dragon+x5[view] [source] [discussion] 2024-01-08 22:58:22
>>minroo+r4
The human brain is the only thing we can conclusively say runs a general intelligence, so it's the level of complexity at which we can say confidently that it's just a software/architecture problem.

There may be (almost certainly is) a more optimized way a general intelligence could be implemented, but we can't confidently say what that requires.

replies(1): >>glial+26
12. Whitne+z5[view] [source] [discussion] 2024-01-08 22:58:28
>>throwu+H4
And then how long until it runs on 20 watts of power? ;)
13. glial+O5[view] [source] [discussion] 2024-01-08 22:59:30
>>throwu+q5
> We don't need to simulate them.

You might be right, but this is the kind of hubris that is often embarrassing in hindsight. Like when Aristotle thought the brain was a radiator.

replies(1): >>throwu+m7
14. glial+26[view] [source] [discussion] 2024-01-08 23:00:15
>>dragon+x5
> The human brain is the only thing we can conclusively say does run a general intelligence

That's because we define "general intelligence" circularly as "something the human brain does."

replies(1): >>Jensso+ep
15. BlarfM+n6[view] [source] [discussion] 2024-01-08 23:01:38
>>consum+25
While I believe there are some biological processes that rely on quantum entanglement and such, they haven't been found in the brain. So likely somewhere just above the molecule level (chemical gradients and diffusion timings in cells certainly have an effect).
16. gary_0+z6[view] [source] [discussion] 2024-01-08 23:02:32
>>shpong+o3
It depends on what kind of simulation you're trying to run, though. You don't need to perfectly model the physically moving heads and magnetic oscillations of a hard drive to emulate an old PC; it may be enough to just store the bytes.

I suspect if you just want an automaton that provides the utility of a human brain, we'll be fine just using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has moved the needle in that direction for sure, although there's still enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. But even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).

On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.

replies(1): >>dmd+H7
17. jjcm+E6[view] [source] [discussion] 2024-01-08 23:02:47
>>shpong+o3
I very likely was incorrect about the chemical processes, so thank you for clarifying. This is me remembering work from a decade ago, so I'm almost certainly wrong about some of the details.
replies(1): >>shpong+l8
18. davegu+J6[view] [source] 2024-01-08 23:03:36
>>jjcm+(OP)
I assume simulation capacity takes into account the data bandwidth of the processing systems. It seems we are always an order of magnitude or two behind in bytes/words per second to feed simulations compared to raw flops. When you consider there are multiple orders of magnitude more synapses between neurons than neurons (not to mention other cell types we are only beginning to understand), bandwidth limitations seem to put estimates about 10-15 years past computation estimates.

By my napkin math, accounting for bandwidth limitations, we will get single-human-intelligence hardware capabilities around 2053-2063. Whether or not we've figured out the algorithms by then is anyone's guess. Maybe algorithm advances will reduce hardware needs, but I doubt it, because the computational complexity of hard problems is often a matter of getting all the bits to the processor to perform all the comparisons necessary. However, the massive parallelism of the brain is a point of optimism.
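
A rough sketch of that napkin math, borrowing the 10^17-parameter target and the ~10^12 current baseline from upthread; the 1.5-year doubling period and the "order of magnitude or two" bandwidth lag are loose assumptions on my part:

    import math

    target_params = 1e17   # human-brain parameter estimate from upthread
    compute_now = 1e12     # roughly what today's flops can feed, per upthread
    years_per_doubling = 1.5

    compute_date = 2024 + math.log2(target_params / compute_now) * years_per_doubling
    print(f"compute-limited: ~{compute_date:.0f}")

    for lag in (10, 100):  # bandwidth trailing compute by 1-2 orders of magnitude
        bw_date = 2024 + math.log2(lag * target_params / compute_now) * years_per_doubling
        print(f"bandwidth-limited (lag {lag}x): ~{bw_date:.0f}")
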
19. Shamel+R6[view] [source] 2024-01-08 23:03:58
>>jjcm+(OP)
> At the time a supercomputer could simulate the full brain of a flatworm.

Citation needed?

replies(1): >>Shamel+th1
20. throwu+m7[view] [source] [discussion] 2024-01-08 23:06:44
>>glial+O5
If you have any evidence to the contrary, I would love to hear it because it would upend biology and modern medicine as we know it and we'd both win a Nobel prize.

As long as it's modern scientific evidence and not a 2,300 year old anecdote, of course.

replies(1): >>glial+nd
21. redavn+r7[view] [source] [discussion] 2024-01-08 23:07:04
>>throwu+H4
> December, 2048

So it is not unreasonable to expect I can have an Ana de Armas AI in 2049?

I hope you AI people are better than the flying car people.

22. dmd+H7[view] [source] [discussion] 2024-01-08 23:08:22
>>gary_0+z6
> based on what we see biological neurons doing

We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.

replies(2): >>logtem+ih >>gary_0+Yj
23. shpong+l8[view] [source] [discussion] 2024-01-08 23:11:19
>>jjcm+E6
It's still a great idea - I haven't heard anyone else suggest using such context for predicting AGI timelines.
24. shpong+K8[view] [source] [discussion] 2024-01-08 23:13:21
>>consum+25
Who knows! I'm sure it depends on how accurately you want to simulate a flatworm brain.

I think current AI research has shown that simply representing a brain as a neural network (e.g. fully connected, simple neurons) is not sufficient for AGI.

replies(1): >>mewpme+vv
25. realit+49[view] [source] 2024-01-08 23:14:54
>>jjcm+(OP)
Your approach will eventually work, no doubt about it, but the question is whether the amount of energy the computer uses to complete a task is less than the energy the equivalent conglomeration of humans use to complete a task.

It seems clear at this point that although computers can be made to model physical systems to a great degree, this is not the area where they naturally excel. Think of modeling the temperature of a room: you could try to recreate a physically accurate simulation of every particle and its velocity, then keep building better software and ever more powerful, specialized hardware to model bigger and bigger rooms.

Just like how thermodynamics might make more sense to model statistically, I think intelligence is not best modeled at the synapse layer.

I think the much more interesting question is what would the equivalent of a worm brain be for a digital intelligence?

26. glial+nd[view] [source] [discussion] 2024-01-08 23:36:27
>>throwu+m7
The role of astrocytes in neural computation is an example. For a long time, the assumption was that astrocytes were just "maintenance" or structural cells (the name "glia" comes from "glue"). Thus, they were not included in computational models. More recently, there is growing recognition that they play an important role in neural computation, e.g. https://picower.mit.edu/discoveries/key-roles-astrocytes
replies(1): >>throwu+Tf
27. throwu+Tf[view] [source] [discussion] 2024-01-08 23:51:06
>>glial+nd
The first several sentences from your article:

> Neurons do not work alone. Instead, they depend heavily on non-neuronal or “glia” cells for many important services including access to nutrition and oxygen, waste clearance, and regulation of the ions such as calcium that help them build up or disperse electric charge.

That's exactly what homeostasis is, but we don't simulate astrocyte mitochondria to understand what effect they have on another neuron's activation. They are independent. Otherwise, biochemistry wouldn't function at all.

replies(1): >>glial+1h
28. glial+1h[view] [source] [discussion] 2024-01-08 23:56:42
>>throwu+Tf
Sure, but if you continue:

> they showed in live, behaving animals that they could enhance the response of visual cortex neurons to visual stimulation by directly controlling the activity of astrocytes.

Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.

replies(1): >>throwu+zi
29. logtem+ih[view] [source] [discussion] 2024-01-08 23:59:26
>>dmd+H7
It has made a big step forward: imaging is more powerful now, and some people are starting to grow organoids made of neurons. There is a lot left to learn, but as soon as we can get good data, AI will step in and digest it, I guess.
30. throwu+zi[view] [source] [discussion] 2024-01-09 00:06:05
>>glial+1h
> Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.

I was only referring to the internal processes of a cell. We don't need to simulate 90+% of the biochemical processes in a neuron to get an accurate simulation of that neuron - if we did it'd pretty much fuck up our understanding of every other cell because most cells share the same metabolic machinery.

The characteristics of the larger network and which cells are involved is an open question in neuroscience and it's largely an intractable problem as of this time.

31. gary_0+Yj[view] [source] [discussion] 2024-01-09 00:13:35
>>dmd+H7
I'm referring to the various times biological neurons have been (and will likely continue to be) the inspiration for artificial neurons[0]. I acknowledge that the word "inspiration" is doing a lot of work here, but the research continues[1][2]. If you have a PhD in neuroscience, I understand your need to push back on the hand-wavy optimism of the technologists, but I think saying "almost no idea" is going a little far. Neuroscientists are not looking up from their microscopes and fMRIs, throwing up their hands, and giving up. Yes, there is a lot of work left to do, but it seems needlessly pessimistic to say we have made almost no progress either in understanding biological neurons or in moving forward with their distantly related artificial counterparts.

Just off the top of my head, in my lifetime, I have seen discoveries regarding new neuropeptides/neurotransmitters such as orexin, starting to understand glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I might only be a layman observing, but I have a hard time believing anyone can think we've made almost no progress.

[0] https://en.wikipedia.org/wiki/History_of_artificial_neural_n...

[1] https://ai.stackexchange.com/a/3936

[2] https://www.nature.com/articles/s41598-021-84813-6

32. Jensso+ep[view] [source] [discussion] 2024-01-09 00:58:31
>>glial+26
If something else could replace humanity at intellectual tasks, we would say it is generally intelligent as well. Currently there is no such thing; we still need humans to perform intellectual tasks.
replies(1): >>glial+YJ
33. mewpme+vv[view] [source] [discussion] 2024-01-09 01:52:16
>>shpong+K8
How has it shown that, exactly, when just a year ago we had such a huge advance in terms of intelligence?
replies(1): >>shpong+yl2
34. glial+YJ[view] [source] [discussion] 2024-01-09 03:55:19
>>Jensso+ep
The definition of an 'intellectual task' used to mean 'abstract from experience' (Aristotle) or 'do symbolic processing' (Leibniz). Computers can now do these things - they can integrate better than Feynman, distinguish 'cat' vs 'dog' pictures by looking at examples, and pass the MCAT and LSAT better than most students, not to mention do billions of calculations in one second. And we have moved the goalpost accordingly.
35. Shamel+th1[view] [source] [discussion] 2024-01-09 09:47:28
>>Shamel+R6
Just going to follow up and say "I don't think that statement is even remotely true now, much less back then." We haven't accurately simulated any life forms. The failure to simulate C. elegans is notable.
36. shpong+yl2[view] [source] [discussion] 2024-01-09 16:46:03
>>mewpme+vv
Estimates of GPT-4 parameter counts are ~1.7 trillion, which is approximately 20-fold greater than the ~85 billion human neurons we have. To me this suggests that naively building a 1:1 (or even 20:1) representation of simple neurons is insufficient for AGI.