zlacker

[parent] [thread] 24 comments
1. shpong+(OP)[view] [source] 2024-01-08 22:48:41
To be pedantic, I would argue that we aren't even close to being able to simulate the full brain of a flatworm on a supercomputer at anything deeper than a simple representation of neurons.

We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

replies(5): >>minroo+31 >>consum+E1 >>throwu+22 >>gary_0+b3 >>jjcm+g3
2. minroo+31[view] [source] 2024-01-08 22:52:35
>>shpong+(OP)
Does AGI need to be brain-like?
replies(2): >>parl_m+C1 >>dragon+92
3. parl_m+C1[view] [source] [discussion] 2024-01-08 22:55:36
>>minroo+31
No, but the simple approach here was "full simulation".

And "brain in a jar" is different from "AGI"

4. consum+E1[view] [source] 2024-01-08 22:55:38
>>shpong+(OP)
> We can't even simulate all of the chemical processes inside a single cell. We don't even know all of the chemical processes. We don't know the function of most proteins.

Brain > Cell > Molecules(DNA and otherwise) > Atoms > Sub-atomic particles...

Potentially dumb question, but how deeply do we need to understand the underlying components to simulate a flatworm brain?

replies(2): >>BlarfM+Z2 >>shpong+m5
5. throwu+22[view] [source] 2024-01-08 22:57:30
>>shpong+(OP)
The vast majority of the chemical processes in a single cell are concerned with maintaining homeostasis for that cell - just keeping it alive, well fed with ATP, and repairing the cell membrane. We don't need to simulate them.
replies(1): >>glial+q2
6. dragon+92[view] [source] [discussion] 2024-01-08 22:58:22
>>minroo+31
The human brain is the only thing we can conclusively say runs a general intelligence, so it's the level of complexity at which we can confidently say it's just a software/architecture problem.

There may be (almost certainly is) a more optimized way a general intelligence could be implemented, but we can't confidently say what that requires.

replies(1): >>glial+E2
7. glial+q2[view] [source] [discussion] 2024-01-08 22:59:30
>>throwu+22
> We don't need to simulate them.

You might be right, but this is the kind of hubris that is often embarrassing in hindsight. Like when Aristotle thought the brain was a radiator.

replies(1): >>throwu+Y3
8. glial+E2[view] [source] [discussion] 2024-01-08 23:00:15
>>dragon+92
> The human brain is the only thing we can conclusively say does run a general intelligence

That's because we define "general intelligence" circularly as "something the human brain does."

replies(1): >>Jensso+Ql
9. BlarfM+Z2[view] [source] [discussion] 2024-01-08 23:01:38
>>consum+E1
While I believe there are some biological processes that rely on entanglement and such, they haven't been found in the brain. So likely somewhere just above the molecule level (chemical gradients and diffusion timings in cells certainly have an effect).
10. gary_0+b3[view] [source] 2024-01-08 23:02:32
>>shpong+(OP)
It depends on what kind of simulation you're trying to run, though. You don't need to perfectly model the physically moving heads and magnetic oscillations of a hard drive to emulate an old PC; it may be enough to just store the bytes.

I suspect if you just want an automaton that provides the utility of a human brain, we'll be fine just using statistical approximations based on what we see biological neurons doing. The utility of LLMs so far has moved the needle in that direction for sure, although there's still enough we don't know about cognition that we could still hit a surprise brick wall when we start trying to build GPT-6 or whatever. But even so, a prediction of 2047 for that kind of AGI is plausible (ironically, any semblance of Moore's Law probably won't last until then).
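
To make "statistical approximations" concrete, here's a toy sketch (my own illustration, not anybody's actual brain model) of the classic leaky integrate-and-fire abstraction: it keeps a neuron's rough input/output behavior and throws away all the underlying biochemistry. Constants are typical textbook values, chosen only for illustration.

    # Toy leaky integrate-and-fire neuron: a crude statistical abstraction
    # of a biological neuron, ignoring the biochemistry entirely.
    def simulate_lif(input_currents, dt=1e-3, tau=0.02, v_rest=-0.065,
                     v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
        """input_currents: one input current (amps) per time step; returns spike times (s)."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_currents):
            # membrane potential decays toward rest while being driven by the input
            v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
            if v >= v_thresh:              # threshold crossed -> emit a spike
                spike_times.append(step * dt)
                v = v_reset                # reset after the spike
        return spike_times

    print(simulate_lif([2e-9] * 100))      # constant 2 nA input for 100 ms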

On the other hand, if you want to model a particular human brain... well, then things get extremely hairy scientifically, philosophically, and ethically.

replies(1): >>dmd+j4
11. jjcm+g3[view] [source] 2024-01-08 23:02:47
>>shpong+(OP)
I was very likely incorrect about the chemical processes, so thank you for clarifying. This is me remembering work from a decade ago, so I'm almost certainly wrong about some of the details.
replies(1): >>shpong+X4
12. throwu+Y3[view] [source] [discussion] 2024-01-08 23:06:44
>>glial+q2
If you have any evidence to the contrary, I would love to hear it because it would upend biology and modern medicine as we know it and we'd both win a Nobel prize.

As long as it's modern scientific evidence and not a 2,300 year old anecdote, of course.

replies(1): >>glial+Z9
13. dmd+j4[view] [source] [discussion] 2024-01-08 23:08:22
>>gary_0+b3
> based on what we see biological neurons doing

We have almost no idea what biological neurons are doing, or why. At least we didn't when I got my PhD in neuroscience a little over 10 years ago. Maybe it's a solved problem by now.

replies(2): >>logtem+Ud >>gary_0+Ag
14. shpong+X4[view] [source] [discussion] 2024-01-08 23:11:19
>>jjcm+g3
It's still a great idea - I haven't heard anyone else suggest using such context for predicting AGI timelines.
15. shpong+m5[view] [source] [discussion] 2024-01-08 23:13:21
>>consum+E1
Who knows! I'm sure it depends on how accurately you want to simulate a flatworm brain.

I think current AI research has shown that simply representing a brain as a neural network (e.g. fully connected, simple neurons) is not sufficient for AGI.
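
(By "fully connected, simple neurons" I mean something like the sketch below - layer sizes and weights are arbitrary, purely for illustration:)

    # A "simple representation of neurons": each unit is just a weighted sum
    # plus a squashing nonlinearity, with every unit connected to every unit
    # in the next layer. Sizes here are made up.
    import numpy as np

    def fully_connected_layer(x, w, b):
        return np.tanh(w @ x + b)          # "neuron" = weighted sum + nonlinearity

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)           # made-up input vector
    w1, b1 = rng.standard_normal((64, 100)), rng.standard_normal(64)
    w2, b2 = rng.standard_normal((8, 64)), rng.standard_normal(8)

    hidden = fully_connected_layer(x, w1, b1)
    output = fully_connected_layer(hidden, w2, b2)
    print(output.shape)                    # (8,)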

replies(1): >>mewpme+7s
16. glial+Z9[view] [source] [discussion] 2024-01-08 23:36:27
>>throwu+Y3
The role of astrocytes in neural computation is an example. For a long time, the assumption was that astrocytes were just "maintenance" or structural cells (the name "glia" comes from "glue"). Thus, they were not included in computational models. More recently, there is growing recognition that they play an important role in neural computation, e.g. https://picower.mit.edu/discoveries/key-roles-astrocytes
replies(1): >>throwu+vc
17. throwu+vc[view] [source] [discussion] 2024-01-08 23:51:06
>>glial+Z9
The first several sentences from your article:

> Neurons do not work alone. Instead, they depend heavily on non-neuronal or “glia” cells for many important services including access to nutrition and oxygen, waste clearance, and regulation of the ions such as calcium that help them build up or disperse electric charge.

That's exactly what homeostasis is, but we don't simulate astrocyte mitochondria to understand what effect they have on another neuron's activation. They are independent; otherwise, biochemistry wouldn't function at all.

replies(1): >>glial+Dd
18. glial+Dd[view] [source] [discussion] 2024-01-08 23:56:42
>>throwu+vc
Sure, but if you continue:

> they showed in live, behaving animals that they could enhance the response of visual cortex neurons to visual stimulation by directly controlling the activity of astrocytes.

Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.

replies(1): >>throwu+bf
19. logtem+Ud[view] [source] [discussion] 2024-01-08 23:59:26
>>dmd+j4
It has made a big step forward: imaging is more powerful now, and some people are starting to grow organoids made of neurons. There is a lot to learn, but as soon as we can get good data, AI will step in and digest it, I guess.
20. throwu+bf[view] [source] [discussion] 2024-01-09 00:06:05
>>glial+Dd
> Perhaps we're talking past each other, but I thought you were implying that since some function supports homeostasis, we can assume it doesn't matter to a larger computation, and don't need to model it. That's not true with astrocytes, and I wouldn't be surprised if we eventually find out that other biological functions (like "junk DNA") fall into that category as well.

I was only referring to the internal processes of a cell. We don't need to simulate 90+% of the biochemical processes in a neuron to get an accurate simulation of that neuron - if we did it'd pretty much fuck up our understanding of every other cell because most cells share the same metabolic machinery.

What the larger network looks like and which cells are involved is an open question in neuroscience, and it's a largely intractable problem at this time.

21. gary_0+Ag[view] [source] [discussion] 2024-01-09 00:13:35
>>dmd+j4
I'm referring to the various times biological neurons have been (and will likely continue to be) the inspiration for artificial neurons[0]. I acknowledge that the word "inspiration" is doing a lot of work here, but the research continues[1][2]. If you have a PhD in neuroscience, I understand your need to push back on the hand-wavy optimism of the technologists, but I think saying "almost no idea" is going a little far. Neuroscientists are not looking up from their microscopes and fMRIs, throwing up their hands, and giving up. Yes, there is a lot of work left to do, but it seems needlessly pessimistic to say we have made almost no progress either in understanding biological neurons or in moving forward with their distantly related artificial counterparts.

Just off the top of my head, in my lifetime, I have seen discoveries regarding new neuropeptides/neurotransmitters such as orexin, starting to understand glial cells, new treatments for brain diseases such as epilepsy, new insight into neural metabolism, and better mapping of human neuroanatomy. I might only be a layman observing, but I have a hard time believing anyone can think we've made almost no progress.

[0] https://en.wikipedia.org/wiki/History_of_artificial_neural_n...

[1] https://ai.stackexchange.com/a/3936

[2] https://www.nature.com/articles/s41598-021-84813-6

22. Jensso+Ql[view] [source] [discussion] 2024-01-09 00:58:31
>>glial+E2
If something else could replace humans at intellectual tasks, we would say it is generally intelligent as well. Currently there is no such thing; we still need humans to perform intellectual tasks.
replies(1): >>glial+AG
23. mewpme+7s[view] [source] [discussion] 2024-01-09 01:52:16
>>shpong+m5
How has it shown that, exactly, if just a year ago we had such a huge advance in terms of intelligence?
replies(1): >>shpong+ai2
24. glial+AG[view] [source] [discussion] 2024-01-09 03:55:19
>>Jensso+Ql
The definition of an 'intellectual task' used to mean 'abstract from experience' (Aristotle) or 'do symbolic processing' (Leibniz). Computers can now do these things - they can integrate better than Feynman, distinguish 'cat' vs 'dog' pictures by looking at examples, and pass the MCAT and LSAT better than most students, not to mention do billions of calculations in one second. And we have moved the goalpost accordingly.
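
(A trivial concrete example of the "symbolic processing" kind of task, assuming a stock SymPy install - my own illustration, nothing more:)

    # Symbolic integration: the sort of "intellectual task" Leibniz had in mind.
    from sympy import symbols, integrate, exp, oo

    x = symbols('x')
    print(integrate(exp(-x**2), (x, -oo, oo)))    # sqrt(pi)
    print(integrate(x**2 * exp(-x), (x, 0, oo)))  # 2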
25. shpong+ai2[view] [source] [discussion] 2024-01-09 16:46:03
>>mewpme+7s
Estimates of GPT-4 parameter counts are ~1.7 trillion, which is approximately 20-fold greater than the ~85 billion human neurons we have. To me this suggests that naively building a 1:1 (or even 20:1) representation of simple neurons is insufficient for AGI.
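
(The arithmetic behind that, for anyone checking - both figures are rough, unconfirmed estimates:)

    # Rough ratio of (rumored) GPT-4 parameter count to human neuron count.
    gpt4_params = 1.7e12      # ~1.7 trillion parameters (unconfirmed estimate)
    human_neurons = 85e9      # ~85 billion neurons
    print(gpt4_params / human_neurons)    # 20.0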