zlacker

[return to "Cubic millimetre of brain mapped at nanoscale resolution"]
1. ein0p+Rl4 2024-05-11 18:54:04
>>geox+(OP)
As important and impressive as this result is, I am reminded of the cornerstone problem of neuroscience, which goes something like this: if we knew next to nothing about processors but could attach electrodes to the die, would we be able to figure out, from the measurements alone, how processors execute programs and what those programs do, in detail? Now scale that up several orders of magnitude and add sensitivity to the arrival timing of signals, and you've got the brain. Likewise: OK, we have petabytes of data now, but will that get us any closer to understanding, for example, how cognition works?

It was a bit of a shock for me to learn (while taking an introductory computational neuroscience course) that we simply do not have tractable math to model more than a handful of neurons in the time domain. And they really do operate in the time domain: timing matters for Hebbian learning, and there is no global "clock"; everything the brain does is a continuous process.
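(If the timing point sounds abstract: a toy spike-timing-dependent plasticity rule makes it concrete. The sketch below is purely illustrative; the constants are made up and it models no real synapse:)

    import math

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
        # Weight change for one spike pair, dt = t_post - t_pre in ms.
        if dt_ms > 0:   # pre fired before post: causal pairing, potentiate
            return a_plus * math.exp(-dt_ms / tau)
        # post fired before (or with) pre: anti-causal pairing, depress
        return -a_minus * math.exp(dt_ms / tau)

    # Same pair of spikes, only the relative timing changes:
    for dt_ms in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt_ms:+4d} ms  ->  dw = {stdp_dw(dt_ms):+.4f}")

Shift two spikes past each other by a couple of milliseconds and the weight change flips sign. That is exactly the kind of structure a static wiring diagram, however detailed, doesn't capture.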
2. lll-o-+nw4 2024-05-11 20:53:19
>>ein0p+Rl4
Right. The argument for studying A.I. was that you will not discover the principles of flight by looking at a bird's feather under an electron microscope.

It’s fascinating, but we aren’t going to understand intelligence this way. Emergent phenomena are part of complexity theory, and we don’t have any maths for them. Our ignorance in this space is large.

When I was young, a common refrain was “will a brain ever be able to understand itself?” Perhaps not, but the drive towards understanding is still a worthy goal in my opinion. We need some breakthroughs in the study of complexity theory.

3. hotiwu+MC4 2024-05-11 22:13:58
>>lll-o-+nw4
> but we aren’t going to understand intelligence this way

The same argument holds for "AI" too. We don't understand a damn thing about neural networks.

And there's more: we don't even care to understand them, as long as that ignorance doesn't get in the way of exploiting them.
