zlacker

[return to "Cubic millimetre of brain mapped at nanoscale resolution"]
1. posnet+r5[view] [source] 2024-05-09 22:22:39
>>geox+(OP)
1.4 PB/mm^3 (petabytes per cubic millimeter)×1260 cm^3 (cubic centimeters, large human brain) = 1.76×10^21 bytes = 1.76 ZB (zettabytes)
◧◩
2. gary17+29[view] [source] 2024-05-09 22:53:25
>>posnet+r5
[AI] "Frontier [supercomputer]: the storage capacity is reported to be up to 700 petabytes (PB)" (0.0007 ZB).

[AI] "The installed base of global data storage capacity [is] expected to increase to around 16 zettabytes in 2025".

Thus, even the largest supercomputer on Earth cannot store more than about 0.04 percent of the state of a single human brain. Even all the storage on the entire Internet could hold the state of only about 9 human brains.
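For anyone who wants to sanity-check the arithmetic, here's the whole back-of-the-envelope in a few lines (all input figures are the ones quoted upthread, not independently verified):

```python
# Back-of-the-envelope check of the figures quoted in this thread.
PB = 1e15  # bytes per petabyte
ZB = 1e21  # bytes per zettabyte

density = 1.4 * PB            # scan density: 1.4 PB per mm^3
brain_mm3 = 1260 * 1000       # 1260 cm^3 brain, 1000 mm^3 per cm^3
brain_bytes = density * brain_mm3
print(brain_bytes / ZB)       # ~1.76 ZB for one brain

frontier = 700 * PB           # Frontier's reported storage capacity
print(100 * frontier / brain_bytes)  # ~0.04 percent of one brain

internet = 16 * ZB            # projected global storage base, 2025
print(internet / brain_bytes)        # ~9 brains
```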

Astonishing.

◧◩◪
3. falcor+9b[view] [source] 2024-05-09 23:18:06
>>gary17+29
I appreciate you're running the numbers to extrapolate this approach, but just wanted to note that this particular figure isn't an upper bound nor a lower bound for actually storing the "state of a single human brain". Assuming the intent would be to store the amount of information needed to essentially "upload" the mind onto a computer emulation, we might not yet have all the details we need from this kind of scanning, but once we do, we may well discover that a huge portion of it is redundant.

In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create "uploaded intelligences" sometime over the next decade. Lena [0] tells of the first successfully uploaded scan taking place in 2031, and I'm concerned that reality won't be far off.

[0] https://qntm.org/mmacevedo

◧◩◪◨
4. gary17+pR[view] [source] 2024-05-10 08:10:07
>>falcor+9b
> we may likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely not possible to safely discard any of its state, or even to implement any kind of meaningful error detection if you do discard some of it.

Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughably simple, in comparison, AI models work[0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...

[go to top]