zlacker

[return to "Cubic millimetre of brain mapped at nanoscale resolution"]
1. throwu+J7[view] [source] 2024-05-09 22:41:26
>>geox+(OP)
> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of about 2,632 synapses per neuron. The adult human brain has 100 (±20) billion, or 1e11, neurons, so assuming the average synapse-per-neuron ratio holds, that's about 2.6e14 total synapses.

Assuming 1 parameter per synapse, that'd make the minimum viable model more than a hundred times larger than the state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters). I don't think that's granular enough, though: we'd need to assume 10-100 ion channels per synapse and, I think, at least 10 parameters per ion channel, putting the number closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.

There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: >>38919548

◧◩
2. throw3+mn[view] [source] 2024-05-10 01:18:29
>>throwu+J7
Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones, more or less the opposite of what we've believed for decades: that is, that artificial neurons were just a poor model of the real thing.

Quote:

"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. 'Our brains have 100 trillion connections,' says Hinton. 'Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it's actually got a much better learning algorithm than us.'"

GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
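Checking that volume figure against the paper's density: at 150 million synapses per mm³, Hinton's half-trillion-to-trillion connection counts work out to roughly 3-7 cm³, the same ballpark as the ~5 cm³ quoted above. The ~500 cm³ cortex volume used for the percentage is my own rough assumption:

```python
# How much cortex would GPT-4's connections occupy at the sample's density?
SAMPLE_DENSITY = 150e6      # synapses per mm^3, from the mapped sample
CORTEX_VOLUME_CM3 = 500     # rough adult human cortex volume (assumption)

volumes = {}
for connections in (0.5e12, 1e12):   # Hinton's low and high figures
    volume_cm3 = connections / SAMPLE_DENSITY / 1000   # mm^3 -> cm^3
    volumes[connections] = volume_cm3
    print(f"{connections:.1e} connections -> {volume_cm3:.1f} cm^3 "
          f"({100 * volume_cm3 / CORTEX_VOLUME_CM3:.2f}% of cortex)")
```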

◧◩◪
3. lansti+DI[view] [source] 2024-05-10 06:29:50
>>throw3+mn
LLMs do not know math as well as a professor, judging from the large number of false functional analysis proofs I have had one generate while trying to learn functional analysis. In fact, the thing it seems to lack is a sense of what makes a proof true vs. fallacious, along with a tendency to answer ill-posed questions. "How would you prove this incorrectly transcribed problem" will get fourteen steps, with steps 8 and 12 obviously (to a student) wrong, while a professor would step back and ask what I am actually trying to prove.
◧◩◪◨
4. themoo+Nf7[view] [source] 2024-05-13 08:49:37
>>lansti+DI
LLMs do not know math at all. Not to sound like one myself, but they are stochastic parrots: they output stuff similar to their training data, but they have no understanding of the meaning of things beyond vector encodings. This is also why ChatGPT plays chess in hilarious ways.

An LLM cannot possibly have any concept of even what a proof is, much less whether it is true, and that holds beyond math. The smaller amount of training data, the fact that math uses largely field-specific tokens, and the fact that a single-token error is fatal to truth in math all mean that even output which resembles the training data is unlikely to be close to factual.

◧◩◪◨⬒
5. lansti+hY8[view] [source] 2024-05-13 20:01:53
>>themoo+Nf7
That said, they are surprisingly useful. Once I get the understanding through whatever means, I can converse with one and solidify that understanding nicely. And to be honest, people are also likely to toss in an extra \sqrt{2} or change signs randomly, so you have to read closely anyway.