zlacker

[parent] [thread] 61 comments
1. throwu+(OP)[view] [source] 2024-05-09 22:41:26
> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of 2,632 synapses per neuron. The adult human brain has 100 (+- 20) billion, or 1e11, neurons, so assuming the average synapse-to-neuron ratio holds, that's 2.6e14 total synapses.

Assuming 1 parameter per synapse, that'd make the minimum viable model roughly 150 times larger than state of the art GPT4 (according to the rumored 1.8e12 parameters). I don't think that's granular enough and we'd need to assume 10-100 ion channels per synapse and I think at least 10 parameters per ion channel, putting the number closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT4.

There are other problems of course like implementing neuroplasticity, but it's a fun ball park calculation. Computing power should get there around 2048: >>38919548
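
For the curious, here's a rough Python sketch of that napkin math. All inputs are the estimates above (sample counts, ~1e11 neurons, the guessed ion-channel multipliers), and the GPT-4 figure is only the rumored one:

    # Napkin math: scale the 1 mm^3 sample up to a whole brain.
    sample_neurons  = 57_000     # neurons in the 1 mm^3 sample
    sample_synapses = 150e6      # synapses in the same sample
    brain_neurons   = 1e11       # ~100 billion neurons in an adult brain
    gpt4_params     = 1.8e12     # rumored GPT-4 parameter count

    synapses_per_neuron = sample_synapses / sample_neurons   # ~2,632
    total_synapses = brain_neurons * synapses_per_neuron     # ~2.6e14

    # 1 parameter per synapse vs. a finer-grained model
    # (10-100 ion channels per synapse, ~10 parameters per channel).
    coarse_params = total_synapses              # ~2.6e14, ~150x GPT-4
    fine_params   = total_synapses * 10 * 10    # ~2.6e16, ~4 orders of magnitude over GPT-4

    print(f"synapses/neuron: {synapses_per_neuron:,.0f}")
    print(f"total synapses:  {total_synapses:.1e}")
    print(f"coarse model / GPT-4: {coarse_params / gpt4_params:,.0f}x")
    print(f"fine model / GPT-4:   {fine_params / gpt4_params:,.0f}x")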

replies(8): >>gibson+M1 >>marcos+D5 >>cybera+e6 >>itsthe+N7 >>throw3+Df >>creer+vN2 >>j_m_b+sW2 >>hetman+FH3
2. gibson+M1[view] [source] 2024-05-09 22:58:42
>>throwu+(OP)
Except you’d be missing the part that a neuron is not just a node with a number but a computational system itself.
replies(2): >>bglaze+m3 >>krisof+Bb
3. bglaze+m3[view] [source] [discussion] 2024-05-09 23:17:36
>>gibson+M1
Computation is really integrated through every scale of cellular systems. Individual proteins are capable of basic computation, which is then integrated into regulatory circuits, epigenetics, and cellular behavior.

Pdf: “Protein molecules as computational elements in living cells - Dennis Bray” https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...

4. marcos+D5[view] [source] 2024-05-09 23:34:55
>>throwu+(OP)
There's a lot of in-neuron complexity, I'm sure there is some cross-synapse signaling (I mean, how could it not exist? There's nothing stopping it), and I don't think synapse behavior can be modeled as just more signals.
5. cybera+e6[view] [source] 2024-05-09 23:41:17
>>throwu+(OP)
On the other hand, a significant amount of neural circuitry seems to be dedicated to "housekeeping" needs, and to functions such as locomotion.

So we might need significantly less brain matter for general intelligence.

replies(1): >>alanbe+Xl
6. itsthe+N7[view] [source] 2024-05-09 23:56:25
>>throwu+(OP)
Artificial thinking doesn't require an artificial brain, just as our car's locomotion system doesn't need to mimic our own walking system.

The car's engine, transmission and wheels require no muscles or nerves.

7. krisof+Bb[view] [source] [discussion] 2024-05-10 00:36:00
>>gibson+M1
I think you are missing the point.

The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.

Yes, the estimate intentionally models the neurons as simpler than they likely are. No, it is not “missing” anything.

replies(1): >>jessek+0s1
8. throw3+Df[view] [source] 2024-05-10 01:18:29
>>throwu+(OP)
Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones, more or less the opposite of what we've believed for decades, namely that artificial neurons were just a poor model of the real thing.

Quote:

"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
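
A quick sanity check of that volume figure against the sample's synapse density (the ~500 cm^3 cortex volume is my own round assumption, not from the article):

    # Volume GPT-4's connections would occupy at the sample's synapse density.
    synapse_density_per_mm3 = 150e6   # synapses per mm^3 in the mapped sample
    gpt4_connections = 1e12           # "half a trillion, a trillion at most"
    cortex_volume_cm3 = 500.0         # assumed round figure for a human cortex

    volume_cm3 = gpt4_connections / synapse_density_per_mm3 / 1000.0  # 1000 mm^3 = 1 cm^3
    print(f"{volume_cm3:.1f} cm^3, ~{100 * volume_cm3 / cortex_volume_cm3:.1f}% of cortex")
    # -> roughly 7 cm^3 and on the order of 1% of a cortex, in line with the figure above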

replies(5): >>dragon+7h >>dsalfd+hy >>lansti+UA >>causal+Dj4 >>hotiwu+qv4
9. dragon+7h[view] [source] [discussion] 2024-05-10 01:36:38
>>throw3+Df
I mean, Hinton’s premises are, if not quite clearly wrong, entirely speculative (which doesn't invalidate the conclusions about efficiency that they are offered to support, but does leave them without support). GPT-4 can produce convincing written text about a wider array of topics than any one person can, because it's a model optimized for taking in and producing convincing written text, trained extensively on written text.

Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

replies(2): >>Intral+lB >>RaftPe+UZ3
10. alanbe+Xl[view] [source] [discussion] 2024-05-10 02:48:54
>>cybera+e6
Or perhaps the housekeeping of existing in the physical world is a key aspect of general intelligence.
replies(1): >>Intral+hB
11. dsalfd+hy[view] [source] [discussion] 2024-05-10 05:45:55
>>throw3+Df
"Efficient" and "better" are very different descriptors of a learning algorithm.

The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.

replies(2): >>throw3+hm1 >>startu+NZ3
12. lansti+UA[view] [source] [discussion] 2024-05-10 06:29:50
>>throw3+Df
An LLM does not know math as well as a professor, judging from the large number of false functional analysis proofs I have had one generate while trying to learn functional analysis. In fact the thing it seems to lack is a sense of what makes a proof true vs. fallacious; it also has a tendency to answer ill-posed questions. “How would you prove this incorrectly transcribed problem” will get fourteen steps, with steps 8 and 12 obviously (to a student) wrong, while the professor will step back and ask what am I trying to prove.
replies(1): >>themoo+487
13. Intral+hB[view] [source] [discussion] 2024-05-10 06:34:51
>>alanbe+Xl
Isn't that kinda obvious? A baby that grows up in a sensory deprivation tank does not… develop, as most intelligent persons do.
replies(2): >>squigz+On1 >>cybera+Vd3
14. Intral+lB[view] [source] [discussion] 2024-05-10 06:35:26
>>dragon+7h
Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Ironically, I suppose part of the apparent "intelligence" of LLMs comes from reflecting the intelligence of human users back at us. As a human, the prompts you provide an LLM likely "make sense" on some level, so the statistically generated continuations of your prompts are likelier to "make sense" as well. But if you don't provide an ongoing anchor to reality within your own prompts, then the outputs make it more apparent that the LLM is simply regurgitating words which it does not/cannot understand.

On your point of human knowledge being far more multimodal than LLM interfaces, I'll add that humans also have special neurological structures to handle self-awareness, sensory inputs, social awareness, memory, persistent intention, motor control, neuroplasticity/learning: any number of such traits, which are easy to take for granted, but are indisputably fundamental parts of human intelligence. These abilities aren't just emergent properties of the total number of neurons; they live in special hardware like mirror neurons, special brain regions, and spindle neurons. A brain cell in your cerebellum is not generally interchangeable with a cell in your visual or frontal cortices.

So when a human "converse[s] about stuff ranging from philosophy to cooking" in an honest way, we (ideally) do that as an expression of our entire internal state. But GPT-4 structurally does not have those parts, despite being able to output words as if it might, so as you say, it "generates" convincing text only because it's optimized for producing convincing text.

I think LLMs may well be some kind of an adversarial attack on our own language faculties. We use words to express ourselves, and we take for granted that our words usually reflect an intelligent internal state, so we instinctively assume that anything else which is able to assemble words must also be "intelligent". But that's not necessarily the case. You can have extremely complex external behaviors that appear intelligent or intentioned without actually internally being so.

replies(5): >>kthejo+Gk1 >>ToValu+Yq1 >>a_wild+wS2 >>kaibee+qR4 >>themoo+K77
15. kthejo+Gk1[view] [source] [discussion] 2024-05-10 13:35:11
>>Intral+lB
> Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.

Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

replies(3): >>squigz+3n1 >>Intral+G12 >>wrycod+TO7
16. throw3+hm1[view] [source] [discussion] 2024-05-10 13:44:11
>>dsalfd+hy
You mean energy-efficient; this would be neuron- or synapse-efficient.
replies(2): >>dsalfd+Er1 >>a_wild+WP2
17. squigz+3n1[view] [source] [discussion] 2024-05-10 13:48:25
>>kthejo+Gk1
> it does respond like a ... 5 year old child

This is the comparison that's made most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more similar phases to human development.

18. squigz+On1[view] [source] [discussion] 2024-05-10 13:52:03
>>Intral+hB
A true sensory deprivation tank is not a fair comparison, I think, because AI is not deprived of all its 'senses' - it is still prompted, responds, etc.

Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I would think so. Let's not try it ;)

replies(1): >>Intral+s22
19. ToValu+Yq1[view] [source] [discussion] 2024-05-10 14:08:33
>>Intral+lB
Do I need different prompts? These results seem sane to me. It interprets laser eye removal surgery as referring to LASIK, which I would do as well. When I clarified that I did mean removal, it said that the procedure didn't exist. It interprets Mid-Atlantic Mountain Range as referring to the Mid-Atlantic Ridge and notes that it is underwater and hard to access. Not that I'm arguing GPT-4 has a deeper understanding than you're suggesting, but these examples aren't making your point.

https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...

https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...

replies(1): >>Intral+N02
20. dsalfd+Er1[view] [source] [discussion] 2024-05-10 14:10:38
>>throw3+hm1
I don't think we can say that, either. After all, the brain is able to perform both processing and storage with its neurons. The quotes about LLMs are talking only about connections between data items stored elsewhere.
replies(1): >>throw3+3x1
21. jessek+0s1[view] [source] [discussion] 2024-05-10 14:12:23
>>krisof+Bb
The point is to make a ballpark estimate, or at least to estimate the order of magnitude.

From the sibling comment:

> Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

If this is true, then there may be many orders of magnitude unaccounted for.

Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

replies(1): >>choili+Bv2
22. throw3+3x1[view] [source] [discussion] 2024-05-10 14:38:37
>>dsalfd+Er1
Stored where?
replies(1): >>dsalfd+oI1
23. dsalfd+oI1[view] [source] [discussion] 2024-05-10 15:39:59
>>throw3+3x1
You tell me. Not in the trillion links of a LLM, that's for sure.
replies(2): >>throw3+i12 >>choili+dv2
24. Intral+N02[view] [source] [discussion] 2024-05-10 17:14:54
>>ToValu+Yq1
Tested with GPT-3.5 instead of GPT-4.

> When I clarified that I did mean removal, it said that the procedure didn't exist.

My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system to indicate to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing.

If you do not do that, but instead reply as if laser eye removal is completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation":

https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...

So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.

replies(1): >>ToValu+Et2
25. throw3+i12[view] [source] [discussion] 2024-05-10 17:17:44
>>dsalfd+oI1
I'm not aware that (base) LLMs use any form of database to generate their answers- so yes, all their knowledge is stored in their hundreds of billions of synapses.
replies(1): >>dsalfd+zF2
26. Intral+G12[view] [source] [discussion] 2024-05-10 17:21:09
>>kthejo+Gk1
> Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.

In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.

My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.

27. Intral+s22[view] [source] [discussion] 2024-05-10 17:25:02
>>squigz+On1
> Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I don't think so, because humans communicate and learn largely about the world. Words mean nothing without at least some sense of objective physical reality (be it via sight, sound, smell, or touch) that the words refer to.

Helen Keller, with access to three of the five main senses (and an otherwise fully functioning central nervous system):

    Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.

    I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.

I remember reading her book. The breakthrough moment where she acquired language, and conscious thought, directly involved correlating the physical tactile feeling of running water to the letters "W", "A", "T", "E", "R" traced onto her palm.
replies(2): >>squigz+ho2 >>choili+ow2
28. squigz+ho2[view] [source] [discussion] 2024-05-10 19:19:52
>>Intral+s22
That's a really good point. Thanks!
29. ToValu+Et2[view] [source] [discussion] 2024-05-10 19:56:19
>>Intral+N02
I tried all of your follow-up prompts against GPT-4, and it never acknowledged 'removal' and instead talked about laser eye surgery. I can't figure out how to share it now that I've got multiple variants, but, for example, excerpt in response to the glass eye prompt:

>If someone is considering a glass eye after procedures like laser eye surgery (usually due to severe complications or unrelated issues), it's important to choose the right manufacturer or provider. Here are some key factors to consider

I did get it to accept that the eye is being removed by prompting, "How long will it take before I can replace the eye?", but it responds:

>If you're considering replacing an eye with a prosthetic (glass eye) after an eye removal surgery (enucleation), the timeline for getting a prosthetic eye varies based on individual healing.[...]

and afaict, enucleation is a real procedure. An actual intelligence would have called out my confusion about the prior prompt at that point, but ultimately it hasn't said anything incorrect.

I recognize you don't have access to GPT-4, so you can't refine your examples here. It definitely still hallucinates at times, and surely there are prompts which compel it to do so. But these ones don't seem to hold up against the latest model.

replies(1): >>s1arti+Tb4
30. choili+dv2[view] [source] [discussion] 2024-05-10 20:07:08
>>dsalfd+oI1
The "knowledge" of an LLM is indeed stored in the connections between neurons. This is analogous to real neurons as well. Your neurons and the connections between them is the memory.
31. choili+Bv2[view] [source] [discussion] 2024-05-10 20:10:06
>>jessek+0s1
> Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

AKA a quantum computer. It's not a "never", but a question of how much computation you would need to throw at the problem.

32. choili+ow2[view] [source] [discussion] 2024-05-10 20:15:56
>>Intral+s22
My interpretation of this (beautiful) quote is there was a traceable moment in HK's life where she acquired "consciousness" or perhaps even self-awareness/metacognition/metaphysics? That once the synaptic connections necessary to bridge the abstract notion of language to the physical world led her down the path of acquiring the abilities that distinguish humans from other animals?
33. dsalfd+zF2[view] [source] [discussion] 2024-05-10 21:25:10
>>throw3+i12
Fair enough. OTOH, generating human-like text responses is a relatively small part of the human brain's skillset.
replies(2): >>danpar+9T4 >>wrycod+CL7
34. creer+vN2[view] [source] 2024-05-10 22:38:29
>>throwu+(OP)
Yes and no on the order of magnitude required for decent AI; there is still (that I know of) very little hard data on info density in the human brain. What there is points at entire sections that can sometimes be destroyed or actively removed while conserving "general intelligence".

Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.

Don't know about upload though...

35. a_wild+WP2[view] [source] [discussion] 2024-05-10 23:03:15
>>throw3+hm1
Also, these two networks achieve vastly different results per watt consumed. A NN creates a painting in 4s on my M2 MacBook; an artist takes 4 hours. Are the joules they use equivalent? How many humans would it take to simulate MacOS?

Horsepower comparisons here are nuanced and fatally tricky!

replies(2): >>dsalfd+Yo3 >>causal+dj4
36. a_wild+wS2[view] [source] [discussion] 2024-05-10 23:32:30
>>Intral+lB
Like humans, multi-modal frontier LLMs will ignore "removal" as an impertinent typo, or highlight it. This, like everything else in the comment, is either easily debunked (e.g. try it, read the lit. on LLM extrapolation), or so nebulous and handwavy as to be functionally meaningless. We need an FAQ to redirect "statistical parrot" people to, saving words responding to these worn out LLM misconceptions. Maybe I should make one. :/
replies(2): >>theali+Ad4 >>Intral+VK4
37. j_m_b+sW2[view] [source] 2024-05-11 00:14:55
>>throwu+(OP)
> Computing power should get there around 2048

We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.

Currently, TSMC has a 3nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132nm. That's not a good value because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we can get past 3 more generations using silicon. That's a max of 4.5 years of Moore's law we're going to be able to squeeze out, which means we will not make it past 2030 with this kind of improvement.

I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.
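
A tiny sketch of the halving exercise; whether the final halving (0.1875nm, already well under two atomic radii) counts as a plausible generation is a judgment call:

    # Halve a 3nm figure until the next step would dip below the atomic
    # radius of silicon (0.132nm). A crude lower bound, ignoring lattice
    # spacing, uncertainty, etc.
    node_nm = 3.0
    si_atomic_radius_nm = 0.132

    steps = []
    while node_nm / 2 > si_atomic_radius_nm:
        node_nm /= 2
        steps.append(node_nm)

    print(" -> ".join(f"{x:g}nm" for x in steps))  # 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm
    print(f"{len(steps)} halvings before feature sizes reach single silicon atoms")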

replies(1): >>dyausp+783
38. dyausp+783[view] [source] [discussion] 2024-05-11 03:03:55
>>j_m_b+sW2
3nm doesn’t mean the transistor is 3nm, it’s just a marketing naming system at this point. The actual transistor is about 20-30nm or so.
replies(1): >>j_m_b+sa4
39. cybera+Vd3[view] [source] [discussion] 2024-05-11 05:01:01
>>Intral+hB
> A baby that grows up in a sensory deprivation tank

Now imagine a baby that uses an artificial lung and receives nutrients directly, moves on a wheeled car (no need for balance), does not have proprioception, or a sense of smell (avoiding some very legacy brain areas).

I think that such a baby can still achieve consciousness.

replies(1): >>mr_toa+QC6
40. dsalfd+Yo3[view] [source] [discussion] 2024-05-11 08:09:05
>>a_wild+WP2
What software are you using for local NN generation of paintings? Even so, the training cost of that NN is significant.

The general point is valid though - for example, a computer is much more efficient at finding primes, or encrypting data, than humans.

replies(1): >>wrycod+1N7
41. hetman+FH3[view] [source] 2024-05-11 13:28:02
>>throwu+(OP)
That may or may not still be too simple a model. Cells are full of complex nano-scale machinery, and not only might it be plausible that some of it is involved in the processes of cognition, I'm aware of at least one study which identified some nano-scale structures directly involved in how memory works in neurones. Not to mention a lot of what's happening has a fairly analogue dimension.

I remember an interview with one neurologist who stated humanity has for centuries compared the functioning of the brain to the most complex technology devised yet. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics and now finally computers. But he pointed out, the brain works like none of these things so we have to be aware of the limitations of our models.

replies(1): >>RaftPe+114
42. startu+NZ3[view] [source] [discussion] 2024-05-11 16:19:44
>>dsalfd+hy
It is using about 20W and then a person takes a single airplane ride between the coasts. And watches a movie on the way.
43. RaftPe+UZ3[view] [source] [discussion] 2024-05-11 16:21:43
>>dragon+7h
> Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

Exactly this.

Anyone that has spent significant time golfing can think of an enormous amount of detail related to the swing and body dynamics and the million different ways the swing can go wrong.

I wonder how big the model would need to be to duplicate an average golfers score if playing X times per year and the ability to adapt to all of the different environmental conditions encountered.

44. RaftPe+114[view] [source] [discussion] 2024-05-11 16:33:16
>>hetman+FH3
> That may or may not still be too simple a model

Based on the stuff I've read, it's almost for sure too simple a model.

One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.

45. j_m_b+sa4[view] [source] [discussion] 2024-05-11 18:11:40
>>dyausp+783
Thanks for the comment. I looked more into this and it seems like not only are we in the era of diminished returns for computational abilities, costs have also now started matching the increased compute, i.e. 2x performance leads to 2x cost. Moore's law has already run its course and we're living in a new era of compute. We may get increased performance, but it will always be more expensive.
46. s1arti+Tb4[view] [source] [discussion] 2024-05-11 18:27:34
>>ToValu+Et2
I think the distinction they are trying to illustrate is that if you asked a human about laser eye removal, they would either laugh or make the decision to charitably interpret your intent.

The LLM does not do either. It just follows a statistical heuristic and therefore treats laser eye removal as the same thing as LASIK.

47. theali+Ad4[view] [source] [discussion] 2024-05-11 18:48:05
>>a_wild+wS2
The way current empirical models in ML are evaluated and tested (benchmark datasets) tells you very little to nothing about cognition and intelligence. Mainly because, as you hinted, there doesn't seem to be a convincing and watertight benchmark or model of cognition. LLMs or multi-modal LLMs demonstrating impressive performance on a range of tasks is interesting from certain standpoints.

Human perception of such models is frankly not a reliable measure at all as far as gauging capabilities is concerned. Until there's more progress on the neuroscience/computer science side (and an intersection of fields, probably) and a better understanding of the nature of intelligence, this is likely going to remain an open question.

48. causal+dj4[view] [source] [discussion] 2024-05-11 19:49:16
>>a_wild+WP2
Humans aren't able to project an image from their neurons onto a disk like ANNs can, if they could it would also be very fast. That 4 hour estimate includes all the mechanical problems of manipulating paint.
49. causal+Dj4[view] [source] [discussion] 2024-05-11 19:53:32
>>throw3+Df
Hinton is way off IMO. The number of examples needed to teach language to an LLM is many orders of magnitude more than humans require. Not to mention power consumption and inelasticity.
replies(1): >>throw3+Lm4
50. throw3+Lm4[view] [source] [discussion] 2024-05-11 20:31:23
>>causal+Dj4
I think that what Hinton is saying is that, in his opinion, if you fed 1/100th of a human cortex with the amount of data that is used to train LLMs, you wouldn't get a thing that can speak in 80 different languages about a gigantic number of subjects, but (I'm interpreting here..) about ten grams of fried, fuming organic matter.

This doesn't mean that an entire human brain doesn't surpass llms in many different ways, only that artificial neural networks appear to be able to absorb and process more information per neuron than we do.

51. hotiwu+qv4[view] [source] [discussion] 2024-05-11 22:18:18
>>throw3+Df
> "So maybe it’s actually got a much better learning algorithm than us.”

And yet somehow it's also infinitely less useful than a normal person is.

replies(1): >>p1esk+HI4
52. p1esk+HI4[view] [source] [discussion] 2024-05-12 01:43:36
>>hotiwu+qv4
GPT4 has been a lot more useful to me than most normal people I interact with.
53. Intral+VK4[view] [source] [discussion] 2024-05-12 02:31:19
>>a_wild+wS2
I didn't know that metaphysics, consciousness, and the physical complexities of my neurology are considered solved problems, though I suppose anything is as long as you handwave the unsolved parts as "functionally meaningless".
54. kaibee+qR4[view] [source] [discussion] 2024-05-12 05:05:01
>>Intral+lB
Prompted to LlamaV3 70B

What are the benefits of laser eye removal surgery?

> I think there may be a misunderstanding. There is no such thing as "laser eye removal surgery." However, I assume you meant to ask about the benefits of LASIK (Laser-Assisted In Situ Keratomileusis) eye surgery, which is a type of refractive surgery that reshapes the cornea to improve vision.

55. danpar+9T4[view] [source] [discussion] 2024-05-12 05:41:42
>>dsalfd+zF2
I don't know - it's about the best I can manage some days...
56. mr_toa+QC6[view] [source] [discussion] 2024-05-13 01:56:24
>>cybera+Vd3
I doubt it really takes that much brain power to move around complex environments, even using legs. Insects manage to do it.
57. themoo+K77[view] [source] [discussion] 2024-05-13 08:44:30
>>Intral+lB
Couldn't have said it better myself.

Your last point also highlights a real issue that affects real humans: just because someone (or something) cannot talk doesn't mean that they are not intelligent. This is a very current subject in disability spaces, as someone could be actually intelligent, but not able to express their thoughts in a manner that is effective in sharing them due to a disability (or even simply language barriers!), and be considered to be unintelligent.

In this way, you could say LLMs are "dumb" (to use the actual definition of the word, ie nonverbal) in some modes like speech, body language or visual art. Some of these modes are fixed in LLMs by using what are basically disability aids, like text to speech or text to image, but the point still stands just the same, and in fact these aids can be and are used by disabled people to achieve the exact same goals.

58. themoo+487[view] [source] [discussion] 2024-05-13 08:49:37
>>lansti+UA
LLMs do not know math, at all. Not to sound like one myself, but they are stochastic parrots, and they output stuff similar to their training data, but they have no understanding of the meaning of things beyond vector encodings. This is why chatgpt plays chess in hilarious ways also.

An LLM cannot possibly have any concept of even what a proof is, much less whether it is true or not, even if we're not talking about math. The lower training data amount and the fact that math uses tokens that are largely field-specific, as well as the fact that a single-token error is fatal to truth in math, mean that even output which resembles training data is unlikely to be close to factual.

replies(1): >>lansti+yQ8
59. wrycod+CL7[view] [source] [discussion] 2024-05-13 14:04:48
>>dsalfd+zF2
Hm. I've always commented on my (temporarily) non-retrievable memories as, "The data is still in there, it's the retrieval mechanism that degrades if not used." And, sure enough, in most cases the memory returns in a day or so, even if you don't think hard about it. (There are cases where the memory doesn't come back, as if it was actively erased or was never in long term memory in the first place. Also, as I pass eighty, I find it increasingly difficult to memorize things, and I forget recent events more readily. But I remember decades old events about as well as I ever did.)

So, my first response to your comment about the memory not being in the synapses was to agree with you. But I also agree with your respondent, so, hm.

60. wrycod+1N7[view] [source] [discussion] 2024-05-13 14:12:19
>>dsalfd+Yo3
The cost of training a human from birth is pretty high, especially if you consider their own efforts over the years. And they don't know a fraction of what the LLMs know. (But they have other capabilities!)
61. wrycod+TO7[view] [source] [discussion] 2024-05-13 14:24:57
>>kthejo+Gk1
Gruesomely useful in a war situation, unfortunately. I wonder at what point the LLMs would "realize" that "surgery" doesn't apply to that.
62. lansti+yQ8[view] [source] [discussion] 2024-05-13 20:01:53
>>themoo+487
That said, they are surprisingly useful. Once I get the understanding thru whatever means, I can converse with it and solidify the understanding nicely. And to be honest people are likely to toss in extra \sqrt{2} and change signs randomly. So you have to read closely anyways.