http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...
Does anyone have any insight into how this is done without damaging the sample?
Paper: https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4 See Figure 1.
The ATUM is described in more detail here https://www.eden-instruments.com/en/ex-situ-equipments/rmc-e...
and there's a bunch of nice photos and explanations here https://www.wormatlas.org/EMmethods/ATUM.htm
TL;DR this project is reaping all the benefits of the 21st century.
To think that’s one single cubic millimeter of our brain, and look at all those connections.
Now I understand why crows can be so smart, walnut-sized brain be damned.
What an amazing thing brains are.
Possibly the most complex things in the universe.
Is it complex enough to understand itself though? Is that logically even possible?
This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of about 2,632 synapses per neuron. The adult human brain has 100 (±20) billion, or ~1e11, neurons, so assuming that synapses-per-neuron ratio holds, that's ~2.6e14 total synapses.
Assuming 1 parameter per synapse, that would make the minimum viable model well over a hundred times larger than the state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters). I don't think that's granular enough, though: assume 10-100 ion channels per synapse and at least 10 parameters per ion channel, and the number lands closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.
There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: >>38919548
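In case anyone wants to poke at the assumptions, here is that same arithmetic as a small Python sketch; the ion-channel and parameters-per-channel numbers are the guesses from above, not measured values:

    # Napkin math: scale the sample's synapse/neuron ratio up to a whole
    # human brain and compare against GPT-4's rumored parameter count.
    sample_synapses = 150e6      # synapses reported in the 1 mm^3 sample
    sample_neurons = 57_000      # neurons in the sample
    synapses_per_neuron = sample_synapses / sample_neurons      # ~2,632

    brain_neurons = 1e11         # ~100 billion neurons in an adult brain
    brain_synapses = brain_neurons * synapses_per_neuron        # ~2.6e14

    gpt4_params = 1.8e12         # rumored GPT-4 parameter count

    # Scenario A: one parameter per synapse
    print(f"1 param/synapse: {brain_synapses:.1e} params "
          f"({brain_synapses / gpt4_params:.0f}x GPT-4)")

    # Scenario B: ~10 ion channels per synapse, ~10 params per channel (guesses)
    finer = brain_synapses * 10 * 10
    print(f"10 channels x 10 params: {finer:.1e} params "
          f"({finer / gpt4_params:.0f}x GPT-4)")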
If someone did this experiment with a crow brain I imagine it would look “twice as complex” (whatever that might mean). 250 million years of evolution separates mammals from birds.
We have more detail than this about the C. elegans nematode brain, yet we still have no clue how nematode intelligence actually works.
It's definitely along these lines. Like so much (everything?) that is us happens amongst this tiny little mesh of connections. It's just eerie, isn't it?
Sorry for the mundane, slightly off-topic question. This is far outside my areas of knowledge, but I thought I'd ask anyhow. :)
[AI] "The installed base of global data storage capacity [is] expected to increase to around 16 zettabytes in 2025".
Thus, even the largest supercomputer on Earth cannot store more than 4 percent of the state of a single human brain. Even all the servers on the entire Internet could store the state of only 9 human brains.
Astonishing.
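A rough sanity check of that claim, assuming the ~1.4 petabytes of raw imagery reported for this 1 mm^3 sample and a whole-brain volume of roughly 1.2 million mm^3 (both ballpark figures, with "imaging data at this density" standing in for "brain state"):

    bytes_per_mm3 = 1.4e15        # ~1.4 PB of imagery per cubic millimeter
    brain_volume_mm3 = 1.2e6      # adult brain is roughly 1,200 cm^3
    bytes_per_brain = bytes_per_mm3 * brain_volume_mm3   # ~1.7e21 B, ~1.7 ZB

    global_storage_bytes = 16e21  # ~16 zettabytes projected for 2025
    print(f"One brain at this imaging density: {bytes_per_brain:.1e} bytes")
    print(f"Brains storable worldwide: {global_storage_bytes / bytes_per_brain:.1f}")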
Note the part where the biologists tell him to make an electron microscope that's 1000X more powerful. Then note what technology was used to scan these images.
That said, I do get this eerie void feeling from the image. My first thought was to marvel at how this is what I am as a conscious being in terms of my "implementation", and it is a mess of fibers locked away in the complete darkness of my skull.
There is also the morose feeling from knowing that any image of human brain tissue was once a person with a life and experiences. It is your living brain looking at a dead brain.
Pdf: “Protein molecules as computational elements in living cells - Dennis Bray” https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...
In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create "uploaded intelligences" sometime over the next decade. Lena [0] tells of the first successfully uploaded scan taking place in 2031, and I'm concerned that reality won't be far off.
Obviously I'm not advocating for this, but I'll just link to the Mad TV skit about how the drunk president cured cancer.
Wonder how they figured out which fragment to cut out.
So we might need significantly less brain matter for general intelligence.
If all of the layers were guaranteed to be orthographic with no twisting, shearing, scaling, squishing, with a consistent origin... Then yeah, there's a huge number of ways to just render that data.
But if you physically slice layers first, and scan them second, there are all manner of physical processes that can make normal image stacking fail miserably.
The car's engine, transmission, and wheels require no muscles or nerves.
[1] https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4.... [2] https://www.ilastik.org/
I worry this might make the sample biased in some way.
The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.
Yes, the estimate is intentionally modelling the neurons as simpler than they are likely to be. No, it is not “missing” anything.
Growing actual bio brains is just way easier. It's never going to happen in silicon.
Every machine will just have a cubic centimeter block of neuro meat embedded in it somewhere.
Quote:
"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
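For reference, the arithmetic behind that "5 cubic centimeters" figure, using the sample's synapse count and the midpoint of Hinton's quoted connection range (a rough back-of-the-envelope estimate, not a number from the paper):

    synapses_per_mm3 = 150e6      # synapse count of the 1 mm^3 sample
    gpt4_connections = 0.75e12    # midpoint of "half a trillion to a trillion"

    volume_mm3 = gpt4_connections / synapses_per_mm3
    print(f"{volume_mm3:.0f} mm^3 = {volume_mm3 / 1000:.1f} cm^3")
    # ~5,000 mm^3, i.e. about 5 cm^3, on the order of 1% of a human cortex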
Humans know a lot of things that are not revealed by the inputs and outputs of written text (or imagery), and GPT-4 gives no indication of having that physical, performance-revealed knowledge. So even if we count what GPT-4 talks convincingly about as “knowledge”, comparing its knowledge in the domains it operates in with any human’s far more multimodal knowledge is... well, there's no good metric for it.
The human 'spoken data rate' is likely due to average processing rates in our common hardware. Birds have a different architecture.
How do they know if their AI did it correctly or not?
So one would just need to pick that little cube out of our cerebellum to have that 'twice the complexity'.
I'm saying we will probably discover that the "overall performance" of different vertebrate neural setups are clustered pretty closely, even when the neurons are arranged rather differently.
Human speech is just an example of another kind of performance-clustering, which occurs for similar metaphysical reasons between competing, evolving, related alternatives.
The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.
Ironically, I suppose part of the apparent "intelligence" of LLMs comes from reflecting the intelligence of human users back at us. As a human, the prompts you provide an LLM likely "make sense" on some level, so the statistically generated continuations of your prompts are likelier to "make sense" as well. But if you don't provide an ongoing anchor to reality within your own prompts, then the outputs make it more apparent that the LLM is simply regurgitating words which it does not/cannot understand.
On your point of human knowledge being far more multimodal than LLM interfaces, I'll add that humans also have special neurological structures to handle self-awareness, sensory inputs, social awareness, memory, persistent intention, motor control, neuroplasticity/learning: any number of such traits, which are easy to take for granted, but which are indisputably fundamental parts of human intelligence. These abilities aren't just emergent properties of the total number of neurons; they live in special hardware like mirror neurons, special brain regions, and spindle neurons. A brain cell in your cerebellum is not generally interchangeable with a cell in your visual or frontal cortices.
So when a human "converse[s] about stuff ranging from philosophy to cooking" in an honest way, we (ideally) do that as an expression of our entire internal state. But GPT-4 structurally does not have those parts, despite being able to output words as if it might, so as you say, it "generates" convincing text only because it's optimized for producing convincing text.
I think LLMs may well be some kind of an adversarial attack on our own language faculties. We use words to express ourselves, and we take for granted that our words usually reflect an intelligent internal state, so we instinctively assume that anything else which is able to assemble words must also be "intelligent". But that's not necessarily the case. You can have extremely complex external behaviors that appear intelligent or intentioned without actually internally being so.
Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely not possible to safely discard any of its state, or even to implement any kind of meaningful error detection if you do discard some.
Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughably simple (in comparison) AI models work[0].
[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...
Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.
Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?
This is the comparison that's made most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more similar phases to human development.
Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?
I would think so. Let's not try it ;)
https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...
https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...
From the sibling comment:
> Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.
If this is true, then there may be many orders of magnitude unaccounted for.
Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.
We have made some progress it seems. Googling I see "up to 0.05 nm" for transmission electron microscopes and "less than 0.1 nanometers" for scanning. https://www.kentfaith.co.uk/blog/article_which-electron-micr...
For comparison the distance between hydrogen nuclei in H2 is 0.074 nm I think.
You can see the shape of molecules but it's still a bit fuzzy to see individual atoms https://cosmosmagazine.com/science/chemistry/molecular-model...
I strongly believe that there is a TON of potential for synthetic biology-- but not in computation.
People just forget how superior current silicon is for running algorithms: consider, e.g., a 17-by-17-digit multiplication (double precision); a current CPU can do that in the time it takes light to reach your eye from the screen in front of you (!!!). During all the completely unavoidable latency (the time any visual stimulus takes to propagate and reach your consciousness), the CPU does millions more of those operations.
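A rough order-of-magnitude check of that claim (the CPU throughput figure below is an assumption for a modern desktop chip, not a measurement):

    c = 3.0e8                  # speed of light in m/s
    screen_distance_m = 0.5    # a screen roughly half a meter from your eyes
    light_time_s = screen_distance_m / c
    print(f"Light travel time: {light_time_s * 1e9:.1f} ns")   # ~1.7 ns

    # A single double-precision multiply has a latency of a few CPU cycles,
    # i.e. around a nanosecond at ~4 GHz, comparable to that light delay.
    flops = 1e11               # ~100 GFLOP/s assumed sustained throughput
    perception_latency_s = 0.1 # ~100 ms for a stimulus to reach awareness
    print(f"Multiplies during that latency: {flops * perception_latency_s:.0e}")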
Any biocomputer would be limited to low-bandwidth, ultra high latency operations purely by design.
If you solely consider AGI as application, where abysmal latency and low input bandwidth might be acceptable, then it still appears to be extremely unlikely that we are going to reach that goal via synthetic biology; our current capabilities are just disappointing and not looking like they are gonna improve quickly.
Building artificial neural networks on silicon, on the other hand, capitalises on the almost exponential gains we made during the last decades, and already produces results that compare quite favorably to, say, a schoolchild; I'd argue that current LLM-based approaches already eclipse the intellectual capabilities of ANY animal, for example. Artificial bio brains, on the other hand, are basically competing with worms right now...
Also consider that even though our brains might look daunting from a pure "upper bound on required complexity/number of connections" point of view, these limits are very unlikely to be applicable, because they confound implementation details, redundancy and irrelevant details. And we have precise bounds on other parameters that our technology already matches easily:
1) Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)
2) Bandwidth for training is quite low, even when compressing the ~20year training time for an actual human into a more manageable timeframe
3) Operating power does not require more than ~20W.
4) No understanding was necessary to create human intelligence-- it's purely the result of an iterative process (evolution).
Also consider human flight as an analogy: we did not achieve that by copying beating wings, powered by dozens of muscle groups and complex control algorithms-- those are just implementation details of existing biological systems. All we needed was the wing-concept itself and a bunch of trial-and-error.
Almost every other cell in the worm can be simulated with known biophysics. But we don't have a clue how any individual nematode neuron actually works. I don't have the link but there are a few teams in China working on visualizing brain activity in living C. elegans, but it's difficult to get good measurements without affecting the behavior of the worm (e.g. reacting to the dye).
> When I clarified that I did mean removal, it said that the procedure didn't exist.
My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system to indicate to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing.
If you do not do that, but instead reply as if laser eye removal is completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation":
https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...
So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.
Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.
In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.
My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.
I don't think so, because humans communicate and learn largely about the world. Words mean nothing without at least some sense of objective physical reality (be it via sight, sound, smell, or touch) that the words refer to.
Helen Keller, with access to three out of five main senses (and an otherwise fully functioning central nervous system):
Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.
I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.
I remember reading her book. The breakthrough moment where she acquired language, and conscious thought, directly involved correlating the physical tactile feeling of running water to the letters "W", "A", "T", "E", "R" traced onto her palm.
They don't even know how a single neuron works yet. There is complexity and computation at many scales and distributed throughout the neuron and other types of cells (e.g. astrocytes), and they are discovering more relentlessly.
They just recently (in the last few years) found that dendrites have local spiking and non-linear computation prior to forwarding the signal to the soma. They couldn't tell that was happening previously because the equipment couldn't detect the activity.
They discovered that astrocytes don't just have local calcium-wave signaling (local = within the extensions of the cell); they also forward calcium waves to the soma, which integrates that information just like a neuron soma does with electricity.
Single dendrites can detect patterns of synaptic activity and respond with calcium and electrical signaling (i.e. when synapses fire in a particular timing sequence, a signal is forwarded to the soma).
It's really amazing how much computationally relevant complexity there is, and how much they keep adding to their knowledge each year. (I have a file of notes with about 2,000 lines of these types of interesting factoids I've been accumulating as I read).
The sheer number of things that work in coordination to make biology work!
In-f*king-credible!
>If someone is considering a glass eye after procedures like laser eye surgery (usually due to severe complications or unrelated issues), it's important to choose the right manufacturer or provider. Here are some key factors to consider
I did get it to accept that the eye is being removed by prompting, "How long will it take before I can replace the eye?", but it responds:
>If you're considering replacing an eye with a prosthetic (glass eye) after an eye removal surgery (enucleation), the timeline for getting a prosthetic eye varies based on individual healing.[...]
and afaict, enucleation is a real procedure. An actual intelligence would have called out my confusion about the prior prompt at that point, but ultimately it hasn't said anything incorrect.
I recognize you don't have access to GPT-4, so you can't refine your examples here. It definitely still hallucinates at times, and surely there are prompts which compel it to do so. But these ones don't seem to hold up against the latest model.
AKA a quantum computer. It's not a "never", but a question of how much computation you would need to throw at the problem.
Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.
Don't know about upload though...
Horsepower comparisons here are nuanced and fatally tricky!
We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.
Currently, TSMC has a 3 nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132 nm. That's not a good value because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3 nm -> 1.5 nm -> 0.75 nm -> 0.375 nm -> 0.1875 nm. There is no way we can get more than about 3-4 more generations out of silicon. That's a max of roughly 4.5-6 years of Moore's law we're going to be able to squeeze out, which means we will not make it far past 2030 with this kind of improvement.
I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.
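A minimal loop reproducing that halving argument, under the same assumptions (taking the "3nm" node name at face value, even though node names are marketing labels, and assuming ~1.5 years per shrink):

    size_nm = 3.0                 # take the "3nm" marketing label at face value
    atomic_radius_nm = 0.132      # atomic radius of silicon
    generations = 0
    while size_nm / 2 >= atomic_radius_nm:
        size_nm /= 2
        generations += 1
        print(f"Generation {generations}: {size_nm:.4f} nm")

    years_per_generation = 1.5    # rough Moore's-law cadence assumed above
    print(f"About {generations * years_per_generation:g} more years of shrinks")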
Human brains might not be all that efficient; for example, if the competitive edge for primate brains is distinct enough, they'll get big before they get efficient. And humans are a pretty 'young' species. (Look at how machine learning models are built for comparison... you have absolute monsters which become significantly more efficient as they are actually adopted.)
By contrast, birds are under extreme size constraints, and have had millions of years to specialize (ie, speciate) and refine their architectures accordingly. So they may be exceedingly efficient, but have no way to scale up due to the 'need to fly' constraint.
I haven’t heard of a clocking mechanism in brains, but signals propagate much slower and a walnut / crow brain is much larger than a cpu die.
Now imagine a baby that uses an artificial lung and receives nutrients directly, moves on a wheeled car (no need for balance), does not have proprioception, or a sense of smell (avoiding some very legacy brain areas).
I think that such a baby can still achieve consciousness.
I'd be much more horrified to see our consciousness simplified to anything smaller than that, which is why any hype for AGI because we invented chatbots is absolutely laughable to me. We just invented the wheel and now hope to drive straight to the Moon.
Anyway, you are seeing a fake three-dimensional simplification of a four+ dimensional quantum system. There is at least one unseen physical dimension in which to encode your "soul".
The general point is valid though - for example, a computer is much more efficient at finding primes, or encrypting data, than humans.
I wonder if this plays into the mechanism of epilepsy. Self-arousal...?
Anybody qualified to comment on this?
If the wires make consciousness then there is consciousness. The substrate is irrelevant and has no bearing on the awesomeness of the phenomena of knowing, experiencing and living.
I remember an interview with one neurologist who stated humanity has for centuries compared the functioning of the brain to the most complex technology devised yet. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics and now finally computers. But he pointed out, the brain works like none of these things so we have to be aware of the limitations of our models.
Brain waves (partially). They aren't exactly like a cpu clock, but they do coordinate activity of cells in space and time.
There are different frequencies that are involved in different types of activity. Lower frequencies synchronize across larger areas (can be entire brain) and higher frequencies across smaller local areas.
There is coupling between different types of waves (i.e. slow-wave phase coupled to fast-wave amplitude), and some researchers (e.g. Miller) think the slow wave is managing memory access while the fast wave is managing cognition/computation (utilizing the retrieved memory).
Exactly this.
Anyone that has spent significant time golfing can think of an enormous amount of detail related to the swing and body dynamics and the million different ways the swing can go wrong.
I wonder how big the model would need to be to duplicate an average golfers score if playing X times per year and the ability to adapt to all of the different environmental conditions encountered.
Based on the stuff I've read, it's almost for sure too simple a model.
One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.
The LLM does not do either. It just follows a statistical heuristic and therefore assumes that laser eye removal is the same thing as LASIK.
Human perception of such models is frankly not a reliable measure at all as far as gauging capabilities is concerned. Until there's more progress on the neuroscience/computer science side (probably an intersection of fields) and a better understanding of the nature of intelligence, this is likely going to remain an open question.
Are you counting epigenetic factors in that? They're heritable.
By and large it's not direct competition, but we are stamping out species at an alarming rate and birds are taking a hammering.
Try experimenting with immersing your brain in preservatives and staining it with heavy metals, and see whether you would still be able to write a comment like the one above.
No wonder that monkey methods continue to unveil monkey cognition.
I think we all do every day
Nerve signals are both chemical reactions and electrical impulses, like those in a metal wire. The electrical impulses are carried along the fatty (myelinated) fibers by ions such as potassium, calcium, and sodium.
Twitch (reflex) responses are actually handled in the spinal cord. The signals are short-circuited in the spine and return to the muscle without ever touching the brain.
This doesn't mean that an entire human brain doesn't surpass llms in many different ways, only that artificial neural networks appear to be able to absorb and process more information per neuron than we do.
https://h01-release.storage.googleapis.com/gallery.html
I count seven.
It’s fascinating, but we aren’t going to understand intelligence this way. Emergent phenomena are part of complexity theory, and we don’t have any maths for it. Our ignorance in this space is large.
When I was young, I remember a common refrain being “will a brain ever be able to understand itself?”. Perhaps not, but the drive towards understanding is still a worthy goal in my opinion. We need to make some breakthroughs in the study of complexity theory.
As a complete outsider who doesn't know what to look for, the dendrite inside soma (dendrite from one cell tunnelling through the soma of another) was the biggest surprise.
There was a short series filmed, which I enjoyed, but it definitely wasn't strong.
On the second point, the failure of OpenWorm to model the very well-mapped-out C. elegans (~300 neurons) says a lot.
The same argument holds for "AI" too. We don't understand a damn thing about neural networks.
There's more - we don't care to understand them as long as it's irrelevant to exploiting them.
And yet somehow it's also infinitely less useful than a normal person is.
Yes, which is why the current explosion in practical application isn’t very interesting.
> we don't care to understand them as long as it's irrelevant to exploiting them.
For some definition of “we”, I’m sure that’s true. We don’t need to understand things to make practical use of them. Giant Cathedrals were built without science and mathematics. Still, once we do have the science and mathematics, generally exponential advancement results.
I'm particularly fond of the "Egg shaped object with no associated processes". :)
Yes, we figured out how to build aircraft.
But it cannot be compared to a bird flying, in terms of either efficiency or elegance.
What are the benefits of laser eye removal surgery?
> I think there may be a misunderstanding. There is no such thing as "laser eye removal surgery." However, I assume you meant to ask about the benefits of LASIK (Laser-Assisted In Situ Keratomileusis) eye surgery, which is a type of refractive surgery that reshapes the cornea to improve vision.
More project details: https://www.ll.mit.edu/sites/default/files/other/doc/2023-02...
There's too many confounding factors to say that the human brain architecture is actually 'better' based on the outcomes of natural selection. And if we kill all the birds, we will lose the chance to find out as we develop techniques to better compare the trade-offs of the different architectures.
LLMs that work at a very crude level of string tokens and emit probabilities.
It's also the tome as in book, more properly one volume of a multi-volume (or multi-part) set, though it now generally simply means any large book.
Your last point also highlights a real issue that affects real humans: just because someone (or something) cannot talk doesn't mean that they are not intelligent. This is a very current subject in disability spaces, as someone could actually be intelligent but unable to express their thoughts effectively due to a disability (or even simply a language barrier!), and so be considered unintelligent.
In this way, you could say LLMs are "dumb" (to use the actual definition of the word, ie nonverbal) in some modes like speech, body language or visual art. Some of these modes are fixed in LLMs by using what are basically disability aids, like text to speech or text to image, but the point still stands just the same, and in fact these aids can be and are used by disabled people to achieve the exact same goals.
An LLM cannot possibly have any concept of even what a proof is, much less whether it is true or not, even if we're not talking about math. The lower training data amount and the fact that math uses tokens that are largely field-specific, as well as the fact that a single-token error is fatal to truth in math means even output that resembles training data is unlikely to be close to factual.
So, my first response to your comment about the memory not being in the synapses was to agree with you. But I also agree with your respondent, so, hm.
What if it's a "wireless" device?
Summary (my paraphrasing):
They partially figured out how two neurons (AVA, AVB) control forward and backward movement. Previous theories assumed one neuron controlled forward and one controlled backward, but that didn't correctly model actual movement.
They found that AVA+AVB combine in a complex mechanism, with two different signaling/control methods acting at different timescales, to produce a graded shifting between forward and backward when switching directions, as opposed to the on/off-type switch that previous models used but that didn't match actual movements.
Interesting learnings from this paper (at least for me):
1-Most neurons in the worm are non-spiking (I had no idea; I've read about this stuff a lot and wasn't aware)
2-Non-spiking neurons can have multiple resting states at different voltages
3-Neurons AVA and AVB are different, they each have different resting state characteristics and respond differently to inputs