I would push back on this a little bit. While it has not helped us to understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it's hard to deny the similarities to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.
The big lesson from AI development in the last 10 years for me has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics. Theories often made the mistake of giving human observers some kind of special importance, which was later discovered to be the reason those theories failed to generalize.
Instead, I would take the opposite view.
How wonderful is it that, with naturally evolved processes and neural structures, we have been able to create what we have. Van Gogh’s paintings came out of the human brain. The Queens of the Skies - hundreds of tons of metal and composites flying across continents in the form of a Boeing 747 or an A380 - were designed by the human brain. We went to space, studied nature (and have conservation programs for organisms we found to need help), and took pictures of the Pillars of Creation, which are so incredibly far away… all with such a “puny” structure a few cm in diameter? I think that’s freaking amazing.
I was reading a reddit post the other day where a guy lost his crypto holdings because he entered his recovery phrase somewhere he shouldn't have. We question the intelligence of LLMs because they might open a website, read something nefarious, and then do it. But here we have real humans doing the exact same thing...
> I guess humans really aren't so special after all
No, they are not. But we are still far from getting there with the current LLMs, and I suspect mimicking the human brain won't be the best path forward.
I'd wager that a motivation in designing these systems is so that they do not make these mistakes. Otherwise what's the point, really?
This is a crazy take to me. As compared to what? The machines that we built?
Until we discover comparably intelligent life in the universe I think it's fair to say that we are indeed very special.
Ah, but these wizards created a magical entity that can also do magic! Wizards must not be so special after all...
(Of course, there’s plenty of sci-fi where conscious entities manifest themselves as abstract balls of pure energy or the like; except for some reason those balls still think in the same way we do, get assigned the same motivations, sometimes even speak our language, etc., which makes it, in a way, even less realistic than the walking and talking human-cat hybrid you’d see in Elder Scrolls.)
Whenever we ponder questions of intelligence and consciousness, the same pitfall awaits.
Since we don’t have an objective definition of consciousness or intelligence (and in all likelihood we can’t have one, because any formal attempt at such wouldn’t get very far due to being attempted by the same thing that’s being defined), the only definition that makes sense is, in crude language, “something like what we are”. There’s a vague feeling that it has to do with free will, self-awareness, etc.; however, all of it is also influenced by the nature of us all being parts of some big figurative anthill—assuming your sense of self only arises as you model yourself against the other (starting with your parents/caretakers and on), a standalone human that evolved in an emptiness without others could not be self-aware in the way we are—i.e., it would not possess human intelligence; a conclusion supported by our natural-scientific observations, which reject the possibility of a being of this shape and form ever evolving in the first place.
In other words, the more different some kind of intelligence is from ours, the less it would look like intelligence to us—which makes the search for alien intelligence in space somewhat tragically futile (if it exists, we wouldn’t recognize it unless it just happens to be like us), but opens up exciting opportunities for finding alien but not-too-alien intelligence right on this planet (almost Douglas Adams style, minus dolphins speaking English).
There’s an extra trick when it comes to LLMs. In case of alien life, the possibility of a radically different kind of consciousness producing output that closely mimics our own is almost impossible (if our prior assumption is correct, then for all intents and purposes truly alien, non-meatbag-scale kind of intelligence might not be able to recognize ours in the first place, just like we wouldn’t recognize alien intelligence). However, the LLMs are designed to mimic the most social aspect of our behavior, our communication aimed at fellow humans; so when an LLM produces sufficiently human-like output—even if it has a very different kind of consciousness[0] or no consciousness at all (more likely, though as we concluded above we can’t distinguish between the two cases anyway)—our minds are primed to see it as a manifestation of [which would be human-like] intelligence, even if there’s nothing that would suggest such judging by the way it’s created (which is radically different from the way we’ve been creating intelligent life so far, wink-wink), by the substrate it runs on, if not by the way it actually works (which per our conclusion above we might never be able to conclusively determine about our own minds, without resorting to unfalsifiable philosophical assumptions for at least some aspects of it).
So yes, I’d say humans are special, if nothing else then because by the only usable (if somewhat circular) definition of what we are there’s absolutely nothing like us around, and in all likelihood can never be. (That’s not to say that something not like us isn’t special in its own way—I mean, think of the dolphins!—but given we, due to not being it, would not be able to properly understand it, it just never hits the same.)
[0] Which if true would be completely asocial (given it neither exists in groups nor depends on others for survival) and therefore drastically different from ours.
Now LLMs made a big breakthrough in that they showed we can do decent fuzzy reasoning in practice. But at the cost of nobody understanding the underlying process formally.
If we had a good unified (formal) theory of fuzzy reasoning, we could build models that reason better (or at least more predictably). But we won't get a better theory by scaling the existing models; I think Chomsky is right about that.
We lack the goal, not the means. If I am asking an LLM a question, what answer do I want? A playfully creative one? A strictly logical one? A pleasingly sycophantic one? A harshly critical one? An out-of-the-box devil's advocate one? A beautiful one? A practical one? We have no clue how to express these modes in logical reasoning.
We have LLMs that can get some boilerplate right if you use them in a greenfield project, and that will repeatedly mess up your code once it grows enough for you to actually need assistance grokking it.
Isn't Physics trying to describe the natural world? I'm guessing you are taking two positions here that are causing me confusion with your statement: 1) that our minds can be explained strictly through physical processes, and 2) that our minds, including our intelligence, are outside of the domain of Physics.
If you take 1) to be true, then it follows that Physics, at least theoretically, should be able to explain intelligence. It may be intractably hard, like it might be intractably hard to have physics describe and predict the motions of more than two planetary bodies.
I guess I'm saying that Physical laws ARE natural laws. I think you might be thinking that natural laws refer solely to all that messy, living stuff.
They spent the whole budget on the salt vampire and never recovered.
As someone who has worked in linguistics, I don't really see what you're talking about. Minimalism is not full of exceptions (please elaborate on a specific example if you have one). Minimalism was created to make the old theory, Government and Binding, simpler.
[0] Genuinely not unlike how a congregation of gelled-together humans is an entity that can achieve much more than an individual human.
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions... Perhaps it's all just emergent properties of some messy evolved substrate.
Yeah, it is very likely that there are no such laws; it's the substrate. The fruit fly brain (let alone human) has been mapped, and we've figured out that it's not just the synapse count, but the 'weights' that matter too [0]. Mind you, those weights adjust in real time when a living animal is out there.
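To make that concrete, here's a toy sketch of my own (made-up numbers, not taken from the connectome work in [0]): the same wiring diagram can do very different things depending on the weights hung on each connection.

    # Toy sketch: identical wiring, different weights, different behavior.
    def step(activity, edges, weights):
        # One update of a tiny rate-based network.
        # activity: neuron -> current activation
        # edges:    list of (pre, post) connections (the "connectome")
        # weights:  (pre, post) -> synaptic strength
        new = {n: 0.0 for n in activity}
        for pre, post in edges:
            new[post] += activity[pre] * weights[(pre, post)]
        # clamp to [0, 1] as a crude saturating nonlinearity
        return {n: max(0.0, min(1.0, v)) for n, v in new.items()}

    edges = [("A", "C"), ("B", "C")]                  # same wiring in both cases
    excitatory = {("A", "C"): 1.0, ("B", "C"): 1.0}
    mixed      = {("A", "C"): 1.0, ("B", "C"): -1.0}  # same synapse count, different weights

    start = {"A": 1.0, "B": 1.0, "C": 0.0}
    print(step(start, edges, excitatory))  # {'A': 0.0, 'B': 0.0, 'C': 1.0} -- C fires
    print(step(start, edges, mixed))       # {'A': 0.0, 'B': 0.0, 'C': 0.0} -- C stays silent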
You'll see in the literature that there are people with some 'lucky' form of hydrocephalus where the cortex is compressed as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
My pet theory: We need memristors [2] to better represent things. But that takes redesigning the computer from the metal on up, so it is unlikely to happen any time soon with the current AI craze.
> The big lesson from AI development in the last 10 years for me has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics.
Yeah, biologists get there too, just the other way around, with animals and humans. Like, dogs make vitamin C internally, and humans have that gene too; it's just dormant, ready for evolution (or genetic engineering) to reactivate. That said, these neuroscience issues with us and the other great apes are somewhat large and strange. I'm not big into that literature, but from what little I know, the exact mechanisms and processes that get you from tool-using orangutans to tool-using humans seem to be a bit strange and harder for us to grasp. Again, not in that field though.
In the end though, humans are special. We're the only ones on the planet that ever really asked a question. There's a lot to us, and we're actually pretty strange in the end. There's many centuries of work to do in biology; we're just at the wading stage of that ocean.
[0] https://en.wikipedia.org/wiki/Drosophila_connectome
That said, I began with "A Treatise of Human Nature" around the age of 17, translated into my native language (his works are not an easy read in English, IMO), due to my interest in both philosophy and psychology.
If you haven't read them yet, I would certainly recommend them. I would recommend the latter I mentioned even if you are not interested in psychology (but may be interested in epistemology, philosophy of mind, and/or ethics), as he gets into detail about his "impressions" vs "ideas".
Additionally, he is famous for his "problem of induction", which you may already know.
In a way, this collaboration between the machine and the human is better than what came before, because now productive actions can be taken sooner, and mathematicians do not have to doubt whether they are searching for a proof that exists.
- Predicate Fronting in Free Relatives: In sentences like “What John saw was a surprise,” labeling the fronted predicate is not without problems; Merge doesn’t yield a clear head.
- Optional Verb Movement in Persian: Yes-no questions where verbs can optionally move (e.g., “Did you go?” vs. “You went?”) mess up feature-checking’s binary mode.
- Non-Matching Free Relatives with Pied-Piping: Structures like “In whichever city you live, you’ll find culture” mess up standard labeling and need extra stipulations.
- Some Subjects in Finnish: Nominative vs. non-nominative subjects (e.g., “Minua kylmä” [me-ACC cold]) complicate Minimalist case assignment.
These cases seem totally fascinating. Have you any links to examples or more information? (I'm also curious about the detail of them tending to work in mail rooms.)