zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. atdt+SZ[view] [source] 2025-05-26 01:19:36
>>mef+(OP)
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.

◧◩
2. lovepa+v61[view] [source] 2025-05-26 02:29:36
>>atdt+SZ
> AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution

I would push back on this a little bit. While it has not helped us to understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it became hard to deny the similarity to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.

The big lesson from AI development in the last 10 years, for me, has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics. Theories often made the mistake of giving human observers some kind of special importance, which was later found to be the reason those theories failed to generalize.

◧◩◪
3. user_7+ih1[view] [source] 2025-05-26 04:54:30
>>lovepa+v61
> The big lesson from AI development in the last 10 years, for me, has been "I guess humans really aren't so special after all"

Instead, I would take the opposite view.

How wonderful it is that, with naturally evolved processes and neural structures, we have been able to create what we have. Van Gogh’s paintings came out of the human brain. The Queens of the Skies - hundreds of tons of metal and composites flying across continents in the form of a Boeing 747 or an A380 - were designed by the human brain. We went to space, studied nature (and run conservation programs for organisms we have found to need help), and took pictures of the Pillars of Creation, which are so incredibly far away… all with such a “puny” structure a few cm in diameter? I think that’s freaking amazing.

◧◩◪◨
4. maaaaa+bR1[view] [source] 2025-05-26 11:23:37
>>user_7+ih1
"Brain_s_". I find we (me included) generally overlook/underestimate the distributed nature of human intelligence, included in the AI field. That's why when I first heard of mixture of experts I was thrilled about the idea and the potential. (One could also see similarities in random forest). I believe a path to AGI(tm) would be to reproduce the evolution of human intelligence artificially. Start with small models training bigger and bigger models and let the bigger successfull models (insert RL, genetic algos, etc.) "reproduce" and teach newer models from scratch. Having different model architecture cohabit could maybe even lead to the kind of specializations we see in parts of the brain