zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. atdt+SZ[view] [source] 2025-05-26 01:19:36
>>mef+(OP)
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.

◧◩
2. PeterS+Tt1[view] [source] 2025-05-26 07:21:22
>>atdt+SZ
It baffles me how dismissive academics overall seem of recent breakthroughs in sub-symbolic approaches as models from which we can learn about 'intelligence'.

It is as if a biochemist looked at a human brain and concluded there is no 'intelligence' there at all, just a whole lot of electrochemical reactions. It fully ignores the potential for emergence.

Don't misunderstand me: I'm not saying 'AGI has arrived', but even current LLMs most certainly hold interesting lessons for the science of human language development and evolution. What can the success of transfer learning in these models contribute to the debates on universal language faculties? How do invariants correlate across LLM systems and humans?

◧◩◪
3. Barrin+yv1[view] [source] 2025-05-26 07:38:04
>>PeterS+Tt1
>It fully ignores the potential for emergence.

There are two kinds of emergence: one scientific, the other a strange, vacuous notion invoked in the absence of any theory or explanation.

The first is the emergence we invoke when we talk about how, for example, gas or liquid states, or combustibility, emerge from certain chemical or physical properties of particles. It's not just that they're emergent; we can explain how they emerge and how their properties are already present at the lower level of abstraction. Emergence properly understood is always reducible to lower-level states; it is not a magic word for when you don't know how something works.

In these AI debates, however, that is exactly how "emergence" is used: people simply assert it, as if it followed necessarily from their assumptions, without offering a scientific explanation. (The same is true of various other topics, like consciousness, or what have you.) This is pointless; it's a sort of god of the gaps disguised as an argument. When Chomsky talks about science proper, he correctly points out that these kinds of arguments have no place in it, because the point of science is to build coherent theories.

◧◩◪◨
4. PeterS+Sw1[view] [source] 2025-05-26 07:52:30
>>Barrin+yv1
Nobody is claiming mystical gaps. There is no deus ex machina in emergence. However, a phenomenon that is stable in a higher-level model may, for example, be fully dynamic in the lower-level model.
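
To make that concrete, here is a minimal toy sketch in Python (my own illustration, nothing LLM-specific, and the names are made up): at the micro level every particle's velocity changes at every step, yet the macro-level "temperature" is perfectly stable, and the stability is explicitly reducible to a conservation law in the micro dynamics.

    import random

    random.seed(0)

    # Toy "gas": each particle is just a velocity (mass = 1).
    velocities = [random.gauss(0.0, 1.0) for _ in range(10_000)]

    def temperature(vs):
        # Macro-level observable: mean kinetic energy per particle.
        return sum(v * v for v in vs) / (2 * len(vs))

    for step in range(5):
        # Micro level: random pairwise "collisions" redistribute kinetic
        # energy, so every particle's velocity changes from step to step.
        for _ in range(len(velocities) // 2):
            i, j = random.sample(range(len(velocities)), 2)
            pair_energy = velocities[i] ** 2 + velocities[j] ** 2  # conserved per collision
            share = random.random()
            velocities[i] = (share * pair_energy) ** 0.5 * random.choice((-1.0, 1.0))
            velocities[j] = ((1 - share) * pair_energy) ** 0.5 * random.choice((-1.0, 1.0))
        # Macro level: temperature prints the same value every step, and we
        # can say *why*: each collision conserves the pair's energy, so the
        # total energy is invariant under the micro dynamics.
        print(f"step {step}: T = {temperature(velocities):.6f}")

Run it and T comes out identical every step, while no individual velocity survives even one step. And, per the parent's own criterion, this is respectable emergence: the macro stability reduces cleanly to a conservation law at the particle level.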