That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.
It is as if a biochemist looked at a human brain and concluded there is no 'intelligence' there at all, just a whole lot of electrochemical reactions. That view entirely ignores the potential for emergence.
Don't misunderstand me, I'm not saying 'AGI has arrived', but I'd say even current LLMs most certainly hold interesting lessons for the science of human language development and evolution. What can the success of transfer learning in these models contribute to the debates on universal language faculties? How do invariants correlate across LLM systems and humans?
There are two kinds of emergence: one scientific, the other a strange, vacuous notion invoked in the absence of any theory or explanation.
The first is the emergence we talk about when, for example, gas or liquid states, or combustibility, emerge from certain chemical or physical properties of particles. It's not just that they're emergent; we can explain how they're emergent and how their properties are already present at the lower level of abstraction. Emergence properly understood is always reducible to lower-level states, not a magic word for when you don't know how something works.
In these AI debates, however, that's exactly how "emergence" is used: people just assert it, as if it followed necessarily from their assumptions. They don't offer a scientific explanation. (The same is true of various other topics, like consciousness, or what have you.) This is pointless; it's a sort of god of the gaps disguised as an argument. When Chomsky talks about science proper, he correctly points out that these kinds of arguments have no place in it, because the point of science is to build coherent theories.
I'd disagree; emergence is typically a label for what we don't yet understand. Once we do understand something, it's rarely considered emergent anymore, just something that is.
>They don't offer a scientific explanation.
Correct, because we don't yet have the tooling necessary to explain it. Emergence, as you stated, comes from simpler components: burn hydrogen and oxygen, for example, and water emerges from that.
Ecosystems are an emergent property of living systems, one that we can explain rather well these days, after we realized there were gaps in our knowledge. It has taken millions and millions of hours of research to piece all these bits together.
Now we are at the same place with large neural nets. What you call pointless is not pointless at all: it points at exactly the things we need to work on if we want to understand them. But at the same time, understanding isn't strictly necessary; we have made advances in scientific fields we don't fully understand.