That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.
However, a paper published last year (Mission: Impossible Language Models, Kallini et al.) showed that LLMs do NOT learn impossible languages as easily as they learn possible ones. This undermines everything Chomsky says about LLMs in the linked interview.
Also, GPT-2 actually seems to do quite well on some of the tested languages, including word-hop, partial reverse, and local-shuffle. It doesn't do quite as well as on plain English, but GPT-2 was designed to learn English, so it's not surprising that it has an edge there. For instance, the tokenization seems biased towards English: they show "bookshelf" becoming the tokens "book", "sh", and "elf", which in many of the languages get spread throughout a sentence. I don't think a system designed to learn shuffled English would tokenize this way!
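To make the tokenization point concrete, here's a rough sketch (my own illustration, not the paper's exact procedure): tokenize a sentence with GPT-2's BPE via the `transformers` library, then shuffle tokens within fixed-size windows as a stand-in for a local-shuffle language. The window size, seed handling, and shuffle scheme are assumptions for illustration only.

```python
# Sketch: show how an English-oriented BPE tokenizer's subword pieces
# can end up separated once tokens are locally shuffled.
import random
from transformers import GPT2TokenizerFast  # assumes the `transformers` package is installed

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

sentence = "She put the novel back on the bookshelf"
tokens = tokenizer.tokenize(sentence)  # BPE pieces; multi-piece words like "bookshelf" split here
print("BPE tokens:", tokens)

def local_shuffle(pieces, window=3, seed=0):
    """Shuffle tokens within consecutive fixed-size windows.

    Only an approximation of the paper's shuffle languages; their exact
    window size and determinism scheme may differ.
    """
    rng = random.Random(seed)
    shuffled = []
    for start in range(0, len(pieces), window):
        chunk = pieces[start:start + window]
        rng.shuffle(chunk)
        shuffled.extend(chunk)
    return shuffled

print("Locally shuffled:", local_shuffle(tokens))
# If "bookshelf" is split into several BPE pieces, those pieces can land far
# apart after shuffling, which is the English bias described above.
```

A tokenizer built for a shuffled language would presumably learn units that stay meaningful after shuffling, rather than subword pieces that only make sense when adjacent.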