If I were to ask a Chinese Room operator, "What would happen if gravity suddenly became half as strong while I'm drinking tea?", what would you expect as an answer?
Another question: if I were to ask, "What would be an example of something a Chinese Room's operator could not handle that an actual Chinese speaker could?", what would you expect in response?
Claude gave me the first question in response to the second. That alone takes Chinese Rooms out of the realm of any discussion regarding LLMs, and vice versa. The thought experiment didn't prove anything when Searle came up with it, and it hasn't exactly aged well. Neither Searle nor Chomsky had any earthly idea that language was this powerful.
I tend to agree that Chinese Rooms should be kept out of LLM discussions. Beyond the thought experiment being flawed, of the dozens of times I've seen it brought up, not once has the person invoking it demonstrated an understanding of what a Chinese Room actually is.
So said Searle. But since he never specified what he meant, it was a circular statement at best. Punting to "it passes a Turing Test" just turns it into a different debate about a different flawed test.
The operator has no idea what he's doing. He doesn't know Chinese. He has a Borges-scale library of Chinese books and a symbol-to-symbol translation guide. He can do nothing but manipulate symbols he doesn't understand. How anyone can pass a well-administered Turing test without state retention and context-based reflection, I don't know, but we've already put more thought into this than Searle did.
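To make the state-retention point concrete, here's a toy sketch of the room as a pure lookup table. This is my own caricature, not anything from Searle's paper; the `RULEBOOK` dict and `room_reply` function are hypothetical illustrations. Every reply depends only on the current input string, so any context-dependent follow-up falls flat.

```python
# Toy caricature of a Chinese Room: a stateless symbol-to-symbol lookup.
# Nothing is retained between turns; the "operator" just matches shapes.

RULEBOOK = {
    "你好": "你好！",                 # "hello" -> "hello!"
    "你叫什么名字？": "我没有名字。",   # "what's your name?" -> "I have no name."
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's entry for the input, with no memory of past turns."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again."

print(room_reply("你叫什么名字？"))  # -> 我没有名字。
print(room_reply("为什么？"))        # "why?" -- the table cannot know what "why"
                                     # refers to, because no prior turn was retained.
```

A well-administered Turing test lives on exactly these follow-ups: "why?", "what did you mean earlier?", anything that forces reflection on prior context. A stateless table, however Borges-scale, has nowhere to put that context.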