I tend to agree that the Chinese Room should be kept out of LLM discussions. Beyond the thought experiment being flawed in itself, in the dozens of times I've seen it brought up, not a single invocation has demonstrated an understanding of what a Chinese Room actually is.
So said Searle. But without his specifying what he meant, the claim was circular at best. Punting to "it passes a Turing test" just swaps it for a different debate about a different flawed test.
The operator has no idea what he's doing. He doesn't know Chinese. He has a Borges-scale library of Chinese books and a symbol-to-symbol translation guide, and he can do nothing but manipulate symbols he doesn't understand. How anyone passes a well-administered Turing test without state retention and context-based reflection, I don't know, but we've already put more thought into this than Searle did.
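To make the mechanics concrete, here is a minimal sketch of the room as described above, assuming the "translation guide" reduces to a pure lookup table. The mappings are invented placeholders, not anything from Searle's paper; the point is only that nothing in the program represents meaning.

```python
# Hypothetical rulebook: input squiggles paired with output squiggles.
# The operator matches shapes, not meanings.
RULEBOOK = {
    "你好吗": "我很好",
    "你会说中文吗": "会",
}

def operator(symbols: str) -> str:
    """Look the incoming symbols up and copy out whatever the rulebook
    pairs them with. No state, no reflection, no understanding."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "please say that again"

print(operator("你好吗"))  # prints 我很好, though nothing here "knows" Chinese
```

Note that the table is stateless: every query is answered in isolation, which is exactly why a bare lookup, however Borges-scale, would struggle against a well-administered Turing test that probes memory of earlier exchanges.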