>>amrrs (OP)
I don't know that I would call it an "illusion of thinking", but LLMs do have limitations. Humans do too. No amount of human thinking has solved numerous open problems.
>>esafak
The errors that LLMs make and the errors that people make probably aren't comparable enough for a lot of the discussions about LLM limitations at this point.
>>th0ma5
We have different failure modes. And I'm sure researchers, faced with these results, will be motivated to overcome these limitations. This is all good, keep it coming. I just don't understand some of the naysaying here.