zlacker

[parent] [thread] 3 comments
1. esafak+(OP)[view] [source] 2025-06-07 02:09:54
I don't know that I would call it an "illusion of thinking", but LLMs do have limitations. Humans do too. No amount of human thinking has solved numerous open problems.
replies(1): >>th0ma5+f
2. th0ma5+f[view] [source] 2025-06-07 02:13:26
>>esafak+(OP)
The errors that LLMs make and the errors that people make are probably not comparable enough for a lot of the discussions about LLM limitations at this point?
replies(1): >>esafak+m1
3. esafak+m1[view] [source] [discussion] 2025-06-07 02:28:53
>>th0ma5+f
We have different failure modes. And I'm sure researchers, faced with these results, will be motivated to overcome these limitations. This is all good, keep it coming. I just don't understand some of the naysaying here.
replies(1): >>Jensso+ao
4. Jensso+ao[view] [source] [discussion] 2025-06-07 08:50:22
>>esafak+m1
The naysayers just say that even when people are motivated to solve a problem, the problem might still not get solved. And there are still unsolved problems with LLMs. The AI hypemen say AGI is all but a given in a few years' time, but if that relies on some undiscovered breakthrough, it is very unlikely, since such breakthroughs are very rare.