zlacker

[return to "As Rocks May Think"]
1. kalter+9x[view] [source] 2026-02-05 02:55:38
>>modele+(OP)
> Chief among all changes is that machines can code and think quite well now.

They can’t and never will.

2. johnfn+qx[view] [source] 2026-02-05 02:58:53
>>kalter+9x
Are you really claiming that there isn't a machine in existence that can code? And that that is never possible?
3. kalter+cG[view] [source] 2026-02-05 04:21:18
>>johnfn+qx
It can code in an autocomplete sense. In the serious sense, in which coding and thinking are inseparable, it can’t.

Observe that modern coding agents rely heavily on heuristics. An LLM excels at its training data, at analyzing existing knowledge, but it can’t generate new knowledge on the same scale; its thinking (a process of identification and integration) is severely limited at the conscious level (the context window), where being rational is most valuable.

Because it has no volition, it cannot choose to be logical rather than irrational; it cannot commit to attaining a full, non-contradictory awareness of reality. That’s why I said “never.”

4. johnfn+5I[view] [source] 2026-02-05 04:39:41
>>kalter+cG
> It can code in an autocomplete sense.

I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.

5. kalter+EJ[view] [source] 2026-02-05 04:59:57
>>johnfn+5I
That’s impressive. I don’t dispute that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m mistaking lexicon differences for philosophic battles.