They can’t and never will.
Observe that modern coding agents rely heavily on heuristics. An LLM excels at reproducing its training data and at analyzing existing knowledge, but it cannot generate new knowledge on the same scale; its thinking (a process of identification and integration) is severely limited at the conscious level (the context window), which is precisely where being rational is most valuable.
Because it has no volition, it cannot choose to be logical rather than irrational; it cannot commit to attaining a full, non-contradictory awareness of reality. That’s why I said “never.”