zlacker

[return to "Scaling long-running autonomous coding"]
1. Chipsh+jZ[view] [source] 2026-01-20 10:19:20
>>srames+(OP)
The more I think about LLMs, the stranger it feels trying to grasp what they are. To me, when I'm working with them, they don't feel intelligent, but rather like an attempt at mimicking intelligence. You can never trust that the AI actually did something smart or dumb. The judge always has to be you.

Its ability to pattern-match its way through a codebase is impressive until it isn't, and you always have to pull it back to reality when it goes astray.

Its ability to plan ahead is so limited, and its way of "remembering" is so basic. Every day it's a bit like 50 First Dates.

Nonetheless, seeing what can be achieved with this pseudo-intelligent tool leaves me a little in awe. It's the contrast between it not being intelligent and it achieving clearly useful outcomes when steered correctly, and the feeling that we've only just started to understand how to interact with this alien.

2. Gazoch+u11[view] [source] 2026-01-20 10:38:25
>>Chipsh+jZ
> they don't feel intelligent, but rather like an attempt at mimicking intelligence

Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

There is no higher thinking. They were literally built as a mimicry of intelligence.
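
To make that concrete, here's a toy sketch of the loop that objective implies: score the candidate next words, pick a plausible one, append it, repeat. The lookup table NEXT is an invented stand-in for a real model's billions of learned weights; only the structure of the loop is the point.

    import random

    # Invented stand-in for a trained model: P(next word | last word).
    # A real LLM computes these probabilities from learned weights.
    NEXT = {
        "the": {"cat": 0.6, "mat": 0.4},
        "cat": {"sat": 0.9, ".": 0.1},
        "sat": {"on": 1.0},
        "on":  {"the": 1.0},
        "mat": {".": 1.0},
    }

    def generate(tokens, max_new=10):
        for _ in range(max_new):
            probs = NEXT.get(tokens[-1], {".": 1.0})
            # "Most probabilistically plausible": sample the next word
            # in proportion to its score under the model.
            tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
            if tokens[-1] == ".":
                break
        return " ".join(tokens)

    print(generate(["the"]))  # e.g. "the cat sat on the mat ."

Everything an LLM produces comes out of a (vastly bigger, learned) version of that loop. Whether you want to call the result "thinking" is exactly what's being argued here.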

3. azan_+j21[view] [source] 2026-01-20 10:47:47
>>Gazoch+u11
> Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

> There is no higher thinking. They were literally built as a mimicry of intelligence.

Maybe real intelligence is also a big optimization function? The brain isn't magical; there are rules that govern our intelligence, and I wouldn't be terribly surprised if our intelligence in fact turned out to be a kind of returning the most plausible thoughts. It might well be something else, of course. My point is that "it's not intelligence, it's just predicting the next token" doesn't make sense to me as an argument: it could be both!

[go to top]