zlacker

[return to "Scaling long-running autonomous coding"]
1. Chipsh+jZ[view] [source] 2026-01-20 10:19:20
>>srames+(OP)
The more I think about LLMs, the stranger it feels trying to grasp what they are. To me, when I'm working with them, they don't feel intelligent but rather like an attempt at mimicking it. You can never trust that the AI actually did something smart rather than dumb; the judge always has to be you.

Its ability to pattern-match its way through a codebase is impressive until it isn't, and you always have to pull it back to reality when it goes astray.

Its ability to plan ahead is so limited and its way of "remembering" is so basic. Every day is a bit like 50 First Dates.

Nonetheless, seeing what can be achieved with this pseudo-intelligent tool leaves me a little in awe. It's the contrast between it not being intelligent and yet achieving clearly useful outcomes when steered correctly, and the feeling that we have only just started to understand how to interact with this alien.

◧◩
2. Gazoch+u11[view] [source] 2026-01-20 10:38:25
>>Chipsh+jZ
> they don't feel intelligent but rather like an attempt at mimicking it

Because that's exactly what they are. An LLM is just a big function optimized toward the objective "return the most probabilistically plausible sequence of words in a given context".

There is no higher thinking. They were literally built as a mimicry of intelligence.
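
To make that concrete, here is a rough sketch of what "return the most probabilistically plausible next word" boils down to in practice. It assumes the Hugging Face transformers library and GPT-2 purely for illustration (and uses greedy decoding for simplicity; real systems usually sample), but any causal LM works the same way, one token at a time:

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Toy illustration: greedily extend a prompt one token at a time.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The cat sat on the", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(ids).logits          # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()    # pick the single most plausible next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tok.decode(ids[0]))

Everything "smart" it appears to do is that loop, repeated.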

◧◩◪
3. encycl+4i1[view] [source] 2026-01-20 13:00:52
>>Gazoch+u11
I don't understand why this point is NOT getting across to so many on HN.

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. I have commented elsewhere, but this bears repeating.

If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you have trained the model, you could use even more pen and paper to step through the correct prompts and arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. Doing so makes a mechanical process seem magical (as do computers in general).
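
To make the point vivid, here is a deliberately tiny, made-up example (invented weights, a three-word vocabulary) of the kind of arithmetic every single inference step reduces to; a real model just does this with billions of parameters and many layers. Nothing in it needs more than a pencil:

    import math

    vocab = ["cat", "sat", "mat"]
    embedding = {"cat": [1.0, 0.0], "sat": [0.0, 1.0], "mat": [0.5, 0.5]}
    W = [[2.0, -1.0, 0.5],    # one "layer": a made-up 2x3 weight matrix
         [-0.5, 1.5, 1.0]]

    x = embedding["cat"]      # the context, reduced to numbers
    logits = [sum(x[i] * W[i][j] for i in range(2)) for j in range(3)]
    exps = [math.exp(v) for v in logits]
    probs = [e / sum(exps) for e in exps]   # softmax: how "plausible" each word is
    print(max(zip(probs, vocab)))           # the most plausible next word wins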

I should add that it also makes sense why the results are so good: just look at the volume of human knowledge in the training data. It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.

◧◩◪◨
4. Zababa+Bo1[view] [source] 2026-01-20 13:46:43
>>encycl+4i1
Can you give examples of how "LLMs do not think, understand, reason, reflect, or comprehend, and they never shall" or the "completely mechanical process" framing helps you understand better when LLMs work and when they don't?

Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.

To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much an LLM being able to "act" a certain way in a certain situation says about the LLM, or about LLMs as a whole ("act" is not a perfect word here; another way of looking at it is that they visit only the coast of a country and conclude that everyone there must be a sailor and the whole country must have a sailing culture).

◧◩◪◨⬒
5. encycl+mq1[view] [source] 2026-01-20 13:59:23
>>Zababa+Bo1
We know what an LLM is; in fact, you can build one from scratch if you like, e.g. https://www.manning.com/books/build-a-large-language-model-f...

It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?

> Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.

My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.

A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.

What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer; there is NO clear definition. Some things are so hard to define (and people have tried for centuries), consciousness chief among them, that they are a problem set unto themselves; see the hard problem of consciousness.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

◧◩◪◨⬒⬓
6. amenho+KM6[view] [source] 2026-01-21 22:26:10
>>encycl+mq1
A cursory read of basic philosophy would surely include the arguments against Searle's Chinese room, no? It's hardly settled.
[go to top]