zlacker

1. azan_ (OP) | 2026-01-20 10:47:47
> Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

> There is no higher thinking. They were literally built as a mimicry of intelligence.

Maybe real intelligence is also a big optimization function? The brain isn't magical; there are rules that govern our intelligence, and I wouldn't be terribly surprised if our intelligence turned out to be something like returning the most plausible thoughts. It might well be something else, of course. My point is that "it's not intelligence, it's just predicting the next token" doesn't make sense to me, because it could be both.
