zlacker

[return to "2025: The Year in LLMs"]
1. waldre+T7[view] [source] 2026-01-01 01:03:16
>>simonw+(OP)
Remember, back in the day, when a year of progress was like, oh, they voted to add some syntactic sugar to Java...
◧◩
2. crysta+lt[view] [source] 2026-01-01 05:04:11
>>waldre+T7
That must have been a long time back. Having lived through the time when web pages were served through CGI and mobile phones only existed in movies, when SVMs were the new hotness in ML and people would write about how weird NNs were, I feel like I've seen a lot more concrete progress in the last few decades than this year.

This year honestly feels quite stagnant. LLMs are literally technology that can only reproduce the past. They're cool, but they were way cooler 4 years ago. We've taken big ideas like "agents" and "reinforcement learning" and basically stripped them of all meaning in order to claim progress.

I mean, do you remember Geoffrey Hinton's RBM talk at Google in 2010? [0] That was absolutely insane for anyone keeping up with that field. By the mid-2010s RBMs were already outdated. I remember when everyone was implementing flavors of RNNs and LSTMs. Karpathy's 2015 char-RNN project was insane [1].

This comment makes me wonder if part of the hype around LLMs is just that a lot of software people simply weren't paying attention to the absolutely mind-blowing progress we've seen in this field for the last 20 years. But even ignoring ML, the worlds of web development and mobile application development have gone through incredible progress over the last decade and a half. I remember a time when JavaScript books would have a section warning that you should never use JS for anything critical to the application. Then there's the work in theorem provers over the last decade... If you remember when syntactic sugar was progress, either you remember way further back than I do, or you weren't paying attention to what was happening in the larger computing world.

0. https://www.youtube.com/watch?v=VdIURAu1-aU

1. https://karpathy.github.io/2015/05/21/rnn-effectiveness/

◧◩◪
3. handof+It[view] [source] 2026-01-01 05:10:13
>>crysta+lt
> LLMs are literally technology that can only reproduce the past.

Funny, I've used them to create my own personalized text editor, perfectly tailored to what I actually want. I'm pretty sure that didn't exist before.

It's wild to me how many people who talk about LLMs apparently haven't learned how to use them for even very basic tasks like this! No wonder you think they're not that powerful. You really owe it to yourself to try them out.

◧◩◪◨
4. crysta+bv[view] [source] 2026-01-01 05:29:42
>>handof+It
> You really owe it to yourself to try them out.

I've worked at multiple AI startups in lead AI engineering roles, both deploying user-facing LLM products and working on the research end of LLMs. I've done collaborative projects and demos with a pretty wide range of big names in this space (but don't want to doxx myself too aggressively), have had my LLM work cited on HN multiple times, have LLM-based GitHub projects with hundreds of stars, have appeared on a few podcasts talking about AI, etc.

This gets to the point I was making. I'm starting to realize that part of the disconnect between my opinions on the state of the field and others is that many people haven't really been paying much attention.

I can see that if recent LLMs are your first intro to the state of the field, it must feel incredible.

◧◩◪◨⬒
5. Camper+Dv[view] [source] 2026-01-01 05:36:55
>>crysta+bv
That's all very impressive, to be sure. But are you sure you're getting the point? As of 2025, LLMs are now very good at writing new code, creating new imagery, and writing original text. They continue to improve at a remarkable rate. They are helping their users create things that didn't exist before. Additionally, they are now very good at searching and utilizing web resources that didn't exist at training time.

So it is absurdly incorrect to say "they can only reproduce the past." Only someone who hasn't been paying attention (as you put it) would say such a thing.

◧◩◪◨⬒⬓
6. windex+cD[view] [source] 2026-01-01 07:30:50
>>Camper+Dv
> They are helping their users create things that didn't exist before.

That is derived output, not new in the sense of novel. It may be unique, but it is derived from the training data. LLMs legitimately cannot think, and thus they cannot create in that way.

◧◩◪◨⬒⬓⬔
7. orders+u91[view] [source] 2026-01-01 13:46:53
>>windex+cD
I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.

5 years ago a typical argument against AGI was that computers would never be able to think, because "real thinking" involved mastery of language, which was something clearly beyond what computers would ever be able to do. The implication was that there was some magic sauce human brains had that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years, replaced with what appears to be a different magic sauce made up of the phrase 'not really thinking' and the whole 'just repeating what it's heard' / 'parrot' argument.

I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps; there isn't some magic chasm that needs to be crossed. When we get there, I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and, instead of saying "isn't it amazing computers can do this", we'll say "wow, was that all there is to thinking like a human?"

◧◩◪◨⬒⬓⬔⧯
8. windex+if1[view] [source] 2026-01-01 14:35:02
>>orders+u91
> 5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language which was something clearly beyond what computers would ever be able to do.

Mastery of words is thinking? By that line of argument, computers have been able to think for decades.

Humans don't think only in words. Our context, memory, and thoughts are processed and occur in ways we still don't understand.

There's a lot of great information out there describing this [0][1][2]. Continuing to believe these tools are thinking, however, is dangerous. I'd wager it comes down to this: you can't see the process, and it's non-deterministic, so it feels like thinking. ELIZA tricked people. LLMs are no different.

[0] https://archive.is/FM4y8

[1] https://www.theverge.com/ai-artificial-intelligence/827820/l...

[2] https://www.raspberrypi.org/blog/secondary-school-maths-show...

◧◩◪◨⬒⬓⬔⧯▣
9. Camper+2z1[view] [source] 2026-01-01 16:59:12
>>windex+if1
> Mastery of words is thinking?

That's the crazy thing. Yes, in fact, it turns out that language encodes and embodies reasoning. All you have to do is pile up enough of it in a high-dimensional space, use gradient descent to model its original structure, and add some feedback in the form of RL. At that point, reasoning is just a database problem, which we currently attack with attention.
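
If "reasoning is just a database problem" sounds hand-wavy, here's a minimal sketch of attention as a soft key-value lookup. This is a toy illustration in plain numpy, with random matrices standing in for trained weights, not anyone's production code:

    import numpy as np

    def softmax(x):
        # numerically stable row-wise softmax
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V):
        # Each query scores every key (a fuzzy lookup); the output
        # is the score-weighted blend of the stored values.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    n, d = 4, 8                              # 4 tokens, 8-dim embeddings
    x = rng.normal(size=(n, d))              # token embeddings
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    out = attention(x @ Wq, x @ Wk, x @ Wv)  # shape (4, 8)

The whole lookup is differentiable, which is why gradient descent can learn what to store in the keys and values and what to ask for in the queries.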

No one had the faintest clue. Even now, many people not only don't understand what just happened, but they don't think anything happened at all.

ELIZA, ROFL. How'd ELIZA do at the IMO last year?

◧◩◪◨⬒⬓⬔⧯▣▦
10. meindn+YX1[view] [source] 2026-01-01 19:31:12
>>Camper+2z1
So people without language cannot reason? I don't think so.
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. Camper+h12[view] [source] 2026-01-01 19:53:05
>>meindn+YX1
There's no such thing as people without language, except for infants and those who are so mentally incapacitated that the answer is self-evidently "No, they cannot."

Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. windex+Gp2[view] [source] 2026-01-01 22:36:00
>>Camper+h12
There are quite a few studies that refute this highly ignorant comment. I'd suggest some reading [0].

From the abstract: "Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Astonishingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person’s thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain’s language areas when they understand a sentence, but not when they perform other nonlinguistic tasks like arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Taken together, these two complementary lines of evidence provide a clear answer to the classic question: many aspects of thought engage distinct brain regions from, and do not depend on, language."

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
13. Camper+bN2[view] [source] 2026-01-02 01:34:06
>>windex+Gp2
Yeah, you can prove pretty much anything with a PubMed link. Do dead salmon "think"? fMRI says maybe!

https://pmc.ncbi.nlm.nih.gov/articles/PMC2799957/

The resources the brain uses to think, whatever those resources are, are language-based; otherwise there would be no way to communicate with the test subjects. "Language" doesn't just mean written and spoken text, as these researchers seem to assume.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
14. emp173+fx4[view] [source] 2026-01-02 17:24:06
>>Camper+bN2
There's linguistic evidence that, while language influences thought, it does not determine thought: see the failure of the strong Sapir-Whorf hypothesis. This is one of the most widely studied and robust results in linguistics; we know for a fact that language does not determine or define thought.
[go to top]