zlacker

[return to "2025: The Year in LLMs"]
1. syndac+Hj 2026-01-01 02:56:20
>>simonw+(OP)
I can’t get over the range of sentiment on LLMs. HN leans snake oil, X leans “we’re all cooked”; can it possibly be both? How do other folks make sense of this? I’m not asking anyone to pick a side, just trying to understand the range. Does the range lead you to believe X over Y?
2. PeterH+gB 2026-01-01 06:57:49
>>syndac+Hj
I think it may all be summed up by Roy Amara's observation that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
3. legule+ZI 2026-01-01 08:52:37
>>PeterH+gB
The effects might be drastically different from what you would expect, though. We’ve seen this again and again with machine learning/AI: what looks likely to work doesn’t pan out, and unexpected things do.