zlacker

[parent] [thread] 4 comments
1. wat100+(OP)[view] [source] 2025-04-05 18:40:09
LLMs literally are just predicting tokens with a probabilistic model. They’re incredibly complicated and sophisticated models, but they still are just incredibly complicated and sophisticated models for predicting tokens. It’s maybe unexpected that such a thing can do summarization, but it demonstrably can.
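The "probabilistic model predicting tokens" idea can be sketched in a few lines. This is a toy illustration only: the logit table below is made up, and a real LLM computes logits for the next token with a neural network over the whole context. The softmax-then-sample step, though, is the same shape as what actually happens at decode time.

```python
import math
import random

# Hypothetical logits a model might assign to candidate next tokens
# after the context "the cat" (invented numbers for illustration).
LOGITS = {"sat": 2.0, "ran": 1.0, "meowed": 0.5}

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next(logits, rng=random):
    """Sample one token from the softmax distribution over logits."""
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point shortfall

probs = softmax(LOGITS)
print(max(probs, key=probs.get))  # the single most likely continuation
```

Greedy decoding would always emit the argmax token; sampling (as in `sample_next`) is what makes output vary from run to run.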
replies(1): >>Workac+o6
2. Workac+o6[view] [source] 2025-04-05 19:16:31
>>wat100+(OP)
The rub is that we don't know if intelligence is anything more than "just predicting the next output".
replies(1): >>sangno+Xf
◧◩
3. sangno+Xf[view] [source] [discussion] 2025-04-05 20:40:41
>>Workac+o6
I think we do.
replies(1): >>sho+R41
◧◩◪
4. sho+R41[view] [source] [discussion] 2025-04-06 08:21:52
>>sangno+Xf
That's just what you were most likely to say...