zlacker

[return to "Obituary for Cyc"]
1. vannev+14 2025-04-08 19:44:13
>>todsac+(OP)
I would argue that Lenat was at least directionally correct in understanding that sheer volume of data (in Cyc's case, rules and facts) was the key to eventually achieving useful intelligence. I have to confess that I once criticized the Cyc project for creating an ever-larger pile of sh*t and expecting a pony to emerge, but that's sort of what has happened with LLMs.
2. chubot+ca 2025-04-08 20:26:50
>>vannev+14
That’s hilarious, but at least Llama was trained on libgen, an archive of most of humanity's books and publications, no? Except for the ones that were never digitized, I guess.

So there is probably a big pile of Reddit comments, Twitter messages, and libgen and arXiv PDFs in there.

So there is some shit, but also painstakingly encoded knowledge (i.e. writing), and yeah, it is miraculous that LLMs are right as often as they are.

3. ChadNa+Qj 2025-04-08 21:36:11
>>chubot+ca
It's a miracle, but it's all thanks to the post-training. When you think about it, for so-called "next token predictors", LLMs talk in a way that almost no one actually talks, with perfect spelling and punctuation. Post-training somehow gets them to predict something along the lines of what a reasonably intelligent assistant with perfect grammar would say. LLMs are probably smarter than what is exposed through their chat interface, since it's unlikely that the post-training process gets them to impersonate the smartest character they'd be capable of impersonating.