zlacker

[return to "Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete"]
1. gsf_em+4d8[view] [source] 2025-04-05 15:45:25
>>alphad+(OP)
Recent talk: https://www.youtube.com/watch?v=ETZfkkv6V7Y

LeCun, "Mathematical Obstacles on the Way to Human-Level AI"

Slide: "Why autoregressive models suck"

https://xcancel.com/ravi_mohan/status/1906612309880930641

2. hatefu+Ff8[view] [source] 2025-04-05 16:09:20
>>gsf_em+4d8
Maybe someone can explain it to me, but isn't that slide just describing what makes problem-solving hard in general? That far more decisions put you on an irrecoverable path to failure than keep you on a correct one?

"Probability e that any produced [choice] takes us outside the set of correct answers ... probability that an answer of length n is correct: P(correct) = (1 - e)^n"
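For what it's worth, the slide's claim is easy to sketch numerically. A minimal Python sketch, assuming (as the slide does) that each generated token independently has probability e of derailing the answer — the independence assumption is exactly the part people dispute:

```python
def p_correct(e: float, n: int) -> float:
    """Probability an n-token autoregressive answer stays in the
    set of correct answers, per the slide: (1 - e)^n.
    Assumes per-token errors are independent and unrecoverable."""
    return (1 - e) ** n

# Even a tiny per-token error rate compounds fast with answer length.
for n in (10, 100, 1000):
    print(n, p_correct(0.01, n))
```

With e = 0.01 the probability is about 0.90 at 10 tokens, about 0.37 at 100, and essentially zero at 1000 — which is the "inevitable path of failure" intuition, just exponentiated.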

3. somena+bn8[view] [source] 2025-04-05 17:11:48
>>hatefu+Ff8
I think he's focusing on the distinction between facts and output for humans and drawing a parallel to LLMs.

If I ask you something that you know the answer to, the words you use and the fact itself are distinct entities. You're just giving me a presentation layer for fact #74719.

But LLMs lack any comparable pool to draw from, and so their words and their answer are essentially the same thing.
