zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. jackdo+vm1[view] [source] 2025-06-07 10:54:45
>>amrrs+(OP)
I think one of the reasons we are confused about what LLMs can do is that they use language. We look at the "reasoning traces" and the tokens there look human, but what is actually happening is very alien to us, as shown by "Biology of Large Language Models"[1] and "Safety Alignment Should Be Made More Than Just a Few Tokens Deep"[2].

I am struggling a lot to see what the tech can and cannot do, particularly when designing systems with LLMs, and how to build systems where the whole is bigger than the sum of its parts. I think this is because I am constantly confused by their capabilities: despite understanding their machinery and how they work, their use of language just seems like magic. I even wrote https://punkx.org/jackdoe/language.html just to remind myself how to think about it.

I think this kind of research is amazing, and we have to spend tremendously more effort on understanding how to use the tokens and how to build with them.

[1]: https://transformer-circuits.pub/2025/attribution-graphs/bio...

[2]: https://arxiv.org/pdf/2406.05946

◧◩
2. dmos62+Co1[view] [source] 2025-06-07 11:22:40
>>jackdo+vm1
> how to build systems where the whole is bigger than the sum of its parts

A bit tangential, but I look at programming as inherently being that. I try to break every task down into smaller tasks that together accomplish something more. That leads me to think that, if you structure the process of programming right, you will only end up solving small, minimally intertwined problems. It might sound far-fetched, but I think such a workflow is doable. And even the dumber LLMs would slot naturally into such a process, I imagine.

◧◩◪
3. throwa+tq1[view] [source] 2025-06-07 11:54:03
>>dmos62+Co1
> And, even the dumber LLMs would slot in naturally into such a process

That is what I am struggling with: at the moment it is really easy to slot in an LLM and make everything worse, mainly because its output comes from torch.multinomial, with all kinds of speculative decoding, quantization, etc. layered on top.
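To make the torch.multinomial point concrete, here is a minimal stdlib-only sketch of what next-token sampling amounts to: a temperature-scaled softmax over logits, then a weighted random draw (the names `sample_next_token` and the toy logits are my own, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample a token index from raw logits, mimicking what
    torch.multinomial does after a temperature-scaled softmax."""
    if seed is not None:
        random.seed(seed)
    scaled = [l / temperature for l in logits]
    # subtract the max for numerical stability before exponentiating
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # random.choices draws one index weighted by the probabilities,
    # analogous to torch.multinomial(probs, num_samples=1)
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# With a very low temperature the distribution sharpens toward the
# argmax; with a high temperature it flattens and repeated calls
# can disagree, which is why output is not deterministic.
logits = [2.0, 1.0, 0.1]
token = sample_next_token(logits, temperature=0.01, seed=0)
```

The randomness lives entirely in that final weighted draw, which is why two runs of the same prompt can diverge unless the sampler is seeded or temperature is driven toward zero.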

But I am convinced it is possible, just not the way I am doing it right now; that's why I am spending most of my time studying.

◧◩◪◨
4. dmos62+3G1[view] [source] 2025-06-07 14:48:36
>>throwa+tq1
What's your approach?
[go to top]