zlacker

Towards a science of scaling agent systems: When and why agent systems work
1. Curiou+Rd[view] [source] 2026-02-01 19:53:48
>>gmays+(OP)
This is a neat idea but there are so many variables here that it's hard to make generalizations.

Empirically, a top-level orchestrator that calls out to a planning committee, then generates a task DAG from the plan, which gets executed in parallel where possible, is the approach I've seen produce the best results across various heterogeneous environments. As models evolve, crosstalk may become less of a liability.
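Roughly the shape I mean, as a toy sketch in Python. Everything here is hypothetical: call_model is a stand-in for whatever LLM client you use, and the DAG is hand-coded where a real system would parse it out of the merged plan:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def call_model(prompt: str) -> str:
        return f"<model output for: {prompt}>"  # stub for a real LLM call

    def planning_committee(goal: str, n: int = 3) -> dict[str, list[str]]:
        # Several independent planner passes, merged by the orchestrator.
        drafts = [call_model(f"plan {i} for: {goal}") for i in range(n)]
        merged = call_model("merge these plans:\n" + "\n".join(drafts))
        # Parsing `merged` into a DAG is elided; return a toy one,
        # shaped as {task: [tasks it depends on]}.
        return {"gather": [], "analyze_a": ["gather"],
                "analyze_b": ["gather"], "report": ["analyze_a", "analyze_b"]}

    def run_dag(dag: dict[str, list[str]]) -> dict[str, str]:
        done: dict[str, str] = {}
        with ThreadPoolExecutor() as pool:
            while len(done) < len(dag):
                # Whatever has all its dependencies finished runs in parallel.
                ready = [t for t, deps in dag.items()
                         if t not in done and all(d in done for d in deps)]
                futures = {pool.submit(call_model, t): t for t in ready}
                for fut in as_completed(futures):
                    done[futures[fut]] = fut.result()
        return done

    print(run_dag(planning_committee("summarize quarterly metrics")))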

2. zby+Ej[view] [source] 2026-02-01 20:43:04
>>Curiou+Rd
Reasoning is recursive - you cannot isolate in advance where it should be symbolic and where it should be LLM-based (fuzzy/neural). This is the idea that started https://github.com/zby/llm-do - there is also RLM (https://alexzhang13.github.io/blog/2025/rlm/). RLM is simpler, but my approach has some advantages of its own.
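A toy illustration of what I mean - not the actual llm-do or RLM code, just the shape, with made-up names (symbolic, call_model): each node in the recursion takes the exact path where one exists and falls through to the model otherwise, and decomposition recurses back into the same solver:

    import ast, operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    def symbolic(expr: str):
        # Exact path: evaluate simple arithmetic without any model call.
        def ev(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("not symbolic")
        return ev(ast.parse(expr, mode="eval").body)

    def call_model(question: str) -> str:
        return f"<fuzzy answer to: {question}>"  # stub for a real LLM call

    def solve(problem: str, depth: int = 0) -> str:
        # The symbolic/neural split is decided per node, not up front.
        try:
            return str(symbolic(problem))        # exact where possible
        except Exception:
            pass
        if depth < 2:
            # A real system would have the model propose the subproblems;
            # the split is hard-coded here.
            parts = [solve(f"{problem} / part {i}", depth + 1) for i in (1, 2)]
            return call_model(f"{problem}, given {parts}")
        return call_model(problem)               # leaf: pure LLM call
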
3. bob102+Zt1[view] [source] 2026-02-02 08:15:02
>>zby+Ej
I think the AI community is sleeping hard on proper symbolic recursion. The computer has gigabytes of very accurate "context" available if you start stacking frames. Any strategy that happens inside token space will never scale the same way.

Depth-first, slow-turtle recursion is likely the best way to reason through the hardest problems. It's also much more efficient than approaches that look more like breadth-first search (Gas Town).
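What I mean by stacking frames, as a sketch (my own toy framing, nothing standard - call_model, is_atomic, and decompose are all made up): the exact state lives in ordinary call frames, and only a short per-frame summary ever crosses into token space:

    def call_model(prompt: str) -> str:
        return f"<answer: {prompt}>"            # stub for a real LLM call

    def is_atomic(problem: str) -> bool:
        return len(problem) < 24                # toy heuristic

    def decompose(problem: str) -> list[str]:
        return [f"{problem}:left", f"{problem}:right"]

    def solve_dfs(problem: str, facts: dict, depth: int = 0) -> str:
        # `facts` can be gigabytes and stays exact in program memory; the
        # model only ever sees the short per-frame summary below.
        if depth == 3 or is_atomic(problem):
            return call_model(f"{problem} | {len(facts)} exact facts on hand")
        parts = []
        for sub in decompose(problem):
            # Depth-first: finish each subproblem before starting the next,
            # so only O(depth) frames are live, not O(branching**depth).
            parts.append(solve_dfs(sub, facts, depth + 1))
        return call_model(f"combine for {problem}: {parts}")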
