zlacker

[return to "The unexpected effectiveness of one-shot decompilation with Claude"]
1. t_mann+oAo[view] [source] 2025-12-06 17:31:13
>>knacke+(OP)
> The ‘give up after ten attempts’ threshold aims to prevent Claude from wasting tokens when further progress is unlikely. It was only partially successful, as Claude would still sometimes make dozens of attempts.

Not what I would have expected from a 'one-shot'. Maybe self-supervised would be a more suitable term?

2. wavemo+rJo[view] [source] 2025-12-06 18:43:13
>>t_mann+oAo
"one-shot" usually just means, one example and its correct answer was provided in the prompt.

See also, "zero-shot" / "few-shot" etc.

3. t_mann+Kjp[view] [source] 2025-12-07 00:00:27
>>wavemo+rJo
The article says that having decompiled some functions helps with decompiling others, so it seems like more than one example could be provided in the context. I think the OP was referring to the fact that only a single prompt created by a human was used. But then it goes off into what appears to be an agentic loop with no hard stopping conditions outside of what the agent decides.

We're essentially trying to map 'traditional' ML terminology onto LLMs; it's natural that it'll take some time to settle. I just thought that one-shot isn't an ideal name for something that might go off into an arbitrarily long loop.
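For what it's worth, the control flow being debated looks roughly like this (a sketch of my own; the function names and the pass/fail check are hypothetical, though the ten-attempt cap is the one quoted from the article). The hard cap is exactly the kind of stopping condition that exists *outside* the agent's own decisions:

```python
def decompile_with_retries(attempt, passes_tests, max_attempts=10):
    """Retry attempt() until passes_tests() accepts, up to max_attempts.

    attempt(n) produces a candidate for try number n; passes_tests(candidate)
    is an external check (e.g. recompile-and-compare), not the agent's opinion.
    """
    for n in range(1, max_attempts + 1):
        candidate = attempt(n)
        if passes_tests(candidate):
            return candidate, n
    return None, max_attempts  # hard stop: give up after the cap
```

A single human prompt starts the loop, but without `max_attempts` the loop's length is bounded only by what the agent decides, which is the OP's point.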

[go to top]