zlacker

[return to "The unexpected effectiveness of one-shot decompilation with Claude"]
1. saagar+kpo[view] [source] 2025-12-06 16:02:52
>>knacke+(OP)
It's worth noting here that the author came up with a handful of good heuristics to guide Claude and set a very specific goal, and the LLM did a good job within those constraints. Most seasoned reverse engineers I know have found similar wins with those in place.

What LLMs are (still?) not good at is one-shot reverse engineering for understanding by a non-expert. If that's your goal, don't blindly use an LLM. People already know that blindly getting an LLM to write prose or code is bad, but it's worth remembering that doing this for decompilation is even harder :)

2. zdware+OCo[view] [source] 2025-12-06 17:48:54
>>saagar+kpo
Agree with this. I'm a software engineer who hasn't had to manage memory for most of my career.

I asked Opus how hard it would be to port the script extender for Baldur's Gate 3 from Windows to the native Linux build. It explained that it would be very difficult for someone without reverse engineering experience, and correctly pointed out that the two builds use different compilers, so it's not a simple mapping exercise. Its recommendation was not to try unless I was a Ghidra master and had lots of time on my hands.

3. dimitr+0Go[view] [source] 2025-12-06 18:19:02
>>zdware+OCo
FWIW, most LLMs are pretty terrible at estimating complexity. If you've used Claude Code for any length of time, you might be familiar with its plan "timelines", which always span many days but, for medium-size projects, get implemented in about an hour.

I've had CC build semi-complex Tauri, PyQt6, Rust, and SvelteKit apps for me without ever having touched those languages or frameworks. Is the code quality good? Probably not. But all those apps were local-only tools or had fewer than 10 users, so it doesn't matter.

4. hobs+cWo[view] [source] 2025-12-06 20:29:17
>>dimitr+0Go
Disagree - the timelines are completely reasonable for an actual software project, and that's what the training data is based on, not projects written with LLMs.