zlacker

[parent] [thread] 8 comments
1. saagar+(OP)[view] [source] 2025-12-06 16:02:52
It's worth noting here that the author came up with a handful of good heuristics to guide Claude and a very specific goal, and the LLM did a good job given those constraints. Most seasoned reverse engineers I know have found similar wins with those in place.

What LLMs are (still?) not good at is one-shot reverse engineering for understanding by a non-expert. If that's your goal, don't blindly use an LLM. People already know that blindly getting an LLM to write prose or code is a bad idea, but it's worth remembering that doing this for decompilation is even harder :)

replies(2): >>ph4eve+x1 >>zdware+ud
2. ph4eve+x1[view] [source] 2025-12-06 16:13:09
>>saagar+(OP)
Are they not performing well because they are trained to be more generic, or is the task too complex? It seems like a cheap problem to fine-tune for.
replies(2): >>pixl97+S4 >>motobo+TH
3. pixl97+S4[view] [source] [discussion] 2025-12-06 16:39:50
>>ph4eve+x1
Sounds like a more agentic pipeline task. Decompile, assess, explain.
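A minimal sketch of what that pipeline could look like, assuming a generic `run_llm` call (the function name, prompts, and stage split are all illustrative, not any real tool's API):

```python
# Hypothetical "decompile, assess, explain" loop: each stage's output
# is fed forward into the next prompt.

def run_llm(prompt: str) -> str:
    # Placeholder: swap in a real model API call here.
    return f"[model output for: {prompt[:40]}...]"

def decompile_assess_explain(raw_decompilation: str) -> dict:
    """Run the three stages in sequence on raw decompiler output."""
    cleaned = run_llm(
        "Rewrite this decompiler output as readable C, "
        "keeping behavior identical:\n" + raw_decompilation
    )
    assessment = run_llm(
        "List assumptions or likely mistakes in this cleanup:\n" + cleaned
    )
    explanation = run_llm(
        "Explain for a non-expert what this function does, "
        "noting these caveats:\n" + cleaned + "\n" + assessment
    )
    return {
        "cleaned": cleaned,
        "assessment": assessment,
        "explanation": explanation,
    }

result = decompile_assess_explain("undefined4 FUN_00401000(void) { ... }")
```

The point of the split is that each stage gets a narrow prompt, which matches the observation upthread that narrow constraints are where LLMs do well.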
4. zdware+ud[view] [source] 2025-12-06 17:48:54
>>saagar+(OP)
Agree with this. I'm a software engineer who has mostly not had to manage memory over my career.

I asked Opus how hard it would be to port the script extender for Baldur's Gate 3 from Windows to the native Linux build. It outlined that it would be very difficult for someone without reverse engineering experience, and correctly pointed out that the two builds use different compilers, so it's not a simple mapping exercise. Its recommendation was not to try unless I was a Ghidra master and had lots of time on my hands.

replies(1): >>dimitr+Gg
5. dimitr+Gg[view] [source] [discussion] 2025-12-06 18:19:02
>>zdware+ud
FWIW most LLMs are pretty terrible at estimating complexity. If you've used Claude Code for any length of time you might be familiar with its plan "timelines", which always span many days but for medium-size projects get implemented in about an hour.

I've had CC build semi-complex Tauri, PyQt6, Rust and SvelteKit apps for me without my ever having touched those stacks. Is the code quality good? Probably not. But all those apps were local-only tools or had fewer than 10 users, so it doesn't matter.

replies(2): >>zdware+0i >>hobs+Sw
6. zdware+0i[view] [source] [discussion] 2025-12-06 18:27:11
>>dimitr+Gg
That's fair, I've had similar experiences working in other stacks with it. And with some niche stacks, it seems to struggle more. Definitely agree: the narrower the context/problem statement, the higher the chance of success.

For this project, it described its reasoning well, and given my own skillset and only surface-level knowledge of how one would even start, it made many good points that showed the project wasn't realistic for me.

7. hobs+Sw[view] [source] [discussion] 2025-12-06 20:29:17
>>dimitr+Gg
Disagree - the timelines are completely reasonable for an actual software project, and that's what the training data is based on: human-paced projects, not projects written with LLMs.
replies(1): >>thetur+3F
8. thetur+3F[view] [source] [discussion] 2025-12-06 21:48:34
>>hobs+Sw
Yes, this is my experience as well.
9. motobo+TH[view] [source] [discussion] 2025-12-06 22:15:27
>>ph4eve+x1
The knowledge is probably in the pre-training data (the internet documents the LLM is trained on give it a good grasp), but probably very poorly represented in the reinforcement-learning phase.

Which is to say that Anthropic probably doesn't have good training documents and evals to teach the model how to do that.

Well, they didn't. But now they have some.

If the author wants to improve his efficiency even more, I'd suggest he start building tools that let a human produce a text trace of a good run at decompiling this project.

Those traces can be hosted somewhere Anthropic can see, and then after the next model's pre-training there's a good chance the model becomes even better at this task.
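A minimal sketch of what such a trace recorder might look like, assuming an append-only JSON Lines log (the class name, action vocabulary, and file format are all illustrative, not anything the author actually built):

```python
# Hypothetical trace recorder: each human/tool step in a decompilation
# session is appended as one JSON line, producing a text trace that
# could later be curated into training examples.

import json
from datetime import datetime, timezone

class DecompilationTrace:
    def __init__(self, path: str):
        self.path = path

    def record(self, action: str, **details) -> None:
        """Append one timestamped step to the trace file."""
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            **details,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

trace = DecompilationTrace("run.jsonl")
trace.record("rename", old="FUN_00401000", new="parse_header")
trace.record("annotate", target="parse_header", note="reads 16-byte magic")
```

One line per step keeps the trace trivially diffable and easy to filter down to just the "good runs" worth publishing.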
