What LLMs are (still?) not good at is one-shot reverse engineering for understanding by a non-expert. If that's your goal, don't blindly use an LLM. People already know that blindly getting an LLM to write prose or code is risky, but it's worth remembering that doing this for decompilation is even harder :)
Which is to say that Anthropic probably doesn't have good training documents and evals to teach the model how to do that.
Well, they didn't. But now they have some.
If the author wants to improve his efficiency even more, I'd suggest he start by creating tools that let a human record a text trace of a good run at decompiling this project.
Those traces could be hosted somewhere Anthropic can see, and then after the next model pre-training there's a good chance the model becomes even better at this task.
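It doesn't even need to be fancy. Something like an append-only JSONL log of each human decision would already work; here's a minimal Python sketch (the file name, field names, and the example step are all made up, not anything the author actually uses):

    import json, time

    TRACE_PATH = "decompile_trace.jsonl"  # hypothetical file name

    def log_step(action, detail, conclusion):
        # Append one human-verified step of the run as a JSON line.
        with open(TRACE_PATH, "a") as f:
            f.write(json.dumps({
                "ts": time.time(),
                "action": action,          # e.g. "rename_function"
                "detail": detail,          # what the human looked at
                "conclusion": conclusion,  # what they decided and why
            }) + "\n")

    # e.g. log_step("rename_function",
    #               "sub_401000 xors a buffer with a rolling key",
    #               "renamed to decrypt_config")

The point of logging the reasoning, not just the final renames, is that the trace then reads like a worked example of the task, which is exactly the kind of document a model can learn from.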