zlacker

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

submitted by xerzes+(OP) on 2026-02-04 06:51:51 | 286 points 66 comments
[view article] [source] [go to bottom]

9. stared+6l[view] [source] 2026-02-04 09:44:12
>>xerzes+(OP)
Interesting to see Ghidra here!

A friend from work just used it (with Claude) to hack River Ride game (https://quesma.com/blog/ghidra-mcp-unlimited-lives/).

Inspired by that, I gave it a try as well. While I have no prior experience with reverse engineering, I ported an old game from PowerPC to Apple Silicon.

First, I tried a few MCPs with Claude Code (including LaurieWired/GhidraMCP, which you forked from, and https://github.com/jtang613/GhidrAssistMCP). Yet the agent fabricated a lot of code instead of translating it from the source.

I ended up using Ghidra's headless mode directly in Cursor + GPT 5.2 Codex, which gave the best results.

Once I get some time, I will share a write-up.
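For anyone curious what "headless mode" means here: Ghidra ships an `analyzeHeadless` launcher in its `support/` directory that imports and analyzes a binary without the GUI. A rough sketch (the paths, project name, and the `ExportDecompiled.java` post-script are placeholders, not real artifacts from this project):

```shell
# Hypothetical paths and names -- adjust for your own install and target.
~/ghidra/support/analyzeHeadless ~/ghidra-projects MyGame \
    -import ./game_ppc \
    -processor "PowerPC:BE:32:default" \
    -postScript ExportDecompiled.java \
    -scriptPath ./ghidra_scripts
```

The post-script (hypothetical name here) would walk the function list and dump decompiled C to disk, which the agent can then read and translate without any MCP in the loop.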

10. summar+ml[view] [source] 2026-02-04 09:45:49
>>xerzes+(OP)
I've been using it (the original 15-tool version) for months now. It's amazing. Any app's inner workings are suddenly transparent. I can track down bugs, get a deeper understanding of any tool, and even write plug-ins or preload shims that mod any app. It's like I finally actually _own_ the software I bought years ago.

For Objective-C-heavy code, I also use Hopper Disassembler (which now has a built-in MCP server).

Some related academic work (full recompilation with LLMs and Ghidra): https://dl.acm.org/doi/10.1145/3728958

14. babas+Jn[view] [source] [discussion] 2026-02-04 10:05:26
>>xerzes+3
How does this compare to ReVa? https://github.com/cyberkaida/reverse-engineering-assistant

I think your installation instructions are incomplete. I followed them and installed via File -> Install in the project view, then restarted, but GhidraMCP is not visible in Tools after opening a binary.

17. s-mack+Fs[view] [source] [discussion] 2026-02-04 10:44:27
>>stared+6l
I’ve also been playing around with reverse engineering, and I’m very impressed. It turns out that Codex with GPT-5.2 is better at reverse engineering than Claude.

For example, Codex can completely reverse-engineer this 1,300-line example [0] of a C64 SID file within 30 minutes, without any human interaction.

I am working on a multi-agent system that can completely reverse-engineer C64 games. Old MS-DOS games are still too massive to analyze within my budget.

[0] https://gist.github.com/s-macke/595982d46d6699b69e1f0e051e7b...

27. DonHop+rE[view] [source] 2026-02-04 12:10:07
>>xerzes+(OP)
How could this be more efficiently and elegantly refactored as an Anthropic or MOOLLM skill set that is composable and repeatable (skills calling other skills, iterating over MANY fast skill calls in ONE LLM completion call, as opposed to many slow MCP calls ping-ponging back and forth, each waiting on network delay plus tokenization/detokenization cost, quantization, and distortion every round)?

What parts of Ghidra (like cross referencing, translating, interpreting text and code) can be "uplifted" and inlined into skills that run inside the LLM completion call on a large context window without doing token IO and glacially slow and frequently repeated remote procedure calls to external MCP servers?

>>46878126

>There's a fundamental architectural difference being missed here: MCP operates BETWEEN LLM complete calls, while skills operate DURING them. Every MCP tool call requires a full round-trip — generation stops, wait for external tool, start a new complete call with the result. N tool calls = N round-trips. Skills work differently. Once loaded into context, the LLM can iterate, recurse, compose, and run multiple agents all within a single generation. No stopping. No serialization.

>Skills can be MASSIVELY more efficient and powerful than MCP, if designed and used right. [...]

Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...

>I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.

speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...

More: Speed of Light -vs- Carrier Pigeon (an allegory for Skills -vs- MCP):

https://github.com/SimHacker/moollm/blob/main/designs/SPEED-...
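The cost argument above can be put in back-of-envelope form. A toy model (every number below is invented for illustration, not a measurement of any real system) of N tool-call round-trips versus N steps inside one generation:

```python
# Toy latency model comparing N MCP tool-call round-trips against
# the same N steps executed in-context. Numbers are illustrative only.

def mcp_total(n_calls, net_rtt=0.3, retokenize=0.5, tool_time=0.1):
    """Each tool call halts generation, crosses the network, runs the
    tool, and restarts a completion with the result re-tokenized."""
    return n_calls * (net_rtt + retokenize + tool_time)

def in_context_total(n_steps, per_step=0.05):
    """Skill-style iteration: every step happens inside one generation,
    paying only per-step decode cost, with no round-trips."""
    return n_steps * per_step

steps = 300  # e.g. hundreds of small game-turn decisions
print(f"MCP round-trips: {mcp_total(steps):.1f}s")
print(f"in-context:      {in_context_total(steps):.1f}s")
```

The model only captures latency; it ignores MCP's advantages (real side effects, sandboxing, fresh external state), which is why the trade-off is per-workload rather than absolute.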

28. DonHop+QG[view] [source] [discussion] 2026-02-04 12:29:00
>>jakoza+ff
Great point! Not just binary analysis, but even self-analysis! (See skill-snitch analyze and snitch on itself below!)

MOOLLM's Anthropic skill scanning and monitoring "skill-snitch" skill has superhuman capabilities in reviewing and reverse engineering and monitoring the behavior of untrusted Anthropic and MOOLLM skills, and is also great for debugging and optimizing skills.

It composes with the "cursor-mirror" skill, which gives you full reflective access to all of Cursor's internal chat state, behavior, tool calls, parameters, prompts, thinking, file reads and writes, etc.

That's but one example of how skills can compose, call each other, delegate from one to another, even recurse, iterate, and apply many (HUNDREDS) of skills in one LLM completion call.

>>46878126

Skills also compose. MOOLLM's cursor-mirror skill introspects Cursor's internals via a sister Python script that reads Cursor's chat history and SQLite databases — tool calls, context assembly, thinking blocks, chat history. Everything, for all time, even after Cursor's chat has summarized and forgotten: it's still all there and searchable!

cursor-mirror skill: https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

MOOLLM's skill-snitch skill composes with cursor-mirror for security monitoring of untrusted skills, also performance testing and optimization of trusted ones. Like Little Snitch watches your network, skill-snitch watches skill behavior — comparing declared tools and documentation against observed runtime behavior.

skill-snitch skill: https://github.com/SimHacker/moollm/tree/main/skills/skill-s...

You can even use skill-snitch like a virus scanner to review and monitor untrusted skills. I have more than 100 skills and had skill-snitch review each one including itself -- you can find them in the skill-snitch-report.md file of each skill in MOOLLM. Here is skill-snitch analyzing and reporting on itself, for example:

skill-snitch's skill-snitch-report.md: https://github.com/SimHacker/moollm/blob/main/skills/skill-s...

MOOLLM's thoughtful-commitment skill also composes with cursor-mirror to trace the reasoning behind git commits.

thoughtful-commit skill: https://github.com/SimHacker/moollm/tree/main/skills/thought...

MCP is still valuable for connecting to external systems. But for reasoning, simulation, and skills calling skills? In-context beats tool-call round-trips by orders of magnitude.

29. Retr0i+RH[view] [source] [discussion] 2026-02-04 12:37:07
>>xerzes+3
What does your function-hashing system offer over ghidra's built in FunctionID, or the bindiff plugin[0]?

[0] https://github.com/google/bindiff

◧◩
38. esafak+OY[view] [source] [discussion] 2026-02-04 14:23:46
>>tarasy+dq
Because it was started before the revelation that MCP was a context hog. https://github.com/LaurieWired/GhidraMCP
49. poly2i+1q1[view] [source] 2026-02-04 16:27:20
>>xerzes+(OP)
I saw this earlier, but opted for LaurieWired's MCP because it had a nice README and seemed to be the most common. How does this one compare? Are there any benchmarks or functionality comparisons?

https://github.com/LaurieWired/GhidraMCP

50. clint+5s1[view] [source] 2026-02-04 16:36:27
>>xerzes+(OP)
I wonder how this compares to the work I've been doing @ 2389 with the binary-re skill: https://github.com/2389-research/claude-plugins/tree/main/bi...

Specifically, the dynamic analysis skills could get a really big boost from this MCP server. I also wonder if this MCP server could be rephrased into a pure skill and not come with all the context baggage.

62. sintax+yi2[view] [source] [discussion] 2026-02-04 20:29:48
>>Retr0i+RH
or Binary Ninja's WARP: https://docs.binary.ninja/guide/warp.html // https://github.com/vector35/warp