Open-source plugins for Ghidra, Binary Ninja, and IDA Pro that bring LLM reasoning, autonomous agents, and semantic knowledge graphs directly into your analysis workflow.
Reverse engineering is slow because the tools don't think with you. You copy pseudocode into a chat window, wait for a response, then manually apply the results back. You rename one function at a time. You build mental models of call graphs that evaporate between sessions.
Our plugins eliminate that friction. Ask questions about code without leaving your disassembler. Get function explanations, rename suggestions, and vulnerability assessments in seconds. Build a persistent knowledge graph that captures what every function does, how they relate, and where the risks are — so your analysis compounds instead of starting over.
Same capabilities, native to every major RE platform. Install, configure an LLM provider, and start analyzing.
A seven-tab sidebar that turns Ghidra into an AI workbench. Select a function, get a plain-English explanation with security risk flags. Ask follow-up questions with context macros that auto-inject decompiled code, cross-references, or call graphs. Let the ReAct agent autonomously trace data flows across dozens of functions while you watch.
A native Binary Ninja sidebar built on Qt that streams LLM responses in real time. Navigate to any function and get instant analysis. The Actions tab generates rename and retype suggestions with confidence scores — apply them individually or in bulk. Extended thinking mode lets you dial up reasoning depth for complex vulnerability research.
A dockable panel for IDA Pro 9.x that brings the same AI workflow to Hex-Rays users. Visual graph exploration renders function relationships with Graphviz layouts. Community detection automatically groups related functions into logical modules, giving you a high-level map of unfamiliar binaries in minutes instead of days.
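Community detection on a call graph can be sketched in a few lines. This is a minimal illustration using NetworkX's greedy modularity algorithm on a toy call graph; the function names and edges are invented for the example, and the real plugin's grouping logic may differ.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy call graph: (caller, callee) pairs. Names are illustrative.
calls = [
    ("main", "parse_args"), ("main", "run_server"),
    ("run_server", "accept_conn"), ("accept_conn", "read_packet"),
    ("read_packet", "parse_header"), ("parse_header", "checksum"),
    ("parse_args", "usage"),
]
g = nx.Graph(calls)  # treat the call graph as undirected for clustering

# Greedy modularity maximization groups tightly connected functions
# into candidate "modules" of the binary.
modules = [sorted(c) for c in greedy_modularity_communities(g)]
for i, m in enumerate(modules):
    print(f"module {i}: {m}")
```

Each module is a candidate logical unit (argument handling, network I/O, and so on) that can then be labeled and explored as a group.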
Each step builds on the last. By the end, you have a searchable, annotated map of the entire binary.
Select a function, click Explain. The LLM reads the decompiled code and produces a summary, purpose description, and security risk assessment. Explanations are stored persistently so you never re-analyze the same function twice.
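The "never re-analyze twice" behavior amounts to caching explanations keyed by the function's code. A minimal sketch, assuming a hypothetical schema (the real plugins' storage format is not shown here): hashing the decompiled text means a function is only re-sent to the LLM when its code actually changes.

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")  # a real plugin would use a file on disk
db.execute("CREATE TABLE explanations (code_hash TEXT PRIMARY KEY, summary TEXT)")

def explain(decompiled: str, llm=lambda code: "summary of " + code[:20]) -> str:
    # Key on a hash of the decompiled code, not the address, so the cache
    # survives renames and only invalidates when the code itself changes.
    key = hashlib.sha256(decompiled.encode()).hexdigest()
    row = db.execute(
        "SELECT summary FROM explanations WHERE code_hash = ?", (key,)
    ).fetchone()
    if row:                    # cache hit: no LLM call needed
        return row[0]
    summary = llm(decompiled)  # cache miss: ask the model once, then store
    db.execute("INSERT INTO explanations VALUES (?, ?)", (key, summary))
    return summary

first = explain("int main() { return 0; }")
second = explain("int main() { return 0; }")  # served from the cache
```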
Ask a question like "trace user input through this binary" and the ReAct agent takes over. It plans an investigation, calls MCP tools to read functions, follow cross-references, and navigate the call graph — then synthesizes a comprehensive answer across dozens of functions, all without you clicking a thing.
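The agent loop described above follows the standard ReAct pattern: the model alternates between choosing a tool action and, once it has enough observations, emitting a final answer. A minimal sketch with a scripted stand-in for the model; the tool name `read_function` and the message shapes are assumptions for illustration, not the plugin's real API.

```python
def react_agent(question, llm, tools, max_steps=10):
    # The transcript accumulates the question and each tool observation;
    # the model sees it every step and decides whether to act or answer.
    transcript = [("question", question)]
    for _ in range(max_steps):
        step = llm(transcript)
        if step["type"] == "answer":
            return step["text"]
        # Execute the chosen tool, e.g. read a function's decompiled code
        result = tools[step["tool"]](**step["args"])
        transcript.append(("observation", result))
    return "step budget exhausted"

# Toy run: a scripted "model" and a fake read_function tool
script = iter([
    {"type": "act", "tool": "read_function", "args": {"name": "parse_input"}},
    {"type": "answer", "text": "parse_input copies user data without bounds checks"},
])
answer = react_agent(
    "trace user input through this binary",
    llm=lambda transcript: next(script),
    tools={"read_function": lambda name: f"<decompiled {name}>"},
)
print(answer)
```

The `max_steps` budget is what keeps an autonomous agent from wandering indefinitely while it traverses dozens of functions.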
The Actions tab generates batch rename, retype, and struct creation suggestions for every function and variable in scope. Each suggestion comes with a confidence score. Review the list, accept what looks right, and move on.
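The review-then-apply flow reduces to filtering suggestions by their confidence score. A sketch of one plausible shape for a suggestion record (field names are assumptions, not the plugin's actual data model):

```python
from dataclasses import dataclass

@dataclass
class RenameSuggestion:
    old: str          # current symbol name, e.g. an auto-generated stub
    new: str          # LLM-proposed name
    confidence: float # 0.0 - 1.0

suggestions = [
    RenameSuggestion("sub_401000", "parse_http_header", 0.92),
    RenameSuggestion("sub_401230", "maybe_checksum", 0.41),
    RenameSuggestion("var_8", "packet_len", 0.87),
]

# Bulk-accept only high-confidence suggestions; leave the rest for review
accepted = [s for s in suggestions if s.confidence >= 0.8]
for s in accepted:
    print(f"rename {s.old} -> {s.new}  ({s.confidence:.0%})")
```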
Index the entire binary into a semantic graph. Every function gets a summary, security flags, and call relationships. Search by behavior ("which functions parse network input?"), visualize clusters, and trace taint flows — all without leaving your disassembler.
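Behavior search works by matching a natural-language query against the stored per-function summaries rather than against symbol names. The sketch below uses plain word overlap to stay dependency-free; a real index would use embeddings, and the summaries here are invented examples.

```python
# Per-function summaries, as produced by the indexing pass (illustrative)
index = {
    "sub_4010a0": "parses incoming network packets and validates the header",
    "sub_4022f0": "encrypts a buffer with AES before transmission",
    "sub_4031c0": "reads a config file from disk at startup",
}

def search(query: str, index: dict) -> list:
    # Score each function by word overlap between query and summary
    q = set(query.lower().split())
    scored = ((len(q & set(summary.lower().split())), addr)
              for addr, summary in index.items())
    return [addr for score, addr in sorted(scored, reverse=True) if score > 0]

hits = search("which functions parse network input", index)
print(hits)
```

Because the match is against behavior descriptions, a query finds `sub_4010a0` even though nothing in its name mentions networking.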
Each plugin has a companion MCP server that exposes your disassembler's full API to external AI clients. Connect Claude Desktop, Cursor, or any MCP-compatible tool and let the LLM navigate functions, read decompiled code, set comments, and rename symbols — all programmatically.
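MCP clients drive these servers with JSON-RPC 2.0 requests; `tools/call` is the standard MCP method for invoking a tool. The tool name `rename_symbol` and its arguments below are illustrative, not a specific server's real tool list.

```python
import json

# Shape of an MCP tools/call request (JSON-RPC 2.0). An MCP client such as
# Claude Desktop or Cursor sends requests like this over stdio or HTTP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "rename_symbol",  # hypothetical tool name
        "arguments": {"address": "0x401000", "new_name": "parse_http_header"},
    },
}
print(json.dumps(request, indent=2))
```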
Runs inside Ghidra as an extension. Shares a single server across all CodeBrowser windows with automatic focus tracking, so the AI always knows which binary you're looking at.
The Binary Ninja server supports concurrent analysis of multiple binaries with intelligent session management. Includes guided workflow prompts for vulnerability research, protocol analysis, and documentation generation.
Standalone server for IDA Pro 9.x with thread-safe database modifications. Five consolidated tools use action parameters for a clean, predictable API surface that LLMs can reason about reliably.
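The "action parameter" pattern consolidates many operations behind one tool entry point, so the model reasons over a handful of tool names instead of dozens. A minimal sketch; the handler names and return values are invented for illustration.

```python
# One tool, many operations: the `action` field selects the handler.
def functions_tool(action: str, **kwargs):
    handlers = {
        "list":   lambda: ["main", "parse_input"],
        "read":   lambda name: f"<decompiled {name}>",
        "rename": lambda old, new: f"renamed {old} -> {new}",
    }
    if action not in handlers:
        raise ValueError(f"unknown action: {action}")
    return handlers[action](**kwargs)

print(functions_tool("rename", old="sub_401000", new="parse_input"))
```

A small, fixed tool surface with a predictable parameter shape is easier for an LLM to call correctly than a sprawling one-tool-per-operation API.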
Built by reverse engineers, for reverse engineers. Every feature exists because we needed it ourselves.
Explanations, security flags, and graph data are stored in local databases. Close the binary, reopen it weeks later, and everything is still there. Your analysis compounds over time instead of disappearing with each session.
Every plugin supports Anthropic, OpenAI, Ollama, LM Studio, LiteLLM, and OAuth-based subscriptions. Run a local model for air-gapped work. Use Claude or GPT for maximum quality. Switch between them without changing your workflow.
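Provider switching boils down to a config table, since Ollama and LM Studio both expose OpenAI-compatible endpoints at their documented default ports. A sketch under that assumption; the model names are placeholders, not recommendations, and the `provider` helper is hypothetical.

```python
# Base URLs are the providers' documented defaults; models are placeholders.
PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1", "model": "<openai-model>"},
    "ollama":    {"base_url": "http://localhost:11434/v1", "model": "<local-model>"},
    "lm_studio": {"base_url": "http://localhost:1234/v1",  "model": "<local-model>"},
}

def provider(name: str) -> dict:
    cfg = dict(PROVIDERS[name])
    # Local backends need no API key; hosted ones read it from the environment
    cfg["needs_key"] = cfg["base_url"].startswith("https://")
    return cfg

print(provider("ollama"))
```

Swapping providers then means changing one name, with the rest of the workflow untouched.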
The semantic graph lets you query functions by what they do: "which functions handle user input?", "show me crypto operations", "find network parsers." No more scrolling through thousands of sub_* stubs hoping to find the right one.
All plugins and MCP servers are open source. Add your own MCP tools, connect external servers, upload custom RAG documents, or modify the system prompt. The architecture is designed to get out of your way.