Our idea is straightforward: after a decade of auditing code and writing exploits, we've accumulated a wealth of experience. So why not teach these agents to replicate what we do during bug hunting and exploit writing? Of course, LLMs aren't sufficient on their own, so we've integrated various program analysis techniques to augment the models and help the agents understand more complex and esoteric code.
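To make that integration concrete, here's a minimal sketch of the general pattern: a program-analysis query exposed as a tool the agent can call by name. Everything here is a hypothetical stand-in, not our actual implementation; the `list_callees` tool, the toy call graph, and the dispatch function are only meant to illustrate how analysis results can be fed back to the model on demand.

```python
from typing import Callable

# Toy static call graph standing in for real analysis output
# (e.g., what a decompiler or compiler pass would produce).
CALL_GRAPH: dict[str, list[str]] = {
    "parse_packet": ["read_header", "memcpy_unchecked"],
    "read_header": ["validate_length"],
}

# Registry of analysis helpers the agent loop can dispatch into.
TOOLS: dict[str, Callable[[str], str]] = {}

def tool(fn: Callable[[str], str]) -> Callable[[str], str]:
    """Register a program-analysis helper as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_callees(function: str) -> str:
    """Return the functions directly called by `function`."""
    callees = CALL_GRAPH.get(function, [])
    return ", ".join(callees) if callees else "(none found)"

def dispatch(tool_name: str, argument: str) -> str:
    """Run when the model emits a tool call; the result goes back
    into the model's context as an observation."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)

# An agent auditing parse_packet might query its callees before
# deciding where to look for a memory-safety bug:
print(dispatch("list_callees", "parse_packet"))
# -> read_header, memcpy_unchecked
```

The point of the pattern is that the model doesn't have to hold an entire codebase in context: it asks targeted questions, and the analysis machinery answers with small, precise facts it can reason over.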