- Design the system and prompts
- Build and integrate the attack tools
- Guide the decision logic and analysis
This isn’t just semantics — overstating AI capabilities can confuse the public and mislead buyers, especially in high-stakes security contexts.
I say this as someone actively working in this space. I participated in the development of PentestGPT, which helped kickstart this wave of research and investment, and more recently I’ve been working on Cybersecurity AI (CAI), the leading open-source project for building autonomous security agents:
- CAI GitHub: https://github.com/aliasrobotics/cai
- Tech report: https://arxiv.org/pdf/2504.06017
I’m all for pushing boundaries, but let’s keep the messaging grounded in reality. The future of AI in security is exciting — and we’re just getting started.
Who would it be, gremlins? Those humans weren't at the top of the leaderboard before they had the AI, so clearly it helps.
What's being criticized here is the hype, which can be misleading and confusing. On this topic, I wrote a short essay, “Cybersecurity AI: The Dangerous Gap Between Automation and Autonomy,” to sort fact from fiction: https://shorturl.at/1ytz7