In actuality "Antivirus" for AI agents looks something more like this:
1. Input scanning: ML classifiers detect injection patterns (not regex, actual embedding-based detection) 2. Output validation: catch when the model attempts unauthorized actions 3. Privilege separation: the LLM doesn't have direct access to sensitive resources
Is it perfect? No. Neither is SQL parameterization against all injection attacks. But good is better than nothing.
(Disclosure: I've built a prompt protection layer for OpenClaw that I've been using myself and sharing with friends - happy to discuss technical approaches if anyone's curious.)
What injection attack gets through SQL parameterization?
If you must generate nonsense with an LLM, at least proofread it before posting.