Ep. 26 - The Future of Autonomous Red Teaming with Ads Dawson
In episode 26, we talk to Ads Dawson, Staff AI Security Researcher @ Dreadnode.
Ads is a seasoned security engineer with over 13 years of experience in offensive security and web application penetration testing. Many in our community will know him as a foundational figure in AI security. He was a founder and the Technical Lead for the hugely influential OWASP Top 10 for Large Language Model Applications. He's also a prolific open-source toolsmith, building extensions for Caido, Burp Suite, and other utilities that many of us have likely used.
Now at Dreadnode, he's at the absolute bleeding edge, working on a team that is industrializing offensive security with AI agents. We're going to get into his journey from being a self-taught 'OWASP child' to shaping industry standards, the future of automated red teaming, and what it means to build the very tools that will define the next era of cyber conflict. Ads was also the lead author and owner of “AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models”, a public benchmark, dataset and harness for LLMs attacking AI/ML CTF challenges.
In this episode, we discuss the evolving landscape of offensive security in the age of AI. The conversation covers the practical application of AI agents in red teaming, a critical look at industry standards like the OWASP Top 10 for LLMs, and Ads' hands-on approach to building and evaluating autonomous hacking tools. He shares insights from his work industrializing offensive security with AI, his journey as a self-taught professional, and offers advice for others looking to grow in the field.
Key Takeaways
- AI is a "Force Multiplier," Not a Replacement: Ads emphasizes that AI should be viewed as a productivity tool that enhances the capabilities of human security professionals, allowing them to scale their efforts and tackle more complex tasks. Human expertise remains critical, especially since much of the data used to train AI models originates from human researchers.
- Prompt Injection is a Mechanism, Not a Vulnerability: A key insight is that "prompt injection" itself isn't a vulnerability but a method used to deliver an exploit. The discussion highlights a broader critique of security frameworks like the OWASP Top 10, which can sometimes oversimplify complex issues and become compliance checklists rather than practical guides.
- Build Offensive Agents with Small, Focused Tasks: When creating offensive AI agents, the most successful approach is to break down the overall objective into small, concise sub-tasks. For example, instead of a single goal to "find XSS," an agent would have separate tasks to log in, identify input fields, and then test those inputs.
- Hands-On Learning and Community are Crucial for Growth: As a self-taught professional, Ads advocates for getting deeply involved in the security community through meetups and CTFs. He stresses the importance of hands-on practice—"just play with it"—and curating your information feed by following trusted researchers to cut through the noise and continuously learn.
We hope you tune in and, if you like the episode, please do subscribe!