Project Glasswing: What Anthropic's Mythos Means for Cybersecurity
What happens when an AI model can find more vulnerabilities in a day than a red team could find in a year?
Welcome to Razorwire, the podcast where we share our take on the world of cybersecurity with direct, practical advice for professionals and business owners alike. I’m Jim and in this episode, I’m joined by Martin Voelk, penetration tester and AI red teamer, and Jonathan Care, lead analyst at KuppingerCole covering AI and cybersecurity.
Anthropic recently announced Mythos, a security-focused AI model reportedly capable of discovering vulnerabilities that have gone undetected for decades, including a 27-year-old bug in OpenBSD. But how much of this is genuine breakthrough and how much is marketing? This episode cuts through the hype and asks what Mythos actually means for the cybersecurity industry, from the arms race it signals between AI model providers to the competitive implications of restricting access to a small group of US-based companies.
The conversation goes well beyond Mythos itself, into the reality that AI-powered hacking at scale is already happening, that existing models have already been used to compromise government infrastructure, and that open source and non-Western alternatives are freely available to anyone who wants them. With a reported 80% of code now vibe coded with minimal security checks, jailbreaking tools freely available on the open web and CISOs unable to keep pace with the speed of adoption, the question isn’t whether AI will change cybersecurity. It’s whether the industry can adapt fast enough to survive what’s already here.
⸻
Three key talking points:
- The Mythos hype vs the reality of AI-powered hacking
- The competitive and geopolitical implications of restricted AI models
- Why security practitioners can’t keep up and what comes next
Whether Mythos lives up to the hype or not, the arms race it signals is already underway. If you want to understand what that means for cybersecurity, this is the conversation to listen to.
⸻
On the implications of restricting AI security models:
“Anthropic may be doing this, but for those of us who are not lucky enough to be Anthropic’s friend, other countries, other organisations are not so circumspect.”
Jonathan Care
⸻
Listen to this episode on your favourite podcasting platform:
https://razorwire.captivate.fm/listen
⸻
In this episode, we covered the following topics:
- Anthropic’s Mythos Announcement
- AI-Powered Vulnerability Discovery at Scale
- The Mexican Government Hack
- Restricted Access and Competitive Advantage
- The Open Source and Non-Western Model Landscape
- Vibe Coding and Unchecked AI-Generated Code
- Jailbreaking and Uncensored Models
- The CISO’s Impossible Position
- Keeping Up With the Pace of Change
- The Future: Agent vs Agent
⸻
For more information about us, or if you have any questions you would like us to discuss, email podcast@razorthorn.com.
If you need consultancy support, visit https://www.razorthorn.com. We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.
⸻
Follow us online:
LinkedIn: https://www.linkedin.com/company/razorthorn-security
YouTube: https://www.youtube.com/c/RazorthornSecurity
TikTok: https://www.tiktok.com/@razorwire.podcast
Instagram: https://www.instagram.com/razorwire.podcast
X: https://x.com/RazorThornLTD
Website: https://www.razorthorn.com