Agent ForgingGround with Built-In Red-Teaming Agents continuously evaluates and stress-tests AI agents across 50+ production-grade enterprise environments so enterprises can embrace AI agents without compromising security.
Today, Mend.io is launching Contextual Project Classification, an AI-native feature that automatically analyzes your codebase to identify which applications handle sensitive data like payments, healthcare records, and PII, enabling true risk-based security prioritization.
Anyone who knows me knows I’m passionate about mindfulness. Because I genuinely believe it makes us better humans. But also because I have one of those brains that desperately needs it. I’m easily distracted, and I start new ideas before finishing old ones. My attention can scatter in a hundred directions. I wrote before about how I clicked on a phishing test because I was multitasking and running on autopilot. That moment really changed the direction of my career and my research.
As I was writing my latest book, How AI and Quantum Impact Cyber Threats and Defenses, I was struck by how many theoretical and real attacks there are involving AI. There are attacks committed by AI and attacks committed against AI, and I’m not sure which category is bigger.
AI security tools are no longer just defensive layers. They are high-value targets being studied, fingerprinted, and bypassed, much like traditional endpoint detection and response (EDR) platforms and antivirus solutions were in their early days. The speed and scale at which these tools are being deployed make reactive defense increasingly unsustainable.
Synthetic data has been fine for testing software for decades. Traditional apps follow rules. You check inputs, check outputs, and file a bug when something breaks. AI is different. AI gets deployed into situations where the rules aren’t clear and context is everything. The edge cases aren’t exceptions. They’re the whole point. That changes what your test data needs to look like.
In late 2025, a Fortune 50 enterprise decided to deploy autonomous AI agents across core business operations. Customer support that could reason through complex issues. Supply chain systems that could adapt in real time. Product managers with AI assistants pulling insights from dozens of data sources simultaneously. The capabilities that made the agents useful also introduced a problem nobody had a clean answer for. These weren’t chatbots locked inside a single application.
When your SOC alerts on “suspicious AI activity” in a production trading system, your response team faces a question that didn’t exist two years ago: can you explain to regulators exactly which function processed the malicious prompt, which internal tool it called, and how customer data ended up leaving your environment?
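Answering that question requires an audit trail that links each prompt to every tool call it triggered and every destination data was sent to. Here is a minimal sketch of what such a trail could look like; the `ToolCallAuditor` class, its method names, and the example tool names are all hypothetical illustrations, not any vendor's actual API.

```python
import uuid
from datetime import datetime, timezone


class ToolCallAuditor:
    """Hypothetical audit log linking agent tool calls back to the
    prompt that triggered them, so responders can reconstruct which
    function ran and where data left the environment."""

    def __init__(self):
        self.records = []

    def log_call(self, prompt_id, tool_name, arguments, data_destinations=None):
        # One record per tool invocation, keyed by the originating prompt.
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt_id": prompt_id,
            "tool": tool_name,
            "arguments": arguments,
            "data_destinations": data_destinations or [],
        }
        self.records.append(record)
        return record

    def trace(self, prompt_id):
        """Return the full chain of tool calls triggered by one prompt."""
        return [r for r in self.records if r["prompt_id"] == prompt_id]


# Usage: reconstruct what a flagged prompt touched.
# Tool names and the prompt ID below are illustrative.
auditor = ToolCallAuditor()
auditor.log_call("prompt-42", "lookup_account", {"account": "A-1001"})
auditor.log_call("prompt-42", "export_report", {"format": "csv"},
                 data_destinations=["s3://external-bucket/report.csv"])

chain = auditor.trace("prompt-42")
tools_used = [r["tool"] for r in chain]
exfil_paths = [d for r in chain for d in r["data_destinations"]]
```

With a log like this, the regulator's question becomes a query: `trace("prompt-42")` yields the ordered tool calls, and the `data_destinations` fields show how data exited. Production systems would persist these records to tamper-evident storage rather than an in-memory list.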