AI is transforming the cybersecurity landscape for both threat actors and defenders. Learn how AI is being used on both sides of the battle with practical tips to help your security team up-level its AI use.
AI has moved from experimentation to a strategic enterprise imperative. It’s no longer about whether organizations will adopt AI, but whether their security architecture can govern it at the speed and scale at which it is being embedded into the business. This is not a future concern. It is today’s operational mandate to: Securing AI is not limited to software applications and agents.
Today, Mend.io is launching Contextual Project Classification, an AI-native feature that automatically analyzes your codebase to identify which applications handle sensitive data like payments, healthcare records, and PII, enabling true risk-based security prioritization.
Anyone who knows me knows I’m passionate about mindfulness. Because I genuinely believe it makes us better humans. But also, because I have one of those brains that desperately needs it. I’m easily distracted and I start new ideas before finishing old ones. My attention can scatter in a hundred directions. I wrote before how I clicked on a phishing test because I was multitasking and running on autopilot. And that moment really changed the direction of my career and my research.
As I was writing my latest book, How AI and Quantum Impact Cyber Threats and Defenses, I was hit by how many theoretical and real attacks there are involving AI. There are attacks committed by AI and attacks committed agsinst AI, and I’m not sure which category is bigger.
AI security tools are no longer just defensive layers. They are high value targets being studied, fingerprinted, and bypassed much like traditional endpoint detection and response (EDR) platforms and antivirus solutions were in their early days. The speed and scale at which these tools are being deployed makes reactive defense increasingly unsustainable.
Synthetic data has been fine for testing software for decades. Traditional apps follow rules. You check inputs, check outputs, file a bug when something breaks. AI is different. AI gets deployed into the situations where the rules aren’t clear and context is everything. The edge cases aren’t exceptions. They’re the whole point. That changes what your test data needs to look like.
In late 2025, a Fortune 50 enterprise decided to deploy autonomous AI agents across core business operations. Customer support that could reason through complex issues. Supply chain systems that could adapt in real time. Product managers with AI assistants pulling insights from dozens of data sources simultaneously. The capabilities that made the agents useful also introduced a problem nobody had a clean answer for. These weren’t chatbots locked inside a single application.
When your SOC alerts on “suspicious AI activity” in a production trading system, your response team faces a question that didn’t exist two years ago: can you explain to regulators exactly which function processed the malicious prompt, which internal tool it called, and how customer data ended up leaving your environment?
It’s 2:47 AM and your SOC dashboard lights up. Six alerts fire across three hours from a single Kubernetes cluster: an outbound HTTP fetch to an unfamiliar domain, a tool invocation inside a customer support agent, an API call to an internal service the agent has never contacted, a service account token read, a file write to a model artifact directory, and an outbound data transfer that looks like normal API usage.