
Detecting Rogue AI Agents: Tool Misuse and API Abuse at Runtime

When your CNAPP flags a suspicious dependency in an AI agent container, your WAF logs an unusual API spike, and your SIEM shows a burst of cloud storage calls—are those three separate incidents or one rogue agent attack? Most security teams treat them as three tickets in three queues, investigated by three people who may never connect the dots. By the time someone pieces together that a single compromised agent drove all three signals, the attacker has already moved laterally and exfiltrated data.
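A minimal sketch of the cross-queue correlation this scenario calls for: normalize alerts from different tools and group them by a shared workload identifier within a time window. The tool names, field names, and threshold here are hypothetical, not a reference to any specific product's schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized alerts from three separate tools.
alerts = [
    {"source": "cnapp", "workload": "agent-7f3", "time": datetime(2025, 6, 1, 10, 2), "detail": "suspicious dependency"},
    {"source": "waf", "workload": "agent-7f3", "time": datetime(2025, 6, 1, 10, 9), "detail": "API call spike"},
    {"source": "siem", "workload": "agent-7f3", "time": datetime(2025, 6, 1, 10, 15), "detail": "burst of storage calls"},
    {"source": "waf", "workload": "web-frontend", "time": datetime(2025, 6, 1, 9, 0), "detail": "rate-limit hit"},
]

def correlate(alerts, window=timedelta(minutes=30)):
    """Group alerts by workload; flag any workload where two or more
    distinct tools fired within the same time window."""
    by_workload = defaultdict(list)
    for a in alerts:
        by_workload[a["workload"]].append(a)
    incidents = []
    for workload, group in by_workload.items():
        group.sort(key=lambda a: a["time"])
        sources = {a["source"] for a in group}
        span = group[-1]["time"] - group[0]["time"]
        if len(sources) >= 2 and span <= window:
            incidents.append({"workload": workload, "sources": sorted(sources)})
    return incidents

print(correlate(alerts))
# One incident: three tools, one workload, 13 minutes apart.
```

The point is not the grouping logic itself but the prerequisite it exposes: all three tools must emit an identifier that resolves to the same agent workload, or the three tickets can never become one incident.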

What is an AI-BOM? Why Static Manifests Fall Short

Your AI-BOM shows every model, tool, and data source you deployed. But when your SOC investigates an alert about unusual agent behavior, that inventory tells them nothing about what actually happened at runtime. Static AI-BOMs document what you intended to run. Attackers exploit what your AI workloads actually do in production: which APIs they call, what data they touch, and how they use approved tools in unapproved ways.
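To make the gap concrete, here is what a static AI-BOM entry might look like. The schema is hypothetical and simplified; real AI-BOM formats vary.

```json
{
  "component": "support-agent",
  "model": "gpt-4o",
  "tools": ["search_tickets", "send_email"],
  "data_sources": ["crm_db"]
}
```

Nothing in this entry says whether `send_email` fired three times or thirty thousand times last night, or whether the agent's `crm_db` queries stayed within their intended scope. Answering those questions requires runtime telemetry, which is exactly what a static manifest cannot provide.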

RSA and DC Dispatches: Agentic AI Security Is the Story, Government Policy Needs to Catch Up

After two weeks of back-to-back meetings in Washington, DC, and on the floor and in the wings of the RSA Conference, I heard one theme echo through nearly every conversation with senior government officials and public policy leaders from global technology companies: agentic AI security is the defining emerging security challenge of this moment, and policy is not keeping pace.

Building AI Security with Our Customers: 5 Lessons from Evo's Design Partner Program

In 2025, we embarked on a new journey to secure the most important technology transformation of this decade: generative AI. Our vision is to help companies secure their AI quickly, so they can innovate on the cutting edge and put AI and agentic use cases into production. To do this, we built Evo, the world's first agentic orchestrator for AI security. The foundation of any product is customer needs.

AI-driven DAST for mobile apps: The next evolution of Dynamic Security Testing

“AI-powered DAST” is everywhere. It signals progress, but assumes something fundamental was missing. It wasn’t. DAST struggled not from lack of intelligence, but from lack of depth. Most tools never reached inside authenticated, stateful, multi-step journeys where real logic, sensitive data, and critical vulnerabilities exist. That’s the part Appknox solved years ago. AI here is not a reset. It is an accelerator, applied to a system already operating where risk actually lives.

Kubernetes for Agentic AI: Best Practices for Security and Observability

Agentic AI workloads are shipping to production on Kubernetes faster than the standards to secure them can emerge. Many teams deploying autonomous, tool-calling agents as containerized microservices do so without a shared baseline for securing or monitoring those containers. The CNCF AI Technical Community Group recently published a comprehensive article on cloud-native agentic standards, marking the first attempt to define best practices for such deployments.
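As one illustration of the kind of baseline in question, a pod spec can lock down an agent container with standard Kubernetes fields. This is a sketch, not a summary of the CNCF group's recommendations; the workload and image names are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tool-calling-agent            # hypothetical agent workload
  labels:
    app: tool-calling-agent
spec:
  automountServiceAccountToken: false # agent gets no Kubernetes API access by default
  containers:
    - name: agent
      image: registry.example.com/agents/tool-caller:1.0  # pinned version, not :latest
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      resources:
        limits:
          cpu: "1"
          memory: 512Mi
```

A spec like this pairs naturally with a NetworkPolicy restricting egress to the agent's approved tool endpoints, so that unexpected outbound calls become an observable signal rather than silent behavior.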

DSPM, DLP, and AI Security: Why You Need All Three

Security budgets are tightening, and tool consolidation reviews keep landing on the same three categories: data security posture management (DSPM), data loss prevention (DLP), and AI security. At the same time, vendor marketing has done little to clarify how the three differ, or to chart a path for organizations that need to strengthen data security efficiently.

The UK's Cyber Action Plan marks the end of compliance-led security

The UK government's new £210 million Cyber Action Plan signals an important shift in how cyber risk is being addressed at a national level. Designed to strengthen cyber defences across government departments and the wider public sector, the plan establishes a new Cyber Unit and introduces stronger expectations around resilience, accountability and operational capability.