
The Next Era of AppSec: Why AI-Generated Code Needs Offensive Dynamic Testing

My colleague Manoj Nair recently wrote about the growing gap between what AI builds and what security teams actually test. He made the case that the speed of AI-driven development has fundamentally outpaced validation, and that the response can't be to slow down, but to change what testing means. I agree with every word.

AI Is Building Your Attack Surface. Are You Testing It?

The market is flooded with claims. One vendor tops a leaderboard. Another raises nine figures on a pitch deck. Meanwhile, your developers shipped three AI-generated services before lunch. Here's the conversation the industry isn't having, and the one we've been building toward for years. A version of this conversation is happening inside every security team right now. Someone demos an AI coding assistant. The speed is undeniable, and the team is in awe, yet still cautious, sometimes skeptical.

Securing the Agent Skills Registry: How Snyk and Tessl Are Setting the Standard

Agent skills are becoming the building blocks of AI-native software development, giving coding agents structured, versioned context, like how to use your APIs, how to build in your codebase, and how to enforce your team's policies. Developers install them from registries the same way they install npm packages or Python libraries. But unlike npm or PyPI, the agent skills ecosystem is new.

I Read Cursor's Security Agent Prompts, So You Don't Have To

Cursor's security team built four autonomous agents that review 3,000+ PRs per week, catch 200+ vulnerabilities, and open fix PRs automatically. The engineering is impressive, and the prompts are shockingly simple. But there's a meaningful gap between "LLM agents reviewing PRs" and "enterprise security program," and that gap is exactly where things get interesting.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.
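The heuristic the post critiques can be reduced to a one-line rule. A minimal sketch, with an assumed one-million-weekly-downloads threshold (the function name and cutoff are illustrative, not from the post):

```python
def prevalence_trust(weekly_downloads: int, threshold: int = 1_000_000) -> str:
    """Naive 'Prevalence = Trust' rule: popular packages are presumed safe,
    obscure ones get extra scrutiny. This is the heuristic that breaks down
    when AI assistants start suggesting long-dormant packages."""
    return "trusted" if weekly_downloads >= threshold else "review"

# lodash-scale popularity passes; an abandoned niche package does not
print(prevalence_trust(40_000_000))  # trusted
print(prevalence_trust(1_200))       # review
```

The point of the article is that this rule silently fails once LLMs reach past the popular head of the distribution into the dormant tail, where "thousands of eyes" were never watching.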

The Rise of the AI Security Engineer: A New Discipline for an AI-Native World

We are witnessing the birth of a new profession at the intersection of security engineering and security operations, a discipline that didn't exist five years ago because the systems it protects didn't exist five years ago. As artificial intelligence moves from experimental to essential and agentic systems begin to perceive, reason, act, and learn autonomously, we need defenders who can operate at the same velocity. I'm talking about the AI Security Engineer.

Claude Code Security: A Welcome Evolution in the Remediation Loop

AI accelerates discovery — but enterprise trust still depends on deterministic validation, remediation automation, and governance at scale. Last Friday, Anthropic launched Claude Code Security, powered by Opus 4.6, inside Claude Code. The demo is impressive: Frontier AI reasoning scanned open source codebases and surfaced over 500 previously unknown high-severity vulnerabilities — including subtle heap buffer overflows that had survived decades of expert review and fuzzing.

How "Clinejection" Turned an AI Bot into a Supply Chain Attack

On February 9, 2026, security researcher Adnan Khan publicly disclosed a vulnerability chain (dubbed "Clinejection") in the Cline repository that turned the popular AI coding tool's own issue triage bot into a supply chain attack vector. Eight days later, an unknown actor exploited the same flaw to publish an unauthorized version of the Cline CLI to npm, installing the OpenClaw AI agent on every developer machine that updated during an eight-hour window.

Snyk and Cline: Securing the Future of Autonomous Coding

We are thrilled to announce a strategic partnership with Cline Bot Inc. to bridge the gap between autonomous speed and enterprise trust. By embedding Snyk’s security intelligence directly into Cline’s autonomous loops, we are delivering an end-to-end automated secure coding workflow that empowers developers to innovate with confidence. The evolution of AI coding tools is accelerating rapidly. We have moved from simple completion to sophisticated chat, and now to full autonomy.