Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI on the Radar: Securing AI-Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.

The Rise of the AI Security Engineer: A New Discipline for an AI-Native World

We are witnessing the birth of a new profession at the intersection of security engineering and security operations, a discipline that didn't exist five years ago because the systems it protects didn't exist five years ago. As artificial intelligence moves from experimental to essential and agentic systems begin to perceive, reason, act, and learn autonomously, we need defenders who can operate at the same velocity. I'm talking about the AI Security Engineer.

Claude Code Security: A Welcome Evolution in the Remediation Loop

AI accelerates discovery, but enterprise trust still depends on deterministic validation, remediation automation, and governance at scale. Last Friday, Anthropic launched Claude Code Security, powered by Opus 4.6, inside Claude Code. The demo is impressive: frontier AI reasoning scanned open source codebases and surfaced over 500 previously unknown high-severity vulnerabilities, including subtle heap buffer overflows that had survived decades of expert review and fuzzing.

Cursor Composer 1.5 is Here: Is It Actually Better?

Is Cursor’s new Composer 1.5 model a major leap forward, or just a marginal update? Today, we’re putting the latest version of Cursor’s agentic AI to the test using our "Production-Ready Note App" prompt. We compare the speed, UI design, and agentic capabilities of 1.5 against version 1.0. Most importantly, we run a full security audit using the Snyk extension to see if the AI-generated code is actually safe for production.

How "Clinejection" Turned an AI Bot into a Supply Chain Attack

On February 9, 2026, security researcher Adnan Khan publicly disclosed a vulnerability chain (dubbed "Clinejection") in the Cline repository that turned the popular AI coding tool's own issue triage bot into a supply chain attack vector. Eight days later, an unknown actor exploited the same flaw to publish an unauthorized version of the Cline CLI to npm, installing the OpenClaw AI agent on every developer machine that updated during an eight-hour window.

Snyk and Cline: Securing the Future of Autonomous Coding

We are thrilled to announce a strategic partnership with Cline Bot Inc. to bridge the gap between autonomous speed and enterprise trust. By embedding Snyk’s security intelligence directly into Cline’s autonomous loops, we are delivering an end-to-end automated secure coding workflow that empowers developers to innovate with confidence. The evolution of AI coding tools is accelerating rapidly. We have moved from simple completion to sophisticated chat, and now to full autonomy.