
Axios npm Package Compromised: Supply Chain Attack Delivers Cross-Platform RAT

On March 31, 2026, two malicious versions of axios (1.14.1 and 0.30.4), the enormously popular JavaScript HTTP client with over 100 million weekly downloads, were briefly published to npm via a compromised maintainer account. The packages contained a hidden dependency that deployed a cross-platform remote access trojan (RAT) to any machine that ran npm install (or the equivalent command in another package manager, such as Bun) during a two-hour window. Both malicious versions were removed from npm by 03:29 UTC.
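For teams triaging exposure, a quick first step is checking whether a project's lockfile resolved one of the two affected versions. A minimal sketch in Node, assuming the npm lockfile v2/v3 `packages` layout (the in-memory fragment below is illustrative):

```javascript
// Minimal sketch: walk a package-lock.json (v2/v3 "packages" layout) and
// flag the two axios versions published in the incident (1.14.1 and 0.30.4).
const AFFECTED = new Set(["1.14.1", "0.30.4"]);

function findAffectedAxios(lock) {
  const hits = [];
  for (const [path, entry] of Object.entries(lock.packages || {})) {
    // axios can appear at any nesting depth in the dependency tree.
    if (path.endsWith("node_modules/axios") && AFFECTED.has(entry.version)) {
      hits.push({ path, version: entry.version });
    }
  }
  return hits;
}

// Illustrative lockfile fragment; in practice, JSON.parse the real file.
const lock = {
  packages: {
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/left-pad": { version: "1.3.0" },
  },
};
console.log(findAffectedAxios(lock)); // flags the pinned axios 1.14.1
```

Running `npm ls axios` gives the same answer for a live project tree; the point of scanning the lockfile is that it catches transitive pins even before anything is installed.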

GitHub Spark vs. Replit - Vibe Code Challenge

We pit GitHub Spark (in public preview) against Replit's AI agent. The challenge? Build a fully functional community forum for DIY tips from a single prompt. We compare design aesthetics, mobile responsiveness, login security, and deployment speed to see which tool creates a truly production-ready application. Which one do you think deserved the win? Let us know in the comments!

The 5 Principles of Snyk's Developer Experience

In the age of AI-driven development, speed is the new baseline. But as AI agents accelerate the pace of coding, they also amplify the risk of security bottlenecks. At Snyk, we believe a superior Developer Experience (DX) is the only way to secure this new frontier. DX is not just a layer on top of the product. It is the foundation that allows developers to unleash AI innovation securely. We think of DX as a system of decisions that compound over time.

From Discovery to Defense: Why AI Red Teaming Is the Next Step After AI-SPM

This week, we announced the general availability of Evo AI-SPM, the first operational layer of Snyk’s AI Security Fabric. AI-SPM gives security teams something they’ve never had before: a system of record for AI risk, with the ability to discover models, frameworks, datasets, and agent infrastructure embedded directly in code. For many organizations, that discovery step is a breakthrough.

The Next Era of AppSec: Why AI-Generated Code Needs Offensive Dynamic Testing

My colleague Manoj Nair recently wrote about the growing gap between what AI builds and what security teams actually test. He made the case that the speed of AI-driven development has fundamentally outpaced validation, and that the response can't be to slow down, but to change what testing means. I agree with every word.

AI Is Building Your Attack Surface. Are You Testing It?

The market is flooded with claims. One vendor tops a leaderboard. Another raises nine figures on a pitch deck. Meanwhile, your developers shipped three AI-generated services before lunch. Here's the conversation the industry isn't having, and the one we've been building toward for years. There's a version of this conversation happening inside every Security team right now. Someone demos an AI coding assistant. The speed is undeniable; the team is in awe, still cautious, sometimes skeptical.

I Read Cursor's Security Agent Prompts, So You Don't Have To

Cursor's security team built four autonomous agents that review 3,000+ PRs per week, catch 200+ vulnerabilities, and open fix PRs automatically. The engineering is impressive, and the prompts are shockingly simple. But there's a meaningful gap between "LLM agents reviewing PRs" and "enterprise security program," and that gap is exactly where things get interesting.

Securing the Agent Skills Registry: How Snyk and Tessl Are Setting the Standard

Agent skills are becoming the building blocks of AI-native software development, giving coding agents structured, versioned context, like how to use your APIs, how to build in your codebase, and how to enforce your team's policies. Developers install them from registries the same way they install npm packages or Python libraries. But unlike npm or PyPI, the agent skills ecosystem is new.

Lovable vs. Bolt - Vibe Code Challenge

Which AI tool is better for building a real app without writing code, Bolt or Lovable? In this video, I put both AI app builders head-to-head using the exact same prompt to create a DIY home repair forum. From database setup to authentication, UI design, publishing, and security checks, we compare how each platform performs in real time. The goal isn’t just to generate something that looks like an app; it’s to see whether these tools can actually create something usable, functional, and potentially production-ready. We evaluate exactly that.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.
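The heuristic the article critiques fits in a few lines; the 1M-downloads-per-week cutoff below is an arbitrary assumption for illustration, not a number from the article:

```javascript
// Sketch of the "Prevalence = Trust" heuristic: treat sheer weekly download
// volume as a proxy for community scrutiny. The threshold is illustrative.
const WEEKLY_DOWNLOAD_THRESHOLD = 1_000_000;

function prevalenceHeuristic(weeklyDownloads) {
  // High-traffic packages were assumed "safe enough"; obscure ones got review.
  return weeklyDownloads >= WEEKLY_DOWNLOAD_THRESHOLD ? "trusted" : "review";
}

console.log(prevalenceHeuristic(45_000_000)); // a lodash-scale package
console.log(prevalenceHeuristic(120));        // an obscure, dormant package
```

The article's argument is that LLM suggestions break this proxy: when an assistant recommends a dormant, low-download package, obscurity no longer correlates with how much scrutiny the package actually receives.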

AI on the Radar: Securing AI-Driven Development

Join Vandana and Rob in this insightful webinar exploring the rapidly evolving landscape of AI security. As we shift from simple query-response models to complex autonomous agents that can plan, execute code, and access sensitive APIs, the traditional security "locks" are no longer sufficient. This session dives deep into the OWASP AI Exchange, a community-driven initiative providing practical guidance and technical controls for securing AI systems.