
Snyk Joins CISA's Secure by Design Pledge

As the Chief Information Security Officer at Snyk, my primary role is to ensure the security and integrity of our products, our systems, and our customers' data. But my responsibility extends beyond our walls. It involves championing a vision for a more secure digital world—a vision I am proud to say we share with the U.S. Cybersecurity and Infrastructure Security Agency (CISA).

Secure at Inception: Introducing New Tools for Securing AI-Native Development

At Snyk, we believe you should never have to choose between speed and security. As the age of AI transforms software development, our goal is to extend our developer-first security approach to this new era, providing the essential tools your teams need to build with confidence. Today at Black Hat, we are delivering on that vision with three tangible innovations that offer a comprehensive solution to secure the entire code lifecycle with AI.

When "Private" Isn't: The Security Risks of GPT Chats Leaking to Search Engines

In late July 2025, users discovered that ChatGPT conversations that had been shared via link were appearing in search results on Google, Bing, and DuckDuckGo. The exposed conversations included personal content relating to mental health, career concerns, legal issues, and more. This was not a data breach: the exposure resulted from a now-removed feature that made shared chats discoverable via search-engine indexing.
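The mechanism at issue is the standard search-indexing opt-out: crawlers honor a `noindex` directive in either a robots meta tag or an `X-Robots-Tag` response header, and a page lacking both is fair game for indexing. As a rough illustration (the function name and heuristics below are assumptions for this sketch, not any ChatGPT or search-engine API), a page can be checked for those opt-out signals like this:

```python
"""Sketch: check whether a page opts out of search indexing.

Illustrative only -- `allows_indexing` is a hypothetical helper, not
part of any real product API. It checks the two standard signals
crawlers honor: the robots meta tag and the X-Robots-Tag header.
"""
import re


def allows_indexing(html, headers=None):
    """Return False if the page carries a noindex directive."""
    headers = headers or {}
    # X-Robots-Tag: noindex is honored by all major crawlers.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return False
    # <meta name="robots" content="noindex, ..."> in the HTML itself.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    if meta and "noindex" in meta.group(1).lower():
        return False
    return True
```

A shared page that returns neither signal is eligible for indexing by default, which is why an opt-in "discoverable" toggle on shared links carries real exposure risk.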

Fend Off AI Fatigue with the Snyk AI Trust Platform

Generative AI has transformed software development almost overnight. From coding assistants to AI-native applications, tools are evolving faster than most teams can keep up with. But the rapid evolution of AI comes with its own cost: mental fatigue. Even among AI developers, most don’t consider themselves experts in generative AI. Between shifting tools, growing security risks, and a flood of hype, it’s no surprise that developers and security teams feel overwhelmed.

Navigating Enterprise AI Implementation: Risks, Rewards, and Where to Start

At Snyk, we believe that AI innovation starts with trust, which must be earned through clear governance, sound security practices, and proven value delivery. As we scale our AI initiatives across the business, we’re continually refining how to implement AI in a way that is not just fast and functional, but also secure and responsible.

Cursor IDE Malware Extension Compromise in $500k Crypto Heist

Cursor IDE, as many are aware, is a fork of Microsoft's popular open source VS Code project. Like VS Code, Cursor supports IDE extensions, which prompts many developers to migrate over with their favorite extensions and long-lived workflows, shortcuts, themes, and other configurations. Back in May 2021, Snyk's Security Labs published research that uncovered VS Code extensions vulnerable to insecure code patterns.
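That migration habit means developers carry a large, rarely re-audited set of third-party extensions into a new editor. As a minimal defensive sketch (not a Snyk or Cursor tool -- the allowlist and paths are assumptions), one can flag locally installed extensions whose publisher is unrecognized, relying on the `publisher.name-version` folder naming that VS Code-style extension directories use:

```python
"""Sketch: flag installed extensions from unrecognized publishers.

The ``publisher.name-version`` folder convention follows VS Code/Cursor
extension directories, but TRUSTED_PUBLISHERS and the audit itself are
illustrative assumptions, not an official tool.
"""
from pathlib import Path

# Example allowlist -- in practice this would be curated per team.
TRUSTED_PUBLISHERS = {"ms-python", "ms-vscode", "snyk-security"}


def audit_extensions(ext_dir, trusted=TRUSTED_PUBLISHERS):
    """Return extension folder names whose publisher prefix is untrusted."""
    suspicious = []
    for entry in sorted(Path(ext_dir).iterdir()):
        if not entry.is_dir():
            continue
        publisher = entry.name.split(".", 1)[0]
        if publisher not in trusted:
            suspicious.append(entry.name)
    return suspicious
```

A check like this catches typosquatted or impostor publishers, which was the entry point in the incident above; it is no substitute for marketplace-side vetting, but it makes a silent extension swap visible.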

From Hype to Trust: Building the Foundations of Secure AI Development

Generative AI and Agentic AI are changing everything from who writes software to how we define secure architecture. At Snyk’s recent Lighthouse event in NYC, leaders from cloud, security, and development teams came together to answer one essential question: how do we move fast with AI without breaking trust? The answer? Start with visibility, bake in security by design, and never lose sight of the humans behind the code.