Understanding CRA Compliance: Overcoming Challenges with an Integrated Security Testing Approach

Shipping software into the EU now comes with serious strings attached. The Cyber Resilience Act (CRA), in effect since December 2024, sets strict new rules for any company offering digital products or services in the region, whether you’re a local startup or a global platform. The regulation aims to improve cybersecurity across connected devices and cloud-based software.

Why AI Trust Will Shape Your Next Decade of Software Development

AI is often compared to electricity, but without trust, it’s just a live wire. As organizations adopt AI to move faster, reduce manual effort, and push the boundaries of what’s possible, one truth is becoming clear: trust in AI isn’t optional. It’s foundational. And for software development teams, AI Trust is now the north star that guides safe, scalable innovation.

Cursor's One-click Install MCP in Action

In this video, I’m checking out the brand new Cursor 1.0 release and testing one of its most exciting new features — the one-click MCP install. Setting up MCP servers has never been this easy! Join me as I walk through the process, share my first impressions, and see how smooth (or not) the setup really is. If you’ve been curious about Cursor or want to simplify your MCP workflows, this one’s for you.

Building AI Trust with Snyk Code and Snyk Agent Fix

Many businesses are using AI to innovate and boost productivity. But to truly benefit from AI, you need to trust it. That's where the Snyk AI Trust Platform comes in. As we announced at the 2025 Snyk Launch, the Snyk AI Trust Platform is designed to unleash innovation, reduce business risk, and accelerate software delivery in the age of AI.

Scan your AI-generated code from Cursor using Model Context Protocol (MCP)

We’re happy to announce that Cursor has validated Snyk’s CLI MCP server and added Snyk to its curated set of MCP tools from official providers. At Snyk, we recognized early on that although AI assistants accelerate development, they can inadvertently introduce vulnerable patterns, pull in outdated libraries, or even reproduce code with known security flaws. To maintain the rapid iteration cycles that AI enables, developers need security to be as agile as AI itself.
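As a rough illustration, registering a stdio-based MCP server in Cursor typically comes down to a small JSON entry in the project's `.cursor/mcp.json` file. The sketch below is a generic example of that shape, not Snyk's official configuration — the exact command and arguments for launching the Snyk CLI MCP server are an assumption here, so check Snyk's documentation for the validated setup.

```json
{
  "mcpServers": {
    "snyk": {
      "command": "snyk",
      "args": ["mcp"]
    }
  }
}
```

Once a server is registered this way, Cursor starts it over stdio and exposes its tools to the AI assistant, so security scans can run inside the same loop where code is being generated.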