I Read Cursor's Security Agent Prompts, So You Don't Have To

Cursor's security team built four autonomous agents that review 3,000+ PRs per week, catch 200+ vulnerabilities, and open fix PRs automatically. The engineering is impressive, and the prompts are shockingly simple. But there's a meaningful gap between "LLM agents reviewing PRs" and "enterprise security program," and that gap is exactly where things get interesting.

Tackling Third-Party Risks: The Persistent Software Supply Chain Challenge

Modern software development relies on open-source components to accelerate innovation. This efficiency, however, introduces significant risk: your application’s security is now tied to a vast and complex supply chain of code you did not write. The persistent software supply chain challenge is that this external code is both a primary source of critical vulnerabilities and a hard problem to manage.

AI Workload Security for Financial Services: What CISOs Need to Know

When your SOC alerts on “suspicious AI activity” in a production trading system, your response team faces a question that didn’t exist two years ago: can you explain to regulators exactly which function processed the malicious prompt, which internal tool it called, and how customer data ended up leaving your environment?

Cato AI Security: Is Your Security Stack Built for How AI Works?

AI adoption is accelerating across enterprises — often faster than security teams can respond. Employees are using AI tools and copilots across SaaS apps and workflows, creating new exposure around sensitive data, shadow AI, and attack surfaces that traditional tools weren't built to see. This video breaks down the four AI security challenges every enterprise is facing, where existing controls fall short, and how Cato AI Security gives you visibility, guardrails, and enforcement across the AI your employees use, the applications you build, and the agents acting on your behalf.

The State of Secrets Sprawl 2026: AI-Service Leaks Surge 81% and 29M Secrets Hit Public GitHub

GitGuardian’s 5th State of Secrets Sprawl report is here. In this blog, we unpack the key findings behind the 2026 edition, from AI-driven leak growth to the remediation gaps security teams can’t ignore.

Why Generic Container Alerts Miss AI-Specific Threats

It’s 2:47 AM and your SOC dashboard lights up. Six alerts fire across three hours from a single Kubernetes cluster: an outbound HTTP fetch to an unfamiliar domain, a tool invocation inside a customer support agent, an API call to an internal service the agent has never contacted, a service account token read, a file write to a model artifact directory, and an outbound data transfer that looks like normal API usage.
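The article's point is that each of these alerts looks routine on its own; the threat only emerges when they cluster on a single workload. A minimal sketch of that correlation idea, using hypothetical alert records and an illustrative threshold (not any vendor's actual detection logic):

```python
from collections import defaultdict

# Hypothetical alert records; real SOC events carry many more fields.
ALERTS = [
    {"workload": "support-agent", "kind": "outbound_http_unknown_domain"},
    {"workload": "support-agent", "kind": "tool_invocation"},
    {"workload": "support-agent", "kind": "new_internal_api_call"},
    {"workload": "support-agent", "kind": "service_account_token_read"},
    {"workload": "support-agent", "kind": "model_artifact_write"},
    {"workload": "support-agent", "kind": "outbound_data_transfer"},
    {"workload": "billing-job",   "kind": "outbound_data_transfer"},
]

# Signal types from the scenario above; each is benign in isolation.
AI_RELEVANT_KINDS = {
    "outbound_http_unknown_domain", "tool_invocation",
    "new_internal_api_call", "service_account_token_read",
    "model_artifact_write", "outbound_data_transfer",
}

def correlate(alerts, threshold=4):
    """Flag workloads where several distinct AI-relevant signals co-occur."""
    by_workload = defaultdict(set)
    for alert in alerts:
        if alert["kind"] in AI_RELEVANT_KINDS:
            by_workload[alert["workload"]].add(alert["kind"])
    return {w for w, kinds in by_workload.items() if len(kinds) >= threshold}

print(correlate(ALERTS))  # only the support agent crosses the threshold
```

A per-alert rule scores each of these events independently and dismisses them; grouping distinct signal types per workload is what surfaces the sequence as one incident.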

AI Workload Security Tools: Runtime vs. Declarative Compared

You’re forty-five minutes into a vendor demo for AI workload security. The dashboard looks polished—posture scores, misconfiguration findings, vulnerability counts, all tagged with an “AI workload” label that wasn’t there last quarter. You ask the obvious question: “Show me how this detects a prompt injection attack on our production agent.” Long pause. The SE pulls up a generic process anomaly rule.

Meet the Industry's First GPU-Powered SASE Platform with Native AI Security

AI has moved from experimentation to a strategic enterprise imperative. The question is no longer whether organizations will adopt AI, but whether their security architecture can govern it at the speed and scale at which it is being embedded into the business. This is not a future concern; it is today’s operational mandate. And securing AI is not limited to software applications and agents.

Cloud-Native Security for AI Workloads: Why It Matters and What's Changed

You’ve been securing Kubernetes workloads for years. Your CSPM is running, your CNAPP is configured, your team knows how to triage container alerts. Then an AI agent lands in your cluster — maybe from the data science team, maybe from a vendor integration, maybe from a tool you didn’t even know was running. Within a week, it’s making API calls nobody planned, accessing data stores that aren’t in the architecture diagram, and executing code it generated itself.