
The Metric AI Security is Missing

As autonomous and semi-autonomous AI systems take on more responsibility within the enterprise, they shift from being “features” of software to becoming true internal actors. They make decisions, take actions, call tools, orchestrate workflows, and influence other AI agents. With this evolution, we must confront an uncomfortable truth: the metrics and response patterns we built for deterministic software no longer work.

Beyond the Build: Dynamic Remediation for Malicious Package Versions

In the fast-moving world of software supply chains, the discovery of a malicious version of a popular library often triggers a state of emergency. Traditional security tools take a reactive approach: they scan, they find a match, and they fail the build. But what happens if the malicious version was merged before it was flagged? What if it’s already running in your production containers? Or what if it’s being pulled dynamically across hundreds of different pipelines?
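One way to catch a flagged version that has already slipped past the build stage is to check the running environment itself, not just the pipeline. Below is a minimal Python sketch of that idea; the package name and blocklisted version are invented example data, not real advisories.

```python
# Sketch: check an already-running environment against known-malicious
# package versions, instead of relying only on build-time scans.
from importlib import metadata

MALICIOUS_VERSIONS = {
    # package name -> versions flagged as malicious (illustrative data)
    "example-lib": {"4.2.1"},
}

def find_compromised(installed=None):
    """Return (name, version) pairs already present in this environment."""
    if installed is None:
        # Fall back to whatever is actually installed alongside this process.
        installed = {
            dist.metadata["Name"].lower(): dist.version
            for dist in metadata.distributions()
        }
    hits = []
    for name, bad_versions in MALICIOUS_VERSIONS.items():
        version = installed.get(name)
        if version in bad_versions:
            hits.append((name, version))
    return hits

# Simulated environment: the malicious version is already deployed.
print(find_compromised({"example-lib": "4.2.1", "requests": "2.31.0"}))
```

The same check can be scheduled inside production containers or wired into an admission hook, so a newly flagged version is caught wherever it already lives rather than only at the next build.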

Emerging Threat: (CVE-2026-3854) GitHub Enterprise Server RCE via Git Push Injection

CVE-2026-3854 is a command injection vulnerability in GitHub Enterprise Server's git push pipeline. User-supplied push option values were not properly sanitized before being embedded in an internal service header, and the header format used a delimiter that could also appear in user input. A crafted push option containing that delimiter let an attacker inject additional metadata fields, which downstream services then treated as trusted internal values.
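The delimiter-injection pattern is easy to see in miniature. The sketch below is not GitHub's actual header format; the field names, delimiter, and "last key wins" parsing rule are all invented to illustrate why unsanitized user input inside a delimited internal header lets an attacker smuggle extra "trusted" fields.

```python
# Illustrative sketch of delimiter injection into an internal header.
DELIM = "\x00"  # internal field separator, also expressible in user input

def build_header_unsafe(push_option: str) -> str:
    # Vulnerable: user input is embedded without rejecting the delimiter.
    return DELIM.join(
        ["repo=acme/app", "role=user", f"push-option={push_option}"]
    )

def parse_header(header: str) -> dict:
    # Downstream service: splits on the delimiter, trusts every field,
    # and lets a later occurrence of a key override an earlier one.
    fields = {}
    for part in header.split(DELIM):
        key, _, value = part.partition("=")
        fields[key] = value
    return fields

# A crafted push option injects an extra field that overrides "role".
crafted = "ci-skip" + DELIM + "role=admin"
print(parse_header(build_header_unsafe(crafted))["role"])  # -> admin

def build_header_safe(push_option: str) -> str:
    # Fix: reject (or encode) the reserved delimiter in user input.
    if DELIM in push_option:
        raise ValueError("push option contains reserved delimiter")
    return build_header_unsafe(push_option)
```

The fix pattern is the same one the advisory implies: treat the delimiter as reserved and refuse or encode any user-controlled value that contains it before the header is assembled.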

How Zero Standing Privileges Defuses the Shadow AI Agent Problem

As more organizations move past experimentation and start planning real AI agent deployments, the same set of concerns keeps surfacing in our conversations with security teams. Whether the worry is a shadow agent that shows up uninvited or a sanctioned agent going rogue, the questions tend to cluster around control. They are the right questions to be asking, and they share a common answer that's more concrete than most people expect: AI agents are only as dangerous as the privileges they can reach.
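Zero standing privileges makes that answer operational: the agent holds no credentials at rest, and every action runs against a short-lived, narrowly scoped grant minted on demand. The sketch below assumes a hypothetical in-process broker; it does not reflect any particular vendor's API.

```python
# Minimal sketch of zero standing privileges for an AI agent:
# no long-lived credentials, only scoped and time-bounded grants.
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    token: str
    scopes: frozenset
    expires_at: float

class PrivilegeBroker:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds

    def request(self, agent_id: str, scopes: set, justification: str) -> Grant:
        # A real broker would authenticate the agent, evaluate policy,
        # and log the justification before minting anything.
        return Grant(
            token=secrets.token_hex(16),
            scopes=frozenset(scopes),
            expires_at=time.monotonic() + self.ttl,
        )

def authorize(grant: Grant, scope: str) -> bool:
    # Every action is checked against a grant that is both scoped
    # and time-bounded; an expired or missing scope means denial.
    return scope in grant.scopes and time.monotonic() < grant.expires_at

broker = PrivilegeBroker(ttl_seconds=300)
grant = broker.request("build-agent-7", {"repo:read"}, "fetch dependency manifest")
print(authorize(grant, "repo:read"))   # True: scoped and unexpired
print(authorize(grant, "db:delete"))   # False: never granted
```

A shadow agent that appears uninvited simply has nothing to reach: without a broker-issued grant, every privilege check fails by default.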

Nine Seconds to Delete a Database: What the PocketOS Incident Teaches Us About AI Agent Privilege Management

There’s never a good time to lose a production database, but losing one to your own AI coding agent on a Friday afternoon has to rank near the bottom of the list. That’s the backdrop to the PocketOS incident, and it’s the clearest case yet for why AI agent security and intent-based access control belong at the top of every cloud security roadmap this year.

Falcon Exposure Management AI Inventory: Demo Drill Down

AI adoption is accelerating across the enterprise, but governance isn’t keeping pace—leaving security teams without a clear view of what AI is running, how it’s being used, and where it introduces exposure. In this Demo Drill Down, we showcase AI Inventory in Falcon Exposure Management, delivering a centralized view of AI across hosts—from local LLMs and MCP servers to IDE extensions, packages, and applications.

CTEM Explained in 60 Seconds (And Why Your Security Strategy Has Gaps)

Continuous Threat Exposure Management (CTEM) isn't just another framework. It's a philosophy for finally connecting the parts of your security program that aren't talking to each other. SafeBreach Helm makes it actionable for any organization, no matter where you're starting from.

Ep. 56 - 10,000 Bugs, 12 That Matter: Using AI to Cut Through Exposure Noise with CTEM

Are you still stuck on the vulnerability hamster wheel? In this episode of the Cyber Resilience Brief, host Tova Dvorin is joined by SafeBreach VP of Product Koby Bar and offensive security expert Adrian Culley to unpack a major shift in how enterprises approach proactive security — and to announce the launch of SafeBreach Helm, the AI validation layer built for Continuous Threat Exposure Management (CTEM).