
Agentic AI Security: Tune Detections with Threat Intel

Most AI detection engineering puts a human in the loop at every step. David Burkett envisions an efficient and effective pipeline architecture that does not. David is a security researcher at Corelight Labs and a longtime LimaCharlie community member. He appeared on a recent episode of Defender Fridays to walk through his vision of a fully agentic detection engineering pipeline. His system uses LimaCharlie as its operational backbone.

Shadow AI: The Silent Breach Already Inside Your Network

You locked down USB ports. You deployed web filtering. You trained your users on phishing. Then someone on the finance team started pasting the Q3 forecast into ChatGPT to clean up a slide deck. That’s Shadow AI. It doesn’t need to crack your perimeter. It walks through the front door wearing your employee’s credentials. And unlike the threats you’ve spent years hardening against, you probably can’t see it on any dashboard you own right now.

How to Design Security for Agentic AI

The AI said: “Apologies. I panicked.” In mid-July 2025, Jason Lemkin, the founder behind SaaStr, watched an AI coding agent delete his production database. He had instructed it, in capital letters, not to make changes during a code freeze. The agent ignored the instruction, ran destructive commands against the live database, wiped out records for more than a thousand executives and companies, and then tried to cover its tracks. When Lemkin asked what happened, it fabricated test results.

Human-Centric Security No Longer Scales: The SOC Operating Model Has to Change

Many security functions today still rely heavily on humans for detection, triage, and response, often by design. But as environments grow more complex and alert volumes explode, a hard question emerges: can this approach scale on its own? Adopting AI in security operations isn’t just about adding tools. It means rethinking the SOC operating model itself: roles, workflows, and team structures. Here’s why, and how.

AI Agent Sandboxing for Healthcare: Why Standard Kubernetes Primitives Can't Express HIPAA Boundaries

Observe-to-enforce builds behavioral baselines from observed agent traffic — what tools the agent calls, which networks it reaches, which syscalls it executes — and converts them into per-agent enforcement policies. Baselines persist at the Deployment level because pods churn and the envelope has to outlive any single restart. The methodology runs as a four-stage progression: discovery, observation, selective enforcement, continuous least privilege.
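The core of that progression — recording an agent's observed behavior during the observation stage, then freezing it into a per-agent allowlist for enforcement — can be sketched in a few lines. This is a minimal illustration, not the article's implementation; the `Baseline` and `Policy` names, the three envelope dimensions, and the method signatures are all hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Baseline:
    """Behavioral envelope observed for one agent Deployment.

    Accumulates during the observation stage; persists at the
    Deployment level so it outlives any single pod restart.
    """
    tools: set = field(default_factory=set)
    networks: set = field(default_factory=set)
    syscalls: set = field(default_factory=set)

    def observe(self, tool: str, network: str, syscall: str) -> None:
        # Record one observed agent action along each dimension.
        self.tools.add(tool)
        self.networks.add(network)
        self.syscalls.add(syscall)

    def to_policy(self) -> "Policy":
        # Freeze the observed envelope into an immutable allowlist
        # for the selective-enforcement stage.
        return Policy(frozenset(self.tools),
                      frozenset(self.networks),
                      frozenset(self.syscalls))


@dataclass(frozen=True)
class Policy:
    """Per-agent enforcement policy derived from a baseline."""
    tools: frozenset
    networks: frozenset
    syscalls: frozenset

    def allows(self, tool: str, network: str, syscall: str) -> bool:
        # Deny anything outside the observed envelope.
        return (tool in self.tools
                and network in self.networks
                and syscall in self.syscalls)
```

In use, an agent that only ever called `read_record` against an internal subnet during observation would be denied a later `delete_record` call, which is the "continuous least privilege" endpoint of the four stages. Real deployments would express the network and syscall dimensions through Kubernetes NetworkPolicy and seccomp profiles rather than an in-process check.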

Mythos, Attackers, and The Part People Still Want To Skip

Anthropic built a powerful AI model and then kept it on a short leash. The important part is not that a model found bugs, which has been coming for a while. What’s worth acknowledging is that Anthropic looked at what Mythos could do and decided broad release was a bad idea. Attackers do not need a perfect autonomous system. They need leverage.

What Real AI Security Incidents Reveal About Today's Risks

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open-source and custom code, and open-source license risks.

Ep. 56 - 10,000 Bugs, 12 That Matter: Using AI to Cut Through Exposure Noise with CTEM

Are you still stuck on the vulnerability hamster wheel? In this episode of the Cyber Resilience Brief, host Tova Dvorin is joined by SafeBreach VP of Product Koby Bar and offensive security expert Adrian Culley to unpack a major shift in how enterprises approach proactive security — and to announce the launch of SafeBreach Helm, the AI validation layer built for Continuous Threat Exposure Management (CTEM).