Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Kimi Found 40+ Security Issues in Our Code. Open Source AI Is Here | Michelle Chen

In this episode of This Week in NET, host João Tomé is joined by Michelle Chen from Cloudflare’s AI product team to discuss the rise of open models, the launch of Kimi 2.5 on Workers AI, and why enterprises are rethinking the cost of proprietary AI.

7 Important Questions Facing CISOs on Bridging the Gap Between AI Threats, Supply Chain, and Cyber Resilience

A CISO’s job never ends, and, according to a recent LevelBlue survey, the issues they are dealing with on a daily basis are piling up, causing some disconnect in priorities and a misunderstanding of how to accomplish specific cybersecurity goals. To help answer some of the more pressing questions CISOs face and to gain a different perspective on the survey’s results, we sat down with LevelBlue’s Chief Security & Trust Officer, Kory Daniels.

AI Takes Over RSAC Conference (Now What?) with Dave Bittner

In this RSAC 2026 Conference recap, Dave Bittner, Host of the CyberWire Daily podcast, joins Data Security Decoded host Caleb Tolin from the guest seat to discuss the biggest theme dominating the conference: artificial intelligence, and, more specifically, agentic AI. From wall-to-wall AI messaging across San Francisco to in-depth conversations with security leaders and analysts, one thing became clear: the industry has moved past debating whether AI will take hold. It already has. Now, the focus has shifted to making it safe.

Securing AI Agents on GKE: Where gVisor, Workload Identity, and VPC Service Controls Stop Working

You enable GKE Sandbox on a dedicated node pool, bind Workload Identity Federation to your AI agent pods, wrap your data services in a VPC Service Controls perimeter, and deploy your agents with the Agent Sandbox CRD using warm pools for sub-second startup. Your security posture dashboard shows every control configured and active. And then an attacker uses prompt injection to trick an agent into exfiltrating sensitive data through API calls that every single one of those layers explicitly allows.

eBPF for AI Agent Enforcement: What Kernel-Level Security Catches (and What It Misses)

Your team deployed Tetragon six months ago. TracingPolicies are humming along—you’re catching unauthorized binary executions, blocking suspicious network connections, and generating seccomp profiles from observed behavior. Runtime security for your traditional workloads is solid. Then engineering ships their first autonomous AI agent into production. A LangChain agent connected to internal databases, external APIs through MCP tool runtimes, and a vector database for RAG.