
Why AI Features Don't Equal Better Vulnerability Management

AI is becoming table stakes in vulnerability and exposure management. In this candid webinar conversation, Chris Ray, Field CTO at GigaOm, and Will Gorman, CTO and leader of AI initiatives at Nucleus Security, challenge the assumption that more AI automatically leads to better outcomes.

Free AI in Cybersecurity Course (Certificate + CPE Credits)

You can ask AI to create a song that sounds like a famous band sang it. But what happens if you use or share it? Are there legal or other implications? AI tools must be visible and governed. Shadow AI is neither. Take Cato's AI in Cybersecurity course to understand the risks of unsanctioned AI tools. It's free, comes with a downloadable certificate, and earns CPE credits. Register now.

Your AI Just Became the Insider Threat | CrowdStrike Global Threat Report 2026

Hackers can reach your critical systems in just 27 seconds. In 2025, AI-powered cyberattacks surged 89% as adversaries weaponized the same AI tools organizations use every day. From eCrime groups to China-nexus actors, North Korean operatives, and Russian intelligence, AI is accelerating and reshaping global threat activity. In this video, you'll learn how adversaries are not just using AI: they are weaponizing your AI against you.

What a Rogue Vacuum Army Teaches Us About Securing AI

If you’re like me, you’ve been enthralled with the recent story, expertly written by Sean Hollister at The Verge, about how Sammy Azdoufal built a remote control for his DJI Romo vacuum with a PlayStation controller, and ended up in control of 7,000+ robovacs all over the world. On the surface, it sounds like vibe coding gone slightly sideways. I mean, really, what could a vacuum possibly do? Turns out… a lot.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.
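The prevalence heuristic described above can be sketched in a few lines. This is a minimal illustration, not anything from the article: the function name, the threshold, and the package names are illustrative assumptions, and the article's whole point is that LLM-suggested dependencies break exactly this kind of check.

```python
# Sketch of the decade-old "Prevalence = Trust" heuristic: gate a
# dependency purely on its weekly download count. The cutoff below is
# an assumed, illustrative value, not a real policy.

WEEKLY_DOWNLOAD_FLOOR = 1_000_000  # assumed threshold for "widely watched"

def trust_tier(package: str, weekly_downloads: int) -> str:
    """Classify a package by prevalence alone (the heuristic the article critiques)."""
    if weekly_downloads >= WEEKLY_DOWNLOAD_FLOOR:
        return "assumed-safe"    # the "thousands of eyes" assumption
    return "review-required"     # obscure or dormant: approach with caution

print(trust_tier("lodash", 40_000_000))    # a prevalent package passes the gate
print(trust_tier("dormant-fork", 120))     # a resurrected dormant package is flagged
```

The weakness the article highlights falls straight out of the sketch: once an AI assistant starts suggesting a dormant package at scale, developers adopt it long before its download count (or its maintenance status) would ever clear a prevalence gate.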

AI Moves Fast, Privacy Has to Move Faster with Ojas Rege

In this episode, Caleb Tolin welcomes Ojas Rege of OneTrust for a practical, wide-ranging conversation on how data privacy and governance must evolve alongside enterprise AI adoption. Ojas explains why AI fundamentally changes the privacy conversation: the same systems that enable organizations to move faster can also cause harm faster when guardrails aren’t in place. From agentic AI systems that dynamically repurpose data to general-purpose models that blur traditional notions of “intended use,” the challenge isn’t just compliance—it’s trust.