What an Operator-First AI SOC Looks Like, and Why It Matters Now

There is a version of the AI SOC that most security teams are familiar with. It summarizes alerts. It surfaces recommendations. It tells an analyst what to look at next. It is useful in the way a well-organized report is useful: it saves reading time, but the work still happens at a human pace. That version of AI is not what this blog is about. For MSSPs and SecOps teams operating at scale, advisory AI is not a destination. In fact, it simply moves the bottleneck somewhere else.

Understanding shadow AI in your endpoint environment

Generative AI, and large language models in particular, reached mass consumer adoption beginning in late 2022 and early 2023, with ChatGPT reaching 100 million users faster than any consumer application in history. Since then, AI has advanced at a breakneck pace and now seems to be incorporated into every tool, app, and website, regardless of how useful it might actually be.

Ep 38: Wheels up, systems down: cybersecurity at cruising altitude

In this episode of Masters of Data, we buckle up and explore the staggering technological complexity behind the airline industry, from managing IoT devices across global fleets to navigating the data chaos of mergers and acquisitions. We dig into the delicate balance airlines strike between aging legacy systems and risky upgrades, and why getting that wrong isn't just costly but potentially catastrophic. We also look at how forward-thinking airlines are turning operational logs into real business wins, all while safeguarding the mountains of sensitive passenger data they collect every day.

Announcing Justification Coach: AI-Powered Guidance for Better Access Requests and Stronger Audits

Today, we’re introducing Justification Coach, a new AI-powered capability that helps users write better access request justifications in real time, so admins get the context they need for audits and investigations without having to chase people down after the fact.

7 Generative AI Security Risks and How to Defend Your Organization

Generative AI creates new attack surfaces that traditional security tools were not designed to address. The biggest generative AI security risks include prompt injection, data leakage, shadow AI, compliance exposure, model poisoning, insecure RAG pipelines, and broken access control. Each one requires a specific defense, not a generic firewall or DLP rule.
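To make the "specific defense" point concrete for one of these risks, here is a minimal, hypothetical sketch of a prompt-injection guard. The pattern list and function names are illustrative assumptions, not part of any product named above; a real defense would pair screening like this with structural controls that keep untrusted text in a data role rather than an instruction role.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
# Pattern matching alone is easy to bypass; treat it as one layer, not the defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the )?system prompt",
]

def flag_prompt_injection(user_text: str) -> bool:
    """Return True if the text matches a known injection pattern."""
    lowered = user_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def wrap_as_data(user_text: str) -> str:
    """Delimit untrusted input so downstream prompts can treat it as data, not instructions."""
    return f"<untrusted_input>\n{user_text}\n</untrusted_input>"
```

The delimiting step matters as much as the screening step: it gives the model-facing prompt an unambiguous boundary between trusted instructions and untrusted content.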

Axios CVE-2026-40175: a critical bug that's... not exploitable

It’s been a chaotic few weeks for Axios. First, a major supply chain attack put the package under scrutiny. Then, just days later, headlines started appearing about a “critical 10/10 vulnerability” that could lead to full cloud compromise. If you’ve read the coverage, you’ve probably seen claims along those lines. That sounds bad. But when you look closely at how this vulnerability actually behaves in real environments, the story changes.

New KnowBe4 Agent Risk Manager Addresses Pervasive AI Agent Risk

By Roger A. Grimes and Matthew Duren

AI agents can deliver incredible productivity gains, but their operational complexity makes effective threat modeling harder than ever, including for developers, administrators, and especially end users. At the same time, both developers and non-developers are increasingly vibe-coding, or using AI to generate functional software from natural language prompts.