Composable AI Agents and the SOC That Runs Itself

Picture a SOC that investigates its own alerts, hunts threats across customer tenants, isolates compromised endpoints, and writes its own detection rules. Envision the same SOC attacking itself every morning to find the gaps it missed, all before your analysts arrive for the day. This is not a roadmap item, but an operational reality on LimaCharlie. It’s what agentic AI security looks like on a platform built to support it.

Secure private networking for everyone: users, nodes, agents, Workers - introducing Cloudflare Mesh

AI agents have changed how teams think about private network access. Your coding agent needs to query a staging database. Your production agent needs to call an internal API. Your personal AI assistant needs to reach a service running on your home network. The clients are no longer just humans or services. They're agents, running autonomously, making requests you didn't explicitly approve, against infrastructure you need to keep secure.

What AI Operator-First SOC Looks Like, and Why It Matters Now

There is a version of AI SOC that most security teams are familiar with. It summarizes alerts. It surfaces recommendations. It tells an analyst what to look at next. It is useful in the way a well-organized report is useful: it saves time reading, but the work still happens at a human pace. That version of AI is not what this blog is about. For MSSPs and SecOps teams operating at scale, advisory AI is not a destination. In fact, it presents a bottleneck in a different form.

Understanding shadow AI in your endpoint environment

Generative AI–and large language models in particular–reached mass consumer adoption beginning in late 2022 and early 2023, with ChatGPT reaching 100 million users faster than any consumer application in history. Since then, AI has advanced at a breakneck pace and now seems to be incorporated in every tool, app, and website–regardless of how useful it might actually be.

Ep 38: Wheels up, systems down: cybersecurity at cruising altitude

In this episode of Masters of Data, we buckle up and explore the staggering technological complexity behind the airline industry, from managing IoT devices across global fleets to navigating the data chaos of mergers and acquisitions. We dig into the delicate balance airlines strike between aging legacy systems and risky upgrades, and why getting that wrong isn't just costly but potentially catastrophic. We also look at how forward-thinking airlines are turning operational logs into real business wins, all while safeguarding the mountains of sensitive passenger data they collect every day.

Announcing Justification Coach: AI-Powered Guidance for Better Access Requests and Stronger Audits

Today, we’re introducing Justification Coach, a new AI-powered capability that helps users write better access request justifications in real time, so admins get the context they need for audits and investigations without having to chase people down after the fact.

7 Generative AI Security Risks and How to Defend Your Organization

Generative AI creates new attack surfaces that traditional security tools were not designed to address. The biggest generative AI security risks include prompt injection, data leakage, shadow AI, compliance exposure, model poisoning, insecure RAG pipelines, and broken access control. Each one requires a specific defense, not a generic firewall or DLP rule.