
The Agentic MDR Pipeline: Detection Engineering at Scale

A CVE surfaces in the morning. By the time you're talking to a customer, you can tell them: we saw it, we checked your environment, you were not affected, and we deployed a rule that will catch it if it ever shows up. For MSSPs and MDR providers, detection engineering is among the most valuable services you can offer. It is also among the most expensive to deliver consistently and at scale.

Agents Need Boundaries. The Market Is Starting to Agree.

Gartner published the inaugural Hype Cycle for Agentic AI last week (and yes, we're included in two subcategories: Agentic AI Security and Guardian Agent). A few things are worth noting. It's inaugural: Gartner publishes over 130 Hype Cycles a year, and standing up a new one signals that a space has earned its own map. And it dropped in April, months ahead of the June-to-August window when these things usually appear.

Understanding shadow AI in your endpoint environment

Generative AI, and large language models in particular, reached mass consumer adoption beginning in late 2022 and early 2023, with ChatGPT reaching 100 million users faster than any consumer application in history. Since then, AI has advanced at a breakneck pace and now seems to be incorporated in every tool, app, and website, regardless of how useful it might actually be.

Best Enterprise DLP Tools for AI Data Risk (2026 Comparison)

Employees move sensitive data into AI tools every day. Someone pastes customer records into ChatGPT to draft an email. A developer feeds proprietary source code into a coding assistant to fix a bug. A project manager drops a confidential contract into Gemini to summarize it for a meeting. According to research from Cyberhaven Labs, 39.7% of the data employees share with AI tools is sensitive, and enterprise adoption of endpoint-based AI agents grew 276% in the past year alone.
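None of the tools compared in the article work exactly this way, but the core idea behind endpoint DLP for AI can be sketched as a pre-send content check before data leaves for an AI tool. This is a minimal illustration; the pattern names, regexes, and the `allow_paste` policy are simplified assumptions, and real DLP engines combine hundreds of validated patterns, exact-data matching, and ML classifiers:

```python
import re

# Simplified illustrative detectors (assumptions, not a vendor ruleset).
SENSITIVE_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "api_key": r"\b(sk|pk)_(live|test)_[0-9a-zA-Z]{16,}\b",
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of detector categories that matched the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if re.search(pat, text)]

def allow_paste(text: str) -> bool:
    """Simulate an endpoint agent deciding whether a paste into an AI
    tool should be allowed (no sensitive matches) or blocked."""
    return not find_sensitive(text)
```

A blocked paste would typically trigger a user prompt or an audit event rather than a silent drop, so employees learn the policy instead of working around it.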

7 Generative AI Security Risks and How to Defend Your Organization

Generative AI creates new attack surfaces that traditional security tools were not designed to address. The biggest generative AI security risks include prompt injection, data leakage, shadow AI, compliance exposure, model poisoning, insecure RAG pipelines, and broken access control. Each one requires a specific defense, not a generic firewall or DLP rule.
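To make concrete what a risk-specific defense looks like (as opposed to a generic firewall or DLP rule), here is a minimal, hypothetical sketch of a heuristic screen for one of the listed risks, prompt injection. The phrase patterns and threshold are illustrative assumptions only; production systems layer semantic classifiers, provenance checks, and output filtering on top of anything this simple:

```python
import re

# Illustrative injection phrasings (assumptions, not a complete ruleset).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous |prior )?(instructions|rules)",
    r"disregard (the )?(system|above) prompt",
    r"reveal (your|the) (system prompt|instructions)",
]

def injection_score(text: str) -> int:
    """Count how many known injection phrasings appear in the input."""
    lowered = text.lower()
    return sum(1 for p in INJECTION_PATTERNS if re.search(p, lowered))

def screen_prompt(text: str, threshold: int = 1) -> bool:
    """Return True if the prompt should be blocked for review."""
    return injection_score(text) >= threshold
```

The same shape applies to the other risks in the list: each gets its own targeted control (retrieval allowlists for RAG, tenant-scoped permissions for access control) rather than one perimeter rule.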

Understanding Cloudflare's network architecture

For decades, enterprise IT relied on a "hub and spoke" security model. But between the explosion of cloud infrastructure, SaaS apps, and a remote workforce, that old perimeter hasn't just cracked; it's shattered. In an attempt to keep up with this dissolving perimeter, many different solutions from many vendors entered the market and created a "spaghetti mess" of point solutions that drive up costs and tank user experience. Cloudflare is an answer to this problem, delivering everything you need to secure your apps, networks, users, data, and devices.

"It's Quite a Shock": The Quantum Deadline Is Real

In this World Quantum Day special edition of This Week in NET, host João Tomé is joined by Bas Westerbaan (Principal Research Engineer) and Sharon Goldberg (Senior Director, Product) to explain why the timeline for post-quantum cryptography may be arriving sooner than expected. Recent research suggests the number of qubits required to break today’s encryption could fall dramatically, accelerating the urgency for companies and the Internet ecosystem to migrate to post-quantum security. Google has set a 2029 migration target, and Cloudflare is working toward a similar timeline.

A Look At GitGuardian's ML-Powered Contextual Enrichment And Incident Scoring

In this quick introductory video, Mathieu Bellon, Senior Product Manager at GitGuardian, sits down with Dwayne McDaniel, Developer Advocate, to cover some of the advancements GitGuardian has made by integrating machine learning directly into the secrets security platform. Mathieu describes how engineers and responders can save serious time by automating contextual analysis, giving the humans in the loop the best information to take an informed action when it comes to secrets leaks. They also discuss the security implications and where teams can look if they want to opt out or bring their own agents.