
Demo: Access controls for GenAI and agentic AI

See how Cloudflare One simplifies access controls across both generative AI and agentic AI communication, all from one unified secure access service edge (SASE) dashboard. This demo highlights:
- Securing human-to-AI connections by blocking or redirecting traffic from unapproved tools and isolating AI apps to protect data (0:09)
- Streamlining access to MCP servers for AI-to-resource connections via Cloudflare’s MCP server portals (1:10)

Demo: Discover workforce use of shadow AI

See how Cloudflare One helps restore visibility and control over unsanctioned use of AI tools. This demo highlights secure access service edge (SASE) capabilities including:
- Shadow AI reporting: Analyze how AI apps are used across your environment (0:10)
- Application confidence scores: Evaluate the risks posed by specific AI apps (1:10)
- Access controls: Allow, block, redirect, isolate, and more based on an app’s approval status (1:45)

Demo: Prevent data exposure in AI

See how Cloudflare One helps protect sensitive data when users interact with generative AI apps. This demo highlights secure access service edge (SASE) capabilities including:
- Data loss prevention (DLP) detections for sensitive content (e.g., PII, source code, financials) (0:22)
- Detections for data at rest in AI tools like ChatGPT (1:00)
- Guardrails for user prompts based on intent/topic to block jailbreak attempts, code abuse, PII requests, and other risky behavior (2:12)

Demo: Manage security posture of GenAI apps

See how Cloudflare One helps you manage the security posture of GenAI tools like ChatGPT, Claude, and Gemini. This demo highlights:
- API integrations: Available for ChatGPT, Gemini, Claude, and most popular SaaS apps (0:18)
- Posture findings: Scan for misconfigurations, unauthorized activity, and other security issues (0:50)
- Shadow AI discovery: Find which third-party AI apps access your SaaS tools (1:15)

The Easiest Way to Get Hacked: Open Introspection. #graphql #businesslogic #apisecurity #rbi

The RBI incident (Burger King, Tim Hortons) proves that business logic abuse (BLA) often results from a cascade of simple flaws, not one complex attack. The key mistake: GraphQL introspection was enabled. This gave the attacker the full API blueprint: the map needed to find the open registration validation flaw and execute a massive data leak. Action item: If you have GraphQL, check your production settings now. Disable introspection. Don't hand the attacker the map to your castle!
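As a rough sketch of the action item above: introspection queries reach the schema through the `__schema` and `__type` meta-fields, so they can be rejected before execution. The function names here (`isIntrospectionQuery`, `guardQuery`) are illustrative, and a raw string check like this is deliberately naive; framework-level switches (Apollo Server's `introspection: false` option, or graphql-js's `NoSchemaIntrospectionCustomRule` validation rule) are the robust fix.

```typescript
// Minimal sketch: block introspection at the HTTP layer before the
// GraphQL executor runs. Assumes you can inspect the raw query string.
//
// Prefer your server's built-in switch where available, e.g. Apollo:
//   new ApolloServer({ schema, introspection: false })

function isIntrospectionQuery(query: string): boolean {
  // Introspection works through the __schema and __type meta-fields.
  // __typename is a normal client-side field and is deliberately allowed.
  return /\b__(schema|type)\b/.test(query);
}

// Illustrative guard: decide whether a query may proceed.
function guardQuery(query: string): { allowed: boolean; reason?: string } {
  if (isIntrospectionQuery(query)) {
    return { allowed: false, reason: "introspection disabled in production" };
  }
  return { allowed: true };
}
```

Note that a regex filter can be evaded (e.g., via unusual whitespace handled by the parser), which is why a validation rule applied inside the GraphQL server itself is the safer production control.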

2026 Cybersecurity Predictions by Teleport CEO Ev Kontsevoy

2025 was a turning point for identity security. Many professionals realized that traditional human- and machine-focused identity solutions just don’t work for AI. AI is non-deterministic like a human, yet it’s still software. This creates an entirely new identity category. Traditional IAM tools would treat AI identities as yet another separate type, creating new silos.

From discovery to defense: Securing APIs with Datadog App and API Protection

APIs now sit at the center of almost every digital product, from mobile apps and SaaS platforms to embedded services. As organizations scale, the number of endpoints grows quickly, as does the attack surface. Unmonitored or misconfigured APIs have already led to major incidents across industries, including data exposure, broken authentication, and large-scale account takeover.

Cybersecurity Predictions for 2026: Human Risk, AI Data Leaks, and the Next Big Breach

Looking back at 2025, two mega-trends from the past have continued: First, data breaches remained a constant and continued to trend upward; and second, there was once again a headline disaster no one anticipated. The first point needs no elaboration; data breaches are like air pollution, an accepted nuisance that only occasionally becomes so severe that we wonder why we live like this. For the second point, I gesture toward the major incidents of recent years. MOVEit. CrowdStrike. Snowflake.