
Demo: Access controls for GenAI and agentic AI

See how Cloudflare One simplifies access controls across both generative AI and agentic AI communication, all from one unified secure access service edge (SASE) dashboard. This demo highlights:
- Securing human-to-AI connections by blocking or redirecting traffic from unapproved tools and isolating AI apps to protect data (0:09)
- Streamlining access to MCP servers for AI-to-resource connections via Cloudflare's MCP server portals (1:10)

Solving the AI Data Gap: Secure Enterprise File Access via Egnyte's MCP Server

Enterprise organizations face a fundamental challenge in AI adoption. While tools like ChatGPT and Claude offer transformative capabilities, their effectiveness is limited without secure access to organizational data. Critical business information often stays locked in secure repositories, preventing AI assistants from providing business-specific insights. Without secure access to that mission-critical content, AI assistants fall short of their potential.

Agentic Era: The Myths and Realities of It All

After four sessions covering the technical realities, business imperatives, and security challenges of agentic AI, Salt Security Co-Founder and CEO Roey Eliyahu and Salt CMO Michael Callahan come together for an unfiltered conversation about where the industry actually stands and where it's headed. The gap between AI ambition and operational readiness has never been wider.

The Agentic Era is Here: Announcing the 4th Edition of AI & API Security For Dummies

If you look at the headlines, the story is about Artificial Intelligence. But if you look at the architecture, the story is about APIs. The reality of modern tech is simple: You can’t have AI security without API security. As we move rapidly from simple chatbots to autonomous agents, the way we secure our infrastructure must evolve. That is why we are thrilled to announce the release of the 4th Edition of AI & API Security For Dummies, Salt Security Special Edition.

Cybersecurity Predictions for 2026: Human Risk, AI Data Leaks, and the Next Big Breach

Looking back at 2025, two mega-trends from the past have continued: First, data breaches remained a constant and continued to trend upward; and second, there was once again a headline disaster no one anticipated. The first point needs no elaboration; data breaches are like air pollution, an accepted nuisance that only occasionally becomes so severe that we wonder why we live like this. For the second point, I gesture toward the major incidents of recent years: MOVEit. CrowdStrike. Snowflake.

The top 6 AI security trends for 2026, and how companies can prepare

AI is changing the threat landscape faster than organizations can respond. AI-generated phishing and fraud have increased sharply year over year, and GenAI is enabling more sophisticated cyberattacks than ever before. Businesses are feeling the pain. Our team at Vanta surveyed 2,500 business and IT leaders across the globe and found that nearly three-quarters believe AI threats are outpacing their ability to manage them.

How to Detect and Eliminate Shadow AI in 5 Steps

The pressure to integrate AI is immense. Your developers need to move fast, and they’re finding ways to get the job done. But this rush for innovation often happens outside of established governance, creating a sprawling, invisible risk known as Shadow AI. To secure your organization, you must first understand what Shadow AI actually is. It’s not just a developer downloading a file to their laptop. Shadow AI is the totality of unmanaged AI assets within your supply chain.

DeepChat AI agent XSS-to-RCE via Mermaid and Electron IPC

In December 2025, a critical remote code execution vulnerability was disclosed in DeepChat, an open-source desktop AI agent platform built using Electron. The issue, tracked as CVE-2025-67744, affects all DeepChat versions prior to 0.5.3 and carries a CVSS score of 9.6. The vulnerability arises from the interaction between two separate weaknesses. The first allows attacker-controlled JavaScript execution through unsafe rendering of Mermaid diagrams.
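The blurb describes two chained weaknesses: script injection through unsafe Mermaid diagram rendering, and an Electron renderer whose IPC surface lets injected script escalate to code execution. As a general illustration of this class of bug (not DeepChat's actual code, which the source does not show), a hardened setup might look like the following sketch; the option names are real Mermaid and Electron settings, but how they map onto the 0.5.3 fix is an assumption.

```typescript
// Hedged sketch: generic hardening for an Electron app that renders
// Mermaid diagrams from untrusted (e.g. LLM-generated) content.

// --- Renderer side: constrain Mermaid itself ---
import mermaid from "mermaid";

mermaid.initialize({
  startOnLoad: false,
  // "strict" (the default) HTML-encodes labels and blocks script-bearing
  // payloads; "loose" permits them and is the dangerous setting here.
  securityLevel: "strict",
});

// --- Main process: reduce what injected script can reach ---
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false,  // no Node.js APIs in the page
      contextIsolation: true,  // preload and page run in separate contexts
      sandbox: true,           // Chromium sandbox for the renderer
    },
  });
  win.loadFile("index.html");
});
```

Even with these settings, any privileged handler exposed to the page via a preload bridge must validate its inputs, since an XSS foothold can still invoke whatever the bridge exposes; that renderer-to-main hop is typically where XSS becomes RCE in Electron apps.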