
The Silent Threat to the Agentic Enterprise: Why BOLA is the #1 Risk for AI Agents

In the race to deploy autonomous AI agents, organizations are inadvertently building on a foundation of shifting sand. While security teams have spent the last year focused on "Prompt Injection" and "Model Poisoning," a much older, more dangerous class of vulnerability has quietly become the primary attack vector for the agentic era: Broken Object Level Authorization (BOLA).
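BOLA is worth seeing in code. Here is a minimal sketch (the endpoint, the toy data, and the auth stub are all hypothetical, chosen for illustration, not taken from the post): an API that looks up an object by ID but never checks that the authenticated caller owns it. An autonomous agent holding a perfectly valid token can enumerate IDs at machine speed, which is exactly why this old flaw is back at the top of the list.

```python
# Minimal BOLA sketch using Flask for illustration. The route, data
# store, and auth stub are hypothetical.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Toy data store standing in for a real database.
INVOICES = {
    1: {"owner": "alice", "amount": 120},
    2: {"owner": "bob", "amount": 75},
}

@app.before_request
def fake_auth():
    # Stand-in for real authentication; assume the token resolved to "alice".
    g.user = "alice"

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id) or abort(404)
    # The BOLA fix: verify the authenticated caller owns the object.
    # Without this check, any authenticated user (or an AI agent acting
    # on their behalf) could walk the ID space and read other users' data.
    if invoice["owner"] != g.user:
        abort(403)
    return jsonify(invoice)
```

The fix is a one-line ownership check, but it has to exist on every object-level route. To an edge control, the malicious request looks like perfectly valid, authenticated traffic, which is why BOLA scales so badly with agent-driven workloads.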

Edge Security Is Not Enough: Why Agentic AI Moves the Risk Inside Your APIs

For the last twenty years, cybersecurity has been built around the edge: the belief that threats come from the outside, and that firewalls, WAFs, and API gateways can inspect and control what enters the environment. That model worked when applications were centralized, traffic was predictable, and most interactions followed a clear pattern: a user in a browser talking to an app inside a data center. Agentic AI breaks that model.

The Agentic Era is Here: Announcing the 4th Edition of AI & API Security For Dummies

If you look at the headlines, the story is about Artificial Intelligence. But if you look at the architecture, the story is about APIs. The reality of modern tech is simple: You can’t have AI security without API security. As we move rapidly from simple chatbots to autonomous agents, the way we secure our infrastructure must evolve. That is why we are thrilled to announce the release of the 4th Edition of AI & API Security For Dummies, Salt Security Special Edition.

The 12 Months of Innovation: How Salt Security Helped Rewrite API & AI Security in 2025

As holiday lights go up and inboxes fill with year-in-review emails, it’s tempting to look back on 2025 as “the year of AI.” But for security teams, it was something more specific: the year APIs, AI agents, and MCP servers collided across the API fabric, expanding the attack surface faster than most organizations could keep pace. At Salt Security, we spent 2025 focused on one thing: defending the API action layer where AI, applications, and data intersect.

Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance

As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.

React2Shell: The Frontend Vulnerability That Unlocks Your Internal APIs

The cybersecurity world is currently buzzing about React2Shell (CVE-2025-55182), a critical remote code execution (RCE) vulnerability affecting React and Next.js. The scale of the threat is massive: researchers have already identified over 77,000 vulnerable IP addresses exposed to the internet and confirmed that state-sponsored actors and opportunistic crypto miners have breached at least 30 organizations. But if you look closely, this isn't really a story about React.

Agentic AI Security: The Emerging Fourth Pillar of Cybersecurity

For decades, cybersecurity has been organized around three dominant pillars: endpoint security, network security, and cloud security. These domains have shaped technology categories, vendor ecosystems, and enterprise budgets. They have matured into multi-billion-dollar markets, each responding to successive waves of digital transformation. However, a tectonic shift is underway.

Critical vLLM Flaw Exposes the Soft Underbelly of AI Infrastructure

While the world worries about "jailbreaking" LLMs or preventing them from hallucinating, a critical new vulnerability has just reminded us of a fundamental truth: AI is just software, and software has bugs. A newly discovered critical flaw (CVE-2025-62164) in vLLM, one of the most popular libraries for serving large language models, allows attackers to achieve Remote Code Execution (RCE) or crash servers simply by sending a malicious API request. This isn't a failure of the AI model.
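The post doesn't reproduce the exploit here, but the broader bug class is easy to sketch: an inference endpoint that hands network-supplied bytes straight to a deserializer. Everything below (the handler name, the size cap, the payload shape) is a hypothetical illustration of defensive handling at that boundary, not vLLM's actual code path.

```python
# Hypothetical sketch of the bug class: untrusted bytes fed to a
# deserializer inside a model-serving endpoint. Illustrative only.
import base64
import io

import torch


def load_user_tensor(payload_b64: str) -> torch.Tensor:
    raw = base64.b64decode(payload_b64)
    # Reject oversized payloads before deserializing anything.
    if len(raw) > 16 * 1024 * 1024:
        raise ValueError("payload too large")
    # weights_only=True narrows what torch.load will reconstruct, but any
    # deserializer reachable from the network is still attack surface:
    # malformed input can crash the process or worse.
    tensor = torch.load(io.BytesIO(raw), weights_only=True)
    if not isinstance(tensor, torch.Tensor):
        raise ValueError("expected a tensor")
    return tensor
```

Treating every deserializer behind an API as part of the attack surface, with size limits and type checks before the result is trusted, is the unglamorous discipline the headline-grabbing AI stack still needs.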

Securing the New AI Edge: Why Salt Security Is Bringing MCP Protection to AWS WAF

The definition of the "edge" is changing. For years, security teams have focused on the traditional perimeter: web applications, public APIs, and user interfaces. We built firewalls, deployed WAFs, and established strict access controls to keep bad actors out. But with the rapid adoption of Agentic AI, the perimeter has expanded. Today, your "edge" isn't just where users connect to your apps; it's where AI agents connect to your data.

Say Hello to Ask Pepper AI: Turning API Security into a Conversation

In the world of cybersecurity, we have a "data" problem. We have more of it than ever before: more logs, more alerts, and definitely more APIs. But recently, this challenge has compounded. The rise of Agentic AI and Model Context Protocol (MCP) servers has exploded the number of machine-to-machine connections in our environments. These agents spin up new pathways and access data in ways that are often invisible to traditional monitoring.