Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Weathering the Attacker's Perfect Storm with Agentic AI-Powered SecOps

The cybersecurity landscape is facing its own perfect storm: AI-powered attacks coupled with resource constraints and regulator pressure, demanding a fundamental shift in SecOps to rise above. With AI showing no signs of slowing down, these issues are not fleeting. They are here to stay, and it is our responsibility to meet them head-on with efficient, AI-powered solutions that allow SecOps teams to conquer the world’s most innovative attacks.

RSAC 2026 Wrap-Up: Defining the Future as the AI Cybersecurity Company

At RSAC 2026, Arctic Wolf set the agenda for the future of cybersecurity and AI. Throughout the week, we were at the center of the industry dialogue, shaping how the market is approaching agentic AI in cybersecurity and setting clear expectations for where the industry is headed next. The launches of the Aurora Superintelligence Platform and the Aurora Agentic SOC raised the bar for the industry.

Browser AI Plugins, Agentic AI, and MCP: The 3 Blind Spots Legacy DLP Can't See

A recently patched Google Chrome vulnerability is a signal security leaders cannot ignore. But it's only the beginning of a much larger story. In January 2026, a high-severity vulnerability was disclosed in Chrome's Gemini AI integration: CVE-2026-0628. The flaw allowed a malicious browser extension with only basic permissions to escalate privileges and gain access to a user's camera, microphone, local files, and the ability to screenshot any website, all without user consent. Google patched it quickly.

You Patched LiteLLM, But Do You Know Your AI Blast Radius?

For a brief window, a widely used open source package in the AI ecosystem was compromised with credential-stealing malware. LiteLLM, a model gateway used to route requests to more than 100 LLM providers, has been downloaded millions of times per day. In that short window, the malicious versions were likely pulled tens of thousands of times before being caught.

The Agentic Stack Explained: How LLMs, MCP Servers, and APIs Work Together

The term AI agent is dominant in current cybersecurity discourse. Vendors, analysts, and CISOs all use the label, yet technical confusion remains regarding how agents actually operate and where the security risks reside. Beneath the surface-level familiarity, there is often significant confusion about what an AI agent actually is, how it operates technically, and most importantly for security teams, where the risk actually lives.

The AI Compliance Gap No One's Talking About (ISO, NIST, EU AI Act)

Mend.io, formerly known as Whitesource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development -– using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, vulnerabilities in open source and custom code, and open-source license risks.

How to Stub LLMs for AI Agent Security Testing and Governance

Note: The core architecture for this pattern was introduced by Isaac Hawley from Tigera. If you are building an AI agent that relies on tool calling, complex routing, or the Model Context Protocol (MCP), you’re not just building a chatbot anymore. You are building an autonomous system with access to your internal APIs. With that power comes a massive security and governance headache, and AI agent security testing is where most teams hit a wall.

AI Application Security: 6 Focus Areas and Critical Best Practices

AI application security protects AI-powered apps, including those powered by large language models ( LLMs), from unique threats like prompt injection, data poisoning, and model theft. It achieves this by securing the entire lifecycle, including code, data, algorithms, and APIs, using specialized tools and processes that go beyond traditional security measures. It involves securing the AI model’s behavior, training data, and outputs.

Secure Coding Techniques that Is Critical for Modern Applications

Let's be honest: software ships faster today than most security teams can comfortably keep up with. Microservices, sprawling APIs, cloud-native deployments, and AI-assisted code generation have accelerated development at an unprecedented pace. But buried within that speed are small, overlooked coding mistakes that quietly open the door to serious breaches.