Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Questions to ask before vetting an AI agent for your SOC

So you’re ready to “hire” an agent or two for security operations. While AI agents won’t replace your human analysts, they are quickly becoming indispensable team members. Choosing the right ones should resemble a typical hiring process: you need to determine if they possess the necessary skills to fill your team’s gaps, work effectively with others, and grow with your organization. Here are five questions worth asking before you bring an AI agent on board in your SOC.

Why "We Thought It Was On" Keeps Leading to Breaches

At UC Irvine’s Digital Leadership Agenda 2026, moderated by Nicole Perlroth, Garrett Hamilton illustrates what those blind spots can look like: “We believed it was deployed.”“It was turned on.”“It should have stopped this.” Except one exception, one policy gap, one control not applied at scale — and assumptions replace reality. The real problem isn’t visibility. It’s continuously validating intent against execution.

Misconfigurations Are Still Owning Security Teams

Garrett Hamilton sat down with Todd Graham, Managing Partner at Microsoft’s venture fund, M12, to talk about why M12 invested in Reach and why our mission was a no-brainer for him. Nation-state attacks make the headlines—but most people are getting owned by misconfigured servers, networks, and controls hiding in plain sight. Turns out the problem isn’t what teams don’t own. It’s what they do own that isn’t, in most cases, even turned on.

Are LLMs becoming messengers for attackers? #ai #cybersecurity

AI assistants with broad enterprise access are creating a new attack vector. Chris Luft and Matt Bromiley discuss the Gemini Jack vulnerability, where attackers used prompt injection to turn Google's AI assistant into an unwitting accomplice in data exfiltration. The attack embedded hidden instructions in documents or emails. When employees asked Gemini normal questions like "show me our budgets," the AI retrieved the poisoned document and executed the attacker's commands without anyone clicking anything.

Security Visionaries | Agentic AI Threats: Hype or Reality

Are agentic AI threats just hype or reality? Security Visionaries host Max Havey digs into the world of agentic AI-enabled threats and cyber espionage with guests Neil Thacker, Global Privacy and Data Protection Officer at Netskope, and Ray Canzanese, Head of Netskope Threat Labs. IN THIS EPISODE.

Finding the Best AI Governance Software for Enterprises

‍ ‍AI governance software provides GRC leaders and security and risk managers (SRMs) with a dependable way to understand how AI is being used across the business and whether safeguards are functioning as intended. The software can translate a complex ecosystem of tools and models into concrete insights that stakeholders can evaluate.

Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance

As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.

GDPR Compliance for AI Agents: A Startup's Guide

AI agents are moving fast. They book meetings, draft emails, summarize calls, search internal knowledge bases, and increasingly act on behalf of users. And as more teams adopt these systems, a familiar question surfaces almost immediately: How does GDPR apply to AI agents? What we’ve learned—working with startups rolling out AI features across support, sales, HR, and engineering—is that GDPR is not a blocker.

Old AI Security vs Evo: Watch Agentic Security Replace Weeks of Manual Work

From intelligent chatbots to autonomous agents, innovation has never moved faster thanks to GenAI. But with the rate of velocity comes a massive new challenge: a class of complex, non-deterministic security risks that traditional cybersecurity methods are simply not equipped to handle. AI-native applications are already running in production. Across industries, teams are deploying copilots, RAG systems, autonomous agents, and AI-powered workflows faster than traditional security processes can keep up.