Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How to Gain Value from AI in Cybersecurity

The Terminator is often people’s reference point for artificial intelligence (AI), especially when they worry that technology will be the end of civilization. On the other end of the AI spectrum is the beloved, marshmallow-soft Baymax, a helper robot who assists everyone around him. The reality of AI sits somewhere between these two extremes. For security teams, AI initially seemed like a revolutionary technology that would offer faster detection and automated analysis.

Special Episode: A conversation with Sam, the AI SOC Analyst | Breach Ready Radio | Securonix

In this special episode, Ben sits down with Sam, the AI SOC Analyst inside Securonix, to walk through what happens when a detection fires and a real investigation begins. From a suspicious login at 2 a.m. to building context across users, endpoints, identities, and cloud activity, the conversation focuses on how investigations are changing in practice. We dig into what Sam actually does: how telemetry is pulled together, how behavior is compared to baselines, how risk is calculated, and how findings are turned into clear, structured recommendations that analysts can act on.
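The baseline comparison and risk calculation the episode touches on can be sketched in miniature. Everything below is an illustrative assumption — the baseline fields, thresholds, and point values are hypothetical and do not reflect Securonix's actual scoring logic:

```python
from datetime import datetime

# Hypothetical baseline of one user's typical login behavior.
BASELINE = {
    "usual_hours": range(8, 19),       # logs in between 08:00 and 18:59
    "usual_countries": {"US"},
    "usual_devices": {"laptop-jsmith"},
}

def score_login(event: dict, baseline: dict) -> int:
    """Sum simple risk points for each deviation from the baseline."""
    risk = 0
    if datetime.fromisoformat(event["timestamp"]).hour not in baseline["usual_hours"]:
        risk += 40   # off-hours login, e.g. 2 a.m.
    if event["country"] not in baseline["usual_countries"]:
        risk += 35   # unfamiliar geography
    if event["device"] not in baseline["usual_devices"]:
        risk += 25   # unknown endpoint
    return risk

event = {"timestamp": "2024-05-01T02:14:00", "country": "RO", "device": "unknown-host"}
print(score_login(event, BASELINE))  # 100: all three signals deviate
```

Real platforms build baselines statistically from historical telemetry rather than hard-coding them, but the shape of the comparison is the same: deviations accumulate into a score that drives triage.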

Securing Agentic AI: Why Visibility, Behavior, and Guardrails Matter

Agentic AI is quickly transitioning from experimentation to production. Enterprises are deploying AI agents to interpret goals, decide what actions to take, interact with business tools and APIs, and execute those actions autonomously, with limited or no human oversight. The promise is speed and efficiency, but the proverbial “blast radius” is bigger and fundamentally different from anything security teams have managed before.
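One common guardrail pattern for limiting that blast radius is a tool allowlist with a human-approval gate on actions that have external side effects. The sketch below is a hypothetical illustration; the tool names and policy are assumptions, not any vendor's implementation:

```python
# Guardrail around an agent's tool calls: allowlist + approval gate.
ALLOWED_TOOLS = {"search_tickets", "read_logs", "send_report"}
NEEDS_APPROVAL = {"send_report"}  # actions with external side effects

def execute_tool(tool: str, args: dict, approved: bool = False) -> str:
    """Refuse tools outside the allowlist; gate risky ones on human approval."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is not on the allowlist")
    if tool in NEEDS_APPROVAL and not approved:
        return f"PENDING: '{tool}' queued for human review"
    return f"OK: ran '{tool}' with {args}"

print(execute_tool("read_logs", {"host": "web-01"}))   # allowed, runs immediately
print(execute_tool("send_report", {"to": "ciso"}))     # allowed, but held for review
```

The design choice here is that the guardrail sits outside the agent: even a fully compromised or hallucinating agent cannot invoke a tool the wrapper refuses to dispatch.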

Why Your Human Risk Management Strategy Can't Ignore AI

AI isn’t just another technology wave—it’s a force multiplier for both innovation and risk. In a recent webinar featuring insights from Bryan Palma and guest speaker Jinan Budge, Vice President and Research Director at Forrester, one message came through clearly: the rise of AI and AI agents is fundamentally reshaping the human risk landscape—and security leaders need to move fast to keep up.

Top Generative AI Security Risks In The Enterprise

Enterprise security teams spent years building data loss prevention (DLP) programs around a predictable set of egress channels: email, USB drives, cloud storage, and sanctioned SaaS apps. Generative AI has rewritten those assumptions almost overnight. Today, the same data those DLP controls were built to protect is flowing into AI interfaces that most organizations have no visibility into and no enforcement capability over.
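A DLP-style check on text bound for a generative AI interface might look like the following minimal sketch. The pattern set and its names are illustrative assumptions, not any product's ruleset, and real DLP engines use far more robust detection than regexes:

```python
import re

# Illustrative sensitive-data patterns for scanning outbound AI prompts.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

prompt = "Summarize this record: SSN 123-45-6789, key AKIA1234567890ABCDEF"
print(scan_prompt(prompt))  # ['ssn', 'api_key']
```

The hard part in practice is not the matching but the interception point: unlike email or USB egress, AI prompts leave through browser sessions and API calls that traditional DLP controls never instrumented.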

From Discovery to Defense: Why AI Red Teaming Is the Next Step After AI-SPM

This week, we announced the general availability of Evo AI-SPM, the first operational layer of Snyk’s AI Security Fabric. AI-SPM gives security teams something they’ve never had before: a system of record for AI risk, with the ability to discover models, frameworks, datasets, and agent infrastructure embedded directly in code. For many organizations, that discovery step is a breakthrough.

Trustworthy AI Starts with Better Agents

The difference between an AI feature and an AI-led operating model becomes clear the moment a security problem becomes difficult. In real-world security operations, where the signal is ambiguous, the evidence spans multiple domains, and the attacker is behaving in unfamiliar ways, architecture matters far more than any single feature.

Non-Human Identity Sprawl Is the Hidden Cost of AI Velocity

In the current AI boom, we race to adopt copilots, orchestration scripts, CI workflows, retrieval pipelines, and background jobs, often taking for granted that every one of these needs an identity: service accounts, OAuth apps, API keys, short-lived tokens. As AI velocity increases, so does the number of these non-human identities (NHIs). Alongside model quality, latency, hallucinations, and GPU costs, we also need to consider how these identities affect security.
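A basic hygiene check on such identities is flagging credentials that have gone unused, which are prime candidates for revocation. The sketch below is hypothetical: the inventory schema and the 90-day idle threshold are assumptions, not tied to any cloud provider's API:

```python
from datetime import datetime, timedelta

# Hypothetical inventory of non-human identities (NHIs).
NHI_INVENTORY = [
    {"name": "ci-deploy-token",    "kind": "api_key",         "last_used": "2024-04-28"},
    {"name": "etl-service-acct",   "kind": "service_account", "last_used": "2023-11-02"},
    {"name": "rag-pipeline-oauth", "kind": "oauth_app",       "last_used": "2024-05-01"},
]

def stale_identities(inventory, now, max_idle_days=90):
    """Return NHIs unused for longer than max_idle_days: revocation candidates."""
    cutoff = now - timedelta(days=max_idle_days)
    return [i["name"] for i in inventory
            if datetime.fromisoformat(i["last_used"]) < cutoff]

print(stale_identities(NHI_INVENTORY, datetime(2024, 5, 2)))  # ['etl-service-acct']
```

The real challenge is assembling the inventory at all: NHIs are scattered across cloud consoles, CI systems, and SaaS integrations, with no single system of record for when each was last used.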

Agentic commerce is happening now. Here's what we've learned.

We’ve been collaborating with others to explore when and how agentic commerce will work. Robin Gandhi is the CPO of Lithic, a leading card issuer that’s already seeing agents use its cards to make purchases. Below, he shares his thoughts on what’s changed, and what needs to change, for agentic commerce to become mainstream. Last year, I wrote about the opportunity for agentic payments to revolutionize travel bookings, ad spend management, procurement, and more.

AI can do what now?! - Detecting financial fraud with Elastic Security

Financial fraud is increasingly cyber-enabled, requiring organizations to detect complex campaigns across transactions, identities, and digital systems faster and with greater accuracy. Join cybersecurity experts Lisa Jones-Huff and Joe Murin as they discuss how Elastic Security applies AI, machine learning, and generative AI to modern fraud detection. They’ll share how Elastic Security helps teams connect signals, reduce noise, accelerate investigations, and scale fraud prevention through emerging frameworks and standards across financial services organizations.