Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Why Every Tech Company is Talking About OWASP for AI (and You Should Too)

AI is changing everything—but with innovation comes new risks. In this episode of AI on the Edge, we dive deep into OWASP's Top 10 for Large Language Models with security leader Steve Wilson (Exabeam). Discover why every tech company is suddenly talking about LLM security and how you can stay ahead. Inside this episode: why traditional security doesn’t work for AI, plus actionable tips from Steve’s new book, The Developer’s Playbook for LLM Security, to protect your AI systems.

The Critical Inflection Point: Navigating Apex Risks from AI to Stolen Credentials

The global cyber threat landscape has accelerated beyond traditional defenses, reaching a critical inflection point. Today, organizations are no longer battling isolated attackers; instead, they are confronting industrialized, financially motivated cyber syndicates that leverage cutting-edge technologies to maximize their impact. Moreover, the rise of AI in cybersecurity has created both opportunities and threats.

From Model Drift to API Exploitation: The Next Challenge in AI Security

In this clip from "Securing AI Part 4: The Rising Threat of Hidden Attacks in Multimodal AI," Diptanshu Purwar and Madhav Aggarwal summarize why external guardrails are the only sustainable defense against the new wave of AI exploitation. Jamison Utter then sets the stage for the next topic in the series: securing the fundamental protocols and APIs that AI agents rely on.

How Reach Security Automates Remediation and Prevents Configuration Drift

From identification to remediation to drift management. When Reach flags an exposure, it doesn’t stop there. It shows exactly how much risk you’ll reduce by fixing it — and what impact it’ll have on users. In this short demo, CRO Jared Phipps walks through how Reach:
- Quantifies residual risk reduction (e.g., 62%, 91%, etc.)
- Weighs that against user impact
- Guides teams through the remediation process
- Integrates with Jira or other ticketing systems to track fixes
- Monitors configurations to prevent drift and maintain baselines
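The trade-off the demo describes — risk reduction weighed against user impact — can be sketched as a simple scoring queue. This is a minimal illustration under assumed fields and an assumed scoring formula, not Reach's actual model:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    risk_reduction: float  # fraction of residual risk a fix removes, e.g. 0.62
    user_impact: float     # fraction of users the change would affect

def priority(e: Exposure, impact_weight: float = 0.5) -> float:
    """Higher score = fix first; user impact discounts the benefit."""
    return e.risk_reduction - impact_weight * e.user_impact

# Rank flagged exposures so the highest-value, lowest-friction fix comes first.
queue = sorted(
    [Exposure("mfa-gap", 0.91, 0.30), Exposure("legacy-proto", 0.62, 0.05)],
    key=priority,
    reverse=True,
)
```

In practice the score would feed the ticketing integration, so the remediation queue in Jira reflects the same ordering.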

5 Critical LLM Privacy Risks Every Organization Should Know

Large language models take in unstructured data. They transform it into context, embeddings, and answers. That journey touches raw files, vector stores, model logs, and third-party services. Traditional privacy programs focus on databases and forms. LLMs push risk to the edges. The riskiest moments are when you ingest messy content, when your system retrieves chunks to support an answer, and when an agent with tool access is tricked into over-sharing.
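The ingestion moment described above is the first place a privacy control can sit: scrubbing obvious identifiers from raw content before it ever reaches embeddings, vector stores, or model logs. A minimal sketch, assuming regex-based redaction (the patterns are illustrative — real programs use dedicated PII-detection tooling):

```python
import re

# Illustrative patterns only; not an exhaustive PII taxonomy.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

def ingest(raw_chunks: list[str]) -> list[str]:
    """Redact each chunk before it is embedded, stored, or logged."""
    return [redact(c) for c in raw_chunks]

chunks = ingest(["Contact alice@example.com", "SSN 123-45-6789 on file"])
```

Redacting at ingestion means the downstream retrieval and agent steps only ever see sanitized chunks, which narrows the over-sharing risk at the other two moments.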

Find the Fixer: The AI Agent Bringing Order to Ownership

Assigning remediation tasks across an enterprise organization can feel like navigating a maze of inconsistent tags, overlapping teams, and unclear ownership. It’s one of the most persistent operational challenges in vulnerability and exposure management, and one of the biggest barriers to speed. Each scanner and cloud platform comes with its own tagging logic. One system uses ProductOwner, another productowner. Some tags are outdated, others duplicated, and many have no clear purpose.
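The tagging chaos described above — ProductOwner versus productowner, duplicated and outdated tags — is at root a normalization problem. A minimal sketch of how variant tag keys from different scanners might be collapsed into one ownership record (the canonicalization rule and field names are assumptions for illustration):

```python
import re
from collections import defaultdict

def canonical_key(key: str) -> str:
    """Lowercase and strip separators so variant spellings collapse."""
    return re.sub(r"[^a-z0-9]", "", key.lower())

def merge_ownership(tag_sets: list[dict]) -> dict:
    """Merge per-asset tags from multiple scanners into one mapping."""
    merged = defaultdict(set)
    for tags in tag_sets:
        for key, value in tags.items():
            merged[canonical_key(key)].add(value)
    return dict(merged)

# Three scanners, three spellings of the same tag key.
owners = merge_ownership([
    {"ProductOwner": "team-payments"},
    {"productowner": "team-payments"},
    {"product-owner": "team-billing"},
])
```

Note that normalization surfaces conflicts rather than hiding them: here the merged record holds two candidate owners, which is exactly the ambiguity an ownership agent would then have to resolve.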

Predict and Prevent: How AI is Changing Insider Risk Management

Insider risk has become one of the most urgent and financially consequential cybersecurity challenges for today’s organizations, and a top concern for the C-suite and boards alike. Organizations must be prepared to detect and respond to insider risks. In fact, according to IBM’s Insider Threat Report, 83% of organizations reported at least one insider-related security incident in 2024.

Trust in AI Starts with Transparency | Sebastian Goodwin (Autodesk) x Reach Security

Trust in AI starts with transparency. In our recent conversation, “No Time to Drift,” Sebastian Goodwin, Chief Trust Officer at Autodesk, shares how his team is putting that principle into practice — by creating AI Transparency Cards. Think of them like nutrition labels for AI: clear, consistent, and designed to help customers understand what’s inside. Each one outlines what the model does, how it’s trained, what safeguards are in place, and more.

How Responsible AI Governance Strengthens Cybersecurity Defenses

Here's something that should keep you up at night: cybercrime might cost the global economy $10.5 trillion every year by 2025. That's not a typo. Traditional security measures? They're already struggling to keep pace. Attackers have figured out how to weaponize artificial intelligence, launching sophisticated campaigns that slip right past conventional defenses.