
EP 4 - AI-Powered Fraud: Redefining the Identity Threat Landscape

Imagine receiving an urgent email from your bank that looks perfectly legitimate. It warns you of a suspicious transaction and prompts you to verify your identity. You hesitate but click, and suddenly, your credentials are compromised. This scenario, crafted by AI-powered fraud-as-a-service, is happening now.

Managing shadow AI: best practices for enterprise security

The rush to work faster with artificial intelligence (AI) can lead employees to accidentally expose sensitive data. Take this scenario: someone on the procurement team has a tight deadline, so they upload a confidential contract into an AI tool to review a few redlines. It’s unclear whether the AI system is storing the data from the contract, how long it will be retained, and whether it will resurface in a future prompt to someone else.

The EU AI Act: Key deadlines, risk levels, and steps to prepare

The EU AI Act is one of the world’s first comprehensive regulations aimed at AI-based systems. While voluntary standards like ISO 42001 already existed, the Act introduces mandatory requirements that in-scope organizations must meet to avoid considerable fines and operational disruptions. If you develop, use, or distribute AI systems, you may have to meet the obligations prescribed by this regulation. Our EU AI Act summary will help you do so by covering:

5 Steps to Securing AI Workloads

In the past year alone, the number of artificial intelligence (AI) packages running in workloads grew by almost 500%. In other words, AI is everywhere, and it’s settling in for the long haul. As helpful as they are, these AI workloads come with security challenges, including data exposure, adversarial attacks, and model manipulation. So as AI adoption accelerates, security leaders must build an AI workload security program that protects their organizations while enabling innovation.

Insider Risk with Nightfall DLP: Episode 2 - Managing Shadow AI

Earlier this year, security researchers found more than 1 million records, including user data and API keys, in an exposed DeepSeek database. This massive exposure tells us that data exfiltration risk and AI proliferation are inextricably linked: as AI tools grow in popularity and complexity, exfiltration risk rises in kind.

AI Agents and API Security: The Hidden Risks Lurking in Your Business Logic

Modern organizations are becoming increasingly reliant on agentic AI, and for good reason: AI agents can dramatically improve efficiency and automate mission-critical functions like customer support, sales, operations, and even security. However, this deep integration into business processes introduces risks that, without proper API security, can compromise sensitive data and decision-making.