
9 AI risks that could impact your organization, and how to mitigate them

As AI becomes more user-friendly and performance-focused, organizations are increasingly adopting it to streamline complex workflows. However, the rapid pace of adoption means that teams often implement AI models before fully mapping the security and compliance implications. According to Vanta’s State of Trust Report, more than 50% of organizations view AI risks as a growing concern today.

Exabeam Nova: The First Autonomous Multi-Agent AI for Cybersecurity

Security teams are in an AI arms race — facing massive data volumes, insider threats, and adversaries using AI to find vulnerabilities and launch faster, smarter attacks. Exabeam changes the game with Exabeam Nova, the first autonomous multi-agent AI purpose-built for security operations. Fully embedded within the New-Scale Security Operations Platform, Exabeam Nova delivers measurable outcomes across threat detection, investigation, and response.

SquareX Shows AI Browsers Fall Prey to OAuth Attacks, Malware Downloads and Malicious Link Distribution

As AI Browsers rapidly gain adoption across enterprises, SquareX has released critical security research exposing major vulnerabilities that could allow attackers to exploit AI Browsers to exfiltrate sensitive data, distribute malware, and gain unauthorized access to enterprise SaaS apps. The timing of this disclosure is particularly significant, as major companies including OpenAI, Microsoft, Google, and The Browser Company have announced or released their own AI browsers. With Chrome and Edge alone representing 70% of the browser market, it is likely that AI Browsers will soon account for the majority of consumer browsing.

Agentic AI Security: Introducing the AI Firewall/Guardrail

As organizations adopt powerful AI agents for complex B2B workflow automation, securing their actions and ensuring compliance becomes paramount. A10 Networks' security expert, Diptanshu Purwar, explains the foundational need to integrate AI agents into existing governance platforms, which involves utilizing established enterprise security practices, such as role-based access and robust policy management, tailored explicitly for agents.
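The role-based access pattern described above can be sketched as a deny-by-default policy gate placed in front of agent tool calls. This is a minimal illustrative sketch, not A10 Networks' implementation; all names (`AgentPolicy`, the roles, and the actions) are assumptions for the example.

```python
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Maps an agent role to the set of tool actions it may invoke."""
    allowed_actions: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, role: str, action: str) -> None:
        # Explicitly whitelist an action for a role.
        self.allowed_actions.setdefault(role, set()).add(action)

    def is_allowed(self, role: str, action: str) -> bool:
        # Deny-by-default: anything not granted is blocked.
        return action in self.allowed_actions.get(role, set())


policy = AgentPolicy()
policy.grant("billing-agent", "read_invoice")


def execute_agent_action(role: str, action: str) -> str:
    # Gate every agent tool call through the policy before execution.
    if not policy.is_allowed(role, action):
        return f"DENIED: {role} may not perform {action}"
    return f"OK: {role} performed {action}"
```

In this sketch a `billing-agent` can read invoices but would be blocked from, say, issuing refunds until that action is explicitly granted, mirroring how established role-based access control can be reused for agents rather than inventing a parallel permission system.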

Regulatory Frameworks Affecting AI and Data Privacy Explained

AI is now embedded in everyday operations across support, finance, healthcare, and the public sector. As models touch more sensitive data, the legal landscape is moving just as quickly. The center of gravity has shifted from annual checklists to continuous compliance in production. This guide explains the regulatory frameworks affecting AI and data privacy in 2025, how they fit together, and how to turn their requirements into practical, repeatable controls your teams can run every day.

Anatomy of a Modern Threat: Deconstructing the Figma MCP Vulnerability

Threat researchers recently disclosed a severe vulnerability in a Figma Model Context Protocol (MCP) server, as reported by The Hacker News. While the specific patch is important, the discovery itself serves as a critical wake-up call for every organization rushing to adopt AI. This incident provides a blueprint for a new class of attacks that target the very infrastructure powering the AI Agent Economy. To understand the risk, we must first look at the mechanics of this emerging threat.

A step-by-step guide to AI security assessments [With a template]

As artificial intelligence becomes deeply integrated into business operations, organizations are feeling the pressure to keep up. According to Vanta’s 2025 survey, more than 50% of organizations report being overwhelmed by the speed of AI adoption and growing compliance obligations. The problem is aggravated by the fact that AI tools evolve faster than governance policies can adapt, leaving complex gaps for security teams to fill.

Identity automation in the age of agentic AI with Matthew Chiodi

Join us for this session of Defender Fridays as we explore identity automation in the age of agentic AI with Matthew Chiodi, Chief Strategy Officer at Cerby. At Defender Fridays, we delve into the dynamic world of information security, exploring its defensive side with seasoned professionals from across the industry. Our aim is simple yet ambitious: to foster a collaborative space where ideas flow freely, experiences are shared, and knowledge expands.