Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Security is a Critical Factor in AI Adoption

Security is a Critical Factor in AI Adoption Jamison Utter joins A10's GenAI experts Madhav Aggarwal and Diptanshu Purwar to discuss the critical importance of security for AI adoption. They cover how AI fundamentally shifts the attack surface, requiring a move from traditional rule-based pattern matching to understanding natural language semantics. The team emphasizes the need for alignment in AI to ensure models are "helpful, harmless, and honest" (the 3H philosophy) and highlights the role of red teaming and guardrails in preventing vulnerabilities, such as prompt injection.

Securing AI-Generated Code: Why It Matters

Securing AI-Generated Code: Why It Matters In this video, A10's Madhav Aggarwal explains how using AI for coding is an excellent use case, but the code generated by AI must be safe and secure. As AI and large language models (LLMs) become central to enterprise strategy, securing these powerful workloads is no longer optional—it's essential. A10 Networks' security leaders, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, explore the growing security risks associated with AI/LLM adoption and what organizations must do to stay protected.

What Is AI Penetration Testing? A Guide to Autonomous Security Testing

AI penetration testing is changing how organizations identify and exploit vulnerabilities. Instead of relying on traditional manual tests or basic automated scans, autonomous systems now simulate attacker behavior continuously and at scale. These systems use agentic AI to execute real-world exploits, reduce noise, and shift security left, all while keeping human experts focused on the creative flaws machines can’t yet catch.

Illusion of control: Why securing AI agents challenges traditional cybersecurity models

Enterprise security teams commonly focus on controlling AI agent conversations through prompt filters and testing edge cases to prevent unauthorized information access. While these measures matter, they miss the bigger picture: the real challenge is granting AI agents necessary permissions while minimizing risk exposure. This isn’t a new problem—it’s the same fundamental challenge we’ve faced with human users for years.

The Nightfall Approach: 5 Ways Our Shadow AI Coverage Differs from Generic DLP

Shadow AI refers to the unauthorized or unmonitored use of AI tools (like ChatGPT, Copilot, Claude, and Gemini) by employees in the workplace. It’s now one of the fastest-growing data exfiltration vectors. Employees are pasting source code, customer or patient data, contract terms, and even M&A info into gen AI tools, often without realizing the risk. And many legacy DLP tools are still catching up.

Riscosity Launches The DFPM Trust Center

For a AI software company like Riscosity, which helps organizations secure and govern data flows to third parties, compliance is not just a regulatory requirement—it is central to the value proposition. Recognizing this, Riscosity has launched a dedicated Trust Center at trust.riscosity.com, powered by industry leader Vanta, to streamline how it communicates its compliance posture with current and prospective customers.