Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Ep 2: Hacked together: fast, safe prototyping with AI

Join security experts Adam White, Chas Clawson, and Seth Williams as they explore how AI-first development is reshaping the way cybersecurity teams build, test, and deploy solutions. Traditional development cycles often leave critical ideas trapped in backlogs, but with Gen-AI and language models, security teams can now move from concept to prototype in hours, not months.

Identify common security risks in MCP servers

AI adoption is rapidly increasing, and with that comes a steady influx of useful but potentially vulnerable tools and services still maturing in the AI space. The Model Context Protocol (MCP) is one example of new AI tooling, providing a framework for how applications integrate with and supply context to large language models (LLMs). MCP servers are central to developing AI assistants and workflows that are deeply integrated with your environment.

ToolShell: Remote Code Execution in Microsoft SharePoint (CVE-2025-53770)

On July 19, 2025, a critical remote code execution (RCE) vulnerability (CVE-2025-53770, also referred to as ToolShell) was publicly disclosed, impacting on-premises Microsoft SharePoint Server installations. This vulnerability allows unauthenticated attackers to execute arbitrary code remotely by leveraging insecure deserialization techniques.

LLMs Are Not Goldfish: Why AI Memory Poses a Risk to Your Sensitive Data

We’ve all heard the myth: goldfish have a memory span of just a few seconds. While that’s debatable in marine biology circles, it’s useful as a metaphor in tech, especially when talking about memory, risk, and AI. The problem is, large language models (LLMs) are not goldfish. In fact, they have incredible memory. And increasingly, that memory isn’t just session-based. It’s persistent, long-term, and system-connected. That changes everything.

What Is AI Penetration Testing? A Guide to Autonomous Security Testing

AI penetration testing is changing how organizations identify and exploit vulnerabilities. Instead of relying on traditional manual tests or basic automated scans, autonomous systems now simulate attacker behavior continuously and at scale. These systems use agentic AI to execute real-world exploits, reduce noise, and shift security left, all while keeping human experts focused on the creative flaws machines can’t yet catch.

Securing AI-Generated Code: Why It Matters

Securing AI-Generated Code: Why It Matters In this video, A10's Madhav Aggarwal explains how using AI for coding is an excellent use case, but the code generated by AI must be safe and secure. As AI and large language models (LLMs) become central to enterprise strategy, securing these powerful workloads is no longer optional—it's essential. A10 Networks' security leaders, Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar, explore the growing security risks associated with AI/LLM adoption and what organizations must do to stay protected.

Security is a Critical Factor in AI Adoption

Security is a Critical Factor in AI Adoption Jamison Utter joins A10's GenAI experts Madhav Aggarwal and Diptanshu Purwar to discuss the critical importance of security for AI adoption. They cover how AI fundamentally shifts the attack surface, requiring a move from traditional rule-based pattern matching to understanding natural language semantics. The team emphasizes the need for alignment in AI to ensure models are "helpful, harmless, and honest" (the 3H philosophy) and highlights the role of red teaming and guardrails in preventing vulnerabilities, such as prompt injection.

miniOrange SAML SSO for Azure AD: A Better Way to Secure Your Atlassian Environment

As teams expand and compliance tightens, disconnected logins and manual provisioning create more risk than resilience. Learn how miniOrange SAML SSO syncs Azure AD with Jira, Confluence, and Bitbucket to bring seamless access and centralized control to your Atlassian stack.