Outpost 24: How an AI Agent Hacked McKinsey's AI Platform
Autonomous Attacks on AI Systems: How to Test Before They Do...
In March 2026, a security firm deployed an autonomous AI agent against McKinsey’s internal AI platform.
What happened next took less than two hours.
The AI agent managed to gain access to millions of internal chat messages, exposure of sensitive data, and the ability to modify system prompts and manipulate outputs globally.
How did they do it?
This wasn’t a traditional breach. It was AI attacking AI.
Join our webinar to learn how this attack took place, step-by-step, and learn all about AI-driven attacks, prompt exploitation, and the tools for you to protect your business from being the next in line.
Learn what security teams need to know about AI-driven attacks, prompt exploitation, and the emerging need for AI pentesting.
As organizations rapidly deploy copilots, chatbots, and autonomous AI agents across internal workflows and customer-facing applications, security teams face a new challenge: traditional pentesting does not test how AI behaves under attack.
Attackers are already beginning to exploit LLMs, AI agents, and AI-driven systems, making it critical for organizations to start testing AI behavior itself, not just the underlying infrastructure.
In this session, our experts will explore:
- How an autonomous AI agent exploited McKinsey’s AI platform in under two hours
- Why AI systems introduce new attack surfaces beyond traditional applications
- The growing risks of prompt injection, data leakage, and unsafe AI behavior
- Why most security teams are testing the wrong layer when it comes to AI systems
- How AI pentesting helps identify vulnerabilities in model behavior, prompts, and AI decision logic
- How security teams can begin testing AI systems before attackers do
AI adoption is accelerating across enterprises but security testing has not kept pace - until now.