Webinar Replay - AI Security Testing: Prompt Injection Everywhere
Kroll’s LLM penetration testing has seen it analyze data sets of OpenAI models, non-public models and RAG systems. It has used this to produce an anonymized dataset that catalogs vulnerabilities from all LLM engagements.
Kroll has found a worrying prevalence of prompt injection attacks in the LLM cases it has investigated and shares its findings in this briefing.
Key Sections From the Webinar:
0:00 – Intro
3:48 – Kroll LLM Security Assessment and Results
11:20 – Anatomy of an LLM Prompt
19:12 – Prompt Injection Stories
21:46 – Prompt Injection Strategies
24:09 – Prompt Injection Techniques
25:59 – Why is Prompt Injection So Prevalent?
29:05 – What Can Organizations Do?
24:09 – Self Hosted Inference Risk
34:15 – What is Advanced Prompt Injection?
38:38 – Q&A session
Additional Kroll Resources:
Artificial Intelligence (AI) Insights: https://www.kroll.com/en/insights/publications/artificial-intelligence
Kroll Threat Intel Reports: https://www.kroll.com/en/insights/publications/cyber/threat-intelligence-reports
The State of Cyber Defense: Manufacturing Cyber Resilience:
https://www.kroll.com/en/insights/publications/cyber/state-cyber-defense-manufacturing
The State of Cyber Defense: Diagnosing Cyber Threats in Healthcare:
https://www.kroll.com/en/insights/publications/cyber/state-cyber-defense-healthcare
Get the latest from the Kroll Cyber Risk blog:
https://www.kroll.com/en/insights/publications/cyber