Kroll: Webinar - AI Security Testing: Prompt Injection Everywhere
Kroll offers a glimpse into the security vulnerabilities faced by businesses adopting Artificial Intelligence (AI), Machine Learning (ML) and Large Language Model (LLM) following eight months of LLM penetration testing.
Kroll’s LLM penetration testing has seen it analyze data sets of OpenAI models, non-public models and RAG systems. It has used this to produce an anonymized dataset that catalogs vulnerabilities from all LLM engagements.
Kroll has found a worrying prevalence of prompt injection attacks in the LLM cases it has investigated and it plans to share its findings.
Key Takeaways- Introduction: What is a prompt injection security attack?
- Research Findings: 92% of assessments with LLM findings had prompt injection, 38% of assessments with LLM findings had multiple prompt injection vulnerabilities
- Case Studies: Tales from the trenches of prompt injection attacks
- Impact: Why is prompt injection so prevalent?
- Mitigation: Ways to mitigate the risk of prompt injection attacks