When the Lines Around User Input Are Blurred: Indirect Prompt Injection Vulnerabilities in LLMs
It was a cold and wet Thursday morning, sometime in early 2006. There I was, sitting in the very top back row of an awe-inspiring lecture theatre in Royal Holloway's Founder's Building in Egham, Surrey (UK), studying for my MSc in Information Security. The lecture in progress was part of the software security module, and the first rule of software security it taught was simple: never trust user input.
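That rule is easy to demonstrate in classic software terms. Below is a minimal, self-contained Python sketch (the table, columns, and sample data are hypothetical, chosen purely for illustration) contrasting a query that splices untrusted input directly into SQL with a parameterised one that treats the input strictly as data:

```python
import sqlite3

# Illustrative schema and data; everything here is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

user_input = "' OR '1'='1"  # attacker-controlled value

# BAD: trusting user input by splicing it into the query string.
# The injected clause makes the WHERE condition always true,
# so every row in the table leaks.
unsafe = conn.execute(
    f"SELECT username FROM users WHERE username = '{user_input}'"
).fetchall()
print("unsafe:", unsafe)  # both rows returned

# GOOD: a parameterised query keeps the input as data, not code.
safe = conn.execute(
    "SELECT username FROM users WHERE username = ?", (user_input,)
).fetchall()
print("safe:", safe)  # no rows returned
```

The fix works because the parameterised form never lets attacker-supplied text cross the boundary into executable query syntax, which is exactly the boundary that, as we will see, becomes blurred in LLM applications.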