Why You Can't Defend Against Prompt Injection
Prompt injection works because language models struggle to tell the difference between trusted instructions and untrusted user content. Unlike SQL injection or cross site scripting, there is no clean deterministic defence, which leaves code, libraries and AI workflows open to manipulation at multiple points.