LLM guardrails: Best practices for deploying LLM apps securely
Prompt guardrails are a common first line of defense against client-level LLM application attacks, such as prompt injection and context poisoning. They’re also a critical component of a full defense-in-depth strategy for LLM security at the infrastructure, supply chain, and application level. The specific guardrails that teams implement depend highly on use case, but they are typically designed to.