LLM red teaming is a proactive security practice that involves systematically testing large language models (LLMs) with adversarial inputs to find vulnerabilities before deployment. By using manual or automated methods to probe for weaknesses, red teamers can identify issues like harmful content generation, bias, or security exploits, which are then addressed through a continuous “break-fix” loop to improve the model’s safety and reliability.