As AI agents become increasingly prevalent across business environments, their security is a pressing concern. Among the insidious threats facing AI agents is tool poisoning, a type of attack that exploits the way AI agents interpret and use tool descriptions to guide their reasoning. In this blog, we explain how AI tool poisoning works, the different forms it can take, and how organizations can strengthen their defenses against this type of attack.