Preventing Data Poisoning in Training Pipelines Without Killing Innovation
Data poisoning occurs when attackers intentionally compromise the integrity of a dataset used to train machine learning models. By corrupting the training data, they can manipulate the model's behavior: steering it toward incorrect predictions, degrading its effectiveness, introducing security vulnerabilities, and fundamentally distorting its decision-making.
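To make the effect concrete, here is a minimal sketch of one classic poisoning technique, label flipping, where a fraction of training labels is silently inverted. The dataset (a synthetic one from scikit-learn), the logistic regression model, and the 20% flip rate are illustrative assumptions, not a real attack or a specific pipeline from this article; the point is simply to show how corrupted labels degrade a model trained on them.

```python
# Illustrative sketch only: compare a model trained on clean labels
# with one trained on a partially label-flipped copy of the same data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set (assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy: flip the labels of 20% of the training rows (illustrative rate).
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running this typically shows a measurable drop in test accuracy for the poisoned model, which is the kind of silent degradation the defenses below are meant to catch.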