data poisoning
🤖 CT-AI
Official ISTQB Definition
An adversarial attack where malicious data is injected into a training dataset to corrupt the resulting ML model.
3 Ways to Think About It
The Quick Take
Attackers inject bad data into training sets to corrupt an AI model's behavior.
Look Closer
A security attack where malicious training examples cause an AI to learn wrong or dangerous patterns.
The Bottom Line
Sabotaging AI by contaminating its learning data - a critical security concern for ML systems.
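To make the contamination concrete, here is a minimal sketch (not from the source) of a label-flipping poisoning attack against a toy nearest-centroid classifier. All names and data here are illustrative: the attacker injects a few examples with class-1-like features but class-0 labels, dragging the learned class-0 centroid toward class 1 and flipping predictions.

```python
def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    """Nearest-centroid 'model': per-class mean of the 1-D feature."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # Assign x to the class whose centroid is closest
    return min(model, key=lambda y: abs(x - model[y]))

# Clean training set: class 0 clusters near 1.0, class 1 near 10.0
clean = [(0.5, 0), (1.0, 0), (1.5, 0), (9.0, 1), (10.0, 1), (11.0, 1)]

# Poison: attacker-injected points with class-1-like features, mislabeled as 0
poison = [(9.4, 0), (9.5, 0), (9.6, 0)]

clean_model = train(clean)            # centroids {0: 1.0, 1: 10.0}
poisoned_model = train(clean + poison)  # class-0 centroid shifts to 5.25

print(predict(clean_model, 7.0))     # 1
print(predict(poisoned_model, 7.0))  # 0, the poison flipped the decision
```

Just three malicious examples move the class-0 centroid from 1.0 to 5.25, so an input at 7.0 is now misclassified. Real attacks target far larger models, but the mechanism (corrupting learned parameters via training data) is the same.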