explainability
🤖 CT-AI
Official ISTQB Definition
The extent to which the internal mechanics of an AI system can be explained in human terms.
3 Ways to Think About It
The Quick Take
Being able to understand WHY an AI made a decision - critical for testing fairness and debugging failures.
Look Closer
The ability to trace an AI's reasoning, essential for regulated industries and building trust.
The Bottom Line
Making the 'black box' transparent so testers can verify the AI is using appropriate logic.
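The idea above can be made concrete with a toy example. The following is a minimal sketch, assuming a hypothetical linear credit-scoring model: because the model is linear, each feature's contribution (weight × value) directly explains WHY a decision was made. The feature names, weights, and applicant values are illustrative assumptions, not from any real system.

```python
# Hedged sketch: explaining one decision of a hypothetical linear scoring model.
# All names, weights, and values below are illustrative assumptions.

def explain_decision(weights, features, threshold=0.0):
    """Return per-feature contributions, the total score, and the decision."""
    # Each contribution = weight * feature value; for a linear model these
    # contributions are an exact, human-readable explanation of the output.
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    return contributions, score, decision

weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.3}
applicant = {"income": 1.0, "debt": 1.5, "years_employed": 2.0}

contributions, score, decision = explain_decision(weights, applicant)
print(decision)
# List the largest drivers first - this is the 'why' behind the decision,
# which a tester can check for inappropriate logic (e.g. a protected attribute).
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {c:+.2f}")
```

Real AI systems are rarely linear, so practitioners approximate this kind of per-feature attribution with techniques such as LIME or SHAP; the testing goal is the same: verify that the reasons behind a decision are appropriate.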