hallucination
🧠 CT-GenAI
Official ISTQB Definition
The production of output by a generative AI model that appears plausible but is factually incorrect, nonsensical, or not grounded in the training data or context.
3 Ways to Think About It
The Quick Take
An LLM confidently making up false information: a critical defect type in AI testing.
Look Closer
AI generating plausible-sounding but factually wrong content: one of the top reliability concerns for LLMs.
The Bottom Line
Fabricated facts that look real: testers must verify that AI outputs do not contain hallucinated information.
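For a concrete sense of what such verification can look like, below is a minimal sketch of one naive heuristic: flagging output sentences whose content words barely overlap the source context, as can happen in a retrieval-augmented setup where the grounding text is available. The `flag_ungrounded_sentences` helper, its word-overlap measure, and the 0.5 threshold are illustrative assumptions, not an ISTQB-prescribed method; real hallucination testing typically adds fact-checking against references, LLM-as-judge scoring, or human review.

```python
import re

# Small stopword list so overlap is computed on content words only.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in",
             "on", "to", "and", "or", "that", "this", "it", "as", "by",
             "with", "also"}

def content_words(text: str) -> set[str]:
    """Lower-case alphabetic tokens minus common stopwords."""
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS}

def flag_ungrounded_sentences(context: str, output: str,
                              threshold: float = 0.5) -> list[str]:
    """Return output sentences whose content-word overlap with the
    source context falls below `threshold` -- candidates for
    hallucination review, not proof of one."""
    context_vocab = content_words(context)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & context_vocab) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    context = "The API supports GET and POST requests and returns JSON."
    output = ("The API supports GET and POST requests. "
              "It also offers a built-in GraphQL endpoint with OAuth2.")
    for s in flag_ungrounded_sentences(context, output):
        print("Possible hallucination:", s)
```

Run against the example, the second sentence is flagged because none of its content words appear in the context; the first passes. A lexical check like this only screens for ungrounded content, so flagged sentences still need a factual check against the source.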