Intermediate · 3 min read

What Strategies Do You Use to Reduce Hallucinations?

Walk through a layered approach to reducing LLM hallucinations — from prompt-level techniques to retrieval grounding and output validation.


Daily tips, confessions & AI news. Unsubscribe anytime. Questions? [email protected]

Why This Is Asked

Hallucination is one of the fundamental limitations of LLMs and a critical production concern. Interviewers ask this to see if you have a systematic, multi-layered approach — not just "use RAG" or "add 'don't make things up' to the prompt."

Key Concepts to Cover

  • Types of hallucinations — intrinsic (contradicts source), extrinsic/confabulation (unverifiable, invented facts or citations)
  • Retrieval grounding (RAG) — anchoring responses to verified sources
  • Uncertainty elicitation — prompting the model to express when it does not know
  • Output validation — post-processing to catch hallucinated content
  • Model selection — some models hallucinate significantly less than others

How to Approach This

1. Understand the Types of Hallucination

The standard taxonomy distinguishes two types:

Intrinsic hallucination: The output contradicts the provided source material (e.g., the document says a policy was updated in 2023 but the model says 2021).

Extrinsic hallucination (confabulation): The output cannot be verified against any provided source — the model invents facts, citations, quotes, or details with no grounding. Source fabrication (made-up paper titles, statistics) is a common form.

Each type needs a different mitigation strategy.

2. Prompt-Level Mitigations

Encourage uncertainty:

"If you are not certain about a fact, say 'I'm not sure about this' rather than guessing."

Constrain to provided context:

"Answer ONLY based on the information provided below. Do not use your training knowledge."

Ask for sources:

"For each claim, include the exact sentence from the context that supports it."
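These three mitigations compose naturally into a single prompt template. A minimal sketch (the function name and exact wording are illustrative, not a standard):

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Combine the three prompt-level mitigations into one template:
    constrain to context, encourage uncertainty, and ask for quotes."""
    return (
        "Answer ONLY based on the context below. Do not use outside knowledge.\n"
        "If the context does not contain the answer, reply exactly: "
        '"I don\'t know based on the provided context."\n'
        "For each claim, quote the supporting sentence from the context.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    context="The refund policy was updated in March 2023.",
    question="When was the refund policy last updated?",
)
```

Giving the model an explicit "I don't know" escape hatch matters: without a sanctioned fallback, the path of least resistance is to guess.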

3. Architecture-Level: RAG Grounding

The most reliable mitigation for factual hallucination:

  • Retrieve relevant documents before generating
  • Instruct the model to stay within the provided sources
  • Validate citations post-generation
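The retrieve-then-generate flow can be sketched as follows. The keyword retriever is a toy stand-in for real vector search, and the final LLM call is stubbed out; everything here is illustrative:

```python
DOCS = [
    "The Pro plan costs $20/month.",
    "Support is available 9am-5pm ET on weekdays.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by count of shared lowercase words."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def answer(query: str) -> str:
    """Retrieve sources first, then build a prompt constrained to them."""
    sources = retrieve(query, DOCS)
    prompt = (
        "Answer using ONLY these numbered sources:\n"
        + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        + f"\n\nQuestion: {query}"
    )
    return prompt  # in production: return llm(prompt)

print(answer("How much does the Pro plan cost?"))
```

Numbering the sources in the prompt also sets up the third step: the model can be asked to cite source indices, which are then checkable post-generation.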

4. Output Validation

  • Fact-checking pass: Second LLM call to verify key claims
  • Citation validation: Check cited passages actually support the claims
  • Structured output schemas: Catch invented fields or values
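Citation validation can be mechanical when the prompt asked for verbatim quotes. A minimal sketch, assuming quotes are wrapped in double quotes (a real system would use fuzzy matching rather than exact substring checks):

```python
import re

def validate_citations(answer: str, context: str) -> list[str]:
    """Return quoted claims in `answer` that do not appear verbatim
    in `context` — each is a potential hallucinated citation."""
    quotes = re.findall(r'"([^"]+)"', answer)
    return [q for q in quotes if q not in context]

context = "The policy was updated in March 2023."
good = 'It changed recently: "The policy was updated in March 2023."'
bad = 'It changed earlier: "The policy was updated in 2021."'

print(validate_citations(good, context))  # no flagged quotes
print(validate_citations(bad, context))   # flags the fabricated quote
```

Exact-match checking is strict but cheap; it catches the intrinsic-hallucination case from the taxonomy above, where the output contradicts the source it claims to cite.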

5. Model Selection

Not all models hallucinate equally. Evaluate models on your specific task with a benchmark that includes known correct answers.
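A benchmark of this kind can be very simple: a list of questions with known answers, scored by whether the expected answer appears in the response. The model below is a fake stand-in; swap in real API calls to compare candidates:

```python
# Toy hallucination benchmark with known correct answers.
BENCHMARK = [
    ("What year was Python first released?", "1991"),
    ("Who wrote 'On the Origin of Species'?", "Darwin"),
]

def accuracy(model, benchmark) -> float:
    """Fraction of questions where the known answer appears in the output."""
    correct = sum(
        1 for question, expected in benchmark
        if expected.lower() in model(question).lower()
    )
    return correct / len(benchmark)

# Fake model that answers one of the two questions correctly:
fake = lambda q: "Python appeared in 1991." if "Python" in q else "Not sure."
print(accuracy(fake, BENCHMARK))  # 0.5
```

Substring matching is crude (it misses paraphrases and rewards lucky mentions), which is why production evaluations often layer an LLM-as-judge on top; but even a crude benchmark separates models meaningfully on your own task.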

Common Follow-ups

  1. "Can you ever fully eliminate hallucinations?" No — current LLMs are probabilistic. The goal is to reduce frequency, catch occurrences before users see them, and design systems so hallucinations have minimal impact.

  2. "How do you measure hallucination rate?" Benchmark with known facts, LLM-as-judge for factual consistency, user-reported errors in production, citation accuracy metrics.

  3. "What is the difference between a model being 'wrong' and 'hallucinating'?" This is an active definitional debate. A narrow definition limits hallucination to content ungrounded in provided context (confabulation). A broader definition includes factual errors from stale training data. Both matter for production: stale-knowledge errors are fixed by RAG or knowledge updates; confabulation errors require grounding constraints and output validation. Be explicit about which definition you're using when discussing metrics.

