Monday, June 17, 2024

7. Hallucination?

What is Hallucination in LLMs?

Hallucination refers to the phenomenon where the model generates text that sounds plausible but is factually incorrect, irrelevant, or nonsensical.

In the realm of Generative AI, hallucination refers to instances where a model generates text that deviates from factual accuracy or coherence. Despite sounding fluent and convincing, the output may be entirely fabricated or contain incorrect information. This is a significant challenge as it undermines the reliability of AI-generated content.

Causes of Hallucination in LLMs

  1. Training Data Limitations: The model's output is influenced by the quality and scope of its training data. If the data contains errors or biases, the model might reproduce these inaccuracies.
  2. Model Architecture: The design of LLMs like GPT-4 prioritizes fluency and coherence, which can sometimes lead to confident but incorrect statements.
  3. Inference Process: During inference, the model might generate text based on patterns it has learned, without verifying the factual accuracy of the output.

Detecting Hallucination

Identifying hallucinations involves comparing the AI's output against reliable sources or established facts. Here are some methods to detect hallucination:

  1. Human Evaluation: Subject matter experts or trained annotators review the model’s output for factual accuracy and coherence.
  2. Fact-Checking Tools: Automated tools can cross-reference the AI's output with trusted databases and sources to verify accuracy.
  3. Consistency Checks: Verifying that the model’s output is consistent with previously known facts or within the given context.
  4. Knowledge-Grounded Generation: Using external knowledge bases to ground the model's responses and verify factual accuracy (a minimal cross-referencing sketch follows this list).
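To make the cross-referencing idea concrete, here is a minimal Python sketch. The TRUSTED_FACTS table, the hand-extracted claim tuples, and the check_claim helper are illustrative assumptions rather than any particular library's API; a real system would plug in automatic claim extraction and a proper knowledge base or fact-checking service.

# A minimal sketch of knowledge-grounded hallucination detection.
# The knowledge base and claims below are illustrative placeholders.

TRUSTED_FACTS = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Eiffel Tower", "completed_in"): "1889",
}

def check_claim(subject: str, relation: str, value: str) -> str:
    """Cross-reference one (subject, relation, value) claim against the trusted facts."""
    expected = TRUSTED_FACTS.get((subject, relation))
    if expected is None:
        return "unverifiable"   # no grounding data available for this claim
    return "supported" if expected == value else "contradicted"

# Claims extracted (by hand, for illustration) from a model's output.
claims = [
    ("Eiffel Tower", "located_in", "Paris"),     # correct
    ("Eiffel Tower", "completed_in", "1925"),    # hallucinated date
]

for subject, relation, value in claims:
    print(subject, relation, value, "->", check_claim(subject, relation, value))

Claims flagged as "contradicted" or "unverifiable" are candidates for human review or regeneration.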

Metrics for Measuring Hallucination

Several metrics and approaches are used to measure hallucination in LLMs:

  1. Precision and Recall:
    • Precision: The proportion of correctly generated facts out of all generated facts.
    • Recall: The proportion of correctly generated facts out of all relevant facts.
  2. BLEU (Bilingual Evaluation Understudy): While primarily used for machine translation, BLEU can be adapted to measure the overlap between the model's output and reference text.
  3. ROUGE (Recall-Oriented Understudy for Gisting Evaluation): Measures the overlap of n-grams between the model's output and reference summaries, useful for assessing factual consistency.
  4. F1 Score: The harmonic mean of precision and recall, balancing the two metrics (a worked example follows this list).
  5. Fact-Based Evaluation Metrics:
    • FEQA: A question-answering-based metric that generates questions from the model's output and answers them using the source text; mismatched answers signal unfaithful content.
    • QAGS: Generates questions from the model's output, answers them using both the output and the source text, and scores factual consistency by comparing the two sets of answers.
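As a worked example of the precision, recall, and F1 metrics above, the short Python sketch below scores a set of generated facts against a reference fact set. The fact strings and the fact_metrics helper are illustrative placeholders; real evaluations extract and normalise facts automatically before comparing them.

# A minimal sketch of fact-level precision, recall, and F1.
# Facts are plain strings here for simplicity.

def fact_metrics(generated: set[str], reference: set[str]) -> dict[str, float]:
    """Score generated facts against a reference fact set."""
    correct = generated & reference   # generated facts that are actually true
    precision = len(correct) / len(generated) if generated else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

generated_facts = {
    "The Eiffel Tower is in Paris",        # correct
    "The Eiffel Tower was built in 1925",  # hallucinated
}
reference_facts = {
    "The Eiffel Tower is in Paris",
    "The Eiffel Tower was built in 1889",
}

print(fact_metrics(generated_facts, reference_facts))
# With one of the two generated facts supported, precision, recall, and F1 are all 0.5.

BLEU and ROUGE can be computed the same way over n-gram overlap between the output and a reference text, trading fact extraction for a simpler surface-level comparison.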

Challenges in Addressing Hallucination

  • Context Sensitivity: Ensuring the model correctly understands and interprets context.
  • Source Reliability: Identifying and prioritizing reliable sources for fact-checking.
  • Dynamic Knowledge: Keeping the model’s knowledge updated with current information.
