How Lisa Prevents LLM Hallucinations

September 3, 2024 · Artificial Intelligence, Lisa

Hallucinations (more accurately called confabulations) in the context of large language models (LLMs) occur when these models generate information that isn’t factually accurate. Lisa can mitigate them by starting from the insight of why they happen, namely:

LLM confabulations happen because these systems don’t have a proper understanding of the world but generate text based on patterns learned from vast amounts of data.

Some general steps can collectively make LLMs less prone to confabulation:

  • better training data
  • enhanced prompting techniques
  • post-processing and fact-checking
  • human-in-the-loop systems
  • transparency in model limitations
  • improving how models gauge their own uncertainty.

These are well-documented steps that should generally be followed. However, even with all this in place, some level of confabulation remains a risk.

So, let’s return to the reason why LLMs confabulate and use it as the starting point.

The following strategies aim to minimize confabulations by recognizing that LLMs generate responses based on learned patterns rather than proper comprehension.

The key is to constrain, guide, and verify the output to ensure it remains aligned with factual information.

  • Contextual constraints: By giving an LLM more specific context, one can limit the scope within which it generates text.
  • Incremental prompting: Instead of asking broad questions, break down queries into smaller, more focused questions.
  • Prompt engineering for verifiability: Design prompts that explicitly ask the LLM to reference specific sources or indicate uncertainty when the information is unclear.
  • Reinforcement with corrective feedback: Implementing feedback loops where the model is trained to recognize and correct its own mistakes can gradually reduce confabulations.
  • Layered approach with knowledge bases: Integrate LLMs with structured knowledge bases or rule-based systems that can cross-check generated content.
  • Model calibration: Develop techniques to calibrate the model’s confidence levels better.
  • Transparency with users: Educate users on the strengths and limitations of LLMs.

Crucially, Lisa can take the initiative to use these strategies.

In a dialogue, Lisa can bring these strategies up herself and ask the user for help. The result is cooperation.

Lisa’s ability to initiate cooperation with users is a significant asset. She can encourage users to engage in a dialogue where they work together to ensure accuracy. For example, Lisa might ask, “Would you like me to cross-check this information with another source?” or “Do you think we should break this question down further?”

Moreover, by keeping track of such cooperative interactions, Lisa can improve future dialogues, creating a more robust and reliable experience over time.

Here is how Lisa can go about each of these:

Contextual constraints
Lisa can begin by narrowing the focus of the conversation, ensuring that she operates within a defined scope.

For example, when a user asks a broad question, Lisa might clarify by asking for more details or by specifying that her answer will be based on a particular source or document.
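To make this concrete, here is a minimal Python sketch of the idea. The `llm_complete` function is a hypothetical stand-in for whichever model backend Lisa actually uses; the point is how a scoped system prompt keeps the answer within a defined subject and source.

```python
def llm_complete(system: str, user: str) -> str:
    """Hypothetical stand-in for whichever model backend Lisa actually uses."""
    return "[model output]"


def constrained_answer(question: str, scope: str, source: str) -> str:
    # The system prompt pins the answer to a named scope and source,
    # which discourages the model from drawing on unrelated patterns.
    system = (
        f"You answer questions strictly about {scope}. "
        f"Base your answer only on this source: {source}. "
        "If the question falls outside this scope, say so instead of guessing."
    )
    return llm_complete(system=system, user=question)


# Example: a broad user question is narrowed to one scope and one document.
print(constrained_answer(
    question="How does the new policy affect me?",
    scope="the 2024 leave policy",
    source="the HR document the user uploaded",
))
```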

Incremental prompting
Lisa can guide users to break down their questions into smaller, more manageable parts. By doing so, she can address each query with greater precision and reduce the chance of veering into speculative or incorrect territory.

For instance, instead of answering a complex question in one go, Lisa might suggest tackling it step by step, verifying each part along the way.
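As a rough illustration (again with a hypothetical `llm_complete` stand-in), an incremental-prompting loop could look like this: split the question, answer each part narrowly, then combine only those partial answers.

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for whichever model backend Lisa actually uses."""
    return "[model output]"


def answer_incrementally(complex_question: str) -> str:
    # Step 1: have the model split the question into focused sub-questions.
    plan = llm_complete(
        "Split the following question into short, self-contained sub-questions, "
        f"one per line:\n{complex_question}"
    )
    sub_questions = [line.strip() for line in plan.splitlines() if line.strip()]

    # Step 2: answer each sub-question separately, keeping each answer narrow.
    partial_answers = [
        llm_complete(f"Answer concisely, and only this question: {q}")
        for q in sub_questions
    ]

    # Step 3: combine only the partial answers into a final answer.
    combined = "\n".join(f"- {q}: {a}" for q, a in zip(sub_questions, partial_answers))
    return llm_complete(
        f"Using only these partial answers, answer '{complex_question}':\n{combined}"
    )
```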

Prompt engineering for verifiability
Lisa can ask herself to check her sources automatically. By integrating prompts that require citation or sourcing, Lisa ensures that the information she provides is grounded in verifiable data ― improving the accuracy of the output and building user trust.

For example, Lisa might respond to a question by saying, “According to [specific source], the information is as follows…”
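One possible way to build this into the prompt itself, sketched with the same hypothetical `llm_complete` stand-in:

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for whichever model backend Lisa actually uses."""
    return "[model output]"


VERIFIABILITY_TEMPLATE = (
    "Answer the question below. For every factual claim, name the source it "
    "comes from, in the form 'According to <source>, ...'. If you cannot name "
    "a source, say explicitly that you are uncertain instead of guessing.\n\n"
    "Question: {question}"
)


def verifiable_answer(question: str) -> str:
    # The template forces either a named source or an explicit admission of
    # uncertainty, rather than a fluent but unsupported statement.
    return llm_complete(VERIFIABILITY_TEMPLATE.format(question=question))
```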

Reinforcement with corrective feedback
When Lisa identifies a potential error or receives feedback from the user indicating that an output might be inaccurate, she can use this information to improve future responses.

This involves using techniques like fine-tuning on a dataset where incorrect outputs have been labeled and corrected.
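One way such feedback could be collected is sketched below. The JSONL layout is only an illustrative convention, not the format of any particular fine-tuning pipeline; the real format depends on how Lisa’s training loop is set up.

```python
import json


def record_correction(path: str, question: str,
                      wrong_answer: str, corrected_answer: str) -> None:
    """Append a user correction to a dataset for later fine-tuning or review.

    The JSONL layout below is only an illustrative convention; the real format
    depends on the fine-tuning pipeline Lisa is connected to.
    """
    example = {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": corrected_answer},
        ],
        # Keeping the original mistake supports later analysis of failure modes.
        "rejected": wrong_answer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```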

Layered approach with knowledge bases
Lisa can cross-reference her responses with an internal knowledge base. If a discrepancy is detected between her generated content and the verified data in the knowledge base, she can either correct the information or flag it for further review.

This internally layered approach adds an additional safeguard against the propagation of incorrect information.
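A bare-bones sketch of such a cross-check, with the knowledge base represented as a simple Python dictionary (a stand-in for whatever structured store Lisa would actually use):

```python
def cross_check(claims: dict[str, str], knowledge_base: dict[str, str]) -> list[str]:
    """Compare generated claims against a verified knowledge base.

    Returns flags for claims that contradict the knowledge base or that the
    knowledge base cannot confirm, so they can be corrected or reviewed.
    """
    flags = []
    for topic, claimed in claims.items():
        verified = knowledge_base.get(topic)
        if verified is None:
            flags.append(f"'{topic}': not covered by the knowledge base; needs review.")
        elif verified != claimed:
            flags.append(f"'{topic}': generated '{claimed}' but verified value is '{verified}'.")
    return flags


# Toy usage: both the knowledge base and the extracted claims are placeholders.
kb = {"boiling point of water at sea level (°C)": "100"}
claims = {"boiling point of water at sea level (°C)": "90"}
for flag in cross_check(claims, kb):
    print(flag)
```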

Model calibration
To better gauge her own confidence levels, Lisa can be programmed to recognize when she might be operating outside her certainty zone. In such cases, she can express caution, suggest a possible answer with disclaimers, or even defer to external verification.

For example, Lisa might say, “I believe the answer is X, but I recommend verifying this information with a reliable source.”
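Here is an illustrative sketch, assuming a hypothetical `llm_complete` stand-in and self-reported confidence scores. Real calibration would also draw on signals such as token probabilities or agreement across repeated samples; the threshold of 0.75 is arbitrary.

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for whichever model backend Lisa actually uses."""
    return "[model output] | 0.55"


def calibrated_answer(question: str, threshold: float = 0.75) -> str:
    # Ask for an answer plus a self-assessed confidence score.
    raw = llm_complete(
        "Answer the question, then append ' | ' followed by your confidence "
        f"as a number between 0 and 1.\nQuestion: {question}"
    )
    answer, _, confidence = raw.rpartition(" | ")
    answer = answer or raw  # fall back if the model ignored the format
    try:
        conf = float(confidence)
    except ValueError:
        conf = 0.0  # treat an unparsable confidence as low confidence
    if conf < threshold:
        return (f"I believe the answer is: {answer}. However, I recommend "
                "verifying this information with a reliable source.")
    return answer
```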

Transparency with users
Lisa can actively inform users about the inherent limitations of LLMs.

This transparency empowers users to discern when to trust Lisa’s responses fully and when to seek additional verification.
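A very small sketch of how such a notice could be attached to a conversation; the wording and the once-per-conversation policy are illustrative choices, not Lisa’s actual ones.

```python
LIMITATIONS_NOTE = (
    "Note: I generate text from learned patterns, not from a true understanding "
    "of the world. I can be confidently wrong, so please double-check anything "
    "critical against a reliable source."
)


def with_transparency(answer: str, first_turn: bool) -> str:
    # Show the limitations note once per conversation, so users can judge for
    # themselves when extra verification is warranted.
    return f"{LIMITATIONS_NOTE}\n\n{answer}" if first_turn else answer
```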

Final thoughts

By integrating these strategies, Lisa doesn’t just reduce the risk of confabulation — she transforms how users interact with AI. With every dialogue, Lisa builds trust, ensuring that the information she provides is accurate, reliable, and grounded in a deeper understanding. Through careful calibration, continuous learning, and transparent communication, Lisa evolves into a partner that users can rely on, not just for answers but for meaningful, cooperative conversations.

This commitment to excellence sets Lisa apart, paving the way for a future where AI and humans work hand in hand, creating a smarter, more trustworthy world of information.

If you’re interested in how Lisa does this more concretely, please read this PDF.
