
Consensus

An overview of the Consensus.app tool

How Consensus Deals with Hallucinations

Let’s talk about “hallucinations.” Hallucinations are a common issue in AI systems where a model confidently generates content that isn’t true. They usually fall into three categories:

  1. Fake sources – the AI cites a paper or article that doesn’t exist.
  2. Wrong facts – the AI answers confidently from its internal memory, with no source, and the answer is simply incorrect.
  3. Misread sources – the AI summarizes a real paper or source, cites it, but gets it wrong.

Consensus states that only the third type of hallucination is possible with its tool, and that it takes steps to minimize it.

Consensus isn’t a chatbot. It’s a search engine that uses AI to summarize real scientific papers. Every time you ask a question, it searches a database of peer-reviewed research (a simplified sketch of that flow follows the list below). That means:

  • Every paper Consensus cites is guaranteed to be real
  • Every summary is based on actual research, not a model’s guess or internal memory
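
To make this concrete, here is a minimal sketch of a retrieval-then-summarize flow in the spirit of the description above. The in-memory index, the keyword matcher, and the placeholder summarizer are all illustrative assumptions for this example, not Consensus’s actual implementation; the point is simply that a paper must already exist in the index before it can ever be cited or summarized.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    doi: str       # identifier of a paper that already exists in the index
    abstract: str

# Hypothetical in-memory stand-in for a database of peer-reviewed research.
PAPER_INDEX = [
    Paper("Creatine supplementation and cognition", "10.1000/example.1",
          "A randomized trial examining creatine and working memory."),
    Paper("Sleep duration and learning outcomes", "10.1000/example.2",
          "An observational study linking sleep duration to exam scores."),
]

def search_papers(question: str, index: list) -> list:
    """Keyword search: only papers already in the index can be returned."""
    terms = {t.strip("?.,").lower() for t in question.split()}
    hits = []
    for paper in index:
        words = {w.strip("?.,").lower()
                 for w in (paper.title + " " + paper.abstract).split()}
        if terms & words:
            hits.append(paper)
    return hits

def summarize(question: str, papers: list) -> str:
    """Placeholder summarizer: every line it produces points at a retrieved paper."""
    if not papers:
        return "No relevant papers found."
    cited = "\n".join(f"- {p.title} (doi:{p.doi})" for p in papers)
    return f"Papers relevant to '{question}':\n{cited}"

if __name__ == "__main__":
    question = "Does creatine improve working memory?"
    print(summarize(question, search_papers(question, PAPER_INDEX)))
```

Because `summarize` only ever sees papers returned by `search_papers`, an invented citation has no way to enter the output; the remaining failure mode is summarizing a real paper badly, which is the third category above.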
     

No AI System is Perfect!

Still, no AI system is perfect. Sometimes a model can misinterpret a paper and summarize it incorrectly, and this can happen in Consensus. To reduce this risk, Consensus has added safeguards such as “checker models” that verify a paper’s relevance before it is summarized. Additionally, Consensus is designed to make it easy for users to dive into the source material themselves.
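
As a rough illustration of what a relevance safeguard can look like, here is a minimal sketch in which a toy scoring function stands in for a checker model. The scoring rule and the threshold are assumptions made for this example, not a description of how Consensus’s checker models actually work.

```python
def relevance_score(question: str, abstract: str) -> float:
    """Toy stand-in for a checker model: fraction of question terms found in the abstract."""
    q_terms = {t.strip("?.,").lower() for t in question.split()}
    a_terms = {t.strip("?.,").lower() for t in abstract.split()}
    return len(q_terms & a_terms) / max(len(q_terms), 1)

def check_then_summarize(question: str, abstract: str, threshold: float = 0.3) -> str:
    """Only hand a paper to the summarizer if the checker judges it relevant."""
    if relevance_score(question, abstract) < threshold:
        return "Skipped: the checker judged this paper not relevant to the question."
    # In a real pipeline the summary would be grounded in the paper's full text.
    return f"Summarize this abstract for the user: {abstract}"

print(check_then_summarize(
    "Does sleep duration affect learning?",
    "An observational study linking sleep duration to exam scores."))
print(check_then_summarize(
    "Does sleep duration affect learning?",
    "A survey of corrosion resistance in marine steel alloys."))
```

The design idea is the same either way: a separate check decides whether a paper is actually relevant before any summary of it is shown to the user, and the original paper remains one click away for readers who want to verify it themselves.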