Let’s talk about “hallucinations.” Hallucinations are a common issue in AI systems: cases where a model generates something that is not true. They usually fall into three categories:
Consensus states that only the third type of hallucination is possible with its tool, and that it actively works to minimize it.
Consensus isn’t a chatbot. It’s a search engine that uses AI to summarize real scientific papers. Every time you ask a question, it searches a database of peer-reviewed research. That means it isn’t inventing facts or citations out of thin air; its answers are grounded in real, published papers.
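To make that distinction concrete, here is a minimal sketch of retrieval-grounded answering. The `Paper`, `search_corpus`, and `summarize` names are illustrative placeholders, not Consensus’s actual code; the point is simply that every claim is produced from a retrieved paper and carries its citation.

```python
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    abstract: str
    doi: str


def search_corpus(query: str, corpus: list[Paper], top_k: int = 3) -> list[Paper]:
    """Rank papers by naive keyword overlap with the query (a stand-in for a real search index)."""
    terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda p: len(terms & set(p.abstract.lower().split())),
        reverse=True,
    )[:top_k]


def summarize(abstract: str, query: str) -> str:
    """Placeholder for an LLM summarization call; here it just returns the first sentence."""
    first = abstract.split(". ")[0]
    return first if first.endswith(".") else first + "."


def answer(query: str, corpus: list[Paper]) -> list[dict]:
    """Every claim in the answer comes from a retrieved paper and carries its citation."""
    return [
        {"claim": summarize(p.abstract, query), "source": p.doi}
        for p in search_corpus(query, corpus)
    ]
```

Because the summarizer only ever sees retrieved abstracts, it cannot cite a paper that doesn’t exist; the failure mode that remains is misreading a paper that does.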
Still, no AI system is perfect. A model can sometimes misinterpret a paper and summarize it incorrectly, and this can happen in Consensus. To reduce this risk, Consensus has added safeguards such as “checker models” that verify a paper’s relevance before it is summarized. Consensus is also designed to make it easy for users to dive into the source material themselves.
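A relevance-checking safeguard can be thought of as a gate in front of the summarizer. Continuing the illustrative sketch above, `relevance_check` stands in for a learned checker model and `RELEVANCE_THRESHOLD` is a made-up cutoff; the real Consensus models and settings are not public.

```python
RELEVANCE_THRESHOLD = 0.7  # hypothetical cutoff, not a published Consensus value


def relevance_check(paper: Paper, query: str) -> float:
    """Stand-in for a 'checker model': score how well the paper matches the question.

    Here it is the fraction of query terms found in the abstract; in a real
    system this would be a separate learned model.
    """
    terms = set(query.lower().split())
    if not terms:
        return 0.0
    found = terms & set(paper.abstract.lower().split())
    return len(found) / len(terms)


def checked_answer(query: str, corpus: list[Paper]) -> list[dict]:
    """Summarize only papers that pass the relevance gate; skip the rest."""
    results = []
    for paper in search_corpus(query, corpus):
        if relevance_check(paper, query) < RELEVANCE_THRESHOLD:
            continue  # the checker is not confident this paper answers the question
        results.append({"claim": summarize(paper.abstract, query), "source": paper.doi})
    return results
```

The design choice is to drop a borderline paper rather than risk summarizing something off-topic, and to keep the citation attached so a user can always open the source and judge for themselves.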