Improving Truth-Seeking in LLMs: A Verified Internal Knowledge Base with a Ratchet Mechanism
Large language models today have a basic problem: they tend to treat information as equally valid if it appears often enough in their training data. They do not reliably distinguish high-quality evidence from low-quality or biased sources, which leads to several recurring issues:

* They confidently state things...