This is a special post for quick takes by Yair Halberstadt. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


3 comments

It seems LLMs are less likely to hallucinate answers if you end each question with 'If you don't know, say "I don't know"'.

They still hallucinate a bit, but less. Given how easy it is, I'm surprised OpenAI and Microsoft don't already do that.
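For concreteness, here's a minimal sketch of what that looks like with the OpenAI Python client (the model name, the example question, and the client setup are my own assumptions for illustration, not anything OpenAI recommends):

```python
# Minimal sketch: append a "say I don't know" instruction to every question.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    # Tack the hedging instruction onto the end of the user's question.
    prompt = question + ' If you don\'t know, say "I don\'t know".'
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model should work
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Who won the 1997 Fields Medal?"))
```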

It has its own failure modes, though. What does it even mean not to know something? "I don't know" is just another category of possible answer.

Still a nice prompt. Also works on humans.

Fun fact I just discovered - Asian elephants are actually more closely related to woolly mammoths than they are to African elephants!