(Summary) Sequence Highlights - Thinking Better on Purpose
Rationality is a very useful skill to learn, but while there is good curated reading material, it's not always easy reading. More than once I've wanted to introduce someone to the topics, but couldn't expect them to dig through a pile of essays. This is an attempt to trim some of the most important texts down into something concise and approachable. All of the original texts are well worth reading, or I wouldn't be summarizing them. I make no claim to do them justice, but I can try to optimize them for different readers.

Akash has written a much shorter summary of all highlights. Images generated by Midjourney, prompted by the post title.

The Lens That Sees Its Flaws
Full Text by Eliezer Yudkowsky

When you look at an optical illusion, you're aware that what you're seeing doesn't match reality. As a human, you have the exceptional ability to understand that your mental model of the world is not the same as the actual world around you. You are seeing a warped image through a flawed lens. Because you know this, you can manually correct yourself ("no, it's not actually moving") and hold a more accurate model than you would on autopilot.

Our brains are riddled with systematic errors: mistakes people make so often you could bet money on it. But brains are not magical. The systems making those errors can be understood, anticipated, and corrected for. The human brain is a flawed lens that can see its own flaws. By learning, noticing, and correcting for distortions, the lens can become far more powerful.

What Do We Mean By "Rationality"?
Full Text by Eliezer Yudkowsky

Rationality is about being right and succeeding. You think you have milk in the fridge when you don't, and when you come home milkless from shopping you're disappointed. You had a false belief. Your mental map of the world didn't match reality, and so you're steered into a
I'd be very interested in talking to the anonymous friend, or anyone else working on this. I have two relevant projects.
Most directly, I wrote a harness for LLMs to play text adventures, and I've spent some time optimizing the wrapper and testing on Anchorhead. As you'd expect, it has the same issues, but cheaper and without the vision problems.
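For anyone curious what such a harness roughly looks like: here's a minimal, hypothetical sketch (not my actual code, and every name in it is illustrative). The game and the model are both treated as plain callables, and a truncated rolling transcript stands in for the model's memory, which is where the memory limitations mentioned above bite.

```python
def play(game_step, model, max_turns=10, memory_limit=20):
    """Drive a text adventure with an LLM-like policy.

    game_step: fn(command or None) -> (observation, done)
    model:     fn(recent transcript) -> next command string
    """
    transcript = []
    observation, done = game_step(None)  # None asks for the opening description
    transcript.append(("game", observation))
    for _ in range(max_turns):
        if done:
            break
        # Only the most recent entries fit in the model's context window.
        command = model(transcript[-memory_limit:])
        transcript.append(("player", command))
        observation, done = game_step(command)
        transcript.append(("game", observation))
    return transcript


# Stub game and stub "model" so the loop runs without any API calls.
def make_toy_game():
    def step(cmd):
        if cmd is None:
            return "You see a closed door.", False
        if cmd == "open door":
            return "The door opens. You win!", True
        return "Nothing happens.", False
    return step


def toy_model(transcript):
    return "open door"  # a real harness would call an LLM here


log = play(make_toy_game(), toy_model)
```

In a real wrapper, `toy_model` would format the transcript into a prompt and call whatever model API you're using; the interesting engineering is almost entirely in what goes into that truncated context.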
I've also worked on LLM social-deduction gameplay, which is nuanced and challenging in different ways, but shares the need for strong memory and robust reasoning in the face of hallucination.
I'd be happy to talk about any of these issues and compare leads!