In 2006, Eliezer Yudkowsky, Robin Hanson, and others began writing on Overcoming Bias, a group blog with the general theme of how to move one’s beliefs closer to reality despite biases such as overconfidence and wishful thinking. In 2009, after the topics drifted more widely, Eliezer moved to a new community blog, LessWrong.

LessWrong was seeded with a series of daily blog posts written by Eliezer, originally known as The Sequences, and more recently compiled into an edited volume, Rationality: A-Z. These writings attracted a large community of readers and writers interested in the art of human rationality.

In 2015-2016, the site underwent a steady decline in activity, leading some to declare it dead. In 2017, a team led by Oliver Habryka took over the administration and development of the site, relaunching it on an entirely new codebase later that year.

The new project, dubbed LessWrong 2.0, marked the first time LessWrong had a full-time, dedicated development team behind it rather than relying solely on volunteer hours. Site activity recovered from the 2015-2016 decline and has remained steady since the launch.

The team behind LessWrong 2.0 has ambitions beyond maintaining the original community blog and forum: it conceives of itself more broadly as an organization building the community, culture, and technology that will drive intellectual progress on the world's most pressing problems.

2 comments

Is the origin of the name LessWrong related to Karl Popper's ideas? Popper showed that you can't prove a scientific theory, you can only disprove it. So current scientific "truth" is the Less Wrong explanation that today best fits the observable facts, not necessarily the Right explanation.

Ruby:

I don't think so. My guess is Eliezer came up with the name, or at least approved it, and he's a staunch Bayesian. In Bayesianism, you have hypotheses ("explanations"), and the evidence you collect causes you to consider some hypotheses more or less likely than others. Famously, 0 And 1 Are Not Probabilities (though this is disputed), so you can never be completely certain, but the principle is symmetrical: you are never completely certain something is correct or incorrect. Overall, I recommend Bayesianism.

The philosophy of a "Right explanation" gets a little tough once you try to distinguish our models from actual reality. I'm pretty happy with models/explanations that let me make good predictions about my future experiences.
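The updating described above can be sketched numerically. Below is a minimal illustration of Bayes' rule with made-up likelihood numbers: as long as no likelihood is exactly 0 or 1, no finite amount of evidence pushes the posterior all the way to certainty in either direction.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule for a single hypothesis H."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

p = 0.5  # start undecided about hypothesis H (illustrative prior)
for _ in range(20):
    # Each observation is assumed 4x as likely if H is true than if false.
    p = bayes_update(p, 0.8, 0.2)

# p is now extremely close to 1, but still strictly between 0 and 1:
# confidence grows without ever reaching certainty.
print(0 < p < 1)  # True
```

The symmetry Ruby mentions falls out of the same formula: evidence with the likelihoods reversed (0.2 vs 0.8) would drive the posterior toward 0 instead, again without ever reaching it.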