History of Less Wrong

Less Wrong is a community resource devoted to refining the art of human rationality, sometimes known as rationalism, which was founded in 2009. Site activity reached a peak in 2011-13 and arguably a trough in 2016-17. This page mainly describes the history through 2016.

The view from 2016

As of 2016 the community is far less active than it once was. The forum stands, but submissions are down. The wiki has low traction and potentially needs streamlining around remaining activity rather than its former glories.

Prehistory

Around 2001 Yudkowsky had created the SL4 mailing list and IRC channel, and on them he frequently expressed annoyance, frustration, and disappointment in his interlocutors' inability to think in ways he considered obviously rational. After failed attempts at teaching people to use Bayes' Theorem, he went largely quiet from SL4 to work on AI safety research directly. After discovering he was not able to make as much progress as he wanted, he changed tack to focus on teaching the rationality skills necessary to do AI safety research, until there was a sustainable culture that would allow him to focus on AI safety research while also continuing to find and train new AI safety researchers.

LessWrong material was ultimately developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with Eliezer Yudkowsky and Robin Hanson as the principal contributors.
