Related Pages: History of Rationality, History
LessWrong material was ultimately developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006 with Eliezer Yudkowsky and Robin Hanson as the principal contributors.

Prior to that, around 2001, Yudkowsky had created the SL4 mailing list and IRC channel, on which he frequently expressed annoyance, frustration, and disappointment at his interlocutors' inability to think in ways he considered obviously rational. After failed attempts at teaching people to use Bayes' Theorem, he went largely quiet from SL4 to work on AI safety research directly. After discovering that he was not able to make as much progress as he wanted, he changed tack to focus on teaching the rationality skills necessary to do AI safety research, until such time as there was a sustainable culture that would allow him to focus on AI safety research while also continuing to find and train new AI safety researchers.
Less Wrong is a community resource devoted to refining the art of human rationality, sometimes known as rationalism, which was founded in 2009. Site activity reached a peak in 2011-13 and a trough in 2016-17. This page mainly describes the history through 2016.

The view from 2016

As of 2016 the community is far less active than it once was. The forum stands, but submissions are down. The wiki has low traction, and it is potentially in need of streamlining around remaining activity rather than its former glories.