Lesswrong 2.0 is a project by Oliver Habryka, Ben Pace, and Matthew Graves with the aim of revitalizing the Lesswrong discussion platform. Oliver and Ben are currently working on the project full-time and Matthew Graves is providing part-time support and oversight from MIRI.
Our main goals are to move Lesswrong to a modern codebase, add an effective moderation system, and integrate the cultural shifts that the rationality community has made over the last eight years. We also think that many of the distinct qualities of the Lesswrong community (e.g. propensity for long-form arguments, reasoned debate, and a culture of building on one another's conceptual progress) suggest a set of features unique to the new Lesswrong that will greatly benefit the community.
We plan to improve and maintain the site for many years to come, but whether it succeeds ultimately depends on whether the community finds it useful. As such, it is important for us to get your feedback and guidance on how the site should develop and how we should prioritize our resources. Over the coming months we want to experiment with many different content-types and page designs, while actively integrating your feedback, in an attempt to find a structure for Lesswrong that is best suited to facilitating rational discourse.
What follows is a rough summary of how we are currently thinking about the development of Lesswrong 2.0, and what we see as the major pillars of the Lesswrong 2.0 project. We would love to get your thoughts and critiques on these.
Table of Contents:
I. Modern Codebase
II. Effective Moderation
III. Discourse Norms
IV. New Features
V. Beta Feedback Period
The old Lesswrong is one of the few successful forks of the Reddit codebase (forked circa 2009). While Reddit's code served as a stable platform during our community's initial stages, it has become hard to develop and extend because of its age, complexity, and monolithic design.
Lesswrong 2.0, on the other hand, is based on modern web technologies designed to make rapid development much easier (specifically React, GraphQL, Slate.js, Vulcan.js, and Meteor). The old codebase was a pain to work with, and almost every developer who tried to contribute eventually gave up on it. The new Lesswrong codebase, by contrast, is built with tools that are well-documented and accessible, and is designed around a modular architecture. You can find our GitHub repo here.
We hope that these architectural decisions will allow us to rapidly improve the site and turn it into what a tool for creating intellectual progress should look like in 2017.
Historically, LW has had only a few dedicated moderators at a time, applying crude tools, which has tended to lead to burnout and backlash. There are many obvious things we are planning to do to improve moderation; here are some of the top ones:
Any user above N karma can flag a post as spam, which renders it invisible to everyone but mods. Mods will check the queue of flagged posts, deleting correctly flagged spam and removing the flagging power from anyone who misuses it. If it seems necessary, we will also integrate all the cool new spam-detection mechanisms that modern technology has given us over the last eight years.
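As an illustrative sketch only — the names, the data model, and the concrete threshold value standing in for "N" are all invented for this example — the flagging flow described above might look like:

```python
from dataclasses import dataclass

FLAG_KARMA_THRESHOLD = 100  # stands in for the unspecified "N"; assumed value

@dataclass
class Post:
    author: str
    body: str
    flagged: bool = False  # flagged posts are visible only to mods

@dataclass
class User:
    name: str
    karma: int
    can_flag: bool = True  # mods revoke this if the power is misused

def flag_as_spam(user: User, post: Post) -> bool:
    """Users above the karma threshold can hide a post pending mod review."""
    if user.can_flag and user.karma >= FLAG_KARMA_THRESHOLD:
        post.flagged = True
        return True
    return False

def visible_to(post: Post, is_mod: bool) -> bool:
    """Flagged posts sit in the mod queue, invisible to regular users."""
    return is_mod or not post.flagged
```

The key property is that flagging is cheap for trusted users but reversible: mods review the queue and can both restore posts and revoke the flagging power.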
Historically, Lesswrong’s value has come in large part from being a place on the internet where the comments were worth reading. This was largely a result of the norms and ability of the people who were commenting on the page, with a strong culture of minimizing defensiveness, searching for the truth and acting in the spirit of double crux. To sustain that culture and level of quality, we need to set up broad incentives that are driven by the community itself.
The core strategy we are currently considering is something we're calling the Sunshine Regiment: a fairly large set of trusted users who have access to limited moderation powers, such as automatically hiding comments for other users and temporarily suspending comment threads. The goal is to give the community the tools to de-escalate conflicts and to help both users and moderators make better decisions, by giving both sides time to reflect and by distributing the load of draining moderation decisions.
Our two main plans against trolls are to change the karma system to something more like "Eigenkarma" and to improve the moderator tools. In an Eigenkarma system, the weight of a user's votes depends on how many other trustworthy users have upvoted that user. On the moderator-tools side, one of the biggest projects is a much better data-querying interface that aims to help admins notice exploitative voting behavior and other problems in voting patterns.
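As an illustration only — the post does not specify the actual algorithm — one common way to make vote weights depend on endorsements from already-trusted users is a PageRank-style iteration over the upvote graph. The damping constant and iteration count below are conventional defaults, not anything from the post:

```python
def eigenkarma(upvotes, iterations=50, damping=0.85):
    """Toy Eigenkarma: a user's vote weight grows with upvotes received
    from users who themselves carry weight, computed by power iteration.

    upvotes: dict mapping each voter to the list of users they upvoted.
    Returns a dict of weights normalized to sum to 1. Illustrative only.
    """
    users = set(upvotes) | {u for vs in upvotes.values() for u in vs}
    weight = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        # Everyone keeps a small baseline weight so new users aren't at zero.
        new = {u: (1 - damping) / len(users) for u in users}
        for voter, targets in upvotes.items():
            if targets:
                # A voter's weight is split evenly among those they upvoted.
                share = damping * weight[voter] / len(targets)
                for t in targets:
                    new[t] += share
        total = sum(new.values())
        weight = {u: w / total for u, w in new.items()}
    return weight
```

The point of the structure is the one the post names: a sockpuppet army upvoting itself gains little, because weight only flows in from accounts that are themselves endorsed.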
In terms of culture, we still broadly agree with the principles that Eliezer established in the early days of Overcoming Bias and Lesswrong. The twelve virtues of rationality continue to resonate with us, and the “The Craft and the Community” sequence is still highly influential on our thinking. The team (and in particular Oliver) have taken significant inspiration from the original vision of Arbital in our ideas for Lesswrong 2.0.
That being said we also think that the culture of the rationality community has substantially changed in the last eight years, and that many of those changes were for the better. As Eliezer himself said in the opening to “Rationality: AI to Zombies”:
It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties into the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say ‘Oops’ and ‘Duh.’
We broadly agree with this, and think both that the community has made important progress in that direction and that there are still many things to improve about the current community culture. We do not aim to make the new Lesswrong the same as it was at its previous height; instead we aim to integrate many of the changes the rationalist culture has undergone, while also re-emphasizing important old virtues that we feel have been lost in the intervening years.
We continue to think that strongly discouraging the discussion of highly political topics is the correct way to go. A large part of the value of Lesswrong comes from being a place where many people can experience something closer to rational debate for the first time in their life. Political topics are important, and not to be neglected, but they serve as a bad introduction and base on which to build a culture of rationality. We are open to creating spaces on Lesswrong where people above a certain Karma threshold can discuss political topics, but we would not want that part of the site to be visible to new users, and we would want the votes on that part of the site to be less-important for the total karma of the participating users. We want seasoned and skilled rationalists to discuss political topics, but we do not want users to seek out Lesswrong primarily as a venue to hold political debates.
As a general content guideline on the new Lesswrong: If while writing the article the author is primarily writing with the intent of rallying people to action, instead of explaining things to them, then the content is probably ill-suited for Lesswrong.
You can find our short-term feature roadmap over here in this post. This is a high-level overview of our reasoning about some of the big underlying features we expect to significantly shape the nature of Lesswrong 2.0.
Many authors want their independence, which is one of the reasons why Scott Alexander prefers to write on SlateStarCodex instead of Lesswrong. We support that need for independence, and are hoping to serve it in two different ways:
Arbital-Style features and content:
Arbital did many things right, even though it never really seemed to take off. We think that allowing users to add prediction-polls is great, and that it is important to give authors the tools to create their own content that is designed to be maintained over a long period of time and with multiple authors. We also really like link previews on hover-over as well as the ability to create highly interconnected networks of concepts with overview pages.
Of the Arbital features, prediction polls are almost certainly going to end up on the feature list, but as yet it is unclear whether we want to copy any other features directly, though we expect to be inspired by many small ones.
Better editor software:
The editors on Lesswrong and the EA Forum often led to badly formatted posts: they didn't deal well with content copied over from other webpages or Google Docs, which often resulted in hard-to-read posts that could only be fixed by directly editing the HTML. We are working on an editor experience that will be flexible and powerful, while also making it hard to accidentally mess up the formatting of a post.
Sequences-like content with curated comments:
After conducting a large number of interviews with long-time users of Lesswrong, it became clear to us that the vast majority of top contributors spent at least three months doing nothing but reading the sequences and other linearly structured content on the site, along with the discussion on those posts. We aim to improve that experience significantly, while also making it easier to start participating in the discussion.
Books like Rationality: AI to Zombies are valuable in that they reach an audience that was impossible to reach with the old Lesswrong, and in that they curate the content into an established book-like format. But we also think that something very important is lost when the discussion is cut out of the content. We aim to make Lesswrong a platform that provides sequences-like content in formats that are as easy to consume as possible, while also encouraging users to engage with the discussion on the posts and be exposed to critical comments, disagreements, and important contradicting or supporting facts. We also hope that exposure to the discussion will teach new users how to interact with the culture of Lesswrong, and let them learn the art of rationality more directly by observing people grapple in conversation with difficult intellectual problems.
V. Beta Feedback Period
It’s important for us to note that we don’t think online discussion is primarily a technical problem. Our intention in sharing our plans with you and launching a closed beta is to discover both the cultural and the technical problems we need to solve to build a new and better discussion platform for our community. With your feedback, we’re planning to rework the site, adjust our feature priorities, and make new plans for improving the culture of the new Lesswrong 2.0 community.
Far more important than implementing any particular feature is building an effective culture with the correct social incentives. As such, our focus lies on building a community with norms and social incentives that facilitate good discourse, with a platform that does not get in the way of that. However, we do think that certain underlying attributes of a discussion platform significantly shift the nature of its discussions in ways that prevent or encourage good community norms — e.g. Twitter’s 140-character limit makes it almost impossible to have reasoned discourse. At this stage, we are still trying to figure out which content-types and fundamental design choices are best suited to giving rise to and facilitating effective discussion.
That’s all we have for now. Please post your ideas for features or design changes as top-level comments, and discuss your concerns and details of the suggestions in second-level comments. We will be giving significant weight to the discussion and votes in our decisions on what to work on for the coming weeks.
You talk about using karma thresholds for various things. But traditional Lesswrong-style karma screens more for quantity than quality of posts, and this would remain true of a version where you weight people's upvotes and downvotes. I suggest looking for versions which filter more for quality (while not creating too much disincentive to make additional posts/comments).
This is actually a feature, not a bug. The karma threshold isn't just there to limit who has access to features; it's also to increase the cost of creating sockpuppets and of recovering from bans.
I think keeping some dependence on quantity is desirable, but that scaling linearly with number of posts weights it too heavily compared to variation in number of upvotes (I proposed scaling with roughly the cube root of number of posts in my explicit formula suggestion elsewhere in the comment thread).
Here's an example functional form which is my best guess off the top of my head at creating this effect (given as an illustration of what to pay attention to rather than a claim that precisely this should be used):

K = (U - 3D) * P^0.3 / R

where:

K = karma
U = total (weighted) upvotes
D = total (weighted) downvotes
P = total number of posts+comments
R = total number of reads of your posts+comments
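Transcribed directly as a function (using the commenter's own constants; the P^0.3 term is the roughly-cube-root scaling of post count mentioned upthread):

```python
def karma(upvotes, downvotes, posts, reads):
    """Commenter's proposed formula: K = (U - 3D) * P^0.3 / R.

    upvotes/downvotes are the (weighted) vote totals, posts is the
    total number of posts+comments, reads the total number of reads.
    """
    return (upvotes - 3 * downvotes) * posts ** 0.3 / reads
```

Note the properties the commenter is after: downvotes cost three times what upvotes earn, volume helps only sublinearly, and dividing by reads rewards quality-per-reader rather than raw exposure.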
I also agree with the spirit of this, but I think dividing upvotes by the number of reads penalizes reads excessively, because each reader doesn't decide how to vote independently. Once a post already has a high score, a new reader is not likely to upvote it more even if they think it's high quality. Also we ought to encourage people to create highly popular articles that spread our ideas beyond the local community, and this system would serve to discourage that. On the other hand we also don't want to penalize people for writing specialized content that only a few others might read. I'm not sure what the right solution is here.
I agree with the spirit of this. That said, if the goal is to calculate a karma score which fails to be fooled by a user posting a large amount of low-quality content, it might be better to do something roughly like: sum((P*x if x < 0 else max(0, x-T)) for x in post_and_comment_scores). Only comments that hit a certain bar should count at all. Here P is the penalty multiplier for creating bad content, and T is the threshold a comment score needs to meet to begin counting as good content. Of course, I also agree that it's probably worth weighting upvotes and downvotes separately and normalizing by reads to calculate these per-comment or per-post scores.
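The inline expression above, made runnable; the concrete values of P and T in the test are my own example choices, not anything the commenter specified:

```python
def adjusted_karma(post_and_comment_scores, P, T):
    """Sum per-item scores so low-quality volume cannot farm karma:
    negative scores are amplified by the penalty multiplier P, and
    positive scores only count for the amount above threshold T.
    """
    return sum(P * x if x < 0 else max(0.0, x - T)
               for x in post_and_comment_scores)
```

With P = 3 and T = 2, an item scored 5 contributes 3, an item scored 1 contributes nothing (it never cleared the bar), and an item scored -2 contributes -6, so a flood of mediocre content nets zero or worse.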
I was just writing a very similar function in one of the comments above!
I think something in this direction makes sense.
My current interpretation is that you mean that people who write a lot of content generally get much more karma than people who write little but very good content. I agree with that, and have been thinking about good ways of dealing with that. Here are two approaches:
When deciding what to show the user, use an algorithm that combines the following information: 1) How many upvotes did this piece of content get? 2) How many users have seen this piece of content? 3) How many downvotes did this piece of content get?
Allow users to give out variable Karma rewards, with some cost attached to them. Maybe Karma transfers, or some limited amount of currency that's generated based on your current karma. Top comments would then receive more of this limited amount of currency.
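A minimal sketch of the first approach — combining upvotes, views, and downvotes into a display score. The functional form and the smoothing constant are my own illustrative choices, not anything proposed above:

```python
def display_score(upvotes, downvotes, views, smoothing=10):
    """Net approval per view, with additive smoothing so that items
    with very few views don't get extreme scores either way.
    Purely a sketch of combining the three signals."""
    return (upvotes - downvotes) / (views + smoothing)
```

Normalizing by views is what separates "widely read and mildly liked" from "narrowly read but strongly endorsed", which is the quality-vs-quantity distinction the parent comment is asking for; a production version would likely use something like a Wilson score interval instead of this naive ratio.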
We could have some scarce resource based on karma. Not paying with karma directly, because I guess losing karma would feel bad, but rather that with each 100 karma points you get 1 "credit".
You could then spend those credits e.g. on visually highlighting other people's comments and articles. Something like when Reddit displays that a comment got "Reddit gold". It could even transfer some karma (but much less than it costs) to the rewarded user, but mostly it would be a costly signal of "I really liked this", with the name of person giving the reward displayed as a tooltip. A costly version of "+1 nice", essentially.
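A toy sketch of the credit mechanic described above. The 100-karma-per-credit ratio comes from the parent comment; the karma-tip amount and all names are illustrative:

```python
def credits_earned(karma):
    """One "credit" per 100 karma points, as proposed: karma is not
    spent directly, so rewarding others never feels like a loss."""
    return karma // 100

def give_award(giver_credits, recipient_karma, karma_tip=5):
    """Spend one credit to highlight a comment; a small karma tip
    (much less than the credit's cost) goes to the rewarded author.
    Returns (remaining credits, recipient's new karma)."""
    if giver_credits < 1:
        raise ValueError("not enough credits")
    return giver_credits - 1, recipient_karma + karma_tip
```

Because credits accrue slowly and the transfer is deliberately lossy, the award works as the costly signal described: its value is the public "I really liked this", not the karma itself.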
One variation of karma system I'd like is the ability to rate posts as being exceptionally good (probably taking more than one click, to introduce a trivial inconvenience so that it isn't used all the time like five star ratings are). This would give more ability to pick out very useful contributors from small numbers of posts.
Agree. One of the broader things I have been thinking of is a two-tier voting system similar to Facebook's. There is the primary interaction of upvoting and downvoting, but then there are additional vote-types you can access with an additional click (on Facebook "angry", "sad", etc.; here it would be "exceptionally good point", "needs clarification", "too aggressive", or something along those lines).
First click could be the generic upvote or downvote; then using a second click you could pick a more specific "flavor" of the vote. (Different flavors for upvotes, and for downvotes.)
I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.
I'd like to see my old suggestion either folded into this feature, or perhaps as an independent one, whichever makes more sense.
That's a really neat idea.
Being able to sort by some of those would also be helpful (e.g. Sort comments by 'exceptional insight', don't show me comments with >2 'overly aggressive'.)
You mentioned an alternative comment structure in the features post. Some of the value of that could be achieved by a 2nd tier vote saying a comment is a key consideration (e.g. "This is a crux"), and being able to sort by that.
Nitpick, but possibly a significant one: there's an unusually wide range of font sizes in use, with the article text and compose-comment box being very large and regular comments being very small. Normally I would pick my preferred font size by zooming the whole page, but the variation within the page makes that impractical.
(EDIT: Typography settings have changed significantly since this comment was written.)
Ah, yes, it does feel a bit odd. Actually, I like it a little bit: it puts visual focus on the text I'm composing, while allowing a lot of room for other text on the screen. But at the same time, it feels clunky. I would probably get annoyed at the large text in the comment-composing box if I were writing a longer comment.
Yeah, I was planning to reduce the size of the text in the compose-comment box to normal levels. Would that improve things?
Besides that, I am not sure whether we can avoid the large difference in font size between article text and comments. The problem is that comments need a small font size so the reader can follow the thread structure (i.e. see a comment and a reply at the same time), while the article text needs a large font size to keep the number of characters per line in a comfortable range; general readability of long-form text also improves with larger font sizes.
I'd generally recommend reading Practical Typography and Professional Web Typography. I expect knowing that stuff well would be valuable, since LW is primarily a website where people read lots of text.
Yes, I am a fan of Practical Typography and skimmed Professional Web Typography a while ago.
I haven't yet spent much time optimizing the typography of LW2, and am happy to get input. Rereading both of the books above as part of that might be a good idea.
Yeah, it's pretty unreasonable to expect typography to be dialed in for the closed beta :)
Some quick thoughts/opinions I have for the post text:
I'd consider making the body text a serif font. I find it's a better reading experience.
Body text is too grey. It definitely shouldn't be black, but maybe darker at something like: #2F2230.
I'd differentiate headings a little more, maybe with a different font or real small caps. Also, if I were being really opinionated, I'd only support 3 heading levels and make them smaller. I think people are overdoing it these days with really big headings in post/article text on the web. They certainly clearly differentiate headings from the text, but there are classier ways of doing that.
The current line-height is pretty big at 1.846, I'd change it to something closer to 1.6. Maybe even as low as 1.4.
Most sites set their font size too small, so I'm really happy to see you didn't do that, but I think the current body font is too big at 20px. I'd do 18px at most, and no smaller than 16px. Doing this might make the lines a little on the long side, though.
I'd also implement hanging bullets. This is where bullet text is flush with the body text and the bullets sit in the margin a little. It's very easy to do with CSS. For bonus points you could do hanging punctuation for quotes, but that's much harder.
As many of you might have noticed, after a discussion with Malo and some great suggestions from him, the typography for the whole page has been updated and looks a lot better!
I will still do a larger typography rework at some point during the closed-beta, and will obviously do adjustments as I notice problems with the current setup, but I am definitely happier with this.
Will there be a mobile app?
Maybe, eventually, but it's not high on our priority list.
My username on the closed beta is based on my email address, not my username from LW 1.0. Will these accounts be merged when 2.0 goes live, or will both of these accounts exist independently, or something else?
A merge-accounts feature is on the docket. Until then, I am happy to give access to their old LW accounts to anyone who pings me on the Intercom chat.