Context for Draft / Request for Feedback

The LessWrong team is hoping to soon display a new About/Welcome page that does a better job of conveying what LessWrong.com is about and how community members can productively use the site.

However, LessWrong is a community site, and I (plus the team) feel it's not appropriate for us to unilaterally declare what LessWrong is about. So here's our in-progress draft of a new About/Welcome page. Please let us know what you think in the comments. Please especially let us know if you think LessWrong is actually about something else, or even just what it means to you.

Thanks!

<3 Ruby


---------------------------------------------------------------------------------

Related:

The tl;dr

LessWrong is a community blog devoted to the art of human rationality.

We invite you to use this site for any number of reasons, including, but not limited to: learning valuable things, being entertained, sharing and getting feedback on your ideas, and participating in a community you like. However, fundamentally, this site is designed for two main uses:

  • As a place to level-up your rationality
  • As a place to apply your rationality to important real-world problems

The primary things to do on LessWrong are:

Leveling up your rationality

First off, what is rationality?

Rationality is a term that can carry different connotations for different people. On LessWrong, we mean something like the following:

  • Rationality is thinking in ways which systematically arrive at truth.
  • Rationality is thinking in ways which cause you to achieve your goals.
  • Rationality is trying to do better on purpose.
  • Rationality is reasoning well even in the face of massive uncertainty.
  • Rationality is making good decisions even when it’s hard.
  • Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.

What rationality is not:

  • Forsaking all human emotion and intuition to embrace Cold Hard Logic.

Why should I care about rationality?

One reason to care about rationality is that you intrinsically care about having true beliefs. You might also care about rationality because you care about anything at all. Our ability to achieve our goals depends on 1) our ability to understand and predict the world, 2) having the skills to make good plans, and 3) having the self-knowledge and self-mastery to avoid falling into common pitfalls of human thinking. These core topics in rationality are of interest to anyone with non-trivial goals, from overcoming persistent insomnia and having fulfilling relationships to performing groundbreaking research or curing the world’s greatest ills.

See also Why truth? And...

How does LessWrong help me level up my rationality?

A repository of rationality knowledge

LessWrong has an extensive Library containing hundreds of essays on rationality topics. You can get started on the Library page or from the homepage. Among the newer material, we particularly recommend Curated posts.

The writings of Eliezer Yudkowsky and Scott Alexander comprise the core readings of LessWrong. As part of the founding of LessWrong, Eliezer Yudkowsky wrote a long series of blog posts, originally known as The Sequences and more recently compiled into an edited volume, Rationality: From AI to Zombies.

Rationality: From AI to Zombies is a deep exploration of how human minds can come to understand the world they exist in, and all the reasons they so often fail to do so.

Eliezer covers these topics and many more through allegory, anecdote, and scientific theory. He tests these ideas by applying them to debates in artificial intelligence (AI), physics, metaethics, and consciousness.

Eliezer also wrote Harry Potter and the Methods of Rationality (HPMOR), an alternate-universe version of Harry Potter in which Harry’s adoptive parents raised him with Enlightenment ideals and the experimental spirit. This work introduces many of the ideas from Rationality: A-Z in a gripping narrative.

Scott Alexander’s essays on how good reasoning works, how to learn from the institution of science, and the different ways society has been and could be organized have been made into a collection called The Codex. The Codex contains such exemplary essays as:

Members on LessWrong rely on many of the ideas from these writers in their own posts, so we advise reading at least a little of these authors to get up to speed on LessWrong's background knowledge and culture.

Truth-seeking norms and culture

We are proud of the LessWrong community not just for its study of rationality, but also for how much these ideals and skills are put into practice. Unlike many social spaces on the modern Internet, LessWrong is a place where changing your mind, charity, scholarship, and many other virtues are cherished. LessWrong helps you improve your rationality by providing a space where healthy epistemic and conversational norms are encouraged and enforced.

Social support and reinforcement

Beyond culture and norms, it’s easier to learn, change, and grow when you’re not alone on your path. Find solidarity on your quest for greater rationality with the LessWrong community. You can participate in the conversations online (via the comments or writing posts which build on the posts of others). Or attend a local in-person meetup, conference, or community celebration. In the last twelve months, there have been 461 meetups in 32 countries.

Opportunities to practice your rationality

See the next section.

Applying your rationality to important problems

Feedback and practice are crucial for mastery of skills. If you’re not using your skills to do anything real, how do you even know whether you’re on the right track? For this reason, LessWrong is a place where rationality is both trained and put to use.

Plus, it’s nice to accomplish real things.

Ways to apply your rationality on LessWrong

Participate in discussions aimed at truth-seeking and self-improvement

On LessWrong, you can converse with others with the real goal of exchanging beliefs and converging on the truth. You can delight in dialog which isn’t about Being Right, but about actually clarifying the matter at hand. And you can work together with others, each of you providing your own understanding and background knowledge, to figure out how reality really is. This is not Internet discussion as you know it.

While rationality, self-improvement, and AI are the most frequently discussed topics on the site, there are also commonly discussions of psychology, philosophy, decision theory, mathematics, computer science, physics, biology, history, sociology, meditation, and many other topics.

Core to LessWrong is that we want our online conversations to be productive, constructive, and oriented around determining what is true. Our Frontpage commenting guidelines ask members to:

  • Aim to explain, not persuade. Write your true reasons for believing something, not what you think is most likely to persuade others. Try to offer concrete models, make predictions, and note what would change your mind.
  • Present your own perspective. Make personal statements instead of statements that try to represent a group consensus (“I think X is wrong” vs. “X is generally frowned upon”). Avoid stereotypical arguments that will cause others to round you off to someone else they’ve encountered before. Tell people how you think about a topic, instead of repeating someone else’s arguments (e.g. “But Nick Bostrom says…”).
  • Get curious. If I disagree with someone, what might they be thinking; what are the moving parts of their beliefs? What model do I think they are running? Ask yourself - what about this topic do I not understand? What evidence could I get, or what evidence do I already have?

Once you’ve read some of LessWrong’s core material and read through some past comment-section discussions to get a sense of how we communicate around here, you’re ready to participate in a LessWrong discussion.

Post your valuable ideas

Our collective knowledge and skills are solidified by members writing posts. By writing posts, you benefit the world by sharing your knowledge and benefit yourself by getting feedback from an audience. Our audience will hold you to high standards of reasoning, yet in a cooperative and encouraging manner.

Posts on practically any topic are welcomed. We think it's important that members can “bring their entire selves” to LessWrong and are able to share their thoughts, ideas, and experiences without worrying about whether they are “on topic”. Rationality is not restricted to specific domains of one’s life, and neither should LessWrong be.

However, to maintain its overall focus, LessWrong classifies posts as either Personal blogposts or as Frontpage posts. The latter have more visibility by default on the site.

All posts begin as personal blogposts. Authors can grant permission to LessWrong’s moderation team to give a post Frontpage status if it i) has broad relevance to LessWrong’s members, ii) is timeless, i.e. not tied to current events, and iii) primarily attempts to explain rather than persuade.

The not-perfectly-named category of “Personal” blogposts is suitable for everything that doesn't fit on the Frontpage. It’s the right classification for discussions of niche topics, personal interests, current events, community concerns, potentially divisive topics, and just about anything else you want to write about.

See more in Site Guide: Personal Blogposts vs Frontpage Posts

Contribute to LessWrong’s Open Questions research platform

Open Questions was built to help apply the LessWrong community’s rationality and epistemic skills to humanity’s most important problems.


---------------------------------------------------------------------------------

26 comments

The Codex contains such exemplary essays as: [...] The Categories Were Made For Man, Not Man For The Categories

"... Not Man for the Categories" is really not Scott's best work, and I think it would be better to cite almost literally any other Slate Star Codex post (most of which, I agree, are exemplary).

That post says (redacting an irrelevant object-level example):

I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.

I claim that this is bad epistemology independently of the particular values of X and Y, because we need to draw our conceptual boundaries in a way that "carves reality at the joints" in order to help our brains make efficient probabilistic predictions about reality.

I furthermore claim that the following disjunction is true:

  • Either the quoted excerpt is a blatant lie on Scott's part because there are rules of rationality governing conceptual boundaries and Scott absolutely knows it, or
  • You have no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way.

Look. I know I've been harping on this a lot lately. I know a lot of people have (understandable!) concerns about what they assume to be my internal psychological motives for spending so much effort harping on this lately.

But the quoted excerpt from "... Not Man for the Categories" is an elementary philosophy mistake. Independently of whatever blameworthy psychological motives I may or may not have for repeatedly pointing out the mistake, and independently of whatever putative harm people might fear as a consequence of correcting this particular mistake, if we're going to be serious about this whole "rationality" project, there needs to be some way for someone to invest a finite amount of effort to correct the mistake and get people to stop praising this stupid "categories can't be false, therefore we can redefine them for putative utilitarian benefits without any epistemic consequences" argument. We had an entire Sequence specifically about this. I can't be the only one who remembers!

[-]Ruby

I reread Scott's post again and it seemed at first still reasonable to me. I began writing up what became a moderately lengthy response to yours. And then I realized you were just plain right. I think Scott's statement is wrong and there is in fact a rule* of rationality saying you shouldn't do that.

I think Scott starts off with a true and defensible position (concepts can only be evaluated instrumentally) and then concludes that in the face of non-epistemic instrumental pressure there's no reason to choose one boundary over another, i.e., he forgets about the epistemic instrumental pressure on concepts. I think the right practical choice might still be to forgo "purity of the concepts", but you can't say there exists no rule* of rationality which opposes that choice.

I will remove the reference to that post from the final Welcome/About page post. Thanks for the feedback.

*There's something of a crux here depending on how rigidly we define "rule". Here I mean "strong guideline or principle, but not so strong it can't ever be outweighed." If Scott meant "inviolable rule", I might actually agree with him.

In any case, I want the comments on this post to be about the object-level discussion of the draft About/Welcome page. I don't want things to get side-tracked, so I'm going to prevent further comments on this thread. Zack, if you want to continue this discussion elsewhere, DM me and we'll figure something out.


I'll note that this is the exact same argument we had in this post, and that I still think contextualizing norms are valid rationality norms. I don't want to have the discussion again here, but I do want to point to another discussion where the counterargument already exists.

[+]TAG

If you want to get ideas, you could look at the history of the old about page and homepage on the wiki. Looking over the versions of those pages I wrote, here are some things I like about my versions better:

  • I don't try to be super comprehensive. I link to an FAQ for reference. FAQs are nice because they're indexed by the content the user wants to access.
  • There is just generally less text. Some of the stuff you're writing doesn't deliver a lot of value to the reader in my opinion. For example, you write: "We invite you to use this site for any number of reasons, including, but not limited to: learning valuable things, being entertained, sharing and getting feedback on your ideas, and participating in a community you like." You're basically describing how people use social media websites. It's not delivering insight for the average reader and it's going to cause people's eyes to glaze over. Omit needless words. At most, this sentence should be a footnote or FAQ question "Can I use Less Wrong for things that aren't rationality?" or a shorter sentence "Less Wrong isn't just for rationality; everything is on topic in personal blogposts". Remember that we're trying to put our best foot forward with this page, which will be read by many people, so time spent wordsmithing is worthwhile. (Note: It's fine to blather on in an obscure comment like I'm doing here.)
  • I place less emphasis on individuals. Compare: "The writings of Albert Einstein and Richard Feynman comprise the core readings of PhysicsMastery.com. Here are Albert's writings, and here are Richard's."
  • I don't try to sell people on reading long sequences of posts right away. I'd sprinkle a variety of interesting, important links I wish more people even outside the community would read, in kind of a clickbaity way, to give people a sense of what the site is about and why it's interesting before getting them to invest in reading a book-length document.
  • I try to emphasize self-improvement benefits. It's a good sales pitch (always start with benefit to the customer), and I think it draws the right sort of ambitious, driven people into the community. Upgrade your beliefs, habits, brain, etc. You do touch on this but you don't lead with the benefits as much as you could. In sales, I think it's better to present the problem before the solution. But you present the solution ("rationality") before the problem.
  • I emphasize that the community is weird and has weird interests. If Less Wrong causes you to acquire some unusual opinions relative to your society or social circle, that's a common side effect. Autodidacticism, cryonics, artificial intelligence, effective altruism, transhumanism, etc. You could "show not tell" by saying: "Here's a particular topic many users currently have a contrarian opinion about. But if you still disagree after reading our thoughts, we want to hear why!"

If I were writing the about page in today's era, I would probably emphasize much more heavily that Less Wrong has a much higher standard of discussion than most of the internet, what that means (emphasis on curiosity/truthseeking/critical thinking/intellectual collaboration, long attention spans expected of readers), how we work to preserve it, etc. I might even make it the central thesis of the about page. I think this would help lay down the right culture if the site were to expand, and also attract good people and prime them to be on their best behavior.

I think I'd also lean on the word "rationality" somewhat less.

Thanks for the detailed response here. My initial thought re-reading your old about page was "Hmm, maybe we should just make this the new about page." I like a lot of things about it. I'm currently thinking through everything you've said and am deciding what seems, all things considered, the right approach.

Omit needless words

Good Strunkian advice.

"Here's a particular topic many users currently have a contrarian opinion about. But if you still disagree after reading our thoughts, we want to hear why!"

If any such claim is made it should be backed by census numbers.

If this is something that everyone reads, it might be nice to provide links to more technical details of the site. I imagine that someone reading this who then engages with LW might wonder:

  • What makes a curated post a curated post? (this might fit into the site guide on personal vs frontpage posts)
  • Why do comments/posts have more karma than votes?
    • What's the mapping between users' karma and voting power?
  • How does editing work? Some things are not immediately obvious, like:
    • How do I use latex?
    • How do I use footnotes?
    • How do I create images?
  • How does moderation work? Who can moderate their own posts?

This kind of knowledge isn't gathered in one place right now, and is typically difficult to google.

These questions are now addressed in the LessWrong FAQ. Specifically you want the sections on Curated, Voting, the Editor, and Moderation.

[-]Zvi

I will note that even now I am not entirely clear on a number of these questions.

Yes, the lack of a clear explanation of these topics is a real deficit I hope we can rectify very soon. Thanks for compiling this handy list.

[-]Zvi

I am split between the instinct "this draft is way, way too long and will make people bounce off it because of that, and we should have the non-tl;dr be distinct" and "if we make people click again they'll never get to the details and the hooks." There's a right answer here, but I'm not sure what it is. I do think it would be good to have something softer than "tl;dr" to make it clear that if all you do is read the top level then you should have a reasonable high-level understanding.

I also notice that if I think of a new person, while there is a natural series of clicks (library -> sequences -> begin reading) there are a lot of places in between for me to feel confused or like I don't know what the choice is, and perhaps make a mistake, and a lot of other potential ways to go, with no central "go do this" clear path. My instinct is we want to be *very* clear in pointing people *by default* at the Sequences (assuming that's where we would point them), directly after this or the moment people think the site is right for them. Think of it as a full beginner package?

I'm confident that the tl;dr section, whatever we call it, should have a very explicit "begin here and do this" default in it, likely at its end.

This all seems right to me, including it not being obvious whether to link away from the tl;dr to something longer. Will invest more in getting the "begin here and do this" right.

A few other things rationality is not:

  • An aesthetic preference for square grids
  • Assertion of the superiority of Western culture
  • The belief that credentialed experts always know better
  • Abandoning your ethics/morals
  • Always defecting in the prisoners' dilemma

I would guess that making it clear what we're not talking about is more important to hooking new people than precisely defining rationality. Also, I would avoid using the word "truth" explicitly in the "what is rationality" section.

More generally, if the purpose of this page is to be an entry point, I would front-load it with more hooks, examples, and links, and push less hook-y things toward the end. On a meta-level, if it's going to serve as an entry point, then it's also a key page to instrument with tracking, a/b test copy, and all that jazz. On the other hand, if the main purpose of the page is to serve as a mission statement or something along those lines, then parts explicitly aimed at newcomers could be dialed back, especially things like "What is rationality" or "Why should I care" that are addressed within the sequences.

Is there a reason you'd want to dial back those newcomer things if it were Mission-Statement oriented? The sequences are hella long, and part of the point of this post, in my mind, is so a newcomer can quickly figure out the basic deal of what this site is about and why you might want to read the sequences in the first place.

As written, it feels like this is trying to mix some aspects of a mission statement and some aspects of an entry point, and doing neither one very well. A lot of it comes out sounding like bland generispeak - not all of it, but a lot. It would be easy to make it more engaging if that's what we're going for, or more informative if that's the objective, etc - it needs some goal that says what readers are meant to get out of it, and then more focus on achieving that goal.

(Sorry if that sounds overly harsh. I'm viewing this thread as a round of editing, so critiquing it as a piece of writing seems right.)

Thanks, you're providing the feedback entirely as requested.

You are probably quite right. The end result here is plainly the result of the roundabout process by which this doc came about. It was initially a "What LessWrong is for" post which was then co-opted to be both welcome page and about page.

The logic behind this being something like:

1) It's less effort to write a single document than write multiple (this page is a blocker on many other things I want to publish) and the result would still be better than what we currently have.

2) I do think there is something legitimately good about the thing you say to newcomers being the same thing you show to describe what you're about. A single message about what this site is for.

Of course, 2) can be rightly suspected of being a rationalization.

My question is whether you think this isn't good enough to publish, such that we really do need to separate the pages, or whether we can get away with this in the short term and do a more ideal version later?

I definitely think it's fine for the short term. I don't want to push premature perfectionism here - this will not make the site worse than it is, and may make it better.

I wouldn't want it to go up and then forget about it, and have several years of newcomers dropping off because the entry point didn't grab them. (I'm less concerned about perfecting a page whose purpose is not entry-point.) That said, if I'm ever really unhappy about it, I can always just draft something up myself and then propose it to you guys.

I wouldn't want it to go up and then forget about it, and have several years of newcomers dropping off because the entry point didn't grab them.

Me neither. I'm currently trying to get through a long-ish backlog of public docs I think are necessary (About/Welcome page, Team page, User Guides [Getting Started, Posting, Commenting etc.], posts about long-term visions, posts explaining why we're building Open Questions, etc.). Once caught up, I imagine I'll go back to get more improvements where they're most valuable.

That said, if I'm ever really unhappy about it, I can always just draft something up myself and then propose it to you guys.

I would love to receive such submissions.

Typo: "Mem­bers on LessWrong rely on many of the ideas from their writ­ers in their own posts,"

I guess that should be "these writers".

Eliezer also wrote Harry Potter and the Methods of Rationality (HPMOR), an alternate-universe version of Harry Potter in which Harry’s adoptive parents raised him with Enlightenment ideals and the experimental spirit. This work introduces many of the ideas from Rationality: A-Z in a gripping narrative.

I feel like whenever HPMOR is mentioned you need to acknowledge and address the fact that fanfiction is kind of weird and silly? Otherwise people are going to be confused and maybe anxious. Explain that it was written kind of accidentally, as a result of the fact that using other people's worldbuilding makes writing easier, and that, yes, it is surprising that it turned out to be good, so here are some attestations from well-read people that it definitely is good and we're not just recommending it because it's ours.

A tricky thing about this is that there's an element of cognitive distortion in how most people evaluate these questions, and play-acting at "this distortion makes sense" can worsen the distortion (at the same time that it helps win more trust from people who have the distortion).

If it turned out to be a good idea to try to speak to this perspective, I'd recommend first meditating on a few reversal tests. Like: "Hmm, I wouldn't feel any need to add a disclaimer here if the text I was recommending were The Brothers Karamazov, though I'd want to briefly say why it's relevant, and I might worry about the length. I'd feel a bit worried about recommending a young adult novel, even an unusually didactic one, because people rightly expect YA novels to be optimized for less useful and edifying things than the "literary classics" reference class. The insights tend to be shallower and less common. YA novels and fanfiction are similar in all those respects, and they provoke basically the same feeling in me, so I can maybe use that reversal test to determine what kinds of disclaimers or added context make sense here."

I wouldn’t feel any need to add a disclaimer here if the text I was recommending were The Brothers Karamazov, though I’d want to briefly say why it’s relevant, and I might worry about the length.

Not sure if this was deliberate on your part, but note that HPMOR is almost twice the length of Karamazov. (662k vs 364k.)

I think stereotyping fanfiction as poorly written and non-educational doesn't quite count as a cognitive distortion.