The recent moderation tools announcement represents a fairly major shift in how the site admins are approaching LessWrong. Several people noted important concerns about transparency and trust.

Those concerns deserve an explicit, thorough answer.

Summary of Concepts

  1. The Problem of Private Discussion – Why much intellectual progress in the rationalsphere has happened in hard-to-find places
  2. Public Discussion vs Intellectual Progress – Two subtly conflicting priorities for LessWrong.
  3. Healthy Disagreement – How to give authors tools to have the kinds of conversations they want, without degenerating into echo chambers.
  4. High Trust vs Functioning Low Trust environments – Different modes of feeling safe, with different costs and risks.
  5. Overton Windows, Personal Criticism – Two common conversational attractors. Tempting. Sometimes important. But rarely what an author is interested in talking about.
  6. Public Archipelago – A model that takes all of the above into account, giving people tools to create personal spaces that give them freedom to explore, while keeping all discussion public, so that it can be built upon, criticized, or refined.

i. The Problem

The issue with LessWrong that worries me the most:

In the past 5 years or so, there’s been a lot of progress – on theoretical rationality, on practical epistemic and instrumental rationality, on AI alignment, on effective altruism. But much of this progress has been on some combination of:

  • On various private blogs you need to keep track of.
  • On facebook – where discussions are often private, where searching for old comments is painful, and some people have blocked each other so it’s hard to tell what was actually said and who was able to read it.
  • On tumblr, whose interface for following a conversation is the most confusing thing I’ve ever seen.
  • On various google docs, circulated privately.
  • In person, not written down at all.

People have complained about this. I think a common assumption is something like “if we just got all the good people back on LessWrong at the same time you’d have a critical mass that could reboot the system.” That might help, but doesn't seem sufficient to me.

I think LW2.0 has roughly succeeded at becoming “the happening place” again. But I still know several people who I intellectually respect, who find LessWrong an actively inhospitable place and don’t post here, or do so only grudgingly.

More Than One Way For Discussion To Die

I realize that there’s a very salient pathway for moderators to abuse their power. It’s easy to imagine how echo chambers could form and how reign-of-terror style moderation could lead to, well, reigns of terror.

It may be less salient to imagine a site subtly driving intelligent people away due to being boring, pedantic, or frustrating, but I think the latter is in fact more common, and a bigger threat to intellectual progress.

The current LessWrong selects somewhat for people who are thick-skinned and conflict-prone. Being thick-skinned is good, all else being equal. Being conflict-prone is not. And neither of these is the same as being able to generate useful ideas and think clearly, the most important qualities to cultivate in LessWrong participants.

The site admins don’t just have to think about the people currently here. We have to think about people who have things to contribute, but don’t find the site rewarding.

Facebook vs LessWrong

When I personally have a new idea to flesh out... well...

...I’d prefer a LessWrong post over a Facebook post. LW posts are more easily linkable, they have reasonable formatting options (unlike FB’s plain text), and it’s easier to be sure a lot of people have seen them.

But to discuss those ideas…

In my heart of hearts, if I weren’t actively working on the LessWrong team, with a clear vision of where this project is going... I would prefer a Facebook comment thread to a LessWrong discussion.

There are certain blogs – Sarah’s, Zvi’s, and Ben’s stick out in my mind – that are comparably good. But not many – the most common pattern is “post the idea on a blog, the good discussion happens on FB, and individual comment insights only make it into the broader zeitgeist if someone mentions them in a high-profile blogpost.”

On the right sort of Facebook comment thread, at least in my personal filter bubble, I can expect:

  • People I intellectually respect to show up and hash out ideas.
  • A collaborative attitude. “Let’s figure out and build a thing together.”
  • People who show up will share enough assumptions that we can talk about refining the idea to a usable state, rather than “is this idea even worth talking about?”

Beyond that, more subtle: even if I don’t know everyone, an intellectual discussion on FB usually feels like, well, we’re friends. Or at least allies.

Relatedly: the number of commenters is manageable. The comments on Slatestarcodex are reasonably good these days, but… I’m just not going to sift through hundreds or thousands of comments to find the gems. It feels like a firehose, not a conversation.

Meanwhile, the comments on LessWrong often feel... nitpicky and pointless.

If an idea isn’t presented maximally defensibly, people will focus on tearing holes in the non-load-bearing parts of the idea, rather than help refine the idea into something more robust. And there’ll be people who disagree with or don’t understand foundational elements that the idea is supposed to be building off of, and the discussion ends up being about rehashing 101-level things instead of building 201-level knowledge.

Filter Bubbles

An obvious response to the above might be “of course you prefer Facebook over LessWrong. Facebook heavily filter bubbles you so that you don’t have to face disagreement. It’s good to force your ideas to intense scrutiny.”

And there’s important truth to that. But my two points are that:

  1. I think a case can be made that, during idea formation, the kind of disagreement I find on Facebook, Google Docs and in-person is actually better from the standpoint of intellectual progress.
  2. Whether or not #1 turns out to be true, if people prefer private conversations over public discussions (because they’re easier/more-fun/safer), then much discussion will tend to continue taking place in mostly private places, and no matter how suboptimal this is, it won’t change.

My experience is that my filter bubbles (whether on FB, Google Docs or in-person) do involve a lot of disagreement, and the disagreement is higher quality. When someone tells me I’m wrong, it’s often accompanied by an attempt to understand what my goals are, or what the core of a new idea was, which either lets me fix an idea, or abandon it but find something better to accomplish my original intent.

(On FB, this isn’t because the average commenter is that great, but because of a smallish number of people I deeply respect, who have different paradigms of thinking, at least 1-2 of whom will reliably show up.)

There seems to be a sense that good ideas form fully polished, without any work to refine them. Or that until an idea is ready for peer review, you should keep it to yourself? Or be willing to have people poke at it with no regard for how hedonically rewarding that experience is? I’m not sure what the assumption is, but it’s contrary to how everyone I personally know generates insights.

The early stages work best when playful and collaborative.

Peer review is important, but so is idea formation. Idea formation often involves running with assumptions, crashing them into things and seeing if they make sense.

You could keep idea-formation private and then share things when they’re ‘publicly presentable’, but I think this leads to people tending to keep conversation in “safe, private” zones longer than necessary. And meanwhile, it’s valuable to be able to see the generation process among respected thinkers.

Public Discussion vs Knowledge Building

Some people have a vision of Less Wrong as a public discussion. You put your idea out there. A conversation happens. Anyone is free to respond to that conversation as long as they aren’t being actively abusive. The best ideas rise to the top.

And this is a fine model, that should (and does) exist in some places. But:

  1. It’s never actually been the model or ethos LessWrong runs on. Eliezer wrote Well Kept Gardens Die By Pacifism years ago, and has always employed a Reign-of-Terror-esque moderation style. You may disagree with this approach, but it’s not new.
  2. A public discussion is not necessarily the same as the ethos Habryka is orienting around, which is to make intellectual progress.

These might seem like the same goal. And I share an aesthetic sense that in the ‘should’ world, where things are fair, public discussion and knowledge-building are somehow the same goal.

But we don’t live in the ‘should’ world.

We live in the world where you get what you incentivize.

Yes, there’s a chilling effect when authors are free to delete comments that annoy them. But there is a different chilling effect when authors aren’t free to have the sort of conversation they’re actually interested in having. The conversation won’t happen at all, or it’ll happen somewhere else (where you can't comment on their stuff anyway).

A space cannot be universally inclusive. So the question is: is LessWrong one space, tailored for only the types of people who enjoy that space? Or do we give people tools to make their own spaces?

If the former, who is that space for, and what rules do we set? What level of knowledge do we assume people must have? We’ve long since agreed “if you show up arguing for creationism, this just isn’t the space for you.” We’ve generally agreed that if you are missing concepts in the sequences, it’s your job to educate yourself before trying to debate (although veterans should politely point you in the right direction).

What about posts written since the sequences ended?

What skills and/or responsibilities do we assume people must have? Do we assume people have the ability to notice and speak up about their needs a la Sarah Constantin’s Hierarchy of Requests? Do we require them to be able to express those needs ‘politely’? Whose definition of polite do we use?

No matter which answer you choose for any of these questions, some people are going to find the resulting space inhospitable, and take their conversation elsewhere.

I’d much rather sidestep the question entirely.

A Public Archipelago Solution

Last year I explored applying Scott Alexander's Archipelago idea towards managing community norms. Another quick recap:

Imagine a bunch of factions fighting for political control over a country. They've agreed upon the strict principle of harm (no physically hurting or stealing from each other). But they still disagree on things like "does pornography harm people", "do cigarette ads harm people", "does homosexuality harm the institution of marriage which in turn harms people?", "does soda harm people", etc.
And this is bad not just because everyone wastes all this time fighting over norms, but because the nature of their disagreement incentivizes them to fight over what harm even is.
And this in turn incentivizes them to fight over both definitions of words (distracting and time-wasting) and what counts as evidence or good reasoning through a politically motivated lens. (Which makes it harder to ever use evidence and reasoning to resolve issues, even uncontroversial ones)

And then...

Imagine someone discovers an archipelago of empty islands. And instead of continuing to fight, the people who want to live in Sciencetopia go off to found an island-state based on ideal scientific processes, and the people who want to live in Libertopia go off and found a society based on the strict principle of harm, and the people who want to live in Christiantopia go found a fundamentalist Christian commune.
This lets you test more interesting ideas. If a hundred people have to agree on something, you'll only get to try things that you can get 50+ people on board with (due to crowd inertia, regardless of whether you have a formal democracy).
But maybe you can get 10 people to try a more extreme experiment. (And if you share knowledge, both about experiments that work and ones that don't, you can build the overall body of community-knowledge in your social world)

Taking this a step farther is the idea of Public Archipelago, with islands that overlap.

Let people create their own spaces. Let the conversations be restricted as need be, but centralized and public, so that everyone at least has the opportunity to follow along, learn, respond and build off of each other’s ideas, instead of having to network their way into various social/internet circles to keep up with everything.

This necessarily means that not all of LessWrong will be a comfortable place to any given person, but it at least means a wider variety of people will be able to use it, which means a wider variety of ideas can be seen, critiqued, and built off of.

Healthy Disagreement

Now, there’s an obvious response to my earlier point about “it’s frustrating to have to explain 101-level things to people all the time.”

Maybe you’re not explaining 101-level things. Maybe you’re actually just wrong about the foundations of your ideas, and your little walled garden isn’t a 201 space, it’s an echo chamber built on sand.

This is, indeed, quite a problem.

It’s an even harder problem than you might think at first glance. It’s difficult to offer an informed critique that’s actually useful. I’m reminded of Holden Karnofsky’s Thoughts on Public Discourse:

For nearly a decade now, we've been putting a huge amount of work into putting the details of our reasoning out in public, and yet I am hard-pressed to think of cases (especially in more recent years) where a public comment from an unexpected source raised novel important considerations, leading to a change in views.
This isn't because nobody has raised novel important considerations, and it certainly isn't because we haven't changed our views. Rather, it seems to be the case that we get a large amount of valuable and important criticism from a relatively small number of highly engaged, highly informed people. Such people tend to spend a lot of time reading, thinking and writing about relevant topics, to follow our work closely, and to have a great deal of context. They also tend to be people who form relationships of some sort with us beyond public discourse.
The feedback and questions we get from outside of this set of people are often reasonable but familiar, seemingly unreasonable, or difficult for us to make sense of.

The obvious criticisms of an idea may have obvious solutions. If you interrupt a 301 discussion to ask “but have you considered that you might be wrong about everything?”... well, yes. They have probably noticed the skulls. This often feels like 2nd-year undergrads asking post-docs to flesh out everything they’re saying, using concepts only available to the undergrads.

Still, peer review is a crucial part of the knowledge-building process. You need high quality critique (and counter-critique, and counter-counter-critique). How do you square that with giving an author control over their conversation?

I hope (and fairly confidently believe) that most authors, even ones employing Reign-of-Terror style moderation policies, will not delete comments willy nilly – and the site admins will be proactively having conversations with authors who seem to be abusing the system. But we do need safeguards in case this turns out to be worse than we expect.

The answer is pretty straightforward: it’s not at all obvious that the public discussion of a post has to be on that particular post’s comment section.

(Among other things, this is not how most science works, AFAICT, although traditional science leaves substantial room for improvement anyhow).

If you disagree with a post, and the author deletes or blocks you from commenting, you are welcome to write another post about your intellectual disagreement.

Yes, this means that people reading the original post may come away with an impression that a controversial idea is more accepted than it really is. But if that person looks at the front page of the site, and the idea is controversial, there will be both other posts and recent comments arguing about its merits.

It also means that no, you don’t automatically get the engagement of everyone who read the original post. I see this as a feature, not a bug.

If you want your criticism to be read, it has to be good and well written. It doesn’t have to fit within the overall zeitgeist of what’s currently popular or what the locally high-status people think. Holden’s critical Thoughts on Singularity Institute is one of the most highly upvoted posts of all time. (If anything, I think LessWrong folk are too eager to show off their willingness to dissent and upvote people just for being contrarian).

It does suck that you must be good at writing and know your audience (which isn’t necessarily the same as good at thinking). But this applies just as much to being the original author of an idea, as to being a critic.

The author of a post doesn’t owe you their rhetorical strength and audience and platform to give you space to write your counterclaim. We don’t want to incentivize people to protest quickly and loudly to gain mindshare in a popular author’s comment section. We want people to write good critiques.

Meanwhile, if you're making an effort to understand an author's goals and frame disagreement in a way that doesn't feel like an attack, I don't anticipate this coming up much in the first place.

ii. Expectations and Trust

I think a deep disagreement underlies a lot of the debate over moderation: what sort of trust is important to you?

This is a bit of a digression – almost an essay unto itself – but I think it’s important.

Elements of Trust

Defining trust is tricky, but here’s a stab at it: “Trust is having expectations of other people, and not having to worry about whether those expectations will be met.”

This has a few components:

  • Which expectations do you care about being upheld?
  • How much do you trust people in your environment to uphold them?
  • What strategies do you prefer to resolve the cognitive load that comes when you can't trust people (or, are not sure if you can)?

Which expectations?

You might trust people…

  • to keep their promises and/or mean what they say.
  • to care about your needs.
  • to uphold particular principles (clear thinking, transparency).
  • to be able (and willing) to perform a particular skill (including things like noticing when you’re not saying what you mean).

Trust is a multiple-place function. Maybe you trust Alice to reliably provide all the relevant information even if it makes her look bad. You trust Bob to pay attention to your emotional state and not say triggering things. You can count on Carl to call you on your own bullshit (and listen thoughtfully when you call him on his). Eve will reliably enforce her rules even when it’s socially inconvenient to do so.

You may care about different kinds of trust in different contexts.

How much do you trust a person or space?

For the expectations that matter most to you, do you generally expect them to be fulfilled, or do you have to constantly monitor and take action to ensure them?

With a given person, or a particular place, is your guard always up?

In high trust environments, you expect other people to care about the same expectations you do, and follow through on them. This might mean looking out for each other’s interests. Or, merely that you’re focused on the same goals such that “each other’s interests” doesn’t come into play.

High trust environments require you to either personally know everyone, or to have strong reason to believe in the selection effects on who is present.

Examples:

  • A small group of friends by a campfire might trust each other to care about each other’s needs and try to ensure they are met (but not necessarily to have particular skills required to do so).
  • A young ideological startup might trust each other to have skills, and to care about the vision of the company (but, perhaps not to ‘have each other’s back’ as the company grows and money/power becomes up for grabs)
  • A small town, where families have lived there for generations and share a culture.
  • A larger military battalion, where everyone knows that everyone knows that everyone went through the same intense training. They clearly have particular skills, and would face punishment if they didn’t follow orders from high command.

Low trust environments are where you have no illusions that people are looking out for the things you care about.

The barriers to entry are low. People come and go often. People often represent themselves as if they are aligned with you, but this is poor evidence for whether they are in fact aligned with you. You must constantly have your guard up.

Examples:

  • A large corporation where no single person knows everybody
  • A large community with no particular barrier to entry beyond showing up and talking as if you understand the culture
  • A big city, with many cultures and subcultures constantly interfacing.

Transparent Low Trust, Curated High Trust

Having to watch your back all the time is exhausting, and there are at least two strategy-clusters I can think of to alleviate that.

In a transparent low trust environment, you don’t need to rely on anyone’s word or good intentions. Instead, you rely upon transparency and safeguards built into the system.

It’s your responsibility to make use of those safeguards to check that things are okay.

A curated high trust environment has some kind of strong barrier to entry. The advantage is that things can move faster, be more productive, require less effort and conflict, and focus only on things you care about.

It’s the owner of the space’s responsibility to kick people out if they aren’t able to live up to the norms in the space. It’s your responsibility to decide whether you trust the space, and leave if you don’t.

The current atmosphere at LessWrong is something like “transparent medium trust.” There are rough, site-level filters on what kind of participation is acceptable – much more so than the average internet hangout. But not much micromanaging on what precise expectations to uphold.

I think some people are expecting the new moderation tools to mean “we took a functioning medium trust environment and made it more dangerous, or just weirdly tweaked it, for the sake of removing a few extra annoying comments or catering to some inexplicable whims.”

But part of the goal here is to create a fundamental phase shift, where types of conversations are possible that just weren't in a medium-trust world.

Why High Trust?

Why take the risk of high trust? Aren’t you just exposing yourself to people who might take advantage of you?

I know some people who’ve been repeatedly hurt, by trying to trust, and then having people either trample all over their needs, or actively betray them. Humans are political monkeys that make up convenient stories to make themselves look good all the time. If you aren’t actually aligned with your colleagues, you will probably eventually get burned.

And high trust environments can’t scale – too many people show up with too many different goals, and many of them are good at presenting themselves as aligned with you (they may even think they’re aligned with you), but… they are not.

LessWrong (most likely) needs to scale, so it’s important for there to be spaces here that are Functioning Low Trust, that don’t rely on load-bearing authority figures.

I do not recommend this blindly to everyone.

But. To misquote Umesh – "If you’re not occasionally getting backstabbed, you’re probably not trusting enough."

If you can trust the people around you, all the attention you put into watching your back can go to other things. You can expect other people to look out for your needs, or help you in reliable ways. Your entire body physiologically changes, no longer poised for fight or flight. It’s physically healthier. In some cases it’s better for your epistemics – you’re less defensive when you don’t feel under attack, making it easier to consider opposing points of view.

I live most of my life in high trust environments these days, and… let me tell you holy shit when it works it is amazing. I know a couple dozen people who I trust to be honest about their personal needs, to be reasonably attentive to mine, who are aligned with me on how to resolve interpersonal stuff as well as Big Picture How the Universe Should Look Someday.

When we disagree (as we often do), we have a shared understanding of how to resolve that disagreement.

Conversations with those people are smooth, productive, and insightful. When they are not smooth, the process for figuring out how to resolve them is smooth or at least mutually agreed upon.

So when I come to LessWrong, where the comments assume at-most-medium trust… where I’m not able to set a higher or different standard for a discussion beyond the lowest common denominator...

It’s really frustrating and sad, to have to choose between a public-untrusted and private-but-high-trust conversation.

It’s worth noting: I participate in multiple spaces that I trust differently. Maybe I wouldn’t recommend particular friends join Alice’s space because, while she’s good at stating her clear reasons for things, evaluating evidence carefully, and making sure others do the same, she’s not good at noticing when you’re triggered and pausing to check in if you’re okay.

And maybe Eve really needs that. That’s usually okay, because Eve can go to Bob’s space, or run her own.

Sometimes, Bob’s space doesn’t exist, and Eve lacks the skills to attract people to a new space. This is really important and sad. I personally expect LessWrong to contain a wide distribution of preferences that can support many needs, but it probably won’t contain something for everyone.

Still, I think it’s an overall better strategy to make it easier to create new subspaces than to try to accommodate everyone at once.

Getting Burned

I expect to get hurt sometimes.

I expect some friends (or myself) to not always be at our best. Not always self-aware enough to avoid falling into sociopolitical traps that pit us against each other.

I expect that at least some of the people I’m currently aligned with, I may eventually turn out to be unaligned with, and to come into conflict that can’t be easily resolved. I’ve had friendships that turned weirdly and badly adversarial and I spent months stressfully dealing with it.

But the benefits of high trust are so great that I don’t regret for a second having spent the first few years with those friends in a high-trust relationship.

I acknowledge that I am pretty privileged in having a set of needs and interpersonal preferences that are easier to fit into a high trust environment. There are people who just don’t interface well with the sort of spaces I thrive in, who may never get the benefits of high trust, and that... really sucks.

But the benefit of the Public Archipelago model is that there can be multiple subsections of the site with different norms. You can participate in discussions where you trust the space owner. Some authors may clearly spell out norms and take the time to clearly explain why they moderate comments, and maybe you trust them the most.

Some authors may not be willing to take that time. Maybe you trust them less, or maybe you know them well enough that you trust them anyhow.

In either case, you know what to expect, and if you’re not okay with it, you either don’t participate, or respond elsewhere, or put effort into understanding the author’s goals so that you are able to write critiques that they find helpful.

iii. The Fine Details

Okay, but can’t we at least require reasons?

I don’t think many people were resistant to deleting comments – the controversial feature was “delete without trace.”

First: spam bots, and dedicated adversaries with armies of sockpuppets, make it necessary for this tool to at least be available. (LW2.0 has had posts with hundreds of spam or troll comments that we quietly delete and IP-ban.)

For non-obvious spam…

I do hope delete without trace is used rarely (or that authors send the commenter a private reason when doing so). We plan to implement the moderation log Said Achmiz recommended, so that if someone is deleting a lot of comments without trace you can at least go and check, and notice patterns. (We may change the name to “delete and hide”, since some kind of trace will be available).

All things being equal, clear reasons are better than none, and more transparency is better than less.

But all things are not equal.

Moderation is work.

And I don’t think everyone understands that the amount of work varies a lot, both by volume, and by personality type.

Some people get energized and excited by reading through confrontational comments and responding.

Some people find it incredibly draining.

Some people get maybe a dozen comments on their articles a day. Some get barely any at all. But some authors get hundreds, and even if you’re the sort of person who is energized by it, there are only so many hours in a day and there are other things worth doing.

Some comments are not just mean or dumb, but immensely hateful and triggering to the author, and simply glancing at a reminder that it existed is painful – enough to undo the personal benefit they got from having written their article in the first place.

For many people, figuring out how to word a moderation notice is stressful, and I’m not sure whether it’s more intense on average to have to say:

“Please stop being rude and obnoxiously derailing threads”

vs

“I’m sorry, I know you’re trying your best, but you’re asking a lot of obvious questions and making subtly bad arguments in ways that soak up the other commenters’ time. The colleagues that I’m trying to attract to these discussion threads are tired of dealing with you.”

Not to mention that moderation often involves people getting angry at you, so you don't just have to come up with the initial posted reason, but also deal with a bunch of followup that can wreck your week. Comments that leave a trace invite people to argue.

Moderation can be tedious. Moderation can be stressful. Moderation is generally unpaid. Moderators can burn out or decide “you know what, this just isn’t worth the time and bullshit.”

And this is often the worst deal for the best authors, since the best authors attract more comments, and sometimes end up acquiring a sort of celebrity status – commenters stop quite thinking of them as people, and feel justified (or even obligated) to go out of their way to take them down a peg.

If none of this makes sense to you, if you can’t imagine moderating being this big a deal… well... all I can say is it just really is a god damn big deal. It really really is.

There is a tradeoff we have to make, one way or another, between requiring our best authors to follow clear, legible procedures, and freeing them up to write and engage more.

Requiring the former can end up punishing the latter – and it has.

We prioritized building the delete-and-hide function because Eliezer asked for it and we wanted to get him posting again quickly. But he is not the only author to have asked and expressed appreciation for it.

Incentivizing Good Ideas and Good Criticism

I’ll make an even stronger claim here: punishing idea generation is worse than punishing criticism.

You certainly need both, but criticism is easier. There might be environments where there isn’t enough quantity or quality of critics, but I don’t think LessWrong is one of them. Insofar as we don’t have good enough criticism, it’s because the critiques are nitpicky and unhelpful instead of trying to deeply understand unfamiliar ideas and collaboratively improve their load-bearing cruxes.

And meanwhile, I think the best critics also tend to be the best idea-generators – the two skills are in fact tightly coupled – so making LessWrong a place they feel excited to participate in seems very important.

It’s possible to go too far in this direction. There are reasonable cases for making different tradeoffs, which different corners of the internet might employ. But our decision on LessWrong is that authors are not obligated to put in that work if it’s stressful.

Overton Windows, and Personal Criticism

There are a few styles of comments that reliably make me go “ugh, this is going to become a mess and I really don’t want to deal with it.” Comments whose substance is “this idea is bad, and should not be something LessWrong talks about.”

In that moment, the conversation stops being about whatever the idea was, and starts being about politics.

A recent example is what I’d call “fuzzy system 1 stuff.” The Kensho and Circling threads felt like they were mostly arguing about “is it even okay to talk about fuzzy system 1 intuitions in rational discourse?”. If you wanted to talk about the core ideas and how to use them effectively, you had to wade through a giant, sprawling demon thread.

Now, it’s actually pretty important whether fuzzy system 1 intuitions have a place in rational discourse. It’s a conversation that needs to happen, a question that probably has a right answer that we can converge on (albeit a nuanced one that depends on circumstances).

But right now, it seems like the only discussion that’s possible to have about them is “are these in the Overton window or not?”. There needs to be space to explore ideas that aren’t currently in the accepted paradigm.

I’d even claim that doing that productively is one of the things rationality is for.

Similar issues abound with critiquing someone’s tone, or otherwise critiquing a person rather than an idea. Comments like that tend to quickly dominate the discussion and make it hard to talk about anything else. In many cases, if the comment were a private message, it could have been taken as constructive criticism instead of a personal attack that inflames people’s tribal instincts.

For personal criticism, I think the solution is to build tools that make private discussion easier.

For Overton Window political brawls, I think the brawl itself is inevitable (if someone wants to talk about a controversial thing, and other people don’t want them to talk about the controversial thing, you can’t avoid the conflict). But I think it’s reasonable for authors to say “if we’re going to have the Overton discussion, can we have it somewhere else? Right here, I’m trying to talk about the ramifications of X if Y is true.”

Meanwhile, if you think X or Y are actively dangerous, you can still downvote their post. Instead of everyone investing endless energy in multiple demon threads, the issue can be resolved via a single thread, and the karma system.

I don’t think this would have helped with the most recent thread, but it’s an option I’d want available if I ever explored a controversial topic in the future.

iv. Towards Public Archipelago

This is a complicated topic, and the decision is going to affect people. If you’re the sort of person for whom the status quo seemed just perfect, your experience is probably going to become worse.

I do think that is sad, and it’s important to own it, and apologize – I think having a place that felt safe and home and right become a place that feels alienating and wrong is in fact among the worst things that can happen to a person.

But the consequences of not making some major changes seem too great to ignore.

The previous iteration of LessWrong died. It depended on skilled writers continuously posting new content. It dried up as, one by one, they decided LessWrong wasn’t the best place for them to publish or brainstorm.

There are a lot of reasons they made that choice. I don’t know that our current approach will solve the problem. But I strongly believe that to avoid the same fate, LessWrong 2.0 will need to be structurally different in some ways.

An Atmosphere of Experimentation

We have some particular tools, and plans, to give authors the same control they’d have over a private blog, to reduce the reasons to move elsewhere. This may or may not help. But beneath the moderation tools and Public Archipelago concept is an underlying approach of experimentation.

At a high level, the LessWrong 2.0 team will be experimenting with the site design. We want this to percolate through the site – we want authors to be able to experiment with modalities of discussion. We want to provide useful, flexible tools to help them do so.

Eventually we’d like users to experiment both with their overall moderation policy and culture, as well as the norms for individual posts.

Experiments I’d personally like to see:

  • Posts where all commenters are required to fully justify their claims, such that complete strangers with no preconceptions can verify them
  • Posts where all commenters are required to take a few ideas as given, to see if they have interesting implications in 201 or 301 concept space
  • Discussions where comments must follow particular formats and will be deleted otherwise, such as the r/AskHistorians subreddit or stackoverflow.
  • Discussions where NVC is required
  • Discussions where NVC is banned
  • Personal Blogposts where all commenters are only allowed to speak in poetry.
  • Discussions where you need to be familiar with graduate level math to participate.
  • Discussions where authors feel free to delete any comment that doesn’t seem like it’s pulling its attentional weight.
  • Discussions where only colleagues the author personally knows and trusts get to participate.

Bubbling Up and Peer Review

Experimentation doesn’t mean splintering, or that LessWrong won’t have a central ethos connecting it. The reason we’re allowing user moderation on Frontpage posts is that we want good ideas to bubble up to the top, and we don’t want it to feel like a punishment if a personal blogpost gets promoted to Frontpage or Curated. If an idea (or discussional experiment) is successful, we want people to see it, and build off it.

Still, what sort of experimentation and norms to expect will vary depending on how much exposure a given post has.

On personal blogposts, pretty much anything goes.

On Frontpage posts, we will want to have some kind of standard, which I’m not sure we can formally specify. We’re restricting moderation tools to users with high karma, so that only people who’ve already internalized what LessWrong is about have access to them. We want experimentation that productively explores rational-discussion-space. (If you’re going to ask people to only comment in haiku on a frontpage post, you should have a pretty good reason as to why you think this will foster intellectual progress).

If you’re deleting anyone who disagrees with you even slightly, or criticizing other users without letting them respond, we’ll be having a talk with you. We may remove your mod privileges or restrict them to your personal blogposts.

Curated posts will (as they already do) involve a lot of judgment calls on the sitewide moderation team.

At some point, we might explore some kind of formal peer review process, for ideas that seem important enough to include in the LessWrong canon. But exploring that in full is beyond the scope of this post.

Norms for this comment section

With this post, I’m kinda intentionally summoning a demon thread. That’s okay. This is the official “argue about the moderation Overton window changing” discussion space.

Still, some types of arguing seem more productive than others. It’s especially important for this particular conversation to be maximally transparent, so I won’t be deleting anything except blatant trolling. Comments that are exceptionally hostile, I might comment-lock, but leave visible with an explicit reason why.

But, if you want your comments or concerns to be useful, some informal suggestions:

Failure modes to watch out for:

  • If the Public Archipelago direction seems actively dangerous or otherwise awful, try to help solve the underlying problem. Right now, one of the most common concerns we’ve heard from people who we’d like to be participating on LessWrong is that the comments feel nitpicky, annoying, focused on unhelpful criticism, or unsafe. If you’re arguing that the Archipelago approach is fundamentally flawed, you’ll need to address this problem in some fashion. Comments that don’t at least acknowledge the magnitude of the tradeoff are unlikely to be persuasive.
  • If other commenters seem to have vastly different experiences than you, try to proactively understand them – solutions that don’t take into account diversity of experience are less useful.

Types of comments I expect to be especially useful:

  • Considerations we’ve missed. This is a fairly major experiment. We've tried to be pretty thorough about exploring the considerations here, but there are probably a lot we haven’t thought of.
  • Pareto Improvements. I expect there are a lot of opportunities to avoid making tradeoffs, instead finding third options that get many different benefits at once.
  • Specific tools you’d like to see. Ideally, tools that would enable a variety of experiments while ensuring that good content still gets to bubble up.

Ok. That was a bit of a journey. But I appreciate you bearing with me, and am looking forward to having a thorough discussion on this.

Comments (176)

One general class of solutions is tools that satisfy an author's goals in an easy fashion, while keeping discussion as visible/transparent as possible.

An idea Ben and I came up with was having an off-topic comment section of a post. Authors get to decide what is "on topic" for a discussion, and there's an easily accessible button that labels a comment "off topic". Off topic comments move to a hidden-by-default section at the bottom of the comments. Clicking it once unveils it and leaves it unveiled for the reader in question (and it has some kind of visual cue to let you know that you've entered off-topic world).

(child comments that get tagged as off-topic would be removed from their parent comment if it's on-topic, but in the off-topic section they'd include a link back to their original parent for context)
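(If it helps to make the mechanic concrete, here's a minimal sketch of that tagging behavior. The type and field names are hypothetical illustrations, not the actual LW2.0 schema:)

```typescript
// Hypothetical sketch of the "off-topic section" idea above. The names here
// (Comment, offTopic, originalParentId, splitComments) are invented for
// illustration; this is not the real LessWrong 2.0 data model.
interface Comment {
  id: string;
  parentId: string | null;    // null for top-level comments
  offTopic: boolean;          // set when the author hits the "off topic" button
  originalParentId?: string;  // lets an off-topic child link back to its context
}

// Split a post's comments into the normal, visible tree and the
// hidden-by-default off-topic section at the bottom of the page.
function splitComments(comments: Comment[]): { onTopic: Comment[]; offTopic: Comment[] } {
  const onTopic = comments.filter(c => !c.offTopic);
  const offTopic = comments
    .filter(c => c.offTopic)
    .map(c => ({
      ...c,
      // Detach the tagged comment from its on-topic parent, but remember that
      // parent so the off-topic section can render a "see original context" link.
      originalParentId: c.parentId ?? undefined,
      parentId: null,
    }));
  return { onTopic, offTopic };
}
```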

A common problem that bothers me with my own comment section is comments that are... okay... but I don't think they're worth the attention of most readers. Deleting them (with or without hiding them) feels meaner than the comment deserves. Moving them to an offtopic section feels at least a little mean, but more reasonable.

A related idea is "curated" comments that authors and the mod-team can label, which get a highlighted color and move as high in the comment tree as they can (i.e. to the top of the comments list if they're a top level comment, or the top of their parent comment's children)

A common problem that bothers me with my own comment section is comments that are... okay... but I don't think they're worth the attention of most readers. Deleting them (with or without hiding them) feels meaner than the comment deserves. Moving them to an offtopic section feels at least a little mean, but more reasonable.

Maybe you could invert punishments to rewards, and create an "author highlights" section instead of an "off topic" section?

If you're running a blog and want to apply this approach to comment deletion, then instead of framing it as a "reign of terror" where you mass-delete comments from your blog, you would have an "email the author" field below each of your blog posts and a "featured responses" section highlighting the best emails you've gotten on this topic. 'Being accepted to this scientific journal feels like an honor' is connected to 'being rejected from this journal doesn't feel like an attack or affront.'

gwillen (6y, +9)
The New World moderation over at Hacker News has been handling off-topic stuff similarly to how you describe (perhaps that's where you got the idea). My impression is that they don't have so much a system as just a button that reparents a post to the root and marks it as 'sorts last'. This seems to work pretty well IMO.

I thought about that variant. I think I personally prefer having them actively hidden, because a major thing I want it for is attention management. When there are 100+ comments, I think it's a valuable service to split them into "here's what you should definitely look at if you want to be following key intellectual progress from this conversation" and "here's what you should look at if you want to participate in long, sprawling conversations that I think are more about people engaging socially." (I think the latter's important, just not something I want to force everyone to read).

If the comments are just moved to the bottom, it's not at all clear where to stop reading, and if I see a lot of comments I sometimes feel too intimidated to even start.

Chris_Leong (6y, +3)
"When there are 100+ comments, I think it's a valuable service to split them" - I agree with splitting them as well and I don't have an issue with hiding them, but there's two ways of splitting them. There's the hiding option you've proposed and there's the option of having a seperate Offtopic Comments header that marks that a new section of comments has begun. I'm not advocating for this as better than the other option, just listing it as a possibility.
Elizabeth (6y, +8)
I love the idea of an off-topic or "deemphasize" button, for the reasons you describe.
Raemon (5y, +6)
My new belief: this option should be called "collapse". Rather than having a new element in the comments section, it just forces a comment to be collapsed-by-default, and sorted to the bottom of the page (independent of how much karma it has), possibly not showing up in the Recent Discussion section. This has two benefits of a) not having to create any new sections that take up conceptual space on the site, instead using existing site mechanics, b) is more ideologically neutral than "on-topic / off-topic", which would have been a bit misleading/deceptive about what sort of uses the offtopic button might have.
Gurkenglas (6y, +4)
This is the only approach I can see that will not leave open the possibility of a mirror that unhides deleted items - you simply integrate that mirror into the website. I'd think that making the hidden section available via a button would trigger those who do not want to be reminded of bad comments. Perhaps make it a flag in the URL? I'd like to see an option to instead read hidden comments not at the end of the thread, but inlined, marked, to where they would be if not hidden.
Gurkenglas (6y, +2)
Of course there's no reason to split the reader experiences between what the author wants and everything-goes. Let anyone submit metadata on any post, build filters out of the metadata for the community, and use any filter to choose or highlight what they see. Example filters:

  • Anything goes.
  • That filter which the author has chosen for me, or if none, that which they use to view this post.
  • Posts which I have not deemed ban-worthy, by people I have not deemed ban-worthy.
  • That which the Sunshine Regiment has deemed worthy of highlight or hiding.
  • Posts that fit in my filter bubble, having been upvoted by people who usually upvote the same sorts of things I do, or written by people who usually reply to the same people I would reply to.

An author could choose what filter they would suggest to the user if the Sunshine Regiment attached the front page tag, and then the Sunshine Regiment can use that to decide whether to attach it. Karma would play no role in this beyond being another piece of metadata.
gwillen (6y, +4)
This is reminiscent of Usenet-style killfiles, only fancier. I think anybody designing a discussion site could learn a lot from Usenet and newsreaders.
PeterBorah (6y, +4)
This sounds like a really good idea. For my personal tastes, I think this would hit the sweet spot of getting to focus attention on stuff I cared about, without feeling like I was being too mean by deleting well-intentioned comments.
Chris_Leong (6y, -1)
I would suggest experimenting with an off-topic section first and reserving the ability to delete comments for mods. It would more transparently give us an indication of how this is likely to play out if high karma users were later given the delete ability. Further, by providing this feature first, high karma users would be more likely to default to marking as off-topic rather than deleting. If Eliezer wants to delete comments on his posts, just give him mod privileges if he doesn't already have them. That's far better than distributing them widely as per the current plan, even if it is limited to their own posts.
Raemon (6y, +6)
So on one hand "just give Eliezer mod powers because like the whole site was his idea and it seems fair, and most other users have more restrictions" isn't obviously wrong to me. My main preference for not-that-plan is that I just don't actually expect people to abuse their powers much, and if it turns out to be a problem a) it won't be that hard to undo (both for individual comments, and for users with mod privileges), and b) the people who did so will almost certainly end up facing social judgment.
John_Maxwell (6y, +7)
Hm, Oli told me that complaining about bad moderation is not allowed. Has that policy changed? If not, I don't see how social judgement can serve as a corrective. One advantage of a non-participant referee is that a participant referee has to worry more about social judgement for appearing partisan. (I also think participant referees are, in fact, more partisan. But the degree to which partisanship is affecting one's moderation preferences might not always be introspectively obvious.) A possible compromise would be to have authors write moderation guidelines for third party moderators to follow on their posts. However, it takes time to accumulate enough examples to know what good guidelines are. I suppose this is an argument in favor of centralized guideline development.
Dr. Jamchie (6y, -2)
I see a risk with this approach: the author will have the opportunity to hide comments that don't agree with his opinion. This might kill some discussions in the author's favor.

For easy reference, here is a summary of the claims and concepts included in this post. (But, note that I think reading the full post first makes more sense).

  • The Problem: discussion is happening in private by default
    • This requires people to network their way into various social circles to stay up to date
    • Since you can’t rely on everyone sharing mid-level concepts, it’s harder to build them into higher level concepts.
  • Private venues like facebook...
    • Have at least some advantages for developing early stage ideas
    • Are often more enjoyable (so early-stage-ideas tend to end up staying there much longer)
  • Chilling Effects are concerning, but:
    • There’s a chilling effect on criticism if authors get to control their discussions
    • There’s a chilling effect on authors if they don’t – the costs of requiring authors to have fully public discussions on every post are a lot higher than you think
  • Healthy disagreement / Avoiding Echo Chambers
    • Is important to maintain
    • Does not intrinsically require a given post’s comment section to be fully public – people can write response posts
    • Since the status quo is many people discussing things privately, it’s not even clear that authorial control over comments is net negative
... (read more)

I think rationalists systematically overestimate the benefits of public discussion (and underestimate the costs).

When something is communicated one-on-one, the cost is effectively doubled because it consumes the sender's time. I think this is roughly offset by a factor of 2 efficiency bonus for tailored/interactive talk relative to static public talk. Even aside from the efficiency of tailored talk, a factor of 2 isn't very big compared to the other costs and benefits.
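To spell out the arithmetic behind that claim (a sketch of the model as I read it; the notation is mine, not Paul's): let $T$ be the reader time needed to absorb an idea from a static write-up, $T_w$ the one-time cost of writing it, and $N$ the number of readers.

```latex
% Sketch of the per-reader cost comparison implied above (one reading of the claim).
\[
C_{\text{broadcast}}(N) \;=\; \frac{T_w}{N} + T \;\xrightarrow{\;N \to \infty\;}\; T,
\qquad
C_{\text{one-on-one}} \;\approx\; \underbrace{\tfrac{T}{2}}_{\text{listener}} + \underbrace{\tfrac{T}{2}}_{\text{explainer}} \;=\; T.
\]
% The "factor of 2 efficiency bonus" is the assumption that a tailored conversation
% conveys the same understanding in half the listener time; since the explainer
% spends a matching amount of time, the per-reader costs roughly cancel in the
% limit of many readers.
```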

So I think the advantages of public discussion are not mostly the efficiency from many people being able to read something that got written once. Instead they are things like:

  • One-on-one explanations often require the listener to compensate the explainer for their time. Right now we have no good institutions for this problem, so it relies on the explainer caring about what the listener believes or expecting to receive something in return.
  • By investing in really great explanations you can save total time compared to one-on-one explanations. I think a very small fraction of public discussion falls into this category / meets this bar.
  • Relative to the potential of both modes, we are currently worse (!) at one-on-one tra
... (read more)
Zvi (6y, +220)

Given that rationalists estimate the value of public discourse (and also private discourse) higher than almost anyone else, it is certainly plausible that we overestimate it. Given I estimate it higher than most rationalists, it's even more plausible that I overestimate it. But if anything, I think we underestimate it, and Paul points to some of the reasons why.

Paul is implicitly, I believe, employing a sort of 'student-teacher' model of communication; the teacher transmits communication, the student learns it. The customization and lack of politicization in person is about a factor of two better, which then roughly 'cancels out' the increased cost. But if that's true, it's then pointed out that there are a number of other key advantages to public discourse. Paul's extra reasons seem super important, especially if we add creation of common knowledge and vocabulary to the last one. He also points out that teacher time is typically more valuable than student time, meaning the cost ratio is more than two, perhaps much more. And time spent writing dwarfs the time each person spends reading, so that further increases the ratio.

So it seems like e... (read more)

alkjash (6y, +4)
+1 on "learning through writing," this factor alone is my main disagreement with Paul's comment. For most people the biggest low-hanging improvement to their thinking might just be to write it down. Once it's written down it's not nearly that much effort to edit enough to post in a public discussion.

I think one of the most useful things a public discussion can do is create more shared terminology, in order to facilitate future discussions, both private and public. Examples that I think have been clearly good (aside from, like, the whole of the Sequences): Moloch, Slack.

Broadcasting is good for transferring information from people with more valuable time to people with less valuable time – in those cases "using up the explainer's time" much more than doubles the cost. I expect it would be better to transmit information 1-on-1 to specialized explainers who then broadcast it, but we aren't good at that.

This suggests some models:

  • (thinker -> conversation partner) repeated for every member of the community
  • thinker => everyone in the community
  • thinker -> explainer => everyone in the community

(where '=>' implies telling a lot of people via an essay)

These models are in order of decreasing net cost to the system, if we assume the thinker's time is the most expensive.

Let me propose another model:

  • thinker => group of readers in a nearby inferential space => everyone in the community

I think there's room for LW to intervene on this part of the model. Here are some ways to do that:

  • Create a new type of post which is an 'explainer' - something that does not claim original content, but does purport to re-explain another's content in a more broadly understandable way. Give the best o
... (read more)

Zvi already mentioned this, but I just want to emphasize that I think error finding and correction (including noticing possible ambiguities and misunderstandings) is one of the most important functions of discussion, and it works much better in a group discussion (especially for hard to detect errors) because everyone interested in the discussion can see the error as soon as one person notices it and points it out. If you do a series of one-on-one discussions, to achieve the same effect you'd have to track down all the previous people you talked to and inform them. I think in practice very few people would actually bother to do that, due to subconscious status concerns among other reasons, which would lead to diverging epistemic states among people thinking about the topic and ensuing general confusion. (Sometimes an error is found that the "teacher" doesn't recognize as an error due to various cognitive biases, which would make it impossible to track down all the previous discussants and inform them of the error.)

Zvi's "asynchronous and on demand" is also a hugely important consideration for me. In an one-on-one discussion, the fact that I can ask questions and ... (read more)

2paulfchristiano6y
This seems orthogonal to the broadcast vs. 1-on-1 model. E.g., email threads are a thing, as are comment threads whose primary value is a dialog between two people.
9Ben Pace6y
While I broadly agree with the statement, the current top contender in my mind for why this is wrong is a model whereby resolving disagreements between key researchers in a field can be aided significantly by the other researchers, who hash out the lower-level details of the key researchers' abstract intuitions. An example of someone taking another's ideas and implementing them concretely is Eliezer's A Human's Guide to Words being followed up by Scott Alexander's application of it in the sequence Categorisation and Concepts. This helps to provide both a better understanding and a test of the abstract model. If two researchers disagree, others in their respective nearby inferential spaces can hash out many of the details. Naturally, this conversation can be aided by public discussion. This is a model I've thought about but don't yet feel I have strong evidence for.
8John_Maxwell6y
The factor of 2 is a multiplier on the number of readers. So the number of readers matters. I think there are more lurkers on LW than people realize. See e.g. this obscure poll of mine that got over 100 votes from logged-in users. It wouldn't surprise me if I am speaking to an audience of over 50 people while typing into this comment box. An advantage of public discourse is that we have good methods for connecting readers with the posts that are likely to be most impactful for them. They can look at post titles, have posts suggested by friends, etc. Finding high-value conversations in real life is more haphazard: networking, parties, small talk, etc. The best approach might be a hybrid, where you test your ideas on a conversational audience and publish them when you've refined them enough that they're digestible for a mass audience.
2paulfchristiano6y
The factor of 2 is in the limit of infinitely many readers. I agree that content discovery is tough. Though note that the question is largely about finding topics, not necessarily conversation partners, and here we have better mechanisms (if I'm talking with someone I know, I can bring up the topics most likely to interest them). I think that connecting people is a hard problem we're not great at, though the intuition that it looks hard is also tied up with a few of the other problems (talking from higher to lower values of time, compensating talkers).
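
(A minimal sketch of where that limit comes from, under the simplifying assumption that talking, listening, writing, and reading all take roughly the same time $t$ per person, with $N$ readers:)

$$\frac{\text{cost}_{\text{1-on-1}}}{\text{cost}_{\text{broadcast}}} = \frac{N(t_{\text{talk}} + t_{\text{listen}})}{t_{\text{write}} + N\,t_{\text{read}}} = \frac{2Nt}{(N+1)\,t} \longrightarrow 2 \quad \text{as } N \to \infty.$$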
6ESRogs6y
This feels to me like a strange model to apply here. I can see the logic of what you're saying -- if every "explanation event" requires time from an explainer and a receiver, then centralizing the explanations cuts the cost by only a factor of 2. But it feels a bit to me like saying, "Everyone should write their own software. Each software use event requires time from a programmer and a user, so writing the programs once only cuts the cost by a factor of 2." Maybe the software analogy is unfair because any given user will use the same piece of software many times. But you could replace software with books, or movies or any other information product.
9ESRogs6y
I think something like this is a crux for me: Maybe the way I would put it is: "Some people have much better things to say than others." If I have to get everything from one-on-one conversation, then I only have access to the thoughts of the people I can get access to in person. So then, whether it's worth it to do broadcasting would depend on: 1) How much does information content degrade if it's passed around by word of mouth? 2) How much better are the best ideas in our community than median ideas? 3) How valuable is it for the median person to hear the best ideas?
4paulfchristiano6y
In my envisioned world, the listener compensates the talker in some other way (if the talker is not sufficiently motivated by helping/influencing the listener). It won't usually be the case that the talker can be compensated by taking a turn as the listener, unless the desired exchange of information happens to line up perfectly that day. "How much does information content degrade if it's passed around by word of mouth?" matters for the particular alternative strategy where you try to do a bucket brigade from people with more valuable time to people with less valuable time.
6paulfchristiano6y
In the custom-made-for-you software case, the programmer spends radically more time than the user, so rewriting the software for each user much more than doubles the cost. In the custom-made-for-you talk case, the talker spends the same amount of time as the listener and so the effect is doubling. Sometimes a static explanation is radically better than a tailored one-on-one interaction, instead of slightly worse. In that case I think broadcasting is very useful, just like for software. I think that's unusual (for example, I expect that reading this comment is worse for you than covering this topic in a one-on-one discussion). Even in cases where a large amount of up-front investment can improve communication radically, I think the more likely optimal strategy is to spend a bunch of time creating an artifact that explains something well, and then to additionally discuss the issue one-on-one as a complement.
8ESRogs6y
Yes, but in part that's because being in your company is such a joy. ;-)
6paulfchristiano6y
(This might look like an explanation of why there aren't good public explanations of much of my research. That's actually an independent issue, though; I don't endorse it based on these arguments.)
5ESRogs6y
Do you have an explanation for why people would systematically overestimate? (Or is this just an observation that it appears to you that many people do overestimate, so it must be easy to do :P)
6paulfchristiano6y
Mostly introspective.
2Gurkenglas6y
I expect that much of the arguing in favor of maximum freedom for the commenter is motivated by memory of the Wild West-type areas of the internet, which some of its users have gotten used to as their safe space.

First: thank you for writing this. I, for one, appreciate seeing you lay out your reasoning like this.

Second:

We plan to implement the moderation log Said Achmiz recommended, so that if someone is deleting a lot of comments without trace you can at least go and check, and notice patterns.

I applaud this decision, obviously, and look forward to this feature! Transparency is paramount, and I’m very gratified to see that the LW2 team takes it seriously.

Third, some comments on the rest of it, in a somewhat haphazard manner and in no particular order:


On Facebook

I have, on occasion, over the past few years, read Facebook threads wherein “rationalists” discussed rationalist-relevant topics. These threads have been of the widest variety, but one thing they had in common is that the level of intellectual rigor, impressiveness of thinking, good sense, and quality of ideas was… low. To say that I was unimpressed would be an understatement. A few times, reading Facebook posts or comments has actively lowered my level of respect for this or that prominent figure in the community (obviously I won’t name names).

It would, in my view, be catastrophic, if the quality of discussion on Less Wrong... (read more)

gjm6y130

I strongly agree about the circling/kensho discussions. Nothing in them looked to me as if anyone was saying it's not OK to talk about fuzzy system-1 intuitions in rational discourse. My impression of the most-negative comments was that they could be caricatured not as "auggh, get this fuzzy stuff out of my rational discourse" but as "yikes, cultists and mind-manipulators incoming, run away". Less-caricaturedly: some of that discussion makes me uneasy because it seems as if there is a smallish but influential group of people around here who have adopted a particular set of rather peculiar practices and thought-patterns and want to spread the word about how great they are, but are curiously reluctant to be specific about them to those who haven't experienced them already -- and all that stuff pattern-matches to things I would rather keep a long way away from.

For the avoidance of doubt, the above is not an accusation, pattern-matching is not identity, etc., etc., etc. I mention it mostly because I suspect that uneasiness like mine is a likely source for a lot of the negative reactions, and because it's a very different thing from thinking that the topic in question should somehow be off-limits in rational discourse.

9Raemon6y
FYI, I've definitely updated toward the "fuzzy-system-1 intuitions" not being the concern for most (or at least many) of the critics in Kensho and Circling. (I do think there's a related thing, though, which is that every time a post that touches upon fuzzy-system-1 stuff spawns a huge thread of intense argumentation, the sort of person who'd like to write that sort of post ends up experiencing a chilling effect that isn't quite what the critics intended. This is similar, although not quite analogous, to the way that simply having the Reign of Terror option can produce a chilling effect on critics.)

I, for one, am not anti-criticism.

I also suspect Ray isn't either, and isn't saying that in his post, but it's a long post, so I might have missed something.

The thing I find annoying to deal with is when discussion is subtly more about politics than the actual thing, which Ray does mention.

I feel like people get upvoted because

  • they voiced any dissenting opinion at all
  • they include evidence for their point, regardless of how relevant the point is to the conversation
  • they include technical language or references to technical topics
  • they cheer for the correct tribes and boo the other tribes
  • etc.

I appreciated the criticisms raised in my Circling post, and I upvoted a number of the comments that raised objections.

But the subsequent "arguments" often spiraled into people talking past each other and wielding arguments as weapons, etc. And not looking for cruxes, which I find to be an alarmingly common thing here, to the degree that I suspect people do not in fact WANT their cruxes to be on the table, and I've read multiple comments that support this.

8Raemon6y
Explicitly chiming in to clarify that yes, this is exactly my concern. I only dedicated a couple paragraphs to this (search for "Incentivizing Good Ideas and Good Criticism") because there were a lot of different things to talk about, but a central crux of mine is that, while much of the criticism you'll see on LW is good, a sizeable chunk of it is just a waste of time and/or actively harmful. I want better criticism, and I think the central disagreement is something like Said/Cousin_it and a couple others disagreeing strongly with me/Oli/Ben about what makes useful criticism. (To clarify, I also think that many criticisms in the Circling thread were quite good. For example, it's very important to determine whether Circling is training introspection/empathy (extrospection?), or 'just' inducing hypnosis. This is important both within and without the paradigm that Unreal was describing of using Circling as a tool for epistemic rationality. But, a fair chunk of the comments just seemed to me to express raw bewilderment or hostility in a way that took up a lot of conversational space without moving anything forward.)
7Unreal6y
I think I'm expecting people to understand what "finding cruxes" looks like, but this is probably unreasonable of me. This is an hour-long class at CFAR, before "Double Crux" is actually taught. And even then, I suspect most people do not actually get the finer, deeper points of Finding Cruxes. My psychology is interacting with this in some unhelpful way.
5cousin_it6y
I'm living happily without that frustration, because for me agreement isn't a goal. A comment that disagrees with me is valuable if it contains interesting ideas, no matter the private reasons; if it has no interesting ideas, I simply don't reply. In my own posts and comments I also optimize for value of information (e.g. bringing up ideas that haven't been mentioned yet), not for changing anyone's mind. The game is about win-win trade of interesting ideas, not zero-sum tug of war.
9ESRogs6y
I'm surprised to see finding cruxes contrasted with value of information considerations. To me, much of the value of looking for cruxes is that it can guide the conversation to the most update-rich areas. Correct me if I'm wrong, but I would guess that part of your sense of taste about what makes something an interesting new idea is whether it's relevant to anything else (in addition to maybe how beautiful or whatever it is on its own). And whether it would make anybody change their mind about anything seems like a pretty big part of relevance. So a significant part of what makes an idea interesting is whether it's related to your or anybody else's cruxes, no? Setting aside whether debates between people who disagree are themselves win-win, cruxes are interesting (to me) not just in the context of a debate between opposing sides located in two different people, but also when I'm just thinking about my own take on an issue. Given these considerations, it seems like the best argument for not being explicit about cruxes, is if they're already implicit in your sense of taste about what's interesting, which is correctly guiding you to ask the right questions and look for the right new pieces of information. That seems plausible, but I'm skeptical that it's not often helpful to explicitly check what would make you change your mind about something.
7cousin_it6y
I think caring about agreement first vs VoI first leads to different behavior. Here are two test cases: 1) Someone strongly disagrees with you but doesn't say anything interesting. Do you ask for their reasons (agreement first) or ignore them and talk to someone else who's saying interesting but less disagreeable things (VoI first)? 2) You're one of many people disagreeing with a post. Do you spell out your reasons that are similar to everyone else's (agreement first) or try to say something new (VoI first)? The VoI option works better for me. Given the choice whether to bring up something abstractly interesting or something I feel strongly about, I'll choose the interesting idea every time. It's more fun and more fruitful.
2ESRogs6y
Gotcha, this makes sense to me. I would want to follow the VoI strategy in each of your two test cases.
4Unreal6y
[ I responded to an older, longer version of cousin_it's comment here, which was very different from what it looks like at present; right now, my comment doesn't make a whole lot of sense without that context, but I'll leave it I guess ] This is a fascinating alternative perspective! If this is what LW is for, then I've misjudged it and don't yet know what to make of it. I disagree with the frame. What I'm into is having a community steered towards seeking truth together. And this is NOT a zero-sum game at all. Changing people's minds so that we're all more aligned with truth seems infinite-sum to me. Why? Because the more groundwork we lay for our foundation, the more we can DO. Were rockets built by people who just exchanged interesting ideas for rocket-building but never bothered to check each other's math? We wouldn't have gotten very far if this is where we stayed. So resolving each layer of disagreement led to being able to coordinate on how to build rockets and then building them. Similarly with rationality. I'm interested in changing your mind about a lot of things. I want to convince you that I can and am seeing things in the universe that, if we can agree on them one way or another, would then allow us to move to the next step, where we'd unearth a whole NEW set of disagreements to resolve. And so forth. That is progress. I'm willing to concede that LW might not be for this thing, and that seems maybe fine. It might even be better! But I'm going to look for the thing somewhere, if not here.
2cousin_it6y
(I had a mathy argument here, pointing to this post as a motivation for exchanging ideas instead of changing minds. It had an error, so retracted.)
1Unreal6y
Yup! That totally makes sense (the stuff in the link) and the thing about the coins. Also not what I'm trying to talk about here. I'm not interested in sharing posteriors. I'm interested in sharing the methods by which people arrive at their posteriors (this is what Double Crux is all about). So in the fair/unfair coin example in the link, the way I'd "change your mind" about whether a coin flip was fair would be to ask, "You seem to think the coin has a 39% chance of being unfair. What would change your mind about that?" Suppose the answer is, "Well, it depends on what happens when the coin is flipped." And let's say this is also a Double Crux for me. At this point we'd have to start sharing our evidence or gathering more evidence to actually resolve the disagreement. And once we did, we'd both converge towards one truth.
2habryka6y
I think this is a super important perspective. I also think that stating cruxes is a surprisingly good way to find good pieces of information to propagate. My model of this is something like "a lot of topics show up again and again, which suggests that most participants have already heard the standard arguments and standard perspectives. Focusing on people's cruxes helps the discussion move towards sharing pieces of information that haven't been shared yet."
3Said Achmiz6y
Let me confirm your suspicions, then: I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply. There was a good deal of discussion of this in some threads about “Double Crux” a while back (I haven’t the time right now, but later I can dig up the links, if requested). Suffice it to say that there is a deep disagreement here about the nature of disputes, how to resolve them, their causes, etc.
I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply.

This is surprising to me. A crux is a thing that if you didn't believe it you'd change your mind on some other point -- that seems like a very natural concept!

Is your contention that you usually can't find any one statement such that if you changed your mind about it, you'd change your mind about the top-level issue? (Interestingly, this is the thrust of the top comment by Robin Hanson under Eliezer's Is That Your True Rejection? post.)

5Unreal6y
I do not know how to operationalize this into a bet, but I would if I could. My bet would be something like... If a person can Belief Report / do Focusing on their beliefs (this might already eliminate a bunch of people), then I bet some lower-level belief-node (a crux) could be found that would alter the upper-level belief-nodes if the value/sign/position/weight of that cruxy node were to be changed. Note: Belief nodes do not have to be binary (0 or 1). They can be fuzzy (0-1). Belief nodes can also be conjunctive. If a person doesn't work this way, I'd love to know.
2Said Achmiz6y
There are a lot of rather specific assumptions going into your model, here, and they're ones that I find to be anywhere from "dubious" to "incomprehensible" to "not really wrong, but thinking of things that way is unhelpful". (I don't, to be clear, have any intention of arguing about this here—just pointing it out.) So when you say "If a person doesn't work this way, I'd love to know.", I don't quite know what to say; in my view of things, that question can't even be asked, because many layers of its prerequisites are absent. Does that mean that I "don't work this way"?
5Unreal6y
Aw Geez, well if you happen to explain your views somewhere I'd be happy to read them. I can't find any comments of yours on Sabien's Double Crux post or on the post called Contra Double Crux.
3Said Achmiz6y
The moderators moved my comments originally made on the former post… to… this post.
7alkjash6y
+1 on the concerns about Facebook conversations. One of the main problems in Facebook conversations, in my view, is that the bar for commenting is way too low. You generally have to sift through a dozen "Nice! Great idea!" and so on to find the real conversations, and random acquaintances feel free to jump into high-level arguments with strawmen, ad hominems, or straight-up non sequiturs all the time. Now I think LW comments seem to have the opposite problem (the bar feels too high), but all else equal this is the correct side to err on.
7PeterBorah6y
I like your vision of a perfect should world, but I feel that you're ignoring the request to deal with the actual world. People do in fact end up disincentivized from posting due to the sorts of criticism you enjoy. Do you believe that this isn't a problem, or that it is but it's not worth solving, or that it's worth solving but there's a trivial solution? Remember that Policy Debates Should Not Appear One-Sided.
2Said Achmiz6y
Indeed, it is not a problem; it is a solution.

Ok, then that's the crux of this argument. Personally, I value Eliezer's writing and Conor Moreton's writing more than I value a culture of unfettered criticism.

This seems like a good argument for the archipelago concept? You can have your culture of unfettered criticism on some blogs, and I can read my desired authors on their blogs. Would there be negative consequences for you if that model were followed?

6Said Achmiz6y
There would of course be negative consequences, but see how tendentious your phrasing is: you ask if there would be negative consequences for me, as if to imply that this is some personal concern about personal benefit or harm. No; the negative consequences are not for me, but for all of us! Without a “culture of unfettered criticism”, as you say, these very authors’ writings will go un-criticized, their claims will not be challenged, and the quality of their ideas will decline. And if you doubt this, then compare Eliezer’s writing now with his writing of ten years ago, and see that this has already happened. (This is, of course, not to mention the more obvious harms—the spread of bad ideas through our community consensus being only the most obvious of those.)
7ESRogs6y
I suppose I am probably more impressed by the median sequence post than the median post EY writes on Facebook now. But my default explanation would just be that 1) he already picked the low-hanging fruit of his best ideas, and 2) regression to the mean -- no artist can live up to their greatest work. Edit: feel compelled to add -- still mad respect for modern EY posts. Don't stop writing, buddy. (Not that my opinion would have much effect either way.)
5habryka6y
I actually prefer the average post in Inadequate Equilibria quite a bit over the average post in the sequences.
3Elizabeth6y
This seems like a leap. Criticism being fettered does not mean criticism is absent.
0Said Achmiz6y
I was quoting PeterBorah; that is the phrasing he used. I kept it in quotes because I don’t endorse it myself. The fact is, “fettered criticism” is a euphemism. What precisely it’s a euphemism for may vary somewhat from context to context—by the nature of the ‘fetters’, so to speak—and these themselves will be affected by the incentives in place (such as the precise implementation and behavior of the moderation tools available to authors, among others). But one thing it can easily be a euphemism for, is “actually no substantive criticism at all”. As for my conclusion being a leap—as I say, the predicted outcome has already taken place. There is no need for speculation. (And it is, of course, only one example out of many.)
6Qiaochu_Yuan6y
I would take your perspective more seriously here if you ever wrote top-level posts. As matters stand, all you do is comment, so your incentives are skewed; I don't think you understand the perspective of a person who's considering whether it's worth investing time and effort into writing a top-level post, and the discussion here is about how to make LW suck less for the highest-quality such people (Eliezer, Conor, etc.).
6Said Achmiz6y
I do not write top-level posts because my standards for ideas that are sufficiently important, valuable, novel, etc., to justify contributing to the flood of words that is the blogosphere, are fairly high. I would be most gratified to see more people follow my example. I also think that there is great value to be found in commentary (including criticism). Some of my favorite pieces of writing, from which I’ve gotten great insight and great enjoyment, are in this genre. Some of the writers and intellectuals I most respect are famous largely for their commentary on the ideas of others, and for their incisive criticism of those ideas. To criticize is not to rebuke—it is to contribute; it is to give of one’s time and mental energy, in order to participate in the collective project of cutting away the nonsense and the irrelevancies from the vast and variegated mass of ideas that we are always and unceasingly generating, and to get one step closer to the truth. In his book Brainstorms, Daniel Dennett quotes the poet Paul Valéry: We have had, in these past few years (in the “rationalist Diaspora”) and in these past few months (here on Less Wrong 2.0), a great flowering of the former sort of activity. We have neglected the latter. It is good, I think, to try and rectify that imbalance.
5Thrasymachus6y
I endorse Said's view, and I've written a couple of frontpage posts. I also add that I think Said is a particularly able and shrewd critic, and I think LW2 would be much poorer if there was a chilling effect on his contributions.
3namespace6y
I've written front page posts before and largely endorse Said's view. At the same time however I think the thing Raemon and others are discussing is real, and I discuss it myself in Guided Mental Change Requires High Trust.
6Unreal6y
I've definitely facepalmed reading rationalists commenting on FB. My guess is that it's not "Facebook" that's the relevant factor, but "Facebook + level of privacy." Comments on Public posts are abysmal. Comments on my friends-only posts, sadly, get out of hand too; although not quite as bad as Public. Comments on curated private lists and groups with <300 people on them have been quite good, IME, and have high quality density. (Obviously depends though. Not all groups with those params have this feature.) LW is very clearly better than some of these. But I think it compares poorly to the well-kept gardens. (( I am making these points separately from the 'echo chamber' concern. ))
5Qiaochu_Yuan6y
Huh. I've generally had very good conversations on my Facebook statuses, and all of my statuses are Public by default. But I also aggressively delete unproductive comments (which I don't have to do very often), and also I generally don't try to talk about demon thread-y topics on Facebook.
3Said Achmiz6y
On this, sadly, I cannot speak—I do not have a Facebook account myself, so I am privy to none of these private / friends-only / curated / etc. conversations. It may be as you say. Of course, if that’s so, then that can’t be replicated on Less Wrong—since here, presumably, every post is public!
5ESRogs6y
This doesn't seem like strong evidence that EY wasn't moderating. It might just tell you what kinds of things he was willing to allow. (I don't actually know how much he was moderating at that point.)
8Said Achmiz6y
I certainly never made the claim that Eliezer wasn’t moderating. Of course he was. But as I said in a previous discussion of this claim: If moderation standards across Less Wrong 2.0 are no stricter than those employed on Sequence-era LW/OB, then my concerns largely fall away.

I think Eliezer has a different set of "reasons a comment might aggravate him" than most of the other authors who've complained to us. (note: I'm not that confident in the following, and I don't want this to turn into a psychoanalyze Eliezer subthread and will lock it if it appears to do that)

I think one of the common failure modes he wants the ability to delete is "comments that tug a discussion sideways into social-reality-space", where people's status/tribal modes kick in, distorting people's epistemics and the topic of the conversation. In particular, comments that subtly do this in such a way that most people won't notice, but the decline becomes inevitable, and there's no way to engage with the discussion that doesn't feed into the problem.

I think looking at his current Facebook Wall (where he deletes things that annoy him) is a pretty reasonable look into what you might expect his comments on LW to look like.

But, speaking of that:

I think an important factor to consider in your calculus is that the end result of the 2 years of great comments you refer to was Eliezer getting tired of dealing with bullshit and mov... (read more)

6Said Achmiz6y
It seems to me like our views interact as follows, then: I say that in the absence of open and lively criticism, bad ideas proliferate, echo chambers are built, and discussion degenerates into streams of sheer nonsense. You say that in the presence of [what I call] open and lively criticism, authors get tired of dealing with their critics, and withdraw into “safer” spaces. Perhaps we are both right. What guarantee is there, that this problem can be solved at all? Who promised us that a solution could be found? Must there be a “middle way”, that avoids the better part of both forms of failure? I do not see any reason to be certain of that… Suppose we accept this pessimistic view. What does that imply, for charting the way forward? I don’t know. I have only speculations. Here is one: Perhaps we ought to consider, not the effects of our choice of norms on behavior of given authors, but rather two things: 1. For what sorts of authors, and for what sorts of ideas, does either sort of norm (when implemented in a public space like Less Wrong) select? 2. What effects, then, does either sort of norm have, on public consensus, publicly widespread ideas, etc.?

Richard_Kennaway's summary of the PNSE paper on the Kensho thread is the most valuable thing I've ever read about meditation. However, it's critical of meditation. If Valentine had specified a comment policy saying "please don't be critical of meditation, we are looking for 201-level discussions and beyond", Richard might have written a top-level post instead. But Richard has zero top-level posts to date, so it's more likely that he just wouldn't bother.

the most valuable thing I've ever read about meditation. However, it's critical of meditation.

Possibly obvious, but seems worth noting explicitly: that paper discusses the results of some very advanced meditative states, which usually need to be pursued with serious intent. It shouldn't be considered a criticism of meditation as a whole, given that there are a lot of benefits that typically show up well before one gets to the realm of non-symbolic stuff and which are more objectively verifiable. Also, not all meditative practices even have non-symbolic states as their goal.

Note also that even in that paper, the disconnect between internal experience and actual externally-reported signs of bodily/emotional awareness was only reported in three interviewees out of 50, and that the memory loss stuff only started showing up around the last stage, which several traditions were noted to stop short of.

(I also suspect that there may have been a bit of a disconnect in the language used by the interviewer and the interviewees: I think I might have had a few brief glimpses of what a non-symbolic state feels like, and one thing in particular that I would still have emotions as ... (read more)

9habryka6y
Yep, I think this is a valid point against stronger moderation guidelines. Though I think it's a bit less bad than you said: while I am unsure whether Richard might have written a top-level post, it only has to be the case that any other person would create a top-level post on which Richard would feel comfortable commenting, and that seems a lot more likely to me. To be honest, if there had not been enough discussion of a bunch of my concerns about circling on the circling thread, I would have probably written a top-level post with my concerns at some point in the next few weeks.
6Ben Pace6y
Also, had I read that paper and seen it mostly go ignored, I probably would've made a link post for it.
7Qiaochu_Yuan6y
FWIW, I don't consider this a nitpicky comment, and my model of Val would not have censored it (although the relevant question is whether Richard_Kennaway's model of Val would have censored it). But a tradeoff is being made here, and I'm personally willing to trade off potentially not getting to see this kind of detailed criticism in comments in exchange for people like Eliezer and Conor writing more top-level posts.
4Ben Pace6y
+1
Zvi6y150

Yeah. The chilling effect is almost entirely (I think) about people worrying they will be censored, rather than actual censorship. I have full ability on my blog to delete comments, have used it only for removing typo/duplicate comments, and felt vaguely bad about even doing that. Val censoring that comment would be a huge surprise and forced update for me, but Richard thinking he might censor it seems a reasonable thing to worry about.

Of course, if it were censored, there's nothing to stop someone from then posting it elsewhere or as its own post, if they think it's valuable, and presumably the system will save the comment so you can copy it. Given that, this circles back to the general case of The Fear people have around posting and commenting, as if negative feedback of any sort were just awful. If you get censored, it's not that big a deal. Might even be good: now you know where the line is and not to do that again!

I'm confused about what sort of content belongs on LW 2.0, even in the Archipelago model.

I've been a lurker on LW and many of the diaspora rationalist blogs for years, and I've only recently started commenting after being nudged to do so by certain life events, certain blog posts, and the hopeful breath of life slightly reanimating LessWrong.

Sometimes I write on a personal blog elsewhere, but my standards are below what I'd want to see on LW. Then again, I've seen things on LW that are below my standards of what I expect on LW.

I've seen it said multiple times that people can now put whatever they want on their personal LW spaces/blogposts, and that's stressed again here. But I still feel unsettled and like I don't really understand what this means. Does it mean that anyone on the internet talking about random stuff is welcome to have a blog on LW? Does it mean well-known members are encouraged to stick around and can be off the rationality topic in their personal blogposts? How about the unknown members? How tangential can the topic be from rationality before it's not welcome?

Could a personal post about MealSquares and trading money for ti... (read more)

The idea is indeed that you are welcome to post about whatever you want on LW, and as we get more and more content, we will make people's personal blogs less visible from the frontpage, and instead add subscription mechanisms that allow people to subscribe to the specific people they want to follow (which they will see in addition to the frontpage discussion).

We are planning to turn off the ability to lose and gain global-karma for personal blogposts in the near future, though we are still planning to allow people to upvote and downvote content (though we might put a lower bound of something like -4 or 0 on the negative score a post can get). So as soon as that happens, you will no longer be able to lose karma from writing a post on your personal blog that people didn't like.

We are still optimizing the site for the people who are trying to make progress on rationality and various related topics, and so while it's possible to use LessWrong as a fashion blog, you will probably find the feature set of the site not super useful to do that, and you won't benefit super much from doing that on LessWrong over something like Medium (unless you want to analyze fashion u... (read more)

An issue I currently notice with Personal Blogposts is that they serve two purposes, which are getting conflated:

1) blogposts that don't meet the frontpage guidelines (e.g. touching upon politics, or certain kinds of ingroupy stuff), but which you expect to be worth the time and attention of people who are heavily involved with the community.

2) blogposts that you aren't claiming are worth everyone's attention.

Right now there's a fair amount of posts of type #1, which means that if you want to stay up to date on them, you need to be viewing all posts. But that means seeing a lot of posts of type #2, which the author may well have preferred not to force into your attention unless you already knew them and had subscribed to them. But they don't have a choice.

I predict we'll ultimately solve that by splitting those two use cases up.

3Qiaochu_Yuan6y
My immediate reaction is that 2) should not be happening on LW at all. What's your rationale for wanting it?
Zvi6y190

Being worthy of everyone's attention is quite the bar! I certainly wouldn't want to only publish things that rise to the level of 'everyone or at least a large percentage of rationalists should read this post.' The majority of my posts do not rise to that level, and by math almost no posts in the world can rise to that level.

The global justification is, if you don't let me put my third-tier posts on my personal blog here, then I'll be creating content that doesn't end up on LW2.0, which means that to read all my stuff they have to check my blog itself at DWATV, which means they get into the habit of reading me there and LW2.0 fails as a central hub. You want to make it possible for people to just not check other sources, at all.

7Qiaochu_Yuan6y
Okay, I'm sold.
9ozymandias6y
They might be worth the attention of some subset of people. For example, I write rationalist-influenced posts about transness. These are no doubt very uninteresting to the vast majority of the cisgender population, but people who have specifically chosen to subscribe to my blog are probably going to be interested in the subject.
9habryka6y
Example: Interesting discussion of technical issues around AI safety, requiring a graduate-level math background. Seems very important to happen, I think a good chunk of people are interested in it, but it's clear that not everyone should see that discussion given that most wouldn't be able to understand it.
9Raemon6y
Yeah, the keyword was "everyone." Other posts include "Zvi is visiting San Francisco" (useful for rationalists to know if they follow Zvi, not important otherwise), or many forms of "explore a new idea that isn't ready for primetime yet."
5Qiaochu_Yuan6y
Hum. This is the sort of thing I'd want something like subreddits for. One thing I'm currently confused about is what both the current and planned future visibility of personal blog posts is; are they currently visible from Community? Will they be in the future?
4Raemon6y
Subreddits are one way to handle that sort of thing, but it depends on a critical mass of people caring about a specific topic. Like, the "Zvi is in San Francisco – do you want to hang out with him?" thing is something that'd never make sense for a subreddit, but does make sense for people who follow Zvi in particular to pay attention to. And people may have lots of topics they care about that are niche, but not yet have anyone else who cares enough to write about similar things to support a subreddit. They're currently visible from the community tag. In the future we may rearrange the UI hierarchy on the frontpage in some fashion.
8Vaniver6y
Obviously this post should have the Bay Area tag, and be made by Zvi, and show up in the feeds of people who have given karma boosts to the Bay Area tag and Zvi as an author.
4crybx6y
This sentiment seems opposed to what others have expressed. Mixed messaging is part of why I've been confused. Aspiring rationalists could benefit from a central place to make friends with and interact with other rationalists (that isn't Facebook), and welcoming 2) seems like it would be a way to incentivize community, while hopefully the Archipelago model limits how much this could lower LW's main posts' standards. I notice that when I write about rationality-adjacent things, it most often comes out as a story about my personal experiences. It isn't advice or world-changing info for others, but it is an account of someone who is thinking about and trying to apply rationality in their life. I predict these stories aren't totally useless but that they may not be liked or seen as typical LW fare. I'll admit the link I see between my last two paragraphs. I would like to be less of a silent watcher and make friends in the community, but my natural style of writing is experiential and mostly doesn't feel like LessWrong has felt in the past.
6Qiaochu_Yuan6y
It seems like my sense of what "worth everyone's attention" means is pretty different from others and that's part of the miscommunication. I take as given that people are already mostly reading garbage most of the time, on Facebook or LW or Reddit or wherever else. So my bar for "worth everyone's attention" is relative, not absolute: not whether this thing I'm writing is worth everyone's attention in some absolute sense, but whether it's better than the garbage it's displacing. This is not a very high bar! Also, for what it's worth, I think stories about personal experiences are great and we should have more of them.
7Raemon6y
In this case, it's replacing other stuff on LessWrong, which is a much higher bar. There's already more stuff on LessWrong than I actually have time to look at, and having to filter through more stuff as more people join and start posting personal stuff will rapidly move this from "I could physically do it if I wanted to" to "I definitely literally could not." I want to hear stories about personal experiences from people that I know moderately well and/or who are good writers, but not everyone. (The whole point is that there are two different reasons one might want to make something a personal blogpost, or that an admin might or might not want to promote it to everyone's attention, and it's causing some issues that these two things are getting conflated)
5alkjash6y
It's curious and surprising how rapidly LW grew in the last two months; my System 1 is still expecting one post a day on here instead of ten.

Once again, I just want to say a huge thanks to the team building the site. This type of work requires really careful consideration of tradeoffs, is hugely leveraged, and almost automatically displeases some people by the tradeoffs made.

For my part, I don't have any real opinions on the moderation guidelines and tech buildout here specifically; I just wanted to say thanks and salut for all the thoughtful work going into it. It's too often thankless work, but it makes a tremendous difference. Regards and appreciation.

6Raemon6y
:)

Loren ipsum

9Raemon6y
Weirdly meta-but-also-object-level question for Conor: This comment now has 35 karma, has been visible for about a day, and has no particularly dangling threads to resolve. If my "on-topic/off-topic" idea were implemented, this is the sort of comment I think I'd prefer to move to a pseudo-hidden "offtopic" section, to free up visible space for comments currently under discussion. But I can imagine that defeating at least some of the value you were getting from it (i.e. feeling sufficiently supported in your endeavors that it feels right to give LW2.0 a second go). Mostly for building up an intuitive sense of how people would react to the offtopic thing, how would you feel if this comment were moved there, if the feature existed?
5Qiaochu_Yuan6y
I don't know how this ontopic/offtopic thing would interact with threading. What happens to a thread in which some of the comments are terrible and some of them are great?
5Raemon6y
A sub-comment marked as off-topic gets moved to the offtopic section, where it links back to the original thread, but the original thread doesn't link to it. (That's my currently conceived implementation.)
6Raemon6y
The sort of thing you want to do was part of my motivation for the Archipelago concept, so I'd definitely be interested in seeing what you do with such an island. I have some questions/comments/thoughts about how you'd handle certain kinds of edge cases, but that's a conversation I'd like to have some other time. Yeah, this is already basically the case, but as comments in this post highlighted, it is not nearly as clear as it should be, and we'll work on some way to make it more obvious. tldr: you can click on the Moderation Guidelines at the top of the comment section to see the author's general norms. I was mulling over changing this so that it shows the first couple lines of the moderation policy, with a 'read more' thing that looks similar to how frontpage Recent Comments look, and maybe saying "this user has not set any explicit moderation guidelines" if they haven't listed it yet.
6ChristianKl6y
It might be worthwhile to create a popup with the moderation norms that shows when a person clicks "submit", with the options "Edit post" and "Post conforms to the norms" (there might be a checkbox to not show the popup again).
5Conor Moreton6y
Loren ipsum
4habryka6y
What are your thoughts on having an additional link in the moderation guidelines box at the top of the comment section (and at the bottom of the new-comment box)? Or do you feel like it should be more noticeable?
6Conor Moreton6y
Loren ipsum

Instead of purely focusing on whether people will use these powers well, perhaps we should also talk about ways to nudge them towards responsibility?

  • What if authors had to give reasons for the first 10 or 20 comments that they deleted? This would avoid creating a long term burden, but it would also nudge them towards thinking carefully about which comments they would or would not delete at the start
  • Perhaps reign of terror moderation should require more karma than norm enforcing? This would encourage people to lean towards norm enforcing and to only switch to reign of terror if norm enforcing wasn't working for them
  • I already posted this as a reply to a comment further up, but perhaps authors should only be able to collapse comments at first and then later be given delete powers. Again, it would nudge them towards the former, rather than the latter
  • For the 2nd and 3rd ideas to work, the delay system couldn't be based purely on karma, as many authors already have enough karma to gain these powers instantly. There should ideally be some delay in gaining access to the higher-level features even in this case

One thing this has made me realize is that I think people are explicitly assuming Reign of Terror means "access to Delete-And-Hide". Which is not actually the case right now, but which is a reasonable assumption and we should probably be designing the site around that being intuitively obvious.

(right now, Reign of Terror is just a text description, informing commenters what to expect. But it probably makes sense for users to only have Delete-And-Hide if their moderation style is set to Reign of Terror, and/or if they use the delete-and-hide feature, the given post's moderation setting is changed to Reign of Terror regardless of what their usual setting is)

Delete-and-Hide requiring higher karma than other delete options also seems reasonable to me. (And regardless of whether it should or not, we should probably refactor the code such that it's handled separately from the block-of-permissions that grants the other deletes, so we have more control over it)
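
As a rough sketch of the behavior being described (all of the names, the karma threshold, and the data structures below are hypothetical illustrations, not the actual LW2 code):

```python
from dataclasses import dataclass

# Hypothetical karma bar for Delete-And-Hide, assumed higher than for other deletes.
DELETE_AND_HIDE_KARMA = 2000

@dataclass
class Author:
    karma: int
    moderation_style: str      # e.g. "norm_enforcing" or "reign_of_terror"

@dataclass
class Post:
    moderation_style: str

@dataclass
class Comment:
    deleted: bool = False
    hidden: bool = False       # hidden = removed without a visible tombstone

def can_delete_and_hide(author: Author) -> bool:
    # Gated separately from the block of permissions granting ordinary deletes,
    # and only available to authors whose style is set to Reign of Terror.
    return (author.karma >= DELETE_AND_HIDE_KARMA
            and author.moderation_style == "reign_of_terror")

def delete_and_hide(author: Author, post: Post, comment: Comment) -> None:
    if not can_delete_and_hide(author):
        raise PermissionError("Delete-And-Hide is not available for this author")
    # Using the feature marks the post as Reign of Terror regardless of its
    # previous setting, so commenters know what to expect.
    post.moderation_style = "reign_of_terror"
    comment.deleted = True
    comment.hidden = True
```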

I'm a huge fan of the Archipelago model, but I'm unsure how well our current structure is suited to it. On Reddit, you only have to learn one set of moderation norms per sub. On Less Wrong, learning one set of moderation norms per author seems like a much higher burden.

In fact, the Archipelago model itself is one set of norms per group. If you don't like the norms of a particular group, you just don't go there. This is harder when there isn't a clear separation and everyone's posts are all mixed together.

4Rhaine6y
The moderation guidelines are currently at the end of the post and also as a dropdown menu in the comment box, so I think "learning" sets of rules won't be an issue. Concerning the lack of separation, maybe we could give users the opportunity to filter out posts with certain moderation guidelines. I get that currently it seems like each author is going to describe their guidelines in their own words, but an opportunity to filter out "reigns of terror" might be useful for people who despise heavy moderation. Or at least have a little icon for every post that shows the level of moderation on that post.

This behavior of writing a post and getting unhelpful comments: is it something that can be changed by tweaking the karma system?

Like, right now, if I read a post and think of a true-but-unhelpful objection, maybe I post the objection in the hope of getting upvotes.

But maybe if you make the post author's upvotes worth more than upvotes by random schmoes, then I optimize more for posting things the post author will like?

8Qiaochu_Yuan6y
The karma system is not really the problem; tweaking it won't fix the underlying trust dynamics (the thing Raemon was talking about re: high, medium, and low trust). On Facebook, the people who comment on my statuses are typically my friends, or at least acquaintances. We've met in real life, we know and like each other, we're all using our real names, etc. It feels like we're on the same team and so it's easy for us to be helpful and supportive of each other while commenting. Here many people are using pseudonyms, I don't know who they are, we've never met, we're not friends, we plausibly wouldn't like each other even if we met, etc. It's easy for us to relate to each other in a more adversarial way, because it doesn't feel like we're on the same team.
6Dan B6y
Let me say that a little more clearly: Someone might argue that "the commentariat on LesserWrong are a group of people who are more nitpicky and less helpful than my friends on Facebook". I'm not sure if this is a strawman. But I'd like to propose, instead, that the comments being posted on LesserWrong are the result of people responding to incentives imposed by the karma system. For whatever reason, it appears that these incentives are leading people to post nitpicky and unhelpful comments. Improving the karma system might fix the incentives. You've suggested elsewhere that the post owner might move irrelevant comments to an "off-topic" section, and that's a good way to deal with off-topic comments. But what if a comment is directly replying to my post, but I just sort of feel like it's nitpicky and unhelpful? I could mark it as "off-topic", but this wouldn't be strictly accurate. Instead, I'd propose letting the post owner mark certain comments as "helpful", which would be worth +10 karma, or would double the value of all karma received, or would sort those comments to the top where more people would see them, or something.
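
As a rough sketch of how the "helpful" marker could cash out mechanically (the +10 bonus and the doubling are just the numbers floated above; the function and field names are hypothetical):

```python
# Toy sketch of the "author marks a comment as helpful" proposal.
HELPFUL_BONUS = 10        # flat karma bonus option
HELPFUL_MULTIPLIER = 2    # "double the value of all karma received" option

def effective_karma(raw_votes: int, marked_helpful: bool) -> int:
    if marked_helpful:
        return raw_votes * HELPFUL_MULTIPLIER + HELPFUL_BONUS
    return raw_votes

def sort_comments(comments):
    # Helpful-marked comments sort above the rest, then by effective karma.
    return sorted(
        comments,
        key=lambda c: (c["helpful"], effective_karma(c["votes"], c["helpful"])),
        reverse=True,
    )

comments = [
    {"author": "A", "votes": 5, "helpful": False},
    {"author": "B", "votes": 2, "helpful": True},
]
print(sort_comments(comments))  # B (2*2 + 10 = 14 effective) sorts above A (5)
```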
9habryka6y
I do think the karma system is contributing to this, and I do want to explore ways to modify that. I am happy about more suggestions to do so. I am skeptical of the owner of a post marking comments as "helpful" or "unhelpful" since this gives people who write top-level posts suddenly drastically more power than people who don't, even at similar levels of karma. I do think that giving moderators or very-high-karma users the ability to special-upvote or special-downvote something might work out (we've been planning on adding karma rewards to the moderator toolkit for a while now, and think this can indeed be used to combat this problem).

Something that has only just now occurred to me, which in retrospect is perfectly consistent with my other experiences, is the extent to which posts like this help to improve my sense of how to contribute. Specifically, articulating the intuitions behind the decisions is very valuable to me, quite aside from the actual administrative direction of the website. This is the most valuable meta-discussion chain I have observed.

I support the current direction; I add that this will help keep the site from growing stale; I have no concerns about unjust infringement of expression. Good show!

Some quick thoughts:

1) I realize that this may be some work, but maybe downvotes could be more specific. Some categories I've seen have been:

  • This material has already been explained by another LessWrong post or something somewhere else on the internet.
  • This material is not relevant to LesserWrong
  • This comment may be accurate, but it is phrased in quite a mean-spirited way.

I would assume that if there are a few things you are trying to discourage, having the ability for people to label those things would be a good step toward measuring and minimizing... (read more)

2Raemon6y
Thanks! Re: #4, I think that gets explored a bit in this thread: https://www.lesserwrong.com/posts/XmA3u9c3AYFLmQ7tZ/mapping-the-archipelago#Wuc9dZy92jM9QM9gK #1 is something we've been thinking about for a while, although implementing it with a clear UI is a fair bit of work, so it's not planned for the near term. We're working on distilling things down further, but this is roughly the intent of the library page (with R:AZ and Codex being the most important and the curated sequences being next most important). This is certainly a lot of content. The recent Canon/Peer Review discussion may hopefully result in distilling these things further down, but when all is said and done there's just a lot of content. I think even if we distilled things further you'd still run into people criticizing you for repeating something, so I think the best approach there is to start with lower-key posts (perhaps using a Shortform Feed to hash out the idea) before investing a lot of time into an effortpost. (Oftentimes, new angles on old ideas are useful anyhow, but it's still helpful to know what came before so you know how to cover it differently)
2Said Achmiz6y
Take a look at how lobste.rs does this. (Actually, a lot of the functionality and UI stuff mentioned here, and in your recent thread, is something I’ve worked on when tinkering with my fork of lobste.rs, and I think there are a good few lessons / transferable design work / etc. there—email me if you’re interested in discussing it.)

Thanks for articulating why Facebook is a safer and more pleasant place to comment than LW. I tried to post pretty much this on a previous thread but wasn't able to actually articulate the phenomenon so didn't say anything.

That being said, I still feel like I'd rather just post on Facebook.

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's... (read more)

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's a technical solution to this, because Jeff Kaufman implemented it on his blog.

I don't understand this response. That there exists a solution doesn't mean that there exists a solution that 1) is easy to use and 2) people will actually use. One of the many advantages of hosting a conversation on a blog post instead of on a Facebook status is that it's easy for random people to link to that blog post years later. Even if people could in principle do this for Facebook statuses with the appropriate tools, the trivial inconveniences are way too high and they won't.

(I've already had one friend explicitly say that he was looking for a Facebook status I wrote because he wanted to show it to someone else but found it too annoying to look for and gave up.)

3ChristianKl6y
This suggests it might be valuable to get a Facebook/LessWrong2.0 integration that works like Kaufman's solution and is easy to use.
1Taymon Beal6y
Yes, this was what I was trying to suggest.
9habryka6y
I've been thinking about that, though I am somewhat worried about the legality of that integration (it's unclear whether you can copy people's content like that without their direct consent, or what would count as consent), and I also think it removes most of the levers to shape the culture of a community. For example, it seems clear to me that the rationality community could not have formed its culture on Facebook, though it might be able to preserve its culture on Facebook. The forces towards standard online discussion norms on Facebook are quite strong (for example, you can't make the moderation norms easily accessible below a comment, you can't reduce the attention a comment gets by downvoting it, you can't collapse a comment by default, etc.)
Taymon Beal · 6y · 3 points
I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a similar position and how likely they are to make valuable contributions to the project of intellectual progress, vs. the costs of loss of control. I'm quite confident that there are some people whose contributions are extremely valuable and whose style differs from the prevailing one here—Scott Alexander being one, although he's not active on Facebook in particular—but unfortunately I have no idea whether the costs are worth it.
Raemon · 6y · 2 points
Quick note: this isn't what I mean by archipelago (see other comment)
ESRogs · 6y · 4 points
Jeff copies those comments by hand. Source: some facebook thread that I can't find right now. EDIT: Looks like I am wrong: https://www.jefftk.com/p/external-comment-integration.
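
For concreteness, here is a rough sketch of what that kind of comment mirroring could look like. This is not Jeff's actual implementation (his own write-up is at the link above); it's a minimal illustration that assumes a Facebook Graph API access token with permission to read the post's comments, and all type and function names here are made up for the example.

```typescript
// Sketch: mirror comments from a Facebook post onto a blog page.
// Assumes an access token that can read the post's comments via the
// standard GET /{post-id}/comments edge; exact fields and permissions
// depend on the post and the API version in use.

interface MirroredComment {
  author: string;
  body: string;
  postedAt: string;
  source: "facebook";
}

async function fetchFacebookComments(
  postId: string,
  accessToken: string
): Promise<MirroredComment[]> {
  const url =
    `https://graph.facebook.com/v12.0/${postId}/comments` +
    `?fields=from,message,created_time&access_token=${accessToken}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Graph API request failed: ${response.status}`);
  }
  const payload = await response.json();
  // payload.data is the list of comment objects on this edge.
  return (payload.data ?? []).map((c: any) => ({
    author: c.from?.name ?? "unknown",
    body: c.message ?? "",
    postedAt: c.created_time,
    source: "facebook" as const,
  }));
}

// The blog side would periodically poll this and render the results
// beneath the corresponding post, alongside the native comments.
```
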
Other than that, Facebook seems to have the whole "archipelago" thing pretty much solved.

I actually think there's a deep sense in which Facebook has not accomplished archipelago, and an additional deep sense in which they have not accomplished public archipelago.

Archipelago doesn't just mean "you've filter-bubbled yourself such that you only hang out with likeminded people." It means you've filtered yourself and then used that filtering to enforce norms that you wouldn't be able to enforce otherwise, allowing you to experiment with culture building.

On FB, I've seen a small number of people do this on purpose. Mostly I see people sort of halfheartedly complaining about norms, but neither setting explicit norms for people to follow nor following through on kicking people out if they don't. (An issue is that FB is designed to be a melting pot. Your mom, your college friends, and your rationalist friends are all bumping into each other, and have different assumptions about what norms even mean.)

And then, re Public Archipelago: Facebook very much works against the ability for good ideas to bubble up into a central conversation that everyone can be aware of. You could attempt to solve this by building around Facebook, but Facebook really doesn't want you to do that and it's a pain.

cousin_it · 6y · 8 points
I think Reddit has a better claim to "accomplishing Archipelago". Subreddits are a thing of beauty. They are bigger than a personal blog though, and don't interact much, so LW2 is really trying something new. I can't wait to see how it works out.
Taymon Beal · 6y · 1 point
I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks more like those aren't going to change.
Raemon · 6y · 4 points
My expectation is that the new rules will result in less nitpicking (since authors will have a number of tools to say "sorry, this comment doesn't seem to be pulling its weight"), although you may have to learn which authors enforce which sorts of norms to figure it out. I'm not 100% sure which things are prefiltering out the people you care about, so I'm not sure whether this will make a difference.
Wei Dai · 5y · 2 points
I'm curious which worldviews and approaches you saw as over-represented, and which are the ones you most wanted to hear from, and whether anything has changed since you wrote this comment. Are your friends here now? If not, why?

I expect to have serious comments after some reflection, but I wanted to register that this is extraordinarily well thought out.

Also, what a perfect post for the day I reach 2k!

Personal Blogposts where all commenters are only allowed to speak in poetry.

Challenge accepted.

How do I put it, so as not to offend anyone... I think this is the right discussion for me to say that although I perceive this comment as positive, it is definitely not one I would wish to allocate my attention to, given the choice. I would have expected such posts to get downvoted. I suggest two separate systems of voting: one for positive fuzzy feelings, one for worthiness of attention. My hope is that this would mitigate the reluctance to downvote (or to not upvote) stemming from the social nature of humans, i.e. we could continue not discouraging each other while still having a useful conversation.
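
To make the proposal concrete, here is a minimal sketch of what such a two-axis vote could look like as data. Every name here is hypothetical; this is an illustration of the idea, not an actual LessWrong feature or schema.

```typescript
// Sketch of a two-axis vote: one axis for "I appreciate this" (fuzzies),
// one for "this deserves attention". All type and field names are
// hypothetical illustrations of the proposal, not site internals.

interface TwoAxisVote {
  commentId: string;
  voterId: string;
  appreciation: -1 | 0 | 1; // warm/fuzzy axis
  attention: -1 | 0 | 1;    // "worth allocating attention to" axis
}

interface CommentScores {
  appreciation: number;
  attention: number;
}

function tallyVotes(votes: TwoAxisVote[]): Map<string, CommentScores> {
  const totals = new Map<string, CommentScores>();
  for (const v of votes) {
    const current = totals.get(v.commentId) ?? { appreciation: 0, attention: 0 };
    current.appreciation += v.appreciation;
    current.attention += v.attention;
    totals.set(v.commentId, current);
  }
  return totals;
}

// A feed could then sort by the attention score while still displaying
// the appreciation score, so "nice but off-topic" comments can be
// warmly received without being pushed up the thread.
```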

Raemon · 6y · 8 points
Yeah. This is the sort of comment I'd consider tagging "off-topic" (probably while responding publicly to it to note that I am happy about the comment, so that the off-topic-ness clearly comes across as good management of people's attention rather than a mean rebuke).

I strongly agree about the importance of play in idea generation. There's a scale with "yes, and" at one end and "well, actually" at the other, and I try to stay close to the "yes, and" end, all else being equal. (Though if I see someone defecting on good conversation norms, I'm sometimes willing to defect in response.)

Re: trust, academia has a nice model here: collaborate with friends at your university, and eventually publish & make your ideas known to strangers at other universities. From an epistemic hygiene perspective, it feels right ... (read more)

steven0461 · 6y · 6 points
I suspect in practice the epistemic status of a post is signaled less by what it says under "epistemic status" and more by facts about where it's posted, who will see it, and how long it will remain accessible. Sites acquire entrenched cultures. "The medium is the message" and the message of a LessWrong post is "behold my eternal contribution to the ivory halls of knowledge". A chat message will be seen only by people in the chat room, a tweet will be seen mostly by people who chose to follow you, but it's much harder to characterize the audience of a LessWrong post or comment, so these will feel like they're aimed at convincing or explaining to a general audience, even if they say they're not. In my experience, playing with ideas requires frequent low-stakes high-context back-and-forths, and this is in tension with each comment appearing in various feeds and remaining visible forever to everyone reading the original post, which might have become the standard reference for a concept. So I think LessWrong has always been much better for polished work than playing with ideas, and changing this would require a lot of tradeoffs that might not be worth it.
cousin_it · 6y · 5 points
Open threads on old LW were good for this; I wonder why we don't have them here.
Raemon · 6y · 2 points
Periodically people post open threads (generally as a personal post), but they haven't gotten much traction. (They seem to get less traction than other personal blogposts, so I don't think it's just about that.)
Qiaochu_Yuan · 6y · 5 points
Someone posted an open thread two weeks ago and my subjective impression is that it disappeared from view almost immediately, other than generating a few comments I saw in Recent Comments that seemed to be, if you'll pardon the bluntness, two people making basic mistakes. Starting from Community, right now, and repeatedly clicking "Load More," I got to something that was posted three months ago and still can't see this open thread. (I forgot to count how many times I clicked "Load More" but it's enough times that I got bored and stopped, twice.) I'm not even sure if it's visible from Community at all. If it is, the magical algorithm that sorts Community hates it. If we want open threads to work they might need to be stickied, at least at first.
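
For context on why an unstickied open thread sinks so fast: frontpage-style feeds typically rank posts by karma with a time-decay penalty. The sketch below uses an HN-style formula purely as an assumed illustration; it is not LessWrong's actual sorting code, and the constants are invented.

```typescript
// Illustrative HN-style ranking: score decays as the post ages, so a
// post that doesn't accumulate karma quickly drops out of view within
// days. Formula and constants are assumptions for illustration only.

function feedScore(karma: number, ageInHours: number, gravity = 1.8): number {
  return karma / Math.pow(ageInHours + 2, gravity);
}

// A 5-karma open thread after two weeks:
console.log(feedScore(5, 14 * 24)); // ~0.0001 — effectively invisible
// A 40-karma post after six hours:
console.log(feedScore(40, 6));      // ~0.95 — near the top of the feed

// A stickied/pinned flag would bypass this decay entirely, which is why
// "sticky the open thread" is the usual fix.
```
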
Said Achmiz · 6y · 2 points
This is tangential, but re: clicking “Load More” a bunch of times: GreaterWrong has an archive browser where you can view posts by year, by month, and by day, as far back as you like.
Raemon · 6y · 4 points
FYI, my current (unofficial) thought process re: this is to move towards Personal Feeds being more established as the dominant way to engage with the site. (I.e., if you're a new user, the structure of the site is such that most new content you create will probably be a personal feed comment, and only when you think you've got a moderately polished thing to say would you write a post.) However, this is all still very up in the air.
On Frontpage posts, we will want to have some kind of standard, which I’m not sure we can formally specify. We’re restricting moderation tools to users with high karma, so that only people who’ve already internalized what LessWrong is about have access to them. We want experimentation that productively explores rational-discussion-space.

This policy doesn't currently make sense to me, but it might just be because I don't understand some of the mechanics of how the site works.

Am I correct that currently frontpage posts can be created in two ways: e... (read more)

habryka · 6y · 6 points
One reason here is that I think it is quite a bit of cognitive overhead for the average commenter to have to memorize the moderation norms of 20 different personal blogs when they want to participate in the discussion on the page. Some subreddits seem to work fine in spite of the reddit frontpage though, so it might actually be more fine than I think. The second reason is that I do think there is a significant benefit to having a discussion in which you know dissenting views are represented, and where you can get a sense of what the site at large thinks about a topic. There is definitely a chilling effect to any moderation policy. And while it is often the case that that chilling effect promotes content generation, I do think that any piece of content that wants to properly enter the canon of the site should go through a period of public review in which any active user of the site can voice their problems with the content.
Zvi · 6y · 100

The idea that one does not simply comment without knowing the moderation policy seems like an error. That doesn't mean there's no value in knowing the moderation policy, but if 20 posts had 20 different moderation policies and you wanted to write comments, the likely effect is still that you write 20 comments and nothing happens to any of them; if 1-2 of them do get minimized, maybe then you look at what happened. Or alternatively, you'd check if and only if you know you're in a grey area.

I do notice that I might look at the comment policy when deciding whether to read the comments...

it is quite a bit of cognitive overhead for the average commenter to have to memorize the moderation norms of 20 different personal blogs

yeah i'm worried about this...

would like it if there were some obvious indication when someone's moderation policy significantly deviates from what one would typically expect, such that I will definitely be sure to read theirs if it's unusual (like the poetry one).

and otherwise, I'd want to encourage ppl to stick to the defaults...

Also seems like, given LW's structure, it makes way more sense to have "moderation policies for posts" and not "moderation policies for blogs / authors." I don't really see the blogs. I see the posts. I really can't distinguish very well between blogs. So I'm going to check post-level moderation policy and not really track blog-level / author-level moderation policy.

And as an author, I may want each of my posts to have different policies anyway, so I might change them for each post... I dunno how that works right now.

Zvi · 6y · 4 points
If you're doing the poetry thing, I think that's cool/unique enough that you should mention it in the post itself (and the comment section should likely change colors, or something); plus, if you read the comments, it presumably should be clear something's going on? The default can still be that you have different guidelines for different posts, and I only check them when I actively want to comment and am worried about violating the guidelines.
Said Achmiz · 6y · 4 points
I absolutely agree with this given the current LW2 UI, but note that this situation may be altered by UI changes without modifying the underlying structure / hierarchy of the site.
Raemon · 6y · 2 points
Blog-level norms show up on each individual post (the issue is just that it's not as obvious as I'd like where that lives). Once we implement post-level norms (which we plan to do soonish), they'd also show up on the individual post. FYI, right now my own blog-level norms specify: "I usually have a goal for a given post, will usually explain that goal or it'll be obvious, and I reserve the right to delete comments that aren't furthering the goals of the discussion I want." As Said mentions, this is mostly a UI problem, and while I think it'll take some more experimentation, I think it's pretty fixable.
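
A small sketch of how post-level norms could layer on top of blog-level defaults once both exist, so that the policy shown above the comment box is always the one that actually applies to that post. All names here are hypothetical, not the site's real schema.

```typescript
// Sketch of post-level moderation guidelines overriding an author's
// blog-level default. Names are hypothetical illustrations only.

interface ModerationPolicy {
  summary: string;            // e.g. "Reign of Terror" or "comments in poetry only"
  deleteWithoutWarning: boolean;
}

interface Author {
  id: string;
  blogDefaultPolicy?: ModerationPolicy;
}

interface Post {
  id: string;
  authorId: string;
  postPolicy?: ModerationPolicy; // set per-post when the author wants something unusual
}

const SITE_DEFAULT: ModerationPolicy = {
  summary: "Default frontpage norms",
  deleteWithoutWarning: false,
};

// Resolution order: post-level, then blog-level, then site default.
function effectivePolicy(post: Post, author: Author): ModerationPolicy {
  return post.postPolicy ?? author.blogDefaultPolicy ?? SITE_DEFAULT;
}

// The UI would render effectivePolicy(...) directly above the comment box,
// and could highlight it whenever it differs from the site default.
```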

This is a bit of a tangent, but have LessWrongers considered replacing the norm of using common names in sequential order like 'Alice, Bob, Carl...' etc? They could be replaced by less common names. Sometimes I start thinking of rationality community members when their names are also used for stand-in people in examples in blog posts.

Conor Moreton · 6y · 3 points
Loren ipsum

I think the "not written down at all" link is broken. It's taking me to "https://www.lesserwrong.com/posts/.../writing-down-conversations", which isn't a valid link.

While the comment threads about both Kensho and Circling had a lot of conflict, I don't think that's bad. It's conflict that's worth having, and it's good to be open about the lines of that conflict in a public forum.

While the posts haven't changed my opinions on the main stances on those topics, they did give me a better idea of the inferential gap.

At the moment I have three drafts:

1) The epistemology of NLP

2) What does it mean to be in relationship?

3) Piaget's Assimilation/Accommodation/Equilibration

It's conflict that's worth having, and it's good to be open about the lines of that conflict in a public forum.

That's not my crux. The conflict on the Kensho and Circling threads made me substantially more worried about posting similar threads. I don't enjoy the thought of being punished for trying to introduce new ideas to LW by having to spend my time dealing with demon threads. This is the sort of thing that made me stop writing on LW 1.0, and it's also an invisible cost; you won't notice it's happening unless you talk to people who aren't posting about why they aren't posting.

ChristianKl · 6y · 2 points
When it comes to fear of being punished, comments wouldn't be my main worry. I prefer a charged open discussion over other forms of social punishment, where people change their opinion of me without the conflict being out in the open.
PeterBorah · 6y · 5 points
That doesn't address the fact that Qiaochu has a different instinctive reaction. The goal of this proposal is to deal with the fact that different people are different.
ChristianKl · 6y · 2 points
I completely understand the instinct. I don't think Qiaochu would like the effect of posting Overton-Window-violating things on this website, not getting negative feedback here, but actually suffering socially as a result. It would be very nice if it were possible to violate the Overton Window without having to think about it, but it isn't that easy.

This has changed my thinking quite a bit around how to create an anti-fragile environment that still has strong norms and values. I would love to see more work in this direction, and I think about the "Archipelago Model" frequently.

I thought about the idea of removing the effect of karma votes on personal blogs in the future and I don't think it's a good idea.

I do understand the lure of wanting to be inclusive of any kind of writing, and also to allow posting by those people whose posts currently get downvoted into oblivion, but if a lot of personal blogs are of that quality, fewer people would read personal blogs by people they didn't subscribe to. This in turn will mean that if a new person writes a personal blog post, nobody will read it.

Longer-term, I think it's better to make the software easy for other people to install, so that the people who do want to have a LessWrong 2.0-style blog about cat pictures can run their own instances.

Raemon · 6y · 6 points
My understanding is that Cat Pictures aren't much related to the "upvotes/downvotes don't count for Personal Blogs" thing, and if Quality is a consideration it's only a secondary one. There are a few common use cases we expect to be relevant to Personal Blogs, such as:

  • discussing politics with rational people (we don't want this on the front page, but it's okay for the blogs, in places that get a bit less visibility)
  • making calls to action, especially relating to rationality community culture
  • talking about less fleshed out ideas in a low-key setting

All of these are things that intersect a bit weirdly with karma. Politics and calls to action incentivize people to upvote or downvote based on tribal or coalitional affiliation, which isn't necessarily about epistemic clarity. The problem of "not enough people will read the personal blogs to start getting them upvoted" is definitely an issue (I'm not sure how to solve it), but it's going to need to get solved in some scalable fashion, and I think trying to disincentivize people from starting personal blogs isn't the best tradeoff to make to solve it.
PDV · 6y · 0

Why do you think that LessWrong can or should scale?

My perspective (Ray might have different thoughts on this):

  • "Scaling" in this post mostly refers to "scaling up from small (<20) closed-group interactions". I have a lot more people than that that I want to actively talk to, and whose ideas I want to hear about in the context of existing LessWrong content.
  • Ultimately, a lot of the problems I care about seem hard, and it seems like we need at least a few dozen people working on them, maybe even a few hundred, if we want to solve them. It's not obvious that there is super much to gain by scaling LessWrong to thousands of active commenters, but scaling it to at least a few dozen strikes me as necessary to solve a lot of the problems I care about (AI alignment, various open questions in individual rationality, various open questions in group rationality, etc.).
  • I care about the best people working on the problems I care about, and that requires an environment where a lot of people can try their hands at working on the problems, so that we can identify the ones that seem best and give them more resources and support. This requires a platform that can deal with new people coming in.
PDV · 6y · 0 points
So scale it to... the size it already is? Maybe double that? I don't think that requires any change. If you wanted a 10x user-count increase, that probably would, but I don't think those 10x potential users even exist. Unless and until round 3 of "Eliezer writes something that has no business getting a large audience into his preferred cause areas, but somehow works anyway" occurs. I am also extremely skeptical that any discussion platform can do the third thing you mention. I don't think any discussion platform that has ever existed has both dealt well with significant quantities of new people coming in and been effective at filtering for effectiveness/quality. Those goals, in point of fact, seem directly opposed in most contexts; in order to judge people in any detail, the number to be judged must be kept small. Are you sure you're not building for scale because that's the default thing you do with a web app made in the SF Bay Area? Hmm, related question: assuming this revival works, how long do you expect the site to be actively used before a 3.0 requiring a similar level of effort as this project becomes necessary? 5 years? 10? (My prediction is 5 years.)

5 years doesn't strike me as insane. It seems that most online platforms require a makeover about once every 5 years, so yeah, if this goes well then launching LessWrong 3.0 in 2022-2023 seems quite reasonable. Though some platforms seem to last 10 years without a makeover before they seriously decline, so maybe we can put it off until 2027-2028. I would be surprised if this website remained successful for more than 10 years without significant rework (the internet changes quickly, and there is a good chance we will all be browsing in VR by then, or social networks will have wholly consumed the internet, or some other change on the scale of the onset of social networks will have happened).

spiralingintocontrol · 6y · 4 points
If people are leaving as we speak, then scaling it to the size it already is may indeed require change.

Yeah, it's both important to me that the people I see doing the most valuable work on rationality and existential risk feel comfortable posting to the platform, and that we can continue replacing the people we will inevitably lose because of natural turnover with people of equal or better quality.

This has definitely not been the case over the previous 3 years of LessWrong, and fixing that will require some changes. My diagnosis of why that happened is partially that the nature of how people use the internet changed (with the onset of social networks and more competition due to better overall technology), partially that the people who were doing good work on the problems changed, and partially that we simply didn't have a system that could productively fight the forces of entropy for very long, and so a lot of the best people left.

I agree that it is hard to deal with large numbers of people joining at the same time, which is why I am indeed not super interested in discontinuous growth and am not pushing for anything in that direction. I do still think we are under the number of people who can productively talk to each other, and that at this point in time further sustained, slow growth is valuable and net-positive.

Chris_Leong · 6y · 1 point
Do you think that people are leaving at more than a reasonable rate of natural attrition? If so, why?
habryka · 6y · 5 points
Right now I think we are growing, though people have definitely been leaving over the last few years. I also think Eliezer has not had amazing experiences with the new LW, and there are some other people who showed up and would probably leave again if things don’t change in some way. On net I think we are strongly in the green, but I still think we are missing out on some really good and core people.
Raemon · 6y · 8 points
I'm thinking of it as "we're growing, but on credit." People are trying it out because they've heard enough interesting things to give it a go again, but it hasn't hit something like genuine profitability.
ChristianKl · 6y · -3 points
There's a lot of writing on LessWrong about how to make predictions about the future, and this is a poor one. Good predictions have probabilities attached to them.

Is your concern that it's not clear whether PDV's estimate is a mean, median, or mode? "Median and mode" seems like a reasonable guess (though one has to be careful when defining the mode, etc.).

Being ambiguous about your prediction leaves wiggle room, but that's typical for English sentences; giving an estimate without saying exactly what it means is still less ambiguous than the default.

ChristianKl · 6y · -2 points
The question is a bit like "When will you stop beating your wife?" It assumes that at some time in the future there will be a need to invest resources in a LW 3.0. That's a bad way to think about the future. It's much better to think about the future in terms of different events that might happen, each with probabilities attached.
habryka · 6y · 8 points
I mean, I mostly agree with PDV's prediction here. Very few things on the internet survive for more than 5 years, and so yeah, I think it's likely you will need a group of at least two or three full-time people working on it to ensure that it stays active and healthy and up-to-date with the way people use the internet in 5 years.