Epistemic status: My actual best bet.  But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated.  And/or you should help me explicate it.

It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better. [2]

  4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

  5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

  6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again.  I will try to post and comment here more often, at least for a while, while we see if we can get this going.  Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.

I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed).  Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort.  (At least if we can build up toward in fact having a single locus.)

If you believe this is so, I invite you to join with us.  (And if you believe it isn't so, I invite you to explain why, and to thereby help explicate a shared body of arguments as to how to actually think usefully in common!)

[1] By track record, I have in mind most obviously that AI risk is now relatively credible and mainstream, and that this seems to have been due largely to (the direct + indirect effects of) Eliezer, Nick Bostrom, and others who were poking around the general aspiring rationality and effective altruist space in 2008 or so, with significant help from the extended communities that eventually grew up around this space.  More controversially, it seems to me that this set of people has probably (though not indubitably) helped with locating specific angles of traction around these problems that are worth pursuing; with locating other angles on existential risk; and with locating techniques for forecasting/prediction (e.g., there seems to be similarity between the techniques already being practiced in this community, and those Philip Tetlock documented as working).

[2] Again, it may seem somewhat hubristic to claim that a relatively small community can usefully add to the world's analysis across a broad array of topics (such as the summed topics that bear on "How do we create an existential win?").  But it is generally smallish groups (rather than widely dispersed millions of people) that can actually bring analysis together; history has often involved relatively small intellectual circles that make concerted progress; and even if things are already known that bear on how to create an existential win, one must probably still combine and synthesize that understanding into a smallish set of people that can apply the understanding to AI (or what have you).

It seems worth a serious try to see if we can become (or continue to be) such an intellectually generative circle; and it seems worth asking what institutions (such as a shared blogging platform) may increase our success odds.

[3]  I am curious whether Arbital may become useful in this way; making conversation and debate work well seems to be near their central mission.  The Effective Altruism Forum is another plausible candidate, but I find myself substantially more excited about Less Wrong in this regard; it seems to me one must be free to speak about a broad array of topics to succeed, and this feels easier to do here.  The presence and easy linkability of Eliezer's Less Wrong Sequences also seems like an advantage of LW.

Thanks to Michael Arc (formerly Michael Vassar) and Davis Kingsley for pushing this/related points in conversation.

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are its central material, he punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) had failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough I recently saw it linked from the front page of realclearpolitics.com


Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting less wrong on multiple fronts.

Seconding Anna and Satvik

I also vote for Vaniver as BDFL.

Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who", but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.

I'm empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift. (E.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw who runs Trike Apps.)

I like the idea of granting domain ownership if we in fact go down the BDFL route.

That's awesome. I'm starting to hope something may come of this effort.
An additional point is that you can only grant the DFL part. The B part cannot be granted but can only be hoped for.
An additional additional point is that the dictator can indeed quit and is not forced to kill themselves to get out of it. So it's actually not FL. And in fact, it's arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous to make a point. What's yours?
The person who owns the website doesn't need consent of the people who visit the website to make changes to the website.
Funny how I didn't notice anyone become outraged. And, of course, BDFL's powers do NOT depend on the consent of the governed -- it's just that the governed have the ability to exit. As to the point, it's merely reminding of the standard trade-off with dictator-like rulers. They are like a little girl:

When she was good
She was very, very good
And when she was bad she was horrid.
I'm concerned that we're only voting for Vaniver because he's well known, but I'll throw in a tentative vote for him. Who are our other options?

I'll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the "villains" of decision making in a book on decision making he suggested to me, Decisive.) Plus, I scanned this thread and I haven't seen Vaniver say he is okay with such a role.

I do agree; one of the reasons why I haven't accepted yet is to give other people time to see this, think about it, and come up with other options. (I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)

I'm concerned that we're only voting for Vaniver because he's well known

Also because he already is a moderator (one of a few moderators), so he already was trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he already did some useful things while moderating.

Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect.)

I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team.

That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred.

Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.

OK, I vote for Satvik as the person to choose who the BDFL is :D
I've done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io) and can offer some help in re-imagining the vision behind lw.
It would be good to know what he thinks the direction of LW should be, but I would really like to see a new BDFL.
I agree that Vaniver should be.
I concur with placing Vaniver in charge. Mainly, we need a leader and a decision maker empowered to execute on suggestions.
I agree, assuming that "technical admin powers" really include access to everything he might need for his work (database, code, logs, whatever).
Throwing in another vote for Vaniver.
Having a BDFL would be great. Vaniver seems to be a good candidate.
I have reservations about this, especially the weird 'for life' part.

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more.
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a prob
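One of the suggestions above -- votes from highly-voted users counting for more -- can be sketched concretely. The logarithmic weighting, the cap, and the names here are illustrative assumptions, not a proposal for the actual formula:

```python
import math
from dataclasses import dataclass

@dataclass
class User:
    name: str
    karma: int = 0

def vote_weight(user, cap=5.0):
    """Diminishing-returns weight: high-karma users count for more, but boundedly."""
    return min(cap, 1.0 + math.log10(1 + max(0, user.karma)))

def score(votes):
    """votes: list of (user, +1/-1) pairs -> weighted score."""
    return sum(direction * vote_weight(user) for user, direction in votes)

alice = User("alice", karma=10_000)   # long-time contributor
bob = User("bob", karma=0)            # brand-new account

print(score([(alice, 1), (bob, -1)]))  # alice's upvote outweighs bob's downvote
```

The cap matters: without it, a handful of very-high-karma accounts could dominate every vote, which recreates the echo-chamber worry raised elsewhere in this thread.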
"Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone"." - this seems complex and better done via a comment

For the Russian LessWrong slack chat we agreed on the following emoji semantics:

  • :+1: means "I want to see more messages like this"
  • :-1: means "I want to see fewer messages like this"
  • :plus: means "I agree with a position expressed here"
  • :minus: means "I disagree"
  • :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
  • :delta: means "I have changed my mind/updated"

We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.

It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
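A minimal sketch of why such reactions beat plain karma: each emoji maps onto a separate axis, so "show me more of this" and "I agree" aggregate independently instead of being collapsed into one number. The axis names and the `tally` helper are hypothetical, for illustration:

```python
from collections import Counter

# Reaction vocabulary modeled on the Russian LW Slack convention above:
# the moderation axis (more/less of this) is kept separate from the
# epistemic axis (agree/disagree).
AXIS = {
    ":+1:":    ("visibility", +1),
    ":-1:":    ("visibility", -1),
    ":plus:":  ("agreement", +1),
    ":minus:": ("agreement", -1),
    ":same:":  ("relatability", +1),
    ":delta:": ("updates", +1),
}

def tally(reactions):
    """reactions: iterable of emoji strings -> per-axis totals."""
    totals = Counter()
    for r in reactions:
        axis, sign = AXIS.get(r, ("other", 0))
        totals[axis] += sign
    return dict(totals)

# A well-written comment people dispute: high visibility, negative agreement,
# and one changed mind -- information a single karma score would destroy.
print(tally([":+1:", ":+1:", ":minus:", ":delta:"]))
```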

Shared here: "What reacts do you want to be able to give to posts? (emoticons, cognicons, and more)"
This Slack-specific emoji capability is akin to Facebook Reactions; namely a wider array of aggregated post/comment actions.
Some sort of emoticon could work, like what Facebook does. Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don't want to take the time to provide a quick comment, but personally I think that's silly as a 10 second comment could help a lot in many cases. If there is a possibility for a 1 second feedback method to allow a little more information than up or down, I think it's worth trying.
I'm reminded of Slashdot. Not that you necessarily want to copy that, but that's some preexisting work in that direction.
This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook, Metaculus, and Good Judgement Open come to mind immediately, and that's not considering the various prediction markets too. I've started writing a command line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy. So integration with LessWrong should not be particularly difficult. (The API for Metaculus is not documented best I can tell, but by snooping around the code you can figure things out...)
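The simplest version of a track-record display needs no API integration at all to start: given a user's resolved predictions -- fetched however (PredictionBook, Metaculus, or a local log; the fetching layer is out of scope here) -- a calibration summary such as a Brier score is a few lines. The `record` data below is made up for illustration:

```python
def brier_score(predictions):
    """predictions: list of (forecast_probability, outcome) pairs, outcome in {0, 1}.
    0.0 is a perfect score; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# A hypothetical resolved track record: (stated probability, what happened)
record = [(0.9, 1), (0.7, 1), (0.8, 0), (0.5, 1)]
print(round(brier_score(record), 4))
```

Displaying something like this next to a username would let readers weight a commenter's stated confidences by their demonstrated calibration, which is closer to what karma is trying to approximate anyway.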
On that topic, how do you upvote? I've never been able to figure it out. I can't find any upvote button. Does anyone know where the button is?
It's a thumbs-up in the lower left corner of a comment or post (next to a thumbs-down). If you don't see it, it may be that they've turned off voting for new or low-karma accounts.
Ya, that must be it. I've been on here for like 3 years (not with this account though) but only after the diaspora. Really excited that things are getting posted again.

One major issue with such a system is that I now feel pressure to post popular content. A major feature of this community is that nothing is dismissed out of hand: you can propose anything you want so long as it's supported by a sophisticated argument. The problem with only giving voting privileges to >x karma accounts is that people, like myself, will feel a pressure to post things that are generally accepted.

Now to be clear, I'm not opposed to such a filter. I've personally noticed that, for example, slatestarcodex doesn't have the same consistently high quality comments as lesswrong (people will have comments like "what's falsification?" etc.), so I acknowledge that such a filter might be useful. At the same time, I'm pointing out one potential flaw with such a filter: it lends itself to creating an echo chamber.
Could you say more about what you have in mind here?
Maybe something like this? https://debatemap.live (note: I'm the developer of it)

I think you're right that wherever we go next needs to be a clear schelling point. But I disagree on some details.

  1. I do think it's important to have someone clearly "running the place". A BDFL, if you like.

  2. Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.

  3. Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.

  4. Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.

  5. I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, have almost completely stopped commenting since quite a while, because doing so is no longer rewarding.

doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).

This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. When I say "the basics", I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.

There were occasional interesting and original objections to the basics. I can't find it now, but there was an interesting series of posts responding to this post of mine on Savage's theorem; this response argu...

When it's useful, it's useful; when it's damaging, it's damaging. It's damaging when the sequences don't actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It's just too easy to silently downvote, or write "read the sequences". In an alternative universe there is a LW where people don't say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that's where the damage is coming from. Unfortunately, although all of that is fixable, it cannot be fixed without "debating philosophy".

ETA: Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That's a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don't revise doctrine.
Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.
I'm not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can't be done without community effort, and without a rethink of the status of the sequences.

I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?")

Reputations seem to be very fragile on the Internet. I wonder if there's anything we could do about that? The one crazy idea I had was (rot13'd so you'll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.
Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh'yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
As opposed to what? Memorising the One true Philosophy?
The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.
How many problems has the second sort solved? Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
Too many for me to quickly count? Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured. I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.
Name three, then. :)
What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not: How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
Scientists don't approach philosophy, though; they run screaming in the other direction. The Scientific Method doesn't work on untestable stuff.
Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
Is that a joke?
The last time I counted I came up with two and a half.
I've considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now, can be solved in such a way, and most that can't be solved in such a way right now, can't be solved at all right now. Adding more "hashing out of big questions" doesn't seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.
Can you tell which problems can never be solved?
Only an ill-posed problem can never be solved, in principle.
Is there a clear, algorithmic way of determining which problems are ill posed?
Yeah, you just need a halting oracle and you're sorted.


For the benefit of anyone else who'd need to Google: Benevolent Dictator For Life

I am working on a project with this purpose, and I think you will find it interesting:


It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.

It is based on the open source platform that I'm building:


This platform will address most of the issues discussed in this thread. It can be used both like a publishing/discussion platform, and as a link aggregator, because it supports both twitter-like discussion, reddit-like communities, and medium-like long form articles.

This platform is in active development, and I'm very interested in your feedback. If LessWrong community needs any specific functionality that is not implemented yet - I will be happy to add it. Let me know what you think!

Strong writers enjoy their independence.

This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though -- if more than a few can be convinced to do it.


Speaking as a writer for different communities, there are 2 problems with this:

  • Duplicate content: unless explicitly canonicalized via headers, Google is ambiguous about which version should rank for keywords. This hits small & upcoming authors like a ton of bricks, because by default, the LW version is going to get ranked (on the basis of authority), and their own content will be marked both as a duplicate and as spam, and their domain deranked as a result.

  • "An audience of your own": if a reasonable reader can reasonably assume that "all good content will also be cross-posted to LW anyways", that largely eliminates the reason why one should have the small blogger in their RSS reader / be checking once a day in the first place.
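For what it's worth, the standard fix for the duplicate-content problem is the rel="canonical" link element: the cross-posted LW copy declares the author's own blog as the canonical version, so search engines attribute the ranking there rather than penalizing the original. The URL here is a placeholder:

```html
<!-- In the <head> of the cross-posted LW copy; the href is a hypothetical
     example, pointing at the author's original post -->
<link rel="canonical" href="https://example-author-blog.com/original-post" />
```

This only addresses the SEO half of the objection; the "audience of your own" problem is social, not technical.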

The HN "link aggregator" model works, because by directly linking to a thing, you will bump their ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS, or newsletters); and therefore have limited downside of participation.

"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.

My willingness to cross post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider it a bad thing because it meant that almost every post was gold.

In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don't have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn't a good fit for G Wiley's budding rationalist community blog, let alone old LW.

I guess what I'm saying is that there's a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) -> cross posting -> links with centralized discussion -> blogroll (loosest). Any point on the scale could work, but it's important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they're in or out.

I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we're going to have to figure out how to address.
Paul Crowley:
The big downside of this is that it divides the discussion.
Gordon Seidoh Worley:
But what's so bad about divided discussion? In some ways it helps by increasing the surface area to which the relevant ideas are exposed.

On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.

I think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn't seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work. If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here to see what needs to be worked on.
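Making the arithmetic behind the estimates in this thread explicit (the hourly rate and the hours figures are the commenters' assumptions, not quotes):

```python
def dev_cost(hourly_rate, hours):
    """Total contractor cost at a flat hourly rate."""
    return hourly_rate * hours

# $100/hr at ~100 hours of work is the $10k estimate; the $100k
# counter-estimate amounts to assuming an order of magnitude more hours.
optimistic = dev_cost(100, 100)
pessimistic = dev_cost(100, 1000)

print(f"${optimistic:,} - ${pessimistic:,}")  # $10,000 - $100,000
```

The entire disagreement is therefore about the hours figure, not the rate -- which is where the codebase-complexity reports further down bear directly.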

I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.

Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.

Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.

The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. You need to know Python and Ruby just to get a chance, and then you have to study code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxe... (read more)


At one point I was planning on making a contribution. It was difficult just getting the code set up, and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in development mode. For example, on Mac you have to run it from within a disk image (the VM didn't work), and setting up new user accounts for testing purposes was a huge pain.

I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn't there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.

The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren't quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.

The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.

Thanks for trying to work on that one!

setting up new user accounts for testing purposes was a huge pain.

This seems like the sort of thing that we should be able to include with whatever makes the admin account that's already there; I was watching someone running a test yesterday and while I showed them the way to award accounts karma, I didn't know of a way to force the karma cache to invalidate, and so they had to wait ~15 minutes to be able to actually make a post with their new test account.

These sorts of usability improvements--a pull request that just adds comments for a section of code you spent a few hours understanding, or an improvement to the setup script that makes the dev environment better--are sorely needed and greatly appreciated. In particular, don't feel at all bad about changing the goal from "I'm going to close out issue X" to "I'm going to make it not as painful to have test accounts," since those sorts of improvements will probably lead to more than one issue getting closed out.

Maybe it would be easier to make contributions that rely on the existing code as little as possible -- scripts running on separate pages that would (1) verify that the person running them is a moderator, and (2) connect to the LW database (these two parts would be common to all such scripts, so have them as two functions in a shared library) -- and then have a separate simple user interface for doing whatever needs to be done. For example, make a script called "expose_downvotes" that displays a text field where the moderator can paste a comment permalink; after clicking "OK", a list of usernames who downvoted the specific comment is displayed (preferably with hyperlinks to their user profiles). For the moderator's convenience, the comment id is automatically extracted from the permalink.

The moderator would simply open this script in a second browser tab, copy the link location from the "Permalink" icon at the bottom of a comment, click "OK", done. Compared with solutions integrated into the LW web page, this solution is only slightly more complicated for the moderator, but probably much simpler for the developer to write. Most likely the moderator will have the page bookmarked, so it's just "open bookmark in a new tab, switch to the old tab, right-click the permalink icon, copy the URL, switch to the new tab, click the text field, Ctrl+V, click OK". Still a hundred times simpler (and a thousand times faster!) than calling tech support, even assuming their full cooperation. Each such script could be on a separate page, and they could all be linked together by another function in the shared library which adds a header containing hyperlinks to all of them.
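A minimal sketch of what the core logic of such a standalone script might look like, in Python. The permalink format, vote-row layout, and function names here are assumptions for illustration, not the actual LW schema; the real script would also need the moderator check and database connection described above.

```python
def extract_comment_id(permalink: str) -> str:
    """Take the last path segment of the permalink as the comment id.
    Assumed format: http://lesswrong.com/lw/<post_id>/<slug>/<comment_id>"""
    return permalink.rstrip("/").rsplit("/", 1)[-1]

def downvoters(comment_id, votes):
    """Return usernames that downvoted the given comment.
    `votes` stands in for a query against the real vote table:
    a list of (username, comment_id, direction) rows, direction -1 or +1."""
    return sorted(u for (u, cid, d) in votes if cid == comment_id and d == -1)

# Example with stand-in data:
votes = [("alice", "cy8e", -1), ("bob", "cy8e", 1), ("carol", "cy8e", -1)]
cid = extract_comment_id("http://lesswrong.com/lw/n0l/lesswrong_20/cy8e")
print(downvoters(cid, votes))  # -> ['alice', 'carol']
```

The point of keeping the logic this small is that each moderation script stays independent of the main codebase, so a contributor only needs the shared auth/DB helpers rather than an understanding of the whole application.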
I had difficulties finding people without mentioning a price; I'm pretty sure the defect was in where and how I was looking for people. I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW's code. I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven't been approaching it that way is that it's harder to make small moves and test their results; either you switch over, or you don't, and no potential replacement was obviously superior.

I'm new and came here from Sarah Constantin's blog. I'd like to build a new infrastructure for LW, from scratch. I'm in a somewhat unique position to do so because I'm (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time and it will be robust enough to be used in an alpha or beta state, and attract devs to contribute to further development.

Here is how I envision the basic architecture of this project:

  1. A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
  2. A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
  3. A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc.). The first concern is security, with all others predicated on that.

I w... (read more)
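One way to read the proposed split is that the business-logic layer (1) should be testable with no frontend and no real database attached. A minimal sketch of that idea in Python -- all class and method names here are illustrative assumptions, not the actual design:

```python
class PostService:
    """Business logic for posts, independent of frontend and storage.
    `store` is any object exposing save(post) and all() -- in production
    this would be an adapter over the real (possibly NoSQL) database."""

    def __init__(self, store):
        self.store = store

    def submit(self, author, title, body):
        if not title.strip():
            raise ValueError("title required")
        post = {"author": author, "title": title, "body": body, "removed": False}
        self.store.save(post)
        return post

    def moderate_remove(self, post):
        """Moderation action: hide a post without deleting the record."""
        post["removed"] = True

    def visible_posts(self):
        return [p for p in self.store.all() if not p["removed"]]

class InMemoryStore:
    """Stand-in for the database layer (3), useful for tests."""
    def __init__(self):
        self._posts = []
    def save(self, post):
        self._posts.append(post)
    def all(self):
        return list(self._posts)
```

Keeping the seams this explicit is what would let the React frontend (2) and the storage backend (3) be swapped out later without touching the rules about posting and moderation.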

Thanks for the offer! Maybe we should talk by email? (this username @ gmail.com)

If you don't get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)
Currently HackerNews and LW both run on the Reddit code base. One of the problems is that Reddit didn't design their software to be easily adapted to new projects. That means it's not easily possible to update the code with new versions. A lot of the data will be votes.
Nitpick: Hacker News isn't Reddit-derived. It is written in Arc, and not open source.
I see various people volunteering for different roles. I'd be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a light-weight graphic design role as well, and that can be freelance. Like ananda, I'm happy to do this as an open-contribution project rather than paid. I'll reach out to Vaniver via email.
I have some front-end experience and would love to help you(I'm a student). Email me at my username @gmail.com

Well, if someone would be willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not -- there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let's say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed.

I am not saying that paying me for this job is a rational thing to do; let's just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)

Maybe it was a mistake that I didn't mention this option sooner... but hearing all the talk about "some volunteers doing it for free in their free time" made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can't change the past.)

I certainly couldn't do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patc... (read more)

Not trolling here, genuine question. How is the LW codebase so awful? What makes it so much more complicated than just a typical blog plus karma? I feel like I must be missing something. From a UI perspective it is text boxes and buttons. The data structure that you need to track doesn't SEEM too complicated (users have names, karma totals, passwords, and roles?). What am I not taking into account?
Age, mostly. My understanding is Reddit was one of the first of its kind, and so when building it they didn't have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it's using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.
It's also tied to some pretty old versions of Python and various libraries, and people often need help just getting the development server going.
It's a modified copy of an early Reddit codebase. Besides it has, um, founder effects X-/ -- for example the backend SQL database is used just as an engine behind a handcrafted key-value store...
If the money is there, why not just pay a freelancer via Gigster or Toptal?
Historically, the answers have been things like a desire to keep it in the community (given the number of software devs floating around), the hope that volunteer effort would come through, and me not having much experience with sites like those and thus relatively low affordance for that option. But I think if we pay for another major wave of changes, we'll hire a freelancer through one of those sites. (Right now we're discussing how much we're willing to pay for various changes that could be made, and once I have that list I think it'll be easy to contact freelancers, see if they're cheap enough, and then get done the things that make sense to do.) [edit] I missed one--until I started doing some coordination work, there wasn't shared knowledge of what sort of changes should actually be bought. The people who felt like they had the authority to design changes didn't feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn't feel like they had the authority to design changes, and both of them had more important things to be working on.
This sort of leadership vacuum seems to be a common problem in the LW community. Feels to me like people can err more on the side of assuming they have the authority to do things.
Yeah, a good default is the UNODIR pattern ("I will do X at Y time unless otherwise directed")
I can code in python, but I have no web dev experience - I could work out what algorithms are needed, but I'm not sure I would know how to implement them, at least not off the bat. Still, I'd be willing to work on it for less then $100 per hour.
Thanks for the offer! If you're working for $x an hour, do you think you would take fewer that 100/x times as long as someone who is experienced at web dev?
Fair pay would be $x an hour given that it takes me 100/x times as long as someone who is experienced at web dev. However in reality estimates of how long the work will take seem to vary wildly - for instance you and Viliam disagree by an order of magnitude. The more efficient system might be for me to work with someone who does have some web dev experience, if there is someone else working on this.
Hi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I'm posting back now.

Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous "consensus" in the "community" that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament; you got 4chan.

I have strong opinions about politics and also a desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death.

I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism. Don't go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn't moderate a tight ship, and allows open registration.

And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism -- this is a mistake we made last time, and suffered for.
Ha-ha. You seem to have a desire to discuss the topic only in a tightly controlled environment where you get to establish the framework and set the rules.
I didn't see anything in eagain's comment that demanded that he[1] get to establish the framework and set the rules. (It is easy, and cheap, to portray any suggestion that there should be rules as an attempt to get to set them. Human nature being what it is, this will at least sometimes be at least partly right. I don't see that that means that having rules isn't sometimes a damn good idea.) [1] Apologies if I guessed wrong.
Eagain knows which ideas are "deeply bad" and he's quite certain they need to be excluded from the conversation.
I didn't say excluded from the conversation. I said exposed to the bright, glaring sunlight of factual rigor.
These words do not appear anywhere in your comment. Instead you said: "Don't go beyond that", which seems to mean not allowing those politics and the bad-idea raiders. "Not allowing" does not mean "expose to sunlight"; it means "exclude".
I'm not sure if this is what eagain was alluding to, but this does seem advisable: do not permit (continuous) debates of recognizably bad ideas. I admit this is difficult to enforce, but stating that rule will, in my opinion, color the intended purpose of this website.
The word "bad" looks to be doing all the heavy lifting in here.
Which isn't being done because of what...? Widespread stupidity?
Perhaps he does. It wouldn't exactly be an uncommon trait. However, there is a gap between thinking that some particular ideas are very bad and we'd be better off without them, and insisting on setting the rules of debate oneself, and it is not honest to claim that someone is doing the latter merely because you are sure they must be doing the former.
This thread is about setting the rules for discussions, isn't it? Eagain is talking in the context of specifying in which framework discussing politics can be made to work on LW.
Yup. That is (I repeat) not the same thing as insisting that he get to establish the framework and set the rules. (It seems to me that with at least equal justice someone could complain that you are determined to establish the framework and set the rules; it's just that you prefer no framework and no rules. I don't know whether that actually is your preference, but it seems to me that there's as much evidence for it as there is for some of what you are saying about eagain's mental state.)
And yet I'm not telling LW how to set up discussions...
Aren't you? I mean, you're not making concrete proposals yourself, of course; I don't think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people's. But looking at the things you object to and the things you don't, it seems to me that you're taking a position on how LW's discussions should be just as much as eagain is; you're just expressing it by objecting to things that diverge from it, rather than by stating it explicitly.
Lumifer seems to object to things because he finds it enjoyable to object to things, and this is a good explanation for why he objects to things rather than making his own proposals. But this means that he is not necessarily taking a position on how discussion should be, since he would be likely to object to both a proposal and its opposite, just because it would still be fun to object.
It seems to me that there are definite regularities in which proposals he objects to and which he doesn't.
Hmm. That sounds like a nice rule: anyone who spends all their posting efforts on objecting to other people's ideas without putting forth anything constructive of their own shall be banned, or at least downvoted into oblivion.
I think that would be excessive. Pointing out others' mistakes is a useful activity. (Think of Socrates.) Also, downvoting is disabled right now.
The thing is, I understand the difference between argument points and policy proposals. These are very very different creatures.
I remark that this is not a million miles from what Eugine_Nier tried to do, and unfortunately he was not entirely unsuccessful. (Though he didn't get nearly as far as producing a homogeneous consensus in favour of his ideas.)
I would rather politics happen in all those other places you mentioned.
Re: #2, it seems like most of the politics discussion places online quickly become dominated by one view or another. If you wanted to solve this problem, one idea is:

1. Start an apolitical discussion board.
2. Gather lots of members. Try to make your members a representative cross-section of smart people.
3. Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board.

I explained here why I think reducing political polarization through this sort of project could be high-impact.

Re: #3, I explain why I think this is wrong in this post. "Strong writers enjoy their independence" -- I'm not sure what you're pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day). (I largely agree otherwise.)
What explosions from EY are you referring to? Could you please clarify? Just curious.
1. I agree completely.

2. Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose. The comment section has, according to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of the discussion is substantially lower in general than it could have been.

3. Sure.

4. Codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, zero initial karma awarded for posting, and any other rational-discussion-fostering mechanics which have become apparent since then.

5. I agree: make this site read-only, use it and the wiki as a knowledge base, and start over somewhere else.
I think Hacker News has a better solution to that problem (if you reply to someone who replied to you, your reply gets delayed--the deeper the thread, the longer the delay).
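As far as I know HN's exact formula isn't public, but the mechanic itself is simple: the cooldown before you can reply grows with thread depth. A guess at the shape, not HN's actual code:

```python
def reply_delay_minutes(depth, base=2, factor=2, cap=60):
    """Minutes a user must wait before a reply at the given thread depth
    becomes postable. depth 0 = top-level comment (no delay); each level
    deeper doubles the wait, up to a cap. All constants are made up."""
    if depth <= 0:
        return 0
    return min(cap, base * factor ** (depth - 1))

for d in range(5):
    print(d, reply_delay_minutes(d))  # 0, 2, 4, 8, 16 minutes
```

The design intuition is that deep back-and-forth threads are where flamewars live, so taxing depth with time cools them off without banning anyone.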
I wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn't open source, but I'm thinking about a site that takes Hacker News's decisions as the default, unless there seems to be a good reason for something different.)
Well, there's a vanilla version of HN that comes with the Arc distribution. It doesn't look like any of the files in the Arc distribution have been modified since Aug 4, 2009. I just got it running on my machine (only took a minute) and submitted a link. Unsure what features are missing. Relevant HN discussion. If someone knows Paul Graham, we might be able to get a more recent version of the code, minus spam prevention features & such? BTW, I believe Y Combinator is hiring hackers. (Consider applying!) Arc isn't really used for anything besides Hacker News. But it's designed to enable "exploratory programming". That seems ideal if you wanted to do a lot of hands-on experimentation with features to facilitate quality online discussion. (My other comment explains why there might be low-hanging fruit here.)
Hacker News was rewritten in something other than Arc ~2-3 years ago, IIRC, and it was only after that that they managed to add a lot of the interesting moderation features. There are probably better technologies to build an HN clone in today–Clojure seems strictly better than Arc, for instance. The parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are mostly comes from having observed the site and seeing comments here and there over the years.
Here is some alternative code for building an HN clone: https://github.com/jcs/lobsters (see https://lobste.rs/about for differences to HN).
Yes, I think Hacker News is plausibly the best general-purpose online discussion forum right now. It would not surprise me if it's possible to do much better, though. As far as I can tell, most online discussion software is designed to maximize ad revenue (or some proxy like user growth/user engagement) rather than quality discussions. Hacker News is an exception because the entire site is essentially a giant advertisement to get people applying for Y Combinator, and higher-quality discussions make it a better-quality advertisement.
Relevant: http://danluu.com/hn-comments/
Paul Crowley
This is the platform Alexandros is talking about: http://www.telescopeapp.org/
If I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin... :-D
What's hilariously ironic is that our problem immigrants are Eugine's sockpuppets, when Eugine is NRx and anti-immigrant. That Eugine is so much of a problem is actually evidence in favour of some of his politics.
And when the dictator stops Eugine, it will also prove that Cthulhu always swims left. (Meanwhile, in a different tribe: "So, they have a dictator now, and of course it's a white male. That validates our beliefs!")
Don't forget that Cthulhu is a white male :-P (race/sex are social constructs, aren't they? Cthulhu is definitely not one of oppressed minorities so there you go)
You're talking about someone using the easiest method of disruption available to individuals, combined with individual voter fraud. This is difficult to stop because of the site's code, which I think the single owner of the site chose.
LW has a BDFL already. He's just not very interested and (many) people don't believe he's able to restore the website. We didn't "come to believe" anything.
No, EY effectively doesn't act as a BDFL. He doesn't have the effective power to ban contributors. The last time I asked him to delete a post he said that he can't for site political reasons. The site is also owned by MIRI and not EY directly.
Lessee... He isn't so much benevolent as he is absent. I don't see him exercising any dictatorial powers and as to "for life", we are clearly proposing that this ain't so. So it seems you're just wrong. An "absentee owner/founder" is a better tag.
As a newbie, I have to say that I am finding it really hard to navigate around the place. I am really interested in rational thinking and the ways people can improve it, as well as persuasion techniques to try to get people to think rationally about issues, since most of them fall prey to cognitive biases and bad, illogical thinking. I have found that writing about these concepts for myself really helps in clarifying things, but I sometimes miss a discussion on these topics, so that's why I came here. For me, some things that could help improve this site: 1) better organization, making the site clearer to navigate; 2) a set of easy-to-read newbie texts; 3) the ability to share interesting posts from other places and discuss them.
I didn't delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would an HN-style aggregator add over either what we have now, or our Reddit fork plus Reddit's ability to post links to external sites?); I deleted it because the things people posted here weren't good. I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We're sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there's a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn't seem to have been much progress made. I haven't seen anyone link to an LW post written after I deleted my account. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here and then they stopped. Then again, better CSS wouldn't hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.
That's true. LW isn't bringing back yvain/Scott or other similar figures. However, it is a cool training ground/incubator for aspiring writers. As of now I'm a 'no one.' I'd like to try to see if I can become 'some one.' SSC comments don't foster this. LW is a cool place to try, it's not like anyone is currently reading my own site/blog.

There's an issue that I expect will be closed sometime this week that I think will round out the suite of technical tools that will give moderators the edge over trolls. Of course, people are intelligent and can adapt, so I'm not going to hang up a Mission Accomplished banner just yet.

I predict that whatever is in this drop will not suffice. It will require at minimum someone who has both significant time to devote to the project, and the necessary privileges to push changes to production.

I applaud this and am already participating by crossposting from my blog and discussing.

One thing that I like about using LW as a home base is that everyone knows what it is, for good and for ill. This has the practical benefit of not needing further software development before we can get started on the hard problem of attracting high-quality users. It also has the signaling benefit of indicating clearly that we're "embracing our roots", including reclaiming the negative stereotypes of LessWrongers. (Nitpicky, nerdy, utopian, etc.)

I am unusual in this community in taking "the passions" really seriously, rather than identifying as being too rational to be caught up in them. One of my more eccentric positions has long been that we ought to be a tribe. For all but a few unusual individuals, humans really want to belong to groups. If the group of people who explicitly value reason is the one group that refuses to have "civic pride" or similar community-spirited emotions, then this is not good news for reason. Pride in who we are as a community, pride in our distinctive characteristics, seems to be a necessity, in a cluster of people who aspire to do bet... (read more)

Oof, is this really an eccentric position? FWIW, I am extremely convinced that the rationalist community ought to be a tribe, and one of the biggest updates I made at the CFAR reunion was seeing what felt to me like evidence that we were becoming more functional along tribey directions that I really wanted.

In short, because I think tribes are the natural environments in which humans live, and that ignoring that fact produces unhappy and dysfunctional humans.

There's a logic gap there. You are assuming that rationalists don't have pre-existing tribes, and that they won't be in any tribe if they are not in the rationalist tribe. And you are assuming that rationalists need to be in a rationality tribe in order to be rational... arguably, it works the other way: tribalism enhances groupthink bias, and so lowers the rationality level on the whole.

I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:

"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)

That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:

  • set the rules that all participants have to abide by, and enforce them
  • decide principles for what's on-topic and what's off-topic
  • receive reports of trolls, and warn or ban them
  • respond to complaints about the site not working well
  • decide what the site features should be, and implement the high-priority ones

Pretty much any successful subreddit, even small... (read more)

a proactive, responsive admin/moderation team

Which needs to be backed up by a responsive tech support team. Without the support of the tech support, the moderators are only able to do the following:

1) remove individual comments; and
2) ban individual users.

It seems like a lot of power, but for example when you deal with someone like Eugine, it is completely useless. All you can do is play whack-a-mole with banning his obvious sockpuppet accounts. You can't even revert the downvotes made by those accounts. You can't detect the sockpuppets that don't post comments (but are used to upvote the comments made by the active sockpuppets, which then quickly use their karma to mod-bomb the users Eugine doesn't like). So, all you can do is to delete the mod-bombing accounts after the damage was done. What's the point? It will cost Eugine about 10 seconds to create a new one.

(And then Eugine will post some paranoid rant about how you have some super shady moderator powers, and a few local useful idiots will go like "yeah, maybe the mods are too poweful, we need to stop them", and you keep banging your head against the wall in frustration, wishing you actually had a fraction of thos... (read more)
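The detection Viliam is describing doesn't need anything sophisticated; even read-only access to the raw vote table would let a moderator flag voting rings with a few lines of analysis. A sketch of the idea, assuming the votes can be exported as (voter, target_author, direction) rows -- the thresholds are arbitrary illustrations:

```python
from collections import Counter

def suspicious_downvoters(votes, min_votes=10, concentration=0.8):
    """Flag accounts whose downvotes are heavily concentrated on one author.
    `votes` is a list of (voter, target_author, direction) rows, direction
    -1 or +1. Returns {voter: most_targeted_author} for accounts casting at
    least `min_votes` downvotes with `concentration` of them on one author."""
    per_voter = {}
    for voter, author, direction in votes:
        if direction == -1:
            per_voter.setdefault(voter, Counter())[author] += 1
    flagged = {}
    for voter, counts in per_voter.items():
        total = sum(counts.values())
        top_author, top_count = counts.most_common(1)[0]
        if total >= min_votes and top_count / total >= concentration:
            flagged[voter] = top_author
    return flagged
```

This would catch mod-bombing accounts even if they never post a comment, which is exactly the case the built-in moderator tools can't see.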

I was including tech support under "admin/moderation" - obviously, the ability to e.g. IP ban people is important (along with access to the code and the database generally). Sorry for any confusion.

That's okay, I just posted to explain the details, to prevent people from inventing solutions that predictably couldn't change anything, such as: appoint new or more moderators. (I am not saying more help wouldn't be welcome, it's just that without better access to data, they also couldn't achieve much.)

Wow, that is a pretty big issue. Thank you for mentioning this.

Agree with all your points. Personally, I would much rather post on a site where moderation is too powerful and moderators err towards being too opinionated, for issues like this one. Most people don't realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.

What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.

When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.

Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make the decision "user xyz123 is abusing the voting mechanism" or "user xyz123 is a sockpuppet for user abc789", describe my case to the other mods, and only after getting their agreement would I learn who "user xyz123" actually is.

(But of course, getting the database without anonymization -- if that would be faster -- would be equally good; I could just anonymize it after I get it.)
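The pseudonymization step described above can be sketched in a few lines. This is a hypothetical illustration (the function name and identifier format are my own, not Viliam's actual tooling): a keyed HMAC gives each handle a stable but irreversible identifier, so vote patterns can be analyzed before anyone learns the real names.

```python
import hashlib
import hmac
import secrets

def pseudonymize(handles, secret=None):
    """Map each user handle to a stable opaque identifier.

    The mapping is deterministic for a given secret (the same user gets
    the same pseudonym across tables), but it cannot be reversed without
    the secret, which only the de-anonymizing party holds.
    """
    secret = secret or secrets.token_bytes(32)
    mapping = {
        h: "user_" + hmac.new(secret, h.encode(), hashlib.sha256).hexdigest()[:8]
        for h in handles
    }
    return mapping, secret
```

Once the mods agree that "user xyz123" is a sockpuppet, whoever holds the secret can regenerate the mapping and look up the real handle.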

Offline so that I could freely run there any computations I imagine, without increasing bills for hosting. Also, to have it faster, not be limited by internet bandwidth, and to be free to use any programming language.

What specific computations would I run there? Well, that's kinda the point that I don't know in advance. I would try different heuristics, and see what works. Also, I suspect there would have to be some level of "security by obscurity", to avoid Eugine adjusting to my algorithms. (For example... (read more)

It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than having it visible to readers who upvoted or downvoted which post, as on Facebook). But I haven't thought about it much.

If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.

It can happen without full awareness... the user will simply notice that X upvotes them often and Y downvotes them often... they will start liking X and disliking Y... they will start getting pleasant feelings when looking at comments written by X ("my friend is writing here, I feel good") and unpleasant feelings when looking at comments written by Y ("oh no, my nemesis again")... and that will be reflected by how they vote.

And this is the charitable explanation. Some people will do this with full awareness, happy that they provide incentives for others to upvote them, and deterrence to those who downvote. -- Humans are like this.

Even if the behavior described above did not happen, people would still instinctively expect it to, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to get contrarian points. Either way, different forms of signalling would get involved.

From the view of game theory, if some people would have a reputation to be magnanimous about downvotes, and other peop... (read more)

It's not actually obvious to me that downvotes are even especially useful. I understand what purpose they're supposed to serve, but I'm not sure they actually serve it.

It seems like if we removed them, a major tool available to trolls is just gone.

I think downvoting is also fairly punishing for newcomers - I've heard a few people mention they avoided Less Wrong due to worry about downvoting.

Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which triggers mod attention but has no visible effect.

Alternatively, go with the Hacker News model of only enabling downvotes after you've accumulated a large amount of karma (enough to put you in, say, the top 0.5% of users). I think this gets most of the advantages of downvotes without the issues.
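A karma-gated downvote rule like this is easy to sketch. This is a hypothetical illustration in Python (the function names and the exact percentile cutoff are my own assumptions, not the actual HN or LW implementation):

```python
def downvote_threshold(karma_by_user, top_fraction=0.005):
    """Karma cutoff such that only roughly the top `top_fraction`
    of users (by karma) may downvote."""
    scores = sorted(karma_by_user.values(), reverse=True)
    if not scores:
        return float("inf")  # no users yet: nobody may downvote
    k = max(1, int(len(scores) * top_fraction))
    return scores[k - 1]

def can_downvote(user, karma_by_user, top_fraction=0.005):
    """True if the user's karma meets the current percentile cutoff."""
    return karma_by_user.get(user, 0) >= downvote_threshold(karma_by_user, top_fraction)
```

One design wrinkle: because the threshold is a percentile rather than a fixed number, it drifts as the community grows, so a real implementation would probably recompute it periodically rather than on every vote.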

I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short term solution of turning off all downvoting for the next 3 months. This might well increase population.

(Or similar variants like turning off 'comment score below threshold' hiding, etc)

Good vs bad posts could be discerned just by looking at total likes, the way it is on facebook.

Preferably also sorted by the number of total likes. Otherwise the only difference between a comment with 1 upvote and 15 upvotes is a single character on screen that requires some attention to even notice.

Actual spam could just be reported rather than downvoted

There are some kinds of behavior which in my opinion should be actively discouraged besides spam: stubborn stupidity, or verbal aggression toward other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to the comment without removing it completely. (I imagine a red text saying "this comment is unnecessarily rude", which would also effectively halve the number of likes for the purpose of comment sorting.)
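The warning-penalized sorting idea can be sketched directly. This is a hypothetical illustration (the comment fields and the halving rule are assumptions drawn from the comment above, not an existing LW feature):

```python
def effective_score(comment):
    """Total likes, halved when a moderator warning is attached."""
    score = comment["likes"]
    if comment.get("warning"):
        score /= 2
    return score

def sort_comments(comments):
    # Highest effective score first; warned comments sink relative
    # to unwarned ones with similar raw like counts.
    return sorted(comments, key=effective_score, reverse=True)
```

With this rule a rude comment keeps its visible like count, but a moderator warning quietly costs it half its weight in the ordering.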

I think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your "fans" vs "enemies" are, you will inevitably try to play to your "fans" and develop dislike for your "enemies." I think this is the primary thing that makes social media bad.

My current cutoff for what counts as a "social media" site (I have resolved to never use social media again) is "is there a like mechanic where I can see who liked me?" If votes on LW were public, by that rule, I'd have to quit.

Could you elaborate on what you mean by this? "Posting different kinds of articles on LW and writing more of the kind of stuff that gets upvoted" also sounds like "playing to your fans" to me - in both cases you're responding to feedback and (rationally) tailoring your content towards your preferred target audience, even though in the LW case, you aren't entirely sure of who your target audience consists of.
Do you mean that the group dynamic itself changes for the worse if likes are visible to those who want to see them, so that it doesn't matter if there is a setting that makes the likes invisible to you in particular? It's a tradeoff, some things may get worse, others may get better. I don't have a clear sense of this tradeoff.
Imagine that you're a new person who's a little shy about the forum, but has read a large part of the Sequences and really thinks that Eliezer is awesome, and then you make your first post and see that Eliezer himself has downvoted you. The psychological impact would likely be a lot bigger than a single downvote should have. OTOH, making upvotes public would probably be a good change: seeing a list of people who upvoted you feels a lot more motivating to me than just getting an anonymous number.

the tech support doesn't give a fuck, and will cite privacy concerns when you ask them for more direct access to the database

Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? Why do they then get their contract renewed? Are they taking orders from some secret deep owners of LW who outrank the moderators?

Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? ...Why do they then get their contract renewed?

The tech support is Trike Apps, who have freely donated a huge amount of programmer time toward building and maintaining LessWrong.

Yeah, it's a bit of "don't look a gift horse in the mouth" situation. When someone donates a lot of time and money to you, and suddenly becomes evasive or stubborn about some issue that is critical to be solved properly... what are you going to do? It's not like you can threaten to fire them, right?

In hindsight, I made a few big mistakes there. I didn't call Eliezer to have an open debate about what exactly is and isn't in my competence; that is, in case of different opinions about what should be done, who really has the last word. Instead I gave up too soon; when one of my ideas was rejected I tried to find an alternative solution, only to have it rejected again... or to finally succeed at something, and then see that Eugine improved his game, so I would have another round of negotiation... until I gradually developed a huge "ugh field" around the whole topic... and wasted a lot of time... and then other people took over the role and had to start from the beginning again.

Paul Crowley
If we built it, would they come? You make a strong case that the workforce wasn't enabled to do the job; if that were fixed, would the workforce show up?

I strongly agree with this sentiment, and currently Arbital's course is to address this problem. I realize there have been several discussions on LW about bringing LW back / doing LW 2.0, and Arbital has often come up. Up until two weeks ago we were focusing on "Arbital as the platform for intuitive math explanations", but that proved to be harder to scale than we thought. We now pivoted to a more discussion-oriented truth-seeking north star, which was our long-term goal all along. We are going to need innovation and experimentation both on the software and the community levels, but I'm looking forward to the challenge. :)

I am extremely excited about this. I suspect we should proceed trying to reboot Less Wrong, without waiting, while also attempting to aid Arbital in any ways that can help (test users, etc.).

If half-hearted attempts are doomed (plausible), or more generally we're operating in a region where expected returns on invested effort are superlinear (plausible), then it might be best to commit hard to projects (>1 full-time programmer) sequentially.

Does that take into account, for example, Arbital seeming less promising to people / getting less engagement, because all the users have just sunk energy into trying to get by on a revived LW? There's an intuition pump I could make that I haven't fully fleshed out yet, that goes something like: if both Arbital and LessWrong get worked on, then whichever seems more promising or better to use will gain more traction and end up on top in a very natural way, without having to go through an explicit failure of the other one. There are caveats/responses to that as well, of course -- it just doesn't seem 100% clear cut to me.
Exciting stuff! Are you planning to engage with the LW community to figure out what features to implement? I know that Eliezer was heavily involved with Arbital's product management. But I think it's a mistake to make him the BDFL for LW 2.0, because LW 1.0 failed, and this was plausibly due to actions he took. Beware the halo effect: someone can simultaneously be a great blogger and a lousy product manager/forum moderator. I think we should let someone else like Vaniver have a try. If you're planning to engage with the community (which I would strongly recommend--ignoring their userbase is the kind of thing failed startups do), I suggest waiting a bit and then creating a new thread about this, to simulate the effect of a sticky.
Eric R and I read all the comments in this thread. We've also met with multiple people in person to discuss exactly what the platform should look like. So the broad answer is "yes", but if you have a specific mode of engagement in mind, then it might be "no". He is an adviser. There are no plans to make him a BDFL as far as I know. I expect we'll have a public beta ready in two weeks. I plan to write a blog post of my own to explain Arbital in more detail.
Sounds great! Well, if you created a new thread called "Eric and I are taking suggestions for Arbital", I imagine you might get a lot more relevant ideas and feedback :)
I'm very excited to have an Arbital-shaped discussion and writing platform. I've been thinking for a while that I want some of my online writing to become less blog-like and more wiki-like, but I don't actually want to use a wiki because... yeah. Wikis. Arbital seems way better. Is it at the point now where I could start posting some writing/models to it?
Not yet; hence Anna's comment above.
If Arbital provides a solution, then that would be great, but I think it is best to have multiple projects operating at the same time.
Gives us two chances to succeed.
But also weakens both options' ability to be a Schelling point.
I can see value in having LW as a prototype or scratch pad, making simple modifications of existing discussion platforms (e.g. improved moderator powers as discussed above). Then Arbital can do the harder work of building a collaborative truth-seeking platform, adding in features to, for example, support Double Crux, fine-typed voting, or evidence (rather than comments). Perhaps in the end there's a symbiosis, where LW is for discussion, and when a topic comes up that needs truth-seeking it's moved to Arbital. That frees Arbital from having to include a solved problem in its code base.

Successful conversations usually happen as a result of selection circumstances that make it more likely that interesting people participate. Early LessWrong was interesting because of the posts, then there was a phase when many were still learning, and so were motivated to participate, to tutor one another, and to post more. But most don't want to stay in school forever, so activity faded, and the steady stream of new readers has different characteristics.

It's possible to maintain a high quality blog roll, or an edited stream of posts. But with comments, the problem is that there are too many of them, and bad comments start bad conversations that should be prevented rather than stopped, thus pre-moderation, which slows things down. Controlling their quality individually would require a lot of moderators, who must themselves be assessed for quality of their moderation decisions, which is not always revealed by the moderators' own posts. It would also require the absence of drama around moderation decisions, which might be even harder. Unfortunately, many of these natural steps have bad side effects or are hard to manage, so should be avoided when possible. I expect the problem can b... (read more)

Quick note: Having finally gotten used to using discussion as the primary forum, I totally missed this post as a "promoted" post and would not have seen it if it hadn't been linked on Facebook, ironically enough.

I realize this was an important post that deserved to be promoted in any objective sense, but I am not sure promoting things is the best way to do that at this point.

Having the best posts taken away from the area where people can easily see them is certainly a terrible idea, architecture-wise. The solution is what all normal subreddits do: sticky the post and change the color of its title so that it both stands out and sits in the same visual area as everything else.
Promoting posts gets them into the RSS feed. Making it possible to promote Discussion posts, or having promoted posts appear in Discussion also, or some other similar approach seems worthwhile.
I follow the Discussion RSS feed but stopped following the Main RSS feed after Main shut down.
According to Feedly, 96 users are following the discussion RSS and 11k are following the Main RSS. (Feedly is probably not the only place I should be checking to compare those two, but the effect size seems pretty huge. The main problem is missing people who actually check the website every day, but go to discussion/new instead of all/new.)
Hmm. Maybe for short term solutions (until we figure out a way to get promote individual discussion posts while keeping them in discussion), maybe for posts like this: a) create a stub post on Main, which mostly says "we have an important thing to say, check it out in discussion" b) maybe also make a post on Main saying "Main is now deprecated. Apart from major announcements, all stuff will be in Discussion now. Consider updating your RSS. We're also seeing a lot of old timers return to post these days, check it out". etc.
I don't think this will happen with a sufficiently large number of people to make that a good option. I think my current best plan is to keep the sitewide RSS as having only promoted posts, but including promoted posts in Discussion. We can also advertise the Discussion RSS a bit more heavily, but I don't know how many people will want to do that relative to just checking LW.
I don't use Feedly.
Gotcha. Agreed. Do you have any sense of how big a change that is? Sometime after Solstice I can hopefully dedicate more time to hacking on Less Wrong.
I haven't looked at the code that generates the subreddit pages, so not really. It seems like it'd likely be a one-line change in an eligibility function somewhere, but finding that line seems rough.

I think this is completely correct, and have been thinking along similar lines lately.

The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth tracking actually is the norm.

The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.

Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all "arguments you want the community to be aware of" to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise "single point of coordination-marked" posts to LW.)

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)

These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I've posted comments online yesterday, it's hard for me to focus today--there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.

Lots of people have noticed that online discussions sap... (read more)

Regarding the idea that online discussion hurts attention span and productivity, I agree for the reasons you say. The book Deep Work (my review) talks more about it. I'm not too familiar with the actual research, but my mind seems to recall that the research supports this idea. Time Well Spent is a movement that deals with this topic and has some good content/resources.

I think it's important to separate internet time from non-internet time. The author talks about this in Deep Work. He recommends that internet time be scheduled in advance, that way you're not internetting mindlessly out of impulse. If willpower is an issue, try Self Control, or going somewhere without internet. I sometimes find it useful to lock my phone in the mailbox downstairs.

I'm no expert, but suspect that LW could do a better job designing for Time Well Spent.

  • Remove things on the sidebar like "Recent Posts" and "Recent Comments" (first item on Time Well Spent checklist). They tempt you to click around and stay on longer. If you want to see new posts or comments, you could deliberately choose to click on a link that takes you to a new webpage that shows you those things, rather than alway
... (read more)
I've been thinking about Patri's post for a long time, because I've found the question puzzling. The friends of mine who felt similar to Patri then are ones who look to rationality as a tool for effective egoism/self-care, entrepreneurship insights, and lifehacks. They're focused on individual rationality, and improved heuristics for improving things in their own lives fast. Doing things by yourself allows for quicker decision-making and tighter feedback loops; it's easier to tell sooner whether what you're doing works. That's often referred to as instrumental rationality, and the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community, which can then go on to form project teams and build intellectual movements, was instrumental rationality. It's just taken longer to tell if that's succeeded.

Patri's post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is an organization which, along with Superintelligence, is responsible for boosting AI safety into the mainstream. FLI was founded by community members whose meeting originated on LessWrong, so that's value added to advancing AI safety that wouldn't have existed if LW never started. CFAR didn't exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn't get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, like how much money is moved not only through Givewell, but by a foundation with an endowment over $9 billion.

What I've read is the LW community aspiring to do better than science is currently done, in new ways, or to apply rationality to new domains and make headway on its goals. Impressive progress has been made on many community goals.
Gordon Seidoh Worley
I tend to find discussions in comments unhelpful, but enjoy discussions spread out over responding posts. If someone takes the time to write something of length and quality sufficient that they are willing to write it as a top-level post to their blog/etc., then it's more often worth reading to me. My time is valuable, comments are cheap, so I'd rather read things the author invested thought in writing. (I recognize the irony that I'm participating in this discussion right now, but this particular discussion seems an unusually good chance to spread my thinking on this topic.)
If anyone wants to collaborate in tackling the focus problem, send me a personal message with info on how to contact you. Maybe we can get some kind of randomized trial going.
I agree that there should be much more active enforcement of good norms than heavy-handed moderation (banning etc.), but I have a cached thought that lack of such moderation was a significant part of why I lost interest in lesswrong.com, though I don't remember specific examples.

Completely agree. One particularly important mechanism, IMO, is that brains tend to pay substantially more attention to things they perceive other humans caring about. I know I write substantially better code when someone I respect will be reviewing it in detail, and that I have trouble rousing the same motivation without that.
Thinking about this more, I think that moderator status matters more than specific moderator privileges. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen. I suspect one of the reasons people have moved discussions to their own blogs or walls is because they feel like they actually can affect the norms there. Unofficial status works (cf. Eliezer, Yvain) but is not very scalable -- it requires people willing to spend a lot of time writing content as well as thinking about, discussing, and advocating for community norms. I think you, Ben, Sarah etc. committing to posting here makes a lesswrong revival more likely to succeed, and I would place even higher odds if 1 or more people committed to spending a significant amount of time on work such as:

  • Clarifying what type of content is encouraged on Less Wrong, and what belongs in discussion vs. main
  • Writing up a set of discussion norms that people can link to when saying "please do X"
  • Talking to people and observing the state of the community in order to improve the norms
  • Regularly reaching out to other writers/cross-posting relevant content, along with the seeds of a discussion
  • Actually banning trolls
  • Managing some ongoing development to improve site features

Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.

One idea that I had, that I still think is good, is essentially something like the Sunshine Regiment. The minimal elements are:

  1. A bat-signal where you can flag a comment for attention by someone in the Sunshine Regiment.

  2. That shows up in an inbox of everyone in the SR until one of them clicks an "I've got this" button.

  3. The person who took on the post writes an explanation of how they could have written the post better / more in line with community norms.

The basic idea here is that lots of people have the ability to stage these interventions / do these corrections, but (a) it's draining and not the sort of thing that a lot of people want to do more than X times a month, and (b) not the sort of thing low-status but norm-acclimated members of the community feel comfortable doing unless they'r... (read more)
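The minimal Sunshine Regiment mechanics described in the numbered list above can be sketched as a small data structure. This is a hypothetical illustration (class and method names are mine; nothing like this exists in the LW codebase as far as I know):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    comment_id: str
    reporter: str
    claimed_by: Optional[str] = None

class SunshineQueue:
    """A flagged comment appears in every member's inbox
    until one member claims it."""

    def __init__(self, members: List[str]):
        self.members = set(members)
        self.flags: List[Flag] = []

    def raise_signal(self, comment_id: str, reporter: str) -> None:
        # The "bat-signal": anyone can flag a comment for attention.
        self.flags.append(Flag(comment_id, reporter))

    def inbox(self, member: str) -> List[Flag]:
        # Unclaimed flags show up for every Regiment member.
        return [f for f in self.flags if f.claimed_by is None]

    def claim(self, member: str, comment_id: str) -> bool:
        # "I've got this": removes the flag from everyone else's inbox;
        # the claimer then writes the explanation from step 3.
        for f in self.flags:
            if f.comment_id == comment_id and f.claimed_by is None:
                f.claimed_by = member
                return True
        return False
```

The claim step is the load-balancing part: each intervention costs one volunteer's energy rather than everyone's, which addresses the "draining, don't want to do it more than X times a month" problem.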

Happy to join Sunshine Regiment if you can set it up.

Also happy to join. And I'm happy to commit to a significant amount of moderation (e.g. 10 hours a week for the next 3 months) if you think it's useful.

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content. Probably too meta / too clever an idea, but may be worth some individual brainstorms?
I've been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. For one thing, effective altruism is mediated primarily through in-person communities and social media, so most of the drama occurring in EA occurs there, and works itself out before it gets to the EA Forum. Still, the EA Forum seems to have a high level of quality content without as much active moderation being necessary. The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition and transhumanism, all that and every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it's far and away host to the highest-quality content in the EA community.

So, if anyone else here also finds that to be the case: what makes EA unlike LW in not needing as many moderators on its forum? (Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA leads people to post more detailed writing.)

[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.
Some guesses:

  • The EA forum has less of a reputation, so knowing about it selects better for various virtues
  • Interest in altruism probably correlates with pro-social behavior in general, e.g. netiquette
  • The EA forum doesn't have the "this site is about rationality, I have opinions and I agree with them, so they're rational, so I should post about them here" problem

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

I don't think that a reboot/revival of LW necessarily has to consist entirely of the people who were in the community before. If we produce good stuff, we can attract new people. A totally new site with new branding might get rid of some of the negative baggage of the past, but is also less likely to get off the ground in the first place. Making use of what already exists is the conservative choice.

I hear you as saying that people here should focus on learning rather than leadership. I think both are valuable, but that there's a lack of leadership online, and my intuition is to trust "forward momentum", carrying something forward even if I do not think I am optimally qualified. He who hesitates is lost, etc.

I see Anna making the same complaint that you yourself have made a few times: namely, that most online discussions are structured in a way that makes the accumulation of knowledge difficult. (My explanation: no one has an incentive to fix this.)

Is the fact that economists mostly cite each other evidence of "cultish in-group favoring biases"? Probably to some degree. But this hasn't fatally wounded economics.

"most online discussions are structured in a way that makes the accumulation of knowledge difficult." It's a different kind of conversation, but I've been trying to improve on this problem by developing a "debate mapping" website, where conversation is structured in tree form: claims, with arguments underneath that support or oppose each claim, recursively. This is the website if you're interested: https://debatemap.live
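For readers curious about the shape of such a site's data, here is a minimal sketch of the recursive claim/argument tree described above. The names and rendering are hypothetical illustrations, not Debate Map's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A claim, or an argument supporting/opposing its parent claim."""
    text: str
    supports: bool = True  # does this node support its parent?
    children: List["Node"] = field(default_factory=list)

    def add(self, text: str, supports: bool = True) -> "Node":
        """Attach a child argument and return it, so chains can nest."""
        child = Node(text, supports)
        self.children.append(child)
        return child

    def render(self, depth: int = 0) -> List[str]:
        """Indented outline: '+' marks support, '-' marks opposition."""
        sign = "" if depth == 0 else ("+ " if self.supports else "- ")
        lines = ["  " * depth + sign + self.text]
        for c in self.children:
            lines.extend(c.render(depth + 1))
        return lines

root = Node("LW should be revived as a central discussion hub")
root.add("Centralization enables accumulated, common-knowledge arguments")
con = root.add("The diaspora already spreads good content", supports=False)
con.add("Spreading is not the same as common knowledge", supports=False)
print("\n".join(root.render()))
```

The point of the recursive structure is that any argument is itself a claim that can be supported or opposed, so disagreements get localized to a specific node instead of sprawling through a flat comment thread.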
Glad to see you're working on this; it looks pretty nice! I think the bottleneck for efforts like this is typically marketing, not code. (Analogy: if you want to found a city, the first step is not to go off alone into the wilderness and build a bunch of houses.) I think I've seen other argument-mapping sites, and it seems like every few months someone announces a new & improved discussion website on SlateStarCodex (which then proceeds to not get traction). I suspect the solution is to form a committee/"human kickstarter" of some kind, so that everyone who's interested in this problem can coordinate to populate the same site simultaneously. For a project like yours that already has code, the best approach might be to join forces with a blogger who already has traffic, or a discussion site that already has demand for a debate map, or something like that.
The behaviour of the Austrian School certainly is.
I have similar uncertainty about the large-scale benefits of lesswrong.com, but on smaller scales I do think the site was very valuable. I've never seen a discussion forum as polite, detailed, charitable, & rigorous as the old Less Wrong.
Spot on in my opinion, and one of the many points I was trying to get at with the 2016 LW Survey. For example, this community seems to have basically ignored Tetlock's latest research, relegating it to the status of a "good book" that SSC reviewed. I wish I'd included a 'never heard of it' button on the communities question because I suspect the vast majority of LessWrongers have never heard of the Good Judgement Project. I've long felt that Eliezer Yudkowsky's sequences could use somebody going over them with a highlighter and filling in the citations for all the books and papers he borrowed from.
This happened, FYI, in the sequences ebook.

"It is dangerous to be half a rationalist."

It is dangerous to half-arse this and every other attempt at recovering lesswrong (again).

I take into account the comments before mine, which accurately mention several reasons for the problems on LW.

The codebase is not that bad. I know how many people have looked at it, and it's reasonably easy to fix. I even know how to fix it, but I personally lack the coding skill to implement the specific changes. We have no volunteers willing to make changes, and no funds to pay someone to make them. Trust me: I collated all the comments from all of the several times we have tried to collect ideas. We are, unfortunately, busy people, working on other goals and other projects.

I think you are wrong about the need for a single Schelling point, and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. LessWrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy, it makes its way around the network and is added to our history.

A year from now, Crony Beliefs may not be easy to find on LessWrong, because it was never explicit...

I think you are wrong about the need for a single Schelling point, and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. LessWrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy, it makes its way around the network and is added to our history.

So: this is subtle. But to my mind, the main issue isn't that ideas won't mostly-percolate. (Yes, lots of folks seem to be referring to Crony Beliefs. Yes, Moloch. Yes, etc.) It's rather that there isn't a process for: creating common knowledge that an idea has percolated; having people feel empowered to author a reply to an idea (e.g., pointing out an apparent error in its arguments) while having faith that, if their argument is clear and correct, others will force the original author to eventually reply; creating a common core of people who have a common core of arguments/analysis/evidence they can take for granted (as with Eliezer's Sequences); etc.

I'm not sure how to fully explicitly model it. But it's not mostly about the odds that a given post will spread (let's call that probability "p"). It's more about...

I think I understand what you mean. On one hand, it is great to have this fluid network of rationalist websites where everyone chooses the content they prefer to read. We don't have a single point of failure. We can try different writing styles, different moderation styles, etc. The rationalist community can survive and generate new interesting content even when LW is dying and infested by downvoting sockpuppets, and Eliezer keeps posting kitten videos on Facebook (just kidding).

On the other hand, it is also great to have a shared vocabulary: a list of words I can use freely without having to explain them. Because inferential distance is a thing. (For example, LW allows me to type "inferential distance" without having to explain it. Maybe I could just use a hyperlink to the origin of the term. But doing that outside of LW includes a risk of people starting to debate the concept of "inferential distance" itself, derailing the discussion.) The opposite of public knowledge is the Eternal September.

Maybe "Moloch" is an example showing that meaningful terms will spread across rationalist websites. (Natural selection of rationalist memes?) Maybe hyperlinking the original source is all it takes; linking to SSC is not more difficult than linking to the LW Sequences, or to Wikipedia. That is, assuming the concept is clearly explained in one self-contained article. Which is not always the case.

Consider "motte and bailey". I consider it a critical rationalist concept, almost as important as "a map is not the territory". (Technically speaking, it is a narrower version of "a map is not the territory".) I believe it helps me see more clearly through most political debates, but it can also be applied outside of politics. And what is the canonical link? Oh, this. So, imagine that I am talking with people who are not regular SSC readers, and we are debating something either unrelated to politics, or at least unrelated to the part of politics the SSC article talks about, but somehow...
No, this: http://philpapers.org/archive/SHATVO-2.pdf
My problem with the "shared vocabulary" is that, as you note yourself here, it implies that something has already been thought through, and it assumes that you have understood the thing properly. So for example, if you reject an argument because "that's an example of a motte and bailey fallacy", this only works if it is in fact correct to reject arguments for that reason. And I don't think it is.

One reason people use a motte and bailey is that they are looking for some common ground with their interlocutor. Take one of Scott's examples, with this motte and bailey:

1. God is just the order and love in the universe.
2. God is an extremely powerful supernatural being who punishes my enemies.

When the person asserts #1, it is not because they do not believe #2. It is because they are looking for some partial expression of their belief that the other person might accept. In their understanding, the two statements do not contradict one another, even though obviously the second claims a good deal more than the first.

Now Scott says that #1 is "useless," namely that even if he could theoretically accept the word "God" as applying to this, there is no reason for him to do so, because there is nowhere to go from there. And this might be true. But the fact that #2 is false does not prove that #1 is useless. Most likely, if you work hard, you can find some #3, stronger than #1 but weaker than #2, which will also be defensible. It would be right to tell them to do the work that is needed; but it would be wrong to simply say, "Oh, that's a motte and bailey" and walk away.

This is not merely a criticism of this particular bit of shared vocabulary, such that it would just be a question of getting the right shared vocabulary. A similar criticism applies to virtually any possible piece of shared vocabulary: you are always assuming things just by using the vocabulary, and you might be wrong in those assumptions.
Making shared vocabulary common and explicit usually makes it faster to iterate. For example, the EA community converged on the idea of replaceability as an important heuristic for career decisions for a while, and then realized that they'd been putting too much emphasis there and explicitly toned it down. But the general concept had been floating around in discussion space already, giving it a name just made it easier to explicitly think about.
I think I agree with this in one sense and disagree in another, in particular in regard to "giving it a name just made it easier to explicitly think about":

1. I agree that this makes it easier to reason about, and therefore you might come to conclusions faster and so on, even correctly.
2. I don't agree that we really made it easier to think about. What we actually did is make it less necessary to think about it at all in order to come to conclusions. You can see how this works in mathematics, for example: one of the main purposes of the symbols is to abbreviate complicated concepts so that you don't have to think through them every time they come up.

I think the second point here is also related to my objection in the previous comment. However, the first point might be more important overall, so that the benefit outweighs the costs, especially in terms of benefit to a community.
What are you using this word to mean? At a guess it sounds like "ideas will float to the surface", but it does not always mean that, as used in "has percolated". Percolating relates to filtering a substance like coffee, to get the good bits from the bad. Can you repeat the above without using this word? Are we looking to separate and elevate good ideas from the general noise on the interwebs, or are we looking to ensure ideas filter through the diaspora to every little subgroup that exists? Or are we looking to filter something else? I am not sure which you are trying to describe.

----------------------------------------

If you want to reference an earlier post that is well known and widespread, it should be enough to name the concept, i.e. Crony Beliefs. If you want to reference a less well-known concept, it should be enough to name the author and link to their post, as if I wanted to refer to the list of common human goals and talk about things that relate to it. I don't see the gravity of the problem you are trying to describe with your concerns.

I don't think you can say both

The codebase is not that bad.


I am personally without the coding skill [...]

If I don't have the skills to fix a codebase, I'm pretty handicapped in assessing it. I might still manage to spot some bad things, but I'm in no shape to pronounce it good, or "not that bad".

Clarification: I am not a coder any more. I had skill in a few languages, but I can't write code any more; mostly I Frankenstein my own Arduino projects out of other people's projects. This means I can now read code and understand it, but not write it.

It's not that bad because I read every line of the codebase to get my head around how it works. It's not that bad because, when I was trying to explain a fix, I could come up with the code for it: https://github.com/tricycle/lesswrong/issues/574. I just can't check my work or create a pull request. It's not that bad in that it still definitely works, does not crash very often, doesn't have security leaks despite having an open codebase, and is readable to someone with very little coding skill.
For a person familiar with Python, reading most of the code, and even suggesting changes, is relatively easy. It's just running the whole codebase on their own computer that is almost impossible. But that means that when you write code, you can't see it in action, which means you can't test it, which means that if you made a trivial error, you cannot find and fix it. You can't debug your code, you can't print intermediate values; you get zero feedback for what you did. Which means the contribution is practically useless... unless someone else, who can run the whole thing on their computer, looks at your code and finishes it. If you need multiple iterations of this, then work that would otherwise be done in an afternoon may take weeks. That's an inconvenience far beyond trivial.

It's true that articles pass around the rationalist network, and if you happen to be in it, you're likely to see some such articles. But if you have something that you'd specifically want the rationalist community to see, and you're not already in the network, it's very hard.

Some time back, I had a friend ask me how to promote their book which they thought might be of interest to the rationalist community. My answer was basically "you could start out by posting about it on LW, but not that many people read LW anymore so after that I can help you out by leveraging my position in the community". If they didn't know me, or another insider, they'd have a lot harder time even figuring out what they needed to do.

"The rationalist network" is composed of a large number of people and sites, scattered over Tumblr blogs, Facebook groups and profiles, various individual blogs, and so on. If you want to speak to the whole network, you can't just make a post on LW anymore. Instead you need to spend time figuring out who the right people are, get to know them, and hope that you either get into the inner circle, or that enough insiders agree with your message and take up sprea...

I'm disappointed that Elo's comment hasn't gotten more upvotes. He put a lot of work into fixing LW, and it seems to me that we should be very eager to listen & learn from him. (I'm also disappointed that rayalez's comments are being ignored. His previous comment about his project was at -1 until I upvoted it. Seeing this kind of thing makes me cynical. Sometimes it seems like status in the LW community is more about who you know than what you've accomplished or what you're doing for the community.) Arbital seems like the least half-arsed effort at fixing LW thus far. Maybe we should converge around advising Alexei & team?

I'm disappointed that Elo's comment hasn't gotten more upvotes

I think it's got rather a lot of upvotes. It's also got rather a lot of downvotes. I suspect they are almost all from the same person.

Hmm, in that if you forget the name but remember an example from the post, you won't be able to search for it, because the LW page only has the title and comments, as opposed to the full text?
Yes, and if someone wants to join and get up to speed by reading the Sequences and general Discussion posts on LW, they won't ever read Crony Beliefs, because it was never posted here except as a link post.
It seems to me like durable concepts are referred to frequently, and the typical behavior is to link to the source when using a jargony term, so I'm not too worried about that.

Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.

Prediction: If they do, we will see a substantial pickup in discussion here. If they don't, we won't.

People go where the content is. The diaspora left LW a ghost town not because nobody liked LW but because all the best content -- which is ever and always created by a relatively small number of people -- went elsewhere. I read SSC, and post on SSC, not because it is better than LW (it's not, its interface makes me want to hit babies with concrete blocks) but because that's where Yvain writes. LW's train wreck of a technical state is not as much of a handicap as it seems.

I like LW-ish content, so I approve of this effort -- but it will only work to the extent that the Royals return.

Thanks for addressing what I think is one of the central issues for the future of the rationalist community.

I agree that we would be in a much better situation if rationalist discussion were centralized, and that we are instead in a tragedy of the commons -- more people would post here if they knew that others would. However, I contend that we're further from that desired equilibrium than you acknowledge. Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

  1. Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
  2. Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. This is mostly because of (1). (Although posts on LW do reliably get at least a handful of comments and views)
  3. Comments on LessWrong are more critical and less polite than comments on other sites.
  4. Posts on LessWrong are held in lower regard in academic communities, like ML and policy, than posts elsewhere, including on Medium.

The incentive that pushes in our favor...

Thoughts on RyanCarey's problems list, point by point:

Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

Not sure all of them are "problems", exactly. I agree that incentive gradients matter, though.

Comments on the specific "problems":

1 Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.

Insofar as 1 is true, it seems like a genuine and simple bug that is probably worth fixing. Matt Graves is I believe the person to talk to if one has ideas or $ to contribute to this. (Or the Arbital crew, insofar as they're taking suggestions.)

2 Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. [snip]

The extent to which this is a bug depends on the extent to which posts are aimed at "going viral" / getting shared. If our aim is intellectual generativity, then we do want to attract the best minds of the internet to come think with us, and that does require sometimes having posts g...

Yep, message me about this, either here or by email (this username at gmail).

(ii) seems good, and worth adding more hands and voices to; it seems to me we can do it in a distributed fashion, and just start adding to LW and going for momentum, though.

sarahconstantin and some others have in fact been doing something like (ii), which I suspect was a partial cause of e.g. this post of mine, and of:

Efforts to add to (ii) would I think be extremely welcome; it is a good idea, and I may do more of it as well.

If anyone reading has a desire to revitalize LW, reading some of these or other posts and adding a substantive (or appreciative) comment is another way to encourage thoughtful posting.

I also support (ii) and have been trying to recruit more good bloggers.

I'll note that good writers tend to be low on "civic virtue" -- creative work tends to cut against that as a motivation. I'm still trying to think of good ways to smooth the incentive gradient for writers.

One possibility is to get some people to spend a weekend together -- rent a place in Big Sur or something -- and brainstorm/hype up some LW-specific ideas together, which will be posted in real time.

This sounds like an excellent idea.