On the importance of Less Wrong, or another single conversational locus

by AnnaSalamon · 27th Nov 2016 · 3 min read · 365 comments

Tags: Conversation (topic), Public Discourse, Intellectual Progress (Society-Level), Intellectual Progress via LessWrong, Community
Epistemic status: My actual best bet.  But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated.  And/or you should help me explicate it.

It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better. [2]

  4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

  5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

  6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)


It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place[3] may be a viable locus again.  I will try to post and comment here more often, at least for a while, while we see if we can get this going.  Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.

I suspect that most of the value generated by a single shared conversational locus is not captured by the individual who generates it: a "conversation" with better structural integrity / more coherence creates much value, but that value is pretty widely distributed.  Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort.  (At least if we can build up toward in fact having a single locus.)

If you believe this is so, I invite you to join with us.  (And if you believe it isn't so, I invite you to explain why, and to thereby help explicate a shared body of arguments as to how to actually think usefully in common!)



[1] By track record, I have in mind most obviously that AI risk is now relatively credible and mainstream, and that this seems to have been due largely to (the direct + indirect effects of) Eliezer, Nick Bostrom, and others who were poking around the general aspiring rationality and effective altruist space in 2008 or so, with significant help from the extended communities that eventually grew up around this space.  More controversially, it seems to me that this set of people has probably (though not indubitably) helped with locating specific angles of traction around these problems that are worth pursuing; with locating other angles on existential risk; and with locating techniques for forecasting/prediction (e.g., there seems to be similarity between the techniques already being practiced in this community, and those Philip Tetlock documented as working).

[2] Again, it may seem somewhat hubristic to claim that a relatively small community can usefully add to the world's analysis across a broad array of topics (such as the summed topics that bear on "How do we create an existential win?").  But it is generally smallish groups (rather than widely dispersed millions of people) that can actually bring analysis together; history has often involved relatively small intellectual circles that make concerted progress; and even if things are already known that bear on how to create an existential win, one must probably still combine and synthesize that understanding into a smallish set of people that can apply the understanding to AI (or what have you).

It seems worth a serious try to see if we can become (or continue to be) such an intellectually generative circle; and it seems worth asking what institutions (such as a shared blogging platform) may increase our success odds.

[3]  I am curious whether Arbital may become useful in this way; making conversation and debate work well seems to be near their central mission.  The Effective Altruism Forum is another plausible candidate, but I find myself substantially more excited about Less Wrong in this regard; it seems to me one must be free to speak about a broad array of topics to succeed, and this feels easier to do here.  The presence and easy linkability of Eliezer's Less Wrong Sequences also seems like an advantage of LW.

Thanks to Michael Arc (formerly Michael Vassar) and Davis Kingsley for pushing this/related points in conversation.


Comments

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place; his posts are its central material; he punctuated its existence with his explosions (and refusal to apologise); and then he upped and left the community without actually acknowledging that his experiment (well-kept gardens, etc.) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough I recently saw it linked from the front page of realclearpolitics.com

[...]

Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting Less Wrong on multiple fronts.

alyssavance: Seconding Anna and Satvik.

sarahconstantin: I also vote for Vaniver as BDFL.

Alexandros: Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who" but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.

I'm empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift. (E.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw, who runs Trike Apps.)

I like the idea of granting domain ownership if we in fact go down the BDFL route.

Alexandros: That's awesome. I'm starting to hope something may come of this effort.

Lumifer: An additional point is that you can only grant the DFL part. The B part cannot be granted but can only be hoped for.

Alexandros: An additional additional point is that the dictator can indeed quit and is not forced to kill themselves to get out of it. So it's actually not FL. And in fact, it's arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous to make a point. What's yours?

ChristianKl: The person who owns the website doesn't need consent of the people who visit the website to make changes to the website.

Lumifer: Funny how I didn't notice anyone become outraged. And, of course, BDFL's powers do NOT depend on the consent of the governed -- it's just that the governed have the ability to exit. As to the point, it's merely a reminder of the standard trade-off with dictator-like rulers. They are like a little girl: When she was good / She was very, very good / And when she was bad she was horrid.

John_Maxwell: Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect [https://medium.com/the-mission/this-is-why-i-never-hire-product-managers-c321ca96505b].)

SatvikBeri: I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team. That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred. Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.

John_Maxwell: OK, I vote for Satvik as the person to choose who the BDFL is :D

Alexandros: I've done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io), and can offer some help in re-imagining the vision behind LW.

namespace: I'm concerned that we're only voting for Vaniver because he's well known, but I'll throw in a tentative vote for him. Who are our other options?

Viliam: Also because he already is a moderator (one of a few moderators), so he already was trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he already did some useful things while moderating.

btrettel: I'll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the "villains" of decision making in a book on decision making he suggested to me, Decisive [https://www.amazon.com/Decisive-Make-Better-Choices-Life/dp/0307956393].) Plus, I scanned this thread and I haven't seen Vaniver say he is okay with such a role.

Vaniver: I do agree; one of the reasons why I haven't accepted yet is to give other people time to see this, think about it, and come up with other options. (I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)

casebash: It would be good to know what he thinks the direction of LW should be, but I would really like to see a new BDFL.

Viliam: I agree, assuming that "technical admin powers" really include access to everything he might need for his work (database, code, logs, whatever).

RyanCarey: I agree that Vaniver should be.

moridinamael: I concur with placing Vaniver in charge. Mainly, we need a leader and a decision maker empowered to execute on suggestions.

ChristianKl: Having a BDFL would be great. Vaniver seems to be a good candidate.

philh: Throwing in another vote for Vaniver.

hairyfigment: I have reservations about this [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/dimf], especially the weird 'for life' part.

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more.
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a prob
[...]
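One of the bullets above -- votes from highly-voted users counting for more -- could be sketched roughly as follows. The logarithmic scaling and the exact numbers are my assumptions, not anything the post specifies; this is a minimal illustration of the idea, not a proposed implementation:

```python
import math

def vote_weight(voter_karma: int) -> float:
    """Hypothetical weighting: a vote counts for more as the voter's
    karma grows, but only logarithmically, so that high-karma users
    can't dominate outright."""
    return 1.0 + math.log10(max(voter_karma, 1))

def score(votes):
    """votes: list of (direction, voter_karma) pairs, where
    direction is +1 for an upvote and -1 for a downvote."""
    return sum(d * vote_weight(k) for d, k in votes)

# A 1000-karma user's upvote (weight 4.0) outweighs two
# downvotes from brand-new accounts (weight 1.0 each).
print(round(score([(+1, 1000), (-1, 1), (-1, 1)]), 6))  # 2.0
```

Any monotone damping function would do here; the log is just one way to keep the weighting from turning the site into a pure oligarchy of old accounts.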
casebash: "Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. 'agree with the conclusion but disagree with the argument', or 'accurate points, but ad-hominem tone'." - this seems complex and better done via a comment.

For the Russian LessWrong slack chat we agreed on the following emoji semantics:

  • :+1: means "I want to see more messages like this"
  • :-1: means "I want to see less messages like this"
  • :plus: means "I agree with a position expressed here"
  • :minus: means "I disagree"
  • :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
  • :delta: means "I have changed my mind/updated"

We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.

It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
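The scheme above is small enough to sketch directly: a per-message tally keyed by reaction name, with the stated meanings kept alongside so the semantics stay discoverable. The reaction names mirror the Russian LW Slack list quoted above; the `Message` class and everything else is my own illustrative scaffolding:

```python
from collections import Counter

# Reaction semantics as described for the Russian LessWrong Slack.
MEANINGS = {
    "+1":    "I want to see more messages like this",
    "-1":    "I want to see fewer messages like this",
    "plus":  "I agree with a position expressed here",
    "minus": "I disagree",
    "same":  "it's the same for me (no approval connotation)",
    "delta": "I have changed my mind/updated",
}

class Message:
    def __init__(self, text: str):
        self.text = text
        self.reactions = Counter()  # reaction name -> count

    def react(self, emoji: str):
        if emoji not in MEANINGS:
            raise ValueError(f"unknown reaction: {emoji}")
        self.reactions[emoji] += 1

msg = Message("We should weight votes by karma.")
msg.react("+1")
msg.react("plus")
msg.react("delta")
print(msg.reactions["+1"], msg.reactions["delta"])  # 1 1
```

The point of separating "+1"/"-1" (visibility) from "plus"/"minus" (agreement) is exactly the distinction the parent comment wanted out of a refined karma system: you can signal "more like this, though I disagree" in two one-click reactions.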

Mati_Roy: Shared here: What reacts do you want to be able to give to posts? (emoticons, cognicons, and more) [https://www.lesswrong.com/posts/LnxWsDayQ8bBYtuz2/what-reacts-do-you-to-be-able-to-give-to-posts-emoticons?commentId=RDsTitpNxZzgfcWdD]

oooo: This Slack-specific emoji capability is akin to Facebook Reactions; namely, a wider array of aggregated post/comment actions.

btrettel: Some sort of emoticon could work, like what Facebook does. Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don't want to take the time to provide a quick comment, but personally I think that's silly, as a 10-second comment could help a lot in many cases. If there is a possibility for a 1-second feedback method that allows a little more information than up or down, I think it's worth trying.

Sniffnoy: I'm reminded of Slashdot. Not that you necessarily want to copy that, but that's some preexisting work in that direction.

btrettel: This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook [http://predictionbook.com/], Metaculus [http://www.metaculus.com/], and Good Judgement Open [https://www.gjopen.com/] come to mind immediately, and that's not considering the various prediction markets too. I've started writing a command line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy, so integration with LessWrong should not be particularly difficult. (The API for Metaculus is not documented, best I can tell, but by snooping around the code you can figure things out...)
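The "show a user's track record next to their karma" idea needs some scoring rule for resolved predictions. A Brier score is one natural choice (my choice, not anything stated in the thread); the data below stands in for whatever a real PredictionBook/Metaculus integration would fetch, since I'm not assuming anything about those sites' actual APIs:

```python
def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it didn't.
    Lower is better; always guessing 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical resolved predictions, as an integration might fetch them.
user_predictions = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1)]
print(round(brier_score(user_predictions), 3))  # 0.125
```

A profile badge could then display this number alongside karma, which is the "track record in addition to upvotes/downvotes" the parent comment asks for.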
gucciCharles: On that topic, how do you upvote? I've never been able to figure it out. I can't find any upvote button. Does anyone know where the button is?

arundelo: It's a thumbs-up in the lower left corner of a comment or post (next to a thumbs-down). If you don't see it, it may be that they've turned off voting for new or low-karma accounts.

gucciCharles: Ya, that must be it. I've been on here for like 3 years (not with this account though) but only after the diaspora. Really excited that things are getting posted again. One major issue with such a system is that I now feel pressure to post popular content. A major feature of this community is that nothing is dismissed out of hand. You can propose anything you want so long as it's supported by a sophisticated argument. The problem with only giving voting privileges to >x karma accounts is that people, like myself, will feel a pressure to post things that are generally accepted. Now to be clear, I'm not opposed to such a filter. I've personally noticed that, for example, slatestarcodex doesn't have the same consistently high quality comments as lesswrong. For example, people will have comments like "what's falsification?" etc. So I acknowledge that such a filter might be useful. At the same time, however, I'm pointing out one potential flaw with such a filter: that it lends itself to creating an echo chamber.

ESRogs: Could you say more about what you have in mind here?

Venryx: Maybe something like this? https://debatemap.live (note: I'm the developer of it)

I think you're right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.

  1. I do think it's important to have someone clearly "running the place". A BDFL, if you like.

  2. Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.

  3. Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.

  4. Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.

  5. I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, have almost completely stopped commenting for quite a while now, because doing so is no longer rewarding.

doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).

This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. When I say "the basics", I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.

There were occasional interesting and original objections to the basics. I can't find it now, but there was an interesting series of posts responding to this post of mine on Savage's theorem; this response argu... [...]

TheAncientGeek: When it's useful, it's useful; when it's damaging, it's damaging. It's damaging when the sequences don't actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It's just too easy to silently downvote, or write "read the sequences". In an alternative universe there is a LW where people don't RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that's where the damage is coming from. Unfortunately, although all of that is fixable, it cannot be fixed without "debating philosophy". ETA: Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That's a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don't revise doctrine.

Alexei: Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.

TheAncientGeek: I'm not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can't be done without community effort, and without a rethink of the status of the sequences.
gwillen: I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?").

John_Maxwell: Reputations seem to be [https://smile.amazon.com/Glass-Jaw-Manifesto-Defending-Reputations/dp/1455582972/] very fragile on the Internet. I wonder if there's anything we could do about that? The one crazy idea I had was (rot13 [http://rot13.com/]'d so you'll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.

namespace: Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh'yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
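For anyone who wants to read the rot13'd ideas above (after guessing their own), the transform is exactly what rot13.com applies, and Python ships it as a standard codec; the sample strings below are mine, not a decoding of the comments:

```python
import codecs

def rot13(text: str) -> str:
    """ROT13 rotates each letter 13 places; since 2*13 = 26,
    the cipher is its own inverse."""
    return codecs.decode(text, "rot13")

print(rot13("uryyb"))            # hello
print(rot13(rot13("any text")))  # any text
```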
TheAncientGeek: As opposed to what? Memorising the One True Philosophy?
Vaniver: The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue-resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.

TheAncientGeek: How many problems has the second sort solved? Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Vaniver: Too many for me to quickly count? Yes. It seems to me that both of those factors drive discussions, that most conversations about philosophical problems can easily be classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural from those where it is manufactured. I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.

Kaj_Sotala: Name three [http://lesswrong.com/lw/bc3/sotw_be_specific/], then. :)

Vaniver: What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not: how to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.

TheAncientGeek: Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question: What causality is, is the correct question. Whether humans have the power of free will is the correct question. Whether human moral intuitions are correct is the correct question.

Vaniver: Oh, if you count that one as a question, then let's call that one solved too. Disagree; I think this is what it looks like to get the question of where the perception comes from wrong. Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

TheAncientGeek: Solved where? How can philosophers be systematically wrong about the nature of their questions? And what makes you right? Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that. If moral intuitions come from God, that might underpin correctness, but things are much less straightforward in naturalistic explanations.
5Vaniver4yOn one level, by the study of dynamical systems and the invention of differential equations. On a level closer to what you meant when you asked the question, most of the confusing things about 'causality' are actually confusing things about the way our high-level models of the world interact with the world itself. The problem of free will is a useful example of this. People draw this picture that looks like [universe] -> [me] -> [my future actions], and get confused, because it looks like either determinism (the idea that [universe] -> [my future actions] ) isn't correct or the intuitive sense that I can meaningfully choose my future actions (the idea that [me] -> [my future actions] ) isn't correct. But the actual picture is something like [universe: [me] -> [my future actions] ]. That is, I am a higher-level concept in the universe, and my future actions are a higher-level concept in the universe, and the relationship between the two of them is also a higher-level concept in the universe. Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn't a real conflict between them. (The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism; if I had perfect self-knowledge, I wouldn't have any uncertainty about my future actions, but I don't have perfect self-knowledge. It also comes from the relative importance of decision-making as a 'natural concept' in the whole 'being a human' business.) And so when philosophers ask questions like "When the cue ball knocks the nine ball into the corner pocket, what are the terms of this causal relation?" (from SEP [https://plato.stanford.edu/entries/causation-metaphysics/]), it seems to me like what they're mostly doing is getting confused about the various levels of their models, and mistaking properties of their models for properties of the territory. 
That is, in the territory, the wavefunction of the universe upd
1TheAncientGeek4yNope. On most of the detailed questions a philosopher might want to ask about causality , physics comes down firmly on both sides. Physics is not monolothic. Does causality imply determinism?(In)determinism is an open question in physics. Note that "differential equations" are used in both classical (deterministic by most accounts) and quantum (indeterminstic by most accounts) physics. Must causes precede effects?Perhaps not, if timeless physics, or the theory of closed timelike curves, is correct. Is causality fundamental?It is in causal dynamic triangulation, and a few other things. otherwise not. Which may be true or false depending on whatever "meaningfully" means. If "meaningful" means choosing between more than one possible future, as required by libertarian free will, then determinism definitely excludes meaningful choice, since it excludes the existence of more than one possible future. The main problem here is vagueness: you didn't define "free will" or "meaningful". Philosophers have known for a long time that people who think free will is compatible with determinism are defining it one way, and people who think it is not are defining it another way. If you had shown that the libertarian version of free will is compatible with determinism, you would have shown something momentous , but you actually haven't shown anything because you haven't defined "free will" or "meaningful". Incidentally, you have also smuggled in the idea that the universe actually is, categorically, deterministic. (Compatibilism is usually phrased hypothetically). As noted, that is actually an open question. Explaining the feeling of having free will, is a third definition, something different yet again. 
You don't see much about it in mainstream philosophical literature because the compatibility between a false impression of X and the non-existence of X is too obvious to be worth pointing out -- not because it is some great insight that philosophers have never had because they are
1Viliam4yAt the human level, it is irrelevant whether quantum physics is lawfully deterministic or lawfully follows a quantum random number generator. It is still atoms bouncing according to equations, except that in one case those equations include the computation of a random number. If every atom secretly holds a coin that it flips whenever it bounces off another atom, from the human level it makes no difference. People are often mesmerized by the word "indeterministic", because they interpret it as "that means magic is possible, and my thoughts could actually be changing the physical events directly". But that absolutely doesn't follow. If the atom flips a coin whenever it bounces off another atom, that is still completely unrelated to the content of my thoughts. Quantum experiments that show how particles follow some statistical patterns when moving through two slits still don't show any connection between the movement of the particle and human thought. So this is all a huge red herring. If you don't understand why it is completely irrelevant for debating human "free will" whether the atom flips a truly random coin when bouncing off another atom, or only follows a computation that doesn't include a random coin, then you are simply confused about the topic. Maybe this will help: Imagine that a master has two slaves. The first slave receives the command "today, you will pick cotton the whole day". The second slave receives the command "today in the morning, your foreman will flip a coin -- if it lands heads, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day". Is the second slave any more "free" than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman, instead of a coin, uses a quantum device and sends an electron through two slits, does that make the difference?)
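[Editorial illustration] The two-slave example takes only a few lines to make concrete; the names and the 50/50 probability are just illustrative. The point of the sketch is that inserting a random coin between command and action changes nothing about who is doing the choosing:

```python
import random

def slave_one():
    # Deterministic command: always pick cotton.
    return "pick cotton"

def slave_two(rng):
    # Indeterministic command: the foreman flips a coin first.
    return "pick cotton" if rng.random() < 0.5 else "clean stables"

rng = random.Random(0)  # seeded so the demo is reproducible
actions = [slave_two(rng) for _ in range(1000)]

# The second slave's day is unpredictable in advance -- but the
# unpredictability lives in the coin, not in anything the slave decides.
print(slave_one(), actions.count("pick cotton"))
```

Neither function contains anything recognizable as a choice by the slave; the second merely delegates the selection to a random source.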
2TheAncientGeek4yPerhaps laypeople are that confused, but what we are talking about is Yudkowsky versus professional philosophy. Philosophers have come up with a class of theory called "naturalistic libertarian free will", which is based on appealing to physical indeterminism to provide a basis for free will, without appeals to magic (e.g. Robert Kane's). You speak as though your thoughts are distinct from the physical behaviour of your brain... but you don't actually believe that. Plugging in your actual belief that thoughts are just a high-level description of fine-grained neural processing, the question of FW becomes the following: "How can a physical information-processing system behave in a way that is, seen from the outside, indeterministic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive?" (i.e. from the outside we might want to preserve the validity of "X did Y because they thought it was a good idea", but only as a high-level description, and without thoughts appearing in the fundamental ontology). That is the problem that naturalistic FW addresses. Do the reading I've done before calling me confused. You guys would sound a lot more rational if you could get into the habit of saying "I know of no good argument for Y" instead of "Y is wrong and anyone who believes it is an idiot". The usual fallacy: you are assuming that the coin flip is in the driving seat, but actually no part of the brain has to act on any particular indeterministic impulse. If an algorithm contains indeterministic function calls embedded in deterministic code, you can't strip out the deterministic code and still be able to predict what it does.
0Viliam4yMore like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts and how the atoms in my brain flipped their coins. Somewhat related to this part in "The Generalized Anti-Zombie Principle [http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/]": My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the "content of my thoughts" is implemented by the positions and movements of various atoms in my brain), but there is still no meaningful correlation. It's not like thinking "I want to eat the chocolate cake now" systematically shifts the related atoms in my brain to the left side, and thinking "I want to keep the chocolate cake for tomorrow" systematically shifts the related atoms in my brain to the right side. If the atoms in my brain received different results from flipping their coins, could it change the content of my thoughts? Sure. Some thought impulses carried by those atoms could arrive a few nanoseconds sooner, some a few nanoseconds later; some could be microscopically stronger or microscopically weaker. According to chaos theory, at some later moment an imaginary butterfly in my mind could flap its wings differently, and that could make the difference in whether my desire to eat the cake wins over the plan to put it in the fridge, if the desires are sufficiently balanced. On the other hand, the greater the imbalance between these two desires (and the shorter the time interval for changes to chaotically propagate through the system), the smaller the chance of the imaginary butterfly changing the outcome. But my point is, again, that there is no meaningful correlation between the coin flips and the resulting thoughts and actions. 
Suppose you have two magical buttons: if you press one of them, you can make all my cake-decision-related atoms receive
1TheAncientGeek4yThere is a relationship between your brain state and your thoughts, which is that your thoughts are entirely constituted by, and predictable from, your brain state. Moreover, the temporal sequence of your thoughts is constituted by and predictable from the evolution of your brain state, whether it is deterministic or indeterministic. I see no grounds for saying that your thoughts lack a "meaningful" connection to your brain states in the indeterministic case only... but then I don't know what you mean by "meaningful". Care to taboo it for me? No. It's more like identity. You seem to be saying that your thoughts aren't non-physical things causing physical brain states. That's something. Specifically, it is a refutation of interactionist dualism... but, as such, it doesn't have that much to do with free will, as usually defined. If all libertarian theories were a subset of interactionist theories, you would be on to something, but they are not. Taboo "meaningful", please. Saying it is the wrong answer because it is the wrong answer is pointless. You need to find out what naturalistic libertarianism actually says, and then refute it. So much the better for naturalistic libertarianism, then. One of the standard counterarguments to it is that the more free you are, the less rational you would be. Which would refute the claim that indeterminism alone is a sufficient condition for rationality and agency. But that claim is not made by naturalistic libertarianism. Would it kill you to do some homework?
0entirelyuseless4yThis is like saying that if physics does not result in consciousness in stones, we shouldn't admit that it results in consciousness in humans. I have no particular reason to think that we have libertarian free will. But we do make choices, and if those choices are indeterminate, then we have libertarian free will. If those choices are indeterminate, it will in fact be because of the indeterminacy of the underlying matter. If your argument is correct, something more is needed for libertarian free will besides choices which are indeterminate. What is that extra component that you are positing as necessary for free will?
0Viliam4yMy point exactly. If physics does not result in consciousness in stones, then "physics" is not an explanation of consciousness in humans. And neither is "quantum physics" an explanation of free will in humans (as long as we use any definition of "free will" which does not also apply to stones). Well, the philosophers are supposed to have some superior insights, so I am waiting for someone to communicate them clearly. Preferably without invoking quantum physics in the explanation. My guess is that "free will" belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have "free will", and what we mean when we feel that we have "free will". That's all there is about "free will". Start with the experiences that caused us to create the expression "free will" in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don't have a bottom line of "X, in principle" first. So... what would make me feel that someone or something has free will? I guess "not completely predictable", "not completely random", "seems to follow some goals" and "can somewhat adapt to changes in its environment" are among the key components, but maybe I forgot something just as important. But whether something seems predictable or unpredictable to me, that is a fact about my ability to predict, not about the observed thing. I mean, if something is "unpredictable in principle", that would of course explain my inability to predict it. But there are also other reasonable explanations for my inability to predict -- some of them so obvious that they are probably low-status to mention -- such as me not having enough information, or not having enough computing power. 
I don't see the atoms in other people's brains, I couldn't compute their movements fast enough anyway, so I can't predict other people's thoughts or actions precisely eno
1TheAncientGeek4yGeneric physics doesn't even account for toasters. You need to plug in structure. An explanation all in itself, or a potential part of an explanation, including other things, such as structure. Tracing the feeling back might result in a mechanism that produces a false impression of freedom, or a mechanism that results in freedom. What you are suggesting leaves the question open. Who do you think is doing that? The claim is hypothetical: that if indeterminism exists in the territory, then it could provide the basis for non-illusory FW. And if we investigate that, we can resolve the question you left open above.
1entirelyuseless4yAdding to my previous comment, to explain the point about stones more fully: I understand libertarian free will to mean, "the ability to make choices, in such a way that those choices are not completely deterministic in advance." We know from experience that people have the ability to make choices. We do not know from experience if they are deterministic in advance or not. And personally I do not know or care. Your objection about the second part seems to be, "if the second part of the definition is satisfied, but only by reason of something which also exists in stones, that says nothing special about people." I agree, it says nothing special about people. That does not prevent the definition from being satisfied. And it is not satisfied by stones, since stones do not have the first part, whether or not they have the second.
0entirelyuseless4yThis is all fine, for how you understand the idea of free will. And I personally agree that it does not matter whether the world is unpredictable in principle or not. I am just saying that people who talk about libertarian free will, define it as being able to make choices, without those choices being deterministic. And that definition would be satisfied in a situation where people make choices, as they actually do, and their choices are not deterministic because of quantum mechanics (which may or may not be the case -- as I said, I do not care.) And notice that this definition of free will would not be satisfied by stones, even if they are not deterministic, because they do not have the choice part. In the previous comment, you seemed to be denying that this would satisfy the definition, which would mean that you would have to define libertarian free will in an idiosyncratic sense.
0entirelyuseless4yYes. Viliam is assuming that if your actions correspond to a non-deterministic physics, it is "randomness" rather than you that is responsible for your actions. But what would the world look like if you were responsible for your actions? Just because they are indeterminate (in this view) does not mean that there cannot be statistics about them. If you ask someone whether he wants chocolate or vanilla ice cream enough times, you will be able to say what percentage of the time he wants vanilla. Which is just the way it is if the world results from non-deterministic physics as well. In other words, the world looks exactly the same. That is because it is the same thing. So there is no reason for Viliam's conclusion that it is not really you doing it; unless you were already planning to draw that conclusion no matter how the facts turned out.
0Vaniver4yWhat process do you use to determine which problem is more 'real'? That seems like our core disagreement, and we can probably discuss that more fruitfully.
1TheAncientGeek4yThe real problem is the problem as discussed in the literature.
0Vaniver4ySo, implicitly, "the more professional philosophers care about a problem, the more real it is"?
2TheAncientGeek4yThe more you diverge from discussing the problem in the literature, the less you are really solving the age old problem of X, Y or Z, as opposed to a substitute of your own invention. Of course there is also a sense in which some age old problem could be a pseudo problem -- but the above reasoning still applies. To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
1Vaniver4yI see, but it seems to me that people are interested in age old problems for three main reasons: 1) they have some conflicting beliefs, concepts, or intuitions, 2) they want to accomplish some goal that this problem is a part of, or 3) they want to contribute to the age old tradition of wrestling with problems. My main claim is that I don't care much about the third reason, but do care about the first two. And so if we have an answer for where an intuition comes from, this can often satisfy the first reason. If we have the ability to code up something that works, this can satisfy the second reason. To give perhaps a cleaner example, consider Epistemology and the Psychology of Human Judgment [https://www.amazon.com/dp/0195162307/?tag=vglnk-c319-20], in which a philosopher and a psychologist say, basically, "for some weird reason epistemology as a field of philosophy is mostly ignoring modern developments in psychology, and so is focusing its attention on the definition of 'justified' and 'true' instead of trying to actually improve human decision-making or knowledge acquisition. This is what it would look like to focus on the latter."
1Lumifer4yNo, it does not. If you do not care about that age-old problem, you don't have an obligation to show anything about it. You can just ignore the pseudo problem and deal with the actual problem you're interested in.
2TheAncientGeek4yAll this is posited on having made a claim to have solved an existing problem. Read back.
0entirelyuseless4yVaniver was saying that causality is entirely high level. That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction. The true meaning of cause is just "what has something else coming from it, namely when it can help to explain the thing that comes from it." This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
1Vaniver4yDisagreed, because this looks like a type error to me. Molecular chemistry describes the interactions of atoms, but the interactions of atoms are not themselves made of atoms. (That is, a covalent bond is a different kind of thing than an atom is.) Causality is what it looks like when you consider running a dynamical system forward from various starting points, and noting how the future behavior of the system is different from different points. This is deeply similar to the concept of 'running a dynamical system' in the first place, and so you might not want to draw a distinction between the two of them. My point is that our human view of causality typically involves human-sized objects in it, whereas the update rules of the universe operate on a level much smaller than human-sized, and so the connection between the two is mostly opaque to us.
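[Editorial illustration] Vaniver's picture of causality as counterfactual runs of a dynamical system can be sketched in a few lines. The update rule below is an arbitrary toy (a rotation-like linear map), not anything from physics; the point is only the shape of the procedure: vary the starting point, run forward, compare futures.

```python
def step(state):
    # Toy deterministic update rule on a (position, velocity) pair.
    x, v = state
    return (x + 0.1 * v, v - 0.1 * x)

def run(state, n=50):
    # Run the dynamical system forward n steps from a given start.
    for _ in range(n):
        state = step(state)
    return state

# "Causal" talk, on this view, summarizes counterfactual runs:
# vary the starting point, observe how the future differs.
futures = {start: run((start, 0.0)) for start in (1.0, 2.0)}
print(futures)
```

Different starting points yield different futures, and statements like "starting at 2.0 causes a larger final displacement" are compressed summaries of exactly this kind of comparison.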
0entirelyuseless4yI'm not sure I understand what you are saying, and I am very sure that you either did not understand what I was saying, or else you misinterpreted it. I was using "cause" in a very general sense, where it is almost, but not quite, equivalent to anything that can be helpful in explaining something. The one extra element that is needed is that, in some way, the effect comes "from" the cause. In the situation you are calling causality, it is true that you can say "the future behavior comes from the present situation and is somehow explained by it," so there is a kind of causality there. But that is only one kind of causality, and there are plenty of other kinds. For example "is made out of" is a way of being an effect: if something is made out of something else, the thing that is made is "from" the stuff it is made out of, and the stuff helps to explain the existence of the thing. My point is that if you use this general sense of cause, which I do because I consider it the most useful way to use the word, then you cannot completely reduce causality to something else, but it is in some respect irreducible. This is because "reducing" a thing is finding a kind of cause.
0Vaniver4yIt looks to me like you're saying something along the lines of 'wait, reverse reductionism is a core part of causation because the properties of the higher level model are caused by the properties of the lower level model.' I think it makes sense to differentiate between reductionism (and doing it in reverse) and temporal causation, though they are linked. I agree with the point that if someone is trying to figure out the word "because" you haven't fully explained it until you've unpacked each of its meanings into something crisp, and that saying "because means temporal causation" is a mistake because it obscures those other meanings. But I also think it's a mistake to not carve out temporal causation and discuss that independent of the other sorts of causation.
0TheAncientGeek4yMaybe. But Yudkowsky sometimes writes as though it is fundamental. It would mean causality is constituted by the low level. Nowadays, causation means efficient causation, not material causation. As before... efficient causation is narrower than "anything that can explain anything".
0entirelyuseless4yI agree, it would not be a contradiction to think that you could explain efficient causality using material causality (although you still might be wrong.) But you could not explain material causality in the same way.
0WalterL4yScientists don't approach philosophy, though; they run screaming in the other direction. The Scientific Method doesn't work on untestable stuff.
3MugaSofer4yOff the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
0[anonymous]4yundefined
-2TheAncientGeek4yIs that a joke?
0TheAncientGeek4yThe last time I counted I came up with two and a half.
0eagain4yI've considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can't be solved in such a way right now can't be solved at all right now. Adding more "hashing out of big questions" doesn't seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.
0TheAncientGeek4yCan you tell which problems can never be solved?
0eagain4yOnly an ill-posed problem can never be solved, in principle.
0TheAncientGeek4yIs there a clear, algorithmic way of determining which problems are ill posed?
0Cloakless3yYeah, you just need a halting oracle and you're sorted.

BDFL

For the benefit of anyone else who'd need to Google: Benevolent Dictator For Life

9rayalez4yI am working on a project with this purpose, and I think you will find it interesting: http://metamind.pro [http://metamind.pro] It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version and has not launched yet, but after seeing this topic, I have decided to share it with you now. It is based on the open-source platform that I'm building: https://github.com/raymestalez/nexus [https://github.com/raymestalez/nexus] This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports twitter-like discussion, reddit-like communities, and medium-like long-form articles. This platform is in active development, and I'm very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!
9Error4yThis is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though -- if more than a few can be convinced to do it.

Speaking as a writer for different communities, there are 2 problems with this:

  • Duplicate content: unless a canonical version is explicitly declared via headers, Google is ambiguous about which version should rank for keywords. This hits small and upcoming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of authority), and their own content will be marked both as a duplicate and as spam, and their domain deranked as a result.

  • "An audience of your own": if a reasonable reader can assume that "all good content will also be cross-posted to LW anyway", that strongly undermines the reason why one should have the small blogger in their RSS reader / check their site once a day in the first place.
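The duplicate-content problem has a standard mitigation that a cross-posting setup could adopt: the LW copy declares the author's own domain as the canonical version, so Google attributes ranking to the original. A minimal example (the URL is purely illustrative):

```html
<!-- In the <head> of the LW cross-post, pointing at the author's original -->
<link rel="canonical" href="https://example-blog.com/original-post/" />
```

This only works if the platform emits the header automatically per cross-post; asking each author to manage it by hand recreates the original friction.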

The HN "link aggregator" model works because, by directly linking to a thing, you bump its ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS or newsletters); participation therefore has limited downside.

3atucker4y"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.
6Jacobian4yMy willingness to cross-post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used, and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider that a bad thing, because it meant that almost every post was gold. In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon; I use a lot of pictures and a lighter tone, and I don't have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn't a good fit for G Wiley's budding rationalist community blog [https://mapandterritory.org/], let alone old LW. I guess what I'm saying is that there's a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) -> cross-posting -> links with centralized discussion -> blogroll (loosest). Any point on the scale could work, but it's important to recognize the tradeoff, and also to make the standards extremely clear, so that each writer can decide whether they're in or out.
5sarahconstantin4yI have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we're going to have to figure out how to address.
2Paul Crowley4yThe big downside of this is that it divides the discussion.
3G Gordon Worley III4yBut what's so bad about divided discussion? In some ways it helps by increasing the surface area to which the relevant ideas are exposed.
7roland4yWhat explosions from EY are you referring to? Could you please clarify? Just curious.
7SatvikBeri4yOn (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.
6Vaniver4yI think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn't seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work. If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here [https://github.com/tricycle/lesswrong/issues] to see what needs to be worked on.

I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.

Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.

Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.

The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. You need Python and Ruby just to get a chance, and then you must study code which was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxe...

At one point I was planning on making a contribution. It was difficult just getting the code setup and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in a development mode. For example, on Mac you have to run it from within a disk image, the VM didn't work, and setting up new user accounts for testing purposes was a huge pain.

I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn't there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.

The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren't quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.

The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.

Thanks for trying to work on that one!

setting up new user accounts for testing purposes was a huge pain.

This seems like the sort of thing that we should be able to include with whatever makes the admin account that's already there; I was watching someone running a test yesterday and while I showed them the way to award accounts karma, I didn't know of a way to force the karma cache to invalidate, and so they had to wait ~15 minutes to be able to actually make a post with their new test account.

These sorts of usability improvements--a pull request that just adds comments for a section of code you spent a few hours understanding, an improvement to the setup script that makes the dev environment better, are sorely needed and greatly appreciated. In particular, don't feel at all bad about changing the goal from "I'm going to close out issue X" to "I'm going to make it not as painful to have test accounts," since those sorts of improvements will lead to probably more than one issue getting closed out.
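[Editorial illustration] None of this thread spells out the actual shape of LW's karma cache, so the following is only a generic sketch of the "invalidate on write" affordance that would remove the ~15-minute wait described above. Every name here (`KarmaCache`, `get`, `invalidate`) is hypothetical, not LW's actual API:

```python
import time

class KarmaCache:
    """Toy read-through cache: reads are cheap, admin writes invalidate."""

    def __init__(self, ttl=900):
        self.ttl = ttl          # seconds; 900 ~= the 15-minute wait
        self._store = {}        # user -> (karma, cached_at)

    def get(self, user, compute):
        # Return the cached karma if fresh, else recompute and cache it.
        entry = self._store.get(user)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]
        karma = compute(user)
        self._store[user] = (karma, time.time())
        return karma

    def invalidate(self, user):
        # The missing admin affordance: force a recompute on next read.
        self._store.pop(user, None)

cache = KarmaCache()
cache.get("test_account", lambda u: 0)       # stale value gets cached
cache.invalidate("test_account")             # admin awards karma, then...
print(cache.get("test_account", lambda u: 100))  # ...fresh value is visible
```

The point is only that whatever script creates test accounts could call the equivalent of `invalidate` immediately after awarding karma, instead of waiting out the TTL.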

1Viliam4yMaybe it would be easier to make contributions that rely on the code as little as possible -- scripts running on separate pages, that would (1) verify that the person running them is a moderator, and (2) connect to the LW database (these two parts would be common to all such scripts, so have them as two functions in a shared library) -- and then have a separate simple user interface for doing whatever needs to be done. For example, make a script called "expose_downvotes" that displays a text field where the moderator can copy the comment permalink; after clicking "OK", a list of the usernames who downvoted the specific comment is displayed (preferably with hyperlinks to their user profiles). For the user's convenience, the comment id is automatically extracted from the permalink. The moderator would then simply open this script in a second browser tab, copy the link location from the "Permalink" icon at the bottom of a comment, click "OK", done. Compared with solutions integrated into the LW web page, this solution is only slightly more complicated for the moderator, but probably much simpler for the developer to write. Most likely the moderator will have the page bookmarked, so it's just "open bookmark in a new tab, switch to old tab, right-click on the comment icon, copy URL, switch to new tab, click on the text field, Ctrl+V, click OK". Still a hundred times simpler (and a thousand times faster!) than calling tech support, even assuming their full cooperation. Each such script could be on a separate page, and they could all be linked together by another function in the shared library which adds a header containing hyperlinks to all of the scripts.
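[Editorial illustration] A hedged sketch of the "expose_downvotes" idea. The permalink format, the `votes` table schema, and every function name here are hypothetical stand-ins -- the real LW database would differ -- but the shape of the script (parse id, query, list usernames) is the whole proposal:

```python
import re
import sqlite3

def comment_id_from_permalink(permalink):
    # Extract a numeric comment id from a permalink ending in e.g. "#c123".
    match = re.search(r"c(\d+)$", permalink)
    return int(match.group(1)) if match else None

def expose_downvotes(db, comment_id):
    # Hypothetical schema: votes(comment_id, username, direction).
    rows = db.execute(
        "SELECT username FROM votes WHERE comment_id = ? AND direction = -1",
        (comment_id,),
    )
    return [r[0] for r in rows]

# Demo against an in-memory stand-in for the real database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE votes (comment_id INTEGER, username TEXT, direction INTEGER)")
db.executemany("INSERT INTO votes VALUES (?, ?, ?)",
               [(123, "alice", -1), (123, "bob", 1), (123, "carol", -1)])
cid = comment_id_from_permalink("http://lesswrong.com/lw/abc/post/#c123")
print(expose_downvotes(db, cid))
```

The moderator-verification step and the shared connection helper would live in the small common library the comment describes; each admin tool then reduces to one query plus one page.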
3Vaniver4yI had difficulties finding people without mentioning a price; I'm pretty sure the defect was in where and how I was looking for people. I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW's code. I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven't been approaching it that way is that it's harder to make small moves and test their results; either you switch over, or you don't, and no potential replacement was obviously superior.

I'm new and came here from Sarah Constantin's blog. I'd like to build a new infrastructure for LW, from scratch. I'm in a somewhat unique position to do so because I'm (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, which grants me the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time; by then it should be robust enough to be used in an alpha or beta state, and to attract devs to contribute to further development.

Here is how I envision the basic architecture of this project:

  1. A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
  2. A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
  3. A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc.). Security is the first concern; all others are predicated on it.

I w... (read more)
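A toy sketch of the three-layer separation proposed above (business logic, frontend, storage). Everything here is invented for illustration; it is not the actual project design, just one way the layers could stay decoupled:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

# Layer 3: storage interface -- any engine (SQL, NoSQL) can sit behind it.
class PostStore:
    def save(self, post):
        raise NotImplementedError
    def all_posts(self):
        raise NotImplementedError

class InMemoryStore(PostStore):
    def __init__(self):
        self._posts = []
    def save(self, post):
        self._posts.append(post)
    def all_posts(self):
        return list(self._posts)

# Layer 1: business logic -- knows nothing about the frontend framework
# or the database engine; it talks only to the PostStore interface.
class Forum:
    def __init__(self, store):
        self.store = store
    def submit(self, author, body):
        if not body.strip():
            raise ValueError("empty post")  # validation/moderation lives here
        self.store.save(Post(author, body))
        return self.store.all_posts()[-1]
```

The frontend (layer 2) would call `Forum` over an HTTP API, so a React client and the database could each be swapped out independently.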

5Vaniver4yThanks for the offer! Maybe we should talk by email? (this username @ gmail.com)
2Gram_Stone4yIf you don't get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)
0ChristianKl4yCurrently HackerNews and LW both run on the Reddit code base. One of the problems is that Reddit didn't design their software to be easily adapted to new projects. That means it's not easily possible to update the code with new versions. A lot of the data will be votes.
2whpearson4yNitpick: Hackernews isn't Reddit-derived. It is something written in Arc [https://en.wikipedia.org/wiki/Arc_(programming_language)]. And not open source.
0Drea4yI see various people volunteering for different roles. I'd be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a light-weight graphic design role as well, and that can be freelance. Like ananda, I'm happy to do this as an open-contribution project rather than paid. I'll reach out to Vaniver via email.
0arunkhanna004yI have some front-end experience and would love to help you(I'm a student). Email me at my username @gmail.com
9Viliam4yWell, if someone would be willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not -- there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let's say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed. I am not saying that paying me for this job is a rational thing to do; let's just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.) Maybe it was a mistake that I didn't mention this option sooner... but hearing all the talk about "some volunteers doing it for free in their free time" made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can't change the past.) I certainly couldn't do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patches. And there is also a risk that I am overestimating my abilities here. I never did a project of this scale alone. I mean, I feel quite confident that I could do it in a given time frame, but maybe there would be problems with performance, or some kind of black swan. I would probably try to solve it as a separate step. First, make the new website, as good as possible. Second, import the old content, and redirect the links. Only worry about the import when the new site works as expected. Or maybe don't even import the old stuff, and keep the old website frozen. Just static pages, without the ability to edit anything. All we lose is the ability to vote or comment on years-old content.
At the moment of transition, open officially the new website, block the ability to post new articles on the o
2WalterL4yNot trolling here, genuine question. How is the LW codebase so awful? What makes it so much more complicated than just a typical blog, plus karma? I feel like I must be missing something. From a UI perspective it is text boxes and buttons. The data structure that you need to track doesn't SEEM too complicated. (Users have names, karma totals, passwords, and roles?) What am I not taking into account?
4Vaniver4yAge, mostly. My understanding is Reddit was one of the first of its kind, and so when building it they didn't have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it's using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.
0jackk4yIt's also tied to some pretty old versions of Python and various libraries, and people often need help just getting the development server going.
0Lumifer4yIt's a modified copy of an early Reddit codebase. Besides it has, um, founder effects X-/ -- for example the backend SQL database is used just as an engine behind a handcrafted key-value store...
2alyssavance4yIf the money is there, why not just pay a freelancer via Gigster or Toptal?
6Vaniver4yHistorically, the answers have been things like a desire to keep it in the community (given the number of software devs floating around), the hope that volunteer effort would come through, and me not having much experience with sites like those and thus relatively low affordance for that option. But I think if we pay for another major wave of changes, we'll hire a freelancer through one of those sites. (Right now we're discussing how much we're willing to pay for various changes that could be made, and once I have that list I think it'll be easy to contact freelancers, see if they're cheap enough, and then get done the things that make sense to do.) [edit] I missed one--until I started doing some coordination work, there wasn't shared knowledge of what sort of changes should actually be bought. The people who felt like they had the authority to design changes didn't feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn't feel like they had the authority to design changes, and both of them had more important things to be working on.
5John_Maxwell4yThis sort of leadership vacuum seems to be a common problem in the LW community. Feels to me like people can err more on the side of assuming they have the authority to do things.
5SatvikBeri4yYeah, a good default is the UNODIR pattern ("I will do X at Y time unless otherwise directed")
0skeptical_lurker4yI can code in Python, but I have no web dev experience - I could work out what algorithms are needed, but I'm not sure I would know how to implement them, at least not off the bat. Still, I'd be willing to work on it for less than $100 per hour.
0Vaniver4yThanks for the offer! If you're working for $x an hour, do you think you would take fewer than 100/x times as long as someone who is experienced at web dev?
0skeptical_lurker4yFair pay would be $x an hour given that it takes me 100/x times as long as someone who is experienced at web dev. However in reality estimates of how long the work will take seem to vary wildly - for instance you and Viliam disagree by an order of magnitude. The more efficient system might be for me to work with someone who does have some web dev experience, if there is someone else working on this.
6John_Maxwell4yRe: #2, it seems like most of the politics discussion places online quickly become dominated by one view or another. If you wanted to solve this problem, one idea is 1. Start an apolitical discussion board. 2. Gather lots of members. Try to make your members a representative cross-section of smart people. 3. Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board. I explained here [http://effective-altruism.com/ea/13v/what_does_trump_mean_for_ea/8yv] why I think reducing political polarization through this sort of project could be high-impact. Re: #3, I explain why I think this is wrong in this post [http://lesswrong.com/lw/not/revitalizing_less_wrong_seems_like_a_lost_purpose/] . "Strong writers enjoy their independence" - I'm not sure what you're pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day). (I largely agree otherwise.)
5FourFire4y1. I agree completely. 2. Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose. The comment section has, according to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of the discussion is substantially lower in general than it could have been. 3. Sure. 4. Codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, zero initial karma awarded for posting, and any other rational-discussion-fostering mechanics which have become apparent since then. 5. I agree; make this site read-only, use it and the wiki as a knowledge base, and start over somewhere else.
5John_Maxwell4yI think Hacker News has a better solution to that problem (if you reply to someone who replied to you, your reply gets delayed--the deeper the thread, the longer the delay).
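The delay rule described above could look something like the following. The actual Hacker News parameters aren't public, so the numbers here are invented for illustration:

```python
def reply_delay_minutes(depth, base=2, factor=2, cap=60):
    """Hold a reply back before it becomes visible: the deeper the
    back-and-forth (depth = number of alternating replies between the
    same two users), the longer the delay, up to a cap."""
    return min(base * factor ** depth, cap)
```

A flamewar three exchanges deep would already face a noticeable cooldown, while ordinary first replies appear almost immediately.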
2SatvikBeri4yI wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn't open source, but I'm thinking about a site that takes Hacker News's decisions as the default, unless there seems to be a good reason for something different.)
5John_Maxwell4yWell, there's a vanilla version of HN that comes with the Arc distribution [http://arclanguage.org/]. It doesn't look like any of the files in the Arc distribution have been modified since Aug 4, 2009. I just got it running on my machine (only took a minute) and submitted a link. Unsure what features are missing. Relevant HN discussion [https://news.ycombinator.com/item?id=5006037]. If someone knows Paul Graham, we might be able to get a more recent version of the code, minus spam prevention features & such? BTW, I believe Y Combinator is hiring [https://news.ycombinator.com/item?id=12846833] hackers. (Consider applying!) Arc isn't really used for anything besides Hacker News. But it's designed to enable "exploratory programming". That seems ideal if you wanted to do a lot of hands-on experimentation with features to facilitate quality online discussion. (My other comment [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di4c] explains why there might be low-hanging fruit here.)
3SatvikBeri4yHacker News was rewritten in something other than Arc ~2-3 years ago IIRC, and it was only after that that they managed to add a lot of the interesting moderation features. There are probably better technologies to build an HN clone with today (Clojure seems strictly better than Arc, for instance). The parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are comes mostly from having observed the site and seen comments here and there over the years.
2toner4yHere is some alternative code for building an HN clone: https://github.com/jcs/lobsters [https://github.com/jcs/lobsters] (see https://lobste.rs/about [https://lobste.rs/about] for differences to HN).
4John_Maxwell4yYes, I think Hacker News is plausibly the best general-purpose online discussion forum right now. It would not surprise me if it's possible to do much better, though. As far as I can tell, most online discussion software is designed to maximize ad revenue (or some proxy like user growth/user engagement) rather than quality discussions. Hacker News is an exception because the entire site is essentially a giant advertisement to get people applying for Y Combinator, and higher-quality discussions make it a better-quality advertisement.
2John_Maxwell4yRelevant: http://danluu.com/hn-comments/ [http://danluu.com/hn-comments/]
4Paul Crowley4yThis is the platform Alexandros is talking about: http://www.telescopeapp.org/ [http://www.telescopeapp.org/]
3eagain4yHi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I'm posting back now. Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous "consensus" in the "community" that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament, you got 4chan. I have strong opinions about politics and also desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death. I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism. Don't go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn't moderate a tight ship, and allows open registration. And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism -- this is a mistake we made last time, and suffered for.
2Lumifer4yHa-ha You seem to have a desire to discuss the topic only in a tightly controlled environment where you get to establish the framework and set the rules.
0gjm4yI didn't see anything in eagain's comment that demanded that he[1] get to establish the framework and set the rules. (It is easy, and cheap, to portray any suggestion that there should be rules as an attempt to get to set them. Human nature being what it is, this will at least sometimes be at least partly right. I don't see that that means that having rules isn't sometimes a damn good idea.) [1] Apologies if I guessed wrong.
0Lumifer4yEagain knows which ideas are "deeply bad" and he's quite certain they need to be excluded from the conversation.
0eagain4yI didn't say excluded from the conversation. I said exposed to the bright, glaring sunlight of factual rigor.
0Lumifer4yThese words do not appear anywhere in your comment. Instead you said: "Don't go beyond that" seems to mean not allowing those politics and the bad-idea raiders. "Not allowing" does not mean "expose to sunlight", it means "exclude".
0snewmark4yI'm not sure if this is what eagain was alluding to, but this does seem advisable: do not permit (continuous) debates of recognizably bad ideas. I admit this is difficult to enforce, but stating that rule will, in my opinion, color the intended purpose of this website.
0Lumifer4yThe word "bad" looks to be doing all the heavy lifting in here.
0TheAncientGeek4yWhich isn't being done because of what...? Widespread stupidity?
0gjm4yPerhaps he does. It wouldn't exactly be an uncommon trait. However, there is a gap between thinking that some particular ideas are very bad and we'd be better off without them, and insisting on setting the rules of debate oneself, and it is not honest to claim that someone is doing the latter merely because you are sure they must be doing the former.
0Lumifer4yThis thread is about setting the rules for discussions, isn't it? Eagain is talking in the context of specifying in which framework discussing politics can be made to work on LW.
0gjm4yYup. That is (I repeat) not the same thing as insisting that he get to establish the framework and set the rules. (It seems to me that with at least equal justice someone could complain that you are determined to establish the framework and set the rules; it's just that you prefer no framework and no rules. I don't know whether that actually is your preference, but it seems to me that there's as much evidence for it as there is for some of what you are saying about eagain's mental state.)
0Lumifer4yAnd yet I'm not telling LW how to set up discussions...
4gjm4yAren't you? I mean, you're not making concrete proposals yourself, of course; I don't think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people's. But looking at the things you object to and the things you don't, it seems to me that you're taking a position on how LW's discussions should be just as much as eagain is; you're just expressing it by objecting to things that diverge from it, rather than by stating it explicitly.
3entirelyuseless4yLumifer seems to object to things because he finds it enjoyable to object to things, and this is a good explanation for why he objects to things rather than making his own proposals. But this means that he is not necessarily taking a position on how discussion should be, since he would be likely to object to both a proposal and its opposite, just because it would still be fun to object.
0gjm4yIt seems to me that there are definite regularities in which proposals he objects to and which he doesn't.
0eagain4yHmm. That sounds like a nice rule: anyone who spends all their posting efforts on objecting to other people's ideas without putting forth anything constructive of their own shall be banned, or at least downvoted into oblivion.
0gjm4yI think that would be excessive. Pointing out others' mistakes is a useful activity. (Think of Socrates.) Also, downvoting is disabled right now.
0Lumifer4yThe thing is, I understand the difference between argument points and policy proposals. These are very very different creatures.
0gjm4yI remark that this is not a million miles from what Eugine_Nier tried to do, and unfortunately he was not entirely unsuccessful. (Though he didn't get nearly as far as producing a homogeneous consensus in favour of his ideas.)
0Elo4yI would rather politics happen in all those other places you mentioned.
3Lumifer4yIf I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin... :-D
2skeptical_lurker4yWhat's hilariously ironic is that our problem immigrants are Eugine's sockpuppets, when Eugine is NRx and anti-immigrant. That Eugine is so much of a problem is actually evidence in favour of some of his politics.
2Viliam4yAnd when the dictator stops Eugine, it will also prove that Cthulhu always swims left. (Meanwhile, in a different tribe: "So, they have a dictator now, and of course it's a white male. That validates our beliefs!")
0Lumifer4yDon't forget that Cthulhu is a white male :-P (race/sex are social constructs, aren't they? Cthulhu is definitely not one of oppressed minorities so there you go)
0hairyfigment4yYou're talking about someone using the easiest method of disruption available to individuals, combined with individual voter fraud. This is difficult to stop because of the site's code, which I think the single owner of the site chose.
0Alexandros4yLW has a BDFL already. He's just not very interested and (many) people don't believe he's able to restore the website. We didn't "come to believe" anything.
2ChristianKl4yNo, EY effectively doesn't act as a BDFL. He doesn't have the effective power to ban contributors. The last time I asked him to delete a post he said that he can't for site political reasons. The site is also owned by MIRI and not EY directly.
1Lumifer4yLessee... He isn't so much benevolent as he is absent. I don't see him exercising any dictatorial powers and as to "for life", we are clearly proposing that this ain't so. So it seems you're just wrong. An "absentee owner/founder" is a better tag.
2sleepingthinker4yAs a newbie, I have to say that I am finding it really hard to navigate around the place. I am really interested in rational thinking and the ways people can improve it, as well as persuasion techniques to try to get people to think rationally about issues, since most of them fall prey to cognitive biases and bad, illogical thinking. I have found that writing about these concepts for myself really helps in clarifying things, but sometimes I miss a discussion on these topics, so that's why I came here. For me, some things that could help improve this site: 1) better organization and making it clearer to navigate 2) a set of easy-to-read newbie texts 3) the ability to share interesting posts from other places and discuss them
0plethora4yI didn't delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would an HN-style aggregator add over either what we have now or our Reddit fork plus Reddit's ability to post links to external sites?); I deleted it because the things people posted here weren't good. I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We're sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there's a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn't seem to have been much progress made. I haven't seen anyone link to a LW post written after I deleted my account. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here and then they stopped. Then again, better CSS wouldn't hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.
0NatashaRostova4yThat's true. LW isn't bringing back yvain/Scott or other similar figures. However, it is a cool training ground/incubator for aspiring writers. As of now I'm a 'no one.' I'd like to try to see if I can become 'some one.' SSC comments don't foster this. LW is a cool place to try, it's not like anyone is currently reading my own site/blog.

There's an issue that I expect will be closed sometime this week that I think will round out the suite of technical tools that will give moderators the edge over trolls. Of course, people are intelligent and can adapt, so I'm not going to hang up a Mission Accomplished banner just yet.

I predict that whatever is in this drop will not suffice. It will require at minimum someone who has both significant time to devote to the project, and the necessary privileges to push changes to production.

I applaud this and am already participating by crossposting from my blog and discussing.

One thing that I like about using LW as a home base is that everyone knows what it is, for good and for ill. This has the practical benefit of not needing further software development before we can get started on the hard problem of attracting high-quality users. It also has the signaling benefit of indicating clearly that we're "embracing our roots", including reclaiming the negative stereotypes of LessWrongers. (Nitpicky, nerdy, utopian, etc.)

I am unusual in this community in taking "the passions" really seriously, rather than identifying as being too rational to be caught up in them. One of my more eccentric positions has long been that we ought to be a tribe. For all but a few unusual individuals, humans really want to belong to groups. If the group of people who explicitly value reason is the one group that refuses to have "civic pride" or similar community-spirited emotions, then this is not good news for reason. Pride in who we are as a community, pride in our distinctive characteristics, seems to be a necessity, in a cluster of people who aspire to do bet... (read more)

2Qiaochu_Yuan4yOof, is this really an eccentric position? FWIW, I am extremely convinced that the rationalist community ought to be a tribe, and one of the biggest updates I made at the CFAR reunion was seeing what felt to me like evidence that we were becoming more functional along tribey directions that I really wanted.
2TheAncientGeek4yWhy?
6Qiaochu_Yuan4yIn short, because I think tribes are the natural environments in which humans live, and that ignoring that fact produces unhappy and dysfunctional humans.
4TheAncientGeek4yThere's a logic gap there. You are assuming that rationalists don't have pre-existing tribes, and that they won't be in any tribe if they are not in the rationalist tribe. And you are assuming that rationalists need to be in a rationality tribe in order to be rational... arguably, it works the other way: tribalism enhances groupthink bias, and so lowers the rationality level on the whole.

I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:

"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)

That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:

  • set the rules that all participants have to abide by, and enforce them
  • decide principles for what's on-topic and what's off-topic
  • receive reports of trolls, and warn or ban them
  • respond to complaints about the site not working well
  • decide what the site features should be, and implement the high-priority ones

Pretty much any successful subreddit, even small... (read more)

a proactive, responsive admin/moderation team

Which needs to be backed up by a responsive tech support team. Without help from tech support, the moderators are only able to do the following:

1) remove individual comments; and
2) ban individual users.

It seems like a lot of power, but for example when you deal with someone like Eugine, it is completely useless. All you can do is play whack-a-mole with banning his obvious sockpuppet accounts. You can't even revert the downvotes made by those accounts. You can't detect the sockpuppets that don't post comments (but are used to upvote the comments made by the active sockpuppets, which then quickly use their karma to mod-bomb the users Eugine doesn't like). So, all you can do is to delete the mod-bombing accounts after the damage was done. What's the point? It will cost Eugine about 10 seconds to create a new one.

(And then Eugine will post some paranoid rant about how you have some super shady moderator powers, and a few local useful idiots will go like "yeah, maybe the mods are too powerful, we need to stop them", and you keep banging your head against the wall in frustration, wishing you actually had a fraction of thos... (read more)

Wow, that is a pretty big issue. Thank you for mentioning this.

Agree with all your points. Personally, I would much rather post on a site where moderation is too powerful and moderators err towards being too opinionated, for issues like this one. Most people don't realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.

Was including tech support under "admin/moderation" - obviously, ability to eg. IP ban people is important (along with access to the code and the database generally). Sorry for any confusion.

That's okay, I just posted to explain the details, to prevent people from inventing solutions that predictably couldn't change anything, such as: appoint new or more moderators. (I am not saying more help wouldn't be welcome, it's just that without better access to data, they also couldn't achieve much.)

What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.

When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.

Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make a decision "user xyz123 is abusing the voting mechanism" or "user xyz123 is a sockpuppet for user abc789", describe my case to other mods, and only after getting their agreement I would learn who the "user xyz123" actually is.

(But of course, getting the database without anonymization -- if that would be faster -- would be equally good; I could just anonymize it after I get it.)
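The anonymization step described above -- replacing handles with stable random identifiers before any analysis -- is straightforward to sketch. This is a hypothetical illustration; the vote-row format `(voter, comment_author, direction)` is an assumption:

```python
import secrets

def anonymize_votes(votes):
    """Replace user handles with stable random identifiers like
    'user a3f19c', so the analyst can argue 'user xyz123 is abusing
    the voting mechanism' before learning who that actually is.
    `votes` is assumed to be rows of (voter, comment_author, direction)."""
    mapping = {}

    def alias(handle):
        if handle not in mapping:
            mapping[handle] = "user " + secrets.token_hex(3)
        return mapping[handle]

    anonymized = [(alias(voter), alias(author), direction)
                  for voter, author, direction in votes]
    return anonymized, mapping  # keep `mapping` sealed until the mods agree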

Offline so that I could freely run there any computations I imagine, without increasing bills for hosting. Also, to have it faster, not be limited by internet bandwidth, and to be free to use any programming language.

What specific computations would I run there? Well, that's kinda the point that I don't know in advance. I would try different heuristics, and see what works. Also, I suspect there would have to be some level of "security by obscurity", to avoid Eugine adjusting to my algorithms. (For example... (read more)
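One example of the kind of offline heuristic described above: flagging voters whose downvotes are overwhelmingly concentrated on a single target, the "mod-bombing" pattern. The thresholds and data format here are invented purely for illustration:

```python
from collections import Counter

def mod_bomb_suspects(votes, min_votes=20, concentration=0.8):
    """Flag voters whose downvotes are mostly aimed at one user.
    `votes` is assumed to be rows of (voter, target_author, direction);
    `min_votes` and `concentration` are made-up thresholds."""
    total_downvotes = Counter()
    per_target = Counter()
    for voter, target, direction in votes:
        if direction < 0:
            total_downvotes[voter] += 1
            per_target[(voter, target)] += 1
    suspects = []
    for voter, total in total_downvotes.items():
        if total < min_votes:
            continue
        top = max(n for (v, _), n in per_target.items() if v == voter)
        if top / total >= concentration:
            suspects.append(voter)
    return suspects
```

A real version would combine several such signals (vote timing, account age, voting on dormant threads) precisely so that no single threshold is easy to game.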

It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than having it visible to readers who upvoted or downvoted which post, as on Facebook). But I haven't thought about it much.

If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.

It can happen without full awareness... the user will simply notice that X upvotes them often and Y downvotes them often... they will start liking X and disliking Y... they will start getting pleasant feelings when looking at comments written by X ("my friend is writing here, I feel good") and unpleasant feelings when looking at comments written by Y ("oh no, my nemesis again")... and that will be reflected by how they vote.

And this is the charitable explanation. Some people will do this with full awareness, happy that they provide incentives for others to upvote them, and deterrence to those who downvote. -- Humans are like this.

Even if the behavior described above did not happen, people would still instinctively expect it to happen, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to get contrarian points. Either way, different forms of signalling would get involved.

From the view of game theory, if some people had a reputation for being magnanimous about downvotes, and other peop... (read more)

It's not actually obvious to me that downvotes are even especially useful. I understand what purpose they're supposed to serve, but I'm not sure they actually serve it.

It seems like if we removed them, a major tool available to trolls is just gone.

I think downvoting is also fairly punishing for newcomers - I've heard a few people mention they avoided Less Wrong due to worry about downvoting.

Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which triggers mod attention but has no visible effect.

Alternatively, go with the Hacker News model of only enabling downvotes after you've accumulated a large amount of karma (enough to put you in, say, the top 0.5% of users). I think this gets most of the advantages of downvotes without the issues.
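The karma-threshold rule amounts to a one-line permission check. A sketch under the stated assumption (top 0.5% of users by karma; the function name and cutoff are illustrative, not Hacker News's actual numbers):

```python
def can_downvote(user_karma, all_karma, top_fraction=0.005):
    """Allow downvotes only for users in the top fraction by karma.

    `all_karma` is the karma of every user on the site; `top_fraction`
    is a tunable policy knob (0.005 = top 0.5%), not a recommendation.
    """
    if not all_karma:
        return False
    ranked = sorted(all_karma, reverse=True)
    # Index of the last user still inside the top fraction.
    cutoff_index = max(0, int(len(ranked) * top_fraction) - 1)
    return user_karma >= ranked[cutoff_index]
```

With 1000 users holding karma 1..1000, the top 0.5% is five users, so only karma of 996 or above unlocks downvoting.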

8scarcegreengrass4yI agree. In addition to the numerous good ideas suggested in this tree, we could also try the short term solution of turning off all downvoting for the next 3 months. This might well increase population. (Or similar variants like turning off 'comment score below threshold' hiding, etc)
8Viliam4yPreferably also sorted by the number of total likes. Otherwise the only difference between a comment with 1 upvote and 15 upvotes is a single character on screen that requires some attention to even notice. There are some kinds of behavior which in my opinion should be actively discouraged, besides spam. Stubborn stupidity, or verbal aggressivity towards other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to the comment without removing it completely. (I imagine a red text saying "this comment is unnecessarily rude", which would also effectively halve the number of likes for the purpose of comment sorting.)
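Viliam's "warning halves the likes for sorting" idea is easy to make concrete. A toy sketch, assuming a minimal comment record (the field names are hypothetical, not LW's schema):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    likes: int
    warned: bool = False  # moderator attached e.g. "unnecessarily rude"

def sort_key(c: Comment) -> float:
    # A warning halves the likes for ranking purposes only;
    # the displayed like count stays untouched.
    return c.likes / 2 if c.warned else float(c.likes)

comments = [
    Comment("insightful", 10),
    Comment("rude but popular", 14, warned=True),
    Comment("ok", 6),
]
ranked = sorted(comments, key=sort_key, reverse=True)
```

Here the warned comment's 14 likes count as 7 for ordering, so it sorts below the 10-like comment but above the 6-like one.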
7sarahconstantin4yI think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your "fans" vs "enemies" are, you will inevitably try to play to your "fans" and develop dislike for your "enemies." I think this is the primary thing that makes social media bad. My current cutoff for what counts as a "social media" site (I have resolved to never use social media again) is "is there a like mechanic where I can see who liked me?" If votes on LW were public, by that rule, I'd have to quit.
5Kaj_Sotala4yCould you elaborate on what you mean by this? "Posting different kinds of articles on LW and writing more of the kind of stuff that gets upvoted" also sounds like "playing to your fans" to me - in both cases you're responding to feedback and (rationally) tailoring your content towards your preferred target audience, even though in the LW case, you aren't entirely sure of who your target audience consists of.
2Vladimir_Nesov4yDo you mean that the group dynamic itself changes for the worse if likes are visible to those who want to see them, so that it doesn't matter if there is a setting that makes the likes invisible to you in particular? It's a tradeoff, some things may get worse, others may get better. I don't have a clear sense of this tradeoff.
7Kaj_Sotala4yImagine that you're a new person who's a little shy about the forum, but has read a large part of the Sequences and really thinks that Eliezer is awesome, and then you make your first post and see that Eliezer himself has downvoted you. The psychological impact of that downvote would likely be a lot bigger than the impact of what a single downvote should have. OTOH, making upvotes public would probably be a good change: seeing a list of people who upvoted you feels a lot more motivating to me than just getting an anonymous number.
8PeerGynt4ySeriously, who are these tech support people? Clearly this database belongs to the owner of less wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? Why do they then get their contract renewed? Are they taking orders from some secret deep owners of LW that outrank the moderators?

Seriously, who are these tech support people? Clearly this database belongs to the owner of less wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? ...Why do they then get their contract renewed?

The tech support is Trike Apps, who have freely donated a huge amount of programmer time toward building and maintaining LessWrong.

Yeah, it's a bit of "don't look a gift horse in the mouth" situation. When someone donates a lot of time and money to you, and suddenly becomes evasive or stubborn about some issue that is critical to be solved properly... what are you going to do? It's not like you can threaten to fire them, right?

In hindsight, I made a few big mistakes there. I didn't call Eliezer to have an open debate about what exactly is and isn't in my competence; that is, in case of different opinions about what should be done, who really has the last word. Instead I gave up too soon: when one of my ideas was rejected I tried to find an alternative solution, only to have it rejected again... or to finally succeed at something, and then see that Eugine improved his game, and now I am going to have another round of negotiation... until I gradually developed a huge "ugh field" around the whole topic... and wasted a lot of time... and then other people took the role and had to start from the beginning again.

0Paul Crowley4yIf we built it, would they come? You make a strong case that the workforce wasn't made able to do the job; if that were fixed, would the workforce show up?

I strongly agree with this sentiment, and currently Arbital's course is to address this problem. I realize there have been several discussions on LW about bringing LW back / doing LW 2.0, and Arbital has often come up. Up until two weeks ago we were focusing on "Arbital as the platform for intuitive math explanations", but that proved to be harder to scale than we thought. We now pivoted to a more discussion-oriented truth-seeking north star, which was our long-term goal all along. We are going to need innovation and experimentation both on the software and the community levels, but I'm looking forward to the challenge. :)

I am extremely excited about this. I suspect we should proceed trying to reboot Less Wrong, without waiting, while also attempting to aid Arbital in any ways that can help (test users, etc.).

9RyanCarey4yIf half-hearted attempts are doomed (plausible), or more generally we're operating in a region where expected returns on invested effort are superlinear (plausible), then it might be best to commit hard to projects (>1 full-time programmer) sequentially.
0Mqrius4yDoes that take into account, for example, Arbital seeming less promising to people / getting less engagement, because all the users have just sunk energy into trying to get by on a revived LW? There's an intuition pump I could make that I haven't fully fleshed out yet, that goes something like: If both Arbital and Lesswrong get worked on, then whichever seems more promising or better to use will gain more traction and end up on top in a very natural way, without having to go through an explicit failure of the other one. There's caveats/responses to that as well of course — it just doesn't seem 100% clear cut to me.
5John_Maxwell4yExciting stuff! Are you planning to engage with the LW community to figure out what features to implement? I know that Eliezer was heavily involved with Arbital's product management. But I think it's a mistake to make him the BDFL for LW 2.0, because LW 1.0 failed, and this was plausibly due to actions he took. Beware the halo effect [https://en.wikipedia.org/wiki/Halo_effect]: someone can simultaneously be a great blogger and a lousy product manager/forum moderator. I think we should let someone else like Vaniver [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2w] have a try. If you're planning to engage with the community (which I would strongly recommend--ignoring their userbase is the kind of thing failed startups do), I suggest waiting a bit and then creating a new thread about this, to simulate the effect of a sticky [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di47] .
2Alexei4yEric R and I read all the comments in this thread. We've also met with multiple people in person to discuss exactly what the platform should look like. So the broad answer is "yes", but if you have a specific mode of engagement in mind, then it might be "no". He is an adviser. There are no advocates to make him a BDFL as far as I know. I expect we'll have a public beta ready in two weeks. I plan to write a blog post of my own to explain Arbital in more detail.
0John_Maxwell4ySounds great! Well, if you created a new thread called "Eric and I are taking suggestions for Arbital", I imagine you might get a lot more relevant ideas and feedback :)
5MalcolmOcean4yI'm very excited to have an Arbital-shaped discussion and writing platform. I've been thinking for awhile that I want some of my online writing to become less blog-like, more wiki-like, but I don't actually want to use a wiki because... yeah. Wikis. Arbital seems way better. Is it at the point now where I could start posting some writing/models to it?
1Alexei4yNot yet; hence Anna's comment above.
1casebash4yIf Arbital provides a solution, then that would be great, but I think it is best to have multiple projects operating at the same time.
4Alexei4yWhy?
2casebash4yGives us two chances to succeed.
4Qiaochu_Yuan4yBut also weakens both options' ability to be a Schelling point.
0Drea4yI can see value in having LW as a prototype or scratch pad, making simple modifications of existing discussion platforms (e.g. improved moderator powers as discussed above). Then Arbital can do the harder work of building a collaborative truth-seeking platform, adding in features to, for example, support Double Crux, fine-typed voting, or evidence (rather than comments). Perhaps in the end there's a symbiosis, where LW is for discussion, and when a topic comes up that needs truth-seeking it's moved to Arbital. That frees Arbital from having to include a solved problem in its code base.

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

I don't think that a reboot/revival of LW necessarily has to consist entirely of the people who were in the community before. If we produce good stuff, we can attract new people. A totally new site with new branding might get rid of some of the negative baggage of the past, but is also less likely to get off the ground in the first place. Making use of what already exists is the conservative choice.

I hear you as saying that people here should focus on learning rather than leadership. I think both are valuable, but that there's a lack of leadership online, and my intuition is to trust "forward momentum", carrying something forward even if I do not think I am optimally qualified. He who hesitates is lost, etc.

6John_Maxwell4yI see Anna making the same complaint that you yourself have made a few [http://www.overcomingbias.com/2016/11/10-year-blog-anniversary.html] times [http://www.overcomingbias.com/2007/07/blogging-doubts.html]: namely, that most online discussions are structured in a way that makes the accumulation of knowledge difficult. (My explanation: no one has an incentive to fix this [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di4c] .) Is the fact that economists mostly cite each other evidence of "cultish in-group favoring biases"? Probably to some degree. But this hasn't fatally wounded economics.
1Venryx3y"most online discussions are structured in a way that makes the accumulation of knowledge difficult." It's a different kind of conversation, but I've been trying to improve on this problem by developing a "debate mapping" website, where conversation is structured in tree form based on claims, and then arguments underneath it which support or oppose each claim recursively. This is the website if you're interested: https://debatemap.live [https://debatemap.live]
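The claims-and-arguments tree Venryx describes has a naturally recursive shape. A toy sketch of that structure (this is an illustration of the idea, not Debate Map's actual data model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    summary: str
    supports: bool  # True = supports the parent claim, False = opposes it
    subclaims: List["Claim"] = field(default_factory=list)

@dataclass
class Claim:
    statement: str
    arguments: List[Argument] = field(default_factory=list)

def count_nodes(claim: Claim) -> int:
    """Total claims + arguments in the tree, walked recursively."""
    total = 1  # the claim itself
    for arg in claim.arguments:
        total += 1  # the argument
        for sub in arg.subclaims:
            total += count_nodes(sub)
    return total

# A tiny example tree, two levels deep.
root = Claim("Downvotes do more harm than good", [
    Argument("They deter newcomers", supports=True),
    Argument("They filter low-quality content", supports=False,
             subclaims=[Claim("Reporting could serve the same role")]),
])
```

Because every argument can carry its own subclaims, the same traversal works at any depth, which is what lets such a site accumulate structure instead of restarting the debate in each new thread.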
2John_Maxwell3yGlad to see you're working on this, it looks pretty nice! I think the bottleneck for efforts like this is typically marketing, not code. (Analogy: If you want to found a city, the first step is not to go off alone in to the wilderness and build a bunch of houses.) I think I've seen other argument mapping sites, and it seems like every few months someone announces a new & improved discussion website on SlateStarCodex (then it proceeds to not get traction). I suspect the solution is to form a committee/"human kickstarter [https://freestateproject.org/]" of some kind so that everyone who's interested in this problem can coordinate to populate the same site simultaneously. For a project like yours that already has code, the best approach might be to try to join forces with a blogger who already has traffic, or a discussion site that already has a demand for a debate map, or something like that.
0TheAncientGeek3ySeconded.
0TheAncientGeek4yThe behaviour of the Austrian School certainly is.
4namespace4ySpot on in my opinion, and one of the many points I was trying to get at with the 2016 LW Survey. For example, this community seems to have basically ignored Tetlock's latest research, relegating it to the status of a "good book" that SSC reviewed. I wish I'd included a 'never heard of it' button on the communities question because I suspect the vast majority of LessWrongers have never heard of the Good Judgement Project. I've long felt that Eliezer Yudkowsky's sequences could use somebody going over them with a highlighter and filling in the citations for all the books and papers he borrowed from.
4Raemon4yThis happened, FYI, in the sequences ebook.
4scarcegreengrass4yI have similar uncertainty about the large-scale benefits of lesswrong.com, but on smaller scales i do think the site was very valuable. I've never seen a discussion forum as polite, detailed, charitable, & rigorous as the old Less Wrong.

Successful conversations usually happen as a result of selection circumstances that make it more likely that interesting people participate. Early LessWrong was interesting because of the posts, then there was a phase when many were still learning, and so were motivated to participate, to tutor one another, and to post more. But most don't want to stay in school forever, so activity faded, and the steady stream of new readers has different characteristics.

It's possible to maintain a high quality blog roll, or an edited stream of posts. But with comments, the problem is that there are too many of them, and bad comments start bad conversations that should be prevented rather than stopped, thus pre-moderation, which slows things down. Controlling their quality individually would require a lot of moderators, who must themselves be assessed for quality of their moderation decisions, which is not always revealed by the moderators' own posts. It would also require the absence of drama around moderation decisions, which might be even harder. Unfortunately, many of these natural steps have bad side effects or are hard to manage, so should be avoided when possible. I expect the problem can b... (read more)

"It is dangerous to be half a rationalist."

It is dangerous to half-arse this and every other attempt at recovering lesswrong (again).

I take into account the comments before mine, which accurately mention several reasons for the problems on LW.

The codebase is not that bad. I know how many people have looked at it; and it's reasonably easy to fix it. I even know how to fix it; but I am personally without the coding skill to implement the specific changes. We are without volunteers willing to make changes; and without funds to pay someone to do them. Trust me. I collated all comments on all of the several times we have tried to collate ideas. We are unfortunately busy people. Working on other goals and other projects.

I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.

A year from now; crony beliefs may not be easy to find on lesswrong because it was never explicit... (read more)

I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.

So: this is subtle. But to my mind, the main issue isn't that ideas won't mostly-percolate. (Yes, lots of folks seem to be referring to Crony Beliefs. Yes, Moloch. Yes, etc.) It's rather that there isn't a process for: creating common knowledge that an idea has percolated; having people feel empowered to author a reply to an idea (e.g., pointing out an apparent error in its arguments) while having faith that if their argument is clear and correct, others will force the original author to eventually reply; creating a common core of people who have a common core of arguments/analysis/evidence they can take for granted (as with Eliezer's Sequences), etc.

I'm not sure how to fully explicitly model it. But it's not mostly about the odds that a given post will spread (let's call that probability "p"). It's more abou... (read more)

5Viliam4yI think I understand what you mean. On one hand it is great to have this fluid network of rationalist websites where everyone chooses the content they prefer to read. We don't have a single point of failure [https://en.wikipedia.org/wiki/Single_point_of_failure]. We can try different writing styles, different moderation styles, etc. The rationalist community can survive and generate new interesting content even when LW is dying and infested by downvoting sockpuppets, and Eliezer keeps posting kitten videos on Facebook (just kidding). On the other hand, it is also great to have a shared vocabulary; a list of words I can use freely without having to explain them. Because inferential distance is a thing. (For example, LW allows me to type "inferential distance" without having to explain. Maybe I could just use a hyperlink to the origin of the term. But doing it outside of LW includes a risk of people starting to debate the concept of the "inferential distance" itself, derailing the discussion.) The opposite of public knowledge is the Eternal September. Maybe "Moloch" is an example that meaningful terms will spread across rationalist websites. (Natural selection of rationalist memes?) Maybe hyperlinking the original source is all it takes; linking to SSC is not more difficult than linking to LW Sequences, or Wikipedia. That is, assuming that the concept is clearly explained in one self-contained article. Which is not always the case. Consider "motte and bailey". I consider it a critical rationalist concept, almost as important as "a map is not the territory". (Technically speaking, it is a narrower version of "a map is not the territory".) I believe it helps me to see more clearly through most political debates, but it can also be applied outside of politics. And what is the canonical link? Oh, this [http://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/]. 
So, imagine that I am talking with people who are not regular SSC readers, and we are debat
4TheAncientGeek4yNo, this: http://philpapers.org/archive/SHATVO-2.pdf [http://philpapers.org/archive/SHATVO-2.pdf]
3entirelyuseless4yMy problem with the "shared vocabulary" is that as you note yourself here, it implies that something has already been thought through, and it assumes that you have understood the thing properly. So for example if you reject an argument because "that's an example of a motte and bailey fallacy", then this only works if it is in fact correct to reject arguments for that reason. And I don't think it is correct. One reason why people use a motte and bailey is that they are looking for some common ground with their interlocutor. Take one of Scott's examples, with this motte and bailey: 1. God is just the order and love in the universe 2. God is an extremely powerful supernatural being who punishes my enemies When the person asserts #1, it is not because they do not believe #2. It is because they are looking for some partial expression of their belief that the other person might accept. In their understanding, the two statements do not contradict one another, even though obviously the second claims a good deal more than the first. Now Scott says that #1 is "useless," namely that even if he could theoretically accept the word "God" as applying to this, there is no reason for him to do this, because there is nowhere to go from there. And this might be true. But the fact that #2 is false does not prove that it is true. Most likely, if you work hard, you can find some #3, stronger than #1, but weaker than #2, which will also be defensible. And it would be right to tell them to do the work that is needed. But it would be wrong to simply say, "Oh, that's a motte and bailey" and walk away. This is not merely a criticism of this bit of shared vocabulary, so that it would just be a question of getting the right shared vocabulary. A similar criticism will apply to virtually any possible piece of shared vocabulary -- you are always assuming things just by using the vocabulary, and you might be wrong in those assumptions.
1SatvikBeri4yMaking shared vocabulary common and explicit usually makes it faster to iterate. For example, the EA community converged on the idea of replaceability as an important heuristic for career decisions for a while, and then realized that they'd been putting too much emphasis there and explicitly toned it down [https://80000hours.org/2015/07/replaceability-isnt-as-important-as-you-might-think-or-weve-suggested/] . But the general concept had been floating around in discussion space already, giving it a name just made it easier to explicitly think about.
2entirelyuseless4yI think I agree with this in one sense and disagree in another. In particular, in regard to "giving it a name just made it easier to explicitly think about" : 1. I agree that this makes it easier to reason about, and therefore you might come to conclusions faster and so on, even correctly. 2. I don't agree that we really made it easier to think about. What we actually did is make it less necessary to think about it at all, in order to come to conclusions. You can see how this works in mathematics, for example. One of the main purpose of the symbols is to abbreviate complicated concepts so that you don't have to think through them every time they come up. I think the second point here is also related to my objection in the previous comment. However, overall, the first point might be overall more important, so that the benefit outweighs the costs, especially in terms of benefit to a community.
1Elo4yWhat are you using this word to mean? At a guess it sounds like, "ideas will float to the surface" but also it does not always mean that, as used in "has percolated". Percolate relates to filtering of a substance like coffee, to get the good bits from the bad. Can you repeat the above without using this word? Are we looking to separate and elevate good ideas from the general noise on the interwebs, or are we looking to ensure ideas filter through the diaspora to every little sub group that exists? Or are we looking to filter something else? I am not sure which you are trying to describe. -------------------------------------------------------------------------------- If you want to describe an earlier post that is well know, and well spread, it should be enough to describe the name of the concept, i.e. crony beliefs. If you want to reference a less well known concept; it should be enough to name the author and link to their post, like if I wanted to refer to the list of common human goals [http://www.bearlamp.com.au/list-of-common-human-goals/] and talk about things that relate to it. I don't see the gravity of the problem you are trying to describe with your concerns.
8Kaj_Sotala4yIt's true that articles pass around the rationalist network, and if you happen to be in it, you're likely to see some such articles. But if you have something that you'd specifically want the rationalist community to see, and you're not already in the network, it's very hard. Some time back, I had a friend ask me how to promote their book which they thought might be of interest to the rationalist community. My answer was basically "you could start out by posting about it on LW, but not that many people read LW anymore so after that I can help you out by leveraging my position in the community". If they didn't know me, or another insider, they'd have a lot harder time even figuring out what they needed to do. "The rationalist network" is composed of a large number of people and sites, scattered over Tumblr blogs, Facebook groups and profiles, various individual blogs, and so on. If you want to speak to the whole network, you can't just make a post on LW anymore. Instead you need to spend time to figure out who the right people are, get to know them, and hope that you either get into the inner circle, or that enough insiders agree with your message and take up spreading it. Heck, even though I count myself as "an insider", I've also frequently wanted a way to specifically address the "rationalist community" about various topics, and then not knowing how. I mean, a lot of people in the community read my Facebook posts so I could just post something on Facebook, but that's not quite the same thing.
7Paul Crowley4yI don't think you can say both "The codebase is not that bad" and "I am personally without the coding skill to implement the specific changes". If I don't have the skills to fix a codebase, I'm pretty handicapped in assessing it. I might still manage to spot some bad things, but I'm in no shape to pronounce it good, or "not that bad".
4Elo4yClarification: I am not a coder any more. I had skill in a few languages but I can't code any more; mostly I Frankenstein my own arduino projects out of other people's projects. This means I can now read code and understand it; but not write it. It's not that bad because I read every line of the codebase to get my head around how it works. It's not that bad because when I was trying to explain a fix I could come up with the code for it: https://github.com/tricycle/lesswrong/issues/574 [https://github.com/tricycle/lesswrong/issues/574] I just can't check my work or create a pull request. It's not that bad in that it still definitely works fine, and does not crash very often and doesn't have security leaks despite having an open code base and is readable to someone with very little code skill.
0Viliam4yFor a person familiar with Python, reading most of the code, and even suggesting changes is relatively easy. It's just running the whole code on their own computer that is almost impossible. But that means that when you write the code, you can't see it in action, which means you can't test it, which means that if you made a trivial error, you cannot find it and fix it. You can't debug your code, you can't print the intermediate values; you get zero feedback for what you did. Which means that the contribution is practically useless... unless someone else who can run the whole code on their computer will look at your code and finish it. If you need multiple iterations of this, then a work that would be otherwise done in an afternoon may take weeks. That's inconvenience far beyond trivial.
5John_Maxwell4yI'm disappointed that Elo's comment hasn't gotten more upvotes. He put a lot of work into fixing LW, and it seems to me that we should be very eager to listen & learn from him. (I'm also disappointed that rayalez [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2p] 's comments are being ignored. His previous comment [http://lesswrong.com/lw/not/revitalizing_less_wrong_seems_like_a_lost_purpose/dbot] about his project was at -1 until I upvoted it. Seeing this kind of thing makes me cynical. Sometimes it seems like status in the LW community is more about who you know than what you've accomplished or what you're doing for the community.) Arbital seems like the least half-arsed effort at fixing LW thus far. Maybe we should converge around advising Alexei & team?
8gjm4yI think it's got rather a lot of upvotes. It's also got rather a lot of downvotes. I suspect they are almost all from the same person.
1Vaniver4yHmm, in that if you forget the name but remember an example from the post, you won't be able to search for it, because the LW page [http://lesswrong.com/r/discussion/lw/o2w/crony_beliefs/] only has the title and comments, as opposed to the full text?
4Elo4yyes, and if someone wants to join and get up to speed by reading the sequences and general discussion posts on LW, they won't ever read crony beliefs because it was not posted here other than by link post.
2Vaniver4yIt seems to me like durable concepts are referred to frequently, and the typical behavior is to link to the source when using a jargony term, so I'm not too worried about that.

I think this is completely correct, and have been thinking along similar lines lately.

The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth tracking actually is the norm.

The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.

Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all "arguments you want the community to be aware of" to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise "single point of coordination-marked" posts to LW.)

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)

These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I've posted comments online yesterday, it's hard for me to focus today--there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.

Lots of people have noticed that online discussions sap... (read more)

6adamzerner4yRegarding the idea that online discussion hurts attention span and productivity, I agree for the reasons you say. The book Deep Work [http://calnewport.com/books/deep-work/] (my review [https://docs.google.com/document/d/1T2mLnuz02t6YaPpoN3brpn2nLSuMq6FOGs1ElD4N-1I/edit?usp=sharing]) talks more about it. I'm not too familiar with the actual research, but my mind seems to recall that the research supports this idea. Time Well Spent [http://www.timewellspent.io/] is a movement that deals with this topic and has some good content/resources.

I think it's important to separate internet time from non-internet time. The author talks about this in Deep Work. He recommends that internet time be scheduled in advance, that way you're not internetting mindlessly out of impulse. If willpower is an issue, try Self Control [https://selfcontrolapp.com/], or going somewhere without internet. I sometimes find it useful to lock my phone in the mailbox downstairs.

I'm no expert, but suspect that LW could do a better job designing for Time Well Spent.

* Remove things on the sidebar like "Recent Posts" and "Recent Comments" (first item on the Time Well Spent checklist [http://www.timewellspent.io/designers]). They tempt you to click around and stay on longer. If you want to see new posts or comments, you could deliberately choose to click on a link that takes you to a new webpage that shows you those things, rather than always having them shoved in your face.
* Give users the option of "only be able to see things in your inbox once per day". That way, you're not tempted to constantly be checking it. (Second item on the checklist; letting users disconnect.)
* I think it'd be cool to let people display their productivity goals on their profile. Eg. "I check LW Tuesday and Thursday nights, and Sunday mornings. I intend to be working during these hours." That way perhaps you won't feel obligated to respond to people when you should be working. Furthermore, ther
6Evan_Gaensbauer4yI've been thinking about Patri's post for a long time, because I've found the question puzzling. The friends of mine who feel similarly to Patri are ones who look to rationality as a tool for effective egoism/self-care, entrepreneurship insights, and lifehacks. They're focused on individual rationality, and improved heuristics for improving things in their own life fast. Doing things by yourself allows for quicker decision-making and tighter feedback loops. It's easier to tell sooner whether what you're doing works. That's often referred to as instrumental rationality, and the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community, which can go on to form project teams and build intellectual movements, was instrumental rationality. It's just taken longer to tell if that's succeeded.

Patri's post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is an organization which, along with Superintelligence, is responsible for boosting AI safety to the mainstream. FLI was founded by community members whose meeting originated on LessWrong, so that's value added to advancing AI safety that wouldn't have existed if LW never started. CFAR didn't exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn't get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, like how much money is moved not only through GiveWell, but by a foundation with an endowment over $9 billion.

What I've read is the LW community aspiring to do better than science is currently done in new ways, or to apply rationality to new domains and make headway on your goals. Impressive progress has been made on many community goals.
2G Gordon Worley III4yI tend to find discussions in comments unhelpful, but enjoy discussions spread out over responding posts. If someone takes the time to write something of length and quality sufficient that they are willing to write it as a top-level post to their blog/etc., then it's more often worth reading to me. My time is valuable, comments are cheap, so I'd rather read things the author invested thought in writing. (I recognize the irony that I'm participating in this discussion right now, but this particular discussion seems an unusually good chance to spread my thinking on this topic.)
2John_Maxwell4yIf anyone wants to collaborate in tackling the focus problem, send me a personal message [http://lesswrong.com/message/compose/?to=John_Maxwell_IV] with info on how to contact you. Maybe we can get some kind of randomized trial going.
4SatvikBeri4yI agree that there should be much more active enforcement of good norms, rather than heavy-handed moderation (banning etc.). I have a cached thought that lack of such enforcement was a significant part of why I lost interest in lesswrong.com, though I don't remember specific examples. Completely agree. One particularly important mechanism, IMO, is that brains tend to pay substantially more attention to things they perceive other humans caring about. I know I write substantially better code when someone I respect will be reviewing it in detail, and that I have trouble rousing the same motivation without that.
3SatvikBeri4yThinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen. I suspect one of the reasons people have moved discussions to their own blogs or walls is because they feel like they actually can affect the norms there. Unofficial status works (cf. Eliezer, Yvain) but is not very scalable -- it requires people willing to spend a lot of time writing content as well as thinking about, discussing, and advocating for community norms.

I think you, Ben, Sarah etc. committing to posting here makes a lesswrong revival more likely to succeed, and I would place even higher odds if 1 or more people committed to spending a significant amount of time on work such as:

* Clarifying what type of content is encouraged on less wrong, and what belongs in discussion vs. main
* Writing up a set of discussion norms that people can link to when saying "please do X"
* Talking to people and observing the state of the community in order to improve the norms
* Regularly reaching out to other writers/cross-posting relevant content, along with the seeds of a discussion
* Actually banning trolls
* Managing some ongoing development to improve site features

Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.

One idea that I had, that I still think is good, is essentially something like the Sunshine Regiment. The minimal elements are:

  1. A bat-signal where you can flag a comment for attention by someone in the Sunshine Regiment.

  2. That shows up in an inbox of everyone in the SR until one of them clicks an "I've got this" button.

  3. The person who took on the post writes an explanation of how they could have written the post better / more in line with community norms.

The basic idea here is that lots of people have the ability to stage these interventions / do these corrections, but (a) it's draining and not the sort of thing that a lot of people want to do more than X times a month, and (b) not the sort of thing low-status but norm-acclimated members of the community feel comfortable doing unless they'r... (read more)
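The flag-and-claim workflow sketched in those three elements can be made concrete. The following is a minimal illustrative sketch only, not an actual LW feature; all class and method names are hypothetical:

```python
class SunshineQueue:
    """Sketch of the Sunshine Regiment workflow: a flagged comment sits in
    every member's shared inbox until one member clicks "I've got this"."""

    def __init__(self, members):
        self.members = set(members)   # Sunshine Regiment roster (hypothetical)
        self.unclaimed = set()        # flags visible in everyone's inbox
        self.claimed = {}             # comment_id -> member who took it on

    def flag(self, comment_id):
        """Anyone can raise the bat-signal on a comment."""
        if comment_id not in self.claimed:
            self.unclaimed.add(comment_id)

    def inbox(self, member):
        """Unclaimed flags show up for every member of the SR."""
        return sorted(self.unclaimed) if member in self.members else []

    def claim(self, member, comment_id):
        """'I've got this': removes the flag from everyone else's inbox.
        The claimer then writes the explanation described in step 3."""
        if member in self.members and comment_id in self.unclaimed:
            self.unclaimed.discard(comment_id)
            self.claimed[comment_id] = member
            return True
        return False
```

The design choice worth noting is the shared inbox with a single-claim step: it spreads the draining work across many people while guaranteeing that each flagged comment gets exactly one responder.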

9sarahconstantin4yHappy to join Sunshine Regiment if you can set it up.

Also happy to join. And I'm happy to commit to a significant amount of moderation (e.g. 10 hours/week for the next 3 months) if you think it's useful.

5AnnaSalamon4yYes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content. Probably too meta / too clever an idea, but may be worth some individual brainstorms?
3Evan_Gaensbauer4yI've been using the Effective Altruism Forum more frequently than I have LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. One thing is that effective altruism is mediated primarily through in-person communities and social media. So, most of the drama occurring in EA occurs there, and works itself out before it gets to the EA Forum. Still, though, the EA Forum seems to have a high level of quality content, but without as much active moderation necessary.

The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition and transhumanism, all that and every other cause in EA is game for the EA Forum.[1] From my perspective, though, it's far and away host to the highest-quality content in the EA community.

So, if anyone else here also finds that to be the case: what makes EA unlike LW in not needing as many moderators on its forum? (Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA conduces people to post more detailed writing.)

[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.
4steven04614ySome guesses:

* The EA forum has less of a reputation, so knowing about it selects better for various virtues
* Interest in altruism probably correlates with pro-social behavior in general, e.g. netiquette
* The EA forum doesn't have the "this site is about rationality, I have opinions and I agree with them, so they're rational, so I should post about them here" problem

Quick note: Having finally gotten used to using discussion as the primary forum, I totally missed this post as a "promoted" post and would not have seen it if it hadn't been linked on Facebook, ironically enough.

I realize this was an important post that deserved to be promoted in any objective sense, but am not sure promoting things is the best way to do that by this point.

3TheAltar4yHaving the best posts be taken away from the area where people can easily see them is certainly a terrible idea architecture-wise. The solution to this is what all normal subreddits do: sticky the post and change the color of its title so that it both stands out and stays in the same visual range as everything else.
2Vaniver4yPromoting posts gets them into the RSS feed. Making it possible to promote Discussion posts, or having promoted posts appear in Discussion also, or some other similar approach seems worthwhile.
1CronoDAS4yI follow the Discussion RSS feed but stopped following the Main RSS feed after Main shut down.
4Vaniver4yAccording to Feedly, 96 users are following the discussion RSS [http://feedly.com/i/discover/sites/search/http://lesswrong.com/r/discussion/new/.rss] and 11k are following the Main RSS [http://feedly.com/i/subscription/feed/http://lesswrong.com/.rss]. (Feedly is probably not the only place I should be checking to compare those two, but the effect size seems pretty huge. The main problem is missing people who actually check the website every day, but go to discussion/new instead of all/new.)
0Raemon4yHmm. Maybe as a short-term solution (until we figure out a way to promote individual Discussion posts while keeping them in Discussion), for posts like this: a) create a stub post on Main, which mostly says "we have an important thing to say, check it out in Discussion"; b) maybe also make a post on Main saying "Main is now deprecated. Apart from major announcements, all stuff will be in Discussion now. Consider updating your RSS. We're also seeing a lot of old-timers return to post these days, check it out." Etc.
2Vaniver4yI don't think this will happen with a sufficiently large number of people to make that a good option. I think my current best plan is to keep the sitewide RSS as having only promoted posts, but including promoted posts in Discussion. We can also advertise the Discussion RSS a bit more heavily, but I don't know how many people will want to do that relative to just checking LW.
0CronoDAS4yI don't use Feedly.
1Raemon4yGotcha. Agreed. Do you have any sense of how big a change that is? Sometime after Solstice I can hopefully dedicate more time to hacking on Less Wrong.
1Vaniver4yI haven't looked at the code that generates the subreddit pages, so not really. It seems like it'd likely be a one-line change in an eligibility function somewhere, but finding that line seems rough.

Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.

Prediction: If they do, we will see a substantial pickup in discussion here. If they don't, we won't.

People go where the content is. The diaspora left LW a ghost town not because nobody liked LW but because all the best content -- which is ever and always created by a relatively small number of people -- went elsewhere. I read SSC, and post on SSC, not because it is better than LW (it's not, its interface makes me want to hit babies with concrete blocks) but because that's where Yvain writes. LW's train wreck of a technical state is not as much of a handicap as it seems.

I like LW-ish content, so I approve of this effort -- but it will only work to the extent that the Royals return.

There are lots of diverse opinions here, but you are not going to get anywhere just by talking. I recommend you do the following:

  1. Get together a small "LW 2.0" committee that has the authority to make serious changes
  2. Have committee members debate possible changes and hash out a plan. General community members should have a place to voice their feedback, but shouldn't get a vote per se.
  3. Once the plan is decided, implement it. Then reconvene the committee every 3 or 6 months to review the status and make incremental fixes.

To say it in a different way: success or failure depends much more on building and empowering a small group of dedicated individuals, than on getting buy-in from a large diffuse group of participants.

This is being done.

2casebash4yCan you tell us who the committee members are?
2John_Maxwell4yThis is how most companies work: there are employees of the company working full-time on making users as happy as possible. (In this case, I'd guess the users to focus on are users who have a history of making valuable contributions.)

Thanks for addressing what I think is one of the central issues for the future of the rationalist community.

I agree that we would be in a much better situation if rationalist discussion were centralized, and that we are instead in a tragedy of the commons -- more people would post here if they knew that others would. However, I contend that we're further from that desired equilibrium than you acknowledge. Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

  1. Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
  2. Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. This is mostly because of (1). (Although posts on LW do reliably get at least a handful of comments and views)
  3. Comments on LessWrong are more critical and less polite than comments on other sites.
  4. Posts on LessWrong are held in lower regard in academic communities like ML and policy than posts elsewhere, including on Medium.

The incentive that pushes in our fav... (read more)

Thoughts on RyanCarey's problems list, point by point:

Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

Not sure all of them are "problems", exactly. I agree that incentive gradients matter, though.

Comments on the specific "problems":

1 Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.

Insofar as 1 is true, it seems like a genuine and simple bug that is probably worth fixing. Matt Graves is I believe the person to talk to if one has ideas or $ to contribute to this. (Or the Arbital crew, insofar as they're taking suggestions.)

2 Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. [snip]

The extent to which this is a bug depends on the extent to which posts are aimed at "going viral" / getting shared. If our aim is intellectual generativity, then we do want to attract the best minds of the internet to come think with us, and that does require sometimes having posts g... (read more)

5Vaniver4yYep, message me about this, either here or by email (this username at gmail).
8AnnaSalamon4y(ii) seems good, and worth adding more hands and voices to; it seems to me we can do it in a distributed fashion, and just start adding to LW and going for momentum, though. sarahconstantin and some others have in fact been doing something like (ii), and it was I suspect a partial cause of e.g. this post of mine, and of:

* By paulchristiano:
  * http://lesswrong.com/lw/o5y/if_we_cant_lie_to_others_we_will_lie_to_ourselves/
  * http://lesswrong.com/lw/o5h/less_costly_signaling/
* By Benquo:
  * http://lesswrong.com/r/discussion/lw/o43/matching_donation_fundraisers_can_be_harmfully/
  * http://lesswrong.com/lw/o56/sample_means_how_do_they_work/
* By sarahconstantin:
  * http://lesswrong.com/lw/o5m/industry_matters_2_partial_retraction/
  * http://lesswrong.com/lw/o55/vestibular_stimulation_and_fat_loss/
  * http://lesswrong.com/lw/o41/on_trying_not_to_be_wrong/

Efforts to add to (ii) would I think be extremely welcome; it is a good idea, and I may do more of it as well. If anyone reading has a desire to revitalize LW, reading some of these or other posts and adding a substantive (or appreciative) comment is another way to encourage thoughtful posting.

I also support (ii) and have been trying to recruit more good bloggers.

I'll note that good writers tend to be low on "civic virtue" -- creative work tends to cut against that as a motivation. I'm still trying to think of good ways to smooth the incentive gradient for writers.

One possibility is to get some people to spend a weekend together -- rent a place in Big Sur or something -- and brainstorm/hype up some LW-specific ideas together, which will be posted in real time.

5Vaniver4yThis sounds like an excellent idea.
1RyanCarey4yI agree that this is great. I meant to propose something even more specific. Using for example a Google Form, you collect a list of people who agree to post on LW if and only if that list surpasses 200 names. Once it gets to 200, you email everybody and tell them LW is relaunching. Do I think it'd work? Maybe.
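The proposal above is an assurance contract: each person commits to posting only if enough others do, and everyone is notified at once when the list clears the bar. A minimal sketch, with purely illustrative names and a configurable threshold (the comment suggests 200):

```python
class PledgeDrive:
    """Sketch of the proposed sign-up mechanism: collect names (e.g. via a
    form), and once the count passes the threshold, email everybody that
    LW is relaunching. Names and defaults here are hypothetical."""

    def __init__(self, threshold=200):
        self.threshold = threshold
        self.pledges = set()

    def pledge(self, email):
        """Add a name; returns True iff this pledge tips the drive over."""
        already_triggered = self.triggered()
        self.pledges.add(email)
        return self.triggered() and not already_triggered

    def triggered(self):
        return len(self.pledges) >= self.threshold

    def emails_to_notify(self):
        """Once over threshold, everyone on the list gets the relaunch email."""
        return sorted(self.pledges) if self.triggered() else []
```

The point of the conditional commitment is to solve the coordination problem directly: no one has to post into a ghost town first, because the commitment only binds once the critical mass exists.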
4TheAltar4yA separate action that could be taken by bloggers who are interested in it (especially people just starting new blogs) is to continue posting where they do, but disable comments on their posts and link people to corresponding LW link post to comment on. This is far less ideal, but allows them to post elsewhere and to have the comments content appear here on LW.
9sarahconstantin4yThis is a nontrivial cost. I'm considering it myself, and am noticing that I'm a bit put off, given that some of my (loyal and reflective) readers/commenters are people who don't like LW, and it feels premature to drag them here until I can promise them a better environment. Plus, it adds an extra barrier (creating an account) to commenting, which might frequently lead to no outside comments at all. A lighter-weight version of this (for now), might be just linking to discussion on LW, without disabling blog comments.
1FeepingCreature4yWould you use the LW comments section if it was embeddable, like Disqus is?
0casebash4yI thought ii) had been discussed in the past and was supposedly happening, but nothing ever came from it.

Brian Tomasik's article Why I Prefer Public Conversations is relevant to:

I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed). Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)

I might have missed it, but reading through the comment thread here I don't see prominent links to past discussions. There's LessWrong 2.0 by Vaniver last year, and, more recently, there is LessWrong use, successorship, and diaspora. Quoting from the section on rejoin conditions in the latter:

A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write ins for conditions to rejoin, what did people say they'd need to rejoin the site?

(links to rejoin condition write-ins)

Feel free to read these

... (read more)

The impression I form based on this is that the main blocker to LessWrong revitalization is people writing sufficiently attractive posts. This seems to mostly agree with the emerging consensus in the comments, but the empirical backing from the survey is nice. Also, it's good to know that software or interface improvements aren't a big blocker.

As for what's blocking content creators from contributing to LessWrong, here are a few hypotheses that don't seem to have been given as much attention as I'd like:

  1. Contributing novel content becomes harder as people's knowledge base and expectations grow: Shooting off a speculative missive no longer works in 2016 the way it might have worked in 2011 -- people have already seen a lot of the basic speculation, and need something more substantive to catch their attention. But the flip side is that something that's truly substantive is going to require a lot of work to research and write, and then even more work to simplify and explain elegantly. This problem is stronger on LessWrong because of the asymmetric nature of rewards. On Facebook, you can still shoot off a speculative missive -- it's your own Facebook post -- and you won't get blasted
... (read more)
9John_Maxwell4yI compiled some previous discussion here [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di0p] , but the troll downvoted it below visibility (he's been very active in this thread). Crazy idea to address point #2: What if posts were made anonymously by default, and only became nonymous once they were upvoted past a certain threshold? This lets you take credit if your post is well-received while lessening the punishment if your post is poorly received.
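The anonymous-by-default rule in that idea is simple enough to state as code. A sketch only; the threshold value and names are hypothetical, not part of the proposal:

```python
def display_author(author, score, reveal_threshold=10):
    """Sketch of 'anonymous by default': a post's author is hidden until
    the post is upvoted past a threshold, so the author can take credit
    for a well-received post while a poorly received one stays anonymous.
    The threshold of 10 is purely illustrative."""
    return author if score >= reveal_threshold else "[anonymous]"
```

For example, under this rule a post at +3 would display as "[anonymous]", while the same post at +12 would display its author's name.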
3VipulNaik4yWhoops, sorry for missing that. Upvoted, hopefully it gets to zero and resurfaces.
0Lumifer4ySometimes that's deliberate. It is well known that the best way to get teh internets to explain things to you is not to ask for an explanation, but to make a confident though erroneous claim.

It is well known that the best way to get teh internets to explain things to you is not to ask for an explanation, but to make a confident though erroneous claim.

I've noticed you using this strategy in the past. It makes me frustrated with you, but I want to uphold LW's norms of politeness in conversation, so I grit my teeth through the frustration and politely explain why you're wrong. This drains my energy and makes me significantly less enthusiastic about using LW.

Please stop.

4Lumifer4yI don't make deliberately erroneous claims (unless I'm trolling, which happens very rarely on LW and is obvious). I sometimes make claims without describing my confidence in them which, I think, is pretty normal. Offering an observation or an assertion up for debate so that it may be confirmed or debunked is one of the standard ways of how conversations work. I am not sure what you want me to do. My comments are already peppered with "I think...", and "seems to me...", and other expressions like that. Would you like me to make no errors? I would gladly oblige if only you show me how.
4John_Maxwell4yI'll try to give you more specific feedback if I get frustrated by your comments again in the future.
6VipulNaik4yIt could also be a good way for the Internets to give up on trying to talk in a forum where you are around.
0btrettel4yAccording to 538's survey [http://fivethirtyeight.com/features/we-asked-8500-internet-commenters-why-they-do-what-they-do/] more people reported that they comment to fix errors than anything else. This doesn't mean that you're wrong, though, because it doesn't seem 538 asked why people stop commenting (based on my skim of the article; feel free to correct me).
0Lumifer4yWhy would teh internets be scared by the presence of lil' ol' me? I am very ignorable and have no desire to sealion. Not wanting to talk to me is perfectly fine.
6Jacobian4yBecause we're talking about the quality of discussion on LW and how to encourage people to post more good stuff. Whether or not you're OK with people ignoring your trollishness, trollishness lowers the quality of discussion and discourages people from posting. If you persist at it, you are choosing personal gain (whether provocation or learning stuff) over communally beneficial norms. And you're not "lil' ol' me" when you're in the top 5 of commenters month in and month out. "Feel free to ignore me" IS sealioning, because when people react to you in a way you didn't want (for example, they get angry or frustrated) you accept no blame or responsibility for it. The first comment I got to a post about empathy and altruism was you telling me [http://lesswrong.com/lw/nzw/putanumonit_discarding_empathy_to_save_the_world/dg14] that my recommendation leads to kulaks, ghettos and witch burning (I'm being uncharitable, but not untruthful). If I am then discouraged from posting new stuff, will you say that it's entirely my fault for being too sensitive and not ignoring you?
1g_pepper4yBy the same token, doesn't being in the top 5 of commentators regularly suggest that a person is not really too far outside of community norms? IMO there is a difference between trolling and blunt but rational commentary, and the example you linked to above (involving kulaks and the like) is blunt but rational commentary (and frankly, it was not excessively blunt); there is a good case to be made for emotional human empathy acting as a check on utilitarianism running awry. The 20th century provides several examples of utopian projects ending badly, and it seems to me useful to ask if removing emotional empathy from the moral calculation is a good idea. IMO, that is a false dichotomy - (being discouraged from posting new stuff vs. ignoring disagreeing posts). A third option is to read the disagreeing post, think about it, respond to it if you deem doing so worthwhile, and move on, while recognizing that divergent viewpoints exist. My fear is that if comments like Lumifer's Kulak comment are discouraged for fear of discouraging future postings, LW is at risk of becoming an echo chamber.
3Jacobian4yAs you've noticed in that thread, I didn't cry that Lumifer offended me. I replied to his comment and we ended up having a semi-productive discussion on empathy, coercion and unintended consequences. If bringing that specific example up reads as concern trolling on my part, I apologize. I wanted to make a more general point: I do recognize that there's a trade off to be made between criticism and niceness, both of which are needed for a good discussion. I'm also OK if you think LW is too nice and the comments should be harsher. The directness of criticism is one of my favorite things about LW, along with overall commitment to free speech. But I also care about practical outcomes on discussion quality, not abstract ideology. I think that there's an important distinction between the following two positions: 1. "I made a blunt comment because I judged that criticism is more important than niceness in this specific case." 2. "I made a blunt comment and niceness is not my concern at all, because other people are free to ignore me." I think that an environment where people hold #1 produces better discussion. And unless I'm corrected, it seems like Lumifer espouses #2.
1g_pepper4yYes I did notice. That is why that particular exchange was a great example of how one need neither ignore nor be discouraged by a comment like Lumifer's kulak comment; instead, allow the comment to engender a useful dialog. No, I don't think that. I really like the quality of the comments on LW, that is why I come here. However, I think that Lumifer's comments are within the range of LW community norms. One thing I like about LW is that there exists a diversity of commenting styles just as it has a diversity of viewpoints on various subjects. An example of another high-karma commentator with a style (and opinions) that are quite different from Lumifer's is gjm. IMO both commentators make thoughtful, valuable contributions to LW, albeit their styles are quite different; I think that LW benefits from both commentators' styles and opinions, and the distinct styles and opinions of many others as well. Note that I am in favor of community norms, but I feel that Lumifer's comments are within those norms. IMO, Lumifer is not in category 2. Using the kulak comment again as an illustrative example, it seems to me that the comment was in no way a personal attack on you or anyone else and was not what I would classify as "not nice". It seems to me that the specific examples he chose did bring clarity to the discussion in a way that voicing an abstract objection or a less extreme example would not have. IMO Stalin's dekulakization (which is I suppose what Lumifer was referring to) really is the sort of thing that can happen more easily when an idealized (albeit flawed) utilitarian goal is pursued in the absence of emotional empathy. In short, I suspect that the examples were selected because they effectively made the point that Lumifer intended to make rather than because Lumifer was trying to offend or troll.
1Lumifer4yI don't accept that I'm trollish. Trolling is basically pushing buttons to get an emotional reaction, mostly for the lulz. I'm not interested in triggering emotional reactions in people two screens away from me and LW isn't a fertile ground for the lulz, anyway. I will confess to the propensity to make my arguments forcefully. I count it as a feature and not a bug. I will also confess to liking extreme and exaggerated examples -- the reason is that in some situations I want to trade off nuance against clarity and obviousness. As to discouraging people from posting, I do want to discourage people from posting low-quality stuff. I see nothing wrong with that. Generally speaking, yes. I am not your mother, your nanny, or your mentor and making sure you're emotionally comfortable is not one of my duties. I also reject the popular political correctness / social justice notion that it's sufficient for the listener to claim offense (or some other variety of victimhood) to put all the responsibility/blame on the speaker. I wouldn't put it in terms of "fault" and I don't know about you personally, but yes, I think that some chunk of the LW population is too thin-skinned and would greatly benefit from a dose of HTFU. Note, though, that I don't consider it my obligation to go out of the way to provide that dose (see above re being a nanny). I just don't think that being particularly thin-skinned gives you any special rights.
2Jacobian4yI'm pretty sure I didn't write anything to suggest that the blame is all the speakers', and yet you seem to have read it this way. Who's responsible for this misunderstanding? I hope we can both agree that the responsibility is shared between speaker and listener, it can't work any other way in a dialogue when both people alternate roles. And when you write something in direct criticism of someone (and not some general statement), you are engaged in dialogue. Now it also seems to me that "political correctness/SJ culture" is basically a pejorative on LW, but I'll take your word that you're not trying to push buttons by comparing me to them. Instead I'll just remind you that reversed stupidity is not intelligence, and being careless about offending people is not correlated with truth seeking. I support the Buddhist Victorian Sufi [http://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/] standard of SSC, and kindness is 33% of that standard.
1Lumifer4yBoth are responsible for the misunderstanding, but only one of them is responsible for his own anger and frustration. I agree. But note that "not correlated" is different from "negatively correlated". As in "being very careful to not offend people is negatively correlated with truth-seeking" :-P I like the SSC standard, too, but notice that it's very flexible and can be bent into many different shapes :-/ And, of course, once in a while Yvain declares a reign of terror.
2gjm4yWho said anything about scared? Or for that matter about you? Someone in the habit of making confident erroneous claims may start to get ignored for being a blowhard even if no one is scared of them.
0Lumifer4yHere [http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/digw] : :-) And, as I mentioned, I'm perfectly fine with being ignored.
0gjm4yAh. Fair enough, but some other people contemplating using the same technique might be less so.
0Lumifer4yFeel free to point out to those some other people their shortcomings, then. I hope you don't think I'm a role model, do you now? X-)
0gjm4yI don't really believe in role models. Anyway, I wasn't intending to point out any person's shortcomings; I was agreeing with VipulNaik's misgivings about the technique. (To be more concrete, "doing X may get you ignored as a blowhard" is a criticism of doing-X, not a criticism of someone who either does X or contemplates doing X.)
1Lumifer4ySure, one might come across as a blowhard. But one might also come across as someone who can be persuaded by evidence to change his mind without a lot of kicking and screaming. This is really about reputation management in an online community, a complicated topic.
0Loiathal4yI've kind of lurked around LW for the last year, reading (mostly) older posts with a few newer ones mixed in. Reading those two threads you linked has me sitting at my desk with a pretty intense wistfulness for a community I never met.

I think we need to put our money and investment where our mouths are on this. Either Less Wrong (or another centralized discussion platform) is very valuable and worth tens of thousands of dollars in investment and moderation, or it is not that important and not worth it. It seems that every time we have a conversation about Less Wrong and its importance, the problem is that we expect everyone to do things on a volunteer basis and assume things will just magically get going again. It seems like Less Wrong was going great back when there was active and c... (read more)

6RyanCarey4yThe list of plausibly worthwhile changes that would help to revitalize LessWrong is long: 1. redesigning LW's appearance 2. cleaning up the codebase 3. forming a new moderation team 4. producing a bunch of new content 5. removing the main/discussion distinction 6. choosing one or more people to take full leadership of the project 7. (maybe) recentering the list of topics for discussion to include more about EA, tech or politics 8. (maybe) allowing more links, rather than just posts 9. rebranding. x) getting many people to join at once. Effort might be superlinear here - once you commit to a few, you might just want to bite the bullet and build a new damned site. That's going to cost time and dollars - maybe hundreds of thousands, but if it's what has to be done...

I know that there have been several attempts at reviving Less Wrong in the past, but these haven't succeeded because a site needs content to thrive, and generating high-quality content is both extremely hard and extremely time-intensive.

I agree with Alexandros that Eliezer's ghost is holding this site back - you need to talk to Eliezer and ask if he would be willing to transfer control of this site to CFAR. What we need at the moment is clear leadership, a vision and resources to rebuild the site.

If you produced a compelling vision of what Less Wrong shoul... (read more)

I just want Eliezer to write stuff again. All I see from him now are Facebook posts.

I agree with your comments on small intellectually generative circles and wonder if the optimal size there might not be substantially smaller than LW. It's my sense that LW has been good for dissemination, but most of the generation of thoughts has been done in smaller IRL circles. A set of people more selected for the ability and will to focus on the problem you describe in 1-3, if gathered in some internet space outside LW, might be able to be a lot more effective.

Here's an opinion on this that I haven't seen voiced yet:

I have trouble being excited about the 'rationalist community' because it turns out it's actually the "AI doomsday cult", and never seems to get very far away from that.

As a person who thinks we have far bigger fish to fry than impending existential AI risk - like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision-making process, or how climate change or war might destroy our relatively... (read more)

Being a member of this community seems to require buying into the AI-thing, and I don't, so I don't feel like a member.

I don't think that it's true that you need to buy into the AI-thing to be a member of the community, and so I think that it seems that way is a problem.

But I think you do need to be able to buy into the non-weirdness of caring about the AI-thing, and that we may need to be somewhat explicit about the difference between those two things.

[This isn't specific to AI; I think this holds for lots of positions. Cryonics is probably an easy one to point at that disproportionately many LWers endorse but is seen as deeply weird by society at large.]

0[anonymous]4yIf you're going to downvote this, at least say why. (Hm, I just learned that Lesswrong doesn't let you delete comments? That's strange.)

I think I disagree with your conclusion here, although I'd agree with something in its vicinity.

One of the strengths of a larger community is the potential to explore multiple areas in moderate amounts of depth. We want to be able to have detailed conversations on each of: e.g. good epistemic habits; implications of AI; distributions of cost-effectiveness; personal productivity; technical AI safety; ...

It asks too much for everyone to keep up with each of these conversations, particularly when each of them can spawn many detailed sub-conversations. But if ... (read more)

6AnnaSalamon4yIt seems to me that for larger communities, there should be both: (a) a central core that everyone keeps up on, regardless of subtopical interest; and (b) topical centers that build in themselves, and that those contributing to that topical center are expected to be up on, but that members of other topical centers are not necessarily up on. (So folks contributing to a given subtopical center should be expected to keep up with both that subtopic and the central canon.) It seems to me that (a) probably should be located on LW or similar, and that, if/as the community grows, the number of posts within (a) can remain capped at some "keep-up-withable" number, with quality standards rising as needed.
3owencb4yYour (a) / (b) division basically makes sense to me.[*] I think we're already at the point where we need this fracturing. However, I don't think that the LW format makes sense for (a). I'd probably prefer curated aggregation of good content for (a), with fairly clear lines about what's in or out. It's very unclear what the threshold for keeping up on LW should be. Also, I quite like the idea of the topical centres being hosted in the same place as the core, so that they're easy to find. [*] A possible caveat is dealing with new community members nicely; I haven't thought about this enough so I'm just dropping a flag here.
2Ben Pace4yAlso it makes it easy for mods to enforce the distinction. Instead of "I think this post and discussion is not suited for this place, could you delete it and take it elsewhere?" it can just be "This should actually be over in sub-forum X, so I've moved it there."
5owencb4yIn general if we don't explicitly design institutions that will work well with a much larger community, we shouldn't be surprised if things break down when the community grows.

It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.

Most humans, and most of us, appreciate being associated with prestigious groups, and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or about LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of r... (read more)

As someone who is actively doing something in this direction at Map and Territory, a couple thoughts.

A single source is weak in several ways. In particular, although it may sound nice and convenient from the inside, no major movement that affects a significant portion of the population has a single source. It may have its seed in a single source, but it is spread and diffuse and made up of thousands of voices saying different things. There's no one place to go for social justice or neoreaction or anything else, but there are lots of voices saying lots of thi... (read more)

9SatvikBeri4y100% centralization is obviously not correct, but 100% decentralization seems to have major flaws as well–for example, it makes discovery, onboarding, and progress in discussion a lot harder. On the last point: I think the LW community has discovered ways to have better conversations, such as tabooing words. Being able to talk to someone who has the same set of prerequisites allows for much faster, much more interesting conversation, at least on certain topics. The lack of any centralization means that we're not building up a set of prerequisites, so we're stuck at conversation level 2 when we need to achieve level 10.
2Jacobian4yThat's a good point, but I also want to offer that I don't personally see this as a huge problem for LW. Maybe it's because I'm a latecomer, but I never really cared or kept track of who was high status on LW. First of all, I imagine that a lot of the status hierarchy is settled in real-life interactions and not by counting karma. We're all in Eliezer's shadow anyway. I just want LW to be great again. I don't mind donating money to a small group of people who will take responsibility for making it great again. I certainly don't mind letting this small group get glory and status, especially if getting paid in status will get us a discount on the monetary cost :)

I disagree with #1 and #2, and I don't identify as a rationalist (or for that matter, much as a member of any community), but I think it is true that Less Wrong has been abandoned without being replaced by anything equally good, and that is a sad thing. In that sense I would be happy to see attempts to revive it.

I definitely disagree with the comment that SSC has a better layout, however; I think people moved there because there were no upvotes and downvotes. The layout for comments there is awful, and it has a very limited number of levels, which after a few comments prevents you from responding directly to anything.

7Ben Pace4yGonna chip in a +1 regarding SSC's comment system. There are good comments, but this seems in spite of the comment mechanism, not because.

Eh, one thing I've noticed about SSC is a number of deeply bad comments, which I don't think I've seen on LW. Yes, there are also good comments, but I can imagine someone five years ago looking at the state of SSC commenting now and saying "and this is why we need to ban politics" instead of seeing it as a positive change.

We have lately ceased to have a "single conversation" in this way.

Can we hope to address this without understanding why it happened?

What are y'all's theories of why it happened?

9John_Maxwell4yOne thought that occurs to me re: why this discussion tends to fail, and why Less Wrong has trouble getting things done in general, is the forum structure. On lots of forums, contributing to a thread will cause the thread to be "bumped", which gives it additional visibility. This means if a topic is one that many people are interested in, you can have a sustained discussion that does not need to continually be restarted from scratch. Which creates the possibility of planning out and executing a project. (I imagine the linear structure of an old school forum thread is also better for building up knowledge, because you can assume that the person reading your post has already read all the previous posts in the thread.) A downside of the "bump" mechanic is that controversial threads which attract a lot of comments will receive more attention than they deserve. So perhaps an explicit "sticky" mechanic is better. (Has anyone ever seen a forum where users could vote on what posts to "sticky"?)

#1: the general move of the internet away from blogs and forums and towards social media.

In particular, there seems to be a mental move that people make, that I've seen people write about quite frequently, of wanting to avoid the more "official"-seeming forms of online discussion, and towards more informal places. From blogging to FB, from FB to Tumblr and Twitter, and thence to Snapchat and other stuff I'm too old for. Basically, people say that they're intimidated to talk on the more official, public channels. I get a sense of people feeling hassled by unfriendly commenters, and also a sense of something like "kids wanting to hang out where the grownups aren't", except that the "kids" here are often adults themselves. A sense that you'll be judged if you do your honest best to write what you actually believe, in front of people who might critique it, and so that it's safer to do something that leaves you less exposed, like sharing memes.

I think the "hide, go in the darkness, do things that you can't do by daylight" Dionysian kind of impulse is not totally irrational (a lot of people do have judgmental employers or families) but it's really counterproductive to discourse, which is inherently an Apollonian, daylight kind of activity.

3steven04614yTo me, the major advantage of social media is they make it easy to choose whose content to read. A version of LW where only my 25 favorite posters were visible would be exciting where the current version is boring. (I don't think that's a feasible change, but maybe it's another data point that helps people understand the problem.)
7Evan_Gaensbauer4yYou can already do this. If you click on a user's profile, there will be a little box in the top right corner. Click on the button that says "add to friends" there. When you "friend" someone on LessWrong, it just means you follow them. If you go to www.lesswrong.com/r/friends, there's a feed with submissions from only the other users you're following.
3steven04614yCool, thanks, but it looks like that's posts only, not comments.
2AnnaSalamon4yIgnoring the feasibility question for a minute, I'm confused about whether it would be desirable (if feasible). There are some obvious advantages to making it easy for people to choose what to read. And as a general heuristic, making it easy for people to do things they want to do seems usually good/cooperative. But there are also strong advantages to having common knowledge of particular content/arguments (a canon; a single thread of assumed "yes, that's okay to assume and build on"); and making user displays individual (as e.g. Facebook does) cuts heavily against that. (I realize you weren't talking about what was all-things-considered desirable, only about what feels exciting/boring.)
1steven04614yThat seems an important set of concerns, but also I'm not sure how much people are letting lack of canonicity bother them in choosing what to cite and reply to, and popular content will become canon through other mechanisms than the front page, and the more canon there exists, the harder it will be to take it as common knowledge. User-picked content is to some extent also compatible with canon, e.g. through social pressure to read a general "best of" feed. (Just to be clear, though, I don't think this is probably the way we should go / the best use of resources.)
2Morendil4yYes, and this would be a general trend - affecting all community blogs to some extent. I was looking for an explanation for the downfall of LessWrong specifically, but I suppose it's also interesting to consider general trends. Would you say that LessWrong is particularly prone to this effect, and if so because of what properties?

Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI, Anna started running CFAR, various others started to work on those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.

One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger, members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior, I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.

But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.

As for why this is a problem for LW specifically, I would probably point at age. The full explanation is too long for this comment, and so may become a post, but the basic idea is that 'career consolidation' is a developmental task that comes before 'generativity', or focusing mostly on shepherding the next generation, which comes before 'guardianship', or focusing mostly on preserving the important pieces of the past.

The community seems to have mostly contracted because people took the correct step of focusing on the next stage of their development, but because there hadn't been enough people who had finished previous stages of their development, we didn't have enough guardians. We may be able to build more directly, but it might only work the long way.

7Alexei4ySounds interesting. I'd love to read the post.
8kechpaja4yTo expand on what sarahconstantin said, there's a lot more this community could be doing to neutralize status differences. I personally find it extremely intimidating and alienating that some community members are elevated to near godlike status (to the point where, at times, I simply cannot read e.g. SSC or anything by Eliezer — I'm very, very celebrity-averse). I've often fantasized about a LW-like community blog that was entirely anonymous (or nearly so), so that ideas could be considered without being influenced by people's perceptions of their originators (if we could solve the moderation/trolling problem, that is, to prevent it from becoming just another 4chan). A step in the right direction that might be a bit easier to implement would be to revamp the karma system so that the number of points conferred by each up or down vote was inversely proportional to the number of points that the author of the post/comment in question had already accrued. The thing is, in the absence of something like what I just described, I'm skeptical that it would be possible to prevent the conversation from quickly becoming centered around a few VIPs, with everyone else limited to commenting on those individuals' posts or interacting with their own small circles of friends.
5Morendil4yI'd be wary of post hoc ergo propter hoc in this context. You might also have expected that by leaving for other projects these posters would create a vacuum for others to fill. It could be worth looking at why that didn't happen.

One interesting thing is that at one point post-Eliezer, there were two "rising stars" on LW who were regularly producing lots of fascinating content: lukeprog and So8res. Both stopped regularly posting here some time after they were recruited by MIRI and their priorities shifted.

1G Gordon Worley III4yThis is why I very much like Medium. I think of it as Twitter for people who want to write/read long things rather than short things. It's also much nicer than Twitter in my experience.
4SatvikBeri4yMy theory is that the main things that matter are content and enforcement of strong intellectual norms, and both degraded around the time a few major high-status members of the community mostly stopped posting (e.g. Eliezer and Yvain.) The problem with lack of content is obvious, the problem with lack of enforcement is that most discussions are not very good, and it takes a significant amount of feedback to make them better. But it's hard for people to get away with giving subtle criticism unless they're already a high-status member of a community, and upvotes/downvotes are just not sufficiently granular.
8Morendil4yThis feels like a good start but one that needs significant improvement too. For instance, I'm wondering how much of the situation Anna laments is a result of LW lacking an explicit editorial policy. I for one never quite felt sure what was or wasn't relevant for LW - what had a shot at being promoted - and the few posts I wrote here had a tentative aspect to them because of this. I can't yet articulate why I stopped posting, but it may have had something to do with my writing a bunch of substantive posts that were never promoted to Main. If you look at the home page only (recent articles in Main) you could draw the inference that the main topics on LessWrong are MIRI, CFAR, FHI, "the LessWrong community", with a side dish of AI safety and startup founder psychology. This doesn't feel aligned with "refining the art of human rationality", it makes LessWrong feel like more of a corporate blog.

Agree that a lot more clarity would help.

Assuming Viliam's comment on the troll is accurate, that's probably sufficient to explain the decline: http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2n

SSC linked to this LW post (here http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/ ). I suspect it might be of some use to you if I explain why I'm interested in reading and commenting on SSC but not very much on LW.

First of all, the blog interface is confusing, more so than regular blogs or sub-reddits or blog-link-aggregators.

Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW does not have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is g... (read more)

0Vaniver4yThanks for sharing! I appreciate the feedback but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some points are true. Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW [http://lesswrong.com/top/], the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding. (His views have since changed [https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit] , a document I think is worth reading in full.) And the Sequences themselves are rarely if ever directly about AI risk; they're more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, instead of telling them "no, they're different" it's easier to talk about what intelligence is in detail and talk about what morality is in detail and then they say "oh yeah, those are different." And if you're just curious about intelligence and morality, then you still end up with a crisper model than you started with! I think one of the reasons I consider the Sequences so successful as a work of philosophy is because it keeps coming back to the question of "do I understand this piece of mental machinery well enough to program it?", which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or how to judge various approaches to ideas.) I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents. For the first half, I think considering this involves becoming more precise about 'healthiest'. 
On the one hand, LW's reputation has a lot of black spots, and those basically can't be washed off, but on the other hand, it doesn't seem like r
1nimim-k-m4yI grant that I was speaking from memory; the previous time I read the LW stuff was years ago. The MIRI and CFAR logos up there did not help.

I oversee a list of Facebook groups, so if there's any way I can help support this, please let me know, along with your arguments: https://www.facebook.com/EffectiveGroups/

Here's some intuitions I have:

  1. It will be really hard to work against the network effects and ease of Facebook, but I think its social role should be emphasised instead. Likewise for the EA Forum, though maybe it can take on a specific role, like being friendlier to new people / more of a place to share information and make announcements.

  2. If you position LW as setting the gold standard of conversation

... (read more)

I realize I haven't given a direct answer yet, so here it is: I'm in, if I'm wanted, and if some of the changes discussed here take place. (What it would take to get me onboard is, at the least, an explicit editorial policy and people in charge of enforcing it.)

[This comment is no longer endorsed by its author]

Others have made these points, but here are my top comments:

  • The site was best when there was a new, high-quality post from a respected community member every day or two.
  • The ban on politics means that a lot of interesting discussion migrates elsewhere, e.g. to Scott's blog.
  • The site's current structure - posts vs. comments - seems dated. I'd like to try something like discourse.org.

I'm up for doing this, because I think you're right; I notice that commenting/posting on LessWrong has less draw for me than it did in 2011/2012, but it's also much less intimidating, which seems useful.

An interesting discussion on HN -- not about LW but about Reddit -- which still offers useful commentary on what HN people expect from a "conversational locus".

Given the community's initial heavy interest in heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.

My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth trying anyway by the current prominent rationalists. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious that the result won't work the same as the old Less Wrong.

Finally, a reminder on Less Wrong history, which sugge... (read more)

3ChristianKl4yI don't buy that account of the history as being complete. Many people in the rationality community have contact with other communities that also have a higher prevalence of polyamory. The vegan community also has a higher share of polyamorous people.
0itaibn04yPerhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that more details would likely reveal many nonrational reasons for the change. Still, I suspect rational persuasion did play a role, if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it that much" to "sounds plausible but I haven't tried it". In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.

I think the Less Wrong website diminished in popularity because of the local meetups. Face to face conversation beats online conversation for most practical purposes. But many Less Wrongers have transitioned to being parents, or have found more professional success so I'm not sure how well the meetups are going now. Plus some of the meetups ban members rather than rationally explaining why they are not welcome in the group. This is a horrible tactic and causes members to limit how they express themselves... which goes against the whole purpose of rationality meetups.

My 2 cents: we are not at a stage where we can have a useful singular discussion. We need to collect evidence about how agents can or cannot be implemented before we can start to have a single useful discussion. Each worldview needs its own space.

My space is currently my own head and I'll be testing my ideas against the world, rather than other people in discussion. If they hold up I'll come back here.

I've known about Less Wrong for about two full years. A few weeks ago I started coming here regularly. A week ago I made an account -- right before this post and others like it.

My own poetic feeling is there is a change in the winds, and the demand for a good community is growing. SSC has no real community. Facebook is falling apart with fake news and awful political memes. People are losing control of their emotions w.r.t politics. And calm scientific rationalist approaches are falling apart.

I deactivated my FB, made an account here, and have done my best... (read more)

[This comment is no longer endorsed by its author]

I am working on a project with a similar purpose, and I think you will find it interesting:

http://metamind.pro

It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.

If you find it interesting and can offer some feedback - I would really appreciate it!

[This comment is no longer endorsed by its author]