EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg

Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It's nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc.), which inoculates us against that kind of noise.

Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don't address the community's best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.

I'm sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don't address that, and no one really calls them out. I could dig up a bunch of quotes but I don't want to single anyone out or make this about any particular point, so I'm leaving it up to your imagination/memory.

It's actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that's actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging with mistaken posts, but that lowers the quality of discussion here, because so much time and space gets spent battling ignorance instead of advancing knowledge and discussing real problems.

It's worse than just "oh here's some more junk I have to ignore or downvote", because the path of least resistance ends up being "ignore any discussion that contains contradictions of the lesswrong scriptures", which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it's quite hard to tell the difference between that and someone explaining for the millionth time the problem with "but won't the AI know what's right better than humans?". So I just ignore it all and miss a lot of good stuff.

Right now, the only stuff I can be reasonably sure is intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I'd like there to be something similar for discussion/comments. Some way of knowing "these people I'm talking to know what they are talking about" without having to dig around in their user history or whatever. I'm not proposing a particular solution here, just saying I'd like there to be more high-quality discussion among properly sequenced LWers.

There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking "damn, I wish we were more like that". A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?

I've recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I've also seen a good deal of complaints about our phygish exclusivity, the politics ban, the "talk to me when you read the sequences" attitude, and so on. I'd just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.

Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.

Our Phyg Is Not Exclusive Enough

I've lurked here for over a year and just started posting in the fan fic threads a month ago. I have read a handful of posts from the sequences and I believe that some of those are changing my life. Sometimes when I start a sequence post I find it uninteresting and I stop. Posts early in the recommended order do this, and that gets in the way every time I try to go through in order. I just can't be bothered because I'm here for leisure and reading uninteresting things isn't leisurely.

I am noise and I am part of the doom of your community. You have my sympathy, and also my unsolicited commentary:

Presently your community is doomed because you don't filter.

Noise will keep increasing until the community you value splinters, scatters, or relocates itself as a whole. A different community will replace it, resembling the community you value just enough to mock you.

If you intentionally segregate based on qualifications your community is doomed anyway.

The qualified will stop contributing to the unqualified sectors, will stop commending potential qualifiers as they approach qualification, and will stop driving out never-qualifiers with disapproval. Noise will win as soon as something ... (read more)

I suspect communities have a natural life cycle and most are doomed. Either they change unrecognisably or they die. This is because the community members themselves change with time and change what they want, and what they want and will put up with from newbies, and so on. (I don't have a fully worked-out theory yet, but I can see the shape of it in my head. I'd be amazed if someone hasn't written it up.)

What this theory suggests: if the forum has a purpose beyond just existence (as this one does), then it needs to reproduce. The Center for Modern Rationality is just the start. Lots of people starting a rationality blog might help, for example. Other ideas?

4Armok_GoB
This is a good idea if and only if we can avoid summoning Azathoth.
3TheOtherDave
You seem to be implying here that LW's purpose is best achieved by some forum continuing to exist in LW's current form. Yes? If so, can you expand on your reasons for believing that?
2David_Gerard
No, that would hold only if one thinks a forum is the best vehicle. It may not even be a suitable one. My if-then does assume a further "if" that a forum is, at the least, an effective vehicle.

(nods) OK, cool.

My working theory is that the original purpose of the OB blog posts that later became LW was to motivate Eliezer to write down a bunch of his ideas (aka "the Sequences") and get people to read them. LW continues to have remnants of that purpose, but less and less so with every passing generation.

Meanwhile, that original purpose has been transferred to the process of writing the book I'm told EY is working on. I'm not sure creating new online discussion forums solves a problem anyone has.

As that purpose gradually becomes attenuated beyond recognition, I expect that the LW forum itself will continue to exist, becoming to a greater and greater extent a site for discussion of HP:MoR, philosophy, cognition, self-help tips, and stuff its users think is cool that they can somehow label "rational." A small group of SI folks will continue to perform desultory maintenance, and perhaps even post on occasion. A small group of users will continue to discuss decision theory here, growing increasingly isolated from the community.

If/when EY gets HP:MoR nominated for a Hugo award, a huge wave of new users will appear, largely representative of science-fictio... (read more)

3wedrifid
And, more precisely
5David_Gerard
Like NaNoWriMo or thirty things in thirty days (which EY indirectly inspired) - giving the muse an office job. Except, of course, being Eliezer, he made it one a day for two years.
0FourFire
I'm responding to congratulate you on your correct prediction. I see this account hasn't been active in over four years.

If anyone does feel motivated to post just bare links to sequence posts, hit one of the Harry Potter threads. These seem to be attracting LW n00bs, some of whom seem actually pretty smart - i.e., the story is working as intended.

Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.

I can understand people wanting that. If the goal is to spread this information, however, I'd suggest that those wanting to be part of an Inner Circle should go Darknet, invitation only, and keep these discussions there, if you must have them at all.

As someone who has been around here maybe six months and comes every day, I have yet to drink enough Kool-Aid not to find ridiculous elements to this discussion.

"We are not a Phyg! We are not a Phyg! How dare you use that word?" Could anything possibly make you look more like a Phyg than tabooing the word, and karmabombing people who just mention it? Well, the demand that anyone who shows up should read a million words in blog posts by one individual, and agree with most all of it before speaking does give "We are not... (read more)

imagine yourself at a new site that had some interesting material, and then coming on a discussion like this.

I'm amused by the framing as a hypothetical. I'm far from being an old-timer, but I've been around for a while, and when I was new to this site a discussion like this was going on. I suspect the same is true for many of us. This particular discussion comes around on the guitar like clockwork.

4buybuydandavis
What impression did it leave you with?

In my case it left the impression that (a) this was an Internet forum like any other I've been on in the past seventeen years (b) like all of them, it behaved as though its problems were unique and special, rather than a completely generic phenomenon. So, pretty much as normal then.

BTW, to read the sequences is not to agree with every word of them, and when I read all the rest of the posts chronologically from 2009-2011 the main thing I got from it was the social lay of the land.

(My sociology is strictly amateur, though an ongoing personal interest.)

9buybuydandavis
This is hardly my first rodeo, but this place is unlike any others I've been on for exactly the point at issue here - the existence of a huge corpus written overwhelmingly by one list member that people are expected to read before posting and relate their posts to. The closest I've come to such attitudes was on two lists: one Objectivist, one Anarchist. On the Objectivist list, where there was a little bit of "that was all answered in this book/lecture from Rand", people were not at all expected to have read the entire corpus before participating. Rand herself was not participating on the list, so there is another difference. The Anarchist list was basically the list of an internet personality who was making a commercial venture of it, so he controlled the terms of the debate as suited his purposes, and tabooed issues he considered settled. Once that was clear to me, I left the site, considering it too phygish. I'd imagine that there are numerous religious sites with the same kind of reading/relating requirements, but only a limited number of those where the author of the corpus was a member of the list.

To LW's credit, "read the sequences" as a counterargument seems increasingly rare these days. I've seen it once in the last week or two, but considering that we're now dealing with an unusually large number of what I'll politely describe as contrarian newcomers, I'll still count that as a win.

In any case, I don't get the sense that this is an unknown issue. Calls for good introductory material come up fairly often, so clearly someone out there wants a better alternative to pointing newcomers at a half-million words of highly variable material and hoping for the best -- but even if successful, I suspect that'll be of limited value. The length of the corpus might contribute to accusations of phygism, but it's not what worries me about LW. Neither is the norm of relating posts to the Sequences.

This does give me pause, though: LW deals politely with intelligent criticism, but it rarely internalizes it. To the best of my recollection none of the major points of the Sequences have been repudiated, although in a work of that length we should expect some to have turned out to be demonstrably wrong; no one bats a thousand. A few seem to have slipped out of the de-facto canon... (read more)

What can we do about this?

Reply not with "read the sequences", but with "This is covered in [link to post], which is part of [link to sequence]"? Use one of the n00b-infested Harry Potter threads, with plenty of wrong but not hopeless reasoning, as target practice.

6buybuydandavis
I think that you've got a bigger problem than internalizing repudiations. The demand for repudiations is the mistake Critical Rationalists make - "show me where I'm wrong" is not a sufficiently open mind.

First, the problem might be that you're not even wrong. You can't refute something that's not even wrong. When someone is not even wrong, he has to be willing to justify his ideas, or you can't make progress. You can lead a horse to water, but you can't make him think. (As an aside, is there an article about Not Even Wrong here? I don't remember one, and it is an important idea with which a lot are probably already familiar. Goes well with the list name, too.)

Second, if one is only open to repudiations, one is not open to fundamentally different conceptualizations of the issue. The mapping from one conceptualization to another can be a tedious and unproductive exercise, if even possible in practical terms.

I've spent years on a mailing list about Stirner - likely The mailing list on Stirner. In my opinion, Stirner has the best take on metaethics, and even if you don't agree, there are a number of issues he brings up better than others. A lot of smart folks on that list, and we made some limited original progress. Stirner is near the top of the list of things I know better than others. People who would know better are likely people I already know in a limited fashion. I thought to write an article from that perspective, contrasting it with points in the Metaethics sequence. But I don't think the argument in the Metaethics sequence really follows, and contemplating an exegesis of it to "repudiate" it fills me with a vast ennui. So it's Bah Humbug, and I don't contribute.

Whatever you might think of me, setting up impediments to people sharing what they know best is probably not in the interest of the list. There's enough natural impediment to posting an article in a group; always easier to snipe at others than put your own ideas up for target practice. Ther... (read more)
1Nornagest
Not that I know of, although it's referenced all over the place -- like Paul Graham's paper on identity, it seems to be an external part of the LW canon. The Wikipedia page on "Not Even Wrong" does appear in XiXiDu's list of external resources -- a post that's faded into undeserved obscurity, I think. As to your broader point, I agree that "show me where I'm wrong" is suboptimal with regard to establishing a genuinely open system of ideas. It's also a good first step, though, and so I'd view a failure to internalize repudiation as a red flag of the same species as what you seem to be pointing to -- a bigger one, in fact. Not sufficient, but necessary.
0buybuydandavis
Certainly if you have been repudiated, but fail to internalize the repudiation, you've got a big red flag. But that's why I think it's less dangerous and debilitating - it's clear, obvious, and visible. I consider only listening to repudiations the bigger problem: it is being willfully deaf and nonresponsive to potential improvement. It's not failing to understand; it's refusing to listen.
2khafra
In that case, Lukeprog's metaethics sequence must have been of great comfort to you, since he didn't really spend much time on Eliezer's metaethics sequence. Perhaps you could just start covering Stirner's material in a discussion post or two and see what happens.
4[anonymous]
Just curious, was the anarchist Fgrsna Zbylarhk?
3buybuydandavis
Ding! Ding! Ding! We have a winner! Yeah, that's the one. I don't begrudge a guy trying to make a buck, or wanting to push his agenda. I find him a bright guy with a lot of interesting things to say. And I'll still listen to his YouTube videos. But his agenda conflicts with mine, and I don't want to spend energy discussing issues in a community where one isn't allowed to publicly argue against some dogma in philosophy. That which can be destroyed by the truth should be.
2[anonymous]
Oooh, what's my prize? Yup, I pretty much agree with your assessment. It was quite the interesting rabbit hole to go down. But at least for me, it became anti-productive and unhealthy. I found much better uses of my time.
-8Alsadius
2David_Gerard
That's an important difference, but I don't think it matters for the social issues being raised in this post or this thread, which are issues of community interaction - and I think so because they're the same issues covered in A Group Is Its Own Worst Enemy. This post is precisely the call for a wizard smackdown.
3TheOtherDave
I was going to say essentially this, but the other David did it for me.
2buybuydandavis
I'm sure. What I wonder is how much the sequences even represent a consensus of the original list members involved in the discussion. In my estimation, it varies a lot. In particular, I doubt EY carried the day with even a strong plurality with both his conclusions and argument in the metaethics sequence.
2wedrifid
I doubt even Eliezer_2012 would agree with all of them. They were a rather rapidly produced bunch of blog posts and very few people would maintain consistent endorsement of past blogging output.
3[anonymous]
Hmm. I generally agree with the original post, but I don't want to be part of an inner circle. I want access to a source of high insight-density information. Whether or not I myself am qualified to post there is an orthogonal issue. Of course, such a thing would have an extremely high maintenance cost. I have little justification for asking to be given access to it at no personal cost. Spreading information is important too, but only to the extent that what's being spread is contributing to the collective knowledge.
-2buybuydandavis
Which is yet another purpose that involves tradeoffs with the ones I previously mentioned. I'm puzzled why you think a private email list involves extremely high maintenance costs. A private Google group? A technological solution to the bulk of the problem on this list wouldn't seem that hard either. As I've pointed out in other threads, complex message filtering has been around at least since usenet. Much of the technical infrastructure must already be in place, since we have personally customizable filtering based on karma and Friends. Or add another karma filter for the poster's total karma, so that you don't even have to enter Friends by hand. Combine Poster Karma with Post Karma with an inclusive OR, and you've probably gone 80% of the way there to being able to filter unwanted noise.
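The inclusive-OR filter proposed here is simple enough to state precisely. A minimal sketch in Python, with hypothetical comment fields and threshold values (nothing below reflects LW's actual code):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    author_karma: int  # the poster's total karma
    points: int        # this comment's own score

def passes_filter(c: Comment, min_author_karma: int = 500, min_points: int = 2) -> bool:
    # Inclusive OR: keep the comment if either the poster's total karma
    # or the comment's own score clears its threshold.
    return c.author_karma >= min_author_karma or c.points >= min_points

comments = [
    Comment("veteran", author_karma=4000, points=-1),   # kept: trusted poster
    Comment("newbie", author_karma=10, points=5),       # kept: well-received comment
    Comment("drive_by", author_karma=10, points=0),     # hidden: clears neither bar
]
visible = [c for c in comments if passes_filter(c)]
print([c.author for c in visible])  # ['veteran', 'newbie']
```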
3[anonymous]
Not infrastructural costs. Social costs (and quite a bit of time, I expect). It takes effort to select contributors and moderate content, especially when those contributors might be smarter than you are. Distinguishing between correct contrarianism and craziness is a hard problem. The difficulty is in working out who to filter. Dealing with overt trolling is easy. I change my opinions often enough over a long enough period of time that a source of 'information that I agree with' is nearly useless to me.
2buybuydandavis
I think I get it. You want someone/something else to do the filtering for you? That's easy enough too. If others are willing, instead of being Friended, they could be FilterCloned, and you could filter based on their settings. Let EY be the DefaultFilterClone, or let him and his buddies in the Star Chamber set up a DefaultFilterClone.
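The FilterClone idea is a one-level delegation on top of the same filter. A sketch under the same hypothetical setup as above (all names and settings invented for illustration):

```python
# Hypothetical delegated filtering: a user either has their own settings
# or points at another user (their "FilterClone") whose settings they inherit.
filter_settings = {
    "EY": {"min_author_karma": 1000, "min_points": 5},  # the DefaultFilterClone
    "buybuydandavis": {"min_author_karma": 200, "min_points": 1},
}
filter_clone_of = {"lurker42": "buybuydandavis"}  # lurker42 delegates to this user

def effective_settings(user: str) -> dict:
    # Follow the delegation pointer if present; otherwise use the user's own
    # settings, falling back to the site-wide default.
    target = filter_clone_of.get(user, user)
    return filter_settings.get(target, filter_settings["EY"])

print(effective_settings("lurker42"))  # buybuydandavis's thresholds
```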
0[anonymous]
Not exactly 'want'. The nature of insights is that they are unexpected. But essentially yes.
[-]brilee250

[meta] A simple reminder: This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.

In line with the above, please be liberal with explanations as to why you think an opinion should be downvoted. Going through the thread and mass-downvoting every post you disagree with is not helpful. [/meta]

This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let's try to combat that.

The post came across to me as an explicit call to such, which is rather stronger than "has a high potential".

[-]Larks250

I agree. Low barriers to entry (and utterly generic discussions, like on which movies to watch) seem to have lowered the quality. I often find myself skimming discussions for names I recognize and just reading their comments - ironic, given that once upon a time the anti-kibitzer seemed pressing!

Lest this be seen as unwarranted arrogance: there are many values of p in [0,1] such that I would run a p risk of getting personally banned in return for removing the bottom p of the comments. I often write out a comment and delete it, because I think that, while above the standard of the adjacent comments, it is below what I think the minimal bar should be. Merely saying new, true things about the subject matter is not enough!

The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.

The Sequence Re-Runs seem to have had little participation, which is disappointing - I had great hope for those.

As someone who is rereading the sequences, I think I have a data point as to why. First of all, the "one post a day" pace is very difficult for me to keep. I don't have time to digest a LW post every day, especially if I've got an exam coming up or something. Secondly, I joined the site after the effort started, so I would have had to catch up anyway. Thirdly, ideally I'd like to read at a faster average rate than one per day. But this hasn't happened at all; my rate has actually been rather slower, which is kind of depressing.

9hesperidia
I've actually been running a LW sequence liveblog, mostly for my own benefit during the digestive process. See here. I find myself wondering whether others will join me in the liveblogging business sooner or later. I find it a good way to enforce actually thinking about what I am reading.
1EStokes
What I did personally was read through them relatively quickly. I might not have understood them at the same level of depth, but if something is related to something in the sequences then I'll know, and I'll know where I can find the information if there's anything I've forgotten.
7atorm
I read them, but engaging in discussion seems difficult. Am I just supposed to pretend all of the interesting comments below don't exist and risk repeating something stupid on the Repeat post? Or should I be trying to get involved in a years-old discussion on the actual article? Sadly, this is something that has a sort of activation energy: if enough people were discussing the sequence repeats, I would discuss them too.
6Viliam_Bur
Perhaps we could save users one click by putting the summary of the article on the top of the main page with links "read the article" and "discuss the article" below. Sometimes saving users one click increases the traffic significantly.
2[anonymous]
Organizing the reading of the sequences into classes of people (think Metaethics Class of 2012) who commit to reading them, debating them, and then answering a quiz about them seems more likely to get participation.
0David_Gerard
I still read them and usually remember to vote them up for MinibearRex bothering to post them, and comment if I have something to say.
[-][anonymous]210

Edit: Eliminated text to conform to silly new norm. Check out relevant image macro.

It's whimsical, I like it. The purported SEO rationale behind it is completely laughable (really, folks? People are going to judge the degree of phyggishness of LW by googling LW and phyg together, and you're going to stand up and fight that? That's just insane), but it's cute and harmless, so why not adopt it for a few days? Of all reasons to suspect LW of phyggish behavior, this has got to be the least important one. If using the word "phyg" clinches it for someone, I wouldn't take them seriously.

6John_Maxwell
To avoid guilt by association?
5Bugmaster
Beats me. And yet I find myself going along with the new norm, just like you. One of us... One of us...
7Eugine_Nier
Well stop it. We should be able to just call a cult a cult.
0Bugmaster
Dur? I think you might have quoted the wrong person in your comment above. Edit: Retracting my comment now that the parent is fixed.
3Eugine_Nier
Fixed. Stupid clipboard working differently on windows and linux.

Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step. Minus the discussion about definitions of the word phyg, what exactly are we talking about?

When a tree falls down in a LessWrong forest, here is why there is a "sound":

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

Because people on LW pretend they know some things better than everyone else, and that's an open challenge that someone should go and kick their butts, preferably literally. Only strong or popular people are ... (read more)

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

No, that isn't it. LW isn't at all special in that respect - a huge number of specialized communities exist on the net which talk about "crazy stuff", but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that's not really warranted.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

No, that isn't it. Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird. The Khan Academy is challenging the scho... (read more)

It's not the Googleability of "phyg". One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong's potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, after seeing this detailed exploration of how to destroy semiconductor manufacture to head off uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I'm not sure this succeeded in reassuring them.)

But, y'know, if you don't want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.

Edit: Technically just sabotage, not "terrorism" per se. Not that that would assuage qualms non-negligibly.

On your last point, I have to cite our all-*cough*-wise Professor Quirrell

"Such dangers," said Professor Quirrell coldly, "are to be discussed in offices like this one, not in speeches. The fools […] are not interested in complications and caution. Present them with anything more nuanced than a rousing cheer, and you will face your war alone.

5[anonymous]
Nevermind that there were no actual plans for destroying fabs, and that the whole "terrorist plot" seems to be a collective hallucination. Nevermind that the author in question has exhaustively argued that terrorism is ineffective.

Yeah, but he didn't do it right there in that essay. And saying "AI is dangerous, stopping Moore's Law might help, here's how fragile semiconductor manufacture is, just saying" still read to someone (including several commenters on the post itself) as bloody obviously implying terrorism.

You're pointing out it doesn't technically say that, but multiple people coming to that essay have taken it that way. You can say "ha! They're wrong", but I nevertheless submit that if PR is a consideration, the damage done by the essay is unlikely to be outweighed by using rot13 for SEO.

1[anonymous]
Yes, I accept that it's a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I'm not saying anything like "Ha! They're wrong!" I'm lamenting the lamentable state of affairs that led so many people to jump to a false conclusion.

"Just saying" is really not a disclaimer at all. c.f. publishing lists of abortion doctors and saying you didn't intend lunatics to kill them - if you say "we were just saying", the courts say "no you really weren't."

We don't have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko's Forbidden Thread), but "just saying" in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.

0[anonymous]
As far as I can tell, "just saying" is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don't recall saying anything about disclaimers, either. So what are you really trying to say here?

It's a name for the style of argument: that it's not advocating people do these things, it's just saying that uFAI is a problem, slowing Moore's Law might help and by the way here's the vulnerabilities of Intel's setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.

Note that I am not intending to claim that the implication was Gwern's original intention (as I note way up there, I don't think it is); I'm saying it's a property of the text as rendered. And that me saying it's a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they're adding 2 and 2 and getting 666.

0[anonymous]
It's completely orthogonal to the point that I'm making. If somebody reads something and comes to a strange conclusion, there's got to be some sort of five-second level trigger that stops them and says, "Wait, is this really what they're saying?" The responses to the essay made it evident that there's a lot of people that failed to have that reaction in that case. That point is completely independent from any aesthetic/ethical judgments regarding the essay itself. If you want to debate that, I suggest talking to the author, and not me.
4David_Gerard
I'd have wondered about it myself if I hadn't had prior evidence that Gwern wasn't a crazy person, so I'm not convinced that it's as obviously surface-innocuous as you feel it is. Perhaps I've been biased by hearing crazy-nerd stories (and actually going looking for them, 'cos I find them interesting). And I do think the PR disaster potential was something I would class as obvious, even if terrorist threats from web forum postings are statistically bogeyman stories. I suspect we've reached the talking past each other stage.
7TheOtherDave
I understood "just saying" as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience's inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn't something you can take for granted.
5Nick_Tarleton
Meaning does not excuse impact, and on some level you appear to still be making excuses. If you're going to reason about impressions (I'm not saying that you should, it's very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.
0[anonymous]
I'm not making excuses. Talking about one aspect of a problem does not imply that other aspects of the problem are not important. But honestly, that debate is stale and appears to have had little impact on the author. So what's the point in rehashing all of that?
2khafra
I agree that it's not fair to blame LW posters for the problem. However, I can't think of any route to patching the problem that doesn't involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.
2Viliam_Bur
Anyway, we shouldn't make it too easy for people to reach the false conclusion, and we should err on the side of caution. Having said this, I join your lamentations.
4jacoblyles
Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don't know where they got that from. Certainly not these pages. Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!") LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
0gwern
Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.
0jacoblyles
Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior. However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution. One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
9gwern
I don't know how you could read LW and not realize that we certainly do accept precautionary principles ("running on corrupted hardware" has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal's mugging in the last week, neither of which says 'you should just bite the bullet'!), and libertarianism is heavily overrepresented compared to the general population. No, one of the 'big black marks' on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There's nothing particular to SIAI/LW there.
3jacoblyles
It's true that lots of Utilitarianisms have corner cases where they support action that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob. The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and convinces them that he's not a crackpot. The fate of not just human civilization, but all of galactic civilization is at stake. So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean. I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with. And yes, there are the occasional bits of cautionary principles on LW. But they are contradicted and overwhelmed by "shut up and calculate", which says trust your arithmetic utilitarian calculus and not your ugh fields.
6TheOtherDave
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization. The question is whether we believe L3, and whether we ought to believe L3. Many of us don't seem to believe this. Do you believe it? If so, why?
8fubarobfusco
I don't expect terrorism is an effective way to get utilitarian goals accomplished. Terrorist groups not only don't tend to accomplish their goals; but also, in those cases where a terrorist group's stated goal is achieved or becomes obsolete, they don't dissolve and say "our work is done" — they change goals to stay in the terrorism business, because being part of a terrorist group is a strong social bond. IOW, terrorist groups exist not in order to effectively accomplish goals, but rather to accomplish their members' psychological needs. "although terrorist groups are more likely to succeed in coercing target countries into making territorial concessions than ideological concessions, groups that primarily attack civilian targets do not achieve their policy objectives, regardless of their nature." — Max Abrahms, "Why Terrorism Does Not Work" "The actual record of terrorist behavior does not conform to the strategic model’s premise that terrorists are rational actors primarily motivated to achieving political ends. The preponderance of empirical and theoretical evidence is that terrorists are rational people who use terrorism primarily to develop strong affective ties with fellow terrorists." — Max Abrahms, "What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy". Moreover, terrorism is likely to be distinctly ineffective at preventing AI advances or uFAI launch, because these are easily done in secret. Anti-uFAI terrorism should be expected to be strictly less successful than, say, anti-animal-research or other anti-science terrorism: it won't do anything but impose security costs on scientists, which in the case of AI can be accomplished much easier than in the case of biology or medicine because AI research can be done anywhere. (Oh, and create a PR problem for nonterrorists with similar policy goals.) As such, L3 is false: terrorism predictably wouldn't work.
8gwern
Yeah. When I run into people like Jacob (or XiXi), all I can do is sigh and give up. Terrorism seems like a great idea... if you are an idiot who can't spend a few hours reading up on the topic, or even just read the freaking essays I have spent scores of hours researching & writing on this very question discussing the empirical evidence. Apparently they are just convinced that utilitarians must be stupid or ignorant. Well! I guess that settles everything.

There's a pattern that shows up in some ethics discussions where it is argued that an action you could actually go out and start doing (so no 3^^^3 dust specks or pushing fat people in front of runaway trains), and which diverges from everyday social conventions, is a good idea. I get the sense from some people that they feel obliged to either dismiss the idea by any means, or start doing the inconvenient but convincingly argued thing right away. And they seem to consider dismissing the idea with bad argumentation a lesser sin than conceding a point or suspending judgment and then continuing to not practice whatever the argument suggested. This shows up often in discussions of vegetarianism.

I got the idea that XiXiDu was going crazy because he didn't see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn't want to do the former just because a philosophical argument told him to and couldn't quite manage the latter.

If this is an actual thing, people with this behavior pattern would probably freak out if presented with an argument for terrorism they weren't able to dismiss as obviously flawed extremely quickly.

1gwern
XiXi was around for a while before he began 'freaking out'.
3TheOtherDave
I think what Risto meant was "an argument for terrorism they weren't able to (dismiss as obviously flawed extremely quickly)", not "people with this behavior pattern would probably freak out (..) extremely quickly". How long it takes for the hypothetical behavior pattern to manifest is, I think, beside their point.
2TheOtherDave
(nods) I do have some sympathy for how easy it is to go from "I endorse X based on Y, and you don't believe Y" to "You reject X." But yeah, when someone simply refuses to believe that I also endorse X despite rejecting Y, there's not much else to say.
3TheOtherDave
Yup, I agree with all of this. I'm curious about jacoblyles' beliefs on the matter, though. More specifically, I'm trying to figure out whether they believe L3 is true, or believe that LW/SI believes L3 is true whether it is or not, or something else.
5gwern
'Pretty sure', eh? Would you care to take a bet on this? I'd be happy to go with a few sorts of bets, ranging from "an organization that used to be SIAI or CFAR is put on the 'Individuals and Entities Designated by the State Department Under E.O. 13224' or 'US Department of State Terrorist Designation Lists' within 30 years" to ">=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years" etc. I'd be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give. If you're worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I'd have to reduce my bet substantially).

Thanks for comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match... and then those which don't.

My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only "weird" is pretty normal. Being "weird + thorough", for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. The majority will make fun of such individuals, they are seen as at the bottom of the pecking order, and they kind of accept it.

The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying "yeah, we read sci-fi about parallel universes, and we know it's not real, ha-ha silly us" and saying "actually, our interpretation of quantum physics is right, and you are wrong, that's the fact, no excuses". This is the part that makes people angry. You are allowed to take the position of authority only if you are... (read more)

-4Pentashagon
If the phyg-meme gets really bad we can just rename the site "lessharmful.com".
7gwern
Seriously?
4Anatoly_Vorobey
Which part of my comment are you incredulous about?
[-]gwern200

That nobody sees self-help books as weird or cultlike.

0John_Maxwell
redacted
0whowhowho
That is one of the central fallacies of LW. The Sequences generally don't settle issues in a step-by-step way. They are made up of postings, each of which is followed by a discussion often containing a lot of "I don't see what you mean" and "I think that is wrong because". The stepwise model may be attractive, but that doesn't make it feasible. Science isn't that linear, and most of the topics dealt with are philosophy... 'nuff said.

I think your post is troubling in a couple of ways.

First, I think you draw too much of a dichotomy between "read sequences" and "not read sequences". I have no idea what the true percentage of active LW members is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that's a pretty large task - especially if you're in school, trying to work a demanding job, etc. I don't wish to speak for you, since you're not clear on the matter, but are people in the process of reading the sequences noise? I'm only in QM, and certainly wasn't there when I started posting, but I've gotten over 1000 karma (all of it on comments or discussion level posts). I'd like to think I've added something to the community.

Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn't want to make all of these points in that thread, since they were off topic to the original. When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion) I'd never been part of a... (read more)

5wedrifid
Get a few more (thousand?) karma and you may find getting karmassassinated doesn't hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable - it happens quite a lot though) and it doesn't bother me all that much. These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don't like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes it indicates something of a judgement from the community and is really damn annoying. On the plus side it can be enough to make me lose interest in lesswrong for a few weeks and so gives me a massive productivity boost! I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
7RobertLumley
I think if I were karmassassinated again I wouldn't care nearly as much, because of how stupid I felt after the first time it happened. It was just so obvious that it was just some idiot, but I somehow convinced myself it wasn't. But that being said, one of the reasons it bothered me so much was that there were a number of posts that I was proud of that were downvoted - the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma. (This was also one of the reasons I wasn't so sure it was karmassassination) Which put a number of posts I liked way below the visibility threshold. And it bothered me that if I linked to those comments later, people would just see a really low karma score and probably ignore it.
3Wei Dai
I think you can't give more downvotes than your karma, so that person would need 5-10 sockpuppets with at least 15-20 (EDIT: actually 4-5) karma each. If someone is going to the trouble of doing that, it seems unlikely that they would just pick on you and nobody else (given that your writings don't seem to be particularly extreme in some way). Has anyone else experience something similar?
4thomblake
Creating sockpuppets for downvoting is easy (kids, don't try this at home). Just find a Wikipedia article on a cognitive bias that we haven't had a top-level post on yet. Then, make a post to Main with the content of the Wikipedia article (restated) and references to the relevant literature (you probably can safely make up half of the references). It will probably get in the neighborhood of 50 upvotes, giving you 500 karma, which allows 2000 comment downvotes. Even if those estimates are really high, that's still a lot of power for little effort. And just repeat the process for 20 biases, and you've got 20 sockpuppets who can push a combined 20 downvotes on a large number of comments. Of course, in the bargain Less Wrong is getting genuinely high-quality articles. Not necessarily a bug.
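The karma arithmetic in this exchange, spelled out. Both conversion rules - ten karma per upvote on a Main post, and a downvote budget of four times your karma - are taken from the thread itself ("50 upvotes, giving you 500 karma"; "Yes, 4x" below), not from any current spec:

```python
import math

MAIN_POST_KARMA_PER_UPVOTE = 10  # one upvote on a Main post was worth 10 karma
DOWNVOTE_BUDGET_RATIO = 4        # up to 4 downvotes per point of karma

upvotes = 50
karma = upvotes * MAIN_POST_KARMA_PER_UPVOTE     # 500
downvote_budget = karma * DOWNVOTE_BUDGET_RATIO  # 2000

# Wei Dai's sockpuppet estimate above: to hit ~15-20 comments once each,
# a single sockpuppet needs ceil(20 / 4) = 5 karma -- hence "actually 4-5".
karma_needed = math.ceil(20 / DOWNVOTE_BUDGET_RATIO)  # 5

print(karma, downvote_budget, karma_needed)  # 500 2000 5
```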
1steven0461
If restating Wikipedia is enough to make for a genuinely high-quality article, maybe we should have a bot that copy-pastes a relevant Wikipedia article into a top-level post every few days. (Based on a few minutes of research, it looks like this is legal if you link to the original article each time, but tell me if I'm wrong.)
1thomblake
Really, I think the main problem with this is that most of the work is identifying which ones are the 'relevant' articles.
0thomblake
I was implying a non-copy-paste solution. Still, interesting idea.
0steven0461
Yes; I didn't mean to say you were implying a copy-paste solution. But if we're speaking in the context of causing good articles to be posted and not in the context of thinking up hypothetical sock-puppeting strategies, whether it's copy-pasted or restated shouldn't matter unless the restatement is better-written than the original.
0thomblake
agreed
0othercriteria
Modulo the fake references, of course.
0thomblake
of course
-2RobertLumley
There's not much reason to do something like this, when you can arbitrarily upvote your own comments with your sockpuppets and give yourself karma.
0thomblake
But then those comments / posts will be correctively downvoted, unless they're high-quality. And you get a bunch more karma from a few posts than a few comments, so do both!
2Eugine_Nier
You can delete them afterwards; you keep karma from deleted posts.
6wedrifid
Let's keep giving the disgruntled script kiddies instructions! That's bound to produce eudaimonia for all!
0RobertLumley
We found one of the sockpuppets, and he had one comment - which added nothing - that was at something like 13 karma. It wasn't downvoted until I was karmassassinated.
3pedanterrific
It's some multiple of your karma, isn't it? At least four, I think - thomblake would know.
1thomblake
Yes, 4x, last I checked.
1wedrifid
I should note that I have never actually been in your shoes. I haven't had any cases where there was unambiguous use of bulk sockpuppets. I've only been downvoted via breadth (up to 50 different comments from my recent history) and usually by only one person at a time (occasionally two or three but probably not two or three that go as far as 50 comments at the same time). That would really mess with your mind if you were in a situation where you could not yet reliably model community preferences (and be personally confident in your model despite immediate evidence.) Take it as a high compliment! Nobody has ever cared enough about me to make half a dozen new accounts. What did you do to deserve that?

It was this thread.

Basically it boiled down to this: I was suggesting that one reason some people might donate to more than one charity is that they're risk averse and want to make sure they're doing some good, instead of trying to help and unluckily choosing an unpredictably bad charity. It was admittedly a pretty pedantic point, but someone apparently didn't like it.

3wedrifid
That seems to be something I would agree with, with an explicit acknowledgement that it relies on a combination of risk aversion and non-consequentialist values.
2RobertLumley
It didn't really help that I made my point very poorly.
2pedanterrific
Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
1[anonymous]
It's great that you are reading the sequences. You are right that it's not as simple as read them -> not noise, not read them -> noise. Since you say you are up to QM, I would expect you not to make the sort of mistakes that would come from not having read the core sequences. On the other hand, if you posted something about ethics or AI (I forget where the AI stuff is chronologically), I would expect you to make some common mistakes and be basically noise. The high barrier to entry is a problem for new people joining, but I also want a more strictly informed crowd to talk to sometimes. I think having a lower barrier to entry overall, but at least one place where having read the material is strictly expected, would be best - but there are problems with that. Don't leave, keep reading. When you are done you will know what I'm getting at.
3RobertLumley
I think it's close to the end, right before/after the fun theory sequence? I've read some of the later posts just from being linked to them, but I'm not sure. And I quite intentionally avoid talking about things like AI, because I know you're right. I'm not sure that necessarily holds for ethics, since ethics is a much more approachable problem from a layperson's standpoint. I spent a three hour car drive for fun trying to answer the question "How would I go about making an AI" even though I know almost nothing about it. The best I could come up with was having some kind of program that created a sandbox and randomly generated pieces of code that would compile, and pitting them in some kind of bracket contest that would determine intelligence and/or friendliness. Thought I'd make a discussion post about it, but I figured it was too obvious to not have been thought of before.
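The sandbox-and-bracket musing can at least be made concrete as a toy. A sketch under drastic simplifying assumptions - tiny random arithmetic expressions standing in for arbitrary code, and error against a fixed target function standing in for any real judgment of intelligence or friendliness; purely illustrative, nothing like a serious AI approach:

```python
import random

OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_program(depth=3):
    # Build a random function of x as a nested arithmetic expression.
    if depth == 0 or random.random() < 0.3:
        const = random.randint(-5, 5)
        return (lambda x: x) if random.random() < 0.5 else (lambda x, c=const: c)
    op = random.choice(OPS)
    left, right = random_program(depth - 1), random_program(depth - 1)
    return lambda x: op(left(x), right(x))

def error(prog, target, points=range(-5, 6)):
    # Lower is better: total deviation from the target on sample inputs.
    return sum(abs(prog(x) - target(x)) for x in points)

def bracket_winner(programs, target):
    # Single-elimination "bracket contest": pairwise matches until one remains.
    while len(programs) > 1:
        programs = [a if error(a, target) <= error(b, target) else b
                    for a, b in zip(programs[::2], programs[1::2])]
    return programs[0]

target = lambda x: x * x + 1
champion = bracket_winner([random_program() for _ in range(16)], target)
print("champion's total error:", error(champion, target))
```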
0David_Gerard
Aside: That sockpuppetry seems to now be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. ("Not as bad as" is a fallacy, but that doesn't mean both things aren't bad.) edit: and now I'm going to ask why this rated a downvote. What does the downvoter want less of? edit 2: fair enough, "accepted" is wrong. I meant that it's a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
6RobertLumley
I think it was downvoted because sockpuppetry wasn't really "accepted" by LW, it was just one guy.
0David_Gerard
Yeah, "accepted" is connotationally wrong - I mean it's observed, and it's hard to do much about it.
0RobertLumley
To what extent does anyone except EY have moderation control over LW?
6Rain
There are several people capable of modifying or deleting posts and comments.
0Viliam_Bur
Ahem, on my side it was a case of bad pattern-matching. When I realized it, I deleted the reply I was writing here, and also removed the downvote. Perhaps you should have explained further why you think sockpuppetry is bad. My original guess was that you were speaking about people having multiple votes from multiple accounts (I was primed by other comments in this thread), and I habitually downvote most comments speaking about karma. But now it seems to me that you are concerned with other aspects, such as anonymity and role-playing. But this is only a guess; I can't see it from your comment.
5David_Gerard
Yeah, bad explanation on my account. I'm not so concerned with roleplay accounts (e.g. Clippy), as with socking up to mass-downvote. (Getting initial karma is very easy.) Socking-up to be a dick to people also strikes me as problematic. I think I mean "observed" rather than "accepted", which implies a social norm.

My $0.02 (apologies if it's already been said; I haven't read all the comments): wanting to do Internet-based outreach and get new people participating is kind of at odds with wanting to create a specialized advanced-topics forum where we're not constantly rehashing introductory topics. They're both fine goals, but trying to do both at once doesn't work well.

LW as it is currently set up seems better optimized for outreach than for being an advanced-topics forum. At the same time, LW doesn't want to devolve to the least common denominator of the Internet. This creates tension. I'm about .6 confident that tension is intentional.

Of course, nothing stops any of us from creating invitation-only fora to which only the folks whose contributions we enjoy are invited. To be honest, I've always assumed that there exist a variety of more LW-spinoff private forums where the folks who have more specialized/advanced groundings get to interact without being bothered by the rest of us.

Somewhat relatedly, one feature I miss from the bad old usenet days is kill files. I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here. Concealing sufficiently downvoted comments is similar in principle, but not reliable in practice.

I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here.

My LessWrong Power Reader has a feature that allows you to mark authors as liked/disliked, which helps to determine which comments are expanded vs collapsed. Right now the weights are set so that if you've disliked an author, then any comment written by him or her that has 0 points or less, along with any descendants of that comment, will be collapsed by default. Each comment in the collapsed thread still has a visible header with author and points and color-coding to help you determine whether you still want to check it out.
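For concreteness, here is a minimal sketch of that collapse rule, with illustrative type and function names I have made up; this is not the Power Reader's actual code:

```typescript
// Hypothetical model of a comment tree; field names are assumptions.
interface CommentNode {
  author: string;
  points: number;
  children: CommentNode[];
}

// The rule described above: a comment by a disliked author at 0 points or
// less is collapsed, and every descendant of a collapsed comment is
// collapsed too, regardless of its own author or score.
function collectCollapsed(
  node: CommentNode,
  dislikedAuthors: Set<string>,
  out: Set<CommentNode>,
  inherited = false
): void {
  const collapsed =
    inherited || (dislikedAuthors.has(node.author) && node.points <= 0);
  if (collapsed) out.add(node);
  for (const child of node.children) {
    collectCollapsed(child, dislikedAuthors, out, collapsed);
  }
}
```

Collapsed comments would still render their headers (author, points, color-coding); only the bodies are hidden.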

6TheOtherDave
(blink) You are my new favorite person. I am, admittedly, fickle.
6John_Maxwell
And for discussion and top-level posts, there is already the friends feature: http://lesswrong.com/prefs/friends/ (You can also add someone as a friend from their user page.) There is something that appeals to me about this "roll your own exclusive forum" approach.
2Bugmaster
I am ashamed to say that I had no idea about the Friends feature. Thanks !
8Percent_Carbon
You're suggesting a strategy of tension? Aw. And they didn't invite nyan_sandwich. That's so sad. He or she should get together with other people who haven't been invited to Even Less Wrong and form their own. Then one day they can get together with Even Less Wrong like some NFL/AFL merger, only with more power to save the world. There would have to be a semaphore or something, somewhere. So these secret groups can let each other know they exist without tipping off the newbs.

There's probably no need for the groups to signal each other's existence.

When a new Secret Even Less Wrong is formed, members of previously formed Secret Even Less Wrongs who are still participating in Less Wrong are likely to receive secret invites to the new Secret Even Less Wrong.

Nyan_sandwich might set up his secret Google Group or whatever, invite the people he feels are worthy and willing to form the core of his own Secret Even Less Wrong, and receive in reply an invite to an existing Secret Even Less Wrong.

That might have already happened!

4TheOtherDave
Nothing nearly that Machiavellian, more of a strategy of homeostasis through dynamic equilibrium.
6Armok_GoB
I have tried, and failed, to launch elitist spinoff subcommunities like that multiple times.
2TheOtherDave
To what do you attribute the failures?
2Armok_GoB
Lack of interest, lack of exposure, lack of momentum.
0cousin_it
LW's period of fastest growth was due to Eliezer's posts that were accessible and advanced (and entertaining, etc.) Encouraging other people to do work like that could be more promising than splitting the goals as you propose.
[-]TimS140

Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I'm interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I've read most of the core posts of LW, but haven't gone through most of the sequences in any rigorous way (i.e. read them in order).

I agree that there seem to be a number of low quality posts in discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I've had several ideas for posts (in Discussion) that I haven't posted because I don't think they meet the community's expected quality standard.

Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the "raising the sanity line" mission. That's not what I want.

[-][anonymous]210

Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus.

No martyrs allowed.

I don't propose simply disallowing people who haven't read everything from being taken seriously, as long as they don't say anything stupid. It's fine if you haven't read the sequences and don't care about AI or heavy philosophy stuff; I just don't want to read dumb posts about those topics from someone who hasn't read the material.

As a matter of fact, I was careful to not propose much of anything. Don't confuse "here's a problem that I would like solved" with "I endorse this stupid solution that you don't like".

4TimS
Fair enough. But I think you threw a wide net over the problem. To the extent you are unhappy that noobs are "spouting garbage that's been discussed to death" and aren't being sufficiently punished for it, you could say that instead. If that's not what you are concerned about, then I have failed to comprehend your message. Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
[-]Emile100

I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all law is humanity's main attempt to formalize morality, so I would expect some overlap with FAI.

I don't mind people who haven't read the sequences, as long as they don't start spouting garbage that's already been discussed to death and act all huffy when we tell them so; common failure modes are "Here's an obvious solution to the whole FAI problem!", "Morality all boils down to X", and "You people are a cult, you need to listen to a brave outsider who's willing to go against the herd like me".

8Vladimir_Nesov
If you're interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of "Don't feed the clueless (just downvote them)" (this post suggests widening the sense of "clueless"), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
4Normal_Anomaly
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into a lack of explaining downvotes of otherwise clueful people. There aren't many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was--or seeing some comment I upvoted be at -2, and not knowing what I'm missing. So I'd like to ask everyone: if you downvote one comment for being wrong, but think the poster isn't hopeless, please explain your downvote. It's the only way to make the person stop being wrong.
3Vladimir_Nesov
Case in point: this discussion currently includes 30 comments, an argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn't have taken place, its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently. I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
6wedrifid
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question. People explaining things to the Clueless is useful, both to the person doing the explaining and to anyone curious enough to read along. This is conditional on the people in the interaction having the patience to decipher the nature of the inferential distance and to break down the ideas into effective explanations of the concepts, including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.) Trying to explain what is usually simply assumed, to a listener who is at least willing to communicate in good faith, can be a valuable experience for the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of ideas back to the reasoning from first principles that caused you to believe them in the first place. There are many conversations where downvoting both sides of a discussion is advisable, yet it isn't conversations with the "Clueless" that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
5TheOtherDave
Startlingly, Googling "Debaters of Perfect Emptiness" turned up no hits. This is not the best of all possible worlds.
0wedrifid
Think "Lawyer", "Politician" or the bottom line.
8TheOtherDave
Sorry, I wasn't clear. I understood perfectly well what you meant by the phrase and was delighted by it. What I meant to convey was that I was saddened to discover that I lived in a universe where it was not a phrase in common usage, which it most certainly ought to be.
0wedrifid
Oh, gotcha. I'm kind of surprised we don't have a post on it yet. Lax of me!
2TimS
I accept your criticism in the spirit it was intended - but I'm not sure you are stating a local consensus rather than your personal preference. Consider the recent exchange I was involved in. It doesn't appear to me that the more wrong party has been downvoted to oblivion, and he should have been by your rule. (Specifically, the Main post has been downvoted, but not the comment discussion.) Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don't agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a "good thing." Note that your example is not that pattern, and I haven't responded to Clueless. C is anti-empiricism, but he hasn't shown anything that makes me think that he has anything valuable to contribute to the community - he's 100% confused. So it isn't worth my time to try to persuade him to be less wrong.
-1Vladimir_Nesov
I'm stating an expectation of a policy's effectiveness.
-1gRR
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet? If the goal was -100, then writing should stop now (prediction).
-2brilee
I'm not the one who downvoted you, but if I were to hazard a guess, I'd say you were downvoted because when you start off by saying "people like me", it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of "people like me", you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms "less wrong" and "more wrong" to describe elites and peasants of LW, but I'm using them to point out the identity you've painted for yourself.) Also, there is /always/ something you can do about a problem. The answer to this particular problem is not "Noobs will be noobs, let's give up".
7TimS
If by "giving up on trying to be less wrong," you mean I'm never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful. Raising the sanity line does not require any of those things.
-2brilee
Don't put up straw men; I never said that to be less wrong, you had to do all those things. "Less wrong" represents an attitude towards the world, not an endpoint.
4TimS
Then I do not understand what you mean when you say I am "giving up on trying to be less wrong"
-2brilee
Could I get an explanation for the downvotes?
-7XiXiDu

I think the barrier of entry is high enough - the signal-to-noise ratio is high, and if you only read high-karma posts and comments you are guaranteed to get substance.

As for forcing people to read the entire Sequences, I'd say rationalwiki's critique is very appropriate (below). I myself have only read ~20% of the Sequences, and by focusing on the core sequences and highlighted articles, have recognized all the ideas/techniques people refer to in the main-page and discussion posts.

The "sequences"[9] are several collated series of Yudkowsky's blog posts, and there are eighteen sequences in all. The indexes for just the four "core sequences"[10] are somewhere north of 10,000 words. Those link to over a hundred and fifty 2,000-3,000-word blog posts. That's about 300,000-450,000 words for those four, and around a million words for the lot.[11] With a few million more words of often-relevant comments. For comparison, the Lord Of The Rings trilogy is 473,000 words.[12] As such, "You should try reading the sequences" is LessWrong for "fuck you."

[-][anonymous]120

You should try reading the other 80% of the sequences.

As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet it makes a lot of the key points EY uses later on identity and decision theory. So most LW readers seem not to have read it.

Suggestion: a straw poll on who's read which sequences.

I've seen enough of the QM sequence and know enough QM to see that Eliezer stopped learning quantum mechanics before getting to density matrices. As a result, the conclusions he draws from QM rely on metaphysical assumptions and seem rather arbitrary if one knows more quantum mechanics. In the comments to this post Scott Aaronson tries to explain this to Eliezer without much success.

0Douglas_Knight
Could you be specific about which conclusions seem arbitrarily based on which metaphysical assumptions?
0Eugine_Nier
I just answered a similar question in another thread here. Note: please reply there so we can consolidate discussions.
9Desrtopa
I've read it, but I took away less from it than any of the other sequences. Reading any of the other sequences, I can agree or disagree with the conclusion and articulate why. With the QM sequence, my response is more along the lines of "I can't treat this as very strong evidence of anything because I don't think I'm qualified to tell whether it's correct or not." Eliezer's not a physicist either, although his level of fluency is above mine, and while I consider him a very formidable rationalist as humans go, I'm not sure he really knows enough to draw the conclusions he does with such confidence. I've seen the QM sequence endorsed by at least one person who is a theoretical physicist, but on the other hand, I've read Mitchell Porter's criticisms of Eliezer's interpretation and they sound comparably plausible given my level of knowledge, so I'm not left thinking I have much more grounds to favor any particular quantum interpretation than when I started.
9[anonymous]
A poll would be good. I've read the QM sequence and it really is one of the most important sequences. When I suggest this at meetups and such, people seem to be under the impression that it's just Eliezer going off topic for a while and totally optional. This is not the case; as you said, the QM sequence is used to develop a huge number of later things.

The negative comments from physicists and physics students are sort of a worry (to me as someone who got up to the start of studying this stuff in second-year engineering physics and can't remember one dot of it). Perhaps it could do with a robustified rewrite, if anyone sufficiently knowledgeable can be bothered.

6Paul Crowley
The negative comments I've heard give off a strong scent of being highly motivated - in one case an incredible amount of bark bark bark about how awful they were, and when I pressed for details, a pretty pathetic bite. I'd like to get a physicist who didn't seem motivated to have an opinion one way or the other to comment. It would need to be someone who bought MWI - if the sole problem with them is that they endorse MWI then that's at least academically respectable, and if an expert reading them doesn't buy MWI then they'll be motivated to find problems in a way that won't be as informative as we'd like.

The Quantum Physics Sequence is unusual in that normally, if someone writes 100,000(?) words explaining quantum mechanics for a general audience, they genuinely know the subject first: they have a physics degree, they have had an independent reason to perform a few quantum-mechanical calculations, something like that. It seems to me that Eliezer first got his ideas about quantum mechanics from Penrose's Emperor's New Mind, and then amended his views by adopting many-worlds, which was probably favored among people on the Extropians mailing list in the late 1990s. This would have been supplemented by some incidental study of textbooks, Feynman lectures, expository web pages... but nonetheless, that appears to be the extent of it. The progression from Penrose to Everett would explain why he presents the main interpretive choice as between wavefunction realism with objective collapse, and wavefunction realism with no collapse. His prose is qualitative just about everywhere, indicating that he has studied quantum mechanics just enough to satisfy himself that he has obtained a conceptual understanding, but not to the point of quantitative competence. And then he has undertaken to convey ... (read more)

Excellent idea - done. Thank you!

2Rain
Result from Ron Maimon's review of the QM sequence: (more at the link from ciphergoth's post)
2XiXiDu
You could also ask for an independent evaluation of AI risks here.
8Paul Crowley
That seems less valuable. The QM sequences are largely there to set out what is supposed to be an existing, widespread understanding of QM. No such understanding exists for AI risk.
0whowhowho
So why isn't that pointed out anywhere? EY seems oddly oblivious to his potential -- indeed likely -- limitations as an autodidact.
0Alsadius
This was a big concern I had reading it. Much of it made sense to me, as someone who has had formal education in basic quantum, and some of it felt very illuminating (the waveform-addition stuff in particular was taught far better than my quantum prof ever managed), but I'm always skeptical of people claiming Truth of a controversy in a highly technical field with no actual training in that field. I've always preferred many-worlds, but I would never claim it is the sole truth in the sort of way that EY did.
0XiXiDu
What reason do I have to believe that this risk isn't even stronger when it comes to AI?
1David_Gerard
It's not clear how to compare said risk - "quantum" is far more widely abused - but the creationist AI researcher suggests AI may be severely prone to the problem. Particularly as humans are predisposed to think of minds as ontologically basic, therefore pretty simple, therefore something they can have a meaningful opinion on, regardless of the evidence to the contrary.
-3Alsadius
What, you mean the part where we're discussing a field that's still highly theoretical, with no actual empirical evidence whatsoever, and then determining that it is definitely the biggest threat to humanity imaginable and that anyone who doesn't acknowledge that is a fool?
4Paul Crowley
This is one of the classic straw men, adaptable to any purpose.
-3Alsadius
Mockery is generally rather adaptable, yes.
0David_Gerard
I suspect a lot of it is "oh dear, someone saying 'quantum'" fatigue. But that sounds a plausible approach.
8amit
Yes. No, as far as I can tell.
-1David_Gerard
Probably not, then. (The decision theory posts were where I finally hit a tl;dr wall.)
6wedrifid
Something I recall noticing at the time I read said posts is that some of the groundwork you mention didn't necessarily need to be in with the QM. Sure, there are a few points that you can make only by reference to QM but many of the points are not specifically dependent on that part of physics. (ie. Modularization fail!)
5David_Gerard
That there are no individual particles is something of philosophical import that it'd be difficult to say without bludgeoning the point home, as the possibility is such a strong implicit philosophical assumption and physics having actually delivered the smackdown may be surprising. But yeah, even that could be moved elsewhere with effort. But then again, the sequences are indeed being revised and distilled into publishable rather than blog form ...
6wedrifid
Yes, that's the one thing that really relies on it. And the physics smackdown was surprising to me when I read it. Ideal would seem to be having the QM sequence and then later having an identity sequence wherein one post does an "import QM;". Of course the whole formal 'sequence' notion is something that was invented years later. These are, after all, just a stream of blog posts that some guy spat out extremely rapidly. At that time they were interlinked as something of a DAG, with a bit of clustering involved for some of the bigger subjects. I actually find the whole 'sequence' focus kind of annoying. In fact I've never read the sequences. What I have read a couple of times is the entire list of blog posts for several years. This includes some of my favorite posts, which are stand-alone and don't even get a listing in the 'sequences' page.

Yes! I try to get people to read the "sequences" in ebook form, where they are presented in simple chronological order. And the title is "Eliezer Yudkowsky, blog posts 2006-2010".

7[anonymous]
Totally, there are whole sequences of really good posts that get no mention in the wiki.

Working on it.

In all seriousness though, I often find the Sequences pretty cumbersome and roundabout. Eliezer assumes a pretty large inferential gap for each new concept, and a lot of the time the main point of an article would only need a sentence or two for it to click for me. Obviously this makes it more accessible for concepts that people are unfamiliar with, but right now it's a turn-off and definitely is a body of work that will be greatly helped by being compressed into a book.

1Alsadius
Fuck you.
6David_Gerard
Downvoted for linking to that site. ... what?
-2Alsadius
It's both funny and basically accurate. I'd say it's a perfectly good link.
7[anonymous]
David is making a joke, because he wrote most of the content of that article.

Tetronian started the article, so it's his fault actually, even if he's pretty much moved here.

I have noted before that taking something seriously because it pays attention to you is not in fact a good idea. Every second that LW pays a blind bit of notice to RW is a second wasted.

See also this comment on the effects of lack of outside world feedback, and a comparison to Wikipedia (which basically didn't get any outside attention for four or five years and is now part of the infrastructure of society, at which I still boggle).

And LW may or may not be pleased that even on RW, when someone fails logic really badly the response is often couched in LW terms. So, memetic infections ahoy! Think of RW as part of the Unpleasable Fanbase.

2thomblake
Memetic hazard warning!
3David_Gerard
ITYM superstimulus ;-)
0wedrifid
Not really. There is content there that is not completely useless. Especially if the 'seconds wasted' come out of time that would have otherwise been spent on lesswrong itself.
0Alsadius
Ahhhh. Well, that flips my downvote.
0wedrifid
Oh, that explains a lot!
4drethelin
It's not a barrier to entry if no one actually HAS to surmount it.
-1Alsadius
Yeah, but if we make a policy of abusing and hounding out anyone who hasn't, it's not much better.
-3faul_sname
Kahneman's Thinking, Fast and Slow is basically the sequences + some statistics - AI and metaethics in (shorter) book form (well actually, the other way around, as the research was there first). So perhaps we should say "read the sequences, or that book, or otherwise learn the common mistakes".
6Paul Crowley
Strongly disagree; I think there is fairly limited overlap between the two.
5endoself
Your comment describes (or at least intends to describe, as per the people disagreeing with you) Judgment under Uncertainty: Heuristics and Biases, not Thinking, Fast and Slow.
2wedrifid
Can someone verify this for me? I've heard good things about the authors but my prior for that book containing everything in the (or most of the) sequences is rather low.

I disagree with the grandparent. I read the book a while ago having already read most of the Sequences -- I think that the book gives a fairly good overview of heuristics and biases but doesn't do as good of a job in turning the information into helpful intuitions. I think that the Sequences cover most (but not quite all) of what's covered in the book, while the reverse is not true.

Lukeprog reviewed the book here: his estimate is that it contains about 30% of the Core Sequences.

2David_Gerard
The reasoning for downvote on this suggestion is not clear. What does the downvoter actually want less of?
4Dorikka
As the suggestion stands, it's at -2. I'm not downvoting it because I don't think it's so bad as to be invisible, but saying that the book is a good substitute for the sequences seems inaccurate enough to downvote. My other comment here contains (slightly) more of an explanation.
[-]brilee130

From Shirky's essay on online groups: "The Wikipedia right now, the group-collaborated online encyclopedia, is the most interesting conversational artifact I know of, where product is a result of process. Rather than 'We're specifically going to get together and create this presentation', it's just 'What's left is a record of what we said.'"

When somebody goes to a wiki, they are not going there to discuss elementary questions that have already been answered; they are going there to read the results of that discussion. Isn't this basically what the OP wants?

Why aren't we using the wiki more? We have two modes of discussion here: discussion board, and wiki. The wiki serves more as an archive of the posts that make it to main-page level, meaning that all the hard work of the commenters in the discussion boards is often lost to the winds of time. (Yes, some people have exceptionally good memory and link back to them. But this is obviously not sustainable.) If somebody has a visionary idea on how to lubricate the process of collating high-quality comments and incorporating them into a wiki-like entity, then I suspect our problem could be solved.

[-][anonymous]110

Why aren't we using the wiki more?

This is a really good question.

I don't use the wiki because my LW account is not valid there. You need to make a separate account for the wiki.

That seems like an utterly stupid reason in retrospect, but I imagine that's a big reason why no one is wikiing.

0eurg
It is explicitly mentioned (somewhere) that the wiki is only for referencing ideas and terms that have been used/discussed/explained in LW posts. So, yes, inconvenience, but not solely.
[-]XiXiDu120

The best way to become more exclusive while not giving the impression of a cult, or by banning people, is by raising your standards and being more technical. As exemplified by all the math communities like the n-Category Café or various computer science blogs (or most of all technical posts of lesswrong).

[-]Rain120

Stop using that word.

7wedrifid
In fact, edit your post now please Nyan. Apart from that it's an excellent point. "Community", "website" or just about anything else. "You're a ...." is already used as a fully general counterargument. Don't encourage it!
[-][anonymous]170

I want to keep the use of the word, but to hide it from google I have replaced it with its rot13: phyg

And now we can all relax and have a truly uninhibited debate about whether LW is a phyg. Who would have guessed that rot13 has SEO applications?
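(For anyone unfamiliar with it, rot13 shifts each letter 13 places along the 26-letter alphabet, so applying it twice returns the original text. A tiny sketch, in TypeScript purely for illustration:

```typescript
// rot13: shift each letter 13 places; non-letters pass through unchanged.
function rot13(s: string): string {
  return s.replace(/[a-z]/gi, (ch) => {
    const base = ch <= "Z" ? 65 : 97; // char code of 'A' or 'a'
    return String.fromCharCode(((ch.charCodeAt(0) - base + 13) % 26) + base);
  });
}

// rot13 is its own inverse, so decoding uses the same function.
// Decoding "phyg" is left as an exercise, which keeps the result
// out of Google's index:
console.log(rot13(rot13("phyg")) === "phyg"); // true
```
)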

4radical_negative_one
Just to be clear, we're all reading it as-is and pronouncing it like "fig", right? Because that's how I read it in my head.
2pedanterrific
I hope so, or this would make even less sense than it should.
1Alicorn
I've been pronouncing it to rhyme with the first syllable in "tiger".
0[anonymous]
No; stop. This 'fix' is ineffective and arguably worse.
-1David_Gerard
The C-word is still there in the post URL!
6David_Gerard
That's much better! (I hadn't realised the post titles were redundant in Reddit code ...)
6[anonymous]
Upvoted for agreeing and for reminding me to re-read a certain part of the sequences. I loathe fully general counterarguments, especially that one. That being said, would it be appropriate for you to edit your own comment to remove said word? I don't know (to any significant degree) how Google's search algorithms work, but I suspect that having that word in your comment also negatively affects the suggested searches.
7wedrifid
Oh, yeah, done.
6[anonymous]
You mean the one that shouldn't be associated with us in google's search results? I'll think about it.
6pedanterrific
Suggestion: "Our Ult-Cay Is Not Exclusive Enough"

I feel pain just looking at that sentence.

I sure as hell hope self-censorship or encryption for the sake of google results isn't going to become the expected norm here. It's embarrassingly silly, and, paradoxically, likely to provide ammunition for anyone who might want to say that we are this thing-that-apparently-must-not-be-named. I wouldn't be overly surprised if these guys ended up mocking it.

The original title of the post had a nice impact; the point of the rhetorical technique used was to boldly use a negatively connoted word. Now it looks weird and anything but bold.

Also, reading the same rot13-ed word multiple times caused me to learn a small portion of rot13 despite my not wanting to. Annoying.

3pedanterrific
Yes, well... I don't give a phyg.
3David_Gerard
Your comment would have been ridiculously enhanced by this link.
-2CronoDAS
What word?
[-][anonymous]100

The only word that shouldn't be used for reasons that extend to not even identifying it. (google makes no use/mention distinction).

4Nisan
"In a riddle whose answer is chess, what is the only prohibited word?"
-24TwistingFingers

Reading the comments, it feels like the biggest concern is not chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge in the last section gains you access to levels with a higher signal-to-noise ratio? Certainly would make our phyg resemble another well-known phyg.

[-][anonymous]110

Maybe we should charge thousands of dollars for access to the sequences as well? And hire some lawyers...

More seriously, I wonder what people's reaction would be to a newbie section that wouldn't be as harsh as the now-much-harsher normal discussion. This seems to go over well on the rest of the internet.

Sort of like raising the price and then having a sale...

5Bugmaster
This sounds like a good idea, but I think it might be too difficult to implement in practice, as determined users will bend their efforts toward guessing the password in order to gain access to the coveted Inner Circle. This isn't a problem for that other phyg, because their access is gated by money, not understanding.
2thescoundrel
I think the freemasons have this one solved for us: instead of passwords, we use interview systems, where people of the level above have to agree that you are ready before you are invited to the next level. Likewise, we make it known that helpful input on the lower levels is one of the prerequisites to gaining a higher level - we incentivise constructive input on the lower tiers, and effectively gate access to the higher tiers.
9Bugmaster
Why does this solution need to be so global ? Why don't we simply allow users to blacklist/whitelist other users as they see fit, on an individual basis ? This way, if someone wants to form an ultra-elite cabal, they can do that without disturbing the rest of the site for anyone else.
4Alsadius
So, who is going to sit on the interview committee to control access to a webforum? You're asking more of the community than it will ever give you, because what you advocate is an absurd waste of time for any actual person.
4hesperidia
The SCP Foundation creepypasta wiki used to use a very complex application system, designed to weed out those with insufficient writing skill. It turned away a fairly significant number of potential writers due to its sheer size. It was also maintained through Google Docs by one dedicated admin for several years. I'm not sure anyone here would give up their free time to maintaining bureaucracy rather than winning, and it seems counterproductive to me, but it's theoretically possible that it can be kept to a part-time job.
0thescoundrel
That's possible - it may be that the cost of doing this effectively is not worth the gain, or that there is a less intensive way to solve this issue. However, I think there could be benefits to a tiered structure - perhaps even have the levels be read-only for those not there yet - so everyone can read the high signal-to-noise discussion, but we still make sure to protect it. I do know there is much evidence to suggest that prestige among even small groups is enough to motivate people to do things that normally would be considered an absurd waste of time.
2Percent_Carbon
You're not proposing a different system, you're just proposing additional qualifiers.
1TrE
Sounds like a good idea, would be an incentive for reading and understanding the sequences to many people and could raise the quality level in the higher 'levels' considerably. There are also downsides: We might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate about 'free will' with newbies when they could be having discussions about more interesting and challenging topics?) and, well, if an intelligent and well-informed outsider has to say something important about a topic, they won't be able to. For this to be implemented, we'd need a user rights system with the respective discussion sections as well as a way to determine the 'level' of members. Quizzes with questions randomly drawn from a large pool of questions with a limited number of tries per time period could do well, especially if you don't give any feedback about the scoring other than 'you leveled up!' and 'Your score wasn't good enough, re-read these sequences:__ and try again later.' And, of course, we need the consent of many members and our phyg-leaders as well as someone to actually implement it.
0buybuydandavis
Instead of setting up gatekeepers, why not let people sort themselves first? No one wants to be a bozo. We have different interests and aptitudes. Set up separate forums to talk about the major sequences, so there's some subset of the sequences you could read to get started. I'd suggest too that as wonderful as EY is, he is not the fount of all wisdom. Instead of focusing on getting people to shut up, how about focusing on getting people to add good ideas that aren't already here?
0Viliam_Bur
Depending on other factors, it could also resemble a school system.
0[anonymous]
Rationology? Edit: I apologize.
[-]tut70

What you want is an exclusive club. Not a cult or phyg or whatever.

4gwern
There's only one letter's difference between 'club' and 'phyg'!
2tut
And there is only one letter's difference between paid and pain. The meaning of an English word is generally not determined by the letters it contains.

I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.

[-][anonymous]140

I don't see why having the debate at a higher level of knowledge would be a bad thing. Just because everyone is familiar with a large body of useful common knowledge doesn't mean no one disagrees with it, or that there is nothing left to talk about. There are some LW people who have read everything and still bring up interesting critiques.

Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting. Having read the sequences, it's the same sort of boring as someone explaining for the millionth time that "no, technological progress or happiness is not a sufficient goal to produce a valuable future, and yes, an AI coded with that goal would kill us all, and it would suck".

Not being an ultra-exclusive "phyg" is one of such strategies.

The point of my post was that that is not an acceptable solution.

-1Bugmaster
Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly: No, it's not very interesting to you and me, but to the "uneducated folks" whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it's easily correctable. However, kicking people out for being ignorant does not facilitate such correction. What's your solution, then? You say, To me, "more exclusive LW" sounds exactly like the kind of solution that doesn't work, especially coupled with "enforcing a little more strongly that people read the sequences" (in some unspecified yet vaguely menacing way).
2Zetetic
Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going through the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren't willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation. Especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people. I've participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality among all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone's head (at least for a moment) upon considering a topic for the first time. This community is much better than any other I've been a part of in this respect, but I have noticed a slow decline in this department. All of that said, I'm not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn't sl4, and outreach and community building really outweigh the more technical topics, and (at least as long as I've been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or if one already exists, I'd like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.
3Bugmaster
I agree with pretty much everything you said (except for the sl4 stuff, because I haven't been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions -- outreach as well as technical. I'm not proposing that we set the barrier to entry at zero; I merely think that the guideline, "you must have read and understood all of the Sequences before posting anything" sets the barrier too high. I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it's possible that I'm biased in this regard, since I myself do not agree with everything Eliezer says in those posts.
4Zetetic
Disagreement is perfectly fine by me. I don't agree with the entirety of the sequences either. It's disagreement without looking at the arguments first that bothers me.
1[anonymous]
What is the difference between knowledge and opinion? Are the points in the sequences true or not? Read Map and Territory, and understand the way of Bayes. The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there's no qualitative difference between opinion and fact. I don't have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem, and identifying myself among people who don't want a particular type of solution (relaxing LW's phygish standards), doesn't mean I support any particular straw-solution.
4Bugmaster
Some proportion of them (between 0 and 100%) are true, others are false or neither. Not being omniscient, I can't tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO. See, it's exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words. I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, "rocks tend to fall down" is a fact, but "the Singularity is imminent" is an opinion -- in my opinion -- and so is "we should kick out anyone who hadn't read the entirety of the Sequences". When you said "we should make LW more exclusive", what did you mean, then ? In any case, I do have a solution for you: why don't you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma ? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own "phyg" and make it as exclusive as you want.
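For what it's worth, the scriptlet Bugmaster describes would only take a few lines. A hypothetical sketch follows; the selectors, the karma lookup, and the threshold are all assumptions, not LessWrong's real markup or API:

```typescript
// Hypothetical userscript: conceal-by-default for low-karma authors.
const KARMA_THRESHOLD = 5000; // the figure suggested above

// author -> karma; how this table gets filled is left open here.
const knownKarma: Record<string, number> = {};

document.querySelectorAll<HTMLElement>(".comment").forEach((comment) => {
  const author = comment.querySelector(".author")?.textContent?.trim() ?? "";
  if ((knownKarma[author] ?? 0) < KARMA_THRESHOLD) {
    comment.style.display = "none"; // hide the whole comment node
  }
});
```

The whitelist variant suggested at the end of the comment would just swap the karma test for a membership check against a set of approved usernames.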
6Viliam_Bur
This would disrupt the flow of discussion. I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read bad comments, but I had to read the reactions to them. I could have improved my script to filter out those reactions too, but... Humans react to the environment. We cannot consciously decide to filter out something and refuse to be influenced. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different than if there was only the 1 smart comment. I can't filter those 9 comments out. Reading them wastes my time and changes my emotions. So even if you filter those 9 comments out by software, but I won't, then the discussion between the two of us will be indirectly influenced by those comments. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too. People have evolved some communication strategies that don't work on the internet, because a necessary infrastructure is missing. If we two were speaking in the real world, and a third person tried to join our discussion, but I considered them rather stupid, you would see it in my body language even if I wouldn't tell the person openly to buzz off. But when we speak online and I ignore someone's comments, you don't see it; this communication channel is missing. Karma does something like this, it just represents the collective emotion instead of individual emotion. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.) Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
0Bugmaster
I see, so you felt that the comments of "smart" (as per your filtering criteria) people were still irrevocably tainted by the fact that they were replying to "stupid" (as per your filtering criteria) people. In this case, I think you could build upon my other solution. You could blacklist everyone by default, then personally contact individual "smart" people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again. Slashdot has something like this (though not exactly). I think it's a neat idea. If you implemented this, I'd even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side. And everyone's assumptions are different, which is why I'm very much against global solutions such as "ban everyone who hadn't read the Sequences", or something to that extent. Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are -- which is what nearly always happens when people decide to shut out dissenting voices. That's just my personal choice, though; anyone else should be able to form whichever cabal they desire, based on their own preferences.
0Viliam_Bur
The first step (blacklisting everyone except me and people I approve of) is easy. Expanding the network depends on other people joining the same system, or at least being willing to send me a list of people they approve of. I think that most people use default settings, so this system would work best on a site where this would be the default setting. It would be interesting to find a good algorithm, which would take the following data as input: each user can put other users on their whitelist or blacklist, and can upvote or downvote comments by other users. It could somehow calculate the similarity of opinions and then show everyone the content they want (extrapolated volition) to see. (The explicit blacklists exist only to override the recommendations of the algorithm. By default, an unknown and unconnected person is invisible, except for their comments upvoted by my friends.) If the site is visible to anonymous readers, a global karma is necessary, though it can somehow be calculated from the customized karmas. I also wouldn't like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect in filtering out the noise and not filtering out the disagreement. I think a reasonable approach is to calculate the probability of "reasonable disagreement" based on the previous comments. This is something that we approximately do in real life -- based on our previous experience we take some people's opinions more seriously, so when someone disagrees with us, we react differently based on who it is. If I agree with someone about many things, then I will consider their opinion more seriously when we disagree. However, if someone disagrees about almost everything, I simply consider them crazy.
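The personalized-karma part of this idea is easy to sketch. Assuming a flat list of votes and a per-user whitelist (all type and function names here are made up for illustration, not an existing API):

```typescript
// A vote cast by one user on one comment.
type Vote = { voter: string; commentId: string; value: 1 | -1 };

// Score each comment counting only votes cast by whitelisted users,
// i.e. "karma based only on the clicks of people you consider smart".
function personalizedKarma(
  votes: Vote[],
  whitelist: Set<string>
): Map<string, number> {
  const scores = new Map<string, number>();
  for (const v of votes) {
    if (!whitelist.has(v.voter)) continue;
    scores.set(v.commentId, (scores.get(v.commentId) ?? 0) + v.value);
  }
  return scores;
}
```

The harder part, as the comment notes, is extrapolating from explicit whitelists to similarity of opinion; that would need something closer to collaborative filtering than to this simple sum.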
0Bugmaster
I think this is a minor convenience at best; when you choose to form your darknet, you could simply inform the other candidates of your plan: via email, PM, or some other out-of-band channel. This sounds pretty similar to Google's PageRank, only for comments instead of pages. Should be doable. Yes, of course. The goal is not to turn the entire site exclusively into your darknet, but to allow you to run your darknet in parallel with the normal site as seen by everyone else. Agreed; if you could figure out a perfect filtering algorithm, you would end up implementing an Oracle-grade AI, and then we'd have a whole lot of other problems to worry about :-) That said, I personally tend to distrust my emotions. I'd rather take an emotional hit, than risk missing some important point just because it makes me feel bad; thus, I wouldn't want to join a darknet such as yours. That's just me though, your experience is probably different.
6[anonymous]
I mean that I'd like to be able to participate in discussion with better (possibly phygish) standards. Lesswrong has a lot of potential and I don't think we are doing as well as we could on the quality of discussion front. And I think making Lesswrong purely more open and welcoming, without doing something to keep a high level of quality somewhere, is a bad idea. And I'm not afraid of being a phyg. That's all, nothing revolutionary.
2Bugmaster
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn't enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is. I'm not afraid of it either, I just don't think that power-sliding down a death spiral is a good idea. I don't need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
4wedrifid
Specifically 'the way of'. Would you have the same objection with 'and understand how bayesian updating works'? (Objection to presumptuousness aside.)
8Bugmaster
Probably. The same sentiment could be expressed as something like this: This phrasing is still a bit condescending, but a) it gives an actual link for me to read and educate my ignorant self, and b) it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
-2wedrifid
Educating people is like that! What I would have said about the phrasing is that it is wrong.
-3Bugmaster
Merely telling people that they aren't worthy is not very educational; it's much better to tell them why you think they aren't worthy, which is where the links come in. Sure, but I have no problem with people being wrong, that's what updating is for :-)
1wedrifid
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points - your retort seems odd. Huh again? You seemed to have missed a level of abstraction.
-10Percent_Carbon

I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who'd already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a "phyg" or not, but it definitely wouldn't be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.

A 'debate club' mindset is one of the things I would try to avoid. Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider - and the supply of such is practically endless. You don't go around trying to artificially encourage an environment of ignorance just so some people are sufficiently uninformed that they will try to argue trivial matters. That's both counterproductive and distasteful.

I would not be at all disappointed if a side effect of maintaining high standards of communication causes us to lose some participants who "come to Less Wrong specifically for the debates". Frankly, that would be among the best things we could hope for. That sort of mindset is outright toxic to conversations and often similarly deleterious to the social atmosphere.

0Bugmaster
I wasn't suggesting we do that, FWIW. I think there's a difference between flame wars and informed debate. I'm in favor of the latter, not the former. On the other hand, I'm not a big fan of communities where everyone agrees with everyone else. I acknowledge that they can be useful as support groups, but I don't think that LW is a support group, nor should it become one. Rationality is all about changing one's beliefs, after all...
-2Alsadius
Debate is a tool for achieving truth. Why is that such a terrible thing?
-1wedrifid
I didn't say it was. Please read again.
0Alsadius
You said that we should avoid debate because it's bad for the social atmosphere. I'm not seeing much difference.
-1wedrifid
No I didn't. I said we should avoid creating a deliberate environment of ignorance just so that debate is artificially supported. To the extent that debate is a means to an end it is distinctly counterproductive to deliberately sabotage that same end so that more debate is forced. See also: Lost purpose.
0Alsadius
Upon rereading, I think I see what you're getting at, but you seem to be arguing from the principle that creating ignorance is the preferred way to create debate. That seems ahem non-obvious to me. There's no shortage of topics where informed debate is possible, and seeking to debate those does not require(and, in fact, generally works against) promoting ignorance. Coming here for debate does not imply wanting to watch an intellectual cripplefight.
1wedrifid
I seem to be coming from a position of making a direct reply to Bugmaster, with the specific paragraph I was replying to quoted. That should have made the meaning more obvious to you. Which is what I myself advocated with: "Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider - and the supply of such is practically endless."
2MarkusRamikin
"...and agree with everything said therein." (italics mine) How did you arrive at that idea? The point isn't to agree with the stuff, but to be familiar with it, with standard arguments that the Sequences establish. If you tried to talk advanced mathematics/philosophy/whatever with people, and didn't know the necessary math/philosophy/whatever, people would tell you some equivalent of "read the sequences". This is not the rest of the Internet, where everyone is entitled to their opinion and the result is that discussions never get anywhere (in reality, nobody is really interested in anyone's mere opinion, and the result is something like this). If you're posting uninformedly, rehashing old stuff, or committing errors the core sequences teach you not to commit, you're producing noise. This is what I love about LW: there is an actual signal-to-noise ratio, rather than a sea of mere opinion.
3Bugmaster
nyan_sandwich said that the Sequences contain not merely arguments, but knowledge. This implies a rather high level of agreement with the material. I agree, but with a caveat: I am perfectly fine with that, as long as they don't just say, "read all of the Sequences and then report back when you're ready", but rather, "your arguments have already been discussed in depth in the following sequence: $url". The first sentence merely dismisses the reader; the second one provides useful material.
6David_Gerard
Yesss ... the sequences are great stuff, but they do not reach the level of constituting settled science. They are quite definitely settled tropes, but that's a different level of thing. Expecting familiarity with them may (or may not) be reasonable; expecting people to treat them as knowledge is rather another thing.
0MarkusRamikin
Hm, that's a little tricky. I happen to agree that they contain much knowledge - they aren't pure knowledge, there is opinion in there too, but there is a considerable body of insight and technique useful to a rationalist (that is, useful if you want to be good at arriving at true beliefs or making decisions that achieve your goals). Enough that it makes sense to want debate to continue from that level, rather than from scratch. However, let's keep our eyes on the ball - namely, what the actual expectation around here is. That expectation is emphatically NOT that people should agree with the material in the Sequences, merely that we don't have to re-hash the basics. Besides, if you manage to read a sequence, understand it, and still disagree, your reply is likely to be interesting and highly upvoted.

Hm. Yeah, I wouldn't want anyone to actually be told "read all the sequences" (and afaik this never happens). It'd be unreasonable to, say, expect people to read the quantum mechanics sequence if they don't intend to discuss QM interpretations. However, problems like what counts as evidence and how to avoid common reasoning failures are relevant to pretty much everything, so I think an expectation of having read Map and Territory and Mysterious Answers would be useful.
4Bugmaster
Agreed. I emphatically agree with you there as well; but by making this site more "phygvfu", we risk losing this capability. I agree that these are very useful concepts in general, but I still maintain that it's best to provide the links to these posts in context, as opposed to simply locking out anyone who hasn't read them -- which is what nyan_sandwich seems to be suggesting.
6MarkusRamikin
Trouble is, I'm not really sure what nyan_sandwich is suggesting, in specific and concrete terms, over and above already existing norms and practices. "I wish we had higher-quality debate" is not a mechanism.

Upvoted.

I agree pretty much completely, and I think that if you're interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them) or go somewhere else. Edit, after many replies: this claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.

There are those who think rationality needs to be learned through osmosis or whatever. That...