If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


I have banned advancedatheist. While he's been tiresome, I find that I have more tolerance for nastiness than some, but this recent comment was the last straw. I've found that I can tolerate bigotry a lot better than I can tolerate bigoted policy proposals, and that comment was altogether too close to suggesting that women should be distributed to men they don't want sex with.

I agree with the banning, given the fact that he was basically constantly commenting on the same issue, and one which is not particularly relevant to Less Wrong. But I disagree with this reason. Basically I think banning someone for the content of their proposals or implied proposals should be limited to the kind of the thing which might be banned by law (basically imminent threat of harm.)

Basically I think banning someone for the content of their proposals or implied proposals should be limited to the kind of the thing which might be banned by law (basically imminent threat of harm.)

LW self-regulates the content of proposals via karma voting. In advancedatheist's case the community's desires were quite clearly expressed via karma votes, and he still continued to bring up the topic.

Those posts significantly reduce the likelihood that women who read LW will want to contribute. When the community karma-votes that it doesn't want posts like this, a user should accept that.

Yes, that's why I said I agreed with the banning.

I also think that this sets a very murky precedent. I don't disagree at all with banning AA if it turns out he has abused voting privileges, but so far there's no hard evidence that he did. Putting that aside for now, all we're left with is a block being based on whether some individual moderator "can tolerate" some controversial comment (meaning that it attracts both downvotes and upvotes, as far as the LW userbase is concerned). This strikes me as careless.

I sympathize with your point of view, but I find it difficult to come up with rules. I don't know if this is enough, but I think the fact that I'm pretty tolerant about content (spam doesn't count as content) means people aren't at high risk of me losing my temper with them.

I'm not convinced I'm obligated to take my system 1 completely off-line when I'm dealing with ideas that are inimical to my interests.

For what it's worth, I have a long history at LW with a high karma score (typically 92% positive), I was offered the job of moderator rather than asking for it, and when I announced that I had become moderator, I got a lot of upvotes. I think these facts are evidence that I have a pretty good sense of the community.

Have a rule-- I detest it (though not to the point of banning people) when someone mentions something they saw online and doesn't offer a link, or at least apologize for not having one.

It sounds like we had an effective if unstated rule: "When someone does a bunch of stuff wrong, get rid of them."

AA checked four boxes:

  • Doesn't listen to feedback
  • Doesn't make strong arguments
  • Repeatedly posts on topics not of particular interest to LW
  • Posts things that are likely to be offensive to many

We are missing some rules that might be useful to have, specifically 'what are the boxes' and 'how many do you need to check to get banned'. But quite frankly, looking at those four sins, I would think that any three should be enough to get someone banned. If anything, NancyLebovitz probably waited longer than necessary.

I would also say that making a rule based on only one of those factors would be counterproductive. I think most of us are forgiving (as far as bans go, albeit perhaps not in voting) when a user repeatedly fails on one of those, as long as they are also providing useful content in other posts.

I think, as a general rule, people in a decision-making capacity are best advised to recuse themselves from any choice whenever they feel that their System 1 is interfering. (In your case, I would've waited for some solid evidence on the karma-abuse question. After all, if the upvotes on that comment turned out to be genuine, that would definitely affect my own views.) I am aware that this is not always realistic. But make no mistake here - the thought process that led to this decision will also make LW less, not more trustworthy (however mildly) when dealing with issues that are unusually complex or politically contentious. Masculinity and involuntary celibacy are canaries in the coalmine - our treatment of them is direct evidence of how well we can treat everything else.

You care about false upvoting a great deal more than I do.

Is it worth mentioning that I was kinder to aa than most of the people who replied to him?

Check out the discussion at SlateStarCodex about banning Steve Johnson, a time-wasting fellow who wasn't quite breaking the rules.

I really want to hope I can say the same. I sort of took it as my personal mission to respond to every outrageous thing he said, and point out the problems with his politics and his theory of sexuality. As a former member of the online incel community, I thought I was in a better position to empathize with his situation, and could present alternative arguments in a way that he might be more receptive to than standard refutation. But AA never replied directly to me, so I don't know how he took my approach.
SlateStarCodex does not have a karma system, though. On LW, time-wasters tend to be downvoted swiftly, so they don't really waste much time anyway. If someone who's broadly considered a "time-waster" is nonetheless upvoted, this tells me that what they're posting is unusually interesting.
In this case AA's posts got downvoted swiftly but still wasted a lot of energy.
You can have a voting ring.
That depends very much on the audience. Some people will trust more; others will trust less.
I'm pretty sure that the latter will outnumber the former quite a bit. Speaking generally, we want social norms that discourage excess political talk (politics is the mindkiller, and gender politics is no exception) but when it does come up, people should be allowed to speak freely if they have something worthwhile to say. Anything else is a recipe for severe bias (via "evaporative cooling" and factionalization).
Given that his posts on that topic were constantly downvoted, the community seemed to feel that he didn't have something worthwhile to say.
I think that's a really bad rule in almost any setting, including this one. It amounts to acting as a straw Vulcan.
Well, System 1 is a complicated beast. In most cases, it helps you reach better and quicker decisions than a Straw Vulcan would, and this is a good thing. But there are some times when you're fairly sure that it cannot be trusted - this is arguably one of these times.
It's funny that this triggered your System 1 in this case. Offensiveness on LW...
No, it was the suggestion that women should be given to men they don't want to marry combined with a bad posting history which caused me to ban. I'm also none too fond of suggestions that people should mistrust their own motives from someone who shows no capacity for examining their own motives. Also note that I said I wouldn't ban for failure to include links. (Or were you joking?) My system 1 was rather activated. I don't normally flame people, but I had some ideas for flaming aa to a crackly crunch.
I would say that flaming is a lot more polite than blocking - at least insofar as "politeness" is actually something ethically worthwhile. But maybe that's just me.
That sounds to me like a System 2 analysis of the situation. Not examining one's own motives and not including links are signs of a kind of intellectual laziness. That alone wouldn't be grounds for banning, but in combination with offensive content it has a different quality than carefully crafted posts that communicate offensive content.
If I'm putting it in words, especially for LW, System 2 is going to get involved. However, a proposal of a system of forcing women into sex is something that I take personally, because I imagine myself (not in great detail) being mistreated that way. I'm against a military draft, but I don't react the same way to a proposal of a draft for men. Actually, I don't react the same way to a proposal of a military draft for women. This is a personal issue, and trust me, my System 1 was involved. (Sidetrack: I liked The Rainbow Cadenza, a science fiction novel in which women are drafted for sex, as a rather clear parallel-to-create-outrage to the military draft for men.) It wasn't just not examining one's own motives in general, it was pushing opposed people to think the worst of their own motives while not looking at one's own.
I don't actually see him as either saying or suggesting that women should be forced into sex. He seems to be saying that women (and all people) should be forced to not have sex outside of marriage, which would then lead to women settling for lower status partners. Also, in this case the problem with your system 1 is that it affects your conclusions about what he means. He didn't, after all, make the bigoted proposal you decry. Rather, you interpret him as almost making it. It's a lot easier for bias to get in the way when banning someone for what they're almost-saying than when banning someone for what they're actually saying.
It's possible that what he meant was that women shouldn't be allowed to have sex with the men they choose. Instead, they can either be celibate or learn to tolerate sex with men they don't choose. Is this how you interpret what aa said?
He wanted to ban sex outside of marriage. Describing that as "can't have sex with the men you choose" is misleading, because it's such a noncentral example of that. It's literally true (if you choose someone outside of marriage, you're not allowed to have sex with him) but the same could be said for banning sex on public busses (if you choose someone on a public bus, you're not allowed to have sex with him). Furthermore, I find it hard to accept that "ban sex outside of marriage" is such a bigoted policy that anyone who espouses it should not be allowed here. (And it's not even restricted to women--he just thinks the policy would affect women differently than men.)

Have a rule-- I detest it (though not to the point of banning people) when someone mentions something they saw online and doesn't offer a link, or at least apologize for not having one.

That's a strawman. Nancy said "last straw". It wasn't a single comment that caused the ban.

This community doesn't suffer from being overmoderated. I think it's worthwhile to have a moderator who is in the position to moderate when they think it's necessary to do so.

Just a few thoughts:

I completely approve of the ban, although next time giving a formal warning first might be better.

Let's not debate what exactly AA meant and what he didn't. He is not here to defend himself.


I'm somewhat glad for aa's ban. I've lurked LW for a while now, and have found a lot of content posted here extremely interesting. Seeing aa's posts on incels in open threads every week being upvoted, containing content I felt was extremely prejudiced and malformed, with no apparent improvement over time, unnerved me quite a bit. I felt like I was not only wasting my time reading his posts; they also gave me a negative impression of what LWers think. This was enough to stop me from browsing open threads, and Less Wrong generally, for a while.

Not being a constant user of LW, I was unaware of vote manipulation, but I did feel myself being confused by the apparent clash between aa's upvoted posts on incels and general concept I had of LW, so it shouldn't have been hard to conclude that there were alternative explanations for his upvotes.

I'm inclined to think there were some actual people who liked what aa was saying. They're a small proportion of LW, and there were a good many more people who didn't like what he was saying.

I think that banning him was good from a consequentialist POV, but bad from deontological POV.

You may have a point. It turns out that at least one person would like to get in touch with aa, and I'm not sure how that's possible. What's more (and this sounds like karma) I read something by a man who was involuntarily celibate, and discovered that hormone therapy helped. I'd have sworn I saw this on the most recent SlateStarCodex open thread, and now I can't find it. Meanwhile, it would be exactly like the usual human level of competence to treat a physical problem as though it has an emotional cause. What deontological rule did you have in mind?
Try here: https://www.reddit.com/user/advancedatheist I looked through his comments for a second, and at least on reddit he's talking about incel stuff in the relationship subreddits and cryonics in the transhuman subreddits.
Freedom of Speech seems most obvious.

I was expecting a rule like bans should be preceded by a warning and a chance to reply.

That's a rule I'd strongly support other than in cases of absolutely unambiguous spamming or clear sockpuppets of banned individuals.
But a rule like "don't ban people for opinions you disagree with" would also fit the bill, no?
It would, and I was following it for a while.
That would be a horrible rule -- no one would be able to ban me for my ardent desire to eat babies alive. I mean, unless you have some equally perverted moderators...
There is a debate like this about abortion. And you're right: I don't think that people should be banned for holding the position that pro-lifers think of as "pro-killing babies".
Okay, so that was a bad example... (?) But the point still stands. If in order to ban someone we have to have a moderator who agrees with the person being banned, then some people will be much harder to ban than others. And we will most likely have to add some sub-optimal choices to the moderator pool, simply because now we are selecting for a factor that we otherwise wouldn't value. I am surprised that my original comment was down-karma'd so much -- if you have useful feedback on this (especially 'bad point' vs. 'bad expression of point'), please respond or private-message me -- learning is good!
Ahh, I see what's happening. You're thinking of my suggestion as "Don't ban people whose opinion you disagree with." But that's not actually what I meant. You're very welcome to disagree with the person you ban - it's just that you shouldn't ban them BECAUSE you find their opinion objectionable.
Doesn't that become equivalent to saying that you cannot be banned for saying generally offensive things? It is better than my original interpretation -- people can now be banned for being illogical, unintelligible, repetitive, and even unresponsive, but they still can't be banned for being intentionally offensive -- whether through extreme positions (women should be forced to have sex) or insults/obscenities. I suppose that you could batch those under illogical? Edit: I overlooked the obvious case of being able to ban someone for posting multiple low-rep posts -- that should cover the last of my objections. Thank you for explaining!
No, if someone is being intentionally offensive in a trolling way, this rule says nothing about it either way. Likewise, it doesn't say anything one way or the other about low rep posts. However, if someone has a position that you find offensive, but is being reasonable about presenting their opinion and is not just trying to start a flamewar, then that's not sufficient cause for banning under this rule.
I've said things which could be interpreted as wanting to eat babies, at least if you go by Nancy's "altogether too close to saying" standard (I didn't actually say it, but I got close). I really would not want to be banned for such a thing, and I think banning people for such things is poisonous to the discourse here. The example of killing patients for their organs is another one.
He is free to continue speaking about the subject, just not on LW.
That's not a deontological rule.
Thou shalt not restrict freedom of speech.
Sigh. Jerking knees are rarely the best responses. Trolls. Spam. Speech inside your home. Big loudspeakers outside your windows. Etc., etc. Freedom of speech is a right with a matching duty, owed by the government, not to interfere with speech. It's not a general deontological rule applicable to all human interactions.
There's a concept of "free speech absolutism" which basically says that if you are in a venue that encourages discourse, you should allow any speech. You're not a deontologist, so you might look at that rule and say "but what about the consequences". But, that's not what a free speech absolutist would do.
Unless you are arguing that you are a free speech absolutist, or, maybe, that LW should be run under such absolutism, I don't see the relevance. There are a LOT of fringe concepts around.
I'm not a free speech absolutist, but I do think that advancedatheist should not have been banned, for reasons of free speech. Regardless of what I believe, though, I wasn't arguing for or against it; I was answering Nancy's question.
And my point was and remains that you did not provide an answer. She didn't ask whether you can make up a deontological rule she violated. She asked whether there was a reasonable and practical rule you think she violated. Free speech absolutism isn't one. As to "but I do think", that's still not a deontological rule -- that's an ad hoc resolution which you happen to prefer.
Free speech absolutism absolutely is one. It's a common deontological rule that would have prevented AA from being banned. All moral intuitions are ad hoc.
Common?? Show me a place where it is practiced. Spam folders do not count. Actually, it would prevent all moderation. Would you like to learn one weird trick which would extend your manhood and make all women get naked and bring you offers to reclaim your wealth from a bank in Nigeria while stomping on pink commie faggots?
Free speech absolutism only applies to the reasons for free speech (discourse). Spam does not count - objectionable opinions do.

Thank you.

I have mixed feelings about this. He was posting the same argument about being incel in every single open thread, and the repetitiveness seems more annoying than the content, to me. But OTOH he also posted some interesting cryonics stuff.

Incidentally, suppose someone posted on the forum to say "As an Indian, my cultural heritage says that parents should decide who a woman marries."

Should this person be banned?

I'm not saying to support AA's position, nor as an attempt to criticise Indian culture, I'm just trying to see if we can have a consistent position on what counts as unacceptably offensive.

suppose someone posted on the forum to say "As an Indian, my cultural heritage says that parents should decide who a woman marries."

Do they say it once, or do they keep mentioning it all the time despite the downvotes?

If they only say that once, no they shouldn't. If they say it umpteen times and continue doing so even after being downvoted to oblivion umpteen times, maybe.
Seems reasonable and consistent.
No, but that might be because the hypothetical Indian is making a much weaker policy suggestion. By the way, arranged marriage means that neither partner has a choice.
I'm not sure what policy suggestion AA was making. I thought that you thought he was proposing forced marriages. What do you think he was proposing? And of course, a lot of pressure is put on men to go into arranged marriages, but at the end of the day they do have a little more freedom, as if it comes down to violence they are more able to defend themselves. And that's a possibility - I have heard a girl of Indian descent say "I can't be forced into marriage because I have no male relatives and I could take my mum in a fight."

I... what? As I understand the comment, he wanted to ban sex outside marriage. Describing that as "women should be distributed to men they don't want sex with" seems ridiculously exaggerated.

I agree that his one-issue thing was tiresome, and perhaps there is some argument for making "being boring and often off-topic" a bannable offense in itself. But this moderation action seems poorly thought through.

Edit: digging through his comment history finds this comment, where he writes it would be better to marry daughters off as young virgins. So I guess he did hold the view Nancy ascribed to him, even if it was not in evidence in the comment she linked to.

Also, "monogamy versus hypergamy" has been discussed on Less Wrong since the dawn of time. See e.g. this post and discussion in comments, from 2009. Deciding now that this topic is impermissible crimethink seems like a pretty drastic narrowing of allowed thoughts.

In my opinion, the problem wasn't the topic per se, but how the author approached it:
comments in every Open Thread on the same topic, zero visible learning.

Sure, I think that was annoying. But it's not the stated reason for the ban.

I disapprove.

Upvoted because disapproval is not wrong in my universe. Not sure if people are trying to downvote in support (i.e., they also disapprove) or against your disapproval.
Note that those that support the disapproval apparently have the decency not to downvote the approval.
How do we improve this? Edit: wait - I support the show of approval too. I disagree with the disapproval but I support someone's ability to voice their opinion.

I approve.

While I'm deeply concerned about the possibility that AA has been engaging in vote-gaming, which does seem to be a bannable offense, it isn't clear to me that, as reprehensible as that comment is, it is enough reason by itself for banning, especially because some of his comments (especially those on cryonics) have been clearly highly productive. I do agree that much of the content of that comment is pretty disgusting and unproductive, and at this point his focus on incel is borderline spamming with minimal connection to the point of LW. Maybe it would be more productive to just tell him that he can't talk about incel as a topic here?
Why not ask advancedatheist to make his opinion clearer? My internal model of AA does not include him being especially supportive of, say, ISIS' sex slavery (to take one crystal-clear example of "women ... be[ing] distributed to men they don't want to have sex with"). Could it be that you're simply misinterpreting his original intent?

Why not ask advancedatheist to make his opinion clearer?

He has been sufficiently clear already. Nitpicking over the exact role he sees for women in society as he would arrange it is something that cannot possibly be to the benefit of this site and its community.

That's a strawman. AA speaks in favor of traditional patriarchy, and that's a system that has arranged marriages where women often have little say about whom they marry and then have sex with.
Does it include him declaring that society must make sure that men get enough sex, whatever it takes, and then averting his eyes from the "whatever it takes" particulars?
Well, what should "whatever it takes" mean, exactly? Very few values are anything close to non-negotiable - EY's Sequences are unusually clear on this. If I had to guess, I'd say that AA thinks "men getting enough sex" could be achieved cheaply enough, by improving male attitudes (and more broadly, societal attitudes) towards masculinity and sex. That would doubtlessly make some radical feminists uncomfortable, but this is clearly the sort of "policy" option that's actually on the table. Which means that even treating your "particulars" as if they could ever be meant seriously is a batshit-crazy misrepresentation of what incels are actually talking about.
Averting one's eyes means that you never ask yourself that question. "Make it happen, I don't want to know how" is not a terribly uncommon sentiment.
I think you meant "improving female attitudes".
Well, I can't speak for the whole incel subculture, but I'm pretty sure I meant what I wrote above. Of course, the point of changing societal attitudes is that once you stop telling women that they're supposed to hate "toxic" masculinity, their attitudes will improve as well. But that's pretty much obvious.
No problem-- I was reacting to aa's complaints that women are too picky about men, and also revolted by men. A lot of this discussion has convinced me that communication is difficult.
Yeah well, this whole exercise starts making very little sense once you go into such specifics - Viliam is right about this. It might be that you're putting too much weight on that one single complaint (which would just be considered a typically 'edgy' throwaway remark if it came from within the incel 'community'), or that I'm oversimplifying in assuming AA shares the broader views of the incel subculture and, more generally, the "Dark Enlightenment" (incels, redpillars, puas, neoreaction, what have you).
I don't mind this ban, but I think it would be a good idea to make a clearly defined ultimatum before making such bans. E.g. tell him any additional comments on the topic would result in a ban. Worst case scenario he gets to make one more annoying post before he gets banned, best case scenario he cleans up his act and we get to keep a positive-sum commenter. Was AA ever given such an ultimatum?
Possible karma fraud probably didn't help.
I just want to take a moment to point this out: the hypotheses people like advancedatheist push for why they're incel are very emotionally salient (a small number of men are monopolizing all of our women! omg!) So everyone, please don't let this very emotionally salient hypothesis prematurely crowd out other explanations for the same phenomenon.

Stanford psychologist Philip Zimbardo wrote a book called The Demise of Guys. Among other things, he discusses the sexual frustrations of modern men and offers some possible explanations: He's also got a section on how men are being diagnosed with erectile dysfunction at younger and younger ages, linking to the site yourbrainonporn.com which discusses this.

Are we really supposed to believe that evolutionary factors like female hypergamy are responsible for increased shyness and erectile dysfunction among young men? Female hypergamy, insofar as it exists, is a mostly static biological phenomenon that's been around for hundreds or thousands of years. Are we really supposed to believe that right around the time when the world is changing faster than ever, suddenly female hypergamy goes from being a constant in the background to a destroyer of societies?

I'm sure the liberation of women plays an important role here, but I think its role is frequently overstated. Think back to the 60s and 70s when the sexual revolution first happened. Where were the hopeless incels back then? Or think of forager societies where chastity was not held to be valuable... where were the "omega males" at that point?

Anyway, yourbrainonporn.com also has a page on how excessive porn use may destroy social confidence. Like most addictions, porn decreases your brain's dopamine receptor levels, and lower dopamine receptor levels have been shown to predict lower social status in monkeys. Anecdotally, if I avoid porn completely for extended periods my social confidence and abilities with women improve significantly. (This also matches perfectly with nerds being
Well, in other forums he suggested that women have systematically less intelligence than men. So I guess that to him women are not much more than domestic animals. One side of me is happy that he is gone; the other side is mildly disappointed at the lack of a local bigot to study in a safe environment.

Well, in other forums he suggested that women have systematically less intelligence than men. So I guess that to him women are not much more than domestic animals.

I don't think the second sentence follows from the first. Children certainly have less intelligence than adults, yet we shouldn't treat children as animals.

(Not that I agree with the first sentence)

Not per se; it follows from the first sentence plus NancyLebovitz's comment on him denying women autonomy. This sentence is weird to me because I was not talking about what I think is right or how to steelman aa's thought. Anyway, consider these:

  • he believes that fully formed females have less intelligence than males;
  • he attributes the difference to a systematic genetic trait;
  • he thinks women should be denied autonomy on a basic right.

What would you call the status of a sub-human, non-autonomous being? Domestic or friendly animal seems to me quite precise.
Well children are both less intelligent than adults, and non-autonomous, in that they have no choice over whether they go to school etc., so I think my comparison still stands. I also don't think that someone or some group having below-average intelligence means they are sub-human. Also, does AA think that women have less general intelligence, or that they are less good specifically at STEM subjects? Because a lot of scientists do think that there are cognitive differences, but balanced, in that women have higher verbal & empathising intelligence.
I don't remember aa saying anything one way or the other about women's intelligence vs. men's.
Not here, in another forum. Quoting verbatim (regarding the ability to think abstractly): "Women generally either lack, or fail to develop, that ability, so they don't think about right and wrong in the way men do."
I don't support this ban, but I have to admit I'm more of a naturalist than a cultivator when it comes to gardens: weeds are plants too, right? If there's significant evidence of karma fraud (even if that evidence isn't shared), that's a good reason. If it's just "annoying posts that don't get downvoted enough for our tastes", that's pretty weak.
I've seen quite a bit of evidence of karma fraud on their part.
It sounded like he suggested that "we need to restore a healthy patriarchy where women can't get sexual experience until marriage." That doesn't mean "women should be distributed to men they don't want to have sex with". He is advocating prohibiting sex, not requiring sex, and more specifically that if society prohibits sex with lots of partners, women would be willing to settle for partners that they won't settle for now. Also, prohibiting "bigoted policy proposals" is a really bad idea. All sorts of suggestions turn up here that could be put in that category, from cutting up travellers for their organs to valuing one's countrymen more than immigrants to letting employers hire based on IQ.

Advancedatheist is flagrantly abusing the voting system. How can this be addressed/reported/stopped?

I literally saw a long post of his in this open thread, nearly-universally downvoted to -10, rise to 0 in 3 minutes just now.

EDIT: An additional 7 upwards in 5 minutes as I made this post, contemporaneous with a blast of +7 on another of his posts.

Seriously, how can his constant trolling be stopped? He is hurting discussion and he's been at this for quite some time, I've seen this happen over and over again for more than a year and I'm sick of it.

Regardless of whether or not advancedatheist has been abusing the voting system, I'd like him to stop posting about involuntary celibacy (incel) entirely on LW. Though I sympathize with his plight-- people don't ever deserve to be in a state of mental strife, or experience anything that feels like suffering-- his posts on incel mostly don't attract quality replies, and probably scare people off. Moreover, he hasn't stopped posting about this despite having been consistently downvoted.

Are there any appropriate forums where he might be able to post about incel to a more receptive audience? Don't neoreactionaries tend to be sympathetic to incel folks?

I used to belong to a couple of incel fora many years ago, and from my experience I wouldn't recommend it to anyone. Male incel communities are very hard to keep sane. They function as training camps for misogynists and PUA predators, and the few women who post advice there don't help as much as they believe they do. I was ridiculed every time I tried to calm down the hatred and resentment. I wouldn't wish to inflict that level of stress on anyone, much less anyone desperate enough to seek out such a place.

(Full disclosure: I'm bisexual, 32 years old, still a virgin with women, and opposed to both the premises and the methods of PUA.)

More charitable hypothesis: The people most likely to notice an advancedatheist comment the quickest downvote. The next wave of people finds the downvoting excessive and upvote in response. This doesn't really predict -10 to +3 swings, though.
Not only does it not predict such large swings, it also doesn't fit with the fact that after such a swing (which occurs rapidly) he then gets a slow downward trend. I pointed this out to the moderators a while ago, so I have a record of how rapid some of the changes were:

  • http://lesswrong.com/lw/ls5/if_you_can_see_the_box_you_can_open_the_box/c1kf was at -9 within 8 hours of being posted; 12 hours later or so it was at +4. Note that it has now reverted to +0.
  • http://lesswrong.com/lw/ln8/february_2015_media_thread/bx5u was at -5, then within 24 hours went to +6, and is now +3.
  • http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw6v was at -8 at 5 PM EST. At 7:10 EST it was at +6. In the same span, http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw6w was at -13 and went to +0. After the fact, over the next few days, both those comments went into the deep negative.
  • Similarly, http://lesswrong.com/lw/lk7/optimal_eating_or_rather_a_step_in_the_right/bvmk was at -4, then went in the same 2-hour time span up to +3, and then went to +2 (so was left alone after that).
  • Curiously, within the same 2-hour time span as that set of rapid upvoting, two highly negative comments in support of AA went through a similar swing, with again a slow reversion over the next few days: http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw9t and http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw7l

These aren't the only examples, but simply the most blatant. Based on this evidence I assign an extremely high credence that some form of karma abuse is going on, with someone using multiple accounts (approximately 90% certain). I assign an 80% chance that this person is doing so deliberately to upvote comments which are seen as at odds with "liberal" politics in some form. I assign a slightly over 50% chance that AA is doing this himself. The fact that it took until now for him to address such concerns, despite the fact that others have mentioned them, is not positive. After
I was thinking that OP was describing a situation [post receives many upvotes and many downvotes] and ascribing the half he disagrees with to some kind of fake votes (sockpuppetry), while those who agree with him are depicted as the genuine opinion of LW posters. Which, if true, is bad, but don't you sort of have to establish that? Isn't the exact opposite equally likely? Alternatively, what if all of the votes are "genuine" (that is, represent different LW posters), or alternatively, all false (that is, the dude and some opponent are butting heads through fake votes)?
I think such people may be more harmful to the voting system than the usual vote manipulation. Your vote should express whether you want to see more of something or less of something on LessWrong. Not to be used strategically to counter other people's votes. Then not only you don't contribute to the system, but also remove other people's contributions. What is it exactly you aim for? A webpage where no one will bother to downvote annoying content, because they will know someone else will immediately upvote it back? You should upvote only those comments you would upvote regardless of their current score.
I disagree, that is, I think it is reasonable to upvote or downvote "strategically." I agree with the proposed motive (how much of this kind of content do you want to see), but e.g. if I see a comment which I think is not particularly bad, but also not particularly good, so I don't care to increase or decrease the amount of it on Less Wrong, then I will upvote that comment if I see it downvoted, and might very well downvote it if I see it upvoted. If I see a comment downvoted to -2 or -3, and I would like to see less of it on Less Wrong, that does not necessarily mean I should downvote it again, since this could result in not seeing such comments at all, which is not necessarily what I want. I want there to be less content like that, but not none at all. In other words, I agree with your proposed goal, but I think strategic voting is a reasonable means of attaining that goal.
I may be misunderstanding what you wrote, but it seems to me you just said that if you have no genuine preference for having more or less of some kind of content, your second preference is to negate the expressed preferences of other LW readers. If too many have voted to see less of X, you vote for more X, not because you literally want "more X", but because you want "more of what many other people don't want". And if too many have voted to see more of X, you vote for less X, again not because you literally want "less X", but because you want "less of what many other people want". So, essentially, your preference is that other people get less of what they want, and more of what they don't want?
I do the same thing, but the preference for me is really "The vote score should be in proportion to how much I think the post adds to the discussion." If it's at -10, but I think it adds a little to the discussion (or only takes away a little) I'll upvote, because the score is out of proportion with the value it provides or takes away. If a comment is at +100 but only adds a little to the discussion, I'll downvote.
A consequence of this is that the total score of a comment depends on the order of voting. For example, if your algorithm is "upvote below 5, downvote above 5", and ten other people want to upvote unconditionally, then the final score may be 11 or 9 depending on whether you voted first or last.
A consequence of voting unconditionally is that you'll contribute to comments being higher than you think they deserve. All scoring rules have tradeoffs.
I think I disagree with the idea that a comment deserves a specific number of votes. Comment karma is "the number of people who liked it, and cared enough to click the button, minus the number of people who disliked it, and cared enough to click the button". What does it mean to say that a comment deserves that the result should be e.g. five? Downvoting a comment strategically is like saying "this is a nice comment, but it doesn't deserve more than five people to like it; and because six people said they like it, I am saying that I dislike it, just so that it gets the result it deserves".
It might be worth a poll to find out whether people think posts "deserve" a certain number (or number in a small range) of comments. I'm not sure that sort of voting makes sense, but I do a little of it myself. I'm guessing that "justice" based voting stabilizes the value of karma, and otherwise it would take increasingly high numbers of votes to indicate that a post is unusually good.
It actually doesn't mean anything if there's only one comment. But the way LW works is that if there's one comment with 5, and another with 6, the one with 6 gets displayed first and read by more people. I think your scoring rule makes more sense in a binary "vote yes or no" democracy. If you're trying to decide whether you should or shouldn't enact a policy, and the policy is enacted if there are more positive than negative votes, then you should vote yes if you agree with the policy and no if you disagree. But in a meritocratic system like LW, where individual posts are ranked against each other based on score, this results in "pretty good comments" getting ranked at the same level as "really good comments".
You can change your vote later if necessary, and sometimes I do, either to no vote at all, or to the opposite vote.
It is not a question of opposing other people's preferences. It is question of taking the actions that will most likely result in the situation which is closest to the one I want. For example, in the first case, I meant that I do not want that amount of the content either increased or decreased. I do not mean that I do not care. I mean I like things the way they are. If the comment is at -1, I will likely start to see less of it. Since I do not want it increased or decreased, I upvote it. That certainly does not mean that I want to increase anything just because other people want less of it, or decrease anything because they want more of it.
But the mechanism by which you do so is opposing other people's preferences. That is, if there's a comment that I want to be at net 0, then upvoting it if it's at -1 or downvoting it if it's at +1 accomplishes that goal, but which one I do depends on what the community consensus was at the time of voting. In general, I think voting based on current karma decreases the info content of voting and harms more than it helps. Vote on your desire to see or not see a comment, not your desire for the community to want to see or not want to see the comment!
I don't think your second paragraph follows from your first.
I agree that establishing the general claim that voting based on current karma harms more than it helps requires more than the first paragraph, and is just a statement of a conclusion rather than an argument leading to that conclusion. But I think the rest of the second paragraph is related to the first--the reason why it decreases the info content of voting is because the votes are clashing (your vote on a comment is now negatively correlated with my vote, making your vote less influential).
I also don't think the first claim makes much sense. First of all, it's not always anti-correlated. It's only anti-correlated if you vote unconditionally, and the post is far below or far above the value we think it provides. If it's positive, but not positive enough, the vote is correlated. If it's negative, but not negative enough, the vote is correlated. Secondly, you're assuming everyone uses the same scoring rule you do. We've already established that at least two people use a different scoring rule, and as another commenter pointed out, it's likely that there are many people who vote strategically. In that case, if we think the post has the same value, we'd do the same thing in the same situation, and if we think it doesn't have the same value, we wouldn't - which is how it should be.
That's one possible interpretation of voting on LW. It is not the only one possible. Do you think one can apply terms like "correct" or "wrong" to these interpretations?
Some people think in terms of people behind the comments and not comments themselves. They think that downvotes cause sadness for a person who was downvoted and they use their upvote as a consolation, as an attempt to cheer a downvoted person up.
"Strategic" voting is pretty much unavoidable, since voting has some cost (however mild). It makes sense to vote when you think it will make a useful contribution, by expressing a different POV than other LessWrong contributors would. Does this make scores less representative? It's not clear that it does - how many people would care if some unambiguously good comment is at, say, +17 as opposed to +19 because some users just didn't bother to vote it up?
I've seen this happen with non-AA posts, too. Specifically, I'm thinking of buybuydandavis' replies to me in this thread (and I think the actual comment linked too, but I'm not sure about that). I currently think (~75%) that it's not AA himself doing it. Eugine Nier seems more likely.
Do you think Eugine Nier would think that the posts are valuable?
I don't have a strong model of either of them. But Eugine is known to abuse the voting mechanism with alts, and I generally expect that most people don't do that. I also find it plausible that Eugine would mass-upvote those posts just to be a douche, even if he didn't particularly care for them.
Downvoting for stating a conjecture as certainty. Insulting language doesn't help, either.
If those timings are correct, then it seems like very strong evidence for something highly improper going on. (I agree that that's not the same as advancedatheist being responsible for it.)
I understand the objections. It is certainly true that it is possible there is a third party that has been consistently doing this same thing selectively to his heavily downvoted posts in particular for over a year. I just don't find it particularly likely.
Given access to the raw data of who upvotes what and at what time, an algorithm should be able to auto flag sockpuppets, at least until the sockpuppets get wiser and start upvoting at different times of day. Looking for lots of accounts with similar IP addresses is a strategy too, but proxies could be a problem.
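One toy sketch of such a flagging pass (everything here is made up for illustration: the data format, the thresholds, and the function names; only admins have access to the real vote log, and a real pass would also look at IP addresses and vote polarity):

```python
from collections import defaultdict

def suspicious_pairs(votes, window_sec=600, min_overlap=5):
    """Flag pairs of accounts that repeatedly vote on the same comment
    within a short time window of each other.

    `votes` is a list of (account, comment_id, timestamp_sec) tuples --
    a hypothetical stand-in for the raw vote log."""
    by_comment = defaultdict(list)
    for account, comment, t in votes:
        by_comment[comment].append((account, t))

    overlap = defaultdict(int)  # pair -> count of near-simultaneous co-votes
    for entries in by_comment.values():
        for i, (a1, t1) in enumerate(entries):
            for a2, t2 in entries[i + 1:]:
                if a1 != a2 and abs(t1 - t2) <= window_sec:
                    overlap[tuple(sorted((a1, a2)))] += 1

    return [pair for pair, n in overlap.items() if n >= min_overlap]
```

As noted, sockpuppets that spread their votes across different times of day would slip past a simple timing filter like this, so it's a first pass rather than a verdict.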
I haven't done anything to "abuse" the voting system, and you should retract your accusation because you have no evidence of that. I don't understand how my posts can gain so many upvotes in such a short time.
Do you believe that those posts that receive massive downvotes are healthy for LW? Otherwise why do you continue posting them?
Speaking for myself, I find most of his contributions relevant and interesting.

The question was specifically about the ones that get lots of downvotes. That is, the ones where he's riding his hobbyhorse of complaining about the phenomenon of men not getting any sex even though they'd like to, and specifically the fact that he is in that situation. Do you find those relevant and interesting?

(Most recent examples, in reverse-historical order: one, two, three (though that one only kinda fits the pattern), four, five.)

From the net karma and the percent-positive ratio one can compute the number of votes, approximately. (Approximately, because the ratio is only reported to the nearest 1%.) As of this moment, these five posts have received at least the following numbers of votes:

  • 21 up, 25 down (46 total)
  • 20 up, 22 down (42 total)
  • 10 up, 11 down (21 total)
  • 6 up, 6 down (12 total)
  • 11 up, 14 down (25 total)

These are minimum numbers; e.g. the first (-2 total, 48% positive) is also consistent with 32 up, 34 down (66 total). 20 is an extraordinary number of downvotes to receive, but as far as I know, there's no karma minimum required for upvotes. One might think about changing that. I have to wonder how many accounts there are whose sole activity has been to upvote him.
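The back-of-the-envelope search this describes can be automated. A sketch, assuming only that the displayed percentage is the up/total ratio rounded to the nearest whole percent (the exact rounding convention the site uses is an assumption here):

```python
from fractions import Fraction

def min_votes(net, pct_positive, max_total=10000):
    """Smallest (up, down) vote counts consistent with a net karma score
    and a percent-positive figure rounded to the nearest whole percent."""
    for total in range(max(abs(net), 1), max_total + 1):
        if (total + net) % 2:          # up = (total + net) / 2 must be whole
            continue
        up = (total + net) // 2
        down = total - up
        if up < 0 or down < 0:
            continue
        if round(Fraction(100 * up, total)) == pct_positive:
            return up, down
    return None

print(min_votes(-2, 48))  # minimum counts for a post at -2 net, 48% positive
```

Continuing the loop past the first hit would enumerate the other consistent counts, such as the 66-total possibility mentioned above.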
Could we ask an admin to make a graph of all users on LW, with edges saying how many posts of one user another has upvoted, and all name labels removed except advancedatheist's? The numbers would have to be shuffled enough that no group of people could use public karma counts and their knowledge of whom they upvoted to gain too much info that ought to be anonymous. Do we have a cryptography expert who can think of an algorithm that would work for that? Or the admins could leave out the shuffling/delabeling and only examine the graph to see whether the situation is reasonable.

We could surely ask. Experience suggests that asking for such things is futile, I think mostly because the LW database is difficult to work with and the Tricyclists have little time (or enthusiasm, or something) for doing things to LW that require admin access.

That seems way too much work for a little bit of internet drama.
Basically, that's work that gets done not by asking an admin to do it, but by somebody writing the necessary code (the system is open source) and then submitting that code to be run against the database.
I seem to remember that there is a way to access the latest (simplified) database dump without admin access. Don't remember where or whether it shows vote sources though.
If there are any such accounts, I would regard that as strong evidence of some kind of malfeasance. Note that advancedatheist vigorously denies any sort of abuse of the system and says he doesn't know how those comments got so many upvotes.
I have also upvoted a significant number of his posts, esp. if they were 'excessively' downvoted. I agree that there is a common theme and that he repeats himself, but one could read that charitably as providing context for his posts, which are not always about the same thing but highlight different, albeit tangential, aspects of some general topic.

Heh. Andrew Gelman of the Bayesian Data Analysis textbook discovers Yvain.


My employer changed their donation matching policy such that I now have an incentive to lump 2 years' donations into a single year, so I can claim the standard deduction during the year that I don't donate, thereby saving around $1200 every 2 years. I've been donating between 10 and 12.5 percent for the last few years. This year I would be donating around 21%. Has anyone here been audited because they claimed a large fraction of their income as charitable contributions? How painful was the experience? I doubt it's worth paying $1200 to avoid, but I thought I'd ask.
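A rough sketch of the bunching arithmetic, with made-up figures (actual standard-deduction amounts and marginal rates depend on year and filing status, and none of these numbers are from the comment above):

```python
# Hypothetical figures for illustration only.
STD_DEDUCTION = 6300      # assumed single-filer standard deduction
MARGINAL_RATE = 0.25      # assumed marginal tax rate
donation = 12000          # annual charitable giving

# Itemize every year: the standard deduction is "wasted" whenever
# the donation alone already exceeds it.
yearly = 2 * MARGINAL_RATE * max(donation, STD_DEDUCTION)

# Bunch two years into one: itemize the doubled donation once and
# take the standard deduction in the off year.
bunched = MARGINAL_RATE * (2 * donation + STD_DEDUCTION)

print(bunched - yearly)   # extra tax saved per two-year cycle: 1575.0
```

With these numbers the two-year saving works out to exactly MARGINAL_RATE * STD_DEDUCTION, which is why the benefit of bunching is insensitive to the size of the donation once it exceeds the standard deduction.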

Julia and I donate 50% and haven't been audited yet, but I expect we will at some point. We keep good records, which should help a lot.
I donated roughly that percentage several years ago & was not audited.

I've found I've become a smidge more conservative-- I was in favor of the Arab Spring, and to put it mildly, it hasn't worked well. I'm not even sure the collapse of the Soviet Union was a net gain.

Any thoughts about how much stability should be respected?

I'm not even sure the collapse of the Soviet Union was a net gain.

I think it was a gain for me, because it decreased the probability that the Soviet Union would attack my country. Many people from the former Soviet sphere of influence have the same opinion. Then again, many have the opposite opinion.

Also, as a result of collapse of Soviet Union, I am allowed to cross borders and attend LW meetups at Vienna. I know, it's pretty selfish to wish an entire empire to collapse only to improve my weekends, but still, I am selfishly happy.

The Arab Spring has worked quite well in the one country that actually had a well-established civil society prior to it, namely Tunisia. (Not coincidentally, this is also where the AS got its start.) All else being equal, I am in favor of having solid evidence about the factors that can actually lead to long-lasting social improvement in the Arab world and elsewhere.
People tend to conflate two different things by that phrase: 1) the fall of Communism; 2) the break-up of the Soviet Union into 15 republics. Which one are you asking about?
There is yet a third interpretation: the loss of control by the USSR over the other states of the Warsaw Pact. This is the aspect that is most clearly a good development. Added: In fact, I think that is the most common use (cf Viliam).
I think the question is WAY too general. The only possible answer is: "It depends".
What exactly does that mean? That you cheered when it happened? Or do you mean something more politically significant?
I cheered when it happened.
The interesting question is how did you decide the Arab Spring was a good thing. Was it because the New York Times told you so? Or was it a consequence of the prior that "More democracy is always good?"
There may have been some influence from the NYT, but it was also less tyranny as well as more democracy.
Democracy is a quite deceptive word. 74% of Egyptians want Egypt to be ruled via Sharia. Did the NYT narrative include Egyptians suddenly stoning homosexuals, which a majority of that country supports, or the new government not representing the views of the Egyptian population? As far as I remember, not really. It had the idea that Western democracy, with people who hold Western values, had suddenly come to Egypt, without really thinking it through.
"Less tyranny" isn't the same thing as "more democracy".
I'm not sure that I know what's meant by "less tyranny".
Some governments are more abusive than others, and governments which are very abusive tend not to be democracies.
What do you mean by "abusive"? Democracies don't have inherent protections for minorities. Do you believe that the Pakistani government was less abusive than pre-revolution Egypt?
I can't speak for Nancy, but my own reaction to the Arab Spring was something like "oh, that looks like a good thing if it actually works out rather than leading to more repression in the end", and it was a consequence of a prior that resembles the one you describe but contains less straw: "More democracy is usually good, other things being equal". [EDITED to add: I mention this only because I find it striking how the two possibilities you mention are both, if you'll pardon my directness, rather stupid[1], and I'm wondering on what basis you assume that Nancy's reasons were stupid ones.] [1] Meaning "it would be rather stupid to decide on that basis" rather than "it is stupid to think that someone else might decide on that basis". And of course "stupid" is a strong word; believing whatever you read in the NYT isn't really that bad a strategy. But I'm sure you see what I mean.
This is an entirely generic attitude, suitable for everything that claims to have a noble aim in mind. It doesn't look like a workable prior, given that other things are never equal. Looks like a hedged version of "the expected value of more democracy is more good". I don't think so. Nancy is not an expert in Arab politics -- she relies on the opinions of others. Given this, accepting the prevailing opinion of the media (of the appropriate political flavour) is an entirely normal thing and happens all the time. "There is another coup in Backwardistan? The newspaper I read says it's bad? Oh, I guess it must be so." Ditto with using general priors when you can't, or can't be bothered to, analyze the situation yourself.
Nope. For instance, abstinence-only sex education claims to have in mind the noble end of preserving the virtue of the young. I do not particularly hope that it succeeds in its aims, because I disagree about their nobility. Regarding what the "Arab Spring" was trying to do as a noble end (as opposed to one merely claimed to be noble) says something not altogether trivial about the values of the person who so regards it.
I cheer when there's a hot summer day but that doesn't mean that I endorse politics that lead to more hot summer days. Cheering mostly isn't a very political action and it's not very helpful to think of it in that way.
Cheering says something about what I expect to work out well.
In some sense it does. People, however, don't cheer for sports teams because they have specific expectations. Most cheering is by its nature very tribal.
For a consequentialist this is a question for historians and those who model historical what-ifs (psycho-historians? Hari Seldon, where are you?). There are multiple possibilities that I can think of offhand:

1. a revolution/regime change/instability may have some negative or positive effect in the near term, but no measurable long-term effect anywhere
2. there are some long-term positive/negative effects locally, but none globally
3. there are both local and global effects, positive and/or negative

A historical analysis can only get you so far, as it is hard to come up with controlled examples. Was the US revolution similar to the French revolution? To the Russian revolution? To Spartacus' uprising? To the Chinese dynastic revolts? Would the US have been better off peacefully separating from Great Britain, like Australia and Canada? Did the horrors of World War II scare Europe into peace? Was the Holocaust a net good for the Jews, since it led to the creation of Israel and the rise of Jewish influence in the US and in the world? (Assuming either of those is beneficial. Most Arabs would disagree.) Even the seemingly clear-cut "good" cases, like the end of Apartheid in South Africa, eventually resulted in rising crime rates in the country. To misquote a famous historian, "History is just one damned thing after another".

My current position on the issue is that any uprising is only worth considering if you can reasonably expect near-term positive effects for the group you care about, because there is currently no way to estimate long-term effects, and you cannot hope to be honest about the welfare of people you don't care about. This position is an awkward one, since it means that Hitler's takeover of Germany was worth supporting at the time it happened, unless you cared about Jews, Gypsies and gays more than about ethnic Germans. It also means being against most armed revolts of uncertain prospects of success, since they necessarily lead to near-term
Lots! But it seems like if we start doing "yay stability" vs. "boo stagnation" we'll be at politics pretty quick.
Stagnation is actually a stable condition. It's "yay stability" vs. "boo instability," and "yay growth" vs. "boo stagnation."
Those are true words you wrote. I lounge corrected.
Not being in favor of the collapse of the Soviet Union seems to me a gigantic mistake. The threat of large-scale nuclear war is greatly reduced. Hundreds of millions of people live in a much less repressive environment. (If you don't believe that, consider that information was greatly restricted in the communist bloc, with communist propaganda concealing the sad truth that communist lives were far more circumscribed and poor compared to Western lives, and people were literally shot for trying to leave.) It would be interesting to poll people over the age of 45 or 50 who live in Eastern Europe to find out how many of them would not be in favor of getting out from behind the Iron Curtain.
I would be inclined to agree, but the B vs P comparison is a bit unsettling...
I definitely value it higher than the momentary high of getting to impose your values on others, which seems to be the opposite of the current US foreign policy.

This week on the slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/

  • AI - Orthogonality thesis, Bostrom's superintelligence, Pascal's mugging, Looking for the video of the Superintelligence panel at EAglobal.
  • Effective altruism - Blood donation, climate change
  • finance - Things to do with spare money; ongoing profit-making ventures
  • goals of lesswrong - considering reaching out to other similar groups to grow outreach; but we don't have a clear understanding of what we are yet.
  • human relationships - Hacking OKC, Dating sites, Tinder, B
...

A repeat: I posted this at the end of the last Open Thread, probably too late in its life for comments.

I'm planning on running an experiment to test the effects of Modafinil on myself. My plan is to use a three armed study:

  • Modafinil (probably 50mg as I am quite small)
  • B12 pill (as active control) or maybe Vitamin D
  • Passive Control (no placebo)

Each day I will randomly take one of the three options and perform some test. I was thinking of dual-n-back, but do people have any other suggestions?
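For the daily randomization step, a minimal sketch (the arm labels are just placeholders for the three options above; seeding by the date string makes the assignment reproducible so the log can be reconstructed at analysis time):

```python
import random

# Placeholder labels matching the three arms described above.
ARMS = ["modafinil_50mg", "b12_control", "no_pill"]

def todays_arm(date_string):
    """Pick the day's arm, deterministically seeded by the date so the
    assignment can be reconstructed later for analysis."""
    return random.Random(date_string).choice(ARMS)

for day in ["2015-10-12", "2015-10-13", "2015-10-14"]:
    print(day, todays_arm(day))
```

Note that independent daily coin flips can leave the arms quite unbalanced over a run of a few weeks; shuffling fixed blocks (e.g. one of each arm per three-day block) would guarantee equal sample sizes per arm.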

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV) and how much weight each agent will be given?

Nick Bostrom's Superintelligence mentions that it is an open problem as to whether AIs, non-human animals, currently deceased people, etc should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than that of others. However, Bostrom does not actually answer these questions, other than slightly advocati...

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV), and how much weight each agent will be given?

I don't think anyone has a satisfactory solution to what is inherently a political question, and I think people correctly anticipate that analyzing it through the lens of politics will lead to unsatisfying discussions.

Thinking of the prisoners-dilemma-with-access-to-sourcecode, an obvious strategy would be to allocate negentropy to agents that would employ the same strategy in proportion to the probability that they would have ended up in the position to allocate the universe's negentropy.
Presumably "employ the same strategy" should be interpreted loosely, as it seems problematic to give no consideration to agents who would use a slightly different allocation strategy. Thanks for the idea. I will look into it.

A new (for me) word: mathiness.

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

It's maybe worth saying that the term is clearly based on "truthiness".
Etymologically, yes, but conceptually I think it's more related to the ages-old idea of "dazzle 'em with bullshit".
Or, in SSC-ese, "Eulering".
Yep, an excellent connection.

I don't seem to be able to reply to a Gunnar Zarncke reply to my comment on another thread because of my low comment score.

How can I explain my comment and myself [to the extent that I can] to this resident of Germany?

BTW, my view of the world seems to be different from that of most of you.
Possibly it's because the mortality tables say that half the men born on the same day as me will be dead in 14 years, so my priorities may be different. Also, most of my life has been lived, so I'm not so much worried about the uncertainties that most of you seem to be. In fac...

This assumes that there is a high social opportunity cost to academics' time.
Not quite -- this assumes there is a high opportunity cost to high-IQ people being in academia.
A majority of people in academia don't strike me as actually that high-IQ. That does not mean their time couldn't be more valuable elsewhere.
Compared to what?
Compared to groups of other people selected for intelligence, like engineers, mathematicians or professional politicians. What I find remarkable about academics is that they seem to have much longer attention spans than any of these other groups. But in quick learning, logical reasoning or handling unfamiliar information, few academics impress me as much as a typical member of these other groups will. This is strictly my informal observation, but I've studied and worked in universities for 17 years now, so I do think it is a fairly informed one.
Does he know which portion is the waste of intelligence?
You can't know which. You can only infer from the overall effect I'd guess.
I agree. I was flippantly making a point along the lines of this quote. -John Wanamaker-

So, Stephen Hawking basically quotes Eliezer Yudkowsky almost verbatim, without giving him any credit, as usual: https://www.reddit.com/r/science/comments/3nyn5i/science_ama_series_stephen_hawking_ama_answers/


A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.


I think it's great, the ideas getting out is what matters. Whether Eliezer gets some credit or not, the whole reason he said this stuff in the first place was so that people would understand it, repeat it and spread the concept, and that's exactly what's going on. If anything, Eliezer was trying very early to optimize for most convincing and easily understandable phrases, analogies, arguments, etc. so the fact that other people are repeating them or perhaps convergently evolving towards them shows that he did a good job.

And really, if Eliezer's status as a non-formally educated autodidact or whatever else is problematic or working against easing the spread of the information, then I don't see a problem with not crediting him in every single reddit post and news article. The priority is presumably ensuring greater awareness of the problems, and part of that is having prestigious people like Stephen Hawking deliver the info. It's not like there aren't dated posts and pdfs online that show Eliezer saying this stuff more than a decade ago, people can find how early he was on this train.

What's the saying? Something like "When you're young, you worry people will steal your ideas, when you're old, you worry they won't."

So academia keeps people forever young.
Unsurprising if someone generated that independently. Even more unsurprising if an intelligent person does. Be more charitable.
As usual for Hawking, or for people quoting Eliezer, or?

Man, I want to try playing a game of Rationality Cardinality online, but the place is a wasteland. Anyone want to coordinate for some upcoming evening or something?

I've been trying to prove things more often because I haven't done it a lot and I'm interested in a mathy career. I started reading Sipser's Introduction to the Theory of Computation and came across a chance to try and prove the statement 'For every graph G, the sum of the degrees of all nodes in G is even.' I couldn't find other proofs online, so I thought I'd share mine here before I look at the book, especially because mine might be completely different and I wouldn't really know if it was any good.

A graph G equals the set of the set of nodes/vertices V... (read more)

FYI, this is called the sum of degrees theorem. In fact, the sum of degrees is not only an even number, but twice the number of edges in the graph. This is due to Euler, I think. He used the famous Koenigsberg bridges problem as a motivation for thinking about graphs. Good work on thinking about proofs, +1 to you.
I love that I can come to this website and have one of Judea Pearl's former students check my elementary graph-theoretic proofs. But really, thanks for the encouragement. I had also been wondering if it had a name.
Your operation for turning G into G' doesn't let you construct all graphs, e.g. K3 (the triangle graph) can't be formed like that. The rest of that paragraph is probably more dense than it needs to be. You're on the right track, but I can't quite tell if you actually rely on that construction.
Thanks for the feedback. I think you can construct all graphs and use it to prove the theorem if you prove that you can add an arbitrary number of additional edges and nodes to an arbitrary graph and keep the sum of the degrees of all nodes even, instead of just one additional node and one additional edge. I also see what you mean about this: I think the inductive hypothesis in the rest of that paragraph might be enough, and I just wrote down how I intuitively visualized the proof before that without realizing that it wasn't necessary (nor sufficient, I now know) for the argument to carry through. If you have an idea of how you would write the proof, I'd be interested in seeing it. I looked at the book and the proof is actually even less formal there.
Lemma: the sum of the degrees of the nodes is twice the number of edges. Proof: We proceed by induction on the number of edges. If a graph has 0 edges, then the sum of the degrees of the nodes is 0 = 2(0). Now, by way of induction, assume that, for all graphs with n edges, the sum of the degrees of the nodes is 2n; we wish to show that, for all graphs with n+1 edges, the sum of the degrees of the nodes is 2(n+1). Removing any edge leaves a graph with n edges, whose degree sum is 2n by hypothesis; restoring the edge adds 1 to each of its two endpoints' degrees, so the sum of the degrees of the nodes is (2n)+2 = 2(n+1). ∎ The theorem follows as a corollary. ---------------------------------------- If you want practice proving things and haven't had much experience so far, I'd recommend Mathematics for Computer Science, a textbook from MIT distributed under a free license, along with the associated video lectures *. To use Terry Tao's words, Sipser is writing at both level 1 and 3: he's giving arguments an experienced mathematician is capable of filling in the details of to form a rigorous argument, but also doing so in such a way that a level 1 mathematician can follow along. Critically, however, from what I understand from reading Sipser's preface, he's definitely not writing a book to move level 1 mathematicians to level 2, which is a primary goal of the MIT book. If you're looking to prove things because you haven't done it much before, I infer you're essentially looking to transition from level 1 to 2, hence the recommendation. A particular technique I picked up from the MIT book, which I used here, is that, for inductive proofs, it's often easier to prove a stronger theorem, since it gives you stronger assumptions in the inductive step. PM me if you want someone to look over your solutions (either for Sipser or the MIT book). In the general case, I'm a fan of learning from textbooks and believe that working things out for yourself without being helped by an instructor makes you stronger, but I'm also convinced that you need feedback from a human when you're first learning how to prove things. * The lectures follow an
I think it's actually cleaner to prove the theorem non-inductively (though I appreciate that what GS asked for was specifically a cleaned-up inductive proof). E.g.: "Count pairs (vertex,edge) where the edge is incident on the vertex. The number of such pairs for a given vertex equals its degree, so the sum equals the sum of the degrees. The number of such pairs for a given edge equals 2, so the sum equals twice the number of edges." (More visually: draw the graph. Now erase all of each edge apart from a little bit at each end. The resulting picture is a collection of stars, one per vertex. How many points have the stars in total?)
I really appreciate this comment, thank you. I've actually never studied automata, computability, or complexity before either, so that's really why I picked up Sipser. But I'm downloading your other recommendation now (just moved, mobile Internet only); I can certainly imagine that some books are more useful than others for learning proof, I just saw an opportunity to practice and see how my natural ability is. I'll try to include things more specifically for learning proof in my diet. I sure will PM you if I need some feedback (I expect to), thanks.
If I were doing it inductively, I'd go in the other direction, removing edges instead of adding them. Take a graph G with n>0 edges, and remove an edge to get G'. The degree sum of G' is two less than the degree sum of G (two vertices lose one degree each, or one vertex loses two degrees). Then induction shows that the degree sum is twice the edge count. There are probably simpler proofs, but having been primed by yours, this is the one that comes to mind. I feel like being completely formal is the sort of thing that you learn to do at the beginning of your math education, and then gradually move away from. But you move to a higher class of non-rigor than you started from, where you're just eliding bookwork rather than saying things that don't necessarily work. E.g. here I've omitted the inductive base case, because I consider it obvious that the base case works, and the word "induction" tells me the shape of the argument without needing to write it explicitly.
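For anyone who'd also like an empirical sanity check alongside the proofs, here's a minimal Python sketch (function names are mine, not from Sipser) that generates random graphs and verifies that the degree sum equals twice the edge count, mirroring the incidence-counting argument above:

```python
import itertools
import random

def degrees(n, edges):
    """Degree of each of the n vertices, given edges as (u, v) pairs."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def random_graph(n, p, rng):
    """Erdos-Renyi style graph: keep each possible edge with probability p."""
    return [(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p]

rng = random.Random(0)
for _ in range(1000):
    n = rng.randint(1, 12)
    edges = random_graph(n, 0.4, rng)
    # Each edge contributes exactly 2 to the total degree count.
    assert sum(degrees(n, edges)) == 2 * len(edges)
```

This is of course no substitute for the proof, but randomized checks like this are a cheap way to catch a mis-stated theorem before you spend time trying to prove it.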



(all caps in the original X-D)

P.S. This is a Just Another Psych Study, so any resemblance between its conclusions and reality is merely coincidental. Good for lulz, not too good for serious consideration. But it's funny :-)

Guessing the distribution before I look: Small-ish penalty for below-average intelligence, a flat line through average into slightly above average, then a small-ish penalty for above-average intelligence. ETA: Oh. No data provided. Pity.
So male intelligence does increase romantic attraction, but if all you want is a shag then you don't care about intelligence. This makes sense, and also makes the title a little misleading. Wouldn't a better approach be to look at okcupid profile reading level (as in, does their profile use long words and correct grammar) or answers to match questions such as "which is bigger, the sun or the moon?" and correlate this with how many messages they get? I suppose this wouldn't be very academic, but you could get a sample size of millions.
Some of the posts on OkTrends, the official OkCupid blog, have studied similar things.
Why do you believe that correlates with intelligence? It might very well correlate with willingness to provide contrarian answers.
The usual caveats about small and culturally limited studies apply, not to mention that it's a hypothetical behavior study. This being said, it's worth noting that a lot of mating venues have so much background noise that conversation is discouraged.
Well, it certainly agrees with the anecdotal evidence.
Not with mine. My anecdotal evidence says that high IQ does NOT compensate for a variety of other deficiencies (from personal hygiene to self-confidence issues) but otherwise it's very useful :-)
In which case there's still the issue that it seems to correlate with said deficiencies.
I think it's more because of restriction-of-range effects (people who have both low IQ and said deficiencies are likely to be in their parents' basements so we don't usually see them, and people who have high IQ and no such deficiencies are likely to be in places like DC so we don't usually see them either) than because they actually correlate in the whole population.
Well, autism causes both for starters.
What? EDIT: Do you mean the technical meaning or the colloquial meaning? The former aren't that smart on average...
Autism is a spectrum. Here I mean the ones whose social skills aren't so bad it's impossible to meaningfully interact with them. EDIT: fixed typo.
And having social skills so bad it's impossible to meaningfully interact with you causes high IQ? What?
Citation needed. A paper titled "High IQ is correlated with the inability to learn to use a shower" has got to have a decent chance of getting an Ig Nobel X-)
It's worth noting that it cites an existing study titled "Intelligence and mate choice: intelligent men are always appealing"
Wild hypothesis: is it possible that the Flynn effect has pushed everyone past the range where intelligence was a factor in sexual attractiveness? Maybe it's more important to mate with a 90 IQ rather than an 85 IQ, but after 100 IQ every male seems equal.
It's interesting how you are not conditioning this on the IQ of the girl...
Sure, you can always add a parameter to make the model more complex, if needed. How is that interesting? How would you have conditioned the preference on women's IQ?

I've received several PMs from different users that would like to continue a discussion, but would not do it publicly -- they were afraid to be received negatively, or in other words, "negative karma".

I thought people on LW would be able to look past insignificant and shallow virtual ratings whose meaning I, personally, cannot tell. My own karma fluctuates between -15 and 15 and I'm perfectly fine with that; but other people seem to view it as some steps toward hell.

I thought I could escape all the usual nonsense surrounding discussions here, but I think I might be wrong.

I'd enjoy a conversation with anyone who thinks they have a useful comment (on any topic) which is un-postable because it would be received negatively. I'd like to explore whether it's about avoiding negative karma points, or fear of unkind followup comments, or wanting their user page to have only "important" things, or something else. I'd like to have it in public, though - if you fear any of these things (or other reasons I haven't thought of), make a throwaway/burner account and use that.
Karma scores mean that the community doesn't welcome a certain post. If you want LessWrong to be enjoyable for all participants, it's reasonable to focus on writing posts that are likely to have high karma. Apart from that, you are a person who hides behind an anonymous handle that is expendable to you. Other people on LW don't hide; they have their identities attached to what they write, with the possibility of real-life effects.
You could treat it as a failed gut check and tell 'em to go grow a pair and then brass-plate it. Or you can think about it as image management. Reputations are delicate things and are more than just your karma score.
Once again, a point I want to emphasize: I thought that at LessWrong people would be able to overcome things such as "image management" and "reputation". In my view those things are just a few steps away from not asking a question or not presenting an opinion. Being scared of being wrong won't make your situation any better. Do tell me if this isn't the case, or this isn't supposed to be the case.
Unless Lesswrong exists in a vacuum, it has no or almost no power to overcome those things. Even if you didn't worry about being judged by people on lesswrong, the risk of being judged by someone elsewhere online still exists.
Why do you think this would be a good thing? Reputations are a valid concept, highly useful in social interactions. If you care about social interactions, you should (= it's rational to) care about your reputation which leads directly to the image management. The real issue is the trade-off between maintaining a desirable reputation and the costs of doing so (e.g. not asking questions for the fear of looking stupid).
Some of us are exhausted of the status games of meatspace life and just want to dissect ideas.
You can choose groups with different status indicators and different ways of measuring reputation, but you probably can't find any human communication (and I'd argue this applies intra-personally as well as inter-; you're dealing with past-you and constraining future-you RIGHT NOW) that doesn't involve status, power, and image.
No one forces you to play status games. If you don't care, you don't care so just dissect ideas and ignore the rest. LessWrong was talking about other people being too concerned with their image. If you don't have this problem, well, there is no problem, is there?
For myself (and from what I can tell of some others) I've chosen to accept and incorporate my humanity and the complexity of human social interactions, rather than "overcome", which is hard to distinguish from "denial of reality". Image management, and especially self-image management, are important and difficult. They're going to color all human interactions, whether you or not you prefer that.
It may also not be that they think that they are talking about an unwelcome subject, but only that they recognize that not every conversation needs to be held publicly and recorded for posterity. If they want to talk about the weather, they should not do it in a thread -- not because it will be downvoted, but because it is rather rude to broadcast every conversation when we have a number of perfectly acceptable ways to hold conversations without distracting the entire site. Of course, if "negative karma" was what they were really worried about, and this is not just your interpretation, it may be useful to hold conversations out in the open. At best you will be happily surprised, and at worst you will have an audience to encourage you to do your best when talking about questionable subjects.

In the US, 'Professor' seems to refer to several classes of academic rank that are more junior ranks in the Australian system, where Professor denotes a full professor specifically. Are you aware of anyone who tried to assess the signalling benefit or cost of seeking a U.S. professorship instead of a local academic position for career capital, authority or grants?

I am not aware of any such cases despite having been working in US and now UK academia for the past 20 years. "Professor" in this sense tends to be a title of address rather than a job title: US students have learned that in most circumstances it is appropriate to refer to an instructor as Professor (whether assistant professor, associate professor or full professor... or indeed in many cases even university teaching staff who do not have a PhD yet); in the UK this is only appropriate for full professors, and many still prefer to be addressed by first names. Career capital, authority, grants: anyone who matters in the UK is likely to be aware of differences in job titles and the approximate mapping between them (i.e. UK lecturer = US assistant professor, UK reader = US associate professor, professor = professor). Grants: while biased toward established academics, this is more about publications, other grants and profile, not the title itself. Sometimes people use honorary appointments the way you suggest, though. I know of one person who got updated business cards to add "Honorary Professor, X University" once a meaningless honorary appointment (granting library access and little more) was approved. I have also seen cases that could work in the opposite direction: UK lecturers who want to maintain a US profile sometimes qualify their job title for the US market, as in "Lecturer (US equivalent = assistant professor)". This is because many US universities use "lecturer" as a job title for adjunct teaching staff (lower status than appropriate).
...and then, there are the Germans :-)

Potential crank warning; non-physicist proposing experiments. Sorry if I'm way off-base here, please let me know where I've gone wrong.

I was contemplating MWI and dark matter, and wondered if dark matter was just the gravitational influence of matter in other universes, where the other universes' matter is distributed differently to ours. Google tells me that others have proposed theories like this, but I can't find if anyone has ever tried to test it.

Has anyone ever tried to test this directly? We have gravimeters sensitive enough that one "detected ... (read more)

MWI doesn't work that way. Universes are close iff the particles are in about the same place.
Is this the sort of experiment in which you would need macroscopically different 'universes' separated from each other by single quantum events, such that the thermal noise/interaction with the environment of the large experimental mass must be dealt with?
There have been some similar ideas, but not related to MWI - as DanielLC says, the "distance" that separates two different states of the universe does not behave like we commonly imagine distance between "parallel worlds" to behave. However, something that can behave like that is the extra spatial dimensions proposed by string theory, brane theory, etc. See wikipedia. I'm sure someone has proposed this as an explanation for dark matter.

How do you convince people of Cromwell's rule? (the use of prior probabilities of 0 or 1 should be avoided)

Next time I discuss degrees of belief with my local atheist group, I'm going to try this one: absolute certainty = faith. That will surely shock them enough into abandoning absolute certainty.
Judging by my experience with atheists... no it won't. Your group might be better. Those I've encountered who hold to absolute certainty believe their absolute certainty is justified on the basis that God is impossible, or infinitely unlikely, or some similar line of argument. ETA: That is, expect some line of argument about your certainty about the nonexistence of a triangle with total internal angles of 240 degrees, or something like that.
The first time I tried to use this argument, my test subjects switched from an exact 0 to a ridiculously low percentage intended to mean basically the same. Now I notice that your comment reveals that I may have committed an inconsistency with respect to a chain of comments I wrote before on the same topic. This has me thinking. Eliezer said, I guess a fully consistent position would sound like this: "I'm very confident that the traditional description of God conflicts with itself, which makes the existence of God extremely unlikely in this universe, but of course there's always some likelihood that this universe doesn't work the way I suppose it does, or logic has a loophole that nobody has seen before, or omnipotence doesn't really care for impossibilities, and as a result God is real. But so far this doesn't look like the type of universe where that would happen."
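The arithmetic behind Cromwell's rule is worth showing explicitly: in Bayes' theorem, priors of exactly 0 or 1 are fixed points that no finite amount of evidence can move, while even an absurdly small nonzero prior can be. A minimal Python sketch (function name and numbers are mine, purely illustrative):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem, from P(H) and the likelihoods P(E|H), P(E|~H)."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Evidence that is 1000x likelier under H than under ~H.
for prior in (0.0, 1e-9, 0.5, 1.0):
    print(prior, "->", posterior(prior, p_e_given_h=0.5, p_e_given_not_h=0.0005))
```

However many times you feed in new evidence, the 0 and 1 priors never budge, which is exactly why treating "absolute certainty" as a probability assignment breaks Bayesian updating, while the 1e-9 prior moves by roughly a factor of 1000 per observation.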

Why isn't microcurrent therapy more common for pain management?

  1. When you were a child did you prefer to play the hero or the villain in pretend and role-playing games?

  2. Today, are your favourite fictional characters heroes or villains?

May be worthwhile to ask this on the Polling Thread.

I wanted to sow some spinach and lettuce this month because it's the right time, but all these aphids are eating my broccoli. Not hard to get rid of, but so disgusting. Don't even want to eat it now. Growing your own food is so hard. Thank god for economic specialisation.