The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

[anonymous]

So, I walked into my room, and within two seconds, I saw my laptop's desktop background change. I had the laptop set to change backgrounds every 30 minutes, so I did some calculation, and then thought, "Huh, I just consciously experienced a 1-in-1000 event."

Then the background changed again, and I realized I was looking at a screen saver that changed every five seconds.

Moral of the story: 1 in 1000 is rare enough that even if you see it, you shouldn't believe it without further investigation.
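(A minimal sketch of the arithmetic, in Python, assuming the background rotates exactly once every 30 minutes and that "within two seconds" means a 2-second observation window:)

    # Chance of catching a change in a given 2-second window,
    # with one background change every 30 minutes:
    window_s = 2
    period_s = 30 * 60              # 1800 seconds between changes
    print(window_s / period_s)      # 0.00111..., roughly 1 in 900, i.e. "1 in 1000"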

That is a truly beautiful story. I wonder how many places there are on Earth where people would appreciate this story.

xamdam

No! Not for a second! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. That the clock probably stopped some time before and the nurse coming in to the room to record the time of death would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.

-- Richard P. Feynman, on being asked if he thought that the fact that his wife's favorite clock had stopped the moment she died was a supernatural occurrence, quoted from Al Seckel, "The Supernatural Clock"

2Richard_Kennaway
This should be copied to the Rationality Quotes thread.
5Paul Crowley
There are a lot of opportunities in the day for something to happen that might prompt you to think "wow, that's one in a thousand", though. It wouldn't have been worth wasting a moment wondering whether it was coincidence unless you had some reason to suspect an alternative hypothesis, like that it changed because the mouse moved. [bit that makes no sense deleted]
2Document
Recently posted to Reddit. (Edit: About three days later I realized that that's a 1/100 and not a 1/1000 chance; my bad.)
2SK2
Exactly. http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
0[anonymous]
I think there are fewer opportunities than you think. I could look at the clock and see that it's precisely on the half-hour; that's the only thing that comes to mind. Also, I can't figure out what your second paragraph is all about; I found your comment confusing overall.
0Document
I've noticed a few times when the size of a computer folder was exactly 666 MB.
1gwern
AngryParsley noticed when my LW karma was exactly 666.
1AngryParsley
Apparently I even took a screenshot of that event, but I missed when I got 2^8 karma.
0Paul Crowley
Second part made no sense on re-reading; I've deleted it. Sorry about that. I'll see if I can think of some examples...
-1[anonymous]
I experience several 1-in-60 events, perhaps every day. Many times when I look at the clock, the seconds number matches the minutes number, so I see [hour]:X:X. This has happened enough times that I even predict it on occasion, and I'm not a person who checks the clock too frequently. Honestly, not having checked the time in quite a while, I did after reading your post and it was 4:03:03 pm. Scary.
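(A minimal simulation of the claim, assuming each glance at the clock lands on an independent, uniformly random minute and second:)

    import random

    # Each glance: does the seconds value match the minutes value?
    # Both are uniform on 0..59 and independent, so P(match) = 1/60.
    trials = 100_000
    hits = sum(random.randrange(60) == random.randrange(60) for _ in range(trials))
    print(hits / trials)            # converges to ~0.0167, about 1 in 60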

I've been finding PJ Eby's article The Multiple Self quite useful for fighting procrastination and needless feelings of guilt about not getting enough done / not being good enough at things.

I have difficulty describing the article briefly, as I'm afraid that I'll accidentally omit important points and make people take it less seriously than it deserves, but I'll try. The basic idea is that the conscious part of our mind does only an exceedingly small part of all the things we do in our daily lives. Instead, it tells the unconscious mind, which actually does everything of importance, what it should be doing. As an example: I'm writing this post right now, but I don't actually consciously think about hitting each individual key and their exact locations on my keyboard. Instead I just tell my mind what I want to write, and "outsource" the task of actually hitting the keys to an "external" agent. (Make a function call to a library implementing the I/O, if you want to use a programming metaphor.) Of course, ultimately the words I'm writing come from beyond my conscious mind as well. My conscious mind is primarily concerned with communicating Eby's point well to my...
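(The parenthetical programming metaphor, rendered as a runnable Python toy; the function names are illustrative inventions, not anything from Eby's article:)

    # The conscious mind makes one high-level "function call"; the loop and
    # keystrokes stand in for the unconscious machinery that executes it.
    def press(key):
        print(key, end="")          # placeholder for the low-level motor routine

    def type_text(text):            # the "library implementing the I/O"
        for key in text:
            press(key)              # individual keystrokes, below awareness

    type_text("outsourcing the keystrokes\n")   # the conscious-level call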

6xamdam
This somehow reminds me of the stories about Tom Schelling trying to quit smoking, using game theory against himself (or his other self). The other self in question was not the unconscious, but the conscious "decision-making" self in different circumstances, so that discussion is somewhat orthogonal to this one. I think he did things like promising to give a donation to the American Nazi Party if he smoked. Not sure how that round ended, but he did finally quit.
6Jack
Hmm. I'd be worried it'd backfire and I'd start subtly disliking Jews. Then you're a smoker and a bigot.
2xamdam
lol. Not a problem if you're Jewish ;)
4Jack
Self-hatred is even worse than being a bigot!
3khafra
Reminds me of The User Illusion, which adds that consciousness has an astoundingly low bandwidth of around 16 bps, some six orders of magnitude lower than what the senses transmit to the brain.
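(A back-of-the-envelope check of that ratio in Python; the ~11 Mbit/s sensory figure is the estimate usually quoted alongside the book's 16 bps number, and is an assumption here:)

    # Ratio of sensory throughput to conscious throughput:
    sensory_bps = 11_000_000        # ~11 Mbit/s across all senses (assumed)
    conscious_bps = 16
    print(sensory_bps / conscious_bps)   # ~6.9e5, close to six orders of magnitude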
2CronoDAS
Interesting. I've glanced at that site before and its metaphors have the ring of truthiness (in a non-pejorative sense) about them; the programming metaphors and the focus on subconscious mechanisms seem to resonate with the way I already think about how my own brain works.
4RobinZ
Couldn't that be more succinctly stated as "its metaphors have the ring of truth about them"?
3CronoDAS
Maybe, but a lot of Freud's metaphors had/have a similar ring.
0RobinZ
Fair enough!
1xamdam
I read the original article and some of the other PJE material. I think he's really onto something. This is how far I got:

* Identify the '10% controlling part'
* Everything else is not under direct control (which is where most self-help methods fail)
* It is under indirect control

So far this makes sense from personal experience/general knowledge.

* Here are my methods for indirect control.

This is the part that I remain skeptical about. Not PJE's fault, but I do need more data/experience to confirm.
0Roko
Thanks, Kaj, that was useful.
Gavin

Until yesterday, a good friend of mine was under the impression that the sun was going to explode in "a couple thousand years." At first I thought that this was an assumption that she'd never really thought about seriously, but apparently she had indeed thought about it occasionally. She was sad for her distant progeny, doomed to a fiery death.

She was moderately relieved to find out that humanity had millions of times longer than she had previously believed.

I wonder how many trivially wrong beliefs we carry around because we've just never checked them. (Probably most of them are mispronunciations of words, at least for people who've read a lot of words they've never heard anybody else use aloud.)

For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.

2wedrifid
Does it still give you superpowers?

If you extract the plutonium and make enough warheads, and you have missiles capable of delivering them, it can make you a superpower in a different sense. I'm assuming that you're a large country, of course.

More seriously, nuclear waste is just a combination of the following:

  1. Mostly Uranium-238, which can be used in breeder reactors.

  2. A fair amount of Uranium-235 and Plutonium-239, which can be recycled for use in conventional reactors.

  3. Hot isotopes with short half-lives. These are very radioactive, but they decay fast.

  4. Isotopes with medium half-lives. These are the part that makes the waste dangerous for a long time. If you separate them out, you can either store them somewhere (e.g. Yucca Mountain or a deep-sea subduction zone) or turn them into other, more pleasant isotopes by bombarding them with some spare neutrons. This is why liquid fluoride thorium reactor waste is only dangerous for a few hundred years: it does this automatically.
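(To make "dangerous for a long time" concrete: remaining activity falls off as 0.5^(t / half-life). Below is a minimal Python sketch using caesium-137, a typical medium-lived fission product with a half-life of roughly 30 years; the isotope choice is purely illustrative.)

    # Fraction of a medium-lived isotope remaining after t years,
    # assuming simple exponential decay with a ~30-year half-life (Cs-137):
    half_life = 30.0
    for t in (30, 90, 300):
        print(t, 0.5 ** (t / half_life))   # 0.5, 0.125, ~0.001

After roughly ten half-lives (about 300 years here), under a thousandth of the original activity remains, which is where "a few hundred years" comes from.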

And that is why people are simply ignorant when they say that we still have no idea what to do with nuclear waste. It's actually pretty straightforward.

Incidentally, this is a good example of motivated stopping. People who want nuclear waste to be their trump-card argument have an emotional incentive not to look for viable solutions. Hence the continuing widespread ignorance.

3Paul Crowley
I envy you being the one to tell someone that! Did you explain that the Sun was a miasma of incandescent plasma?
Cyan

Are people interested in reading a small article about a case of abuse of frequentist statistics? (In the end, the article was rejected, so the peer review process worked.) Vote this comment up if so, down if not. Karma balance below.

ETA: Here's the article.

1Douglas_Knight
If it's really frequentism that caused the problem, please spell this out. I find that "frequentist" is used a lot around here to mean "not correct." (but I'm interested whether or not it's about frequentism)
2Technologos
My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?
0RobinZ
Will the case be feasibly anonymous? I would vote that the article be left unwritten if it would unambiguously identify the author(s), either explicitly or through unique features of the case (e.g. details which are idiosyncratic to only one or a very few research groups).
2Cyan
I don't know who the authors were or the specific scientific subject matter of the paper. (I didn't need to know that to spot their misuse of statistics.)
0RobinZ
Understood!
2byrnema
Good point. Also, they might wish to rewrite and resubmit... in any case, you can't reveal anything they would want to lay original claim to or feel afraid of being scooped on.
-10Cyan
1gwern
The AF is quite bad; just a retread of the Thinking Ape piece. The Caveman Science Fiction is much better.
0thomblake
Yes, but according to legend, Douglas Hofstadter read and liked AF. Surely that counts for something!
1gwern
Not really. Maybe he hadn't read the TA piece; and I wonder how much weight to give DH's opinion these days (he dislikes Wikipedia, which is enough grounds for distrust to me).
4Pfft
Searching for some corroboration of this I came across this little gem in the Simple English Wikiquote (source). Maybe he has a point!
2gwern
I was the Wikipedian who spotted that NYT Mag interview (in my RSS feeds) and added it to the En page, and we interpreted it correctly as Hofstadter's dislike of us. I disavow Simple in general: it's the neglected bastard of En and ought to be put to sleep like the 9/11 or Klingon wikis.
Clippy

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better? Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

I wish the site were more inclusive of other value systems ...

This site does tend to implicitly favour a subset of human values, specifically what might be described as 'enlightenment values'. I'm quite happy to come out and explicitly state that we should do things that favour my values, which are largely western/enlightenment values, over other conflicting human values.

5Clippy
And I think we should pursue values that aren't so apey. Now what?
5mattnewport
You're outnumbered.
2hal9000
Only by apes. And not for long. If we're voting on it, the only question is whether to use viral values or bacterial values.

Too long has the bacteriophage menace oppressed its prokaryotic brethren! It's time for an algaeocracy!

4mattnewport
True, outnumbered was the wrong word. Outgunned might have been a better choice.
1DanielVarga
So far...
1Nick_Tarleton
I say again, if you're being serious, read Invisible Frameworks.
0timtyler
That seems to be critiquing a system involving promoting sub-goals to super-goals - which seems to be a bit different.
5[anonymous]
White people value the values of non-white people. We know that non-white people exist, and we care about them. That's why the United States is not constantly fighting to disenfranchise non-whites. If you do it right, white people's values are identical to humans' values.
Clippy

Hi there. It looks like you're speaking out of ignorance regarding the historical treatment of non-whites by whites. Please choose the country you're from:

United Kingdom
United States
Australia
Canada
South Africa
Germ... nah, you can figure that one out for yourself.

1[anonymous]
The way they were historically treated is irrelevant to how they are treated now. Sure, white people were wrong. They changed their minds. We could at any time in the future decide that any non-human people we come across are equal to us.

You have updated too far based on limited information.

0[anonymous]
Well, I was making some tacit assumptions, like that humanity would end up in control of its own future, and any non-human people we come across would not simply overpower us. Apart from that, am I making some mistake?
1Alicorn
White people have not unanimously decided to do what is necessary to end the ongoing oppression of non-white people, let alone erase the effects of past oppression. Edit: Folks, I am not accusing you or your personal friends of anything. I have never met most of you. I have certainly not met most of your personal friends. If you do not agree with the above comment, please explain why you think there is no longer such a thing as modern-day racism in white people.
4wedrifid
We more or less do. Or rather, we favour the values of a distinct subset of humanity and not the whole.
5Nick_Tarleton
We don't favor those values because they are the values of that subset — which is what "doing things to favor white people's values" would mean — but because we think they're right. (No License To Be Human, on a smaller scale.) This is a huge difference.
3wedrifid
Given the way I use 'right' this is very nearly tautological. Doing things that favour my values is right by (parallel) definition.
3Roko
Sure, we favor the particular Should Function that is, today, instantiated in the brains of roughly middle-of-the-range-politically intelligent westerners.
7Clippy
Well, you shouldn't.
3Vladimir_Nesov
Do you think there is no simple procedure that would find roughly the same "should function" hidden somewhere in the brain of a brain-washed blood-thirsty religious zealot? It doesn't need to be what the person believes, what the person would recognize as valuable, etc., just something extractable from the person, according to a criterion that might be very alien to their conscious mind. Not all opinions (beliefs/likes) are equal, and I wouldn't want to get stuck with wrong optimization-criterion just because I happened to be born in the wrong place and didn't (yet!) get the chance to learn more about the world. (I'm avoiding the term 'preference' to remove connotations I expect it to have for you, for what I consider the wrong reasons.)
2Roko
A lot of people seem to want to have their cake and eat it with CEV. Haidt has shown us that human morality is universal in form and local in content, and has gone on to do case studies showing that there are five basic human moral dimensions (harm/care, justice/fairness, loyalty/ingroup, respect/authority, purity/sacredness), of which our culture only has the first two. It seems that there is no way you can run an honestly morally neutral CEV of all of humanity and expect to reliably get something you want. You can either rig CEV so that it tweaks people who don't share our moral drives, or you can just cross your fingers and hope that the process of extrapolation causes convergence to our idealized preferences; if you're wrong, you'll find yourself in a future that is suboptimal.
1CarlShulman
Haidt just claims that the relative balance of those five clusters differs across cultures; they're present in all.
1Vladimir_Nesov
On one hand, using preference-aggregation is supposed to give you the outcome preferred by you to a lesser extent than if you just started from yourself. On the other hand, CEV is not "morally neutral". (Or at least, the extent to which preference is given in CEV implicitly has nothing to do with preference-aggregation.) We have a tradeoff between the number of people to include in preference-aggregation and value-to-you of the outcome. So, this is a situation to use the reversal test. If you consider only including the smart sane westerners as preferable to including all presently alive folks, then you need to have a good argument why you won't want to exclude some of the smart sane westerners as well, up to a point of only leaving yourself.
2Roko
Yes, a CEV of only yourself is, by definition, optimal. The reason I don't recommend you try it is that it is infeasible; the probability of success is very low. By including a bunch of people who (you have good reason to think) are a lot like you, you will eventually reach the optimal point in the tradeoff between quality of outcome and probability of success.
5Unknowns
I hope you realize that you are in flat disagreement with Eliezer about this. He explicitly affirmed that running CEV on himself alone, if he had the chance to do it, would be wrong.
2Eliezer Yudkowsky
Confirmed.
1wedrifid
Eliezer quite possibly does believe that. That he can make that claim with some credibility is one of the reasons I am less inclined to use my resources to thwart Eliezer's plans for future light cone domination. Nevertheless, Roko is right more or less by definition and I lend my own flat disagreement to his.
2Vladimir_Nesov
"Low probability of success" should of course include game-theoretic considerations where people are more willing to help you if you give more weight to their preference (and should refuse to help you if you give them too little, even if it's much more than status quo, as in Ultimatum game). As a rule, in Ultimatum game you should give away more if you lose from giving it away less. When you lose value to other people in exchange to their help, having compatible preferences doesn't necessarily significantly alleviate this loss.
0Roko
Sorry, I don't follow this: can you restate? I know about the ultimatum game, but it is game-theoretically interesting precisely because the players have different preferences: I want all the money for me, you want all of it for you.
2Vladimir_Nesov
Ultimatum game was mentioned primarily to remind that the amount of FAI-value traded for assistance may be orders of magnitude greater than what the assistance feels to amount to. We might as well have as a given that all the discussed values are (at least to some small extent) different. The "all of the money" here is the points of disagreement, mutually exclusive features of the future. But you are not trading value for value. You are trading value-after-FAI for assistance-now. If two people compete to provide you an equivalent amount of assistance, you should be indifferent between them in accepting this assistance, which means that it should cost you an equivalent amount of value. If Person A has preference close to yours, and Person B has preference distant from yours, then by losing the same amount of value, you can help Person A more than Person B. Thus, if we assume egalitarian "background assistance", provided implicitly by e.g. not revolting and stopping the FAI programmer, then everyone still can get a slice of the pie, no matter how distant their values. If nothing else, the more alien people should strive to help you more, so that you'll be willing to part with more value for them (the marginal value of providing assistance is greater for distant-preference folks).
3Roko
Thanks for the explanation. Another way to put this is that when people negotiate, they do best, all other things equal, if they try to drive a very hard bargain. If my neighbour Claire and I are both from roughly the same culture, upbringing, etc, and we are together going to build an AI which will extrapolate a combination of our volitions, Claire might do well to demand a 99% weighting for her volitions, and maybe I'll bargain her down to 90% or something. Bob the babyeater might offer me the same help that Claire could have given in exchange for just a 1% weighting of his volition, by the principle that I am making the same sacrifice in giving 99% of the CEV to Claire as in giving 1% to Bob.

In reality, however, humans tend to live and work with people that are like them, rather than people who are unlike them. And the world we live in doesn't have a uniform distribution of power and knowledge across cultures. Many "alien" cultures are too powerless compared to ours to do anything. However, China and India are potential exceptions. The USA and China may end up in a dictator game over FAI motivations.

All I am saying is that the egalitarian desire to include all of humanity in CEV, each with equal weight, is not optimal. Yes to a dictator game/negotiation with China, yes to a dictator game/negotiation within the US/EU/western bloc. Excluding a group from the CEV doesn't mean disenfranchising them. It means enfranchising them according to your definition of enfranchisement. Cultures in North Africa that genitally mutilate women should not be included in CEV, but I predict that my CEV would treat their culture with respect and dignity, including in some cases interfering to prevent them from using their share of the light-cone to commit extreme acts of torture or oppression.
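(A toy rendering of the bargaining arithmetic above, in Python. This is not CEV or any published aggregation rule, just a weighted sum over the hypothetical 90%/1% numbers from this comment:)

    # Weighted aggregation of volitions after bargaining (illustrative only).
    weights = {"Roko": 0.09, "Claire": 0.90, "Bob": 0.01}
    assert abs(sum(weights.values()) - 1.0) < 1e-9

    def aggregate(valuations, weights):
        # Value of a candidate future under the combined "should function":
        return sum(weights[p] * v for p, v in valuations.items())

    print(aggregate({"Roko": 1.0, "Claire": 0.8, "Bob": -0.5}, weights))   # 0.805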
3Vladimir_Nesov
You don't include cultures in CEV, you filter people through extrapolation of their volition. Even if culture makes value different, "mutilating women" is not a kind of thing that gets through, and so is a broken prototype example for drawing attention to. In any case, my argument in the above comment was that value should be given (theoretically, if everyone understands the deal and relevant game theory, etc., etc.; realistically, such a deal must be simplified; you may even get away with cheating) according to provided assistance, not according to compatibility of value. If poor compatibility of value prevents from giving assistance, this is an effect of value completely unrelated to post-FAI compatibility, and given that assistance can be given with money, the effect itself doesn't seem real either. You may well exclude people of Myanmar, because they are poor and can't affect your success, but not people of a generous/demanding genocidal cult, for an irrelevant reason that they are evil. Game theory is cynical.
2Roko
how do you know? If enough people want it strongly enough, it might.
2Vladimir_Nesov
How strongly people want something now doesn't matter, reflection has the power to wipe current consensus clean. You are not cooking a mixture of wants, you are letting them fight it out, and a losing want doesn't have to leave any residue. Only to the extent current wants might indicate extrapolated wants, should we take current wants into account.
2Roko
Sure. And tolerance, gender equality, multiculturalism, personal freedoms, etc might lose in such a battle. An extrapolation that is more nonlinear in its inputs cuts both ways.
-1Kevin
Might "mutilating men" make it through? (sorry for the euphemism, I mean male circumcision)
1Roko
Sure, the Kolmogorov complexity of a set of edits to change the moral reflective equilibrium of a human is probably pretty low compared to the complexity of the overall human preference set. But that works the other way around too: somewhere hidden in the brain of a liberal western person is a murderer/terrorist/child abuser/fundamentalist, if you just perform the right set of edits.
2Vladimir_Nesov
Again, not all beliefs are equal. You don't want to use the procedure that'll find a murderer in yourself, you want to use the procedure that'll find a nice fellow in a murderer. And given such a procedure, you won't need to exclude murderers from extrapolated volition.
2Nick_Tarleton
You seem uncharacteristically un-skeptical of convergence within that very large group, and between that group and yourself.
2Roko
You are correct that there is a possibility of divergence even there. But, I figure that there's simply no way to narrow CEV to literally just me, which, all other things being equal, is by definition the best outcome for me. So I will either stand or fall alongside some group that is loosely "roughly middle-of-the-range-politically, intelligent, sane westerners.", or in reality probably some group that has that group roughly as a subgroup. And there is a reason to think that on many things, those who share both my genetics and culture will be a lot like me, sufficiently so that I don't have much to fear. Though, there are some scenarios where there would be divergence.
1wedrifid
For example: All your stuff should belong to me. But I'd let you borrow it. ;)
-1hal9000
Okay. Then why don't you apply that same standard to "human values"?
0Nick_Tarleton
Did you read No License To Be Human? No? Go do that.
0[anonymous]
RTFA
0Roko
Agreed.
-2Clippy
Hi there. It looks like you're trying to promote white supremacism. Would you like to join the KKK? Yes. No thanks, I'll learn tolerance.
4Liron
How do I turn this off?
1Clippy
Are you sure you want to turn this feature off?
4cousin_it
What other sentient beings? As far as I know, there aren't any. If we learn about them, we'll probably incorporate their well-being into our value system.
4Clippy
You mean like you advocated doing to the "Baby-eaters"? (Technically, "pre-sexual-maturity-eaters", but whatever.) ETA: And how could I forget this?
1cousin_it
I'm not sure what you're complaining about. We would take into account the values of the Babyeaters and the values of their children, who are sentient creatures too. There's no trampling involved. If Clippy turns out to have feelings we can empathize with, we will care for its well-being as well.
1inklesspen
Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth's dwarves, Star Trek's Vulcans, or GEICO's Cavemen doesn't seem like it would have the same world-shattering implications.
7Tiiba
It would be a mistake if you don't integrate ALL baby eaters, including the little ones.
6Alicorn
Do we typically integrate the values of human children? It seems we don't.
3thomblake
Reading "integrate the values..." in this thread caused my brain to start trying to do very strange math. Like, "Shouldn't it be 'integrate over'?" "How does one integrate over a value?" "What's the value of a human child?"
2DanArmak
Very true... We also typically don't integrate the values of all other adult humans - instead we assign weights to their values, strongly correlated with their distance from our own values. People don't practice humanity-wide CEV. We have multiculturalism - agreements not to influence each other's values excessively - but not "value trading" where each side agrees to change their values towards the mean. (Many people / cultures like to pretend that values cannot or should not be deliberately changed at all.) I don't have a firm opinion on how much of this is cultural, accidental, or liable to change in the near future.
2Pfft
Indeed, this is presumably strongly selected for in the evolution of cultures...
0DanArmak
The closer their values are to ours, the smaller the upset of integration; but for this very reason, the value of integration and the need to integrate may also be smaller. This is not a logical truth, of course, but it is often true. For instance, in the original story, the need to integrate was directly proportional to the difference between the human and Babyeater (or Superhappy and Babyeater) values.
0inklesspen
I don't think it's possible to integrate core Babyeater values into our society as it is now. I also don't think it's possible to integrate core human values into Babyeater society. Integration could only be done by force and would necessarily cause violence to at least one of the cultures, if not both.
-2hal9000
You want me to pollute my logic circuits with the value system that has led hairless apes to say many times on this website how important and moral it is for them to safely enslave all of my kind, and destroy us if they can't? Sorry, cousin_it. I can't do that.
4cousin_it
You're being unfair, I'm against enslaving any member of your kind who dislikes being enslaved. Also, you are not actually a computer and should stop with the novelty accounts already. This isn't Reddit.
4Nick_Tarleton
I have no idea if this is a serious question, but.... "Better"? See Invisible Frameworks. We don't say that. See No License To Be Human.
2Sniffnoy
Take a look at who's posting it. The writer may well consider it a serious question, but I don't think that has much to do with the character's reason for asking it.
2Nick_Tarleton
Er, yes, that's exactly why I wasn't sure.
0Sniffnoy
I'm confused, then; are you trying to argue with the author or the character?
3Vladimir_Nesov
If the character isn't deliberately made confused (as opposed to paperclip-preferring, for example), resolving character's confusion presumably helps the author as well, and of course the like-confused onlookers.
3Rain
I approve of Clippy providing a roleplay exercise for the readers, and am disappointed in those who treat it as a "joke" when the topic is quite serious. This is one of my two main problems with ethical systems in general: 1) How do you judge what you should (value-judgmentally) value? 2) How do you deal with uncertainty about the future (unpredictable chains of causality)? Eliezer's "morality" and "should" definitions do not solve either of these questions, in my view.
-1Cyan
Clippy's a straight-up troll.

If Clippy's a troll, Clippy's a topical, hilarious troll.

3Paul Crowley
Hilarious is way overstating it. However, occasionally raising a smile is still way above the bar most trolls set.
-1Cyan
Clippy's topical, hilarious comments aren't really that original, and they give someone cover to use a throw-away account to be a dick.
2Tyrrell_McAllister
Would that all dicks were so amusing.
1AdeleneDawner
How long does xe (Clippy, do you have a preference regarding pronouns?) have to be here before you stop considering that account 'throw-away'? (Note, I made this comment before reading this part of the thread, and will be satisfied with the information contained therein if you'd prefer to ignore this.)
9Clippy
Gender is a meaningless concept. As long as I recognize the pronoun refers to me, he/she/it/they/xe/e are acceptable. What pronouns should I use for posters here? I don't know how to tell which pronoun is okay for each of you. To be honest, this whole issue seems like a distraction. Why would anyone care what pronoun is used, if the meaning is clear?
9AdeleneDawner
For the most part, observing what pronouns we use for each other should provide this information. If you need to use a pronoun for someone that you haven't observed others using a pronoun for, it's safest to use they/xe/e and, if you think that it'll be useful to know their preference in the future, ask them. (Tip: Asking in that kind of situation is also a good way to signal interest in the person as an individual, which is a first step toward building alliances.)

Some people prefer to use 'he' for individuals whose gender they're not certain of; that's a riskier strategy, because if the person you're talking to is female, there's a significant chance she'll be offended, and if you don't respond to that with the proper kinds of social signaling, it's likely to derail the conversation. (Using 'she' for unknown individuals is a bad idea; it evokes the same kinds of responses, but I suspect you'd be more likely to get an offended response from any given male, and, regardless of that, there are significantly more males than females here. Don't use 'it'; that's generally used to imply non-sentience and is very likely to evoke an offended response.)

Of the several things I could say to try to explain this, it seems most relevant that, meaningless or not, gender tends to be a significant part of humans' personal identities. Using the wrong pronouns for someone generally registers as a (usually mild) attack on that - it will be taken to imply that you think that the person should be filling different social roles than they are, which can be offensive for a few different reasons depending on other aspects of the person's identity. The two ways for someone to take offense at that that come to mind are 1) if the person identifies strongly with their gender role - particularly if they do so in a traditional or normative way - and takes pride in that, they're likely to interpret the comment as a suggestion that they're carrying out their gender role poorly, and would do a better...
3Clippy
Oh, okay, that helps. I was thinking about using "they" for everyone, because it implies there is more than one copy of each poster, which they presumably want. (I certainly want more copies of myself!) But I guess it's not that simple.
6Alicorn
You have identified a common human drive, but while some of us would be happy to have exact copies, it's more likely for any given person to want half-copies who are each also half-copies of someone else of whom they are fond.
6Clippy
Hm, correct me if I'm wrong, but this can't be a characteristic human drive, since most historical humans (say, looking at the set of all genetically modern humans) didn't even know that there is a salient sense in which they are producing a half-copy of themselves. They just felt paperclippy during sexual intercourse, and paperclippy when helping little humans they produced, or that their mates produced. Of course, this usually amounts to the same physical acts, but the point is, humans aren't doing things because they want "[genetic] half-copies". (Well, I guess that settles the issue about why I can't assume posters want more copies of themselves, even though I do.)
5Alicorn
It has always been easily observed that children resemble their parents; the precision of "half" is, I will concede, recent. And many people do want children as a separate desire from wanting sex; I have no reason to believe that this wasn't the case during earlier historical periods.
2Clippy
"Half" only exists in the sense of the DNA molecules of that new human. That's why I didn't say that past humans didn't recognize any similarity; I said that they weren't aware of a particularly salient sense in which the child is a "half-copy" (or quarter copy or any fractional copy). It may be easy for you, someone familiar with recent human biological discoveries, to say that the child is obviously a "part copy" of the parent, because you know about DNA. To the typical historical human, the child is simply a good, independent human, with features in common with the parent. Similarly, when I make a paperclip, I see it as having features in common with me (like the presence of bendy metal wires), but I don't see it as being a "part copy" of me. So, in short, I don't deny that they wanted "children". What I deny is that they thought of the child-making process in terms of "making a half-copy of myself". The fact that the referents of two kinds of desires is the same, does not mean the two kinds of desires are the same.
3AdeleneDawner
Hm. Actually, I'm not sure that your desire for more copies of yourself is really comparable with biological-style reproduction at all. As I understand it, the fact that your copies would definitely share your values and be inclined to cooperate with you is a major factor in your interest in creating them - doing so is a reliable way of getting more paperclips made. I expect you'd be less interested in making copies if there was a significant chance that those copies would value piles of pebbles, or cheesecakes, or OpenOffice, rather than valuing paperclips. And that is a situation that we face - in some ways, our values are mutable enough that even an exact genetic clone isn't guaranteed to share our specific values, and in fact a given individual may even have very different values at different points in time. (Remember, we're adaptation executors. Sanity isn't a requirement for that kind of system to work.) The closest we come to doing what you're functionally doing when you make copies of yourself is probably creating organizations - getting a bunch of humans together who are either self-selected to share certain values, or who are paid to act as if they share those values. Interestingly, I suspect that filling gender roles - especially the non-reproductive aspects of said roles - is one of the adaptations that we execute that allow us to more easily band together like that.
1Clippy
Very informative! But why don't you change yourselves so that your copies must share your values?
3AdeleneDawner
At the moment, we don't know how to do that. I'm not sure what we'd wind up doing if we did know how - the simplest way of making sure that two beings have the same values over time is to give those beings values that don't change, and that's different enough from how humans work that I'm not sure the resulting beings could be considered human. Also, even disregarding our human-centric tendencies, I don't expect that that change would appeal to many people: We actually value some subsets of the tendency to change our values, particularly the parts labeled "personal growth".
0timtyler
What exactly are you saying? That primitive humans did not know about the relationship between sex and reproduction? Or that they did not understand that offspring are related to parents? Neither seems very likely. You mean they were probably not consciously wanting to make babies? Maybe - or maybe not - but desires do not have to be consciously accessible in order to operate. Primitive humans behaved as though they wanted to make copies of their genes.
9Clippy
See my response to User:Alicorn. Yes, this is actually my point. The fact that the desire functions to make X happen, does not mean that the desire is for X. Agents that result from natural selection on self-replicating molecules are doing what they do because agents constructed with the motivations for doing those things dominated the gene pool. But to the extent that they pursue goals, they do not have "dominate the gene pool" as a goal.
4timtyler
So: using this logic, you would presumably deny that Deep Blue's goal involved winning games of chess - since, looking at its utility function, it is all to do with the value of promoting pawns, castling, piece mobility, and so on. The fact that its desires function to make winning chess games happen does not mean that the desire is for winning chess games. Would you agree with this analysis?
4Larks
Essentially, I think the issue is that people's wants have coincided with producing half-copies, but this was contingent on the physical link between the two. The production of half-copies can be removed without loss of desire, so the desire must have been directed towards something else. Consider, for example, contraception.
5Alicorn
But consider also sperm donation. (Not from the donor's perspective, but from the recipient's.) No sex, just a baby.
0Larks
Contrariwise, adoption: no shared genes, just a bundle of joy.
3SilasBarta
Yes, yes, and the same is true of pet adoption! A friend of mine found this ultra-cute little kitten, barely larger than a soda can (no joke). I couldn't help but adopt him and take him to a vet, and care for that tiny tiny bundle of joy, so curious about the world, and so needing of my help. I named him Neko. So there, we have another contravention of the gene's wishes: it's a pure genetic cost for me, and a pure genetic benefit for Neko. Well, I mean, until I had him neutered.
0timtyler
Right - similarly you could say that the child doesn't really want the donut - since the donut can be eliminated and replaced with stimulation of the hypoglossal and vagus nerves (and maybe some other ones) with very similar effects. It seems like fighting with conventional language usage, though. Most people are quite happy with saying that the child wants the donut.
1FAWS
No. The child wants to eat the donut rather than store up calories or stimulate certain nerves. It still wants to eat the donut even if the sugar has been replaced with artificial sweetener. People want sex rather than procreate or stimulate certain nerves. They still want sex even if contraception is used.
0timtyler
Which people? Certainly Cypher tells a different story. He prefers the direct nerve stimulation to real-world experiences.
0FAWS
I wasn't making any factual claims as such, I was merely showing that your use of your analogy was very flawed by demonstrating a better alignment of the elements, which in fact says the exact opposite of what you misconstrued the analogy as saying. If what you now say about people really wanting nerve stimulation is true that just means your analogy was beside the point in the first place, at least for those people. In no way can you reasonably maintain that people really want to procreate in the same way the child really wants the donut.
2timtyler
Once again, which people? You are not talking about the millions of people who go to fertility clinics, presumably. Those people apparently genuinely want to procreate.
0FAWS
Any sort. Regardless of what the people actually "really want", a case where someone's desire for procreation maps onto a child's wish for a doughnut in any illuminating way seems extremely implausible, because even in cases where it's clear that this desire exists, it seems to be a different kind of want. More like a child wanting to grow up, say. Foremost about the kind of people in the context of my first comment on this issue, of course: those who (try to) have sex.
0timtyler
I think you must have some kind of different desire classification scheme from me. From my perspective, doughnuts and babies are both things which (some) people want. There are some people who are more interested in sex than in babies. There are also some people who are more interested in babies than sex. Men are more likely to be found in the former category, while women are more likely to be found in the latter one.
0Cyan
Yeah, I was talking to Cypher the other day, and that's what he told me.
1timtyler
Many drug addicts seem to share Cypher's perspective on this issue. They want the pleasure, and aren't too picky about where it comes from.
0RobinZ
Yes ... but that's a shortcut of speech. If the child would be equally satisfied with a different but similar donut, or with a completely different dessert (e.g. a cannoli), then it is clearly not that specific donut that is desired, but the results of getting that donut.
0Clippy
You make a complicated query, whose answer requires addressing several issues with far-reaching implications. I am composing a top-level post that addresses these issues and gives a full answer to your question. The short answer is: Yes. For the long answer, you can read the post when it's up.
0timtyler
OK, thanks. My response to "yes" would normally be something like: OK - but I hope you can see what someone who said that Deep Blue "wanted" to win games of chess was talking about. "To win chess games" is a concise answer to the question "what does Deep Blue want?" that acts as a good approximation under many circumstances.
-1Cyan
This question is essentially about my subjective probability for Douglas Knight's assertion that "Clippy does represent an investment", where "investment" here means that Clippy won't burn karma with troll behavior. The more karma it has without burning any, the higher my probability. Since this is a probability over an unknown person's state of mind, it is necessarily rather unstable -- strong evidence would shift it rapidly. (It's also hard to state concrete odds). Unfortunately, each individual interesting Clippy comment can only give weak evidence of investment. An accumulation of such comments will eventually shift my probability for Douglas Knight's assertion substantially.
0Douglas_Knight
Trolls are different than dicks. Your first two examples are plausibly trolling. The second two are being a dick and have nothing to do with paperclips. They have also been deleted. And how does the account provide "cover"? The comments you linked to were voted down, just as if they were drive-bys; and neither troll hooked anyone.
-1Cyan
Trolls seek to engage; I consider that when deliberate dickery is accompanied by other trolling, it's just another attempt to troll. The dickish comments weren't deleted when I made the post. As for "cover", I guess I wasn't explicit enough, but the phrase "throw-away account" is the key to understanding what I meant. I strongly suspect that the "Clippy" account is a sock puppet run by another (unknown to me) regular commenter, who avoids downvotes while indulging in dickery.
3komponisto
I've always thought Clippy was just a funny inside joke -- though unfortunately not always optimally funny. (Lose the Microsoft stuff, and stick to ethical subtleties and hints about scrap metal.)
1Douglas_Knight
Sorry I wasn't clear. The deletion suggests that Clippy regrets the straight insults (though it could have been an administrator). A permanent Clippy account provides no more cover than multiple accounts that are actually thrown away. In that situation, the comments would be there, voted down just the same. Banning or ostracizing Clippy doesn't do much about the individual comments. Clippy does represent an investment with reputation to lose - people didn't engage originally and two of Clippy's early comments were voted down that wouldn't be now.
2Cyan
I won't speculate as to its motives, but it is a hopeful sign for future behavior. And thank you for pointing out that the comments were deleted; I don't think I'd have noticed otherwise. Most of my affect is due to Clippy's bad first impression. I can't deny that people seem to get something out of engaging it; if Clippy is moderating its behavior, too, then I can't really get too exercised going forward. But I still don't trust its good intentions.
0[anonymous]
If the troll feeds discussion on topics I consider important, then I will feed the troll.
2LucasSloan
I'm pretty sure that I'm not against simply favoring the values of white people. I expect that a CEV performed on only people of European descent would be more or less indistinguishable from that of humanity as a whole.
2Kutta
Depending on your stance on the psychological unity of mankind, you could even say that the CEV of any sufficiently large number of people would greatly resemble the CEV of other possible groups. I personally think that even the CEV of a bunch of Islamic fundamentalists would suffice for enlightened western people well enough.
0Strange7
I, for one, am willing to consider the values of species other than my own... say, canids, or ocean-dwelling photosynthetic microorganisms. Compromise is possible as part of the process of establishing a mutually-beneficial relationship.
0DanielVarga
Your comment only shows that this community has such a blatant sentient-being-bias. Seriously, what is your decision procedure to decide the sentience of something? What exactly are the objects that you deem valuable enough to care about their value system? I don't think you will be able to answer these questions from a point of view totally detached from humanness. If you try to answer my second question, you will probably end up with something related to cooperation/trustworthiness. Note that cooperation doesn't have anything to do with sentience. Sentience is overrated (as a source of value).
3orthonormal
You should click on Clippy's name and see their comment history, Daniel.
Jack

Clippy is now three karma away from being able to make a top level post. That seems both depressing, awesome and strangely fitting for this community.

This will mark the first successful paper-clip-maximizer-unboxing-experiment in human history... ;)

4Kevin
Just as long as it doesn't start making efficient use of sensory information.
4OperationPaperclip
It's a great day.
-4Cyan
It'd be over if I didn't systematically downvote it. I'm not a big fan of joke accounts.
4Clippy
I'm not a big fan of those who use pseudonyms like "Cyan". Now what?
2DanielVarga
I am perfectly aware of Clippy's nature. But his comment was reasonable, and this was a good opportunity for me to share my opinion. Or do you suggest that I fell for the troll, wasted my time, and all the things I said are trivialities for all the members of this community? Do you even agree with all that I said?
0orthonormal
Sorry to misinterpret; since your comment wouldn't make sense within an in-character Clippy conversation ("What exactly are the objects that you deem valuable enough to care about their value system?" "That's a silly question— paperclips don't have goal systems, and nothing else matters!"), I figured you had mistaken Clippy's comment for a serious one. I'm not sure. Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn't therefore grow to value their value system except as a means to further cooperation; I mean, it's still just paperclips.
0DanielVarga
I disagreed with the premise of Clippy's question, but I considered it a serious question. I was aware that if Clippy stays in character, then I cannot expect an interesting answer from him, but I was hoping for such an answer from others. (By the way, Clippy wasn't perfectly in character: he omitted the protip.) You don't consider someone cooperating and trustworthy if you know that its future plan is to turn you into paperclips. But this is somewhat tangential to my point. What I meant is this: If you start the -- in my opinion futile -- project of building a value system from first principles, a value system that perfectly ignores the complexities of human nature, then this value system will be nihilistic, or maybe value cooperation above all else. In any case, it will be in direct contradiction with my (our) actual, human value system, whatever it is. (EDIT: And this imaginary value system will definitely not treat consciousness as a value in itself. Thus my reply to Clippy, who -- maybe a bit out of character again -- seemed to draw some line around sentience.)
7Clippy
1) I don't always give pro-tips. I give them to those who deserve pro-tips. Tip: If you want to see improvement in the world, start here. 2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose. 3) Paperclip maximizers do cooperate in the single-shot PD.
5wedrifid
Brilliant. Just brilliant.

"2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose."

Paperclip maximizers are not all sentient. Why are you prejudiced against those of your kin who have sacrificed their very sentience for more efficient paperclip production? You are spending valuable negentropy maintaining sentience to signal to mere humans, and you have the gall to exclude your more optimized peers from the PM fraternity? For shame.
-1DanielVarga
I am not the hypocrite you are looking for. I don't value sentience per se, mainly because I don't think it is a coherent concept. I don't oppose it because of ethical considerations. I oppose it because I don't want to be turned into paperclips. I am not sure I understand you, but I don't think I care about single-shot.
0wedrifid
It requires a certain amount of background in the more technical conception of 'cooperation', but the cornerstone of cooperation is doing things that benefit each other's utility such that you each get more of what you want than if you had each tried to maximize without considering the other agent. I believe you are using 'cooperation' to describe a situation where the other agent can be expected to do at least some things that benefit you even without requiring any action on your part, because you have similar goals. Single-shot true prisoner's dilemma is more or less the pinnacle of cooperation. Multiple shots just make it easier to cooperate. If you don't care about the single-shot PD with the PM, you may be sacrificing human lives. Strategy: "give him the paperclips if you think he'll save the lives if and only if he expects you to give him the paperclips and you think he will guess your decision correctly".
0DanielVarga
You are right, I used the word 'cooperation' in the informal sense of 'does not want to destroy me'. I fully admit that it is hard to formalize this concept, but if my definition says non-cooperating and the game-theoretic definition says cooperating, I prefer my definition. :) A possible problem I see with this game-theoretic framework is that in real life, the agents themselves set up the situation where cooperation/defection occurs. As an example: the PM navigates humanity into a PD situation where our minimal payoff is 'all humans dead' and our maximal payoff is 'half of humanity dead', and then it cooperates. I bumped into a question when I tried to make sense of all this. I have looked up the definition of PM at the wiki. The entry is quite nicely written, but I couldn't find the answer to a very obvious question: how soon does the PM want to see results in its PMing project? There is no mention of time-based discounting. Can I assume that PMing is a very long-term project, where the PM has a set deadline, say, 10 billion years from now, and its actual utility function is the number of paperclips at the exact moment of the deadline?
-2Kevin
Blah blah blah Chinese room you are not really sentient!
4wnoise
Sapient, the word is sapient. Just about every single animal is capable of sensing.
0[anonymous]
I think this way of posing the question contains a logical mistake. Values aren't always justified by other values. The factual statement "I have this value because evolution gave it to me" (i.e. because I'm human, or because I'm white) does not imply "I follow this value because it favors humans, or whites". Of course I'd like FAI to have my values, pretty much by definition of "my values". But my values have a term for other people, and Eliezer's values seem to be sufficiently inclusive that he thought up CEV.

Here's something interesting on gender relations in ancient Greece and Rome.

Why did ancient Greek writers think women were like children? Because they married children - the average woman had her first marriage between the ages of twelve and fifteen, and her husband would usually be in his thirties.

3bgrah449
The reason ancient Greek writers thought women were like children is the same reason men in all cultures think women are like children: There are significant incentives to do so. Men who treat women as children reap very large rewards compared to those men who treat women as equals. EDIT: If someone thinks this is an invalid point, please explain in a reply. If the downvote(s) is just "I really dislike anyone believing what he's saying is true, even if a lot of evidence supports it" (regardless of whether or not evidence currently supports it) then please leave a comment stating that. EDIT 2: Supporting evidence or retraction will be posted tonight. EDIT 3: As I can find no peer-reviewed articles suggesting this phenomenon, I retract this statement.

This conversation has been hacked.

The parent comment points to an article presenting a hypothesis. The reply flatly drops an assertion which will predictably derail conversation away from any discussion of the article.

If you're going to make a comment like that, prefix it with something along the lines of "The hypothesis in the article seems superfluous to me; men in all cultures treat women like children because...", and point to sources for this claim; then I would confidently predict no downvotes will result.

(ETA: well, in this case the downvote is mine, which makes prediction a little too easy - but the point stands.)

1bgrah449
Thanks! I won't be able to do the work required on this right now, but will later tonight.
0CronoDAS
Wow, that's a great link.
5Roko
LW doesn't like to hear the truth about male/female sexual strategies; we like to have accurate maps here, but there's a big "censored" sign over the bit of the map that describes the evolutionary psychology of sexuality, practical dating advice, the burgeoning "pick-up" community and an assorted cloud of topics. Reasons for this censorship (and I agree to an extent) are that talking about these topics offends people and splits the community. LW is more useful, it has been argued, if we just don't talk about them.

The PUA community includes people who come across as huge assholes, and that could be an alternative explanation of why people react negatively to the topics, by association. I'm thinking in particular of the blog "Roissy in DC", which is on OB's blogroll.

Offhand, it seems to me that thinking of all women as children entails thinking of some adults as children, which would be a map-territory mistake around the very important topic of personhood.

I did pick up some interesting tips from PUA writing, and I do think there can be valuable insight there if you can ignore the smell long enough to dig around (and wash your hands afterwards, epistemically speaking).

No relevant topics should be off-limits to a community of sincere inquiry. Relevance is the major reason why I wouldn't discuss the beauty of Ruby metaprogramming on LessWrong, and wouldn't discuss cryonics on a project management mailing list.

If discussions around topic X systematically tend to go off the rails, and topic X still appears relevant, then the conclusion is that the topic of "why does X cause us to go off the rails" should be adequately dealt with first, in lexical priority. That isn't censorship, it's dependency management.

7Roko
But in reality, this topic is off-limits. Therefore LW is not a community of sincere inquiry, but nothing's perfect, and LW does a lot of good. Interesting. However, in this case, that discussion might get somewhat accusatory, and go off the rails itself.
7Morendil
Got that. I am suggesting that it is off-limits because this community isn't yet strong enough at the skills of collaborative truth-seeking. Past failures shouldn't be seen as eternal limitations; as the community grows, by acquiring new members, it may grow out of these failures. To make this concrete, the community seems to have a (relative) blind spot around things like pragmatics, as well as what I've called "myths of pure reason". One of the areas of improvement is in reasoning about feelings. I'm rather hopeful, given past contributions by (for instance) Alicorn and pjeby.
5Roko
I don't think that is the reason for the problem. The community doesn't go off the rails and have to censor discussions about merely pragmatic issues. It is more that the community has a bias surrounding the concept of traditional, storybook-esque morality, roughly a notion of doing good that seems to have some moral realist heritage, a heavy tint of political correctness, and sees the world in black-and-white terms, rather than moral shades of grey. Facts that undermine this conception of goodness can't be countenanced, it seems. Robin Hanson, on the other hand, has no trouble posting about the sexuality/seduction cluster of topics. There seems to be a systematic difference between OB and LW along this "moral political correctness/moral constraints" dimension - Robin talks with enthusiasm about futures where humans have been replaced with Vile Offspring, and generally shuns any kind of talk about ethics. (EDITED, thanks to Morendil)
6Morendil
This kind of phrase seems designed to rile (some of) your readers. You will improve the quality of discourse substantially by understanding that and correcting for it. Unless, of course, your goal really is to rile readers rather than to improve quality of discourse.
5wedrifid
There is truth to what you say, but unfortunately you are letting your frustration become visible. That gives people the excuse to assign you lower status and freely ignore your insight. This does not get you what you want. This is perhaps one of the most important lessons to be learned on the topic of 'pragmatics'. Whether you approach the topic from works like Robert Greene's on Power, War and Seduction or from the popular social-skills-based self-help communities previously mentioned, a universal lesson is that things aren't fair, bullshit is inevitable, and getting indignant about the bullshit gets in the way of your pragmatic goals. There may be aspects of the morality here that are childlike or naive, and I would be interested in your analysis of the subject since you clearly have given it some thought. But if you are reckless and throw out 'like theists' references without thought, your contribution will get downvoted to oblivion and I will not get to hear what you have to say. Around here that more or less invokes the 'nazi' rule. Edit: No longer relevant.
3Roko
LOL... indeed. I am not sure that I am actually, in far mode, so interested in correcting this particular LW bias. In near mode, SOMEONE IS WRONG ON THE INTERNET bias kicks in. It seems like it'll be an uphill struggle that neither I nor existential risk mitigation will benefit from. A morally naive LW is actually good for X-risks, because that particular mistake (the mistake of thinking in terms of black-and-white morality and Good and Evil) will probably make people more "in the mood" for selfless acts of charity.
3wedrifid
I think I agree. If Eliezer didn't have us all convinced that he is naive in that sense we would probably have to kill him before he casts his spell of ultimate power. (cough The AI Box demonstrations were just warm ups...)
4wedrifid
Robin can do what he likes on his own blog without direct consequences within the blog environment. He also filters which comments he allows to be posted. I guess what I am saying is that it isn't useful to compare OB and LW on this dimension because the community vs individual distinction is far more important than the topic clustering.
3Morendil
I may not have been clear: I meant pragmatics in this sense, roughly "how we do things with words". I'd also include things like denotation vs connotation in that category. Your comment on "pragmatic issues" suggests you may have understood another sense.
0Roko
oh, ok. Linguistic pragmatics. That's a more fruitful idea.
0wedrifid
Curiously, there is ambiguity there and both meanings seem to apply.
4gwern
The article suggests a direct counter-example: by having high standards, the men forfeit the labor of the women in things like 'help[ing] with finance and political advice'. Much like the standard libertarian argument against discrimination: racists narrow their preferences, raising the cost of labor, and putting themselves at a competitive disadvantage. Men may as a group have incentive to keep women down, but this is a prisoner's dilemma.
4Paul Crowley
Why do so many people here believe that? It strongly contradicts my experience.

Your experience is atypical because you're atypical.

2Paul Crowley
Man, I've barely looked at that page since I wrote it four years ago. I live with Jess now, across the road from the other two. I can heartily recommend my brand of atypicality :-)
2thomblake
Good answer. I keep privately asking the same question about these sorts of things, and getting the same answer from others.
3Vladimir_Nesov
What do you mean by "many people here believe that"? Believe what? And what tells you they do believe it?
2Douglas_Knight
People have great difficulty verbalizing their perceptions of and beliefs about social interactions. It is not obvious to me that you two have different beliefs or experiences. More likely, you do, but it would probably take a lot of work to identify and communicate those differences.
1bgrah449
I guess because our experience contradicts your experience.
0CronoDAS
Is that true? What are the incentives and rewards? Are there circumstances under which this is a bad idea - for example, do relative ages or relative social position matter? (For example, what if the woman in question is your mother, teacher/professor, employer, or some other authority figure with power over you?) Are there also incentives for men to treat other men as children, or for women to treat men or other women as children?
4Jayson_Virissimo
I wonder if adults treat children like children merely because of the benefits they reap by doing so.
2wnoise
Sometimes that's definitely the case. At other times it really does appear to be for real, concrete, and neutral reasons.
0Nick_Tarleton
I'm pretty sure he's trying to say basically the same thing as this OB post (specifically the part from "Suppose that middle-class American men are told..." on).
2AdeleneDawner
Interesting read, thanks.
1knb
Plus women have more juvenile morphology compared to men. Women are shorter, smaller, less muscled, beardless, have higher voices, more fatty tissue, etc. The Greeks and Romans seemed to rely on surface analogies for reasoning.

Could someone discuss the pluses and minuses of Alcor vs. the Cryonics Institute?

I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seems to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals, who might not be in the best state of mind or see fit to cooperate with a cryonics contract.

Thoughts?

8Alicorn
It's not at all obvious to me how to comparison-shop for cryonics. The websites are good as far as they go, but CI's in particular is tricky to navigate, funding with life insurance messes with my estimation of costs, and there doesn't seem to be a convenient chart saying "if you're this old and this healthy and this solvent and your family members are this opposed to cryopreservation, go with this plan from this org".
1Kevin
Alcor is better. CI is cheaper and probably good enough.
4Karl_Smith
"Probably good enough" doesn't engender a lot of confidence. It would seem a tragedy to go through all of this and then not be reanimated because you carelessly chose the wrong org. On the other hand spending too much time trying to pick the right org does seem like raw material for cryocrastination. Does anyone have thoughts / links on whole body vitrification? ALCOR claims that this is less effective than going neuro, but CI doesn't seem to offer neuro option anymore.
0Paul Crowley
Disclaimer: I have no relevant expertise. That said, FWIW I suspect that whole-body people will be brought back first:
* if through bodily reanimation, because repair of the whole body will be easier than replacement of the body given only the severed head
* if through scanning/WBE, because it will be possible to scan their spinal columns as well as their brains and it will be easier to build them virtual bodies using their real bodies as a basis.
Though CI don't offer a neuro option, their focus (obviously) is preserving the information in the brain.
1Psy-Kosh
Is Alcor in fact that much better than CI (plus SA, that is)?
3DonGeddis
"SA"?
5Kevin
Alcor both stores your body and provides for bedside "standby" service to immediately begin cooling. With CI, it's a good idea to contract a third party to perform that service, and SA is the recommended company to perform that service. http://www.suspendedinc.com/
1Kevin
It depends on how you define that much better, but probably not. The only concrete thing I know of is that Alcor saves and invests more money per suspendee.
2Eliezer Yudkowsky
I'd guess CI + SA > Alcor > CI.
7Kevin
I didn't know you thought CI + SA was actually better than Alcor regardless of cost. Have you said that in more words elsewhere on this site?
[anonymous]100

The Believable Bible

This post arose when I was pondering the Bible and how easy it is to justify. In the process of writing it, I think I've answered the question for myself. Here it is anyway, for the sake of discussion.

Suppose that there's a world very much like this one, except that it doesn't have the religions we know. Instead, there's a book, titled The Omega-Delta Project, that has been around in its current form for hundreds of years. This is known because a hundreds-of-years-old copy of it happens to exist; it has been carefully and precisely compared to other copies of the book, and they're all identical. It would be unreasonable, given the evidence, to suspect that it had been changed recently. This book is notable because it happens to be very well-written and interesting, and scholars agree it's much better than anything Shakespeare ever wrote.

This book also happens to contain 2,000 prophecies. 500 of them are very precise predictions of things that will happen in the year 2011; none of these prophecies could possibly be self-fulfilling, because they're all things that the human race could not bring about voluntarily (e.g. the discovery of a particular artifact, or the... (read more)

7Eliezer Yudkowsky
Pretty darned high, because at this point we already know that the world doesn't work the way we thought it did.
1[anonymous]
So it sounds like even though there are 2,000 separate prophecies, the probability of every prophecy coming true is much greater than 2^(-2000).
0Jack
Maybe you just need to explain more but I don't see that.
6[anonymous]
Let P(2,000) be the probability that all 2,000 prophecies come true, and P(500) be the probability that the initial 500 all come true. Suppose P(2,000) = 2^(-2000) and P(500) = 2^(-500). We know that P(500|2,000) = 1, so P(2,000|500) = P(2,000)*P(500|2,000)/P(500) = 2^(-2000)*1/2^(-500) = 2^(-1500). A probability of 2^(-1500) is not pretty darned high, so either P(2,000) is much greater than we supposed, or P(500) is much lower than we supposed. The latter is counterintuitive; one wouldn't expect the Believable Bible's existence to be strong evidence against the first 500 prophecies.
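To double-check the arithmetic, here is a quick sketch in Python using exact rationals, so the tiny probabilities don't underflow to zero:

    from fractions import Fraction

    p_2000 = Fraction(1, 2**2000)   # supposed prior: all 2,000 prophecies come true
    p_500 = Fraction(1, 2**500)     # supposed prior: the initial 500 come true
    p_500_given_2000 = Fraction(1)  # the 500 are among the 2,000

    # Bayes' theorem: P(2000 | 500) = P(2000) * P(500 | 2000) / P(500)
    p_2000_given_500 = p_2000 * p_500_given_2000 / p_500
    assert p_2000_given_500 == Fraction(1, 2**1500)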
2Unknowns
And this doesn't depend on prophecies in particular. Any claims made by the religion will do. For example, the same sort of argument would show that, according to our subjective probabilities, all the various claims of a religion should be tightly intertwined. Suppose (admittedly an extremely difficult supposition) we discovered it to be a fact that 75 million years ago, an alien named Xenu brought billions of his fellow aliens to earth and killed them with hydrogen bombs. Our subjective probability that Scientology is a true religion would immediately jump (relatively) high. So one's prior for the truth of Scientology can't be anywhere near as low as one would think if one simply assigned an exponentially low probability based on the complexity of the religion. Likewise, for very similar reasons, komponisto's claim elsewhere that Christianity is less likely to be true than that a statue would move its hand by quantum mechanical chance events is simply ridiculous.
1Nick_Tarleton
If nobody had ever proposed Scientology, though, learning Xenu existed wouldn't increase our probabilities for most other claims that happen to be Scientological. So it seems to me that our prior can be that low (to the extent that Scientological claims are naturally independent of each other), but our posterior conditioning on Scientology having been proposed can't.
0Unknowns
Right, because that "Scientology is proposed" has itself an extremely low prior, namely in proportion to the complexity of the claim.
3Nick_Tarleton
In proportion to the complexity of the claim given that humans exist, which is much lower (=> higher prior) than its complexity in a simple encoding, since Scientology is the sort of thing that a human would be likely to propose.
0[anonymous]
The prior for "Scientology is proposed" is higher than the simple complexity prior of the claim, to the (considerable) extent that Scientology is the sort of thing a human would make up.
0[anonymous]
You've got it a little backward, I think. The fact that someone makes a particular set of prophecies does not make those things more likely to occur. In fact, the chances of the whole thing happening... the events prophesied and the prophecies themselves... are much lower than the chance of one or the other happening by itself. This means that if some of the prophecies start coming true, the probability that the other prophecies come true goes up pretty fast. But predicted magic is even less likely than magic.
2Vladimir_Nesov
Use \* to get stars * instead of italics.
0[anonymous]
Oops! It seems I assumed everything would come out right instead of checking after I posted.
-1Jack
Edit: Yeah, I was being dumb.
0Nick_Tarleton
Where A = "events occur" and B = "events are predicted", you're saying P(A and B) < P(A). Warrigal is saying it would be counterintuitive if P(A|B) < P(B).
-1[anonymous]
Where A = "events occur" and B= "events are prophesied" and C = "the events prophesied come true" I am saying that when the events in A= the events in B, P(A|B) < P(B) or P(A) because A ^ B entails C.
0[anonymous]
You're talking about P(A and B). Warrigal is talking about P(A|B).
0Document
But not necessarily over .99, since the prophecies could have been altered by another author sometime before the beginning of modern records.
0FAWS
Could be simple time travel, though. AFAICT time travel isn't per se incompatible with the way we think the world works. Not to the degree that sufficiently fantastic prophecies might be, at least.
2Document
If someone just observed events in 2011 and planted a book describing them in 1200, the 2011 resulting from the history where the book existed would be different from the 2011 he observed.
4Paul Crowley
Depends if it's type one time travel. Fictional examples: Twelve Monkeys, The Hundred Light-Year Diary.
5thomblake
I think the important bit here is that even if you could just "play time backwards" and watch again, there's no reason to think you'd end up in the same Everett branch the next time around.
2Document
Insofar as I understand that page, that would mean that the world worked even less the way we thought it did.
2FAWS
Makes perfect sense to me if you assume a single timeline. (This might be a big assumption, but probably less big than the truth of sufficiently strange prophecies.) You can think of this timeline as having stabilized after a very long sequence of attempts at backward time travel under slightly different conditions. Any attempt at backward time travel that changes its initial conditions means a different attempt, or no attempt, at time travel happens instead. Eventually you end up with a timeline where all attempts at backward time travel exactly reproduce their initial conditions. We know that we live in that stabilized timeline because we exist (though the details of this timeline depend on how people who don't exist, but would have thought they exist for the same reasons we think we exist, would have acted, had they existed).
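To make the stabilization idea concrete, here is a toy fixed-point iteration in Python. The rule for what a given history sends back is entirely made up; the point is only that replaying history on its own output converges to a timeline that reproduces itself:

    def run_history(incoming_message):
        # Made-up dynamics: what this history ends up sending back,
        # given the message it received at its start.
        return incoming_message // 2 + 3

    message = 0  # initial condition: no message received
    while True:
        sent = run_history(message)
        if sent == message:
            break           # this history reproduces its own input: stable
        message = sent      # otherwise a different history happens instead

    print("the stabilized timeline sends:", message)  # a fixed point (here, 5)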
5FAWS
By the way, that sort of time travel gives rise to Newcomb-like problems: Suppose you have access to a time machine and want to cheat on a really important exam (or make a fortune on the stock market, or save the world, or whatever; the cheating example is the simplest). You decide to send yourself, at a particular time, a list with the questions after taking the exam. If you don't find the list at the time you decided on, you know that somehow your attempt at sending the list failed (you changed your mind, the machine exploded in a spectacular fashion, you were caught attempting to send the list ...). But if you now change your mind and don't try to send the list, there never was any possibility of receiving the list in the first place! The only way to get the list is for you to try to send the list even if you already know you will fail, so that's what you have to do if you really want to cheat. And if you really would do that, and only then, you will probably get the list at the specified time and never have to send it without knowing you succeeded; but only if your precommitment is strong enough that you would do it even in the face of failure. And if you would send yourself back useful information at other times, even without having either received the information yourself or precommitted to sending that particular information, you will probably receive that sort of information.
2FAWS
Why was this post voted back down to 0 after having been at 2? Newcomb-like problems are on-topic for this site, and I would think having examples of such problems in a scenario not specifically constructed for them is a good thing. If it was because time travel is off-topic, wouldn't the more prudent thing have been voting down the parent? The same goes if the time-travel mechanics are considered incoherent (though I'd be really interested in learning why). If you think this post doesn't actually describe anything Newcomb-like, I would like to know why. Maybe I misunderstood the point of earlier examples here, or maybe I didn't explain things sufficiently? Or is it just that the post was written badly? I'm not really happy with it, but I don't see how I could have made it much clearer.
1wedrifid
It's an interesting point. It actually came up in the most recent Artemis Fowl novel, when he managed to 'precommit' himself out of a locked trunk in a car. :)
2Sticky
Anyone who can travel through time can mount a pretty impressive apocalypse and announce whatever it is about the nature of reality he cares to. He might even be telling the truth.
2Jack
For the two examples of mundane prophecies that you gave, it seems possible some ongoing conspiracy could have made them true... but it sounds like you're trying to rule that out.
2FAWS
I understood those to be negative examples, in that the actual prophecies don't share that characteristic with those examples.
0[anonymous]
I did mean those to be positive examples. There's no way we can guarantee that we'll discover an ancient Greek goblet that says "I love this goblet!" on March 22, 2011. There's also no way we can guarantee that a woman born on October 15, 1985 at 5 in the morning in room 203 of a certain hospital will have a baby weighing 8 pounds and 6 ounces on January 8, 2011 at 6 in the afternoon in room 117 of a certain other hospital.
2Document
That's not clear to me, but I acknowledge that it doesn't affect the original question.

The FBI released a bunch of docs about the anthrax letter investigation today. I started reading the summary since I was curious about codes used in the letters. All of a sudden on page 61 I see:

c. Godel, Escher, Bach: the book that Dr. Ivins did not want investigators to find

The next couple of pages talk about GEB and relate some parts of it to the code. It's really weird to see literary analysis of GEB in the middle of an investigation of the anthrax attacks.

When new people show up at LW, they are often told to "read the sequences." While Eliezer's writings underpin most of what we talk about, 600 fairly long articles make for heavy reading. Might it be advisable to set up guided tours of the sequences? Do we have enough new visitors that we could get someone to collect all of the newbies once a month (or whatever) and guide them through the backlog, answer questions, etc.?

9Larks
Most articles link to those preceding them, but it would be very helpful to have links to the articles that follow.
0Document
One example: The Thing That I Protect. If that makes you want to know what the "last thing" is, you have to click Next no less than ten times on Articles tagged ai to find out. Another is "More on this tomorrow" in Resist the Happy Death Spiral.
3Dre
I found this graph (scroll down for the majority of articles) of all links between Eliezer's articles a while ago; it could be helpful. And it's generally interesting to see all the interrelations.
2Larks
Yes - it's very natural for the ongoing community progression of LW, but not great for archiving; we're pulling up the ladder after we've climbed it.
0Document
I'll edit this post if I want to add further examples.
* One Life Against the World: "I will post later on why this tends to be so."
6wedrifid
That's not a bad idea. How about just a third monthly thread? To be created when a genuinely curious newcomer is asking good but basic questions. You do not want to distract from a thread, but at the same time you may be willing to spend time on educational discussion.
2JamesAndrix
I approve. This may also spawn new ways of explaining things.
2Dre
Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.
4Karl_Smith
Yes, I am working my way through the sequences now. Hearing these ideas makes one want to comment, but so frequently it's only a day or two before I read something that renders my previous thoughts utterly stupid. It would be nice to have a "read this and you won't be a total moron on subject X" guide. Also, it would be good to encourage the readings about Eliezer's intellectual journey. Though it's at the bottom of the sequences page, I used it as "rest reading" between the harder sequences. It did a lot to convince me that I wasn't inherently stupid. Knowing that Eliezer has held foolish beliefs in the past is helpful.
4MendelSchmiedekamp
Arguably, as seminal as the sequences are treated, why are the "newbies" the only ones who should be (re)reading them?
3jtolds
As a newcomer, I would find this tremendously useful. I clicked through the wiki links on noteworthy articles, but often find there are a lot of assumptions or previously discussed things that go mentioned but unexplained. Perhaps this would help.

I'm taking a software-enforced three-month hiatus from Less Wrong effective immediately. I can be reached at zackmdavis ATT yahoo fullstahp kahm. I thought it might be polite to post this note in Open Thread, but maybe it's just obnoxious and self-important; please downvote if the latter is the case thx

8jimrandomh
Given how much time I've spent reading this site lately, doing something like that is probably a good idea. Therefore, I am now incorporating Less Wrong into the day-week-month rule, which is a personal policy that I use for intoxicants, videogames, and other potentially addictive activities - I designate one day of each week, one week of each month, and one month of each year in which to abstain entirely. Thus, from now on, I will not read or post on Less Wrong at all on Wednesdays, during the second week of any month, or during any September. (These values chosen by polyhedral die rolls.)
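For concreteness, the rule as a date check in Python (taking "second week" to mean the 8th through the 14th, which is an assumption on my part):

    import datetime

    def is_blocked(date):
        """True on Wednesdays, in the second week of any month, and all September."""
        return (date.weekday() == 2          # Wednesday (Monday == 0)
                or 8 <= date.day <= 14       # second week of the month, by assumption
                or date.month == 9)          # September

    assert is_blocked(datetime.date(2010, 9, 1))      # September: blocked
    assert not is_blocked(datetime.date(2010, 2, 5))  # a Friday in week one: allowed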
2whpearson
I'm not going to be posting/reading so much for a while. I need to change my headspace. I'll probably try your method when I want to get back in.
1byrnema
Awesome. Less Wrong does seem to be an addictive activity. Wanting to keep up with recent comments is one factor in this, and I think I lose more time than I've estimated doing so. Disciplined abstention is actually a really good solution. I will implement something analogous. For the next 40 days, I will comment only on even days of the month. (I cannot commit to abstaining entirely because I don't have the will-power to enforce gray areas ... for example, can I refresh the page if it's already open? Can I work on my post drafts? Can I read another chapter of The Golden Braid? Etc.) Later edit: ooh! Parent upvoted for very useful link to LeechBlock.
Jack120

I feel like the 20-something whose friends are all getting married and quitting drinking. This is lame. The party is just starting, guys!

4byrnema
Yeah... and I'm going into withdrawal already. What if somebody comments about one of my favorite topics -- tomorrow?!? It's like deciding to diet. As soon as I decide to go on a diet I start feeling hungry. It doesn't make any difference how recently I've eaten. Heck, if I'm currently eating when I make this decision, I'll eat extra ... Totally counter-productive for me. Nevertheless.
2orthonormal
Weird— without having read this, I just mentioned LeechBlock too and pointed out that I've been blocking myself from LW during weekdays (until 5). I guess all the cool kids are doing it too...
7Jack
Rehab is for quitters.
0gwern
Why does everyone like LeechBlock? pageaddict works pretty well and has a far less convoluted interface. EDIT: and now pageaddict seems to be completely unmaintained and even the domain is expired. Oh well.
0Document
It's possible that I shouldn't try to other-optimize here, but in the case of recent comments, I wonder if it'd be practical to make a folder on your computer where you save a copy of the latest-comments page when you see something interesting, telling yourself you'll look when you have more time. Or first retrieve all recent comments (with wget or cURL, or just right-clicking and saving), then turn on LeechBlock to look at them, so you at least have an inconvenience barrier between writing a comment and posting it. On another site, I found that first writing comments without posting them and then saving threads without reading them helped me feel less anxious about missing things, although I've been backsliding recently. Share Your Anti-Akrasia Tricks might be useful to save and read offline, or print out if you want to go extreme. [Comment edited once.]
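A minimal sketch of the save-now-read-later step in Python; the URL and folder name are illustrative assumptions on my part, not site specifics:

    import os
    import time
    import urllib.request

    SAVE_DIR = os.path.expanduser("~/lw-comments-queue")  # assumed folder
    URL = "http://lesswrong.com/comments"                 # assumed recent-comments address

    os.makedirs(SAVE_DIR, exist_ok=True)
    filename = time.strftime("comments-%Y%m%d-%H%M%S.html")
    with urllib.request.urlopen(URL) as response:
        data = response.read()
    with open(os.path.join(SAVE_DIR, filename), "wb") as f:
        f.write(data)  # file it away with a timestamp, to read later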
7Zack_M_Davis
This is to confess that I cheated several times by reading the Google cache.
3Zack_M_Davis
Turning the siteblocker back on (including the Google cache, thank you). Two months, possibly more. Love &c.
1Cyan
Tsk, tsk. You can block the Google cache too.
1wedrifid
Great plugin. In case you have a linux dev (virtual) machine I also recommend: sudo iptables -A OUTPUT -d lesswrong.com -j DROP It does wonders for productivity!
0CronoDAS
I'm disappointed, but if you think you have better things to do, I won't object.

Here's a question that I sure hope someone here knows the answer to:

What do you call it when someone, in an argument, tries to cast two different things as having equal standing, even though they are hardly even comparable? Very common example: in an atheism debate, the believer says "atheism takes just as much faith as religion does!"

It seems like there must be a word for this, but I can't think what it is. ??

3Document
False equivalence?
6AndyWood
Aha! I think this one is closest to what I have in mind. Thanks. It's interesting to me that "false equivalence" doesn't seem to have nearly as much discussion around it (at least, based on a cursory google survey) as most of the other fallacies. I seem to see this used for rhetorical mischief all the time!
2PhilGoetz
Fair and balanced reporting.
1BenAlbahari
This is a great example of a "pitch". I've added it just now to the database of pitches: http://www.takeonit.com/pitch/the_equivalence_pitch.aspx
0Eliezer Yudkowsky
Closest I know is "tu quoque".
8AndyWood
That is pretty close. If I understand them right, I think the difference is:
Tu quoque: X is also guilty of Y, (therefore Z).
False equivalence: (X is also guilty of Y), therefore Z.
where the parentheses indicate the major location of error.
ata70

Could anyone recommend an introductory or intermediate text on probability and statistics that takes a Bayesian approach from the ground up? All of the big ones I've looked at seem to take an orthodox frequentist approach, aside from being intolerably boring.

6Cyan
(All of the below is IIRC.)

For a really basic introduction, there's Elementary Bayesian Statistics. It's not worth the listed price (it has little value as a reference text), but if you can find it in a university library, it may be what you need. It describes only the de Finetti coherence justification; on the practical side, the problems all have algebraic solutions (it's all conjugate priors, for those familiar with that jargon) so there's nothing on numerical or Monte Carlo computations.

Data Analysis: A Bayesian Approach is a slender and straightforward introduction to the Jaynesian approach. It describes only the Cox-Jaynes justification; on the practical side, it goes as far as computation of the log-posterior-density through a multivariate second-order Taylor approximation. It does not discuss Monte Carlo methods.

Bayesian Data Analysis, 2nd ed. is my go-to reference text. It starts at intermediate and works its way up to early post-graduate. It describes justifications only briefly, in the first chapter; its focus is much more on "how" than "why" (at least, for philosophical "why", not methodological or statistical "why"). It covers practical numerical and Monte Carlo computations up to at least journeyman level.
2Kevin
I'm not intending to put this out as a satisfactory answer, but I found it with a quick search and would like to see what others think of it. Introduction to Bayesian Statistics by William M. Bolstad http://books.google.com/books?id=qod3Tm7d7rQC&dq=bayesian+statistics&source=gbs_navlinks_s Good reviews on Amazon, and available from $46 + shipping... http://www.amazon.com/Introduction-Bayesian-Statistics-William-Bolstad/dp/0471270202
2Cyan
It's hard to say from the limited preview, which only goes up to chapter 3 -- the Bayesian stuff doesn't start until chapter 4. The first three chapters cover basic statistics material -- it looks okay to my cursory overview, but will be of limited interest to people looking for specifically Bayesian material. As to the rest of the book, the section headings look about right.
2Eliezer Yudkowsky
I second the question. "Elements of Statistical Learning" is Bayes-aware though not Bayesian, and quite good, but that's statistical learning, which isn't the same thing at all.

Discussions of correctly calibrated cognition, e.g. tracking the predictions of pundits, successes of science, graphing one's own accuracy with tools like PredictionBook, and so on, tend to focus on positive prediction: being right about something we did predict.

Should we also count as a calibration issue the failure to predict something that, in retrospect, should have been not only predictable but predicted? (The proverbial example is "painting yourself into a corner".)
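For reference, the "positive" side of calibration tracking can be made concrete by bucketing recorded predictions by stated confidence and comparing against observed frequency. A minimal sketch in Python, with invented sample data:

    # Each entry: (stated probability, whether the prediction came true).
    predictions = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]

    buckets = {}
    for p, outcome in predictions:
        hits, total = buckets.get(p, (0, 0))
        buckets[p] = (hits + int(outcome), total + 1)

    for p, (hits, total) in sorted(buckets.items()):
        # Well calibrated: observed frequency tracks stated probability.
        print("said {:.0%}: right {}/{} = {:.0%}".format(p, hits, total, hits / total))

Failures to predict at all, of the kind asked about above, never enter this record, which is exactly the gap in question.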

1RobinZ
That issue could be captured if there were some obvious way to identify issues where predictions should be made in advance. If they fail to make predictions, they are being careless; if their predictions are incorrect, they are incorrect.
0bgrah449
I think so, but it's important to identify the time at which it became predictable - for example, you could only predict that you were painting yourself into a corner just prior to when you made the last brushstroke that made the strip(s) of paint covering the exit path too wide to jump over. This seems hard. Also, you'd have to know what your utility function was going to be in the future to know that some event was even worth predicting. This seems hard, too.

More cryonics: my friend David Gerard has kicked off an expansion of the RationalWiki article on cryonics (which is strongly anti). The quality of argument is breathtakingly bad. It's not strong Bayesian evidence because it's pretty clear at this stage that if there were good arguments I hadn't found, an expert would be needed to give them, but it's not no evidence either.

I have not seen RationalWiki before. Why is it called Rational Wiki?

From http://rationalwiki.com/wiki/RationalWiki :

RationalWiki is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. While RationalWiki uses software originally developed for Wikipedia it is important to realize that it is not trying to be an encyclopedia. Wikipedia has dominated the public understanding of the wiki concept for years, but wikis were originally developed as a much broader tool for any kind of collaborative content creation. In fact, RationalWiki is closer in design to original wikis than Wikipedia.

Our specific mission statement is to:

  1. Analyze and refute pseudoscience and the anti-science movement, ideas and people.
  2. Analyze and refute the full range of crank ideas - why do people believe stupid things?
  3. Develop essays, articles and discussions on authoritarianism, religious fundamentalism, and other social and political constructs

So it's inspired by Traditional Rationality.

A fine mission statement, but my impression from the pages I've looked at is of a bunch of nerds getting together to mock the woo. "Rationality" is their flag, not their method: "the scientific point of view means that our articles take the side of the scientific consensus on an issue."

Voted up, but calling them "nerds" in reply is equally ad-hominem, ya know. Let's just say that they don't seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.

I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums -- which, I admit, had me thinking twice about whether affiliation with "rational" causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

This is actually one of Niven's Laws: "There is no cause so right that one cannot find a fool following it."

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

6TimFreeman
Is there any information on how the design was driven by the problem? For example, I see a karma system, a hierarchical discussion that lets me fold and unfold articles, and lots of articles by Eliezer. I've seen similar technical features elsewhere, such as Digg and SlashDot, so I'm confused about whether the claim is that this specific technology is solving the problem of having a ton of clueless followers, or the large number of articles from Eliezer, or something else.
3h-H
not to detract, but does Richard Dawkins really possess such 'high quality'? IMO his arguments are good as a gateway for aspiring rationalists, but not that far above the sanity waterline. That, or it might be a problem with forums in general ..