Open Thread: July 2010

by komponisto · 1 min read · 1st Jul 2010 · 698 comments


Open Threads
Personal Blog

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Part 2


I propose that LessWrong should produce a quarterly magazine of its best content.

LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.

Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"

The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to publish and sell PDFs into printed magazines.

Unfortunately, I don't have the skills or time to d... (read more)

5mattnewport11yDoes anyone else find the idea of creating a printed magazine rather anachronistic [http://www.google.ca/search?q=death+of+print+media]?
3Blueberry11yThe rumors of print media's death have been greatly exaggerated.

This comment would seem much more authoritative if seen in print.

2LucasSloan11yI don't think there's enough content on LW to make publishing a magazine worthwhile. However, Eliezer's book on rationality should offer many of the same benefits.
8michaelkeenan11yNot all of the content needs to be from the most recent quarter. There could be classic articles too. But I think we might have enough content each quarter anyway. Let's see... There were about 120 posts to Less Wrong from April 1 to June 30. The top ten highest-voted were Diseased thinking: dissolving questions about disease [http://lesswrong.com/lw/2as/diseased_thinking_dissolving_questions_about/] by Yvain, Eight Short Studies On Excuses [http://lesswrong.com/lw/24o/eight_short_studies_on_excuses/] by Yvain, Ugh Fields [http://lesswrong.com/lw/21b/ugh_fields/] by Roko, Bayes Theorem Illustrated [http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/] by komponisto, Seven Shiny Stories [http://lesswrong.com/lw/2aw/seven_shiny_stories/] by Alicorn, Ureshiku Naritai [http://lesswrong.com/lw/20l/ureshiku_naritai/] by Alicorn, The Psychological Diversity of Mankind [http://lesswrong.com/lw/28k/the_psychological_diversity_of_mankind/] by Kaj Sotala, Abnormal Cryonics [http://lesswrong.com/lw/2a8/abnormal_cryonics/] by Will Newsome, Defeating Ugh Fields In Practice [http://lesswrong.com/lw/2cv/defeating_ugh_fields_in_practice/] by Psychohistorian, and Applying Behavioral Psychology on Myself [http://lesswrong.com/lw/2dg/applying_behavioral_psychology_on_myself/] by John Maxwell IV. Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long). So maybe swap a couple of them out for other ones. Then maybe add a few classic LessWrong articles (for example, Disguised Queries [http://lesswrong.com/lw/nm/disguised_queries/] would make a good companion piece to Diseased Thinking), add a few pages of advertising and maybe some rationality quotes, and you'd have at least 30 pages. I know I'd buy it.
1komponisto11yIt's not actually all that long; it's just that the diagrams take up a lot of space.
2michaelkeenan11yWell, I'd like to keep the diagrams if the article is to be used. I do like Bayes Theorem Illustrated and I think an explanation of Bayes Theorem is perfect content for the magazine. If I were designing the magazine I'd want to try to include it, maybe edited down in length.
3NancyLebovitz11yMonthly seems too often. Quarterly might work.
3gwern11yA yearly anthology would be pretty good, though. HN is reusing others' content and can afford a faster tempo; but that simply means we need to be slower. Monthly is too fast, I suspect that quarterly may be a little too fast unless we lower our standards to include probably wrong but still interesting essays. (I think of "Is cryonics necessary?: Writing yourself into the future" [http://lesswrong.com/lw/1ay/is_cryonics_necessary_writing_yourself_into_the/] as an example of something I'm sure is wrong, but was still interesting to read.)
2Kevin11yHow about thirdly!?
1Kevin11yThere's certainly enough content to do at least one really good issue.

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. My father and I pointed out that this would literally require the existence of magic; she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn'... (read more)

Wanting cryo signals disloyalty to your present allies.

Women, it seems, are especially sensitive to this (mothers, wives). Here's my explanation for why:

  1. Women are better than men at analyzing the social-signalling theory of actions. In fact, they (mostly) obsess about that kind of thing, e.g. watching soap operas, gossiping, people watching, etc. (disclaimer: on average)

  2. They are less rational than men (only slightly, on average), and this is compounded by the fact that they are less knowledgeable about technical things (disclaimer: on average), especially physics, computer science, etc.

  3. Women are more bound by social convention and less able to be lone dissenters. Asch's conformity experiment found women to be more conforming.

  4. Because of (2) and (3), women find it harder than men to take cryo seriously. Therefore, they are much more likely to think that it is not a feasible thing for them to do.

  5. Because they are so into analyzing social signalling, they focus in on what cryo signals about a person. Overwhelmingly: selfishness, and as they don't think they're going with you, betrayal.

6Alicorn11yIf you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!" However, it was my dad, not my mom, who called me selfish when I brought up cryo.
5Wei_Dai11yMaybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?
6Roko11yAha, but if I signed up, I'd have to non-conform, darling. Think of what all the other girls at the office would say about me! It would be worse than death!
6lmnop11yIn the case of refusing cryonics, I doubt that fear of social judgment is the largest factor or even close. It's relatively easy to avoid judgment without incurring terrible costs--many people signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.
2Will_Newsome11yFor what it's worth Steven Kaas emphasized social weirdness as a decent argument against signing up. I'm not sure what his reasoning was, but given that he's Steven Kaas I'm going to update on expected evidence (that there is a significant social cost to signing up that I cannot at the moment see).
6Wei_Dai11yI don't get why social weirdness is an issue. Can't you just not tell anyone that you've signed up?
2gwern11yThe NYT article points out that you sometimes want other people to know - your wife's cooperation at the hospital deathbed will make it much easier for the Alcor people to whisk you away.
2Vladimir_Nesov11yIt's not an argument against signing up, unless the expected utility of the decision is borderline positive and it's specifically the increased probability of failure because of lack of additional assistance of your family that tilts the balance to the negative.
2wedrifid11yIf my spouse played that card too hard I'd sign up to cryonics then I'd dump them. ("Too hard" would probably mean more than one issue and persisting against clearly expressed boundaries.) Apart from the manipulative aspect it is just, well, stupid. At least manipulate me with "you will be abandoning me!" you silly man/woman/intelligent agent of choice.
2JoshuaZ11yVoted up as an interesting suggestion. That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with. Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.
2NancyLebovitz11yI don't see why you'd be showing disloyalty to those of your allies who are also choosing cryo. Here are some more possible reasons for being opposed to cryo. Loss aversion. "It would be really stupid to put in that hope and money and get nothing for it." Fear that it might be too hard to adapt to the future society. (James Halperin's The First Immortal [http://www.amazon.com/First-Immortal-Novel-Future/dp/0345421825/ref=sr_1_2?s=books&ie=UTF8&qid=1278633440&sr=1-2] has it that no one gets thawed unless someone is willing to help them adapt. Would that make cryo seem more or less attractive?) And, not being an expert on women, I have no idea why there's a substantial difference in the proportions of men and women who are opposed to cryo.
6Roko11yDifference between showing and signalling disloyalty. To see that it is a signal of disloyalty/lower commitment, consider what signal would be sent out by Rob saying to Ruby: "Yes, I think cryo would work, but I think life would be meaningless without you by my side, so I won't bother"
8SilasBarta11yI -- quite predictably [http://lesswrong.com/lw/2ax/open_thread_june_2010/23p8] -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share. For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding that "you'll have to talk about this with your future wife, who may find it loopy". (Joke's on her -- at this rate, no woman will take that job!)
4NancyLebovitz11yI don't have anything against cryo, so these are tentative suggestions. Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field. Alternatively, some people are trudging through life, and they don't want it to go on indefinitely. Or there are people they want to get away from. However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.
3Blueberry11yIs there evidence for this? Specifically the "intense" part? ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?
7Vladimir_Nesov11yA factual error: I'm fairly sure that head-only preservation doesn't involve any brain-removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards the phrases that accomplish this goal won over the constraint of not generating fiction.
3Wei_Dai11yI wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?
3Clippy11yIt should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep. Then it should use Peggy's body and Robin's body for fuel.
1red7511yIt seems plausible that the "know more" part of EV should include the result of modelling the application of CEV to humanity, i.e. CEV is not just the result of aggregating individuals' EVs, but one of the fixed points of humanity's CEV after reflection on the results of applying CEV. Maybe Peggy's model will see that her preferences would result in unnecessary deaths, and that death is not a necessary part of society's existence or of her children's prosperity.
3Wei_Dai11yIt seems to me if it were just some factual knowledge that Peggy is missing, Robin would have been able to fill her in and thereby change her mind. Of course Robin isn't a superintelligent being, so perhaps there is an argument that would change Peggy's mind that Robin hasn't thought of yet, but how certain should we be of that?

Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)

3Wei_Dai11yYou make a good point, but why is communicating complex factual knowledge in an emotionally charged situation hard? It must be that we're genetically programmed to block out other people's arguments when we're in an emotionally charged state. In other words, one explanation for why Robin has failed to change Peggy's mind is that Peggy doesn't want to know whatever facts or insights might change her mind on this matter. Would it be right for the FAI to ignore that "preference" and give Peggy's model the relevant facts or insights anyway? ETA: This does suggest practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death still remains a basic fact of life for those who don't accept the information-theoretic definition of death.

To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension, but she would have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive and very unproven crazy sounding technology.

Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.

1Roko11yYou mean you want to make an average-IQ woman into a high-grade rationalist? Good luck! Better plan: go with Robert Ettinger's advice. If your wife/gf doesn't want to play ball, dump her. (This is a more alpha-male attitude to the problem, too. A woman will instinctively sense that you are approaching her objection from an alpha-male stance of power, which will probably have more effect on her than any argument.) In fact I'm willing to bet at steep odds that Mystery could get a female partner to sign up for cryo with him, whereas a top rationalist like Hanson is floundering.
6Alicorn11yIs this generalizable? Should I, too, threaten my loved ones with abandonment whenever they don't do what I think would be best?
1Alexandros11yI don't think this is about doing what you think best, it's about allowing you to do what you think best. And yes, you should definitely threaten abandonment in these cases, or at least you're definitely entitled to threaten and/or practice abandonment in such cases.
1Roko11yI'm not sure. It might work, but you're going outside of my areas of expertise.
1Larks11yBetter yet, sign up while you're single, and present it as a fait accompli. It won't get her signed up, but I'd be willing to bet she won't try to make you drop your subscription.
2steven046111yYes -- calling it "factual knowledge" suggests it's only about the sort of fact you could look up in the CIA World Factbook, as opposed to what we would normally call "insight".
2red7511yI meant something like embedding her in a culture where death is unnecessary, rather than directly arguing for that. Words aren't the best communication channel for changing moral values. Will it be enough? I hope so, if the death of the carriers of moral values isn't a necessary condition for moral progress. Edit: BTW, if CEV is computed using humans' reflection on its application, then the FAI cannot passively combine all volitions; it must search for and somehow choose a fixed point. Which rule should govern that process?
3wedrifid11yThat was very nearly terrifying.
2Vladimir_Nesov11yGood article overall. Gives a human feel to the decision of cryonics, in particular by focusing on an unfair assault it attracts (thus appealing to cryonicists' status).
1mattnewport11yThe hostile wife phenomenon doesn't seem to have been mentioned much here. Is it less common than the article suggests or has it been glossed over because it doesn't support the pro-cryonics position? Or has it been mentioned and I wasn't paying attention?
2ata11yAt last count [http://lesswrong.com/lw/fk/survey_results/] (a while ago admittedly), most LWers were not married, and almost none were actually signed up for cryonics. So perhaps this phenomenon just isn't a salient issue to most people here.
4Morendil11yI'm married with kids; my wife supports my (so far theoretical only) interest in cryo. Though she says she doesn't want it for herself.
2Paul Crowley11yData point FWIW: my partners are far from convinced of the wisdom of cryonics, but they respect my choices. Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Drowning Does Not Look Like Drowning

Fascinating insight against generalizing from fictional evidence in a very real life-or-death situation.

Cryonics scales very well. People who think cryonics is costly -- even if you had to come up with the entire lump sum close to the end of your life -- are generally ignorant of this fact.

So long as you keep the shape constant, for any given container the surface area follows a square law whereas the volume follows a cube law. For example, with a cube-shaped object, one side squared times 6 is the surface area whereas one side cubed is the volume. Surface area is where heat gets in, so if you have a huge container holding cryogenic goods (humans in this case) it costs much less per unit volume (human) than is the case with a smaller container of equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.

But you aren't stuck using equal insulation. You can use thicker insulation, with a much smaller proportional effect on total surface area as you use bigger sizes. Imagine the difference between a marble sized freezer and a house-sized freezer, when you add a foot of insulation. The outside of the insulation is where it begins collecting heat. But with a gigantic freezer, you might add a meter of insulation without it ha... (read more)
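[Editor's note: a minimal Python sketch of the square-cube point above, assuming a cube-shaped container; the numbers are illustrative only.]

    # Heat leaks in through the surface, while capacity grows with volume,
    # so the relative heat leak per unit of stored volume falls as 6/side.
    def surface_to_volume(side):
        return (6 * side ** 2) / side ** 3  # = 6 / side for a cube

    for side in [1, 2, 10, 100]:  # container edge length, arbitrary units
        print(f"side={side}: surface/volume = {surface_to_volume(side):.3f}")
    # side=1 gives 6.0; side=100 gives 0.06 -- a hundredfold drop in the
    # heat gained (and coolant spent) per unit of cryogenic cargo.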

7Morendil11yThis needs to be a top-level post. Even with minimal editing. Please. (ETA: It's not so much that we need to have another go at the cryonics debate; but the above is an argument that I can't recall seeing discussed here previously, that does substantially change the picture, and that illustrates various kinds of reasoning - about scaling properties, about predefining thresholds of acceptability, and about what we don't know we don't know - that are very relevant to LW's overall mission.)
1lsparrish11yDone.

This is a mostly-shameless plug for the small donation matching scheme I proposed in May:

I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" by donating $60 to the Singularity Institute. If you're interested, see my original comment. I will match your donation.

6Kutta11yDone, 60 USD sent.
2VNKKET11yThank you! Matched [http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/28c9].
4Scott Alexander11yDone!
4WrongBot11yI'm sorry I didn't see that earlier; I donated $30 to the SIAI yesterday [http://lesswrong.com/lw/2em/a_challenge_for_lesswrong/27v9], and I probably could have waited a little while longer and donated $60 all at once. If this offer will still be open in a month or two, I will take you up on it.
2zero_call11yWithout any way of authenticating the donations, I find this to be rather silly.
3VNKKET11yI'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment [http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr]: Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two-year-olds crawl.

If you've had any experience with two-year-olds, you know they can cover ground at an astonishing rate.

The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.

Two-year-olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.

Crawling that way gives access to a surprisingly strong forward impetus.

The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.

Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.

Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)

Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy na... (read more)

7Vladimir_M11yFirst, let's calculate the concrete probability numbers. If we are to trust this calculator [http://www.statisticshowto.com/calculators/binomial-distribution-calculator/], the probability of finding exactly 75 big fish in a sample of a hundred from a pond where 75% of the fish are big is approximately 0.09, while getting the same number in a sample from a 25% big pond has a probability on the order of 10^-25. The same numbers hold in the reverse situation, of course. Now, Al and Bob have to consider two possible scenarios: 1. The fish are 75% big, Al got the decently probable 75/100 sample, but Bob happened to be the first scientist who happened to get the extremely improbable 25/100 sample, and there were likely 10^(twenty-something) or so scientists sampling before Bob. 2. The fish are 25% big, Al got the extremely improbable 75/100 big sample, while Bob got the decently probable 25/100 sample. This means that Bob is probably among the first few scientists who have sampled the pond. So, let's look at it from a frequentist perspective: if we repeat this game many times, what will be the proportion of occurrences in which each scenario takes place? Here we need an additional critical piece of information: how exactly was Bob's place in the sequence of scientists determined? At this point, an infinite number of scientists will give us lots of headache, so let's assume it's some large finite number N_sci, and Bob's place in the sequence is determined by a random draw with probabilities uniformly distributed over all places in the sequence. And here we get an important intermediate result: assuming that at least one scientist gets to sample 25/100, the probability for Bob to be the first to sample 25/100 is independent of the actual composition of the pond! Think of it by means of a card-drawing analogy. If you're in a group of 52 people whose names are repeatedly called out in random order to draw from a deck of cards, t
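[Editor's note: the numbers above are easy to verify; a quick sketch using scipy's binomial distribution, with pmf(k, n, p) giving the probability of exactly k successes in n draws.]

    from scipy.stats import binom

    # Probability of exactly 75 big fish in a sample of 100, under each pond:
    print(binom.pmf(75, 100, 0.75))  # ~0.0918 when 75% of the fish are big
    print(binom.pmf(75, 100, 0.25))  # ~1.3e-25 when only 25% are big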
1utilitymonster11yI was assuming Charlie would show Bob the first person to see 75/100. Anyway, your analysis solves this as well. Being the first to see a particular result tells you essentially nothing about the composition of the pond (provided N_sci is sufficiently large that someone or other was nearly certain to see the result). Thus, each of Al and Bob should regard their previous observations as irrelevant once they learn that they were the first to get those results. Thus, they should just stick with their priors and be 50/50 about the composition of the pond.
4Blueberry11yInteresting problem! I think these two statements are inconsistent. If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason. If Bob doesn't trust Al completely, they don't have the same information. Bob doesn't know for sure that Charlie told Al about the selection. From his point of view, Al could be lying. If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond. If each of them only knows the other was selected and they both trust the other one's statements, same thing. But if each puts more trust in Charlie than in the other, then they don't have the same information.
1prase11yIt is strange. Shall Bob discount his observation after being told that he is selected? What does it actually mean to be selected? What if Bob finds 25 big fish and then Charlie tells him that there are 3^^^3 other observers and he (Charlie) decided to "select" one of those who observe 25 big fish and talk to him, and that Bob himself is the selected one (no later confrontation with AI). Should this information cancel Bob's observations? If so, why?
1Kingreaper11yYes, it should, if it is known that Charlie hasn't previously "selected" any other people who got precisely 25. The probability of being selected (taken before you have found any fish) p[chosen] is approximately equal regardless of whether there are 25% or 75% big fish. And the probability of you being selected if you didn't find 25 p[chosen|not25] is zero Therefore, the probability of you being selected, given as you have found 25 big fish p[chosen|found25] is approximately equal to p[chosen]/p[found25] The information of the fact you've been chosen directly cancels out the information from the fact you found 25 big fish.
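[Editor's note: the cancellation argument is easy to check by simulation. A hedged sketch in Python, scaled down to samples of 4 fish and a target of exactly 1 big fish, since with samples of 100 and a target of 25 the improbable branch has probability ~10^-25 and no feasible number of simulated scientists would ever hit it.]

    import random

    def run_trial(n_scientists=500, sample_size=4, target_big=1):
        # A coin flip decides the pond: 75% big fish or 25% big fish.
        p_big = random.choice([0.75, 0.25])
        for _ in range(n_scientists):
            big = sum(random.random() < p_big for _ in range(sample_size))
            if big == target_big:
                return p_big  # this scientist is the first to see the target
        return None  # nobody saw it (rare with these parameters)

    hits = [p for p in (run_trial() for _ in range(20000)) if p is not None]
    print(sum(1 for p in hits if p == 0.75) / len(hits))
    # ~0.5: conditional on someone being the first to see the target result,
    # the pond composition is back at the prior, as the argument above predicts.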
2RobinZ11yOne key observation is that Al made his observation after being told that he would meet someone who made a particular observation - specifically, the first person to make that specific observation, Bob. This makes Al and Bob special in different ways: * Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting. * Bob is special because he has been selected to meet Al because of the specific data he observes. More precisely, because he will be the first to obtain that specific result. Therefore his result has been selected, and he is only at the meeting because he happens to be the first one to get that result. In the original case, Bob's result is effectively a lottery ticket - when he finds out from Al the circumstances of the meeting, he can simply follow the Natural Answer himself and conclude that his results were unlikely. In the modified case, assuming perfect symmetry in all relevant aspects, they can conclude that an astronomically unlikely [http://www.google.com/search?q=%28100%21%2F%2825%21*75%21%29%29*%28.25%5E25*.75%5E75%29+*+%28100%21%2F%2825%21*75%21%29%29*%28.25%5E75*.75%5E25%29] event has occurred and they have no net information about the contents of the pond.
1Kingreaper11yOkay, qualitative analysis without calculations: Let's go for a large, finite case, because otherwise my brain will explode. Question 1: for any large, finite number of scientists Bob should defer MOSTLY to Alice. First let's look at Alice: in any large finite number of scientists there is a small finite chance that NO scientist will get that result. This chance is larger in the case where 75% of the fish are big. Thus, upon finding that a scientist HAS encountered 25 fish, Alice must adjust her probability slightly towards 25% big fish. Bob has also received several new pieces of information:
* He was the first to find 25 big fish. P[first25|found25] approaches 1/P[found25] as you increase the number of scientists. This information almost entirely cancels out the information he already had.
* All the information Alice had. This information therefore tips the scales. Bob's final probability will be the same as Alice's.
Question two is N/A. I will answer question three in a reply to this to try and avoid a massive wall of text.
1Kingreaper11yQuestion 3: lateral answer: in the symmetrical variant the issue of "how many people are being given other people to meet, and is this entire thing just a weird trick" begins to arise. In fact, the probability of it being a weird trick is going to overshadow almost any other attempt at analysis. The first person to get 25 happens to be a person who is told they will meet someone who got 75, and the person who was told they would meet the first person to get 25 happens to get 75? Massively improbable. However, if it is not a trick, the probability is significantly in favour of it being 75% still. Alice isn't talking to Bob due to the fact she got 75, she's talking to Bob due to the fact he got the first 25. Otherwise Bob would most likely have ended up talking to someone else. The proper response at this point for both Alice and Bob is to simply decide that it is overwhelmingly probable that Charlie is messing with them. I can produce similar variants which don't have this issue, and they come out to 50:50. These include: Everyone is told that the first person to get 25 will meet the first person to get 75.
1Dagon11yWhat is each of their prior probabilities for this setup being true? Bob, knowing that he was selected for his unusual results, can pretty happily disregard them. If you win a lottery, you don't update to believe that most tickets win. Bob now knows of 100 samples (Al's) that relate to the prior, and accepts them. Bob's sampling is of a different prior: coin flipped, then a specific resulting sample will be found. If they are both selected for their results, they both go to 50/50. Neither one has non-selected samples.
1prase11yIs there any particular reason why one of the actors is an AI?
2utilitymonster11yAl, not AI. ("Al" as in "Alan")
5prase11ySorry. I have some Less Wrong bias. Google statistics on Less Wrong:
* AI (second i): 2400 hits
* Al (second L): 318 hits (mostly in "et al." and "al Qaida", without capital A)
By the way, are these two strings distinguishable when written in the font of this site? They seem the same to me.
2RobinZ11yYou're right - they're pixel-for-pixel identical. That's a bit problematic.
1Douglas_Knight11yMaybe that's why cryptographers say "Alice" rather than "Al."
1JGWeissman11yFrom Bob's perspective, he was more likely to be chosen as the one to talk to Al if there are fewer scientists that observed exactly 25 big fish, which would happen if there are more big fish. So Bob should update on the evidence of being chosen.

I wonder how these women feel about being labeled "The Hostile Wife Phenomenon"?

2Roko11yFull of righteous indignation, I should imagine. After all, they see it as their own husbands betraying them.

I would say that if you aren't yet married, be prepared to dump them if they won't sign up with you. Because if they won't, that is a strong signal to you that they are not a good spouse. These kinds of signals are important to pay attention to in the courtship process.

After marriage, you are hooked regardless of what decision they make on their own suspension arrangements, because it's their own life. You've entered the contract, and the fact they want to do something stupid does not change that. But you should consider dumping them if they refuse to help... (read more)

Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?

1Kazuo_Thow11yPart of the San Francisco skyline [http://www.lebookbusiness.com/images/San_Francisco_Skyline.sized.jpg], maybe?
1cousin_it11yThanks. This is the first time I ever noticed this. Absolutely no idea what it is or why it's there. Talk about selective blindness!
0matt10yIt was an early draft of the map vs territory theme that became the site header, which we intended to finish but forgot about.

"what would convince you that you are wrong?" It's the sort of thing that creationists arguing against evolution

But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures' skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.

1RichardKennaway11yThe creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. E... (read more)

5cousin_it11yIf you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation. If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse. (These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)
6JoshuaZ11yThis depends on which level of the Tegmark classification you are talking about. Level III for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may have your argument as a valid rebuttal, but since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don't have observers, so we actually would need to look at consistent rule systems with observers. Without a lot more information, it is very hard to examine the expected probabilities of weird events in a level IV setting.
7cousin_it11yWha? Any sequence of observations can be embedded in a consistent system that "hardcodes" it.
1JoshuaZ11yYeah, that's a good point. Hardcoding complicated changes is consistent. So any such argument of this form about level IV fails. I withdraw that claim.
1Vladimir_Nesov11yConsistency is about logics, while Tegmark's madness is about mathematical structures. Whenever you can model your own actions (decision-making algorithm) using huge complicated mathematical structures, you can also do so with relatively simple mathematical structures constructed from the syntax of your algorithm (Löwenheim-Skolem type constructions). There is no fact of the matter about whether a given consistent countable first order theory, say, talks about an uncountable model or a countable one.
4Vladimir_Nesov11yNot if you interpret your preference about those worlds as assigning most of them low probability, so that only the ordered ones matter.
1Roko11yhttp://www.nickbostrom.com/papers/anthropicshadow.pdf [http://www.nickbostrom.com/papers/anthropicshadow.pdf]

I'm sorry, but from the perspective of someone with no prior knowledge of Actual Freedom, you sound as though you're saying that there is a magical mental state that fixes every problem that evolution baked into the brain over hundreds of millions of years and that the only people who have ever successfully achieved this mental state in all of human history are the devoted followers of a particular charismatic leader who doesn't believe in last names.

If you wish to distinguish yourself from people who are promoting cults, you need to not sound like someone promoting a cult.

Please don't feed the trolls!

1NancyLebovitz11yThere are two issues here. One is whether Actual Freedom's approach produces the claimed effects and whether those effects actually improve people's lives, and the other is whether it's a cult. There's a minor question of whether Actual Freedom is the only path to get those effects. I don't think it sounds all that much like a cult-- they aren't asking for money, they aren't asking for devotion to a leader, and they're saying they have a simple method of getting access to an intrinsic ability. Whether it's as absolutely true as they say is a harder question, though it might improve quality of life without working all the time. Whether it's safe is a harder question-- it sounds like a sort of self-modification which would be very hard to reverse. Whether no other system produces comparable results is unknowable.

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we

... (read more)
1RobinZ11yIt makes me think of Richard Hamming [http://www.paulgraham.com/hamming.html] talking about having "an attack".

As the comments by this user have been consistently voted down and he cannot seem to take the hint, comments by him will be deleted/banned.

2JoshuaZ11yI'm not sure that wholesale deletion of comments prior to banning is ideal in this case, in that it a) substantially disrupts the flow of conversations that occurred and b) makes it very difficult for an interested lurker to realize what was occurring. I don't see a good reason to delete the existing comments (many seem to be merely wrong) although I agree with banning the individual.
3Morendil11yHe meant "further comments".
[anonymous]11y 8

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  1. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community ... (read more)
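[Editor's note: a minimal Python sketch of assumptions 1-3 above; all names here are illustrative, not from the comment. On this picture, "rationality" would be a property of the update and decide rules alone, not of the belief database's contents.]

    class Agent:
        def __init__(self):
            self.beliefs = {}  # assumption 1: a database of "beliefs"

        def update(self, proposition, credence):
            # assumption 2: a rule for altering the database given experience
            self.beliefs[proposition] = credence

        def decide(self, options):
            # assumption 3: a rule for deciding based only on the database
            return max(options, key=lambda o: self.beliefs.get(o, 0.0))

    agent = Agent()
    agent.update("option A pays off", 0.8)
    agent.update("option B pays off", 0.3)
    print(agent.decide(["option A pays off", "option B pays off"]))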

1whpearson11yThis also reminded me that I wanted to go through the Intentional Stance [http://www.amazon.com/Intentional-Stance-Bradford-Books/dp/0262540533] by Daniel Dennett and find the good bits. Also worth reading is the wiki page [http://en.wikipedia.org/wiki/Intentional_stance]. I think he would state that the model you describe comes from folk psychology [http://en.wikipedia.org/wiki/Folk_psychology]. A relevant passage: "We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the "undeniable introspective fact" that you can feel "centrifugal force" cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and others began to ponder the metaphysical status of color, felt warmth, and other "secondary qualities". These discussions, while cautiously agnostic about folk physics, have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external "world"." On Less Wrong, people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts. You might have trouble convincing people here, mainly because people are interested in what should be done by an intelligence, rather than what is currently done by humans. It is a lot harder to find evidence for what ought to be done rather than what is done.

Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?

The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of ... (read more)

8SilasBarta11yMy kind of test would be like this: 1) Do you always seem to be able to predict the future, even as others doubt your predictions? If they say yes ---> "That's because of confirmation bias, moron. You're not special."
5RobinZ11yIn their defense, it might be hindsight bias instead. :P
6Cyan11yThere's an online test [http://www.projectionpoint.com/] for calibration of subjective probabilities.
2Alexandros11yThat was pretty awesome, thanks. Not precisely what I had in mind, but close enough to be an inspiration. Cheers.
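[Editor's note: one standard way to score such a calibration test is the Brier score; a hypothetical Python sketch, not necessarily how the linked site does it.]

    def brier_score(forecasts):
        # forecasts: (stated_probability, outcome) pairs, outcome 1 if true
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    # Lower is better: a perfectly calibrated "70% confident" forecaster
    # averages 0.21, while always answering 50% scores a flat 0.25.
    print(brier_score([(0.7, 1), (0.7, 1), (0.7, 0), (0.5, 1)]))  # 0.23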
4michaelkeenan11yI would love for this to exist! I have some notes on easily-tested aspects of rationality which I will share: The Conjunction Fallacy [http://lesswrong.com/lw/ji/conjunction_fallacy/] easily fits into a short multi-choice question. I'm not sure what the error is called, but you can do the test described in Lawful Uncertainty [http://lesswrong.com/lw/vo/lawful_uncertainty/]: You could do the positive bias [http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/] test where you tell someone the triplet "2-4-6" conforms to a rule and have them figure out the rule. You might be able to come up with some questions that test resistance to anchoring [http://lesswrong.com/lw/j7/anchoring_and_adjustment/]. It might be out of scope of rationality and getting closer to an intelligence test, but you could take some "cognitive reflection" questions from here [http://www.dynamist.com/articles-speeches/nyt/cognition.html], which were discussed at LessWrong here [http://lesswrong.com/lw/15f/misleading_the_witness/1130].
4[anonymous]11yThat Virginia Postrel article was interesting. I was wondering why more reflective people were both more patient and less risk-averse -- she doesn't make this speculation, but it occurs to me that non-reflective people don't trust themselves and don't trust the future. If you aren't good at math and you know it, you won't take a gamble, because you know that good gamblers have to be clever. If you aren't good at predicting the future, you won't feel safe waiting for money to arrive later. Tomorrow the gods might send you an earthquake. Risk aversion and time preference are both sensible adaptations for people who know they're not clever. People who are good at math and science don't retain such protections because they can estimate probabilities, and because their world appears intelligible and predictable.
4oliverbeatson11yThe test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic. Someone who had read Less Wrong a few times, but didn't make the knowledge truly a part of them, might return false negatives for certain biases while retaining those biases in real-life situations. Don't want to make the test about guessing the teacher's password.
4NancyLebovitz11yThe test should include questions about applying rationality in one's life, not just abstract problems.
3utilitymonster11yI'd suggest starting with a list of common biases and producing a question (or a few?) for each. The questions could test the biases and you could have an explanation of why the biased reasoning is bad, with examples. It would also be useful to group the biases together in natural clusters, if possible.
2[anonymous]11ySounds like a good idea. Doesn't have to be invented from scratch; adapt a few psychological or behavioral-economics experiments. It's hard to ask about rationality in one's own life because of self-reporting problems; if we're going to do it, I think it's better to use questions of the form "Scenario: would you do a, b, c, or d?" rather than self-descriptive questions of the form "Are you more: a or b?"

I can't remember if this has come up before...

Currently the Sequences are mostly as-imported from OB; including all the comments, which are flat and voteless as per the old mechanism.

Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder if an appropriate thread structure could be inferred from context; also we could vote the comments up or down in order to make the useful-in-hindsight stuff more salient. There's a lot of great ... (read more)

7RobinZ11yVoting is highly recommended - please do, and feel free to reply to comments with additional commentary as well. Otherwise I'd say leave them be.
2JamesAndrix11yAlso related: A lot of the Sequences show marks of their origin on Overcoming Bias that could be confusing to someone who lands on that article: Example: "Since this is an econblog... " in http://lesswrong.com/lw/j3/science_as_curiositystopper/ [http://lesswrong.com/lw/j3/science_as_curiositystopper/] I think some kind of editorial note is in order here, if not a rewrite.
2JamesAndrix11yAlternatively, we could repost/revisit the sequences on a schedule, and let the new posts build fresh comments. Or even better, try to cover the same topics from a different perspective.

I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.

Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.

2gwern11ySo, from the 7 upboats, I take it that people in general approve of this idea. What's next? What do we do to make this a reality? Looking back at an old post from OB (I think), like http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ [http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/] I don't see any option to promote it to the front page. I thought I had enough karma to promote other people's articles, but it looks like I may be wrong about this. Is it even currently technically possible to promote old articles?
1Morendil11yAgree on the numerical value of X? LW has slowed down a bit recently, compared to relatively recent periods with frantic paces of posting; I rather appreciate the current rhythm. It would take a long period without new stuff to convince me we needed "filler" at all. Only editors can promote. (Installing the LW codebase locally is fun: you can play at being an editor.)
2gwern11yAlright. How about a week? If nothing new has shown up for a week, then I don't think people will mind a classic. (And offhand, I'm not sure we've yet had a slack period that long.)

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-p... (read more)

2cupholder11ySee also 'crank [http://scienceblogs.com/denialism/2007/06/crank_magnetism_1.php] magnetism [http://rationalwiki.org/wiki/Crank_magnetism].' I wonder if this counts as evidence for my heuristic [http://lesswrong.com/lw/28i/what_is_bunk/1zd0] of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.

Thanks for asking! I also really don't want this to fizzle away.

It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.

ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.

Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.


Title: On Mechanizing Science (Epistemology?)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

“It is not possible … to construct ... (read more)

I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.

First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.

However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informa... (read more)

5SilasBarta11yOkay, thanks, that tells me what I was looking for: clarification of what it is I'm trying to refute, and what substantive reasons I have to disagree. So "Moldbug" is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that's worthless but adheres to the algorithm, and we can see this with common sense, however less accurate it might be. The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. Like E. T. Jaynes says in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is, if we were going to make a robot that infers everything we should infer, what constraints would we place on it? This exercise is not just some attempt to make robots "as good as humans"; rather, it reveals why that-which-we-call "common sense" works in the first place, and exposes more general principles of superior inference. In short, I claim that we can have Level 3 [http://lesswrong.com/lw/1yq/understanding_your_understanding/] understanding of our own common sense. That, contra Moldbug, we can go beyond just being able to produce its output (Level 1), but also know why we regard certain things as common sense but not others, and be able to explain why it works, for what domains, and why and where it doesn't work. This could lead to a good article.
4TraditionalRationali11yThat it should be possible to algorithmize science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically. If not at a higher level, then at least -- in principle -- by quantum electrodynamics, which is the (known and computable in principle) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it should be possible to do in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.) I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used today by human scientists. That seems plausible. But it has still to be done and seems rather difficult. The philosophers of science are working on understanding the scientific process better and better, but they seem still to have a long way to go before an actually working algorithmic description has been achieved. See also the discussion below [http://lesswrong.com/lw/2eu/open_thread_july_2010/287j?context=1#comments] on the recent article by Gelman and Shalizi criticizing bayesianism. EDIT "done at a lower level" changed to "done at a higher level"
3WrongBot11yThe scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we are so very often wrong says more about the difficulty of the problem than about our own abilities.
2NancyLebovitz11yI'm pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.
3SilasBarta11yThe problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge." But you have to wonder: the human didn't learn how to recognize spurious correlations through magic. So however they acquired that capability, there should be some identifiable process behind it.
4cupholder11yThose people should be glad they've never heard of TETRAD [http://www.phil.cmu.edu/projects/tetrad/] - their heads might have exploded!
2NancyLebovitz11yThat's intriguing. Has it turned out to be useful?
6cupholder11yIt's apparently been put to use with some success. Clark Glymour [http://www.hss.cmu.edu/philosophy/faculty-glymour.php] - a philosophy professor who helped develop TETRAD - wrote a long review of The Bell Curve [http://www.hss.cmu.edu/philosophy/glymour/glymour1998.pdf] that lists applications of an earlier version of TETRAD (see section 6 of the review): Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.
3Tyrrell_McAllister11yThe pithy reply would be that science already is mechanized. We just don't understand the mechanism yet.
3cupholder11yAm I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word 'Bayesian,' [http://dfr.jstor.org/?sy=1951&cs=any%3ABayesian|year%3A[1951+TO+2005]^1.0&fs=dgm1&ey=2004] for example, and watch the numbers shoot up over time!)
1Douglas_Knight11yHalf of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as Bayesian, but that is rather different from the economists being Bayesian themselves! The other half are in math & statistics, which probably reflects Bayesian statisticians becoming more common; you might count that as science (and 10% are in science proper). Anyhow, it's clear from the context (I'd have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.
3NancyLebovitz11yHow hard do you think mechanizing science would be? It strikes me as being at least in the same class with natural language.
2NancyLebovitz11yI've been poking at the question of to what extent computers could help people do science, beyond the usual calculation and visualization which is already being done. I'm not getting very far-- a lot of the most interesting stuff seems like getting meaning out of noise. However, could computers check to make sure that the use of statistics isn't too awful? Or is finding out whether what's deduced follows from the raw data too much like doing natural language? What about finding similar patterns in different fields? Possibly promising areas which haven't been explored?
2steven046111yIt's probably best to take a cyborg point of view -- consciously followed algorithms (like probabilistic updating) aren't a replacement for common sense, but they can be integrated into common sense, or used as measuring sticks, to turn common sense into common awesome cybersense.
2cousin_it11yYou probably won't find much opposition to your opinion here on LW. Duh, of course science can and will be automated! It's pretty amusing that the thesis [http://www.cscs.umich.edu/~crshalizi/thesis/] of Cosma Shalizi, an outspoken anti-Bayesian, deals with automated extraction of causal architecture from observed behavior of systems. (If you enjoy math, read it all; it's very eye-opening.)
2SilasBarta11yReally? I read enough of that thesis to add it to the pile of "papers about fully general learning programs with no practical use or insight into general intelligence". Though I did get one useful insight from Shalizi's thesis: that I should judge complexity by the program length needed to produce something functionally equivalent, not something exactly identical, as that metric makes more sense when judging complexity as it pertains to real-world systems and their entropy.
[-][anonymous]11y 7

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables with positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and that even his c... (read more)

[-][anonymous]11y 16

I pointed this out to my buddy, who's a psychology doctoral student; his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

9cousin_it11yI think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.) In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables. As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.
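For anyone who wants to see the effect cousin_it describes, here is a minimal Python sketch of a Thomson-style sampling simulation. The parameter values (1,000 people, 3,000 independent abilities, 12 tests that each sample half the abilities) are illustrative guesses rather than Shalizi's exact setup, and the leading eigenvalue of the correlation matrix stands in for a full factor analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests, per_test = 1000, 3000, 12, 1500

# Every ability is independent by construction: there is no underlying g.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test taps its own random subset of half the abilities.
scores = np.empty((n_people, n_tests))
for t in range(n_tests):
    subset = rng.choice(n_abilities, size=per_test, replace=False)
    scores[:, t] = abilities[:, subset].sum(axis=1)

# The leading eigenvalue of the test-score correlation matrix plays
# the role of the general factor.
corr = np.corrcoef(scores, rowvar=False)
top_eigenvalue = np.linalg.eigvalsh(corr)[-1]
print(f"first factor explains {top_eigenvalue / n_tests:.0%} of variance")
```

Because any two tests share a large expected overlap of sampled abilities, all the pairwise correlations come out positive, and the first factor typically soaks up around half the variance -- even though no single underlying ability exists by construction.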

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so in the end I was left skeptical of all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

5satt11yThere is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence [http://www.amazon.com/Race-Differences-Intelligence-John-Loehlin/dp/0716707535] (1975) and Race, IQ and Jensen [http://www.amazon.com/Race-IQ-Jensen-James-Flynn/dp/0710006519] (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)
4Vladimir_M11yYes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies. Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.
2satt11yMe too! I just don't think there's been much new data brought to the table. I agree with you in counting Flynn's 1987 paper and the Minnesota followup report, and I'd add Moore's 1986 study [http://eric.ed.gov/ERICWebPortal/search/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=EJ339204&ERICExtSearch_SearchType_0=no&accno=EJ339204] of adopted black children, the recent meta-analyses by Jelte Wicherts [http://wicherts.socsci.uva.nl/] and colleagues on the mean IQs of sub-Saharan Africans, Dickens & Flynn's 2006 paper [http://www.iapsych.com/iqmr/fe/LinkedDocuments/dickens2006a.pdf] on black Americans' IQs converging on whites' (and at a push, Rushton & Jensen's reply [http://ssc.uwo.ca/psychology/faculty/rushtonpdfs/2006%20PSnew.pdf] along with Dickens & Flynn's [http://www.iapsych.com/iqmr/fe/LinkedDocuments/dickens2006b.pdf]), Fryer & Levitt's 2007 paper [http://www.economics.harvard.edu/faculty/fryer/files/fryer_levittbabiesrevision.pdf] about IQ gaps in young children, and Fagan & Holland's papers (2002 [http://dx.doi.org/10.1016/S0160-2896(02)00080-6], 2007 [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.139.170&rep=rep1&type=pdf], 2009 [http://dx.doi.org/10.1016/j.intell.2008.07.004]) on developing tests where minorities score equally to whites. I guess Richard Lynn et al.'s papers on the mean IQ of East Asians count as well, although it's really the black-white comparison that gets people's hackles up. Having written out a list, it does look longer than I expected... although it's not much for 30-35 years of controversy! Amen. The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.
5Morendil11yOK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll ju... (read more)

4[anonymous]11yYour analogy is flawed, I think. The weight of the rock pile is just what we call the sum of the weights of the rocks. It's just a definition; but the idea of general intelligence is more than a definition. If there were a real, biological thing called g, we would expect all kinds of abilities to be correlated. Intelligence would make you better at math and music and English. We would expect basically all cognitive abilities to be affected by g, because g is real -- it represents something like dendrite density, some actual intelligence-granting property. People hypothesized that g is real because results of all kinds of cognitive tests are correlated. But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities. Sure, your old g will correlate with multiple abilities -- hell, you could let g = "test score" and that would correlate with all the abilities -- but that would be meaningless. If size and location determine the price of a house, you don't declare that there is some factor that causes both large size and desirable location!

SarahC:

But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Just to be clear, this is not an original idea by Shalizi, but the well known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.

In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:

Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or n

... (read more)
4satt11yI think Shalizi isn't too far off the mark in writing "as if Thomson's theory had been ignored". Although a few psychologists & psychometricians have acknowledged Thomson's sampling model, in everyday practice it's generally ignored. There are far more papers out there that fit g-oriented factor models as a matter of course than those that try to fit a Thomson-style model. Admittedly, there is a very good reason for that — Thomson-style models would be massively underspecified on the datasets available to psychologists, so it's not practical to fit them — but that doesn't change the fact that a g-based model is the go-to choice for the everyday psychologist. There's an interesting analogy here to Shalizi's post about IQ's heritability [http://www.cscs.umich.edu/~crshalizi/weblog/520.html], now I think about it. Shalizi writes it as if psychologists and behaviour geneticists don't care about gene-environment correlation, gene-environment interaction, nonlinearities, there not really being such a thing as "the" heritability of IQ, and so on. One could object that this isn't true — there are plenty of papers out there concerned with these complexities — but on the other hand, although the textbooks pay lip service to them, researchers often resort to fitting models that ignore these speedbumps. The reason for this is the same as in the case of Thomson's model: given the data available to scientists, models that accounted for these effects would usually be ruinously underspecified. So they make do.
4Vladimir_M11yHowever, it seems to me that the fatal problem of the sampling theory is that nobody has ever managed to figure out a way to sample disjoint sets of these hypothetical uncorrelated modules. If all practically useful mental abilities and all the tests successfully predicting them always sample some particular subset of these modules, then we might as well look at that subset as a unified entity that represents the causal factor behind g, since its elements operate together as a group in all relevant cases. Or is there some additional issue here that I'm not taking into account?

I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.

Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:

  • "100-word vocabulary test // Recognize pictures of hand as Right/Left"
... (read more)
5Vladimir_M11ysatt: That's an extremely interesting reference, thanks for the link! This is exactly the kind of approach that this area desperately needs: no-nonsense scrutiny by someone with a strong math background and without an ideological agenda. David Hilbert allegedly once quipped that physics is too important to be left to physicists; the way things are, it seems to me that psychometrics should definitely not be left to psychologists. That they haven't immediately rushed to explore these findings by Smith further is an extremely damning fact about the intellectual standards in the field. Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least as important as general intelligence in predicting success and performance.
4Douglas_Knight11yI haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond. Awfully specific tests defying the predictions look like data mining to me. I predict that these negative correlations are not replicable. The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated. The second example is a little more promising: maybe that scattered Xs test is independent of verbal ability, even though it looks like other skills that are not, but I doubt it. With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence. I knew that conscientiousness predicted GPAs, but I'd never heard such a strong claim. But it is true that a lot of people dismiss conscientiousness (and GPA) in favor of intelligence, and they seem to be making an error (or being risk-seeking).
3satt11yOnce you read the relevant passage in context, I anticipate you will agree with me that Smith is serious. Take this paragraph from before the passage I quoted from: Smith then presents the example from Thurstone's 1938 data. I'd be inclined to agree if the 3 most negative correlations in the dataset had come from very different pairs of tests, but the fact that they come from sets of subtests that one would expect to tap similar narrow abilities suggests they're not just statistical noise. Smith himself does not appear to make that claim; he presents his two examples merely as demonstrations that not all mental ability scores positively correlate. I think it's reasonable to package the 3 verbal subtests he mentions as strongly loading on verbal ability, but it's not clear to me that the 3 other subtests he pairs them with are strong measures of "spatial ability"; two of them look like they tap a more specific ability to handle mental mirror images, and the third's a visual memory test. Even if it transpires that the 3 subtests all tap substantially into spatial ability, they needn't necessarily correlate positively with specific measures of verbal ability, even though verbal ability correlates with spatial ability. I'm tempted to agree but I'm not sure such a strong generalization is defensible. Take a list of psychologists' definitions of intelligence [http://www.vetta.org/definitions-of-intelligence/]. IMO self-discipline plausibly makes sense as a component of intelligence under definitions 1, 7, 8, 13, 14, 23, 25, 26, 27, 28, 32, 33 & 34, which adds up to 37% of the list of definitions. A good few psychologists appear to include self-discipline as a facet of intelligence.
1HughRistik11yInteresting thought. It turns out that Conscientiousness is actually negatively related to intelligence [http://www.ist-world.org/ProjectDetails.aspx?ProjectId=74c831c7db5d4fd1835f7ad6a37411f8] , while Openness is positively correlated with intelligence [http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WM0-4B3NM6T-1&_user=10&_coverDate=10%2F31%2F2004&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1395814866&_rerunOrigin=scholar.google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=1fbfc8e968f791daa642c8957bcbd6a7] . This finding is consistent with the folk notion of "crazy geniuses." Though it's important to note that the second study was done on college students, who must have a certain level of IQ and who aren't representative of the population. The first study notes: If we took a larger sample of the population, including lower IQ individuals, then I think we would see the negative correlation between Conscientiousness and intelligence diminish or even reverse, because I bet there are lots of people outside a college population who have both low intelligence and low Conscientiousness. It could be that a moderate amount of Conscientiousness (well, whatever mechanisms cause Conscientiousness) is necessary for above average intelligence, but too much Conscientiousness (i.e. those mechanisms are too strong) limits intelligence.
6[anonymous]11yI noticed a while back when a bunch of LW'ers gave their Big Five scores that our Conscientiousness scores tended to be low. I took that to be an internet thing (people currently reading a website are more likely to be lazy slobs) but this is a more flattering explanation.
2Douglas_Knight11yNo it doesn't. The whole point of that article is that it's a mistake to ask people how conscientious they are.
1RobinZ11yJust out of curiosity: is psychology your domain of expertise [http://lesswrong.com/lw/26l/what_are_our_domains_of_expertise_a_marketplace/]? You speak confidently and with details.
5satt11yIf only! I'm just a physics student but I've read a few books and quite a few articles about IQ. [Edit: I've got an amateur interest in statistics as well, which helps a lot on this subject. Vladimir_M is right that there's a lot of crap statistics peddled in this field.]
4[anonymous]11y"All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30. (This predictive ability is vastly less than many people would lead you to believe [cf.], but I'm happy to give them that point for the sake of argument.) This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness. Indeed, since all these things predict success in life (of one form or another), and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a (for arete), and start treating it as a real causal variable. By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men." This is the point here. There's a difference between coming up with linear combinations and positing real, physiological causes.

My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.

7satt11yShalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct. Here's a demo. The statistical analysis package R [http://www.r-project.org/] comes with some built-in datasets [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/00Index.html] to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each): * atmospheric CO2 concentrations [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/co2.html], 1959-1964 * female UK lung deaths [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/UKLungDeaths.html] , 1974-1979 * international airline passengers [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/AirPassengers.html] , 1949-1954 * sunspot counts [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/sunspot.month.html] , 1749-1754 * average air temperatures at Nottingham Castle [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/nottem.html], 1920-1925 * car drivers killed & seriously injured in Great Britain [http://stat.ethz.ch/R-manual/R-patched/library/datasets/html/UKDriverDeaths.html] , 1969-1974 It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine [http://rss.acs.unt.edu/Rdoc/library/stats/html/factanal.html] to calculate a general factor for these six time series, that general factor explains 1/3 of their variance! However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it di
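For readers without R handy, here is a rough Python analogue of satt's demo. sklearn's FactorAnalysis stands in for R's factanal, and six independently generated random walks stand in for the six monthly datasets (which aren't bundled with Python), so the exact numbers will differ from satt's:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Six unrelated "monthly series": independent random walks, 72 points each.
series = rng.normal(size=(72, 6)).cumsum(axis=0)
X = StandardScaler().fit_transform(series)  # unit variance per series

# Fit a single common factor, as factanal(x, factors=1) does in R.
fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
loadings = fa.components_[0]

# With standardized series, the mean squared loading approximates the
# share of total variance attributed to the common factor.
print(f"general factor explains {(loadings ** 2).mean():.0%} of variance")
```

The point survives translation: ask a one-factor model for a general factor and it will report one, whether or not any common cause exists.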
4HughRistik11yIn your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology [http://en.wikipedia.org/wiki/Neuroscience_and_intelligence] be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."
9satt11yThe best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake. To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what. Shalizi is, somewhat confusingly, using the word "myth" to mean something like " g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent. Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)
3[anonymous]11yFrom what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).
1satt11yYou might be right. I'm not really competent to judge the first issue (causal structure of the mind), and the second issue (interpretation of factor analytic g) is vague enough that I could see myself going either way on it.
2RobinZ11yBy the way, welcome to Less Wrong [http://lesswrong.com/lw/b9/welcome_to_less_wrong/]! Feel free to introduce yourself on that thread! If you haven't been reading through the Sequences [http://wiki.lesswrong.com/wiki/Sequences] already, there was a conversation last month about good, accessible introductory posts [http://lesswrong.com/lw/2cp/open_thread_june_2010_part_3/25bt?context=1#25bt] that has a bunch of links and links-to-links.
3satt11yThank you!
0RobinZ11yBelatedly: Economic development (including population growth?) is related to CO2, lung deaths, international airline passengers, average air temperatures (through global warming), and car accidents.
5gwern9yHere is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/ [http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/]

Robert Ettinger's surprise at the incompetence of the establishment:

Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, w

... (read more)
4Mitchell_Porter11yThere are many momentous issues here. First: I think a historical narrative can be constructed, according to which a future unexpected in, say, 1900 or even in 1950 slowly comes into view, and in which there are three stages characterized by an extra increment of knowledge. The first increment is cryonics, the second increment is nanotechnology, and the third increment is superintelligence. This is a highly selective view; if you were telling the history of futurist visions in general, you would need to include biotechnology, robotics, space travel, nuclear power, even aviation, and many other things. In any case, among all the visions of the future that exist out there, there is definitely one consisting of cryonics + nanotechnology + superintelligence. Cryonics is a path from the present to the future, nanotechnology will make the material world as pliable as the bits in a computer, and superintelligence guided by some utility function will rule over all things. Among the questions one might want answered: 1) Is this an accurate vision of the future? 2) Why is it that still so few people share this perspective? 3) Is that a situation which ought to be changed, and if so, how could it be changed? Question 1 is by far the most discussed. Question 2 is mostly pondered by the few people who have answered 'yes' to question 1, and usually psychological answers are given. I think that a certain type of historical thinking could go a long way towards answering question 2, but it would have to be carried out with care, intelligence, and a will to objectivity. This is what I have in mind: You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise. To find a history which notices any of that, you will have to specialize, e.g. to a history of American technological subcultures, or a history of 20th-century futurological enthus
2Roko11yOn the other hand, does anyone who has seriously thought about the issue expect nanotech to not be incredibly important in the long-term? It seems that there is a solid sceptical case that nano has been overhyped in the short term, perhaps even by Drexler. But who will step forward having done a thorough analysis and say that humanity will thrive for another millennium without developing advanced nanotech?
3cupholder11yA good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]

It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.

The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the ... (read more)

8cousin_it11yOkay, I've read through the whole thing so far. This is not rationalist fiction. This is standard war porn, paperback thriller stuff. Many many technical descriptions of guns, rockets, military vehicles, etc. Throughout the story there's never any real conflict, just the American military (with help from the rest of the world) steamrolling everything, and the denizens of Heaven and Hell admiring the American way of life. It was well-written enough to hold my attention like a can of Pringles would, but I don't feel enriched by reading it.
2NancyLebovitz11yI've only read about a chapter and a half, and may not read any more of it, but there's one small rationalist aspect worthy of note-- the author has a very solid grasp of the idea that machines need maintenance.
1CannibalSmith11yHere's a tiny bit of rationality:
2cousin_it11yIf your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective; that's why it is so widespread in the animal kingdom, where evolutionary dynamics make populations converge on an equilibrium of behavior, and that's why it was widespread in medieval times (on which that Hell is modeled). So the passage you quoted doesn't work as a general statement about rationality, but it works pretty well as praise of America. Right now, America is the only country on Earth that can "fight to win". Other countries have to fight "honorably" lest America deny them their right of conquest.
2wedrifid11yThe wars America fights, the wars all countries fight, are ritualised combat. We send our soldiers and bombers (of either the plane or suicide variety), you send your soldiers and bombers. One side loses more soldiers, the other side loses more money. If America or any of its rivals fought to win, their respective countries would be levelled. The ritualised combat model you describe matches modern warfare perfectly, and the very survival of the USA depends on it.
5cousin_it11yWhy did you link to TV Tropes instead of the thing itself?
2[anonymous]11yDirect link to story [http://bbs.stardestroyer.net/viewtopic.php?f=35&t=118771]

Let's see...

Actual freedom is a tried and tested way of being happy and harmless in the world as it actually is ... stripped of the veneer of normal reality or Greater Reality which is super-imposed by the psychological and/or psychic entity within the body.

and

Here is an actual freedom from the Human Condition, surpassing Spiritual Enlightenment and any other Altered State Of Consciousness, and challenging all philosophy, psychiatry, metaphysics (including quantum physics with its mystic cosmogony)

and

For a start, one needs to fully acknowledge th

... (read more)

You know, it's pretty obvious that you care about our opinion of your movement, otherwise you wouldn't be spending so much time and effort trying to convince us. That's substantial evidence against your claim that it produces a lack of sense of self or attachment. You're really shooting yourself in the foot.

The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random e... (read more)

6cousin_it11yOoh, an LW-themed horror story. My humble opinion: it's awesome! This phrase was genius: Moar please.
5pjeby11yWait, is that the whole story? 'cause if so, I really don't get it. Where's the rest of it? What happens next? Is Jerry afraid that his algorithm is a self-improving AI or something?
5apophenia11yApparently my story is insufficiently explicit. The gag here is that the AI is sentient, and has tricked Jerry into feeding it only reward numbers.
7Sniffnoy11yI'm going to second the idea that that isn't clear at all.
4Oscar_Cunningham11yHow does 2212221 represent perfect numbers?
2apophenia11yIt's not meant to be realistic, but in this specific case: 6 = 110, 28=1110 in binary. Add one to each digit.
2Sniffnoy11yExcept 28 is 11100 in binary...
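For anyone who wants to check the arithmetic, a tiny Python script (the rule is the one apophenia states: write the number in binary, then add one to each digit):

```python
# The story's encoding rule, as apophenia states it: write the number
# in binary, then add one to each digit.
def encode(n: int) -> str:
    return "".join(str(int(bit) + 1) for bit in bin(n)[2:])

print(encode(6))   # 221
print(encode(28))  # 22211 -- Sniffnoy is right: 28 is 11100 in binary
```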

Ok. Wrongbot has already given you the standard reading list, but I'd like to address this specifically.

The zeroth reason you've been voted down is that this comes across as spamming. No one likes to see a comment of apparently marginal relevance with lots of links to another website with minimal explanation.

Moving on from that, how will the general LW reader respond when reading the above? Let me more or less summarize the thought processes.

There are three ways to experience the world: sensations, feelings and thoughts. In the perception process, sensat

... (read more)
4pjeby11yFollowing which, they use more of their finite lifespan to comment in reply, in the hopes of feeling a momentary elevation of status, plus a lifetime of karma enhancements, that will maybe make up for the previous loss of time. ;-) (For the record, I upvoted you anyway. ;-) )

(1) We are aware. There are important reasons for keeping a moderation system anyway. Practical suggestions for rational groupthink-alleviating measures would be appreciated, although possibly not implemented.

(2) Bear in mind the selection effect of who reads, votes, and replies to a thread on a given topic. Last year's survey showed that people who had decided to forgo cryonics outnumbered those signed up for preservation by a factor of sixteen.

(3) You are not yet a sufficiently impressive figure within this community to induce people to reconsider their judgments mer... (read more)

3timtyler11yRe: "Rational groupthink-alleviating measures" Don't delete, ban or otherwise punish critics, would be my recommendation. Critics often bear unpopular messages. The only group I have ever participated in where critics were treated properly is the security/cryptographic community. There, if someone bothers to criticise something, if anything they are thanked for their input.
4Paul Crowley11yI don't perceive a big difference between the crypto community and LW here. Do you have an example in mind of someone who speaks to the wider crypto community with the same tone that SamAdams speaks to us, but who is treated as a valued contributor?
3Vladimir_Nesov11y"Critic" is not a very useful category, moderation-wise. What matters is quality of argument, not implied conclusions, so an inane supporter of the group should be banned as readily as an inane defector, and there seems to be little value in keeping inane contributors around, whether "critics" or not.

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

We may have different definitions of "functional relationship." I'd put very high on the list of elements of a functional relationship that people don't go out of their way to consciously manipulate each other over substantial life decisions.

2Wei_Dai11yUm, it's a matter of life or death, so of course I'm going to "go out of my way". As for "consciously manipulate", it seems to me that people in all relationships consciously manipulate each other all the time, in the sense of using words to form arguments in order to convince the other person to do what they want. So again, why is this particular form of manipulation not considered acceptable? Is it because you consider it a lie, that is, you don't think you would really feel betrayed or abandoned if your significant other decided not to sign up with you? (In that case would it be ok if you did think you would feel betrayed/abandoned?) Or is it something else?
3wedrifid11yIt is a good question. The distinctive feature of this class of influence is the overt use of guilt and shame, combined with the projection of the speaker's alleged emotional state onto the actual physical actions of the recipient. It is symptomatic of a relationship dynamic that many people consider immature and unhealthy.
2RobinZ11yPlaying Conway's Life [http://en.wikipedia.org/wiki/Conway's_Game_of_Life] is a great exercise - I recommend trying it, to anyone who hasn't. Feel free to experiment with different starting configurations. One simple one which produces a wealth of interesting effects is the "r pentomino": Edit: Image link died - see Vladimir_Nesov's comment, below.
2Vladimir_Nesov9yThe link to the image died, here it is:
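For anyone who wants to try the exercise without hunting down the lost image, here is a minimal numpy sketch of Life on a small wrapped grid, seeded with the standard r-pentomino layout (presumably what the dead image showed):

```python
import numpy as np

def step(grid):
    # Count each cell's eight neighbours by summing shifted copies
    # (np.roll wraps around, so the grid is a torus).
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((40, 40), dtype=int)
for y, x in [(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)]:  # the r-pentomino
    grid[20 + y, 20 + x] = 1

for _ in range(100):
    grid = step(grid)
print(f"live cells after 100 generations: {grid.sum()}")
```

Printing grid.sum() each generation gives a feel for why this five-cell pattern is famous: on an unbounded grid it keeps churning for over a thousand generations before settling down (the wraparound here will eventually interfere).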

I don't understand your last paragraph. Could you give an example? Is this relevant to the decision of whether intelligence tests should be used for choosing firemen? or is that a predictive use?

I'm baffled at the idea that the simulation hypothesis is silly. It can be rephrased "We are not at the top level of reality." Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we're at the top.

Do you have any evidence that any of those levels have anything remotely approximating observers? (I'll add the tiny data point that I've had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my d... (read more)

"For example, stocks are riskier than bonds, and over time always have greater returns."

In a LW vein, it's worth noting that selection and survivorship biases (as well as more general anthropic biases) mean that the very existence of the equity risk premium is unclear, even assuming that it ever existed.

(I note this because most people seem to take the premium for granted, but for long-term LW purposes, assuming the premium is dangerous. Cryonics' financial support is easier given the premium, for example, but if there is no premium and cryoni... (read more)

Apologies for being blunt, but your comment is nigh on useless: Andrew Gelman is a stats professor at Columbia who co-authored a book on Bayesian statistics (incidentally, he was also interviewed a while back by Eliezer on BHTV), while Cosma Shalizi is a stats professor at Carnegie Mellon who is somewhat well-known for his excellent Notebooks.

I don't fault you for not having known all of this, but this information was a few Google searches away. Your advice is clearly inapplicable in this case.

An additional failure mode with a few % chance of happening damages the expected utility by a few %. Unless you have some reason to think that this cause of failure is anticorrelated with other causes of failure?

Well, if you'll excuse the ugly metaphor, in this area even the positive questions are giant cans of worms lined on top of third rails, so I really have no desire to get into public discussions of normative policy issues.

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself
b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argu... (read more)

America's wars change regimes in other countries. This ain't ritualized combat.

That's exactly the purpose of ritualised combat. Change regimes without total war. Animals (including humans) change their relative standing in the tribe. Coalitions of animals use ritualised combat to change intratribal regimes. Intertribal combat often has some degree of ritual element, although this of course varies based on the ability of tribes to 'cooperate' in combat without total war.

In international battles there have been times where the combat has been completely n... (read more)

I wish there were an area of science that gave reductionist explanations of morality, that is, of the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all... (read more)

5michaelkeenan11yI'm currently reading The Moral Animal [http://www.amazon.com/exec/obidos/ASIN/0679763996] by Robert Wright, because it was recommended by, among others, Eliezer [http://yudkowsky.net/obsolete/bookshelf.html]. I'm summarizing the chapters online [http://michaelkeenan.net/the_moral_animal.php] as I read them. The fifth chapter, noting that more human societies have been polygynous than have been monogamous, examines why monogamy is popular today; you might want to check it out. As for the wider question of reductionist explanations of morality, I'm a fan of the research of moral psychologist Jonathan Haidt (New York Times article [http://www.nytimes.com/2007/09/18/science/18mora.html?_r=1], very readable paper [http://web.archive.org/web/20080316213803/http://faculty.virginia.edu/haidtlab/articles/dscalepap.html] ).
1Wei_Dai11yYou're right that there are already people like Robert Wright and Jonathan Haidt who are trying to answer these questions. I suppose I'm really wishing that the science is a few decades ahead of where it actually is.

The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?

3RobinZ11yThat sounds like a reasonable criterion.

You say that you are not promoting a cult, but for claims such as the ones you are making, I have a very high prior probability that you are. To overcome the strong weighting of my prior probability function and convince me that you are doing anything other than promoting a cult you need to supply strong evidence.

If you were able to identify specific ways in which your organization avoids falling into an affective death spiral, for example, I would be more inclined to take you seriously.

The same would hold if you explained why your group is not a cult in a way more compelling than "but we're actually right!"

Actual Freedom (AF) is not a religious system/cult; I am none too sure how anyone got that impression here as the very front page of the AF website mentions "Non-Spiritual" in bold text.

Calling something "Non-spiritual" doesn't make it not a religion. To use one obvious example, there are some evangelical Christians who say that they don't have a religion and aren't religious, but have a relationship with Jesus. Simply saying something isn't religious doesn't help matters.

To answer your specific questions: I define these things by

... (read more)
2LucasSloan11yI would be greatly edified if you would heed Blueberry's plea.

Welcome to Less Wrong. You may want to take a look at the articles listed on the LessWrong wiki page on religion; they may provide an understanding of why you are being downvoted.

4mattnewport11yI downvoted it because it looks a lot like spam, not so much because it is specifically religious spam.

This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distri

... (read more)
2cousin_it11ysteven0461 already posted this [http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/27ri] to the previous Open Thread and we had a nice little talk.
2TraditionalRationali11yI wrote a backlink to here from OB. I am not yet expert enough to evaluate this myself. I do think, however, that mjgeddes asks an important and interesting question. As an active (although low-level) rationalist, I think it is important to try, at least to some extent, to follow what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominant theory of how science proceeds seems to be the hypothetico-deductive model, somewhat informally described. No formalised model of the scientific process seems so far to have withstood serious criticism from the philosophy of science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it apparently still has to be developed further if it is to answer all serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian [http://philosophy.berkeley.edu/file/185/Glymour_Not_Bayesian.pdf]" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I am not myself up to date with most of the development, but this seems an important topic to discuss here on Less Wrong, which is quite Bayesian in orientation.
2Cyan11yETA: Never mind. I got my crackpots confused. Original text was: mjgeddes was once publicly dissed by Eliezer Yudkowsky on OB (can't find the link now, but it was a pretty harsh display of contempt). Since then, he has often bashed Bayesian induction, presumably in an effort to undercut EY's world view and thereby hurt EY as badly as he himself was hurt.

Is anything known about a physical basis for conscientiousness?

It can be reliably predicted by, for example, SPECT scans. If I recall correctly you can expect to see over-active frontal lobes and basal ganglia. For this reason (and because those areas depend on dopamine a lot) dopaminergics (Ritalin, etc) make a big difference.

I've tried very informal related experiments - often in dealing with people it's necessary to challenge their assumptions about the world.

a) People's assumptions often seem to be somewhat subconscious, so there's significant effort to extract the assumptions they're making.

b) These assumptions seem to be very core to people's thinking and they're extremely resistant to being challenged on them.

My guess is that trying to change people's methods of thinking would be even more difficult than this.

EDIT: The first version of this post talked more about challe... (read more)

Welcome to the Premier Omega-3/Fish Oil Site on the Web!

I feel cautious about the objectivity of this source. Other sources suggest health benefits from consumption of fish, but I want to be confident that my expert sources are not skewing the selection of research they choose to promote.

5Kevin11yRegardless of the source, the evidence seems to be rather strong that fish oil does good things for the brain. If you can find any negative evidence about fish oil and mental health, I'd like to see it.
2RobinZ11yI would like to know of risks associated with fish oil consumption as well. I am not aware of any. I am also not confident that any given site dedicated to the stuff would provide such information if or when it is available. I would suggest investigating independent sources of information (including but not limited to citations within and citations of referenced research) before drawing a confident conclusion.
3mattnewport11yFish oil (particularly cod liver oil) has high levels of vitamin A which is known to be toxic [http://en.wikipedia.org/wiki/Vitamin_A#Toxicity] at high doses (above what would typically be consumed through fish oil supplements) and some studies suggest is harmful [http://jama.ama-assn.org/cgi/content/abstract/297/8/842] at lower doses (consistent with daily supplementation).
1RichardKennaway11ySeth Roberts has written [http://www.blog.sethroberts.net/category/nutrition/omega-3/omega-3-directory/] about omega-3s. I believe that somewhere in there he's talked about the possibility of mercury contamination in fish oils.
4wedrifid11y(I note that mercury concentration is subject to heavy quality control measures. Quality fish oil supplements will include credible guarantees regarding mercury levels, based on independent testing. This is, of course, something to consider when buying cheap sources from some obscure place.)
1RichardKennaway11yCorrection: the health risk he wrote about was PCBs in fish oil [http://www.blog.sethroberts.net/2010/05/23/dangerous-fish-oil/]. For this reason he advocates flaxseed oil as a source of omega-3. Whether there is any real danger I don't know.
1Douglas_Knight11yPCBs and omega-3s climb the food chain, so they're pretty well correlated. At some point I eyeballed a chart and decided that mercury was negatively correlated with omega-3s. No idea why.

I have begun a design for a general computer tool to calculate utilities. To give a concrete example, you give it a sentence like

I would prefer X1 amount of money in Y1 months, to X2 in Y2 months. Then, give it reasonable bounds for X and Y, simple additional information (e.g. you always prefer more money to less), and let it interview some people. It'll plot a utility function for each person, and you can check the fit of various models (e.g. exponential discounting, no discounting, hyperbolic discounting).

My original goals were to

  • Empirically chec
... (read more)
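To make the idea concrete, here is a minimal sketch of what such a tool might do (this is not the poster's actual design; the model names, parameter grids, and answer data are all invented for illustration): score each discounting model against hypothetical answers of the form above and report the best fit.

```python
# Minimal sketch of fitting discounting models to stated time preferences.
# Everything here (data, grids, names) is hypothetical.

def exponential(x, t, delta):
    return x * delta ** t      # U = X * delta^t

def hyperbolic(x, t, k):
    return x / (1 + k * t)     # U = X / (1 + k*t)

# Each answer: (x1, t1, x2, t2), where the subject preferred option 1.
answers = [(100, 1, 110, 7), (100, 0, 150, 12), (80, 2, 100, 6)]

def score(model, param):
    """Number of stated preferences the model reproduces."""
    return sum(model(x1, t1, param) > model(x2, t2, param)
               for x1, t1, x2, t2 in answers)

candidates = [(exponential, d / 100) for d in range(80, 100)] + \
             [(hyperbolic, k / 100) for k in range(1, 50)]
best_model, best_param = max(candidates, key=lambda mp: score(*mp))
print(best_model.__name__, best_param,
      score(best_model, best_param), "of", len(answers))
```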
7Kingreaper11yEven if antinatalism is true at present (I have no major opinion on the issue yet) it need not be true in all possible future scenarios. In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum average utility is still less than zero.
1Jayson_Virissimo11yWhy shouldn't having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?
2Kingreaper11yResource limitations. There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person. There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population. I could be wrong, and if I am wrong I would like to know. Do you feel that the current population is optimum, below optimum, or above optimum?
5Mitchell_Porter11yI have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly. We all know that terrible things happen, that should never happen to anyone. The simplest antinatalist argument of all is, that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn't take this seriously or absolutely (because most people are happy, most people don't commit suicide, etc). You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly? And this gamble you propose appears to be completely unnecessary - it's not as if people have children for the greater good. Etc. A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist... but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives. It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It's not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was ag
7Roko11ySeems like loss aversion bias. Sure, bad things happen, but so do good things. You need to do an expected utility calculation for the person you're about to create: P(Bad)·U(Bad) + P(Good)·U(Good), and creation looks justified only if the sum is positive. P(Sword attack) seems to be pretty darn low.
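Made concrete with invented numbers (nothing here is from the thread; only the shape of the calculation matters):

```python
# Hypothetical probabilities and utilities for the proposed calculation.
p_bad, u_bad = 0.10, -50.0
p_good, u_good = 0.90, 10.0

expected_utility = p_bad * u_bad + p_good * u_good
print(expected_utility)   # 4.0 > 0: creation passes this crude test
```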
2Mitchell_Porter11yI think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, "pre-singularity". Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now - ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable? Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact? If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future. It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it's also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven't yet run into difficulty. I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.
4Douglas_Knight11yWhy do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don't look for their best arguments? My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities. Maybe some make more factual claims, e.g., that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don't see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be led, while people evangelizing antinatalism aren't going to make a difference. Incidentally, Eliezer sometimes seems to be an anti-human-natalist.
3cousin_it11yThe antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don't they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good. I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The "utility functions" or whatever else determines our actions contain terms that don't correspond to feelings of joy and sorrow, or are out of proportion with those feelings.
3Leonhart11yThe suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.
4Kingreaper11yPrecisely. If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.
4orthonormal11yI think that's getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity? I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative [http://wiki.lesswrong.com/wiki/Third_option] exists: applying the Geneva Convention to school social life.)
3gwern11yQuite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life. (Also, I've seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)
9ocr-fork11ySuicide rates start at .5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors. [http://www.cdc.gov/nchs/data/nvsr/nvsr57/nvsr57_14.pdf]
5gwern11yInteresting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drops by 3 to 14.5 (age 55-64) and drops another 2 for the 65-74 age bracket (12.6), and then rises again after 75 (15.9). So, I was right that the rates increase again in old age, but wrong about when the first spike was.
3pjeby11yUnfortunately, the age brackets don't really tell you if there's a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.
0Unknowns11ySuicide rates may be higher in adolescence than at certain other times, but absolutely speaking, they remain very low, showing that most people are having a good life, and therefore refuting antinatalism.
2JoshuaZ11ySuicide rates are not a good measure of how good life is except at a very rough level since humans have very strong instincts for self-preservation.
2gwern11yMy counterpoint to the above would be that if suicide rates are such a good metric, then why can they go up with affluence? (I believe this applies not just to wealthy nations (e.g. Japan, Scandinavia), but to individuals as well, but I wouldn't hang my hat on the latter.)
4daedalus2u11ySuicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.
0gwern11yYes yes, this is an argument for suicide rates never going to zero - but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.
4daedalus2u11yI think the misconception is that what is generally considered “quality of life” is not correlated with things like affluence. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”. When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence. How to resolve depression is not well understood. A large part of the problem is people who have never experienced depression, don't understand what it is and believe that things like more affluence will resolve it.
1Unknowns11yI suspect the majority of adolescents would also deny wishing they had never been born.
3Mass_Driver11yWhenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old. Why Nerds Hate Grade School [http://www.paulgraham.com/nerds.html]
2cousin_it11yBy the standard you propose, "never having existed" is also inadequate unless you invent a timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.
3Nisan11yHere's one: I bet if you asked lots of people whether their birth was a good thing, most of them would say yes. If it turns out that after sufficient reflection, people, on average, regard their birth as a bad thing, then this argument breaks down.
5Roko11yThey have an answer to that. The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters. If our contrarian position was as wrong as we think antinatalism is, would we realize?
8JoshuaZ11yDo people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn't a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly. But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He's also a highly ranked chess master. He's clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren't smart (There's some evidence to back this up. See for example this breakdown of GSS data [http://blogs.discovermagazine.com/gnxp/2010/03/intelligent-design-idiocy/] and also this analysis [http://blogs.discovermagazine.com/gnxp/2009/02/which-religious-groups-are-creationist/] . Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn't just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly. It does however seem that on LW there's a common tendency to label beliefs silly when they mean "I assign a very low probability to this belief being correct." Or "I don't understand how someone's mind could be so warped as to have this belief." Both of these are problematic, the second more so than the first because different humans have different value systems. In this particular example, value systems that put harm to others as more bad are more likely to be able to make a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple mo
4cupholder11yA data point: I don't think antinatalism (as defined by Roko above - 'it is a bad thing to create people') is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child's life would be equally bad, it'd be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?
6Blueberry11yThat your child might experience a great deal of pain which you could prevent by not having it. That your child might regret being born and wish you had made the other decision. That you can be a good parent, raise a kid, and improve someone's life without having a kid (adopt). That the world is already overpopulated and our natural resources are not infinite.
2cupholder11yPoints taken. Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn't think the antinatalism position has legs.
6NancyLebovitz11yI'd throw in considering how stable you think those high living standards are.
5Blueberry11yI'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead. There is also [http://minnesota.publicradio.org/collections/special/columns/hows_the_family/archive/2007/12/new_study_says_having_kids_doe_1.shtml] some evidence [http://parenting.blogs.nytimes.com/2009/04/01/why-does-anyone-have-children/] that having children decreases your happiness (though there may be other reasons to have kids). But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
8Leonhart11yI don't think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would have prefer to not have existed (if that's indeed possible) but, given that I in fact exist, I do not want to die. I don't, right now, see screaming incoherency here, although I'm suspicious. I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.
5Nisan11yIf there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.
1RichardKennaway11yWe have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.
1Roko11ySuch as?
6RichardKennaway11yI knew someone would ask. :-) Ok, I'll list some of my silliness verdicts, but bear in mind that I'm not interested in arguing for my assessments of silliness, because I think they're too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don't post on matters I've consigned to the not-even-wrong category, or vote them down for it. Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. ("Non-silly" doesn't mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I'm persuaded of them.) Silly: we're living in a simulation, there are infinitely many identical copies of all of us, "status" as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable. Does anyone else think that some of the recurrent ideas here are silly? ETA: Non-silly: the mission of LessWrong [http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/]. Silly: Utilitarianism of all types.
3Douglas_Knight11yThere's an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second - there are a lot of things that it could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.) Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you've phrased them. Not that I have anything to say about the difference in what one should do in the two situations of encountering people who (1) endorse your silly summary of their position; vs (2) seem to make a silly claim, but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.
1Roko11yWhat probability would you assign then to a well respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don't mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen)
3RobinZ11yWhat probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will [http://plato.stanford.edu/entries/incompatibilism-theories/]?
1wedrifid11yI'm not entirely opposed to the idea. 6 billion is enough for now. Make more when we expand and distance makes it infeasible to concentrate neg-entropy on the available individuals. This is quite different from the Robin Hanson 'make as many humans as physically possible and have them living in squalor' (exaggerated) position, but probably also in complete disagreement with arguments used for antinatalism.

I know Argumentum ad populum does not work, and I know Arguments from authority do not work, but perhaps they can be combined into something more potent:

Can anyone recall a hypothesis that had been supported by a significant subset of the lay population, consistently rejected by the scientific elites, and turned out to be correct?

It seems belief in creationism has this structure: the lower you go in education level, the more common the belief. I wonder whether this alone can be used as evidence against this 'theory' and others like it.

2NancyLebovitz11yThat there's a hereditary component to schizophrenia.
2cupholder11y? [http://scholar.google.com/scholar?q=Kallmann+%28genetic+or+genetics%29+schizophrenia]
3NancyLebovitz11yMy impression was that the idea that schizophrenia runs in families was dismissed as an old wives' tale, but a fast google search isn't turning up anything along those lines, though it does seem that some Freudians believed schizophrenia was a mental rather than physical disorder.
4cupholder11yMy understanding is that historically, schizophrenia has been presumed to have a partly genetic cause [http://books.google.com/books?id=eoA6AAAAIAAJ&lpg=PA43&pg=PA37] since around 1910, out of which grew an intermittent research program of family and twin studies to probe schizophrenia genetics. An opposing camp that emphasized environmental effects emerged in the wake of the Nazi eugenics program and the realization that complex psychological traits needn't follow trivial Mendelian patterns of inheritance. Both research traditions continue to the present day. Edit to add - Franz Josef Kallman, whose bibliography in schizophrenia genetics I somewhat glibly linked to in the grandparent comment, is one of the scientists who was most firmly in the genetic camp. His work (so far as I know) dominated the study of schizophrenia's causes between the World Wars, and for some time afterwards.
3NancyLebovitz11yThanks. You clearly know more about this than I do. I just had a vague impression.
1Douglas_Knight11yThe last point in the abstract at cupholder's link seems strikingly defensive to me:
1wedrifid11yNow I'm trying to work out what weird sexual thing involving one's mother could possibly be construed to cause schizophrenia.
1gwern11y* http://en.wikipedia.org/wiki/Schizophrenia#Genetic [http://en.wikipedia.org/wiki/Schizophrenia#Genetic] * "A family history of schizophrenia is the most significant risk factor (Table 12).3" [http://www.aafp.org/afp/2007/0615/p1821.html]
1wedrifid11yWow. Scientific elites were that silly? How on earth could they expect there not to be a hereditary component? Even exposure to the environmental factors that contribute is going to be affected by the genetic influence on personality. Stress in particular springs to mind.
2gwern11yElites in general (scientific or otherwise) seem to have a significant built-in bias against genetic explanations (which is usually what is meant by hereditary). I've seen a lot of speculation as to why this is so, ranging from it being a noble lie [http://en.wikipedia.org/wiki/Noble_lie] justified by supporting democracy or the status quo, to justifying meritocratic systems (despite their aristocratic results), to supporting bigger government (if society's woes are due to environmental factors, then empower the government to forcibly change the environment and create the new Soviet Man [http://en.wikipedia.org/wiki/New_Soviet_man]!), to simply long-standing instinctive revulsion and disgust stemming from historical discrimination employing genetic rhetoric (eugenics, Nazis, slavery, etc.) and so on. Possibly this bias is over-determined by multiple factors.

You're third, after steven0461 and nhamann.

5cupholder11yFourth! [http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2796]
2DanielVarga11yAnd I still managed to miss it the first three times.
1steven046111yI thought I did a search but apparently not; sorry.
1cupholder11yIn the long run, it's all good - I think it's a decent paper, and I suppose this way more eyeballs see it than if I was the only one to post it. (Not to say that we should make a regular habit of linking things four times :-)

There is a reason we consider infinities only as limits of sequences of finite quantities.

Suppose you tried to sum the log-odds evidence of the infinitely many scientists that the pond has more big fish. Well, some of them have positive evidence (summing to positive infinity), some have negative evidence (summing to negative infinity), and you can, by choosing the order of summation, get any result you want (up to some granularity) between negative and positive infinity.

You don't need anthropomorphic tricks to make things weird if you have actual infinities in the problem.
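The mechanism being invoked here is Riemann's rearrangement theorem for conditionally convergent series. A minimal demonstration (using the alternating harmonic series, not anything from the fish example):

```python
# A conditionally convergent series can be reordered to change its sum.
import math

N = 200_000
terms = [(-1) ** (n + 1) / n for n in range(1, N + 1)]
print(sum(terms), "~=", math.log(2))            # natural order: ln(2)

# Rearrange: one positive term, then two negative terms.
pos = iter(t for t in terms if t > 0)
neg = iter(t for t in terms if t < 0)
total = 0.0
for _ in range(N // 4):                          # stays within available terms
    total += next(pos) + next(neg) + next(neg)
print(total, "~=", math.log(2) / 2)             # same terms, half the sum
```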

Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?

Here are very similar arguments:

  • If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if posthumans run ancestor simulations, you are probably a sim.

vs.

  • If our current model of cosmology is correct,

... (read more)
4Nisan11yIs there really a cosmology that says that most beings with my subjective experiences are Boltzmann brains? It seems to me that in a finite universe, most beings will not be Boltzmann brains. And in an infinite universe, it's not clear what "most" means.
6utilitymonster11yI gathered this from a talk by Sean Carroll that I attended, and it was supposed to be a consequence of the standard picture. All the Boltzmann brains come up in the way distant future, after thermal equilibrium, as random fluctuations. Carroll regarded this as a defect of the normal approach, and used this as a launching point to speculate about a different model. I wish I had a more precise reference, but this isn't my area and I only heard this one talk. But I think this issue is discussed in his book From Eternity to Here. Here's a blogpost [http://dtaatb.weebly.com/1/post/2010/04/bummed-about-botzmann-brains-buy-in-to-babyverses.html] that, I believe, faithfully summarizes the relevant part of the talk. The normal solution to Boltzmann brains is to add a past hypothesis. Here is the key part where the post discusses the benefits and shortcomings of this approach: The years there are missing some carets. Should be 10^100 and 10^10^120.
3Nisan11yOh I see. I... I'd forgotten about the future.
2utilitymonster11yThis is always hard with infinities. But I think it can be a mistake to worry about this too much. A rough way of making the point would be this. Pick a freaking huge number of years, like 3^^^3. Look at our universe after it has been around for that many years. You can be pretty damn sure that most of the beings with evidence like yours are Boltzmann brains on the model in question.

Ha, and where is the evidence for that?

here

Is it too much to ask for evidence in a forum pertaining to human rationality?

Sometimes, yes. It depends on how it is used. And I know you didn't really want me to give an answer to your question. But that's the point. "Where is your evidence?" is just a bunch of verbal symbols that have very little to do with 'rationality'. If the meaning and intended function of the phrase is equivalent to "Your mom is a cult!" but translated to the vernacular of a different subculture then it says abs... (read more)

Currently you can choose the threshold for hiding comments: click on "preferences" on the right. I've turned mine off, because I like to see all the comments. I'd be open to adding an option for "don't even show there was a comment here," but I'd like the comments to be preserved in case someone wants to see them.

A very brief explanation of what I mean by prior probability: I've seen people making claims of this general sort about, say, twenty different techniques/philosophies/religions. In each of those twenty cases, the claimant and all other followers of the technique/philosophy/religion in question were together part of a cult.

So, presented only with the claims you have made and based on my past experience with such claims, I perceive that it is very likely that you are promoting something cult-like, given the 100% correlation I have observed in the past.

This is n... (read more)
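One way to make "very likely" precise is Laplace's rule of succession, which deliberately stops short of probability 1. A sketch, with the uniform Beta(1,1) prior as an assumption:

```python
# After seeing n of n such claims come from cults, a uniform Beta(1,1)
# prior gives P(the next one is a cult) = (n + 1) / (n + 2).
n = 20
print((n + 1) / (n + 2))   # ~0.955: very likely, but not certainty
```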

Nope, that was Jef Allbright.

I had a closer look at the AF website. The guy's biography was interesting. He starts out juxtaposing himself as a young conscript in the Vietnam war, facing a Buddhist priest burning himself alive, and feeling that both these sides are wrong. He struggles with the meaning of life, for some years falls into spiritual-savior consciousness, seeking to be or feeling that he is an enlightened teacher. Then eventually he abandons that too, in favor of "the actual world". Thus, the ordinary ego-self he used to have was false, but so was the metaphys... (read more)

A small koan on utility functions that "refer to the real world".

  1. Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?

  2. Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?

In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.

6Kingreaper11yWould the simulation allow us to exit, in order to perform further research on the nature of the external world? If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further. The fact that I may already live in one is just bloody irritating :p
4Roko11yBut all of the mathematics and philosophy would still need to be done, and I suspect that that's where the exciting stuff is anyway.
1cousin_it11yGood point. You have just changed my answer from yes to no.
3Alicorn11yIf we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.
2Clippy11yI might do that just sort of temporarily because it would be fun, similar to how apes like to watch other apes in ape situations even when it doesn't relate to their own lives. But I would have to limit this kind of thing because, although pleasurable, it doesn't support my real values. I value real paperclips, not simulated paperclips, fun though they might be to watch.
2wedrifid11yClippy is funnier when he plays the part of a paperclip maximiser, not a human with a paperclip fetish.
1ewbrownv11yYour footnote assumes away most of the real reasons for objecting to such a scenario (i.e. there is no remotely plausible world in which you could be confident that the simulation is either indestructible or tamper-proof, so entering it means giving up any attempt at personal autonomy for the rest of your existence).
1magfrump11yPart 2 seems similar to the claim (which I have made in the past but not on LessWrong) that the Matrix was actually a friendly move on the part of that world's AI.
4Bongo11yAgent Smith did say that the first matrix was a paradise but people wouldn't have it, but is simulating the world of 1999 really the friendliest option?
2magfrump11yWe only ever see America simulated. Even there we never see crime or oppression or poverty (homeless people could even be bots). If you don't simulate poverty and dictatorships then 1999 could be reasonably friendly. The economy is doing okay and the Internet exists and there is some sense that technology is expanding to meet the world's needs but not spiraling out of control. But I'm just making most of this up to show that an argument exists; it seems pretty clear that it was written to be in the present day to keep it in the genre of post-apocalyptic lit, in which case using the present adds to the sense of "the world is going downhill."
4billswift11yAnd the AI kills the thousands of people in Zion every hundred years or so when they get aggressive enough to start destabilizing the Matrix, thereby threatening billions. But the AI needs to keep some outside the Matrix as a control and insurance against problems inside the Matrix. And the AI spreads the idea that the Matrix "victims" are slaves and provide energy to the AI to keep the outsiders outside (even though the energy source claims are obviously ridiculous - the people in Zion are profoundly ignorant and bordering on outright stupid). Makes more sense than the silliness of the movies anyway.
1magfrump11yThis hypothesis also explains the oracle in a fairly clean way.
1ShardPhoenix11yThe given assumption seems unlikely to me, but in that case I think I'd go for it.
1red7511yIs it assumed that no new information will be entered into the simulation after launch?
1Blueberry11yAnd does it change your answers if you learn that we are living in a simulation now? Or if you learn that Tegmark's theory is correct?
4wedrifid11yWhether or not his comments are desirable, this poster does not seem to qualify as a troll. Do not feed the Unwelcome Spammer perhaps?

Hi there yourself. I don't believe I've run across your website or mini-movement before. As some of your skeptical correspondents note, there is a very long prior history of people claiming enlightenment, liberation, transcendence of the self, and so forth. So even if one is sympathetic to such possibilities, one may reasonably question the judgment of "Richard" when he says that he thinks he is the first in history to achieve his particular flavor of liberation. This really is a mark against his wisdom. He would be far more plausible if he was s... (read more)

2LucasSloan11yI would be greatly edified if you would heed Blueberry's plea.
1RichardKennaway11yJust an FYI, but modern technology now allows instant access to a stream of such remarks. The Dalai Lama is on Facebook. [http://www.facebook.com/DalaiLama]

Paul Graham has written extensively on Startups and what is required. A highly focused team of 2-4 founders, who must be willing to admit when their business model or product is flawed, yet enthused enough about it to pour their energy into it.

Steve Blank has also written about the Customer Development process which he sees as paralleling the Product Development cycle. The idea is to get empirical feedback by trying to sell your product from the get-go, as soon as you have something minimal but useful. Then you test it for scalability. Eventually you have ... (read more)

3RichardKennaway11yI wonder what percentage have ever tried?
2pjeby11yThat at least partly depends on what you define as a "startup". Graham's idea of one seems to be oriented towards "business that will expand and either be bought out by a major company or become one", vs. "enterprise that builds personal wealth for the founder(s)". By Graham's criteria, Joel Spolsky's company, Fog Creek, would not have been considered a startup, for example, nor would any business I've ever personally run or been a shareholder of. [Edit: I should say, "or been a 10%+ shareholder of"; after all, I've held shares in public companies, some of which were undoubtedly startups!]
2wedrifid11yI would not deviate too much from the prior (most would fail).

I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).

I've been following along and trying to work out the examples, and I'm hitting a road block when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory. Part of my problem comes because I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates... (read more)
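On the notational question: 'YW' in Pearl's notation denotes the compound variable (Y, W), so (X || YW | Z) unpacks to P(x | y,w,z) = P(x | z), and Decomposition follows by summing that identity over w. A numeric sanity check of the implication, on an invented joint distribution built to satisfy the premise:

```python
# Check Decomposition numerically: build a joint P(x,y,w,z) that satisfies
# (X || YW | Z) by construction, i.e. P = P(z) P(x|z) P(y,w|z), then verify
# (X || Y | Z), i.e. P(x | y,z) = P(x | z). All numbers are random/invented.
import itertools, random

random.seed(0)
vals = [0, 1]

pz = {z: 0.5 for z in vals}
px1_z = {z: random.random() for z in vals}        # P(X=1 | z)
pyw_z = {}                                        # P(y,w | z)
for z in vals:
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    for (y, w), wt in zip(itertools.product(vals, repeat=2), weights):
        pyw_z[(y, w, z)] = wt / total

def joint(x, y, w, z):
    px = px1_z[z] if x == 1 else 1 - px1_z[z]
    return pz[z] * px * pyw_z[(y, w, z)]

for x, y, z in itertools.product(vals, repeat=3):
    p_xyz = sum(joint(x, y, w, z) for w in vals)                  # P(x,y,z)
    p_yz = sum(joint(xx, y, w, z) for xx in vals for w in vals)   # P(y,z)
    p_xz = sum(joint(x, yy, w, z) for yy in vals for w in vals)   # P(x,z)
    assert abs(p_xyz / p_yz - p_xz / pz[z]) < 1e-12
print("Decomposition holds on this example")
```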

This is evident on LW with how many posts just seem to rehash old news.

Do you have any insights which you would like to share that advance the borders of rationality?

Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.

None of the above is to withdraw my remarks towar... (read more)

4JoshuaZ11yI'm slightly worried that some of my remarks to Sam fell in that category. Rereading them, I don't see that, but there may be substantial cognitive biases preventing me from seeing this issue in my own remarks. Did any of my comments fall into that category under your estimate? If so, which ones?
1RobinZ11yYour comments were reasonably restrained. Edit: To a certain extent I am gunshy about ascribing motivations at all - it may be my casual reading left me with an invalid impression of the extent to which this was done.

You have to differentiate between what an individual thinks/does/decides, and what society as a whole thinks/does/decides.

For example, in a society that generally accepted that it was the "done thing" for a person to die on the funeral pyre of their partner, saying that you wanted to make a deal to buck the trend would certainly be seen as selfish.

Most individuals see the world in terms of options that are socially allowable, and signals are considered relative to what is socially allowable.

They're both things with low probabilities of success, and extremely large pay-offs.

To someone with a certain view of the future, or a moderately low "maximum pay-off" threshold, the pay-off of cryonics could be the same as the pay-off for a lottery win.

At which point the lottery is a cheaper, but riskier, gamble. Again, if someone has a certain view of the future, or a "minimum probability" threshold (which both are under) then this difference in risk could be unnoticed in their thoughts.

At which point the two become identical, but one... (read more)

4Paul Crowley11yOne big barrier I hit in talking to some of those close to me about this is that I can't seem to explain the distinction between wanting the feeling of hope that I might live a very long time, and actually wanting to live a long time. Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".
4Nisan11yI could see people saying that if they don't believe that cryonics has any chance at all of working. It might be hard to tell. If I told people "there's a good chance that cryonics will enable me to live for hundreds of years", I'm sure many would respond by nodding, the same way they'd nod if I told them that "there's a good chance that I'll go to Valhalla after I die". Sometimes respect looks like credulity, you know? Do you think that's what's happening here?
8Paul Crowley11yYes. I'm happy that people respect my choices, but when they "respect my beliefs" it strikes me as incredibly disrespectful.
3RichardKennaway11yAnd if you reply "I only want to believe in things that are true?"
3Paul Crowley11yApply the same transformation to my words that is causing me problems to that reply, and you get "I only want to believe in things that I believe are true".

Stored riff here: I think the world would be a better place if people had cheap handy means of doing quantitative chemical tests. I'm not sure how feasible it is, though I think there's a little motion in that direction.

1wedrifid11yI would love to have that available, either as a product or a readily accessible service.
5Nisan11yIt would make consuming illegal drugs a lot safer, no?
4Kevin11yhttp://www.dancesafe.org/testingkits/ [http://www.dancesafe.org/testingkits/]

I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.

7kpreid11yThis is not a distortion of the original meaning. “Feeding the trolls” is just giving them replies of any sort — especially if they're well-written, because you’re probably investing more effort than the troll.
1Cyan11yI didn't intend to imply otherwise.
4JoshuaZ11yI don't think this is unique to LW at all. I've seen well-argued rebuttals to trolls labeled as feeding in many different contexts including Slashdot and the OOTS forum.
1Vladimir_Nesov11yWe must aspire to a greater standard, with troll-feeding replies being troll-aware of their own troll-awareness.

Reflective people are LESS risk averse.

That's even more confusing. I would expect a reflective person to be more self-doubtful and more risk-averse than a non-reflective person, all else being equal. But perhaps a different definition of "reflective" is involved here.

2gwern11yPossibly. A reflective person can use expected-utility to make choices that regular people would simply categorically avoid. (One might say in game-theoretic terms that a rational player can use mixed strategies, but irrational ones cannot and so can do worse. But that's probably pushing it too far.) I recall reading one anecdote on an economics blog. The economist lived in an apartment and the nearest parking for his car was quite a ways away. There were tickets for parking on the street. He figured out the likelihood of being ticketed & the fine, and compared its expected disutility against the expected disutility of walking all the way to safe parking and back. It came out in favor of just eating the occasional ticket. His wife was horrified at him deliberately risking the fines. Isn't this a case of rational reflection leading to an acceptance of risk which his less-reflective wife was averse to?
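A back-of-envelope version of that comparison (all numbers invented; the anecdote gives none):

```python
p_ticket, fine = 0.05, 40.0   # hypothetical daily ticket risk and fine
walk_cost = 4.0               # hypothetical dollar-equivalent cost of the walk

expected_fine = p_ticket * fine
print(expected_fine < walk_cost)   # True: 2.0 < 4.0, so eat the tickets
```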
2gwern11yIn a serendipitous and quite germane piece of research, Marginal Revolution [http://www.marginalrevolution.com/marginalrevolution/2010/07/how-do-higheriq-people-choose.html] links to a study on IQ and risk-aversion:
2RobinZ11yI don't believe the article [http://www.dynamist.com/articles-speeches/nyt/cognition.html] says "reflective":
4NancyLebovitz11yThe problem with the temperament checks in the last two paragraphs is that they're still testing roughly the same thing that's tested earlier on-- competence at word problems. And possibly interest in word problems-- I know I've seen versions of the three problems before. I wouldn't be going at them completely cold, but I wouldn't have noticed and remembered having seen them decades ago if word problems weren't part of my mental universe.
1gwern11ySomewhat offtopic: I recall reading a study once that used a test which I am almost certain was this one to try to answer the cause/correlation question of whether philosophical training/credentials improved one's critical thinking or whether those who undertook philosophy already had good critical thinking skills; when I recently tried to re-find it for some point or other, I was unable to. If anyone also remembers this study, I'd appreciate any pointers. (About all I can remember about it was that it concluded, after using Bayesian networks, that training probably caused the improvements and didn't just correlate.)

My implicit definition of perfect Bayesian is characterized by these propostions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. Occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model. The problem is, we don't know these things. In practice we can't exact... (read more)
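The mechanical core that both sides of this debate agree on, for a single proposition and a single piece of evidence (illustrative numbers only):

```python
# Posterior via Bayes' theorem; prior and likelihoods are made up.
prior = 0.01                         # P(H)
like_h, like_not_h = 0.90, 0.05      # P(E|H), P(E|~H)

p_e = prior * like_h + (1 - prior) * like_not_h
posterior = prior * like_h / p_e
print(posterior)                     # ~0.154: the "correct posterior", given the prior
```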

Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I'd asked the question. What does your model of the world, which says that simulation is silly, say for the probability that a major establishment scientist, who is in no way a transhumanist, believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?

4RichardKennaway11yI would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness. [ETA: And of course, I'm talking about ideas that I've judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn't going to make a difference.] But you changed it to "could be". Sure, could be, but that's like Descartes' speculations about a trickster demon faking all our sensations. It's unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you're just writing speculative fiction. But if this person is arguing that we probably are in a simulation, then no, I just tune that out.
3Roko11ySo the bottom line of your reasoning is quite safe from any evidential threats?

We think of Aumann updating as updating upward if the other person's probability is higher than you thought it would be, or updating downward if the other person's probability is lower than you thought it would be. But sometimes it's the other way around. Example: there are blue urns that have mostly blue balls and some red balls, and red urns that have mostly red balls and some blue balls. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and if it's OD you might learn it's OD or you might not. A and B are given an urn and ar... (read more)
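The comment is truncated here, but the flipping mechanic it relies on can be sketched: the same blue draw is evidence for a blue urn normally and against it on Opposite Day, so learning that it is Opposite Day reverses the direction of the update. The ball proportions below are invented:

```python
# P(blue ball | urn color, opposite_day); invented proportions.
p_ball_blue = {("blue", False): 0.7, ("red", False): 0.3,
               ("blue", True): 0.3,  ("red", True): 0.7}
prior_blue_urn = 0.5

def posterior_blue_urn(opposite_day):
    num = prior_blue_urn * p_ball_blue[("blue", opposite_day)]
    den = num + (1 - prior_blue_urn) * p_ball_blue[("red", opposite_day)]
    return num / den

print(posterior_blue_urn(False))  # 0.7: blue draw raises P(blue urn)
print(posterior_blue_urn(True))   # 0.3: same draw lowers it on Opposite Day
```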

We have recently had a discussion on whether the raw drive for status seeking benefits society. This link seems all too appropriate (or, well, at least apt.)

If you click "Help" when writing a comment, it will appear in a handy box right next to where you are writing.

A neurological study still will not give a full picture of a PCE. The scientists have not been able to locate the identity/self anywhere in the brain, let alone detect its absence

Yes they have, at least in the sense that you are referring to. And they can provoke the suppression of this self with magnetic stimulation.

You on the other hand are completely incapable of suppressing the identity/self. You are tied up in it far more than the average person.

I'm not certain that's so, as ISTM many of the things humanity wants to maximize are to a large extent representation-invariant - in particular because they refer to other people - and could be done just as well in a simulation. The obvious exception being actual knowledge of the outside world.

if someone is to call the discoverer, his discovery and a few of those experimenting with his method as a cultic organization, then the burden of proof lies on the shoulder of this someone, does it not?

Nope, and cult members ask the exact same question.

Extraordinary claims require extraordinary evidence. You came here to convince people to adopt Actualism (it seems). So, actually convince me. Why should I pay more attention to you and your alleged non-cult than I do to someone else's alleged non-cult? Arguments based on the teachings of your alleged non-cult are worthlessly circular, because you're trying to convince me that such claims should have worth in the first place.

Maybe not deleted but simply locked so that no-one can post in them. Should stop any painful soul-draining, ultimately pointless arguments. Of course, if the subject is posted again then it's definitely spamming, and the mods should delete the repost and ban those responsible.

I have been thinking about "holding off on proposing solutions." Can anyone comment on whether this is more about the social friction involved in rejecting someone's solution without injuring their pride, or more about the difficulty of getting an idea out of your head once it's there?

If it's mostly social, then I would expect the method to not be useful when used by a single person; and conversely. My anecdote is that I feel it's helped me when thinking solo, but this may be wishful thinking.

2Oscar_Cunningham11yDefinitely the latter. Even when I'm on my own, any subsequent ideas after my first one tend to be variations on my first solution, unless I try extra hard to escape its grip.

I have a few questions.

1) What's "Bayescraft"? I don't recall seeing this word elsewhere. I haven't seen a definition on LW wiki either.

2) Why do some people capitalize some words here? Like "Traditional Rationality" and whatnot.

5Morendil11yTo me "Bayescraft" has the connotation of a particular mental attitude, one inspired by Eliezer Yudkowsky's fusion of the ev-psych, heuristics-and-biases literature with E.T. Jaynes' idiosyncratic take on "Bayesian probabilistic inference", and in particular the desiderata for an inference robot: take all relevant evidence into account, rather than filter evidence according to your ideological biases, and allow your judgement of a proposition's plausibility to move freely in the [0..1] range rather than seek all-or-nothing certainty in your belief.
3Nisan11yCapitalized words are often technical terms. So "Traditional Rationality" [http://wiki.lesswrong.com/wiki/Traditional_rationality] refers to certain epistemic attitudes and methods which have, in the past, been called "rational" (a word which is several hundred years old). This frees up the lower-case word "rationality" [http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/], which on this site is also a technical term.
1Oscar_Cunningham11yBayescraft is just a synonym for Rationality, with connotations of a) Bayes' theorem, since that's what epistemic rationality must be based on, and b) the notion that rationality is a skill which must be developed personally and as a group (see also: Martial Art of Rationality (oh look, more capitals!)) The capitals are just for emphasis of concepts that the writer thinks are fundamentally important.

An interesting site I stumbled across recently: http://youarenotsosmart.com/

They talk about some of the same biases we talk about here.

0Cyan11yIn fact, the post of July 14 on the illusion of transparency quotes EY's post on the same subject.

Is self-ignorance a prerequisite of human-like sentience?

I present here some ideas I've been considering recently with regards to philosophy of mind, but I suppose the answer to this question would have significant implications for AI research.

Clearly, our instinctive perception of our own sentience/consciousness is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.

Yet I take it as true that our brains - like everything else - are purely ... (read more)

Downvoted for unnecessarily rude plonking. You can tell someone you're not interested in what they have to say without being mean.

he's proud to be a "Silas-free" zone now.

From looking at his blog, I think you should take this as a compliment.

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

Ah, but did you notice that that did not work for Robin? (The NYT article says that Robin discussed it with Peggy when they were getting to know each other.)

5Nisan11yIt "worked" for Robin to the extent that Robin got to decide whether to marry Peggy after they discussed cryonics. Presumably they decided that they preferred each other to hypothetical spouses with the same stance on cryonics.

So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend, you can always count on them to point out your flaws) that I'm too logical and rational and emotionless and I can't connect with people or understand them et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my ... (read more)

7WrongBot11y"Fake it until you make it" is surprisingly good advice for this sort of thing. I had moderate self-esteem issues in my freshman year of college, so I consciously decided to pretend that I had very high self-esteem in every interaction I had outside of class. This may be one of those tricks that doesn't work for most people, but I found that using a song lyric (from a song I liked) as a mantra to recall my desired state of mind was incredibly helpful, and got into the habit of listening to that particular song before heading out to meet friends. (The National's "All The Wine [http://www.youtube.com/watch?v=d9yjOy9PqNY]" in this particular case. "I am a festival" was the mantra I used.) That's in the same class of thing as acting like Regina Spektor or Feynman; if you act in a certain way consistently enough, your brain will learn that pattern and it will begin to feel more natural and less conscious. I don't worry about my self-esteem any more (in that direction, at least).
6wedrifid11yI suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode. (And less time with your ex!) A perfect form of practice is dance. Take swing dancing lessons, for example. That removes the possibility of using your overwhelming verbal fluency and persona of intellectual brilliance. It makes it far easier to activate that part that is sometimes called 'human' but perhaps more accurately called 'animal'. Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.
3Will_Newsome11yNon-nerdy people who are interesting are surprisingly difficult to find, and I have a hard time connecting with the ones I do find such that I don't get much practice in. I'm guessing that the biggest demographic here would be artists (musicians). Being passionate about something abstract seems to be the common denominator. Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one. Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!
5wedrifid11yI was using very similar reasoning when I suggested "non nerds or nerds not presently in nerd mode". The key is hide the abstract discussion crutch! Friends who are willing to suggest improvements (Tsuyoku naritai) sincerely are valuable resources! If your ex is able to point out a flaw then perhaps you could ask her to lead you through an example of how to have a 'warm, human' interaction, showing you the difference between that and what you usually do? Mind you, it is still almost certainly better to listen to criticism from someone who has a vested interest in your improvement rather than your acknowledgement of flaws. Like, say, a current girlfriend. ;)
1Kevin11yIn my last semester at college, I figured I should take fun classes while I could, so I took two one-credit drumming classes. In African Drumming Ensemble, we spent 90% of the time doing complex group dances and not drumming, because the drumming was so much easier to learn than the dancing. Being tricked into taking a dance class was broadly good for my social skills, not least my confidence on a dance floor.
5Kevin11yThe kind of ultra-rational Bayesian linguistic patterns used around here would be considered obnoxiously intellectual and pretentious (and incomprehensible?) by most people. Practice mirroring the speech patterns of the people you are communicating with, and slip into rationalist talk when you need to win an argument about something important. When I'm talking to street people, I say "man" a lot because it's something of a high honorific. Maybe in California I will need to start saying "dude", though "man" seems inherently more respectful.
4[anonymous]11yI think most people here have some sort of similar problem. Mine isn't being emotionless (ha!) but not knowing the right thing to say, putting my foot in my mouth, and so on. Occasionally coming across as a pedant, which is so embarrassing. I may be getting better at it, though. One thing is: if you are a nerd (in the sense of passionate about something abstract) just roll with it. You will get along better with similar people. Your non-nerdy friends will know you're a nerd. I try to be as nice as possible so that when, inevitably, I say something clumsy or reveal that I'm ignorant of something basic, it's not taken too negatively. Nice but clueless is much better than arrogant. And always wait for a cue from the other person to reveal something about yourself. Don't bring up politics unless he does; don't mention your interests unless he asks you; don't use long words unless he does. I can't dance for shit, but various kinds of exercise are a good way to meet a broader spectrum of people. Do I still feel like I'm mostly tolerated rather than liked? Yeah. It can be pretty depressing. But such is life. As for dating -- the numbers are different from my perspective, of course, but so far I've found I'm not going to click really profoundly with guys who aren't intelligent. I don't mean that in a snobbish way, it's just a self-knowledge thing -- conversation is really fun for me, and I have more fun spending time with quick, talkative types. There's no point forcing yourself to be around people you don't enjoy.
2knb11yIn my experience, something as simple as adding a smile can transform a demeanor otherwise perceived as "cold" or "emotionless" to "laid-back" or "easy-going".
2JoshuaZ11yDate nerdier people? In general, many nerdy rational individuals have a lot of trouble getting along with not-so-nerdy individuals. There's some danger that I'm other optimizing [http://wiki.lesswrong.com/wiki/Other-optimizing] but I have trouble thinking how an educated rational individual would be able to date someone who thought that there was something wrong with using terms like "a priori." That's a common enough term, and if someone uses a term that they don't know they should be happy to learn something. So maybe just date a different sort of person?
1Will_Newsome11yI wasn't talking mostly about dating, but I suppose that's an important subfield. The topic you mention came up at the Singularity Institute Visiting Fellows house a few weeks ago. 3 or 4 guys, myself included, expressed a preference for girls who had specialized in some other area of life: gains from trade of specialized knowledge. And I just love explaining to a girl how big the universe is and how gold is formed in supernovas... most people can appreciate that, even if they see no need for using the word 'a priori'. I don't mean average intelligence, but one standard deviation above the mean. Maybe more; I tend to underestimate people. There was 1 person who was rather happy with his relationship with a girl who was very like him. However, the common theme was that people who had more dating experience consistently preferred less traditionally intelligent and more emotionally intelligent girls (I'm not using that term technically, by the way), whereas those with less dating experience had weaker preferences for girls who were like themselves. Those with more dating experience also seemed to put much more emphasis on the importance of attractiveness instead of e.g. intelligence or rationality. Not that you have to choose or anything, most of the time. I'm going to be so bold as to claim that most people with little dating experience who believe they would be happiest with a rationalist girlfriend should update on expected evidence and broaden their search criteria for potential mates. As for preferences of women, I'm sorry, but the sample size was too small for me to see any trends. (To be fair this was a really informal discussion, not an official SIAI survey of course. :) ) Important addendum: I never actually checked to see if any of the guys in the conversation had dated women who were substantially more intelligent than average, and thus they might not have been making a fair comparison (imagining silly arguments about deism versus atheism or so
2JoshuaZ11yI've dated females who were clearly less intelligent than I am, some about the same, and some clearly more intelligent. I'm pretty sure the last category was the most enjoyable (I'm pretty sure that rational intelligent nerdy females don't want to date guys who aren't as smart as they are either). There may be issues with sample size.
1katydee11yI have myself been accused of being an android or replicant on many occasions. The best way that I've found to deal with this is to make jokes and tell humorous anecdotes about the situation, especially ones that poke fun at myself. This way, the accusation itself becomes associated with the joke and people begin to find it funny, which makes it "unserious."

I maintain that if you are male with a relatively neurotypical female partner, the probability of success in getting her to sign on the dotted line for cryo, or to accept your own cryo wholeheartedly, is not maximized by rational argument; rather, it is maximized by having an understanding of the emotional world that the fairer sex inhabit, and how to control her emotions so that she does what you think best. She won't listen to your words, she'll sense the emotions and level of dominance in you, and then decide based on that, and then rationalize that de... (read more)

2lmnop11yI mostly agree with you. I would even expand your point to say that if you want to convince anyone (who isn't a perfect Bayesian) to do anything, the probability of success will almost always be higher if you use primarily emotional manipulation rather than rational argument. But cryonics inspires such strong negative emotional reactions in people that I think it would be nearly impossible to combat those with emotional manipulation of the type you describe alone. I haven't heard of anyone choosing cryonics for themselves without having to make a rational effort to override their gut response against it, and that requires understanding the facts. Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

About "Silas-free zones" you blogged:

So why would this Serious Thinker feel the need to reject, on sight, my comments from appearing, and then advertise it?

You don't think your making a horrible impression on people you argue with may have anything to do with it? ;)

Seriously, that would be my first hypothesis. "You don't catch flies with vinegar." Go enough out of your way to antagonize people even as you're making strong rebuttals to their weak arguments, and you're giving them an easy way out of listening to you.

The nicer you are,... (read more)

I don't think arithmetic coding achieves the 1 bit / character theoretical entropy of common English, as that requires knowledge of very complex boundaries in the probability distribution. If you know a color word is coming next, you can capitalize on it, but not letterwise.

Of course, if you permit a large enough block size, then it could work, but the lookup table would probably be unmanageable.
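For what it's worth, the coder and the model can be separated here: an arithmetic coder's output length approaches the cross-entropy of whatever predictive model drives it, so the "complex boundaries" live in the model, not in the letterwise coding step. A minimal sketch that estimates the achievable bits per character under a simple adaptive trigram model (sample.txt is a placeholder for any large English text file):

```python
import math
from collections import defaultdict

def bits_per_char(text, order=2):
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = sorted(set(text))
    total_bits = 0.0
    for i, char in enumerate(text):
        ctx = counts[text[max(0, i - order):i]]
        # Laplace-smoothed probability the model assigns to the next character;
        # an arithmetic coder would spend about -log2(p) bits on it.
        p = (ctx[char] + 1) / (sum(ctx.values()) + len(alphabet))
        total_bits -= math.log2(p)
        ctx[char] += 1  # adaptive: update the model after "coding" each character
    return total_bits / len(text)

sample = open("sample.txt").read().lower()  # placeholder file name
print(f"approx. achievable rate: {bits_per_char(sample):.2f} bits/char")
```

A trigram model won't get down to 1 bit/char, but the same harness with a better model shows how close any given model (and hence an arithmetic coder driven by it) can get.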

2Sniffnoy11yYeah, I meant "arithmetic encoding with absurdly large block size"; I don't have a practical solution.

Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial cha... (read more)
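A toy sketch of the two-key idea: at each step, build a Huffman code over a model's predicted next-character distribution, and have the user "type" a character by pressing the bits of its codeword. Expected presses per character then come within one press of the model's entropy (arithmetic coding, or Dasher's continuous zooming, removes even that overhead). The bigram model and sample text below are stand-ins:

```python
import heapq
from collections import defaultdict

def huffman_lengths(probs):
    """Codeword length (number of key presses) per symbol, via Huffman coding."""
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # deepen both subtrees
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def presses_per_char(text, order=1):
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = sorted(set(text))
    presses = 0
    for i, ch in enumerate(text):
        ctx = counts[text[max(0, i - order):i]]
        total = sum(ctx.values()) + len(alphabet)
        probs = {s: (ctx[s] + 1) / total for s in alphabet}  # Laplace smoothing
        presses += huffman_lengths(probs)[ch]  # key presses to select this character
        ctx[ch] += 1
    return presses / len(text)

sample = open("sample.txt").read().lower()[:5000]  # placeholder text, kept short
print(f"{presses_per_char(sample):.2f} presses/char")
```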

4gwern11yAlready done; see Dasher [http://en.wikipedia.org/wiki/Dasher#Design] and especially its Google Tech Talk [http://video.google.com/videoplay?docid=5078334075080674416#]. It doesn't reach the 0.7-1 bit per character limit, of course, but then, according to the Hutter challenge [http://cs.fit.edu/~mmahoney/compression/rationale.html] no compression program (online or offline [http://en.wikipedia.org/wiki/Online_algorithm]) has.
3SilasBarta11yWow, and Dasher was invented by David MacKay [http://en.wikipedia.org/wiki/David_J._C._MacKay], author of the famous free textbook [http://www.inference.phy.cam.ac.uk/mackay/itila/book.html] on information theory!
2gwern11yAccording to Google Books, the textbook mentions Dasher, too.
3Christian_Szegedy11yThis is already exploited on cell phones to some extent.
2[anonymous]11yThe kinds of implications I'm thinking about are that if IQ causes X (and if IQ is heritable), then we should not seek to change X by social engineering means, because it won't be possible. X could be the distribution of college admittees, firemen, criminals, etc.

Not all policy has to rely on causal factors, of course. And my thinking is a little blurry on these issues in general.

That's a bizarre choice of example. The question of whether Pluto is a planet is entirely a definitional one; the IAU could make it one by fiat if they chose. There's no particular reason for it not to be one, except that the IAU felt the increasing number of transNeptunian objects made the current definition awkward.

5RobinZ11y"[E]ntirely a definitional" question does not mean "arbitrary and trivial" - some definitions are just wrong [http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/]. EY mentions the classic example in Where to Draw the Boundary? [http://lesswrong.com/lw/o0/where_to_draw_the_boundary/]: Honestly, it would make the most sense to draw four lists, like the Hayden Planetarium did, with rocky planets, asteroids, gas giants, and Kuiper Belt objects each in their own category, but it is obviously wrong to include everything from Box 1 and Box 3 and one thing from Box 4. The only reason it was done is because they didn't know better and didn't want to change until they had to.
9mkehrt11yYou (well, EY) make a good point, but I think neither the Pluto remark nor the fish one is actually an example of this. In the case of Pluto, the transNeptunians and the other planets seem to belong in a category that the asteroids don't. They're big and round! Moreover, they presumably underwent a formation process that the asteroid belt failed to complete in the same way (or whatever the current theory of formation of the asteroid belt is; I think that it involves failure to form a "planet" due to tidal forces from Jupiter?). Of course there are border cases like Ceres, but I think there is a natural category (whatever that means!) that includes the rocky planets, gas giants and Kuiper Belt objects that does not include (most) asteroids and comets. On the fish example, I claim that the definition of "fish" that includes the modern definition of fish union the cetaceans is a perfectly valid natural category, and that this is therefore an intensional definition. "Fish" are all things that live in the water, have finlike or flipperlike appendages and are vaguely hydrodynamic. The fact that such things do not all share a common descent* is immaterial to the fact that they look the same and act the same at first glance. As human knowledge has increased, we have made a distinction between fish and things that look like fish but aren't, but we reasonably could have kept the original definition of fish and called the scientific concept something else, say "piscoids". *well, actually they do, but you know approximately what I mean.
3NancyLebovitz11yNitpick: if in your definition of fish, you mean that they need to both have fins or flippers and be (at least) vaguely hydrodynamic, I don't think seahorses and puffer fish qualify.
2wnoise11yThe usual term is "monophyletic [http://en.wikipedia.org/wiki/Monophyly]".
1mkehrt11yYes, but neither fish nor (fish union cetaceans) is monophyletic. The descent tree rooted at the last common ancestor of fish also contains tetrapods, and the descent tree rooted at the last common ancestor of tetrapods contains the cetaceans. I am not any sort of biologist, so I am unclear on the terminological technicalities, which is why I handwaved this in my post above.
3Emile11yFish are a paraphyletic [http://en.wikipedia.org/wiki/Paraphyly] group.
1wedrifid11yI'm inclined to agree. Having a name for 'things that naturally swim around in the water, etc' is perfectly reasonable and practical. It is in no way a nitwit game.

Seconding Douglas_Knight's question. I don't understand why you say policy uses must rely on causal factors.

I find myself at a loss to give a brief answer. Can you ask a more specific question?

I maximize the number of paperclips in the universe (that exist an arbitrarily long time from now). I use "number of paperclips counted so far" as a measure of progress, but it is always screened off by more direct measures, or expected quantities, of paperclips in the universe.

Your confusion with Tegmark IV seems to remain though, so I'm glad you signaled that. This topic is analogous to Tegmark IV, in that in both cases the distinction made is essentially epiphenomenal: multiverses talk about which things "exist" or "don't exist", and here Bob is supposed to feel "non-existence". The property of "existence" is meaningless, that's the problem in both cases. When you refer to the relevant concepts (worlds, behavior of Bob's program), you refer to all their properties, and you can't stamp ... (read more)

3cousin_it11yIt seems to me you're mistaken. Multiverse theories do make predictions about what experiences we should anticipate, they're just wrong. You haven't yet given any real answer to the issue of pheasants, or maybe I'm a pathetic failure at parsing your posts. Incidentally, my problem makes for a nice little test case: what experiences do you think Bob "should" anticipate in his future, assuming now we can meddle in the simulation at will? Does this question have a single correct answer? If it doesn't, why do such questions appear to have correct answers in our world, answers which don't require us to hypothesize random meddling gods, and does it tell us anything about how our world is different from Bob's?
1jimrandomh11yOn the contrary, multiverse theories do make predictions about subjective experience. For example, they predict what sort of subjective experience a sentient computer program should have, if any, after being halted. Some predict oddities like quantum immortality. The problem is that all observations that could shed light on the issue also require leaving the universe, making the evidence non-transferrable.

I still think that it's silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.

I'm curious what you think the causal justification is. I'm not a fan of imputing motives to people I disagree with rather than dealing with their arguments but one can't help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suici... (read more)

2Roko11yI promise that I genuinely did not know that when I wrote "I suspect, not the causal reason for the values it purports to justify." and thought "these people were just born with low happiness set points and they're rationalizing"

Another reference request: Eliezer made a post about how it's ultimately incoherent to talk about "A causes B" in the physical world, because at root everything is caused by the physical laws and initial conditions of the universe. But I don't remember what it is called. Does anybody else remember?

5Vladimir_Nesov11yIt is coherent to talk about "A causes B"; on the contrary, it's a mistake to say that everything is caused by physical laws, and therefore you have no free will, for example (as if your actions don't cause anything). Of course, any given event won't normally have only one cause, but considering the causes of an event makes sense. See the posts on free will [http://wiki.lesswrong.com/wiki/Free_will], and then the solution posts linked from there. The picture you were thinking about is probably from these [http://lesswrong.com/lw/r1/timeless_control/] posts [http://lesswrong.com/lw/rc/the_ultimate_source/].
1Kazuo_Thow11yIt couldn't have been "Timeless Causality" [http://lesswrong.com/lw/qr/timeless_causality/] or "Causality and Moral Responsibility" [http://lesswrong.com/lw/ra/causality_and_moral_responsibility/], could it?

I have some half-baked ideas about getting interesting information on lesswronger's political opinions.

My goal is to give everybody an "alien's eye" view of their opinions, something like "You hold position Foo on issue Bar, and justify it by the X books you read on Bar; but among the sample people who read X or more books on Bar, 75% hold position ~Foo, suggesting that you are likely to be overconfident".

Something like collecting:

  • your positions on various issues

  • your confidence in that position

  • how important various characteristics

... (read more)
2Douglas_Knight11yYou may like the Correct Contrarian Cluster [http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/].
1[anonymous]11yIn general I'd be interested in more specific and subtle data on political views than is normally given. In particular, on what issues do people tend to break with their own party or ideology? That's a simpler question than the one you're asking, but easily tested.
1Emile11yOh, and I would probably want to add something on political affiliation - mostly because I expect a lot of "I believe Foo because I researched the issue / am very smart; others believe ~Foo because of their political affiliation"; but also because "I believe Foo and have researched it well, even though it goes against the grain of my general political affiliation" may be good evidence for Foo.

We use markdown syntax. An > at the start of the paragraph will make it a quote,

like so.

I've seen a couple of those, and consider them significant evidence that certain meditation techniques are useful. As naivecortex is claiming that PCEs have effects much more dramatic than meditation, I would expect to see MRI data that is correspondingly stronger.

The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas' view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:

Think of it in terms of Searle's Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.

My argument

... (read more)

Bringing a psychiatrist in to this is good: you have offered evidence that does not rely on reports of subjective experiences. But it is still weak evidence; there are many other hypotheses that explain the evidence, and several of them are much more probable.

An example of what I consider strong evidence: a person who had their brain imaged by an fMRI while performing some set of relatively simple mental tasks both before and after experiencing a PCE had radically different results.

That would not entirely convince me, but it would certainly make me take yo... (read more)

You don't have to assign exactly no value to anything, which makes all structures relevant (to some extent).

Sorry, I'm bad about that terminology. Thanks for the correction.

"There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."

I'm not sure that claim would be entirely absurd.

In the software engineering business, there's a subculture whose underlying ideology can be caricatured as "Programming would be so simple if only we could get those pesky programmers out of the loop." This subculture invests heavily in code generation, model-driven architectures, and so on.

Arguably, too, this goal only seems plausible if yo... (read more)

2SilasBarta11yOkay, but that point only concerns production of software, a relatively new "production output". The statement ("there is no automatist revival in industry ...") would apply just the same to any factory, and ridicules the idea that there can be a mechanical procedure for producing any good. In reality, of course, this seems to be the norm: someone figures out what combination of motions converts the input to the output, refuting the notion that e.g. "There is no mechanical procedure for preparing a bottle of Coca-cola ..." In any case, my dispute with Callahan's remark is not merely about its pessimism regarding mechanizing this or that (which I called View 1 [http://lesswrong.com/lw/2eu/open_thread_july_2010/283q]), but rather, the implication that such mechanization would be fundamentally impossible (View 2), and that this impossibility can be discerned from philosophical considerations. And regarding software, the big difficulty in getting rid of human programmers seems to come from how their role is, ultimately, to find a representation for a function (in a standard language) that converts a specified input into a specified output. Those specifications come from ... other humans, who often conceal properties of the desired I/O behavior, or fail to articulate them.

I'm afraid that isn't really a good fit for how he thinks about these things...

This may be true in some cases, but I don't think it is in this one; my mom has no trouble moralizing on any other topic, even ones about which I care a great deal more than I do about cryonics. For example, she's criticized polyamory as unrealistic and bisexuality as non-existent on multiple occasions, both of which have a rather significant impact on how I live my life.

1whpearson11yI wasn't there at the discussions, but those seem like different types of statements from saying that they are "wrong/selfish" and that by implication you are a bad person for doing them. She is impugning your judgement in all cases rather than your character.
1WrongBot11yAn important distinction, it's true. I feel like it should make a difference in this situation that I declared my intention to not pursue cryopreservation, but I'm not sure that it does. Either way, I can think of other specific occasions when my mom has specifically impugned my character as well as my judgment. ("Lazy" is the word that most immediately springs to mind, but there are others.) It occurs to me that as I continue to add details my mom begins to look like a more and more horrible person; this is generally not the case.

The problem I have found is determining what people accept as evidence about "intelligences".

If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans we shouldn't try to build AI with localised beliefs), then evidence about humans would constitute some evidence about AI. In this case things like blindsight (mentioned in the Intentional Stance) would show that beliefs were not easily localised.

I think it fairly uncontroversial that beliefs aren't stored in one particular place in humans ... (read more)

bzip's window is 900k, yet it compresses 100MB to 29% but 1GB to 25%. Increasing the memory on 7zip's PPM makes a larger difference on 1GB than 100MB, so maybe it's the window that's relevant there, but it doesn't seem very plausible to me. (18.5% -> 17.8% vs 21.3% -> 21.1%)

Sporting lists might compress badly, especially if they contain times, but this one seems to compress well.

0gwern11yThat's very odd. If you ever find out what is going on here, I'd appreciate knowing.

Shannon's estimate of 0.6 to 1.3 bits per character was based on having humans guess the next character out of a 27-character alphabet including spaces but no other punctuation.

The impractical leading algorithm achieves 1.3 bits per byte on the first 10^8 bytes of wikipedia. This page says that stripping wikipedia down to a simple alphabet doesn't affect compression ratios much. I think that means that it hits Shannon's upper estimate. But it's not normal text (eg, redirects), so I'm not sure in which way its entropy differs. The practical (for computer, not human) algorithm b... (read more)

1gwern11ybzip2 [http://changelog.complete.org/archives/tag/bzip2] is known to be both slow and not too great at compression; what does lzma-2 (faster & smaller) get you on Wikipedia? (Also, I would expect redirects to play in a compression algorithm's favor compared to natural language. A redirect almost always takes the stereotypical form #REDIRECT[[foo]] or #redirect[[foo]]. It would have difficulty compressing the target, frequently a proper name, but the other 13 characters? Pure gravy.)
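A quick way to check this kind of question with Python's standard library: compress the same file with bzip2 and LZMA at maximum settings and compare. The file name is a placeholder ("enwik8" is the usual name for the first 10^8 bytes of the Wikipedia dump used in the Hutter challenge), and exact ratios will depend on preset and dictionary size:

```python
import bz2
import lzma

data = open("enwik8", "rb").read()  # placeholder: first 10^8 bytes of Wikipedia
for name, packed in [("bzip2 -9", bz2.compress(data, compresslevel=9)),
                     ("lzma  -9", lzma.compress(data, preset=9))]:
    # Report both the compression ratio and the rate in bits per input byte,
    # for comparison with the ~1.3 bits/byte figure discussed above.
    print(f"{name}: {len(packed) / len(data):.1%} of original "
          f"({len(packed) * 8 / len(data):.2f} bits/byte)")
```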

That's a rather... disproportionate level of faith to have in the US government's ability to regulate anything. I would not rely on American regulatory agencies for risk assessment in any field, much less one in which so little is currently known.

4Kevin11yhttp://www.nytimes.com/2009/03/24/health/24real.html I don't have faith, but I have a broad knowledge of the FDA and their regulation of supplements. Usually when the US government works, it works. If evidence comes out that something is dangerous, the FDA usually pulls it from store shelves until it is fixed. Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava. I knew that there were people claiming fish oil is bad, some of them loudly. I know that this was first disclaimed at least five years ago. I then intuited today that if there ever did exist a safety issue with mercury in fish oil, it would have been fixed by now. The meme that some fish oil pills are poisoned is mostly perpetuated by companies that are trying to sell you extra-expensive fish oil pills.
1wedrifid11y(Voted up but...) I'd like to clarify that claim, because I took the totally wrong message from it the first read through. We're talking about regulation for quality control purposes and not control of the substance itself (I'm assuming). 5-Hydroxytryptophan [http://en.wikipedia.org/wiki/5-Hydroxytryptophan] itself is just an amino acid precursor that is available over the counter in the USA and Canada. It is an intermediate product produced when Tryptophan is being converted into Serotonin. It was Tryptophan which was banned by the FDA due to association [http://en.wikipedia.org/wiki/Tryptophan#Tryptophan_supplements_and_EMS] with EMS [http://en.wikipedia.org/wiki/Eosinophilia%E2%80%93myalgia_syndrome]. They cleared that up eventually once they established that the problem was with the filtering process of a major manufacturer, not the substance itself. I don't think they ever got around to banning 5-HTP, even though the two only differ by one enzymatic reaction. In general it is relatively hard to mess yourself up with amino acid precursors, even though Serotonin is the most dangerous neurotransmitter to play with. In the case of L-Tryptophan and 5-HTP, care should be taken when combining them with SSRIs and MAO-A inhibitors, i.e. take way way less for the same effect or just "DO NOT MESS WITH SEROTONIN!" (in slightly shaky handwriting [http://www.fanfiction.net/s/5782108/17/Harry_Potter_and_the_Methods_of_Rationality]). Let me know if you meant something different from the above. Also, what is the story with Kava? All I know is that it is a mild plant-based supplement that mildly sedates/counters anxiety/reduces pain, etc. Has it had quality issues too?
5Kevin11yThanks for the clarification, yes, by 5-HTP I meant tryptophan. Serotonin has serious drug interactions with SSRIs and MAOIs, but otherwise is decidedly milder than pharmaceutical anti-depressants. Its effects are more comparable to melatonin than to Prozac. Kava is a plant that counters anxiety, and it is rather effective at doing so but very short-lasting. It causes no physical addiction, which is one of the reasons it is on the FDA's Generally Recognized as Safe list. All kava on the market today is sourced from kava root. Kava has a great deal of native/indigenous use, and those people always make their drinks from kava root, throwing away the rest of the plant. The rest of the plant contains active substances, so in their infinite wisdom, a Western company bought up the cheap kava leaf remnants and made extracts. It turns out that kava leaves have ingredients that cause large amounts of liver damage, but the roots are relatively harmless. Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen. It is a bad idea to regularly mix it with alcohol or acetaminophen or other things that are bad for the liver, though.
3wedrifid11yCourtesy of google: acetaminophen is 'paracetamol'. It seems several countries (including the US) use a different name for the chemical.

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different.

They most certainly are. But it's semantics.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Frankly, I'm not informed enough about priors to commit to maxent, Kolmogorov complexity, or anything else.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical

... (read more)

a truthful deity told you that the mean was 3.5

I think I'd answer, "the mean of what?" ;)

I'm not really qualified to comment on the methodological issues since I have yet to work through the formal meaning of "maximum entropy" approaches. What I know at this stage is the general argument for justifying priors, i.e. that they should in some manner reflect your actual state of knowledge (or uncertainty), rather than be tainted by preconceptions.

If you appeal to intuitions involving a particular physical object (a die) and simultaneousl... (read more)

8Cyan11yIf the die is slightly shorter along the 3-4 axis than along the 1-6 and 2-5 axes, then the 3 and 4 faces will have slightly greater surface area than the other faces.
1Morendil11yOur models differ, then: I was assuming a strictly cubic die. So maybe we should also model our uncertainty over the dimensions of the (parallelepipedic) die. But it seems in any case that we are circling back [http://lesswrong.com/lw/2eu/open_thread_july_2010/28tt] to the question of model checking, via the requirement that we should first be clear about what our uncertainty is about.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Allow me to introduce to you the Brandeis dice problem. We have a six-sided die, sides marked 1 to 6, possibly unfair. We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone, what's your probability distribution for the next throw of the die? A naive application of the maxent approach says we should pick the distribution over {1,2,3,4,5,6} with mean 3.... (read more)
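For concreteness, the "naive maxent" step itself is easy to compute: among distributions on {1,...,6} with mean m, the entropy maximizer has the exponential form p_i ∝ exp(λi), and λ can be found by bisection. A minimal sketch (this reproduces only the naive calculation, not the updating-vs-maxent dispute below; the classic Brandeis statement of the problem uses mean 4.5):

```python
import math

def maxent_die(m, lo=-20.0, hi=20.0):
    """Max-entropy distribution on {1..6} with mean m: p_i proportional to exp(lam*i)."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in enumerate(w, start=1)) / sum(w)
    for _ in range(100):  # bisect on the Lagrange multiplier lam (mean is monotone in lam)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mean(mid) < m else (lo, mid)
    w = [math.exp(lo * i) for i in range(1, 7)]
    return [wi / sum(w) for wi in w]

print(maxent_die(3.5))  # uniform: the mean-3.5 constraint is already satisfied by it
print(maxent_die(4.5))  # Jaynes's classic version: weights skewed toward high faces
```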

2cupholder11yIn this example, what information are we Bayesian updating on?
2Cyan11yI'm nearly positive that the linked paper (and in particular, the above-quoted conclusion) is just wrong. Many years ago I checked the calculations carefully and found that the results come from an unavailable computer program, so it's definitely possible that the results were just due to a bug. Meanwhile, my paper copy of PT:LOS contains a section which purports to show that Bayesian updating and maximum entropy give the same answer in the large-sample limit. I checked the math there too, and it seemed sound. I might be able to offer more than my unsupported assertions when I get home from work.
2Cyan11yI've checked carefully in PT:LOS for the section I thought I remembered, but I can't find it. I distinctly remember the form of the theorem (it was a squeeze theorem), but I do not recall where I saw it. I think Jaynes was the author, so it might be in one of the papers listed here [http://bayes.wustl.edu/etj/node1.html]... or it could have been someone else entirely, or I could be misremembering. But I don't think I'm misremembering, because I recall working through the proof and becoming satisfied that Uffink must have made a coding error.
2Morendil11ySo my prior state of knowledge about the die is entirely characterized by N=10^9 and m=3.5, with no knowledge of the shape of the distribution? It's not obvious to me how you're supposed to turn that, plus your background knowledge about what sort of object a die is, into a prior distribution; even one that maximizes entropy. The linked article mentions a "constraint rule" which seems to be an additional thing. This sort of thing is rather thoroughly covered by Jaynes in PT:LOS as I recall, and could make a good exercise for the Book Club when we come to the relevant chapters. In particular, section 10.3, "How to cheat at coin and die tossing", contains directly relevant caveats.
1Douglas_Knight11yIn the large N limit, and only the information that the mean is exactly 3.5, the obvious conclusion is that one is in a thought experiment, because that's an absurd thing to choose to measure and an adversary has chosen the result to make us regret the choice. More generally, one should revisit the hypothesis that the rolls of the die are independent. Yes, rolling only 1 and 6 is more likely to get a mean of 3.5 than rolling all six numbers, but still quite unlikely. Model checking!

I have an IQ of 85. My sister has an IQ of 160+. AMA.

http://www.reddit.com/r/IAmA/comments/cma2j/i_have_an_iq_of_85_my_sister_has_an_iq_of_160_ama/

Posted because of previous LW interest in a similar thread.

Definitions are not a simple matter - I would claim that libertarian free will* is at least as silly as the simulation hypothesis.

But I don't filter my conversation to ban silliness.

* I change my phrasing to emphasize that I can respect hard incompatibilism - the position that "free will" doesn't exist.

I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective.

But I don't like arguments from subjective anticipation: subjective anticipation is a projective error that humans make, as many-worlds QM has already proved.

Indeed MW QM combined with Robin's Mangled Worlds is a good microcosm for how the multiverse at other levels ought to turn out. Subjecti... (read more)

2Vladimir_Nesov11yI think that probability is a tool for preference, but I also think that there is a fact of the matter about the effects of actions, and that reality of that effect is objective. This effect is at the level of the sample space (based on all mathematical structures maybe) though, of "brittle math", while the ways you measure the "probability" of a given (objective) event depend on what preference (subjective goals) you are trying to optimize for.

Agreed about utilitarianism.

FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you're playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an a... (read more)

9Vladimir_M11yRichardKennaway: Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role -- even if stated in the crudest possible character-sheet sort of way -- can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions. Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn't (yet?) exist, it's often very difficult to do any better [http://lesswrong.com/lw/2ee/unknown_knowns_why_did_you_choose_to_be_monogamous/27cj].
4NancyLebovitz11yCould you expand on how those discussions of status here and on OB are different from what you'd see as a more realistic discussion of status?

I don't think that creation of new sentients, in and of itself, has an impact on the (my) SUF. It only has an impact to the extent that their creators value them and others disvalue such new beings.

We've been thinking about moral status of identical copies. Some people value them, some people don't, Nesov says we should ask a FAI because our moral intuitions are inadequate for such problems. Here's a new intuition pump:

Wolfram Research has discovered a cellular automaton that, when run for enough cycles, produces a singleton creature named Bob. From what we can see, Bob is conscious, sentient and pretty damn happy in his swamp. But we can't tweak Bob to create other creatures like him, because the automaton's rules are too fragile and poorly understo... (read more)

3mkehrt11yI completely lack the moral intuition that one should create new conscious beings if one knows that they will be happy. Instead, my ethics apply only to existing people. I am actually completely baffled that so many people seem to have this intuition. Thus, there is no reason to copy Bob. (Moreover, I avoid the repugnant conclusion.)
1SilasBarta11ySame answer I give for all other cases of software life: our ability to run Bob is more resilient against information theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him. (First LW post from my first smartphone btw.)
3Vladimir_Nesov11yBah, he can't feel that we don't run him. Whether we should run him is a question of optimizing the moral value of our world, not of determining his subjective perception. What Bob feels is a property completely determined by the initial conditions of the simulation, and doesn't (generally) depend on whether he gets implemented in any given world.
3cousin_it11yOkay next question. Our understanding of the cellular automaton has advanced to the point where we can change one spot of Bob's world, at one specific moment in time, without being too afraid of harming Bob. It will have ripple effects and change the swamp around him slightly, though. So now we have 10^30 possible slightly-different potential futures for Bob. He will probably be happy in the overwhelming majority of them. How many should we run to fulfill our moral utility function of making sentients happy?
1SilasBarta11yOkay, point taken. The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones. But in any case, you would be obligated to preserve re-instantiation capability of any already-created being.

utilitymonster:

The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish.

Maybe I'm misunderstanding your phrasing here, but it sounds fallacious. If there's a deck of cards and you're in a group of 52 people who are called out in random order and told to pick one card each from the deck, the probability of being the first person to draw an ace is exactly the same (1/52) regardless of whether it's a normal deck or a deck of 52 aces (or even a deck with 3 out of 4... (read more)

1utilitymonster11yYou're right, thanks. I was considering an example with 10^100 scientists. I thought that since there would be a lot more scientists who got 25 big in the 1/4 scenario than in the 3/4 scenario (about 9.18 × 10^98 vs. 1.279 × 10^75), you'd be more likely to be first in the 3/4 scenario. But this forgets about the probability of getting an improbable result. In general, if there are N scientists, and the probability of getting some result is p, then we can expect Np scientists to get that result on average. If the order is shuffled as you suggest, then the probability of being the first to get that result is p * 1/(Np) = 1/N. So the probability of being the first to get the result is the same, regardless of the likelihood of the result (assuming someone will get the result). EDIT: It occurs to me that I might have been thinking about the probability of being selected by Al conditional on getting 25/100. In that case, you're a lot more likely to be selected if the pond is 3/4 big than if it is 1/4 big, since WAY more people got similar results in the latter case. JGMWeissman was probably thinking the same.
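A quick check of the specific numbers, assuming only that each of the 100 observed fish is independently big with probability p:

```python
from math import comb, log10

def p_25_of_100(p):
    """Exact binomial probability of seeing 25 big fish in 100 independent draws."""
    return comb(100, 25) * p ** 25 * (1 - p) ** 75

quarter, three_quarters = p_25_of_100(0.25), p_25_of_100(0.75)
print(f"P(25/100 | 1/4 big) = {quarter:.4g}")         # ~0.0918
print(f"P(25/100 | 3/4 big) = {three_quarters:.4g}")  # ~1.28e-25
print(f"ratio: about 10^{log10(quarter / three_quarters):.0f}")
N = 10 ** 100
# Expected number of scientists with that result in each scenario, matching
# the ~9.18e98 vs. ~1.28e75 figures quoted above.
print(f"expected counts: {N * quarter:.3g} vs {N * three_quarters:.3g}")
```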

I don't :)

If there is a disagreement on, say, the status of Taiwan, even someone who doesn't know much about it might agree that some good predictors would be "knowledge of the history of Taiwan", "having lived in Taiwan", "familiarity with Chinese culture", etc.

And it can be interesting to see whether:

  • People of different opinions consider different predictors as important (conveniently, those that favor their position)

  • Everyone agrees on which predictors are important, but those who score hi

... (read more)

Wow... a cult formed to actively seek neurological dysfunction.

I'd support deleting heavily downvoted comments that do not have upvoted descendants.

It is not possible … to construct a system of thought that improves on common sense

I read Moldbug's quote as saying: there is currently no system ...

Medical grade honey! I can't wait until I can get this stuff in bulk.

How honey kills bacteria

1gwern11yI'm just wondering - what makes medical-grade honey medical-grade (as opposed to food-grade)?
6Emile11yThe price?
3Douglas_Knight11yMedical-grade honey is purer, sterilized, and made from tea tree nectar. It is a better [http://www.woundsresearch.com/content/a-comparison-between-medical-grade-honey-and-table-honeys-relation-antimicrobial-efficacy] antibiotic, both because of the sterilization and because it has more of the active ingredient than ordinary tea tree honey, probably because they put more effort into preventing the bees from eating anything else.
1[anonymous]11yIt is produced by bees.

If you wish to distinguish yourself from people who are promoting cults, you need to not sound like someone promoting a cult.

As this is a straw man (there is no cult here to promote), I'll pass.

That Straw Man must feel seriously misunderstood and abused sometimes!

Also:

3) Check for grammar, spelling, capitalization, and punctuation.

I've been listening to a podcast (Skeptically Speaking) talking with a fellow named Sherman K. Stein, author of Survival Guide for Outsiders. I haven't read the book, but it seems that the author has a lot of good points about how much weight to give to expert opinions.

EDIT: Having finished listening, I revise my opinion down. It's still probably worth reading, but wait for it to get to the library.

I am afraid I cannot agree with you.

Have you succeeded in your stated intention of "deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory"?

If you wish to continue discussing this problem with me, I humbly suggest that the best way forward is for you to show me your proof of that. And we might take the discussion to email if you like.

It is great that you are studying Pearl.
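For reference, the standard derivation runs as follows (a sketch, assuming discrete variables and positive-probability conditioning events; Decomposition here is the graphoid property that (X || {Y,W} | Z) implies (X || Y | Z)):

```latex
\begin{align*}
P(x \mid y, z) &= \sum_w P(x, w \mid y, z)                   && \text{(marginalize over } w\text{)}\\
               &= \sum_w P(x \mid w, y, z)\, P(w \mid y, z)  && \text{(product rule)}\\
               &= \sum_w P(x \mid z)\, P(w \mid y, z)        && \text{(premise: } X \perp \{Y,W\} \mid Z\text{)}\\
               &= P(x \mid z) \sum_w P(w \mid y, z) = P(x \mid z).
\end{align*}
```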