This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Part 2


I propose that LessWrong should produce a quarterly magazine of its best content.

LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.

Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"

The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to publish and sell PDFs into printed magazines.

Unfortunately, I don't have the skills or time to d... (read more)

5mattnewport
Does anyone else find the idea of creating a printed magazine rather anachronistic?
3Blueberry
The rumors of print media's death have been greatly exaggerated.
[-]Larks130

This comment would seem much more authoritative if seen in print.

2LucasSloan
I don't think there's enough content on LW to be worthwhile publishing a magazine. However, Eliezer's book on rationality should offer many of the same benefits.
8michaelkeenan
Not all of the content needs to be from the most recent quarter. There could be classic articles too. But I think we might have enough content each quarter anyway. Let's see... There were about 120 posts to Less Wrong from April 1 to June 30. The top ten highest-voted were Diseased thinking: dissolving questions about disease by Yvain, Eight Short Studies On Excuses by Yvain, Ugh Fields by Roko, Bayes Theorem Illustrated by komponisto, Seven Shiny Stories by Alicorn, Ureshiku Naritai by Alicorn, The Psychological Diversity of Mankind by Kaj Sotala, Abnormal Cryonics by Will Newsome, Defeating Ugh Fields In Practice by Psychohistorian, and Applying Behavioral Psychology on Myself by John Maxwell IV. Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long). So maybe swap a couple of them out for other ones. Then maybe add a few classic LessWrong articles (for example, Disguised Queries would make a good companion piece to Diseased Thinking), add a few pages of advertising and maybe some rationality quotes, and you'd have at least 30 pages. I know I'd buy it.
1komponisto
It's not actually all that long; it's just that the diagrams take up a lot of space.
2michaelkeenan
Well, I'd like to keep the diagrams if the article is to be used. I do like Bayes Theorem Illustrated and I think an explanation of Bayes Theorem is perfect content for the magazine. If I were designing the magazine I'd want to try to include it, maybe edited down in length.
3NancyLebovitz
Monthly seems too often. Quarterly might work.
3gwern
A yearly anthology would be pretty good, though. HN is reusing others' content and can afford a faster tempo; but that simply means we need to be slower. Monthly is too fast, I suspect that quarterly may be a little too fast unless we lower our standards to include probably wrong but still interesting essays. (I think of "Is cryonics necessary?: Writing yourself into the future" as an example of something I'm sure is wrong, but was still interesting to read.)
2Kevin
How about thirdly!?
0magfrump
This post both made me laugh AND think it was a good idea; I'd love to see a magazine that came out more than once a year. There's a bit of discussion above of the most recent quarter's content; if people don't think that it is long enough (or that the pace will continue, or that people will consent to their articles being put in journals), a slight delay should help, but a fourfold delay seems excessive.
1Kevin
There's certainly enough content to do at least one really good issue.

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. My father and I pointed out that this would literally require the existence of magic; she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn'... (read more)

[-]Roko170

Wanting cryo signals disloyalty to your present allies.

Women, it seems, are especially sensitive to this (mothers, wives). Here's my explanation for why:

  1. Women are better than men at analyzing the social-signalling theory of actions. In fact, they (mostly) obsess about that kind of thing, e.g. watching soap operas, gossiping, people watching, etc. (disclaimer: on average)

  2. They are less rational than men (only slightly, on average), and this is compounded by the fact that they are less knowledgeable about technical things (disclaimer: on average), especially physics, computer science, etc.

  3. Women are more bound by social convention and less able to be lone dissenters. Asch's conformity experiment found women to be more conforming.

  4. Because of (2) and (3), women find it harder than men to take cryo seriously. Therefore, they are much more likely to think that it is not a feasible thing for them to do

  5. Because they are so into analyzing social signalling, they focus in on what cryo signals about a person. Overwhelmingly: selfishness, and as they don't think they're going with you, betrayal.

6Alicorn
If you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!" However, it was my dad, not my mom, who called me selfish when I brought up cryo.
0Roko
I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are. For parents, you can't do this, but they're your parents, they'll love you through thick and thin.
2RHollerith
Ah, but did you notice that that did not work for Robin? (The NYT article says that Robin discussed it with Peggy when they were getting to know each other.)
5Nisan
It "worked" for Robin to the extent that Robin got to decide whether to marry Peggy after they discussed cryonics. Presumably they decided that they preferred each other to hypothetical spouses with the same stance on cryonics.
0RHollerith
Thanks. (Upvoted.)
5Wei Dai
Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?
6Roko
Aha, but if I signed up, I'd have to non-conform, darling. Think of what all the other girls at the office would say about me! It would be worse than death!
6lmnop
In the case of refusing cryonics, I doubt that fear of social judgment is the largest factor or even close. It's relatively easy to avoid judgment without incurring terrible costs--many people signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.
2Will_Newsome
For what it's worth, Steven Kaas emphasized social weirdness as a decent argument against signing up. I'm not sure what his reasoning was, but given that he's Steven Kaas I'm going to update on expected evidence (that there is a significant social cost to signing up that I cannot at the moment see).
6Wei Dai
I don't get why social weirdness is an issue. Can't you just not tell anyone that you've signed up?
2gwern
The NYT article points out that you sometimes want other people to know - your wife's cooperation at the hospital deathbed will make it much easier for the Alcor people to whisk you away.
2Vladimir_Nesov
It's not an argument against signing up, unless the expected utility of the decision is borderline positive and it's specifically the increased probability of failure because of lack of additional assistance of your family that tilts the balance to the negative.
0gwern
Given that there are examples of children or spouses actively (and successfully) preventing cryopreservation, that means there's an additional few % chance of complete failure. Given the low chance to begin with (I think another commenter says no one expects cryonics to succeed with more than 1/4 probability?), that damages the expected utility badly.
5pengvado
An additional failure mode with a few % chance of happening damages the expected utility by a few %. Unless you have some reason to think that this cause of failure is anticorrelated with other causes of failure?
0[anonymous]
If I initially estimate that cryonics in aggregate has a 10% chance of succeeding, and I then estimate that my spouse/children have a 5% chance of preventing my cryopreservation, does my expected utility decline by only 5%?
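(A minimal sketch of the arithmetic, using the hypothetical 10% and 5% figures from the comments above and assuming the family's interference is independent of the other failure modes.)

```python
# Hypothetical figures from the comments above; the independence assumption is mine.
p_success_otherwise = 0.10   # assumed chance cryonics works absent interference
p_family_blocks = 0.05       # assumed chance spouse/children prevent preservation

p_success = p_success_otherwise * (1 - p_family_blocks)
print(p_success)                            # 0.095
print(1 - p_success / p_success_otherwise)  # 0.05 -> a ~5% relative decline
```

On those assumptions the answer is yes in relative terms: the overall success probability drops from 10% to 9.5%, i.e. the expected utility falls by about 5% of its original value, not to 5%.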
-1RogerPepitone
Are you still involved in Remember 11?
2wedrifid
If my spouse played that card too hard I'd sign up to cryonics then I'd dump them. ("Too hard" would probably mean more than one issue and persisting against clearly expressed boundaries.) Apart from the manipulative aspect it is just, well, stupid. At least manipulate me with "you will be abandoning me!" you silly man/woman/intelligent agent of choice.
2JoshuaZ
Voted up as an interesting suggestion. That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with. Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.
0Wei Dai
Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do. Right, so sign up before entering the relationship, then play that card. :)
9lsparrish
I would say that if you aren't yet married, be prepared to dump them if they won't sign up with you. Because if they won't, that is a strong signal to you that they are not a good spouse. These kinds of signals are important to pay attention to in the courtship process. After marriage, you are hooked regardless of what decision they make on their own suspension arrangements, because it's their own life. You've entered the contract, and the fact they want to do something stupid does not change that. But you should consider dumping them if they refuse to help with the process (at least in simple matters like calling Alcor), as that actually crosses the line into betrayal (however passive) and could get you killed.
6JoshuaZ
We may have different definitions of "functional relationship." I'd put very high on the list of elements of a functional relationship that people don't go out of their way to consciously manipulate each other over substantial life decisions.
2Wei Dai
Um, it's a matter of life or death, so of course I'm going to "go out of my way". As for "consciously manipulate", it seems to me that people in all relationships consciously manipulate each other all the time, in the sense of using words to form arguments in order to convince the other person to do what they want. So again, why is this particular form of manipulation not considered acceptable? Is it because you consider it a lie, that is, you don't think you would really feel betrayed or abandoned if your significant other decided not to sign up with you? (In that case would it be ok if you did think you would feel betrayed/abandoned?) Or is it something else?
3wedrifid
It is a good question. The distinctive feature of this class of influence is the overt use of guilt and shame, combined with the projection of the speaker's alleged emotional state onto the actual physical actions of the recipient. It is symptomatic of a relationship dynamic that many people consider immature and unhealthy.
0Wei Dai
I'm tempted to keep asking why (ideally in terms of game theory and/or evolutionary psychology) but I'm afraid of coming across as obnoxious at this point. So let me just ask: do you think there is a better way of making the point that, from the perspective of the cryonicist, he's not abandoning his SO, but rather it's the other way around? Or do you think that it's not worth bringing up at all?
2NancyLebovitz
I don't see why you'd be showing disloyalty to those of your allies who are also choosing cryo. Here are some more possible reasons for being opposed to cryo. Loss aversion: "It would be really stupid to put in that hope and money and get nothing for it." Fear that it might be too hard to adapt to the future society. (James Halperin's The First Immortal has it that no one gets thawed unless someone is willing to help them adapt. Would that make cryo seem more or less attractive?) And, not being an expert on women, I have no idea why there's a substantial difference in the proportions of men and women who are opposed to cryo.
6Roko
Difference between showing and signalling disloyalty. To see that it is a signal of disloyalty/lower commitment, consider what signal would be sent out by Rob saying to Ruby: "Yes, I think cryo would work, but I think life would be meaningless without you by my side, so I won't bother"
0Wei Dai
It seems to also be a signal of disloyalty/lower commitment to say, "No honey, I won't throw myself on your funeral pyre after you die." Why don't we similarly demand "Yes, I could keep on living, but I think life would be meaningless without you by my side, so I won't bother" in that case?
3Roko
You have to differentiate between what an individual thinks/does/decides, and what society as a whole thinks/does/decides. For example, in a society that generally accepted that it was the "done thing" for a person to die on the funeral pyre of their partner, saying that you wanted to make a deal to buck the trend would certainly be seen as selfish. Most individuals see the world in terms of options that are socially allowable, and signals are considered relative to what is socially allowable.
8SilasBarta
I -- quite predictably -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share. For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding that "you'll have to talk about this with your future wife, who may find it loopy". (Joke's on her -- at this rate, no woman will take that job!)
0cousin_it
Sometime ago I offered this explanation for not signing up for cryo: I know signing up would be rational, but can't overcome my brain's desire to make me "look normal". I wonder whether that explanation sounds true to others here, and how many other people feel the same way.
0SilasBarta
I'm in a typical decision-paralysis state. I want to sign up, I have the money, but I'm also interested in infinite banking, which requires you to get a whole-life plan [1], which would have to be coordinated, which makes it complicated and throws off an ugh field. What I should probably do is just get the term insurance, sign up for cryo, and then buy amendments to the life insurance contract if I want to get into the infinite banking thing. [1] Save your breath about the "buy term and invest the difference" spiel, I've heard it all before. The investment environment is a joke.
0mattnewport
You mentioned this before and I had a quick look at the website and got the impression that it is fairly heavily dependent on US tax laws around whole life insurance and so is not very applicable to other countries. Have you investigated it enough to say whether my impression is accurate or if this is something that makes sense in other countries with differing tax regimes as well?
0SilasBarta
I haven't read about the laws in other countries, but I suspect they at least share the aspect that it's harder to seize assets stored in such a plan, giving you more time to lodge an objection if they get a lien on it.
0mattnewport
For a variety of reasons I don't think cryonics is a good investment for me personally. The social cost of looking weird is certainly a negative factor, though not the only one.
4NancyLebovitz
I don't have anything against cryo, so these are tentative suggestions. Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field. Alternatively, some people are trudging through life, and they don't want it to go on indefinitely. Or there are people they want to get away from. However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.
3Blueberry
Is there evidence for this? Specifically the "intense" part? ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?
0WrongBot
The evidence is largely anecdotal, I think. There are certainly stories of cryonics ending marriages out there. I haven't yet asked her about it, but I plan to do so next time we talk.
0whpearson
If I was going to make a guess, I suspect that saying X is selfish can easily lead to the rejoinder, "It is my money; I have the right to choose what to do with it," especially within the modern world. Saying X is selfish so it shouldn't be done can also be seen as interfering with another person's business, which is frowned upon in lots of social circles. It is also called moralising. So she may be unconsciously avoiding that response.
1WrongBot
This may be true in some cases, but I don't think it is in this one; my mom has no trouble moralizing on any other topic, even ones about which I care a great deal more than I do about cryonics. For example, she's criticized polyamory as unrealistic and bisexuality as non-existent on multiple occasions, both of which have a rather significant impact on how I live my life.
1whpearson
I wasn't there at the discussions, but those seem different types of statements than saying that they are "wrong/selfish" and that by implication you are a bad person for doing them. She is impugning your judgement in all cases rather than your character.
1WrongBot
An important distinction, it's true. I feel like it should make a difference in this situation that I declared my intention to not pursue cryopreservation, but I'm not sure that it does. Either way, I can think of other specific occasions when my mom has specifically impugned my character as well as my judgment. ("Lazy" is the word that most immediately springs to mind, but there are others.) It occurs to me that as I continue to add details my mom begins to look like a more and more horrible person; this is generally not the case.
7Vladimir_Nesov
A factual error: I'm fairly sure that head-only preservation doesn't involve any brain-removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards the phrases that accomplish this goal won over the constraint of not generating fiction.
3Wei Dai
I wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?
3Clippy
It should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep. Then it should use Peggy's body and Robin's body for fuel.
1red75
It seems plausible that the "know more" part of EV should include the result of modelling the application of CEV to humanity, i.e. CEV is not just the result of aggregating individuals' EVs, but one of the fixed points of humanity's CEV after reflection on the results of applying CEV. Maybe Peggy's model will see that her preferences would result in unnecessary deaths, and that death is not a necessary part of society's continued existence or of her children's prospering.
3Wei Dai
It seems to me if it were just some factual knowledge that Peggy is missing, Robin would have been able to fill her in and thereby change her mind. Of course Robin isn't a superintelligent being, so perhaps there is an argument that would change Peggy's mind that Robin hasn't thought of yet, but how certain should we be of that?

Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)

3Wei Dai
You make a good point, but why is communicating complex factual knowledge in an emotionally charged situation hard? It must be that we're genetically programmed to block out other people's arguments when we're in an emotionally charged state. In other words, one explanation for why Robin has failed to change Peggy's mind is that Peggy doesn't want to know whatever facts or insights might change her mind on this matter. Would it be right for the FAI to ignore that "preference" and give Peggy's model the relevant facts or insights anyway? ETA: This does suggest a piece of practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.
[-]Kevin150

You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death still remains a basic fact of life for those that don't accept the information theoretic definition of death.

To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension, but she would have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive and very unproven crazy sounding technology.

Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.

1Roko
You mean you want to make an average IQ woman into a high-grade rationalist? Good luck! Better plan: go with Rob Ettinger's advice. If your wife/gf doesn't want to play ball, dump her. (This is a more alpha-male attitude to the problem, too. A woman will instinctively sense that you are approaching her objection from an alpha-male stance of power, which will probably have more effect on her than any argument) In fact I'm willing to bet at steep odds that Mystery could get a female partner to sign up for cryo with him, whereas a top rationalist like Hanson is floundering.
6Alicorn
Is this generalizable? Should I, too, threaten my loved ones with abandonment whenever they don't do what I think would be best?
1Alexandros
I don't think this is about doing what you think best, it's about allowing you to do what you think best. And yes, you should definitely threaten abandonment in these cases, or at least you're definitely entitled to threaten and/or practice abandonment in such cases.
1Roko
I'm not sure. It might work, but you're going outside of my areas of expertise.
1Larks
Better yet, sign up while you're single, and present it as a fait accompli. It won't get her signed up, but I'd be willing to bet she won't try to make you drop your subscription.
0lmnop
Well the practical advice is being offered to LW, and I'd guess that most of the people here are not average IQ, and neither are their friends and family. I personally think it's a great idea to try and give someone the relevant factual background to understand why cryonics is desirable before bringing up the option. It probably wouldn't work, simply because almost all attempts to sell cryonics to anyone don't work, but it should at least decrease the probability of them reacting with a knee-jerk dismissal of the whole subject as absurd.
2Roko
I maintain that if you are male with a relatively neurotypical female partner, the probability of success of making her sign on the dotted line for cryo, or accepting wholeheartedly your own cryo, is not maximized by using rational argument; rather it is maximized by having an understanding of the emotional world that the fairer sex inhabit, and how to control her emotions so that she does what you think best. She won't listen to your words, she'll sense the emotions and level of dominance in you, and then decide based on that, and then rationalize that decision. This is a purely positive statement, i.e. it is empirically testable, and I hereby denounce any connotation that one might interpret it to have. Let me explicitly disclaim that I don't think that women's emotional nature makes them inferior, just different, and in need of different treatment. Let me also disclaim that this applies only on average, and that there will be exceptions, i.e. highly systematizing women who will, in fact, be persuaded by rational argument.
2lmnop
I mostly agree with you. I would even expand your point to say that if you want to convince anyone (who isn't a perfect Bayesian) to do anything, the probability of success will almost always be higher if you use primarily emotional manipulation rather than rational argument. But cryonics inspires such strong negative emotional reactions in people that I think it would be nearly impossible to combat those with emotional manipulation of the type you describe alone. I haven't heard of anyone choosing cryonics for themselves without having to make a rational effort to override their gut response against it, and that requires understanding the facts. Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.
0Roko
Right, but the data says that it is a serious problem. Cryonics wife problem, etc.
9lsparrish
I wonder how these women feel about being labeled "The Hostile Wife Phenomenon"?
2Roko
Full of righteous indignation, I should imagine. After all, they see it as their own husbands betraying them.
2steven0461
Yes -- calling it "factual knowledge" suggests it's only about the sort of fact you could look up in the CIA World Factbook, as opposed to what we would normally call "insight".
2red75
I meant something like embedding her in a culture where death is unnecessary, rather than directly arguing for it. Words aren't the best communication channel for changing moral values. Will that be enough? I hope so, provided the death of the carriers of moral values isn't a necessary condition for moral progress. Edit: BTW, if CEV is computed using humans' reflection on its application, then the FAI cannot passively combine all volitions; it must search for and somehow choose a fixed point. Which rule should govern that process?
3wedrifid
That was very nearly terrifying.
2Vladimir_Nesov
Good article overall. It gives a human feel to the decision of cryonics, in particular by focusing on an unfair assault it attracts (thus appealing to cryonicists' sense of status).
1mattnewport
The hostile wife phenomenon doesn't seem to have been mentioned much here. Is it less common than the article suggests or has it been glossed over because it doesn't support the pro-cryonics position? Or has it been mentioned and I wasn't paying attention?
2ata
At last count (a while ago admittedly), most LWers were not married, and almost none were actually signed up for cryonics. So perhaps this phenomenon just isn't a salient issue to most people here.
4Morendil
I'm married and with kids, my wife supports my (so far theoretical only) interest in cryo. Though she says she doesn't want it for herself.
2Paul Crowley
Data point FWIW: my partners are far from convinced of the wisdom of cryonics, but they respect my choices. Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".
0gwern
Well, I hope you showed him your expected utility calculations!
1Paul Crowley
I'm afraid that isn't really a good fit for how he thinks about these things...
0Sniffnoy
It seems a bit odd to me that he would use the lottery comparison, in that case. Or no?
3Kingreaper
They're both things with low probabilities of success, and extremely large pay-offs. To someone with a certain view of the future, or a moderately low "maximum pay-off" threshold, the pay-off of cryonics could be the same as the pay-off for a lottery win. At which point the lottery is a cheaper, but riskier, gamble. Again, if someone has a certain view of the future, or a "minimum probability" threshold (which both are under) then this difference in risk could be unnoticed in their thoughts. At which point the two become identical, but one is more expensive. It's quick-and-dirty thinking, but it's one easy way to end up with the connection, and doesn't involve any utility calculations (in fact, utility calculations would be an anathema to this sort of thinking)
4Paul Crowley
One big barrier I hit in talking to some of those close to me about this is that I can't seem to explain the distinction between wanting the feeling of hope that I might live a very long time, and actually wanting to live a long time. Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".
4Nisan
I could see people saying that if they don't believe that cryonics has any chance at all of working. It might be hard to tell. If I told people "there's a good chance that cryonics will enable me to live for hundreds of years", I'm sure many would respond by nodding, the same way they'd nod if I told them that "there's a good chance that I'll go to Valhalla after I die". Sometimes respect looks like credulity, you know? Do you think that's what's happening here?
8Paul Crowley
Yes. I'm happy that people respect my choices, but when they "respect my beliefs" it strikes me as incredibly disrespectful.
3Richard_Kennaway
And if you reply "I only want to believe in things that are true?"
3Paul Crowley
Apply the same transformation to my words that is causing me problems to that reply, and you get "I only want to believe in things that I believe are true".
0Sniffnoy
That's a bit scary.
0HughRistik
It was mentioned, and you weren't paying attention ;)
0mattnewport
I did think this was quite a likely explanation. As I'm not married the point would likely not have been terribly salient when reading about pros and cons.

Drowning Does Not Look Like Drowning

Fascinating insight against generalizing from fictional evidence in a very real life-or-death situation.

Cryonics scales very well. People who think cryonics is costly, even if you had to come up with the entire lump sum close to the end of your life, are generally ignorant of this fact.

So long as you keep the shape constant, for any given container the surface area follows a square law whereas the volume follows a cube law. For example, with a cube-shaped object, one side squared times 6 is the surface area, whereas one side cubed is the volume. Surface area is where the heat gets in, so if you have a huge container holding cryogenic goods (humans in this case) it costs much less per unit volume (human) than is the case with a smaller container of equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.

But you aren't stuck using equal insulation. You can use thicker insulation, with a much smaller proportional effect on total surface area as you use bigger sizes. Imagine the difference between a marble sized freezer and a house-sized freezer, when you add a foot of insulation. The outside of the insulation is where it begins collecting heat. But with a gigantic freezer, you might add a meter of insulation without it ha... (read more)
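(A rough numerical sketch of the square-cube point above, for cube-shaped containers; all sizes are hypothetical.)

```python
def surface_to_volume(side_m):
    surface = 6 * side_m ** 2   # area through which heat leaks in, m^2
    volume = side_m ** 3        # cold storage volume, m^3
    return surface / volume     # equals 6 / side_m

for side in (1, 3, 10, 30):
    print(f"{side} m cube: {surface_to_volume(side):.2f} m^2 of wall per m^3 stored")

# A 30 m cube has 1/30th the heat-collecting surface per unit of stored volume
# that a 1 m cube has, so the cooling cost per occupant falls as the vault grows.
```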

7Morendil
This needs to be a top-level post. Even with minimal editing. Please. (ETA: It's not so much that we need to have another go at the cryonics debate; but the above is an argument that I can't recall seeing discussed here previously, that does substantially change the picture, and that illustrates various kinds of reasoning - about scaling properties, about predefining thresholds of acceptability, and about what we don't know we don't know - that are very relevant to LW's overall mission.)
1lsparrish
Done.
[-]VNKKET130

This is a mostly-shameless plug for the small donation matching scheme I proposed in May:

I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" by donating $60 to the Singularity Institute. If you're interested, see my original comment. I will match your donation.

6Kutta
Done, 60 USD sent.
2VNKKET
Thank you! Matched.
4Scott Alexander
Done!
4WrongBot
I'm sorry I didn't see that earlier; I donated $30 to the SIAI yesterday, and I probably could have waited a little while longer and donated $60 all at once. If this offer will still be open in a month or two, I will take you up on it.
0VNKKET
That sounds good, and feel free to count your first $30 towards a later $60 total if I haven't found a third person by then.
2zero_call
Without any way of authenticating the donations, I find this to be rather silly.
3VNKKET
I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment: Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two year olds crawl.

If you've had any experience with two year olds, you know they can cover ground at an astonishing rate.

The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.

Two year olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.

Crawling that way gives access to a surprisingly strong forward impetus.

The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.

Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.

Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)

Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy na... (read more)

7Vladimir_M
First, let's calculate the concrete probability numbers. If we are to trust this calculator, the probability of finding exactly 75 big fish in a sample of a hundred from a pond where 75% of the fish are big is approximately 0.09, while getting the same number in a sample from a 25% big pond has a probability on the order of 10^-25. The same numbers hold in the reverse situation, of course. Now, Al and Bob have to consider two possible scenarios: 1. The fish are 75% big, Al got the decently probable 75/100 sample, but Bob happened to be the first scientist who happened to get the extremely improbable 25/100 sample, and there were likely 10^(twenty-something) or so scientists sampling before Bob. 2. The fish are 25% big, Al got the extremely improbable 75/100 big sample, while Bob got the decently probable 25/100 sample. This means that Bob is probably among the first few scientists who have sampled the pond. So, let's look at it from a frequentist perspective: if we repeat this game many times, what will be the proportion of occurrences in which each scenario takes place? Here we need an additional critical piece of information: how exactly was Bob's place in the sequence of scientists determined? At this point, an infinite number of scientists will give us lots of headache, so let's assume it's some large finite number N_sci, and Bob's place in the sequence is determined by a random draw with probabilities uniformly distributed over all places in the sequence. And here we get an important intermediate result: assuming that at least one scientist gets to sample 25/100, the probability for Bob to be the first to sample 25/100 is independent of the actual composition of the pond! Think of it by means of a card-drawing analogy. If you're in a group of 52 people whose names are repeatedly called out in random order to draw from a deck of cards, the proportion of drawings in which you get to be the first one to draw the ace of spades will always be 1/52, regardless
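(A quick sketch checking the quoted numbers with scipy's binomial distribution, which is assumed available.)

```python
from scipy.stats import binom

print(binom.pmf(75, 100, 0.75))  # ~0.092: 75 big in a sample of 100 from a 75%-big pond
print(binom.pmf(75, 100, 0.25))  # ~1.3e-25: the same observation from a 25%-big pond

# By symmetry, the same two values apply to observing exactly 25 big fish.
print(binom.pmf(25, 100, 0.25))  # ~0.092
print(binom.pmf(25, 100, 0.75))  # ~1.3e-25
```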
1utilitymonster
I was assuming Charlie would show Bob the first person to see 75/100. Anyway, your analysis solves this as well. Being the first to see a particular result tells you essentially nothing about the composition of the pond (provided N_sci is sufficiently large that someone or other was nearly certain to see the result). Thus, each of Al and Bob should regard their previous observations as irrelevant once they learn that they were the first to get those results. Thus, they should just stick with their priors and be 50/50 about the composition of the pond.
4Blueberry
Interesting problem! I think these two statements are inconsistent. If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason. If Bob doesn't trust Al completely, they don't have the same information. Bob doesn't know for sure that Charlie told Al about the selection. From his point of view, Al could be lying. If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond. If each of them only knows the other was selected and they both trust the other one's statements, same thing. But if each puts more trust in Charlie than in the other, then they don't have the same information.
1prase
It is strange. Shall Bob discount his observation after being told that he is selected? What does it actually mean to be selected? What if Bob finds 25 big fish and then Charlie tells him that there are 3^^^3 other observers and he (Charlie) decided to "select" one of those who observe 25 big fish and talk to him, and that Bob himself is the selected one (no later confrontation with Al). Should this information cancel Bob's observations? If so, why?
1Kingreaper
Yes, it should, if it is known that Charlie hasn't previously "selected" any other people who got precisely 25. The probability of being selected (taken before you have found any fish) p[chosen] is approximately equal regardless of whether there are 25% or 75% big fish. And the probability of you being selected if you didn't find 25 p[chosen|not25] is zero Therefore, the probability of you being selected, given as you have found 25 big fish p[chosen|found25] is approximately equal to p[chosen]/p[found25] The information of the fact you've been chosen directly cancels out the information from the fact you found 25 big fish.
0utilitymonster
Glad to see we're on the same page.
0utilitymonster
I'm not sure about this: Here's why: VARIANT 2: Charlie calls both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will report to Al. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result. Question: right now, what should Bob expect to hear from Al? Intuitively, he should expect that Al had similar results. But if you're right, it would seem that Bob should discount his results once he talks to Charlie and finds out that he is the messenger. If that's right, he should have no idea what to expect Al to say. But that seems wrong. He hasn't even heard anything from Al. If you're still not convinced, consider: VARIANT 3: Charlie calls both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will win a trip to Hawaii. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result. I can see no grounds for treating VARIANT 3 differently from VARIANT 2. And it is clear that in VARIANT 3 Bob should not discount his results.
2RobinZ
One key observation is that Al made his observation after being told that he would meet someone who made a particular observation - specifically, the first person to make that specific observation, Bob. This makes Al and Bob special in different ways: * Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting. * Bob is special because he has been selected to meet Al because of the specific data he observes. More precisely, because he will be the first to obtain that specific result. Therefore his result has been selected, and he is only at the meeting because he happens to be the first one to get that result. In the original case, Bob's result is effectively a lottery ticket - when he finds out from Al the circumstances of the meeting, he can simply follow the Natural Answer himself and conclude that his results were unlikely. In the modified case, assuming perfect symmetry in all relevant aspects, they can conclude that an astronomically unlikely event has occurred and they have no net information about the contents of the pond.
0utilitymonster
Not quite. He was selected to meet someone like Bob, in the sense that whoever the messenger was, he'd have seen 25/100 big. He didn't know he'd meet Bob. But he regards the identity of the messenger as irrelevant. You can bring out the difference by considering a variant of the case in which both Al and Bob hear about Charlie's plan in advance. (In this variant, the first to see 25/100 big will visit Al.) What is the relevance of the fact that they observed highly improbable event?
1Kingreaper
Okay, qualitative analysis without calculations. Let's go for a large, finite case, because otherwise my brain will explode. Question 1: for any large, finite number of scientists, Bob should defer MOSTLY to Alice. First let's look at Alice: with any large finite number of scientists there is a small finite chance that NO scientist will get that result. This chance is larger in the case where 75% of the fish are big. Thus, upon finding that a scientist HAS encountered 25 fish, Alice must adjust her probability slightly towards 25% big fish. Bob has also received several new pieces of information:
* He was the first to find 25 big fish. P[first25|found25] approaches 1/P[found25] as you increase the number of scientists. This information almost entirely cancels out the information he already had.
* All the information Alice had. This information therefore tips the scales.
Bob's final probability will be the same as Alice's. Question two is N/A. I will answer question three in a reply to this to try and avoid a massive wall of text.
1Kingreaper
Question 3: lateral answer: in the symmetrical variant the issue of "how many people are being given other people to meet, and is this entire thing just a weird trick" begins to arise. In fact, the probability of it being a weird trick is going to overshadow almost any other attempt at analysis. The first person to get 25 happens to be a person who is told they will meet someone who got 75, and the person who was told they would meet the first person to get 25 happens to get 75? Massively improbable. However, if it is not a trick, the probability is significantly in favour of it being 75% still. Alice isn't talking to Bob due to the fact she got 75, she's talking to Bob due to the fact he got the first 25. Otherwise Bob would most likely have ended up talking to someone else. The proper response at this point for both Alice and Bob is to simply decide that it is overwhelmingly probable that Charlie is messing with them. I can produce similar variants which don't have this issue, and they come out to 50:50. These include: Everyone is told that the first person to get 25 will meet the first person to get 75.
1Dagon
What is each of their prior probabilities for this setup being true? Bob, knowing that he was selected for his unusual results, can pretty happily disregard them. If you win a lottery, you don't update to believe that most tickets win. Bob now knows of 100 samples (Al's) that relate to the prior, and accepts them. Bob's sampling is of a different prior: coin flipped, then a specific resulting sample will be found. If they are both selected for their results, they both go to 50/50. Neither one has non-selected samples.
1prase
Is there any particular reason why one of the actors is an AI?
2utilitymonster
Al, not AI. ("Al" as in "Alan")
5prase
Sorry. I have some Less Wrong bias. Google statistics on Less Wrong:
* AI (second character a capital I): 2400 hits
* Al (second character a lowercase L): 318 hits (mostly in "et al." and "al Qaida", without a capital A)
By the way, are these two strings distinguishable when written in the font of this site? They seem the same to me.
2RobinZ
You're right - they're pixel-for-pixel identical. That's a bit problematic.
1Douglas_Knight
Maybe that's why cryptographers say "Alice" rather than "Al."
1JGWeissman
From Bob's perspective, he was more likely to be chosen as the one to talk to Al if there are fewer scientists that observed exactly 25 big fish, which would happen if there are more big fish. So Bob should update on the evidence of being chosen.
0utilitymonster
This should be important to the finite case. The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish. But in the infinite case the probability of being first is 0 either way...
4JGWeissman
There is a reason we consider infinities only as limits of sequences of finite quantities. Suppose you tried to sum the log-odds evidence from the infinitely many scientists that the pond has more big fish. Well, some of them have positive evidence (summing to positive infinity), some have negative evidence (summing to negative infinity), and you can, by choosing the order of summation, get any result you want (up to some granularity) between negative and positive infinity. You don't need anthropic tricks to make things weird if you have actual infinities in the problem.
1Vladimir_M
utilitymonster: Maybe I'm misunderstanding your phrasing here, but it sounds fallacious. If there's a deck of cards and you're in a group of 52 people who are called out in random order and told to pick one card each from the deck, the probability of being the first person to draw an ace is exactly the same (1/52) regardless of whether it's a normal deck or a deck of 52 aces (or even a deck with 3 out of 4 aces replaced by other cards). This result doesn't even depend on whether the card is removed or returned into the deck after each person's drawing; the conclusion follows purely from symmetry. The only special case is when there are zero aces, in which the event becomes impossible, with p=0. Similarly, if the order in which the scientists get their samples is shuffled randomly, and we ignore the improbable possibility that nobody sees 25/100, then purely by symmetry, the probability that Bob happens to be the first one to see 25/100 is the same regardless of the actual frequency of the 25/100 results: p = 1/N(scientists).
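(A Monte Carlo sketch of the card-drawing symmetry claim above; the function name, trial count, and group size are illustrative choices, not anything from the thread.)

```python
import random

def p_first_to_draw_ace(n_aces, trials=200_000, n_people=52):
    # Probability that "you" (person 0) are the first to draw an ace when
    # n_people each draw one card, in random order, from a deck with n_aces aces.
    deck = ["ace"] * n_aces + ["other"] * (n_people - n_aces)
    hits = 0
    for _ in range(trials):
        random.shuffle(deck)                 # cards in the order they will be drawn
        order = list(range(n_people))
        random.shuffle(order)                # who draws 1st, 2nd, ...
        first_ace_draw = deck.index("ace")   # which draw turns up the first ace
        hits += order[first_ace_draw] == 0   # was person 0 the one making that draw?
    return hits / trials

print(p_first_to_draw_ace(4))    # ~0.019, i.e. about 1/52
print(p_first_to_draw_ace(52))   # ~0.019 as well, despite every card being an ace
```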
1utilitymonster
You're right, thanks. I was considering an example with 10^100 scientists. I thought that since there would be a lot more scientists who got 25 big in the 1/4 scenario than in the 3/4 scenario (about 9.18 × 10^98 vs. 1.279 × 10^75), you'd be more likely to be first in the 3/4 scenario. But this forgets about the probability of getting an improbable result. In general, if there are N scientists, and the probability of getting some result is p, then we can expect Np scientists to get that result on average. If the order is shuffled as you suggest, then the probability of being the first to get that result is p * 1/(Np) = 1/N. So the probability of being the first to get the result is the same, regardless of the likelihood of the result (assuming someone will get the result). EDIT: It occurs to me that I might have been thinking about the probability of being selected by Al conditional on getting 25/100. In that case, you're a lot more likely to be selected if the pond is 3/4 big than if it is 1/4 big, since WAY more people got similar results in the latter case. JGWeissman was probably thinking the same.
0[anonymous]
What effect will updating on this information have?
0Soki
First of all, I think that if Al does not see a sample, it makes the problem a bit simpler. That is, Al just tells Bob that he (Bob) is the first person that saw 25 big fish. I think that the number N of scientists matters, because the probability that someone will come to see Al depends on it. Let's call B the event that the lake has 75% big fish, S the opposite, and C the event that someone comes, which means that someone saw 25 big fish. Once Al sees Bob, he updates: P(B|C) = P(B) * P(C|B) / (1/2 * P(C|B) + 1/2 * P(C|S)). When N tends toward infinity, both P(C|B) and P(C|S) tend toward 1, and P(B|C) tends to 1/2. But for small values of N, P(C|B) can be very small while P(C|S) will be quite close to 1. Then the fact that someone was chosen lowers the probability that the lake has mostly big fish. If N = infinity, then the probability of being chosen is 0, and I cannot use Bayes' theorem. If Charlie keeps inviting scientists until one sees 25 big fish, then it becomes complicated, because the probability that you are invited is greater if the lake has more big fish. It may be a bit like the Sleeping Beauty or the absent-minded driver problem. Edited for formatting and misspellings
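(A numeric sketch of the update above with illustrative values of N; here C is the event that at least one of the N scientists sees exactly 25 big fish, and scipy is assumed available.)

```python
from math import expm1, log1p
from scipy.stats import binom

p25_given_B = binom.pmf(25, 100, 0.75)   # ~1.3e-25: one scientist sees 25 big, 75%-big lake
p25_given_S = binom.pmf(25, 100, 0.25)   # ~0.092: one scientist sees 25 big, 25%-big lake

def p_someone_sees_25(p_single, n):
    # 1 - (1 - p_single)**n, computed stably for tiny p_single and huge n
    return -expm1(n * log1p(-p_single))

def posterior_B(n_scientists):
    p_C_given_B = p_someone_sees_25(p25_given_B, n_scientists)
    p_C_given_S = p_someone_sees_25(p25_given_S, n_scientists)
    return 0.5 * p_C_given_B / (0.5 * p_C_given_B + 0.5 * p_C_given_S)

for n in (10, 1000, 10**26, 10**30):
    print(n, posterior_B(n))

# Small N: P(B|C) is tiny -- being visited at all is strong evidence for a 25%-big lake.
# Astronomically large N: both conditional probabilities approach 1 and P(B|C) -> 1/2.
```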

Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?

1Kazuo_Thow
Part of the San Francisco skyline, maybe?
1cousin_it
Thanks. This is the first time I ever noticed this. Absolutely no idea what it is or why it's there. Talk about selective blindness!
0matt
It was an early draft of the map vs territory theme that became the site header, which we intended to finish but forgot about.

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. E... (read more)

5cousin_it
If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation. If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse. (These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)
6JoshuaZ
This depends on which level of the Tegmark classification you are talking about. Level III, for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may have your argument as a valid rebuttal, but since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don't have observers, so we actually would need to look at consistent rule systems with observers. Without a lot more information, it is very hard to examine the expected probabilities of weird events in a level IV setting.
7cousin_it
Wha? Any sequence of observations can be embedded in a consistent system that "hardcodes" it.
1JoshuaZ
Yeah, that's a good point. Hardcoding complicated changes is consistent. So any such argument of this form about level IV fails. I withdraw that claim.
0DanielVarga
Tegmark level IV is a very useful tool to guide one's intuitions, but in the end, the only meaningful question about Tegmark IV universes is this: Based on my observations, what is the relative probability that I am in this one rather than that one? And this, of course, is just what scientists do anyway, without citing Tegmark each time. Hardcoded universes are easily dealt with by the scientists' favorite tool, Occam's Razor.
1Vladimir_Nesov
Consistency is about logics, while Tegmark's madness is about mathematical structures. Whenever you can model your own actions (decision-making algorithm) using huge complicated mathematical structures, you can also do so with relatively simple mathematical structures constructed from the syntax of your algorithm (Lowenheim-Skolem type constructions). There is no fact of the matter about whether a given consistent countable first order theory, say, talks about an uncountable model or a countable one.
4Vladimir_Nesov
Not if you interpret your preference about those worlds as assigning most of them low probability, so that only the ordered ones matter.
0Jordan
I don't follow. Many low probability and unordered worlds are highly preferable. Conversely, many high probability worlds are not. I don't see a correlation.
0Vladimir_Nesov
It's a simplification. If preference satisfies expected utility axioms, it can be decomposed into probability and utility, and in this sense probability is a component of preference and shows how much you care about a given possibility. This doesn't mean that utility is high on those possibilities as well, or that the possibilities with high utility will have high probability. See my old post for more on this.
1Roko
I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective. But I don't like arguments from subjective anticipation, subjective anticipation is a projective error that humans make, as many worlds QM has already proved. Indeed MW QM combined with Robin's Mangled Worlds is a good microcosm for how the multiverse at other levels ought to turn out. Subjective anticipation out, but still objective facts about what happens. I note that since the argument from subjective anticipation is invalid, there is still the possibility that we live in an infinite structure with no canonical measure, in which case Vladimir would be right.
2Vladimir_Nesov
I think that probability is a tool for preference, but I also think that there is a fact of the matter about the effects of actions, and that reality of that effect is objective. This effect is at the level of the sample space (based on all mathematical structures maybe) though, of "brittle math", while the ways you measure the "probability" of a given (objective) event depend on what preference (subjective goals) you are trying to optimize for.
0cousin_it
To rephrase, "unless you interpret your preference as denying the multiverse hypothesis" :-)
2Vladimir_Nesov
You don't have to assign exactly no value to anything, which makes all structures relevant (to some extent).
0Mitchell_Porter
What if you can see the doom building up, with every passing day? :-)

I think this one is deeper. It is a valid criticism of quantum MWI, for example. If all worlds exist equally then naively all this structure around us should dissolve immediately, because most physical configurations are just randomness. Thus the quest to derive the Born probabilities... I don't believe MWI as an explanation of QM anyway, so no big deal.

But I am interested in "level IV" thinking - the idea that "all possible worlds exist", according to some precise notion of possibility. And yes, if you think any sequence of events is equally possible and hence (by the hypothesis) equally real, then what we actually see happening looks exceedingly improbable.

One pragmatist response to this is just to say "only orderly worlds are possible", without giving a further reason. If you actually had an "orderly multiverse" theory that gave correct predictions, you would have some justification for doing this, though eventually you'd still want to know why only the orderly worlds are real.

A more metaphysical response would try to provide a reason why all the real worlds are orderly. For example: Anything that exists in any world has a "nature" or an "essence", and causality is always about essences, so it's just not true that any string of events can occur in any world. Any event in any world really is a necessary product of the essences of the earlier events that cause it, and the appearance of randomness only happens under special circumstances (e.g. brains in vats) which are just uncommon in the multiverse. There are no worlds where events actually go haywire because it is logically impossible for causality to switch off, and every world has its own internal form of causality.

Then there's an anthropic variation on the metaphysical response, where you don't say that only orderly worlds are possible, but you give some reason why consciousness can only happen in orderly worlds (e.g. it requires ca
0ShardPhoenix
It's not clear to me that this is correct. Also, even if it is, then coherent memories (like what we're using to judge this whole scenario) only exist in worlds where this either hasn't happened yet or won't ever.
2wedrifid
We use markdown syntax. A > at the start of a paragraph will make it a quote.
0ShardPhoenix
I know, I was just being too lazy to look up the syntax :/.
3apophenia
If you click "Help" when writing a comment, it will appear in a handy box right next to where you are writing.
0Roko
What is this subjective expectation that you speak of?
1Roko
http://www.nickbostrom.com/papers/anthropicshadow.pdf
0NancyLebovitz
From what I've heard, there was a lot of talk about bomb shelters, but very few of them were built.
0[anonymous]
Well, we even had a law which required you to have one if you built a new house (see an article in German). This law is long since extinct, but according to the link above, there were 2.5 million such rooms, for a population of just 8 million people... Please note that in case of a real emergency most of those would probably have been extremely under-equipped. So: built - yes; correctly - no; and nowadays not even thought about.
0NancyLebovitz
What I'd heard was a bit on NPR which claimed there were only a handful of bomb shelters built in the US, and I admit I wasn't thinking about the rest of the world. I was probably born a little late (1953) for the height of bomb-shelter building, but I've never heard second or third-hand about actual bomb shelters in the US, and I think I would have (as parts of basements or somesuch) if they were at all common. My impression is that the real attitude wasn't so much that a big nuclear war was unlikely as that people thought that if it happened, it wouldn't be worth living through.

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we

... (read more)
1RobinZ
It makes me think of Richard Hamming talking about having "an attack".
[-][anonymous]80

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  1. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community ... (read more)

1whpearson
This also reminded me that I wanted to go through the Intentional Stance by Daniel Dennett and find the good bits. Also worth reading is the wiki page. I think he would state that the model you describe comes from folk psychology. A relevant passage:

"We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the "undeniable introspective fact" that you can feel "centrifugal force" cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and others began to ponder the meta-physical status of color, felt warmth, and other "secondary qualities". These discussions, while cautiously agnostic about folk physics have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external "world"."

On Less Wrong people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts. You might have trouble convincing people here, mainly because people are interested in what should be done by an intelligence, rather than what is currently done by humans. It is a lot harder to find evidence for what ought to be done than for what is done.
0[anonymous]
Relevant and new-to-me, thanks. I'd be interested to hear examples of things, related to this discussion, that people here would not be easily convinced of.
1whpearson
The problem I have found is determining what people accept as evidence about "intelligences". If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans we shouldn't try to build AI with localised beliefs) then evidence about humans would constitute evidence about AI somewhat. In this case things like blind sight (mentioned in the Intentional Stance) would show that beliefs were not easily localised. I think it is fairly uncontroversial on Less Wrong that beliefs aren't stored in one particular place in humans. However, because people are aware of the limitations of humans, they think that they can design AI without the flaws, so they do not constrain their designs to be humanlike, and that allows them to slip localised/programmatic beliefs back in. To convince them that localised beliefs were incorrect/unworkable for all intelligences would require a constructive theory of intelligence. Does that help?
-2whpearson
I'm not so interested in decision theory. I criticised it a bit here Edit: To give a bit more background to how I view rationality: An intelligence is a set of interacting programs some of which have control of the agent at any one time. The rationality of the agent depends upon the set of programs in control of the agent. The relationship between the set of programs and rationality of the system is somewhat environmentally specific.

Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?

The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of ... (read more)

8SilasBarta
My kind of test would be like this: 1) Do you always seem to be able to predict the future, even as others doubt your predictions? If they say yes ---> "That's because of confirmation bias, moron. You're not special."
5RobinZ
In their defense, it might be hindsight bias instead. :P
6Cyan
There's an online test for calibration of subjective probabilities.
2Alexandros
That was pretty awesome, thanks. Not precisely what I had in mind, but close enough to be an inspiration. Cheers.
4michaelkeenan
I would love for this to exist! I have some notes on easily-tested aspects of rationality which I will share: The Conjunction Fallacy easily fits into a short multi-choice question. I'm not sure what the error is called, but you can do the test described in Lawful Uncertainty: You could do the positive bias test where you tell someone the triplet "2-4-6" conforms to a rule and have them figure out the rule. You might be able to come up with some questions that test resistance to anchoring. It might be out of scope of rationality and getting closer to an intelligence test, but you could take some "cognitive reflection" questions from here, which were discussed at LessWrong here.
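For the Lawful Uncertainty item, the scoring is easy to sketch: in the experiment that post describes, the outcome is (say) blue 70% of the time, always guessing the majority colour is optimal, and probability matching - which most people spontaneously do - is strictly worse. A rough simulation, where the 70/30 split and the scoring rule are my own assumptions about how the question would be posed:

    import random

    def accuracy(strategy, p_blue=0.7, trials=100_000):
        hits = 0
        for _ in range(trials):
            outcome = "blue" if random.random() < p_blue else "red"
            hits += strategy(p_blue) == outcome
        return hits / trials

    def always_majority(p):          # the optimal strategy
        return "blue"

    def probability_match(p):        # what many subjects spontaneously do
        return "blue" if random.random() < p else "red"

    print("always guess blue:   ", accuracy(always_majority))    # ~0.70
    print("probability matching:", accuracy(probability_match))  # ~0.58 = 0.7^2 + 0.3^2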
4[anonymous]
That Virginia Postrel article was interesting. I was wondering why more reflective people were both more patient and less risk-averse -- she doesn't make this speculation, but it occurs to me that non-reflective people don't trust themselves and don't trust the future. If you aren't good at math and you know it, you won't take a gamble, because you know that good gamblers have to be clever. If you aren't good at predicting the future, you won't feel safe waiting for money to arrive later. Tomorrow the gods might send you an earthquake. Risk aversion and time preference are both sensible adaptations for people who know they're not clever. People who are good at math and science don't retain such protections because they can estimate probabilities, and because their world appears intelligible and predictable.
0pjeby
Um, that should make them more risk-averse, shouldn't it? Or do you mean reflective people don't trust themselves or the future?
0[anonymous]
oops. Reflective people are LESS risk averse. Corrected above.
3pjeby
That's even more confusing. I would expect a reflective person to be more self-doubtful and more risk-averse than a non-reflective person, all else being equal. But perhaps a different definition of "reflective" is involved here.
2gwern
Possibly. A reflective person can use expected-utility to make choices that regular people would simply categorically avoid. (One might say in game-theoretic terms that a rational player can use mixed strategies, but irrational ones cannot and so can do worse. But that's probably pushing it too far.) I recall reading one anecdote on an economics blog. The economist lived in an apartment and the nearest parking for his car was quite a ways away. There were tickets for parking on the street. He figured out the likelihood of being ticketed & the fine, and compared its expected disutility against the expected disutility of walking all the way to safe parking and back. It came out in favor of just eating the occasional ticket. His wife was horrified at him deliberately risking the fines. Isn't this a case of rational reflection leading to an acceptance of risk which his less-reflective wife was averse to?
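The structure of that calculation is just a comparison of expected costs. A toy version, with all numbers hypothetical since the anecdote gives none:

    p_ticket      = 0.05   # chance of being ticketed on a given day
    fine          = 40.0   # dollars per ticket
    walk_minutes  = 25.0   # extra round-trip walk to the distant safe parking
    value_of_time = 0.30   # dollars per minute he assigns to his own time

    expected_street_cost = p_ticket * fine               # 2.0 dollars/day
    safe_parking_cost    = walk_minutes * value_of_time  # 7.5 dollars/day

    # With these made-up numbers, eating the occasional ticket wins.
    print(expected_street_cost, safe_parking_cost)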
2gwern
In a serendipitous and quite germane piece of research, Marginal Revolution links to a study on IQ and risk-aversion:
2RobinZ
I don't believe the article says "reflective":
4NancyLebovitz
The problem with the temperament checks in the last two paragraphs is that they're still testing roughly the same thing that's tested earlier on-- competence at word problems. And possibly interest in word problems-- I know I've seen versions of the three problems before. I wouldn't be going at them completely cold, but I wouldn't have noticed and remembered having seen them decades ago if word problems weren't part of my mental universe.
1gwern
Somewhat offtopic: I recall reading a study once that used a test which I am almost certain was this one to try to answer the cause/correlation question of whether philosophical training/credentials improved one's critical thinking or whether those who undertook philosophy already had good critical thinking skills; when I recently tried to re-find it for some point or other, I was unable to. If anyone also remembers this study, I'd appreciate any pointers. (About all I can remember about it was that it concluded, after using Bayesian networks, that training probably caused the improvements and didn't just correlate.)
0RobinZ
They are more risk-averse - that was a typo.
0Alexandros
Thanks for the ideas. It's good to have something concrete. Let's see how it goes.
4oliverbeatson
The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic. Someone who had read Less Wrong a few times, but didn't make the knowledge truly a part of them, might return a false negative for certain biases while retaining those biases in real-life situations. Don't want to make the test about guessing the teacher's password.
4NancyLebovitz
The test should include questions about applying rationality in one's life, not just abstract problems.
3utilitymonster
I'd suggest starting with a list of common biases and producing a question (or a few?) for each. The questions could test the biases and you could have an explanation of why the biased reasoning is bad, with examples. It would also be useful to group the biases together in natural clusters, if possible.
2[anonymous]
Sounds like a good idea. Doesn't have to be invented from scratch; adapt a few psychological or behavioral-economics experiments. It's hard to ask about rationality in one's own life because of self-reporting problems; if we're going to do it, I think it's better to use questions of the form "Scenario: would you do a, b, c, or d?" rather than self-descriptive questions of the form "Are you more: a or b?"
0[anonymous]
Somewhat relatedly, I considered the idea of creating a 'Bias-Quotient' type test. It could go some way to popularising rationality and bias-aversion. A lot more people like the idea of being right than are actually aware of biases and other such behavioural stuff. I anticipate that many of these people would do the test expecting to share their score somewhere online and gain relative intellect-prestige from an expected high score. On discovering that they're more biased than they believed, I believe that, provided the test's response to a low score were engaging and informative (and not annoying and pedantic), they would on net be genuinely interested in overcoming this, with a link to Less Wrong somewhere appropriate. They might share the test regardless of their low score with an annotation such as 'check this -- very interesting!'. That's all based on my model of how a lot of aspiring intelligent people behave. It may be biased. This could open to a lot of people the doors to beginning to overcome the failures of their visceral probability heuristics, as well as the standard set of cognitive biases. The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic.

I can't remember if this has come up before...

Currently the Sequences are mostly as-imported from OB; including all the comments, which are flat and voteless as per the old mechanism.

Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder if an appropriate thread structure be inferred from context; also we could vote the comments up or down in order to make the useful-in-hindsight stuff more salient. There's a lot of great ... (read more)

7RobinZ
Voting is highly recommended - please do, and feel free to reply to comments with additional commentary as well. Otherwise I'd say leave them as be.
2JamesAndrix
Also related: A lot of the Sequences show marks of their origin on Overcoming Bias that could be confusing to someone who lands on that article: Example: "Since this is an econblog... " in http://lesswrong.com/lw/j3/science_as_curiositystopper/ I think some kind of editorial note is in order here, if not a rewrite.
2JamesAndrix
Alternatively, we could repost/revisit the sequences on a schedule, and let the new posts build fresh comments. Or even better, try to cover the same topics from a different perspective.
[-]gwern120

I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.

Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.

2gwern
So, from the 7 upboats, I take it that people in general approve of this idea. What's next? What do we do to make this a reality? Looking back at an old post from OB (I think), like http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ I don't see any option to promote it to the front page. I thought I had enough karma to promote other peoples' articles, but it looks like I may be wrong about this. Is it even currently technically possible to promote old articles?
1Morendil
Agree on the numerical value of X? LW has slowed down a bit recently, compared to relatively recent periods with frantic paces of posting; I rather appreciate the current rhythm. It would take a long period without new stuff to convince me we needed "filler" at all. Only editors can promote. (Installing the LW codebase locally is fun: you can play at being an editor.)
2gwern
Alright. How about a week? If nothing new has shown up for a week, then I don't think people will mind a classic. (And offhand, I'm not sure we've yet had a slack period that long.)
0Morendil
Sounds good to me.

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-p... (read more)

2cupholder
See also 'crank magnetism.' I wonder if this counts as evidence for my heuristic of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.

[-]Kevin130

Thanks for asking! I also really don't want this to fizzle away.

It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.

ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.

Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.


Title: On Mechanizing Science (Epistemology?)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

“It is not possible … to construct ... (read more)

I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.

First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.

However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informa... (read more)

5SilasBarta
Okay, thanks, that tells me what I was looking for: clarification of what it is I'm trying to refute, and what substantive reasons I have to disagree. So "Moldbug" is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that's worthless but adheres to the algorithm, and we can see this with common sense, however less accurate it might be.

The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. Like E. T. Jaynes says in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to make a robot that infers everything we should infer, what constraints would we place on it? This exercise is not just some attempt to make robots "as good as humans"; rather, it reveals why that-which-we-call "common sense" works in the first place, and exposes more general principles of superior inference.

In short, I claim that we can have Level 3 understanding of our own common sense. That, contra Moldbug, we can go beyond just being able to produce its output (Level 1), but also know why we regard certain things as common sense but not others, and be able to explain why it works, for what domains, and why and where it doesn't work. This could lead to a good article.
4TraditionalRationali
That it should be possible to Algorithmize Science seems clear from the fact that the human brain can do science and the human brain should be possible to describe algorithmically. If not at a higher level, then at least -- in principle -- by quantum electrodynamics, which is the (known and computable in principle) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it should be possible to do in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.) I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used today by human scientists. That seems plausible. But it has still to be done and seems rather difficult. The philosophers of science are working on understanding the scientific process better and better, but they seem still to have a long way to go before an actually working algorithmic description has been achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing bayesianism. EDIT "done at a lower level" changed to "done at a higher level"
3WrongBot
The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we are so very often wrong says more about the difficulty of the problem than about our own abilities.
2NancyLebovitz
I'm pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.
3SilasBarta
The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge." But you have to wonder: the human didn't learn how to recognize spurious correlations through magic. So however they came up with that capability should be some identifiable process.
4cupholder
Those people should be glad they've never heard of TETRAD - their heads might have exploded!
2NancyLebovitz
That's intriguing. Has it turned out to be useful?
6cupholder
It's apparently been put to use with some success. Clark Glymour - a philosophy professor who helped develop TETRAD - wrote a long review of The Bell Curve that lists applications of an earlier version of TETRAD (see section 6 of the review): Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.
0NancyLebovitz
Maybe it's just a matter of people kidding themselves about how hard it is to explain something. On the other hand, some things (like vision and natural language) are genuinely hard to figure out. I'm not saying the problem is insoluble. I'm saying it looks very difficult.
0cupholder
One possible way to get started is to do what the 'Distilling Free-Form Natural Laws from Experimental Data' project did: feed measurements of time and other variables of interest into a computer program which uses a genetic algorithm to build functions that best represent one variable as a function of itself and the other variables. The Science article is paywalled but available elsewhere. (See also this bunch of presentation slides.) They also have software for you to do this at home.
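As a toy illustration of the general idea (emphatically not the published algorithm, which is far more sophisticated): generate small random expression trees in x, score them against the "measurements", and keep whichever fits best.

    import random

    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

    def random_expr(depth):
        # A leaf is either the variable x or a small integer constant.
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", float(random.randint(-3, 3))])
        op = random.choice(list(OPS))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(expr, x):
        if expr == "x":
            return x
        if isinstance(expr, float):
            return expr
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def sq_error(expr, data):
        return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

    # Synthetic "measurements" of an unknown law (here y = x^2 + 1).
    data = [(x, x * x + 1.0) for x in range(-3, 4)]

    # Pure random search keeping the single best candidate; a real system
    # would use mutation, crossover and much richer building blocks.
    best = min((random_expr(3) for _ in range(50_000)), key=lambda e: sq_error(e, data))
    print(best, sq_error(best, data))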
3Tyrrell_McAllister
The pithy reply would be that science already is mechanized. We just don't understand the mechanism yet.
0SilasBarta
Is that directed at, or intended to be any more convincing to those holding Callahan's view in the link? I'm not trying to criticize you, I just want to make sure you know the kind of worldview you're dealing with here. If you'll remember, this is the same guy who categorically rejects the idea that anything human-related is mechanized. ( Recent blog post about the issue ... he's proud to be a "Silas-free" zone now.) On a slightly related note, I was thinking about what analogous positions would look like, and I thought of this one for comparison: "There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."
2Blueberry
From looking at his blog, I think you should take this as a compliment.
2Morendil
About "Silas-free zones" you blogged: You don't think your making a horrible impression on people you argue with may have anything to do with it? ;) Seriously, that would be my first hypothesis. "You don't catch flies with vinegar." Go enough out of your way to antagonize people even as you're making strong rebuttals to their weak arguments, and you're giving them an easy way out of listening to you. The nicer you are, the harder you make it for others to dismiss you as an asshole. I'd count that as a good reason to learn nice. (If you need role models, there are plenty of people here who are consistently nice without being pushovers in arguments - far from it.)
0SilasBarta
The evidence against that position is that Callahan, for a while, had no problem allowing my comments on his site, but then called me a "douche" and deleted them the moment they started disagreeing with him. Here's another example. Also, on this post, I responded with something like, "It's real, in the sense of being an observable regularity in nature. Okay, what trap did I walk into?" but it was disallowed. Yet I wouldn't call that comment rude. It's not about him banning me because of my tone; he bans anyone who makes the same kinds of arguments, unless they do it badly, in which case he keeps their comments for the easy kill, gets in the last word, and closes the thread. Which is his prerogative, of course, but not something to be equated with "being interested in meaningful exchange of ideas, and only banning those who are rude".
1Morendil
I'm not sure that claim would be entirely absurd. In the software engineering business, there's a subculture whose underlying ideology can be caricatured as "Programming would be so simple if only we could get those pesky programmers out of the loop." This subculture invests heavily in code generation, model-driven architectures, and so on. Arguably, too, this goal only seems plausible if you have swallowed quite a few confusions regarding the respective roles of problem-solving, design, construction, and testing. A closer examination reveals that what passes for attempts at "mechanizing" the creation of software punts on most of the serious questions, focusing only on what is easily mechanizable. But that is nothing other than the continuation of a trend that has existed in the software profession from the beginning: the provision of mechanized aids to a process that remains largely creative (and as such poorly understood). We don't say that compilers have mechanized the production of software; we say that they have raised the level of abstraction at which a programmer works.
2SilasBarta
Okay, but that point only concerns production of software, a relatively new "production output". The statement ("there is no automatist revival in industry ...") would apply just the same to any factory, and ridicules the idea that there can be a mechanical procedure for producing any good. In reality, of course, this seems to be the norm: someone figures out what combination of motions converts the input to the output, refuting the notion that e.g. "There is no mechanical procedure for preparing a bottle of Coca-cola ..." In any case, my dispute with Callahan's remark is not merely about its pessimism regarding mechanizing this or that (which I called View 1), but rather, the implication that such mechanization would be fundamentally impossible (View 2), and that this impossibility can be discerned from philosophical considerations. And regarding software, the big difficulty in getting rid of human programmers seems to come from how their role is, ultimately, to find a representation for a function (in a standard language) that converts a specified input into a specified output. Those specifications come from ... other humans, who often conceal properties of the desired I/O behavior, or fail to articulate them.
0Tyrrell_McAllister
No, you're absolutely right. My comment definitely would not be convincing. The best that could be said for it is that it would help to clarify the nature of my rejection of View 2. That is, if I were talking to Callahan, that comment would, at best, just help him to understand which position he was dealing with.
3cupholder
Am I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word 'Bayesian,' for example, and watch the numbers shoot up over time!)
1Douglas_Knight
Half of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as bayesian, but that is rather different from the economists being bayesian themselves! The other half are in math & statistics, probably because Bayesian statisticians are becoming more common, which you might count as science (and 10% are in science proper). Anyhow, it's clear from the context (I'd have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.
0cupholder
It might well have been clear from the quote itself, but not to me - I just read the quote as saying Bayesian thinking and Bayesian methods haven't become more popular in science, which doesn't mesh with my intuition/experience.
3NancyLebovitz
How hard do you think mechanizing science would be? It strikes me as being at least in the same class with natural language.
2NancyLebovitz
I've been poking at the question of to what extent computers could help people do science, beyond the usual calculation and visualization which is already being done. I'm not getting very far-- a lot of the most interesting stuff seems like getting meaning out of noise. However, could computers check to make sure that the use of statistics isn't too awful? Or is finding out whether what's deduced follows from the raw data too much like doing natural language? What about finding similar patterns in different fields? Possibly promising areas which haven't been explored?
0SilasBarta
Not exactly sure, to be honest, though your estimate sounds correct. What matters is that I deem it possible in a non-trivial sense; and more importantly, that we can currently identify rough boundaries of ideal mechanized science, and can categorize much of existing science as being definitely in or out.
2steven0461
It's probably best to take a cyborg point of view -- consciously followed algorithms (like probabilistic updating) aren't a replacement for common sense, but they can be integrated into common sense, or used as measuring sticks, to turn common sense into common awesome cybersense.
2cousin_it
You probably won't find much opposition to your opinion here on LW. Duh, of course science can and will be automated! It's pretty amusing that the thesis of Cosma Shalizi, an outspoken anti-Bayesian, deals with automated extraction of causal architecture from observed behavior of systems. (If you enjoy math, read it all; it's very eye-opening.)
2SilasBarta
Really? I read enough of that thesis to add it to the pile of "papers about fully generally learning programs with no practical use or insight into general intelligence". Though I did get one useful insight from Shalizi's thesis: that I should judge complexity by the program length needed to produce something functionally equivalent, not something exactly identical, as that metric makes more sense when judging complexity as it pertains to real-world systems and their entropy.
0SilasBarta
And regarding your other point, I'm sure people agree with holding view 2 in contempt. But what about the more general question of mechanizing epistemology? Also, would people be interested in a study of what actually does motivate opposition to the attempt to mechanize science? (i.e. one that goes beyond my rants and researches it)
0Daniel_Burfoot
I read Moldbug's quote as saying: there is currently no system, algorithmic or bureaucratic, that is even remotely close to the power of human intuition, common sense, genius, etc. But there are people who implicitly claim they have such a system, and those people are dangerous liars.
1SilasBarta
0Jayson_Virissimo
Those quotes do seem to be in conflict, but if he is talking about people that claim they already have the blueprints for such a thing, it would make more sense to read what he is saying as "it is not possible, with our current level of knowledge, to construct a system of thought that improves on common sense". Is he really pushing back against people that say that it is possible to construct such a system (at some far off point in the future), or is he pushing back against people that say they have (already) found such a system?
2mattnewport
The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas' view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:
0SilasBarta
That sounds like a justification for view 1. Remember, view 1 doesn't provide a justification for why there will need to be continual tweaks to mechanized reasoners to bring them in line with (more-) human reasoning, so remains agnostic on how exactly one justifies this view. (Of course, "Moldbug's" view still doesn't seem any more defensible, because it equates a machine virtualizing a human, with a machine virtualizing the critical aspects of reasoning, but whatever.)
[-][anonymous]70

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables that have positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and that even his c... (read more)

[-][anonymous]160

I pointed this out to my buddy who's a psychology doctoral student, his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

9cousin_it
I think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.) In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables. As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.
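A quick way to see the effect being described (my own toy parameters, not necessarily Shalizi's exact setup): give each simulated person thousands of independent abilities, let each test sum a random half of them, and look at the first factor of the resulting test correlations.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_abilities, n_tests = 2000, 3000, 12

    abilities = rng.normal(size=(n_people, n_abilities))               # independent by construction
    masks = (rng.random((n_tests, n_abilities)) < 0.5).astype(float)   # which abilities each test taps
    scores = abilities @ masks.T                                       # people x tests

    corr = np.corrcoef(scores, rowvar=False)                           # test intercorrelations
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    print("mean inter-test correlation:", round(corr[np.triu_indices(n_tests, 1)].mean(), 2))
    print("variance share of first factor:", round(eigvals[0] / n_tests, 2))

With parameters like these the tests intercorrelate at roughly 0.5 and the first factor accounts for over half the variance, even though the underlying abilities are independent by construction.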

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

5satt
There is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence (1975) and Race, IQ and Jensen (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)
4Vladimir_M
Yes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies. Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.
2satt
Me too! I just don't think there's been much new data brought to the table. I agree with you in counting Flynn's 1987 paper and the Minnesota followup report, and I'd add Moore's 1986 study of adopted black children, the recent meta-analyses by Jelte Wicherts and colleagues on the mean IQs of sub-Saharan Africans, Dickens & Flynn's 2006 paper on black Americans' IQs converging on whites' (and at a push, Rushton & Jensen's reply along with Dickens & Flynn's), Fryer & Levitt's 2007 paper about IQ gaps in young children, and Fagan & Holland's papers (2002, 2007, 2009) on developing tests where minorities score equally to whites. I guess Richard Lynn et al.'s papers on the mean IQ of East Asians count as well, although it's really the black-white comparison that gets people's hackles up. Having written out a list, it does look longer than I expected...although it's not much for 30-35 years of controversy! Amen. The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.
0[anonymous]
satt: If I remember correctly, Loehlin's book also mentions it briefly.

However, it seems to me that the situation is actually more complex. Jensen's arguments, in the forms in which he has been stating them for decades, are clearly inadequate. Some very good responses were published 30+ years ago by Mackenzie and Furby. Yet for some bizarre reason, prominent critics of Jensen have typically ignored these excellent references and instead produced their own much less thorough and clear counterarguments.

Nevertheless, I'm not sure if the argument should end here. Certainly, if we observe a subpopulation S in which the values of a trait follow a normal distribution with the mean M(S) that is lower than for the whole population, then in pairs of individuals from S among whom there exists a correlation independent of rank and smaller than one, the lower-ranked individuals will regress towards M(S). That's a mathematical tautology, and nothing can be inferred from it about what the causes of the individual and group differences might be; the above cited papers explain this fact very well.

However, the question that I'm not sure about is: what can we conclude from the fact that the existing statistical distributions and correlations are such that they satisfy these mathematical conditions? Is this really a trivial consequence of the norming of tests that's engineered so as to give their scores a normal distribution over the whole population? I'd like to see someone really statistics-savvy scrutinize the issue without starting from the assumption that both the total population distribution and the subpopulation distribution are normal and that the correlation coefficients between relatives are independent of their rank in the distribution.
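For what it's worth, the tautology part is easy to exhibit in a simulation where the data-generating mechanism is deliberately left unspecified (toy numbers throughout):

    import numpy as np

    rng = np.random.default_rng(0)
    mean, sd, r, n = 85.0, 15.0, 0.5, 1_000_000      # subgroup mean/SD and a rank-independent correlation

    cov = [[sd**2, r * sd**2], [r * sd**2, sd**2]]
    parents, children = rng.multivariate_normal([mean, mean], cov, size=n).T

    selected = (parents > 118) & (parents < 122)     # condition on high-scoring parents
    print(round(parents[selected].mean(), 1))        # ~120
    print(round(children[selected].mean(), 1))       # ~102.5, i.e. regressed toward 85

The children regress toward the subgroup mean purely because of the assumed distribution and correlation; nothing in the simulation says whether the mechanism is genetic, environmental, or anything else.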
0NancyLebovitz
What would appropriate policy be if we just don't know to what extent IQ is different in different groups?
5Vladimir_M
Well, if you'll excuse the ugly metaphor, in this area even the positive questions are giant cans of worms lined on top of third rails, so I really have no desire to get into public discussions of normative policy issues.
5Morendil
OK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll ju... (read more)

4[anonymous]
Your analogy is flawed, I think. The weight of the rock pile is just what we call the sum of the weights of the rocks. It's just a definition; but the idea of general intelligence is more than a definition. If there were a real, biological thing called g, we would expect all kinds of abilities to be correlated. Intelligence would make you better at math and music and English. We would expect basically all cognitive abilities to be affected by g, because g is real -- it represents something like dendrite density, some actual intelligence-granting property. People hypothesized that g is real because results of all kinds of cognitive tests are correlated. But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities. Sure, your old g will correlate with multiple abilities -- hell, you could let g = "test score" and that would correlate with all the abilities -- but that would be meaningless. If size and location determine the price of a house, you don't declare that there is some factor that causes both large size and desirable location!

SarahC:

But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Just to be clear, this is not an original idea by Shalizi, but the well known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.

In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:

Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or n

... (read more)
4satt
I think Shalizi isn't too far off the mark in writing "as if Thomson's theory had been ignored". Although a few psychologists & psychometricians have acknowledged Thomson's sampling model, in everyday practice it's generally ignored. There are far more papers out there that fit g-oriented factor models as a matter of course than those that try to fit a Thomson-style model. Admittedly, there is a very good reason for that — Thomson-style models would be massively underspecified on the datasets available to psychologists, so it's not practical to fit them — but that doesn't change the fact that a g-based model is the go-to choice for the everyday psychologist. There's an interesting analogy here to Shalizi's post about IQ's heritability, now I think about it. Shalizi writes it as if psychologists and behaviour geneticists don't care about gene-environment correlation, gene-environment interaction, nonlinearities, there not really being such a thing as "the" heritability of IQ, and so on. One could object that this isn't true — there are plenty of papers out there concerned with these complexities — but on the other hand, although the textbooks pay lip service to them, researchers often resort to fitting models that ignore these speedbumps. The reason for this is the same as in the case of Thomson's model: given the data available to scientists, models that accounted for these effects would usually be ruinously underspecified. So they make do.
4Vladimir_M
However, it seems to me that the fatal problem of the sampling theory is that nobody has ever managed to figure out a way to sample disjoint sets of these hypothetical uncorrelated modules. If all practically useful mental abilities and all the tests successfully predicting them always sample some particular subset of these modules, then we might as well look at that subset as a unified entity that represents the causal factor behind g, since its elements operate together as a group in all relevant cases. Or is there some additional issue here that I'm not taking into account?
[-]satt130

I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.

Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:

  • "100-word vocabulary test // Recognize pictures of hand as Right/Left"
... (read more)
5Vladimir_M
satt: That's an extremely interesting reference, thanks for the link! This is exactly the kind of approach that this area desperately needs: no-nonsense scrutiny by someone with a strong math background and without an ideological agenda. David Hilbert allegedly once quipped that physics is too important to be left to physicists; the way things are, it seems to me that psychometrics should definitely not be left to psychologists. That they haven't immediately rushed to explore these findings of Smith's further is an extremely damning fact about the intellectual standards in the field.

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least as important as general intelligence in predicting success and performance.
0satt
That's an excellent point that completely did not occur to me. Turns out that self-discipline is actually one of the 6 subscales used to measure conscientiousness on the NEO-PI-R, so it's clearly related to conscientiousness. With that in mind, it is a bit weird that conscientiousness doesn't get a shoutout in the paper...
0NancyLebovitz
Is anything known about a physical basis for conscientiousness?
4wedrifid
It can be reliably predicted by, for example, SPECT scans. If I recall correctly you can expect to see over-active frontal lobes and basal ganglia. For this reason (and because those areas depend on dopamine a lot) dopaminergics (Ritalin, etc) make a big difference.
4Douglas_Knight
I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond.

Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable. The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated. The second example is a little more promising: maybe that scattered Xs test is independent of verbal ability, even though it looks like other skills that are not, but I doubt it.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence. I knew that conscientiousness predicted GPAs, but I'd never heard such a strong claim. But it is true that a lot of people dismiss conscientiousness (and GPA) in favor of intelligence, and they seem to be making an error (or being risk-seeking).
3satt
Once you read the relevant passage in context, I anticipate you will agree with me that Smith is serious. Take this paragraph from before the passage I quoted from:

Smith then presents the example from Thurstone's 1938 data. I'd be inclined to agree if the 3 most negative correlations in the dataset had come from very different pairs of tests, but the fact that they come from sets of subtests that one would expect to tap similar narrow abilities suggests they're not just statistical noise. Smith himself does not appear to make that claim; he presents his two examples merely as demonstrations that not all mental ability scores positively correlate.

I think it's reasonable to package the 3 verbal subtests he mentions as strongly loading on verbal ability, but it's not clear to me that the 3 other subtests he pairs them with are strong measures of "spatial ability"; two of them look like they tap a more specific ability to handle mental mirror images, and the third's a visual memory test. Even if it transpires that the 3 subtests all tap substantially into spatial ability, they needn't necessarily correlate positively with specific measures of verbal ability, even though verbal ability correlates with spatial ability.

I'm tempted to agree but I'm not sure such a strong generalization is defensible. Take a list of psychologists' definitions of intelligence. IMO self-discipline plausibly makes sense as a component of intelligence under definitions 1, 7, 8, 13, 14, 23, 25, 26, 27, 28, 32, 33 & 34, which adds up to 37% of the list of definitions. A good few psychologists appear to include self-discipline as a facet of intelligence.
1HughRistik
Interesting thought. It turns out that Conscientiousness is actually negatively related to intelligence, while Openness is positively correlated with intelligence. This finding is consistent with the folk notion of "crazy geniuses." Though it's important to note that the second study was done on college students, who must have a certain level of IQ and who aren't representative of the population. The first study notes: If we took a larger sample of the population, including lower IQ individuals, then I think we would see the negative correlation between Conscientiousness and intelligence diminish or even reverse, because I bet there are lots of people outside a college population who have both low intelligence and low Conscientiousness. It could be that a moderate amount of Conscientiousness (well, whatever mechanisms cause Conscientiousness) is necessary for above average intelligence, but too much Conscientiousness (i.e. those mechanisms are too strong) limits intelligence.
6[anonymous]
I noticed a while back when a bunch of LW'ers gave their Big Five scores that our Conscientiousness scores tended to be low. I took that to be an internet thing (people currently reading a website are more likely to be lazy slobs) but this is a more flattering explanation.
2Douglas_Knight
No it doesn't. The whole point of that article is that it's a mistake to ask people how conscientious they are.
0satt
Interesting. I would've expected Conscientiousness to correlate weakly positively with IQ across most IQ levels. I would avoid interpreting a negative correlation between C/self-discipline and IQ as evidence against C/self-discipline being a separate facet of intelligence; I think that would beg the question by implicitly assuming that IQ represents the entirety of what we call intelligence.
1RobinZ
Just out of curiosity: is psychology your domain of expertise? You speak confidently and with details.
5satt
If only! I'm just a physics student but I've read a few books and quite a few articles about IQ. [Edit: I've got an amateur interest in statistics as well, which helps a lot on this subject. Vladimir_M is right that there's a lot of crap statistics peddled in this field.]
0[anonymous]
Ok, that's interesting new stuff -- I haven't read this literature at all.
4[anonymous]
"All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30. (This predictive ability is vastly less than many people would lead you to believe [cf.], but I'm happy to give them that point for the sake of argument.) This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness. Indeed, since all these things predict success in life (of one form or another), and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a (for arete), and start treating it as a real causal variable. By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men." This is the point here. There's a difference between coming up with linear combinations and positing real, physiological causes.

My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.

0[anonymous]
Ok, let's talk connotations. If g is a causal factor then "A has higher g than B" adds additional information to the statement "A scored higher than B on such-and-such tests." It might mean, for instance, that you could look in A's brain and see different structure than in B's brain; it might mean that we would expect A to be better at unrelated, previously untested skills. If g is not a causal factor, then comments about g don't add any new information; they just sort of summarize or restate. That difference is significant. A predictive factor is enough for predictive uses, but not for a lot of policy uses, which rely on causality. From your comment, I assume you are not a lefty, and that you think we should be more confident than we are about using IQ to make decisions regarding race. I think that Shalizi's reasoning is likely not irrelevant to making those decisions; it should probably make us more guarded in practice.
6Douglas_Knight
I don't understand your last paragraph. Could you give an example? Is this relevant to the decision of whether intelligence tests should be used for choosing firemen? or is that a predictive use?
2[anonymous]
The kinds of implications I'm thinking about are that if IQ causes X (and if IQ is heritable), then we should not seek to change X by social engineering means, because it won't be possible. X could be the distribution of college admittees, firemen, criminals, etc. Not all policy has to rely on causal factors, of course. And my thinking is a little blurry on these issues in general.
2cousin_it
Seconding Douglas_Knight's question. I don't understand why you say policy uses must rely on causal factors.
0[anonymous]
The way you define "real" properties, it seems you can't tell them from "unreal" ones by looking at correlations alone; we need causal intervention for that, a la Pearl. So until we invent tech for modifying dendrite density of living humans, or something like that, there's no practical difference between "real" g and "unreal" g and no point in making the distinction between them. In particular, their predictive power is the same. So, basically, your and Shalizi's demand for a causal factor is too strong. We can do with weaker tools.
7satt
Shalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct. Here's a demo.

The statistical analysis package R comes with some built-in datasets to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each):

* atmospheric CO2 concentrations, 1959-1964
* female UK lung deaths, 1974-1979
* international airline passengers, 1949-1954
* sunspot counts, 1749-1754
* average air temperatures at Nottingham Castle, 1920-1925
* car drivers killed & seriously injured in Great Britain, 1969-1974

It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine to calculate a general factor for these six time series, that general factor explains 1/3 of their variance!

However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it difficult to say how correct he is overall. What does Shalizi mean specifically by calling g a myth? Does he think it is very unlikely to exist, or just that factor analysis is not good evidence for it? Who does he think is in error about its nature? I can think of one researcher in particular who stands out as just not getting it, but beyond that I'm just not sure.
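For concreteness, here's a minimal R sketch of a demo along these lines. The particular built-in series and the six-year windows below are just one plausible choice matching the list above, not necessarily the exact ones used, and factanal() is the canned maximum-likelihood factor analysis routine in base R:

    # Six unrelated built-in monthly series, six years (72 points) from each
    n <- 72
    x <- cbind(
      co2        = window(co2,            start = c(1959, 1))[1:n],  # CO2 concentrations, 1959-1964
      lungdeaths = window(fdeaths,        start = c(1974, 1))[1:n],  # UK female lung deaths, 1974-1979
      airpass    = window(AirPassengers,  start = c(1949, 1))[1:n],  # airline passengers, 1949-1954
      sunspots   = window(sunspots,       start = c(1749, 1))[1:n],  # sunspot counts, 1749-1754
      nottem     = window(nottem,         start = c(1920, 1))[1:n],  # Nottingham temperatures, 1920-1925
      drivers    = window(UKDriverDeaths, start = c(1969, 1))[1:n]   # drivers killed/injured, 1969-1974
    )
    cor(x)                            # pairwise correlations between the six series
    fa <- factanal(x, factors = 1)    # fit a single "general factor"
    fa$loadings                       # loadings of each series on that factor
    sum(fa$loadings^2) / ncol(x)      # share of total variance the factor explains

Since factanal() works on the correlation matrix, the factor's share of total variance is just its sum of squared loadings divided by the number of variables.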
4HughRistik
In your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."
9satt
The best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake. To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what.

Shalizi is, somewhat confusingly, using the word "myth" to mean something like "g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent.

Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)
3[anonymous]
From what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).
1satt
You might be right. I'm not really competent to judge the first issue (causal structure of the mind), and the second issue (interpretation of factor analytic g) is vague enough that I could see myself going either way on it.
2RobinZ
By the way, welcome to Less Wrong! Feel free to introduce yourself on that thread! If you haven't been reading through the Sequences already, there was a conversation last month about good, accessible introductory posts that has a bunch of links and links-to-links.
3satt
Thank you!
0RobinZ
Belatedly: Economic development (including population growth?) is related to CO2, lung deaths, international airline passengers, average air temperatures (through global warming), and car accidents.
5gwern
Here is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/
-1RobinZ
I don't think it's surprising that an untenable claim could persist within a field for a long time, once established. Pluto was called a planet for seventy-six years. I've no idea whether the critique of g is accurate, however.
2mkehrt
That's a bizarre choice of example. The question of whether Pluto is a planet is entirely a definitional one; the IAU could make it one by fiat if they chose. There's no particular reason for it not to be one, except that the IAU felt the increasing number of trans-Neptunian objects made the current definition awkward.
5RobinZ
"[E]ntirely a definitional" question does not mean "arbitrary and trivial" - some definitions are just wrong. EY mentions the classic example in Where to Draw the Boundary?: Honestly, it would make the most sense to draw four lists, like the Hayden Planetarium did, with rocky planets, asteroids, gas giants, and Kuiper Belt objects each in their own category, but it is obviously wrong to include everything from Box 1 and Box 3 and one thing from Box 4. The only reason it was done is because they didn't know better and didn't want to change until they had to.
9mkehrt
You (well, EY) make a good point, but I think neither the Pluto remark nor the fish one is actually an example of this.

In the case of Pluto, the trans-Neptunians and the other planets seem to belong in a category that the asteroids don't. They're big and round! Moreover, they presumably underwent a formation process that the asteroid belt failed to complete in the same way (or whatever the current theory of formation of the asteroid belt is; I think that it involves failure to form a "planet" due to tidal forces from Jupiter?). Of course there are border cases like Ceres, but I think there is a natural category (whatever that means!) that includes the rocky planets, gas giants and Kuiper Belt objects that does not include (most) asteroids and comets.

On the fish example, I claim that the definition of "fish" that includes the modern definition of fish union the cetaceans is a perfectly valid natural category, and that this is therefore an intensional definition. "Fish" are all things that live in the water, have finlike or flipperlike appendages and are vaguely hydrodynamic. The fact that such things do not all share a common descent* is immaterial to the fact that they look the same and act the same at first glance. As human knowledge has increased, we have made a distinction between fish and things that look like fish but aren't, but we reasonably could have kept the original definition of fish and called the scientific concept something else, say "piscoids".

*well, actually they do, but you know approximately what I mean.
3NancyLebovitz
Nitpick: if in your definition of fish, you mean that they need to both have fins or flippers and be (at least) vaguely hydrodynamic, I don't think seahorses and puffer fish qualify.
2wnoise
The usual term is "monophyletic".
1mkehrt
Yes, but neither fish nor (fish union cetaceans) is monophyletic. The descent tree rooted at the last common ancestor of fish also contains the tetrapods, and the descent tree rooted at the last common ancestor of tetrapods contains the cetaceans. I am not any sort of biologist, so I am unclear on the terminological technicalities, which is why I handwaved this in my post above.
3Emile
Fish are a paraphyletic group.
1wedrifid
I'm inclined to agree. Having a name for 'things that naturally swim around in the water, etc' is perfectly reasonable and practical. It is in no way a nitwit game.
[-]Roko70

Robert Ettinger's surprise at the incompetence of the establishment:

Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, w

... (read more)
4Mitchell_Porter
There are many momentous issues here.

First: I think a historical narrative can be constructed, according to which a future unexpected in, say, 1900 or even in 1950 slowly comes into view, and in which there are three stages characterized by an extra increment of knowledge. The first increment is cryonics, the second increment is nanotechnology, and the third increment is superintelligence. This is a highly selective view; if you were telling the history of futurist visions in general, you would need to include biotechnology, robotics, space travel, nuclear power, even aviation, and many other things. In any case, among all the visions of the future that exist out there, there is definitely one consisting of cryonics + nanotechnology + superintelligence. Cryonics is a path from the present to the future, nanotechnology will make the material world as pliable as the bits in a computer, and superintelligence guided by some utility function will rule over all things.

Among the questions one might want answered: 1) Is this an accurate vision of the future? 2) Why is it that still so few people share this perspective? 3) Is that a situation which ought to be changed, and if so, how could it be changed?

Question 1 is by far the most discussed. Question 2 is mostly pondered by the few people who have answered 'yes' to question 1, and usually psychological answers are given. I think that a certain type of historical thinking could go a long way towards answering question 2, but it would have to be carried out with care, intelligence, and a will to objectivity.

This is what I have in mind: You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise. To find a history which notices any of that, you will have to specialize, e.g. to a history of American technological subcultures, or a history of 20th-century futurological enthusiasms. An
2Roko
On the other hand, does anyone who has seriously thought about the issue expect nanotech to not be incredibly important in the long-term? It seems that there is a solid sceptical case that nano has been overhyped in the short term, perhaps even by Drexler. But who will step forward having done a thorough analysis and say that humanity will thrive for another millennium without developing advanced nanotech?
3cupholder
A good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]

It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.

The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the ... (read more)

8cousin_it
Okay, I've read through the whole thing so far. This is not rationalist fiction. This is standard war porn, paperback thriller stuff. Many many technical descriptions of guns, rockets, military vehicles, etc. Throughout the story there's never any real conflict, just the American military (with help from the rest of the world) steamrolling everything, and the denizens of Heaven and Hell admiring the American way of life. It was well-written enough to hold my attention like a can of Pringles would, but I don't feel enriched by reading it.
2NancyLebovitz
I've only read about a chapter and a half, and may not read any more of it, but there's one small rationalist aspect worthy of note-- the author has a very solid grasp of the idea that machines need maintenance.
1CannibalSmith
Here's a tiny bit of rationality:
2cousin_it
If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective; that's why it is so widespread in the animal kingdom, where evolutionary dynamics make populations converge on an equilibrium of behavior, and that's why it was widespread in medieval times (which Hell is modeled on). So the passage you quoted doesn't work as a general statement about rationality, but it works pretty well as praise of America. Right now, America is the only country on Earth that can "fight to win". Other countries have to fight "honorably" lest America deny them their right of conquest.
2wedrifid
The wars America fights, the wars all countries fight, are ritualised combat. We send our soldiers and bombers (of either the plane or suicide variety), you send your soldiers and bombers. One side loses more soldiers, the other side loses more money. If America or any of its rivals fought to win, their respective countries would be levelled. The ritualised combat model you describe matches modern warfare perfectly, and the very survival of the USA depends on it.
-1cousin_it
America's wars change regimes in other countries. This ain't ritualized combat.
5wedrifid
That's exactly the purpose of ritualised combat: change regimes without total war. Animals (including humans) change their relative standing in the tribe, and coalitions of animals use ritualised combat to change intratribal regimes. Intertribal combat often has some degree of ritual element, although this of course varies based on the ability of tribes to 'cooperate' in combat without total war.

In international battles there have been times when the combat has been completely non-ritualised and brutal. But right now, if combat were not ritualised, countries would be annihilated by nuclear battles. That's the whole point of ritual combat: fight with the claws retracted, submit to the stronger party without going for the kill. Because if powerful countries with current technology levels, or powerful animals, fight each other without restriction, both will end up crippled. That can either mean infections from relatively minor flesh wounds in a fight to the death, or half your continent being reduced to an uninhabited and somewhat radioactive wasteland in a war you 'won'.

The point I argue here is that America is allowed to make such interference only because its rivals choose to cooperate in the 'ritualised combat' prisoner's dilemma. They accept America's dominance in conventional warfare because total war would result in mutual destruction. In a world where multiple countries have the ability to destroy each other (or, if particularly desperate, all mammalian life on the planet), combat is necessarily ritualised or the species goes extinct.

You misunderstand the purpose of ritualised combat. In animals this isn't the play fighting that pups do to practice fighting. This is real, regime-changing, win-or-don't-get-laid-till-later, get-fewer-resources combat.

(ETA: I note that we are arguing here over how to apply an analogy. Since analogies are more useful as an explanatory tool and an intuition pump than as a tool for argument, it is usually unproductive to delve too deepl
0cousin_it
You seem to be living on an alternate Earth where America fights ritualized wars against countries that have nuclear weapons. In our world America attacks much weaker countries whose leaders have absolutely no reason to fight with claws retracted, because if they lose they get hanged like Saddam Hussein or die in prison like Milosevic. No other country does that today.
0Douglas_Knight
Countries aren't that coherent and certainly aren't their leaders. I don't think the analogy makes sense either way.
0wedrifid
It would seem that I need to retract the last sentence in my ETA.
0[anonymous]
It's funny. When describing the history of Hell, the author unwittingly explains the benefits of ritualized warfare while painting them as stupid. It seems he doesn't quite grasp how ritualized combat can be game-theoretically rational and why it occurs so often in the animal kingdom. Fighting to win is only rational when you're confident enough that you will win.
5cousin_it
Why did you link to TV Tropes instead of the thing itself?
0JohannesDahlstrom
A good question. I ended up writing a longer post than I expected; originally I thought I'd just utilize the TV Tropes summary/review by linking there. Also, the Tropes page provides links to both of the parts, and to both the original threads (with discussion) and the cleaned-up versions (story only). I'll edit the post to include direct links.
2[anonymous]
Direct link to story

The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random e... (read more)

6cousin_it
Ooh, an LW-themed horror story. My humble opinion: it's awesome! This phrase was genius: Moar please.
5pjeby
Wait, is that the whole story? 'cause if so, I really don't get it. Where's the rest of it? What happens next? Is Jerry afraid that his algorithm is a self-improving AI or something?
5apophenia
Apparently my story is insufficiently explicit. The gag here is that the AI is sentient, and has tricked Jerry into feeding it only reward numbers.
7Sniffnoy
I'm going to second the idea that that isn't clear at all.
0cousin_it
For onlookers: only Jerry can see the pattern on the pad that prompted him to try rewarding the AI.
0Blueberry
Huh? No, they're numbers written on a pad. Why should Jerry be the only one to see them? They don't change when someone else looks at them.
0cousin_it
Reread the story. Other people can see the numbers but don't notice the pattern. This happens all the time in real life, e.g. someone can see a face in the clouds but fail to explain to others how to see it.
4Oscar_Cunningham
How does 2212221 represent perfect numbers?
2apophenia
It's not meant to be realistic, but in this specific case: 6 = 110, 28=1110 in binary. Add one to each digit.
2Sniffnoy
Except 28 is 11100 in binary...
0apophenia
My mistake. I was reverse engineering. I still think that's it, just that the sequence hasn't finished printing.
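To spell out the intended encoding (using the corrected expansion; this is just my reading, not anything rigorous):

$$6 = 110_2 \;\to\; 221, \qquad 28 = 11100_2 \;\to\; 22211, \qquad 221\,\Vert\,22211 = 22122211.$$

2212221 is the first seven digits of that, with the final 1 still to come.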

Conway's Game of Life in HTML 5

http://sixfoottallrabbit.co.uk/gameoflife/

2RobinZ
Playing Conway's Life is a great exercise - I recommend trying it, to anyone who hasn't. Feel free to experiment with different starting configurations. One simple one which produces a wealth of interesting effects is the "r pentomino": Edit: Image link died - see Vladimir_Nesov's comment, below.
2Vladimir_Nesov
The link to the image died, here it is:
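In text form, going by the standard definition of the r pentomino (# marks a live cell, . a dead one):

    .##
    ##.
    .#.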

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argu... (read more)

0RobinZ
There is a third option of making a reasoned, rational meta-argument as to why the methods they were using to develop their position were wrong. I don't know how reliable it is, however.
4mstevens
I've tried very informal related experiments - often in dealing with people it's necessary to challenge their assumptions about the world.

a) People's assumptions often seem to be somewhat subconscious, so there's significant effort to extract the assumptions they're making.

b) These assumptions seem to be very core to people's thinking and they're extremely resistant to being challenged on them.

My guess is that trying to change people's methods of thinking would be even more difficult than this.

EDIT: The first version of this I posted talked more about challenging people's methods; I thought about this more and realised it was more about assumptions, but didn't correctly edit everything to fit that. Now corrected.

I wish there were an area of science that gave reductionist explanations of morality, that is, of the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all... (read more)

5michaelkeenan
I'm currently reading The Moral Animal by Robert Wright, because it was recommended by, among others, Eliezer. I'm summarizing the chapters online as I read them. The fifth chapter, noting that more human societies have been polygynous than have been monogamous, examines why monogamy is popular today; you might want to check it out. As for the wider question of reductionist explanations of morality, I'm a fan of the research of moral psychologist Jonathan Haidt (New York Times article, very readable paper).
1Wei Dai
You're right that there are already people like Robert Wright and Jonathan Haidt who are trying to answer these questions. I suppose I'm really wishing that the science is a few decades ahead of where it actually is.
0Alexandros
Thank you michael, I just read through your summary of Wright's book, an excellent read.
0michaelkeenan
Thanks! I'll PM you when I've summarized parts three and four.

The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?

3RobinZ
That sounds like a reasonable criterion.

This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distri

... (read more)
2cousin_it
steven0461 already posted this to the previous Open Thread and we had a nice little talk.
2TraditionalRationali
I wrote a backlink to here from OB. I am not yet expert enough to do an evaluation of this. I do think, however, that it is an important and interesting question that mjgeddes asks. As an active (although at a low level) rationalist, I think it is important to try, at least to some extent, to follow what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominant theory of how science proceeds seems to be the hypothetico-deductive model, somewhat informally described. So far, no formalised model of the scientific process seems to have been able to answer the serious criticism raised in the philosophy of science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it seems it still has to be developed further if it is to answer all serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I myself am not up to date with most of the development, but it seems to be an important topic to discuss here on Less Wrong, which seems to be quite Bayesianistically oriented.
2Cyan
ETA: Never mind. I got my crackpots confused. Original text was: mjgeddes was once publicly dissed by Eliezer Yudkowsky on OB (can't find the link now, but it was a pretty harsh display of contempt). Since then, he has often bashed Bayesian induction, presumably in an effort to undercut EY's world view and thereby hurt EY as badly as he himself was hurt.
0Douglas_Knight
You're probably not thinking of this On Geddes.
0Cyan
No, not that. Geddes made a comment on OB about eating a meal with EY during which he made some well-meaning remark about EY becoming more like Geddes as EY grows older, and noticing an expression of contempt (if memory serves) on EY's face. EY's reply on OB made it clear that he had zero esteem for Geddes.
4Morendil
Nope, that was Jef Allbright.
0Cyan
No wonder I couldn't find the link. Yeesh. One of these days I'll learn to notice when I'm confused.
0[anonymous]
I'm not expert enough to interpret. But I know Shalizi is skeptical of Bayesians and some of his blog posts seem so directly targeted at the LessWrong point of view that I almost suspect he's read this stuff. Getting in contact with him would be a coup.
0cupholder
(Fixed) link to earlier discussion of this paper in the last open thread. (Edit - that's what I get for posting in this thread without refreshing the page. cousin_it already linked it.)
0Matt_Simpson
Yesterday, I posted my thoughts in last month's thread on the article. I'm reproduci