You know what to do.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


Is anyone else here disturbed by the recent Harvard incident, in which Stephanie Grace's perfectly reasonable email, which merely expresses agnosticism over the possibility that the well-documented IQ differences between groups are partially genetic, was deemed worthy of harsh and inaccurate condemnation from the Harvard Law School dean?

I feel sorry for the girl, since she trusted the wrong people (the email was allegedly leaked by one of her girlfriends, who got into a dispute with her over a man). We need to be extra careful to self-censor any rationalist discussions about cows "everyone" agrees are holy. These are things I don't feel comfortable even discussing here, since they have ruined many careers and lives through relentless persecution. Even recanting doesn't help at the end of the day, since you are a Google away, and people who may not even understand the argument will hate you intensely. Scary.

I mean, surely everyone here agrees that the only way to discover truth is to allow all the hypotheses to stand on their own, without giving a few the privilege of suppressing their competition. Why is our society so insane that this regularly happens even concerning views that many re... (read more)


I'm a bit upset.

In my world, that's dinner-table conversation. If it's wrong, you argue with it. If it upsets you, you are more praiseworthy the more you control your anger. If your anti-racism is so fragile that it'll crumble if you don't shut students up -- if you think that is the best use of your efforts to help people, or to help the cause of equality -- then something has gone a little screwy in your mind.

The idea that students -- students! -- are at risk if they write about ideas in emails is damn frightening to me. I spent my childhood in a university town. This means that political correctness -- that is, not being rude on the basis of race or ethnicity -- is as deep in my bones as "please" and "thank you." I generally think it's a good thing to treat everyone with respect. But the other thing I got from my "university values" is that freedom to look for the truth is sacrosanct. And if it's tempting to shut someone up, take a few deep cleansing breaths and remember your Voltaire.

My own beef with those studies is that you cannot (to my knowledge) isolate the genetics of race from the experience of race. Every single black subject whose... (read more)

Since mixed racial background should make a difference in genes but makes only a small difference in the way our culture treats a person, if the IQ gap is the result of genetics we should find that those with mixed-race backgrounds have higher IQs than those of mostly or exclusively African descent. This has been approximated with skin-tone studies in the past; my recollection is that one study showed a slight correlation between lighter skin tone and IQ, and the other showed no correlation. There just hasn't been much research done, and I doubt there ever will be much (which is fine by me).
I'm still not confident because we're not, as Nancy mentioned, completely binary about race even in the US. What you'd really need to do is a comparative study between the US and somewhere like Brazil or Cuba, which had a different history regarding mixed race. (The US worked by the one-drop-of-blood rule; Spanish and Portuguese colonies had an elaborate caste system where more white blood meant more legal rights.) If it's mainly a cultural distinction, we ought to see a major difference between the two countries -- the light/dark gap should be larger in the former Spanish colony than it is in the US. If culture doesn't matter much, and the gap is purely genetic, it should be the same all around the world. The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race. Bias can go in that direction as well, and a subject like this demands extraordinary care in controlling for it. Coming out with a bad, politically motivated IQ study could be extremely harmful.
Frankly, I'm not sure why the research should be done at all.
Minnesota Trans-Racial Adoption Study suggests that a lot of the difference is cultural and/or that white parents are better able to protect their children from the effects of prejudice. I also have no idea what the practical difference of 4 IQ points might be. I don't know where you'd find people who were interested enough in racial differences in intelligence to do major studies on it, but who didn't have preconceived ideas.
Afaik, skin tone, hair texture, and facial features make a large difference in how African Americans treat each other. White people, in my experience, are apt to think of race in binary terms, but this might imply that skin tone affects how African Americans actually get treated.

Here is the leaked email by Stephanie Grace if anyone is interested.

… I just hate leaving things where I feel I misstated my position.

I absolutely do not rule out the possibility that African Americans are, on average, genetically predisposed to be less intelligent. I could also obviously be convinced that by controlling for the right variables, we would see that they are, in fact, as intelligent as white people under the same circumstances. The fact is, some things are genetic. African Americans tend to have darker skin. Irish people are more likely to have red hair. (Now on to the more controversial:)

Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders. This suggests to me that some part of intelligence is genetic, just like identical twins raised apart tend to have very similar IQs and just like I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria. I don’t think it is that controversial of an opinion to say I think it is at least possible that African Americans are less intelligent on a gene

... (read more)
One of the people criticizing the letter accused the letter writer of privileging the hypothesis - that it's only because of historical contingency (i.e. racism) that someone would decide to carve reality between "African-Americans" and "whites" instead of, say, "people with brown eyes" and "people with blue eyes". (She didn't use that exact phrase, but it's what she meant.)

Isn't nearly everything a social construct, though? We can divide people into two groups, those with university degrees and those without. People with them may tend to live longer or die earlier, they may earn more money or less, etc. We may also divide people into groups based on self-identification: do blondes really have more fun than brunettes, do hipsters really feel superior to non-hipsters, do religious people have lower IQs than self-identified atheists, etc.? Concepts like species, subspecies, and family are also constructs that are just about as arbitrary as race.

It doesn't really matter in the end. Regardless of how we carve up reality, we can then proceed to ask questions and get answers. Suppose in 1900 we decided to run a global test to see whether blue-eyed or brown-eyed people have higher IQs. Lo and behold, we see brown-eyed people have higher IQs. But in 2050 the reverse is true. What happened? The population with brown eyes was heterogeneous and its demographics changed! However, if we took skin cancer rates, we would still see that people with blue eyes have higher rates of skin cancer in both periods.

So why should we bother carving up reality on this racial m... (read more)
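The composition effect described above (subgroup means staying fixed while mixture weights shift) can be shown with a toy calculation; all the numbers here are invented for illustration, not taken from any study:

```python
# Hypothetical illustration: a "brown-eyed" group that is a mixture of
# two subpopulations with different mean test scores. The subgroup
# means never change; only the mixture weights do.
mean_x, mean_y = 95.0, 105.0   # made-up subgroup means

# 1900: the brown-eyed group is 80% subgroup X, 20% subgroup Y.
brown_1900 = 0.8 * mean_x + 0.2 * mean_y

# 2050: migration and demography flip the weights.
brown_2050 = 0.2 * mean_x + 0.8 * mean_y

blue = 100.0  # a homogeneous comparison group, constant in both periods

# The group-level gap reverses even though no individual subgroup changed.
print(brown_1900, brown_2050)  # 97.0 vs. 103.0
```

The brown-eyed average moves from below to above the blue-eyed average purely because the mixture changed, which is the sense in which a heterogeneous category can give unstable answers.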

This is a matter of much dispute and a lot of confusion. See here.
I wondered how humans are grouped, so I got some genes from around the world and did an eigenvalue analysis, and this is what I found: As you can see, humans are indeed clustered in subspecies.
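For what it's worth, here is a minimal sketch of the kind of eigen-analysis described, run on simulated genotype data rather than real genes; the population labels, allele frequencies, and divergence magnitude are all invented assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated genotype matrix: rows are individuals, columns are allele
# counts (0/1/2) at each SNP. Population B's allele frequencies are
# shifted slightly to mimic ancestral divergence (invented numbers).
n, p = 100, 500
freq_a = rng.uniform(0.2, 0.8, p)
freq_b = np.clip(freq_a + rng.normal(0, 0.1, p), 0.05, 0.95)
pop_a = rng.binomial(2, freq_a, (n, p))
pop_b = rng.binomial(2, freq_b, (n, p))
X = np.vstack([pop_a, pop_b]).astype(float)

# Eigen-analysis: center the columns and take the leading principal
# component. The SVD of the centered matrix yields the eigenvectors of
# the covariance matrix without forming the p-by-p matrix explicitly.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# Individuals from the two simulated populations separate along PC1.
print(pc1[:n].mean(), pc1[n:].mean())
```

Clusters appearing along the top components show that the populations are statistically distinguishable; whether that pattern warrants a label like "subspecies" is a separate taxonomic judgment, as the reply below notes.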
This doesn't demonstrate subspecies.
Thanks for the link, I'm reading it now. I just want to clear up that in that sentence I'm referring to species and subspecies in the biological sense, and to family in the ordinary everyday sense, not the category between order and genus.
Only if you accept that particular framing, I would have thought? If one chooses to justify affirmative action as reparations for past wrongs, 'what the data has to say about group differences' won't change your opinion of affirmative action. (ETA - Also.)
Of course one can do this. But then you get into the sticky issue of why we should group reparations based on race. Aren't the Catholic Irish entitled to reparations for their mistreatment as immigrant labour and the discrimination against them based on their religion, if the same is true of the Chinese? Aren't Native Americans a bit more entitled to reparations than, say, Indian immigrants? Also, why are African Americans descended from slaves not differentiated from those who migrated to the US a generation ago (after the civil rights era)? And how long should such reparations be paid? Indefinitely? I hope that from the above you can see why there would need to be a new debate on affirmative action if one reframes it.
I don't believe affirmative action is justified by 'past wrongs' - I used that as an example only because you mentioned it. (Personally, I believe it makes more sense to justify affirmative action as a device to offset present disadvantages.) I meant only to make the point that the statement 'Affirmative action's ethical status...depends on what the data has to say about group differences' is too broad, because there are justifications for affirmative action that do not hinge on the nature of IQ differences between blacks and whites.
I wrote affirmative action as it is currently framed. I consider that an important distinction. I never denied other frames were possible; I'm just saying the current support for affirmative action amongst groups that are harmed by it is loosely based on the notion that it is offsetting the unwarranted privilege (bias by employers, in other words) of the majority.
I think we both agree that 'what the data has to say about group differences' does not necessarily affect 'Affirmative action's ethical status' in general - only if one justifies it on grounds that make assumptions about the nature of IQ differences between groups. That just wasn't clear to me as of four days ago due to your phrasing.
I didn't say I agreed.
I never said you did. :) Would you however agree with the sentiment of my last paragraph? This thread of conversation is easily derailed since whether group differences exist isn't really its topic.
Yeah, I do...
Black people routinely outperform whites at elite running events, Asians already rule at math and science, so the hypothesis that there are genetic differences in performance between blacks and whites is already something one should consider likely.
IAWYC, but "Asians rule at math and science" seems to have a huge cultural basis, and it's at least no more obvious that it has a genetic component than that racial IQ gaps do.
To someone who knows that Asian math achievement has a fully or almost fully cultural basis, the Asian math link doesn't do work privileging the hypothesis that there might be a black/white genetic IQ difference. However, to someone who simply sees math classes full of people with yellow skin, and doesn't know why, it does do work privileging the hypothesis that there might be a black/white genetic IQ difference, rather than e.g. anti-black discrimination causing lower grades for blacks.

Of course, if you saw Asian-filled math classes, there must have already been something that made you assign some probability to the hypothesis that genes, not memes, were responsible. I don't think it has to be more obvious or clear-cut; it moves you evidentially by simply being another instance of the same thing. If the only racial-feature correlation in the world was that black people tested low on IQ, then the idea that genes rather than, say, discrimination were responsible would be something of a stretch. But when you see a whole collection of racial-feature correlations, the idea that genes are responsible to some extent becomes more plausible.

It is a reasonable AI/machine-learning heuristic to jump from covariation to some kind of causal link: if you see skin color (which is well known to indicate DNA-type) covary with ability at sport, ability at math, ability at IQ tests, criminality, etc., you place some weight on the hypothesis that DNA directly causally influences these things. Of course, you don't put all of your weight on that.
@Nick Tarleton: Can you explain how you know Asian math achievement is fully due to cultural bias? Haven't cross-racial adoption studies shown that adopted East Asian children do better than their white peers on IQ tests? I also remember hearing claims that Asians generally do better on the visuospatial component of IQ tests than whites. Edit: Originally addressed @Roko
Nick Tarleton said it, not me ;-) I have not seen evidence either way; my arguments given above are not dependent upon it being true or false.
I misread the first sentence. Thanks for the correction, I'll put a @Nick Tarleton in there then.
I think it would be fascinating if people with blue eyes were more or less intelligent, when controlling for the variables, than people with brown eyes. That said, I would expect a larger genetic variation when choosing between long-separated and isolated populations rather than eye colors.
I'm using eye color as an example here since CronoDAS mentioned it. Replace it with a particular gene, future time orientation, nose type, or whatever. If society makes quantifiable claims about a particular category into which we slice up reality (e.g. "Atheists are more likely to rape and murder!"), an individual should have the right to either test or demand proof for this quantifiable claim. Race is a pretty good proxy for which populations your ancestors came from. It's not perfect, since for example the Black race has the most genetic diversity, and gene flow has increased since the rise of civilization and especially globalisation. Knowing, however, whether for example most of your ancestors lived outside of Africa for the last 60,000 years, or that your group of ancestors diverged from the other guy's group of ancestors 40,000 years ago, is also relevant info. I stole this graph from Razib's site (Gene Expression) for a quick reference on what current biology has to say about ancestral populations.
Care to point them out?
Most escape me right now, but I do recall something that bothered me... She implicitly uses stereotypes of African American behaviour, and how they change over time, as an indicator of the actual change in violent behaviour. I'm sure it correlates somewhat, but considering how much stronger the changes in wider society were, and how much people's interests regarding what it was best to have other people believe about Black behaviour changed over time, I don't think you can base an argument on this either way.
Here's a bit more on the "privileging the hypothesis" bit, taken from here:

My "wrong-headed thinking" radar is picking up more bleeps from this than from the incriminating email:

  • "There are people with vested interests" is basically unverifiable; she's assuming anybody who disagrees is a fundamentally racist mutant
  • "People won't change their mind anyway, the discussion will be pointless" can be said of any controversial subject
  • The comparison to creationists can also be used to tar any opponent, there should be some version of Godwin's law for that
  • The argument that "one can always find a difference if one looks hard enough"
  • "No study can ever “prove” that no difference between two groups exists" seems to be beside the point - the question isn't whether any difference exists, but whether this specific difference exists, something that can be proved or disproved by experiment. (Well, more exactly, the topic would be what the cause of the difference is)
As the prior thread makes clear, distinguishing between genetic and environmental causes of intelligence is immensely complicated -- especially given the confusion over what intelligence is. However, it is well known that people don't like being told that they're statistically less likely to be intelligent. There are actually a fair number of studies showing that promoting stereotypes can reduce test scores; this is called "Stereotype Threat". While there is a recent meta-study which claims that the effect is an artifact of publication bias, that study had not been published when Grace wrote her email.

Grace (a) has no new data, and (b) has no new arguments. When she makes the claim that the search for evidence that the race-IQ correlation is not genetic has been "unsuccessful", she hurts people. But she does not, in return, contribute anything at all to the discourse. She cannot even claim the virtue of curiosity -- note that her open-mindedness extends to the idea that African Americans might be as smart as whites, but not to the idea that they might be smarter. Someone whose grasp of evidence is that weak should not be working in the law. Someone who callously performs an act which she knows or should know will cause harm to people, without any offsetting benefit, should probably be publicly shamed.
She was talking to friends at dinner. No harm there. The harm comes when months later one of the dinner companions forwards the e-mail to those who will likely be hurt.
It is the dinner companion who should be condemned, if this account of the matter is accurate.
Your standards for a dinner time discussion among law students are awfully high. Incidentally, the only poster here who has ever claimed to be a practicing attorney (afaik) was Brazil, from the prior thread.
So that's why I felt like he was cross-examining me in that thread. Mystery solved...
Well perhaps, fundamental attribution error and all that. Maybe he was just having a bad week or got defensive after we ganged up on him. (Edit: but his global warming blog had the same kind of tone and approach)
Good point.
I believe that "choose what to believe based on evidence" is not too high a standard. The law connection is that Grace is a law student, going to clerk for a judge. Since the comment was not about her correctness but about her treatment, it's reasonable to question whether the treatment was justified.
Isn't acknowledging what few others will acknowledge contributing to the discourse? A substantial portion of intellectuals refuse to even acknowledge the possibility that there is a correlation between race and intelligence (controlling for culture, etc). And they don't get publicly shamed for shoddy science. Yet Grace should get publicly shamed for pointing out that the evidence suggests such a correlation? It's not as if she claimed a high degree of certainty. Besides, the best way to overcome any disadvantages one race might have in intelligence is to understand why there are differences in the first place. Refusing to believe in a substantial portion of the hypothesis space for no good reason is a potentially huge detriment to this aim. Grace certainly made a social error, and for that perhaps she can be criticized, but it shouldn't be a social error to acknowledge different possibilities and the evidence for those possibilities in an intellectual* conversation. * I.e., truth seeking. The evidence/possibilities shouldn't be used in a condescending way, of course.
It gets a lot more complicated when those differences are significantly directly affected by publicly discussing them, as seems to be the case. This statement may very well be true, but it's also an applause light, and makes it sound like you think reality is obligated to be set up so that truthseeking wins.
Fair enough, though I'll point out that the discussion was over dinner/email, not in an extremely public forum where many people will be exposed (though there is still the possibility that friends tell friends who tell friends, etc.). Yes, I see that now. How about this: it's unclear that the best strategy for combating any racial disadvantages is not talking about them, rather than determining the cause and attempting to do something proactive about it.

PS: Also, why does the Dean equate intelligence with genetic superiority, and implicitly even with worth as a person?

See Michael Vassar's discussion of this phenomenon. Also, I think that people discussing statements they see as dangerous often implicitly (and unconsciously) adopt the frames that make those statements dangerous, which they (correctly) believe many people unreflectively hold and can't easily be talked out of, and treat those frames as simple reality, in order to more simply and credibly call the statement and the person who made it dangerous and Bad.

The Harvard incident is business as usual:
I think there's something to be said for not posting opinions such that 1) LW is likely to agree with the opinion, and 2) sites perceived as agreeing with the opinion are likely to be the target of hate campaigns.

This is the best exposition I have seen so far of why I believe strongly that you are very wrong.

On a Bus in Kiev

I remember very little about my childhood in the Soviet Union; I was only seven when I left. But one memory I have is being on a bus with one of my parents, and asking something about a conversation we had had at home, in which Stalin and possibly Lenin were mentioned as examples of dictators. My parent took me off the bus at the next stop, even though it wasn’t the place we were originally going.

Please read the whole thing and remember that this is where the road inevitably leads.

Yes, self-censorship is Prisoner's Dilemma defection, but unilaterally cooperating has costs (in terms of LW's nominal purpose) which may outweigh that (and which may in turn be outweighed by considerations having nothing to do with this particular PD).

Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".

I don't, which is why I posted it.
It isn't inevitable. There's a trivial demonstration that censorship and self-censorship don't necessarily form a collective downward spiral: there are societies that at one point had much heavier censorship and now don't. That's not easily made consistent with your claim. Censorship is bad. Self-censorship is very bad. Especially on a website devoted to improving rationality, we shouldn't censor what we have to say. But the notion that small bits of self-censorship will eventually lead to believing that 2+2=5 if the Party says so is simply not called for. This is a classic example where a strong argument can be made for a claim, but the claim is undermined by the use of a very weak argument in its place. (Incidentally, generalization from fictional evidence also comes up here.)
I am claiming that this road leads to totalitarianism. That is not the same as claiming that the road is one-way, with no exits and no U-turns. If I thought otherwise, there would be little point in my expressing my concerns. As long as society keeps its foot on the pedal and fails to realize it is heading in the wrong direction, however, that is where it will end up. Inevitably. This is not generalizing from fictional evidence. It is using a literary quote to express an idea more eloquently than I can myself. Since the book can be seen as a parable illustrating the same concerns I am emphasizing, I believe it is quite appropriate to quote from it. I am not using the fictional story as proof of my claim; I am quoting it to elaborate on what it is I am claiming.

I'm sympathetic to this as a general principle, but it's not clear to me that LW doesn't have specific battles to fight that are more important than the general principle.

Perhaps there should be a "secret underground members only" section where we can discuss these things?
Logic would suggest that such a section would be secret, if it existed. It would be simple enough to send private messages to trusted members alerting them to the existence of a private invitation-only forum on another website where such discussions could be held. Naturally, I would say none of this if I knew of such a forum, or had any intention of creating such. And I would not appreciate any messages informing me of the existence of such a forum - if for no other reason than that I am the worst keeper of secrets I have ever known.
The first rule of rationality club is: you do not talk about rationality club.
There could still be a lower level of 'secrecy' where it won't show up on Google and you can't actually read it unless you have the minimum karma, but its existence is acknowledged. It's not where you'd plan to take over the world, but I'd hope it'd be sufficient for talking about race/intelligence issues.
I share your concern. Literal hate campaigns seem unlikely to me, but such opinions probably do repulse some people, and make it considerably easier for us to lose credibility in some circles, that we might (or might not) care about. On the other hand, we pretty strongly want rationalists to be able to discuss, and if necessary slay, sacred cows, for which purpose leading by example might be really valuable.
Undiscriminating skepticism strikes again: here's the thread on the very topic of genetic IQ differences.

Oh good. Make it convenient for the guys running background searches.

Thanks for the link! I'm new here and really appreciate stuff to read up on, since it's mostly new to me. :)
I agree with what you've written, with particular emphasis on the problem of privacy on the Internet (and off, for that matter). Given that I don't even know who Stephanie Grace is, though, I think I don't care.
I think when arguing about really controversial things that don't fit your tribe's beliefs via email or other online means, it's best to only use them to send sources and citations. Avoid comments, any comments whatsoever -- perhaps even quotes, or, Galileo forbid, bolding anything but the title. Encourage the other people involved in the debate to do the same. Keep any controversial conclusions gleaned from the data, or endorsements of any paper, out of the electronic record. Then, when you are in private, tell them: did you manage to read the Someguyson study I sent you in email #6? When you've exhausted the mailed links, switch to gossip or the weather. If the mail is leaked and they don't have you on record saying anything forbidden except mailing around sources, how exactly will they tar and feather you? I can say this mode of conversation is actually quite stimulating, since I've engaged in it before, though I've only tested it on noncontroversial and complex subjects. It lets you learn what starting points the other person is coming from, and gives you time to cool off in heated arguments. It does, however, drag on for weeks, so it's not really appropriate with strangers.
I'm more directly disturbed by the bias present in your exposition: "perfectly reasonable", "merely expresses agnosticism", "well documented", "harsh and inaccurate". Starting off a discussion with assorted applause lights and boo lights strikes me as unlikely to lead to much insight. What would be likely to lead to useful insight? Making use of the tools LessWrong's mission is to introduce us to, such as the applications of Bayesian reasoning. "Intelligence has a genetic component" strikes me as a causal statement. If it is, we ought to be able to represent it formally as such, tabooing the terms that give rise to cognitive muddles, until we can tell precisely what kind of data would advance our knowledge on that topic. I've only just cracked open Pearl's Causality, and started playing with the math, so am still very much an apprentice at such things. (I have my own reasons to be fooling with that math, which are not related to the race-IQ discussion.) But it has already convinced me that probability and causality are deep topics which it's very easy to draw mistaken conclusions about if you rely solely on a layman's intuition. For instance, "the well documented IQ differences between groups" are purely probabilistic data, which tell us very little about causal pathways generating the data, until and unless we have either controlled experiments, or further data sets which do discriminate between the competing causal models (only very grossly distinguished into "nature" and "nurture"). I don't know if the email you quoted (thanks for that, BTW, it's a treat to have access to a primary source without needing to chase it down) is racist, but it does sound very ignorant to me. It makes unwarranted inferential leaps, e.g. from "skin and hair color are definitely genetic" to "some part of intelligence is genetic", omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other. It comes ac

Dinnertime conversations between regular, even educated people do not contain probabilistic causal analyses. In the email Grace claimed something was a live possibility and gave some reasons why. Her argument was not of the quality we expect comments to have here at Less Wrong. And frankly, she does sound kind of annoying.

But that all strikes me as irrelevant compared to being made into a news story and attacked on all sides, by her dean, her classmates and dozens of anonymous bloggers. By the standards of normal, loose social conversation she did nothing deserving of this reaction.

I feel a chilling effect and I've only ever argued against the genetic hypothesis. Frankly, you should too since in your comment you quite clearly imply that you don't know for sure there is no genetic component. My take from the reaction to the email is that the only socially acceptable response to encountering the hypothesis is to shout "RACIST! RACIST!" at the top of your lungs. If you think we'd be spared because we're more deliberate and careful when considering the hypothesis you're kidding yourself.

Sure. What I do find disturbing is how, knowing what she was doing (and who she was sending it to), the "friend" who leaked that email went ahead and did it anyway. That's positively Machiavellian, especially six months after the fact.

However, I do not feel a need to censor myself when discussing the race-IQ hypothesis. If intelligence has a genetic component, I want to see the evidence and understand how the evidence rules out alternatives. I would feel comfortable laying out the case for and against in an argument map, more or less as I feel comfortable laying out my current state of uncertainty regarding cryonics in the same format. Neither do I feel a need to shout at the top of my lungs, but it does seem clear to me that racism was a strong enough factor in human civilization that it is necessary, for the time being, to systematically compensate, even at the risk of over-compensating.

"I absolutely do not rule out the possibility [of X]" can be a less than open-minded, even-handed stance, depending on what X you declare it about. (Consider "I absolutely do not rule out the possibility that I will wake up tomorrow with my left arm replaced by a blue tentacle.") Saying this and mistaking it for an "agnostic" stance is kidding oneself.
Since people are discussing group differences anyway, I would just like people to be a bit clearer in their phrasing. Intelligence does have a genetic component; I hope no one argues that the cognitive differences between the average chimpanzee and rhesus monkey are a result of nurture. The question is whether there is any variation in the genetic component in humans. Studies have shown a high heritability for IQ; this doesn't necessarily mean much of it is genetic, but it does seem a strong position to take, especially considering results from twin studies. A good alternative explanation I can think of, one that could be considered equivalent in explanatory power, would be differences in prenatal environment beyond those controlled for in previous studies (which could get sticky, since such differences may also show group genetic variation! For example, the average length of pregnancy, and the risks associated with post-term complications, do vary slightly between races).

The question discussed here, however, is whether there are any meaningful differences between human groups in their genetic predispositions towards mental faculties. We know quite a bit from genetic analysis about where people with certain markers have spread and which groups have been isolated. Therefore the real question we face is twofold:

1. Just how evolutionarily recent are abstract thinking and the other mental tricks the IQ test measures? The late advent of behavioral modernity compared with the early evidence of anatomically near-modern humans could be considered, for example. Some claim it was an evolutionary change following the well-documented recent bottleneck of the human species; others say the advent of modern behaviour was a radical cultural adaptation to an abrupt environmental change, or just part of a long, slow process of rising population density and material-culture complexity we haven't yet spotted. Considering how sketchy the archeological record is, we can't be surprised at all if it turns out
If genetic differences in intelligence could not be relevant to reproductive success within a single generation it is difficult to see how human intelligence could have evolved.
Group selection may help you imagine more.
Isn't group selection largely discredited?
Let's be careful here. The letter does not assert baldly that "some part of intelligence is genetic". Rather, the letter asserts that some evidence "suggests to me that some part of intelligence is genetic". Furthermore, that particular inferential leap does not begin with the observation that "skin and hair color are definitely genetic". Rather, the inferential leap begins with the claim that "Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders." Therefore, at least with regards to that particular inference, it is not fair to criticize the author for "omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other." [ETA: Of course, the inference that the author did make is itself open to criticism, just not the criticism that you made.] I say all this as someone who considers Occam to be pretty firmly on the side of nongenetic explanations for the racial IQ gaps. But no progress in these kinds of discussions is possible without assiduous effort to avoid misrepresenting the other side's reasoning.

He who controls the karma controls the world.

Less Wrong dystopian speculative fiction: An excerpt.

JulXutil sat, legs crossed in the lotus position, at the center of the Less Wrong hedonist-utilitarian subreddit. Above him, in a foot-long green oval, was his karma total: 230450036. The subreddit was a giant room with a wooden floor and rice paper walls. In the middle the floor was raised, and then raised again to form a shallow step pyramid with bamboo staircases linking the levels. The subreddit was well lit. Soft light emanated from the rice paper walls as if they were lit from behind and Japanese lanterns hung from the ceiling.

Foot soldiers, users JerelYu and Maxine stood at the top of each staircase to deal with the newbies who wanted to bother the world famous JulXutil and to spot and downvote trolls before they did much damage. They also kept their eyes out for members of rival factions because while /lw/hedutil was officially public, every Less Wrong user knew this subreddit was Wireheader territory and had been since shortly after Lewis had published his famous Impossibility Proof for Friendliness. The stitched image of an envelope on JulXutil’s right sleeve turned red. H... (read more)

This is golden. I demand continuation.
It's a real question where the karma system leads. In the long run, we might see quite unexpected and unwanted results. But there is probably no other way to find out than to wait and see where it actually goes. I guess a kind of conformism will prevail, if it hasn't already.
The karma=wireheading angle is wonderful, and I think new.

Ask A Rationalist--choosing a cryonics provider:

I'm sold on the concept. We live in a world beyond the reach of god; if I want to experience anything beyond my allotted threescore and ten, I need a friendly singularity before my metabolic processes cease; or information-theoretic preservation from that cessation onward.

But when one gets down to brass tacks, the situation becomes murkier. Alcor whole body suspension is nowhere near as cheap as numbers that get thrown around in discussions on cryonics--if you want to be prepared for senescence as well as accidents, a 20 year payoff on whole life insurance and Alcor dues runs near $200/month; painful but not impossible for me.

The other primary option, Cryonics Institute, is 1/5th the price; but the future availability--even at additional cost--of timely suspension is called into question by their own site.

Alcor shares case reports, but no numbers for average time between death and deep freeze, which seems to stymie any easy comparison on effectiveness. I have little experience reading balance sheets, but both companies seem reasonably stable. What's a prospective immortal on a budget to do?

Why not save some money and lose what's below the neck?
That saves about half the life insurance cost while leaving the Alcor dues the same, dropping the cost from ~$200/month to ~$140/month. This doesn't make it a clearly preferable option to me.
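For what it's worth, those two monthly figures are mutually consistent under one simple assumed split (the split is my back-of-the-envelope guess, not Alcor's published breakdown): a fixed dues component plus an insurance component that halves for neuro-only.

```python
total_whole_body = 200.0  # ~$/month for whole-body, from the comment above
total_neuro = 140.0       # ~$/month for neuro-only

# Assume: dues + insurance = 200 and dues + insurance/2 = 140.
# Subtracting the two equations gives insurance/2 = 60.
insurance = 2 * (total_whole_body - total_neuro)  # implied insurance share
dues = total_whole_body - insurance               # implied fixed dues

print(insurance, dues)  # 120.0 80.0
```

So the quoted numbers imply roughly $120/month of insurance against $80/month of dues, if that model holds.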
If I recall correctly, preservation of the brain is supposed to be easier and on average of better quality with the decapitated option (I know I'm using the uncool term) than with the whole-body option.
For what it's worth, I've been finding Alcor to be bureaucratic and very slow to respond. I've been trying to sign up (from Australia) for several months and am not over the line yet.
I second this query. I've been meaning to post something similar.

I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to amazon, going around in LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.

I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.

Does this sound viable?

Yes, under two conditions: 1. It is announced in advance and properly implemented. 2. It does not delete other affiliate codes if links are posted with affiliate codes. Breaking both these rules is one of the many things which Livejournal has done wrong in the last few years, which is why I mention them.
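A minimal sketch of what such a link-rewriting script might look like, honoring condition 2 above (never clobbering an existing code). The tag value `lesswrong-20` is a made-up placeholder, and a real deployment would run over the site's rendered HTML rather than bare URLs:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def add_affiliate_tag(url, tag="lesswrong-20"):
    """Add an Amazon affiliate tag to a link, but only if the link
    points at Amazon and carries no affiliate tag already."""
    parts = urlparse(url)
    if "amazon." not in parts.netloc:
        return url                      # leave non-Amazon links alone
    query = dict(parse_qsl(parts.query))
    if "tag" in query:
        return url                      # condition 2: keep existing codes
    query["tag"] = tag
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_affiliate_tag("http://www.amazon.com/dp/0385504209"))
# -> http://www.amazon.com/dp/0385504209?tag=lesswrong-20
```

Something along these lines could run either server-side when comments are rendered or as a one-off pass over stored posts.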

I have a (short) essay, 'Drug heuristics' in which I take a crack at combining Bostrom's evolutionary heuristics and nootropics - both topics I consider to be quite LW-germane but underdiscussed.

I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.

Interesting essay.
I'd like to see this pursued further. In particular, I'd like to hear your thoughts on modafinil. JustinShovelain's post on caffeine was similar, and upvoted.
Modafinil is now done:
As of the time I reply there is nothing about modafinil on that page.
I use aggressive caching settings on the site, since most of the content doesn't change very often. Force-refresh, and you'll see it.
Anything besides modafinil? In part I'm stuck because I don't know what else to discuss; Justin's post was similarly short, but it was mainly made of links.
I'd like to see it pursued further. Where does alcohol fit in your schema?
I don't know terribly much about alcohol, so take this with a grain of salt. I think I would probably put it as an out-of-date adaptation; my understanding is that alcoholic beverages would have been extremely energy-rich, and also hard to come by, and so are in the same category as sugars and fats - they are now bad for us though they used to be good. ('Superstimulus', I think, is the term.) Given that, it's more harmful than helpful and to be avoided. I'll admit that the issue of resveratrol confuses me. But assuming that it has any beneficial effect in humans, AFAIK one should be able to get it just by drinking grape juice - resveratrol is not created in the fermentation process.
Fermented beverages also had the advantage of usually being free of dangerous bacteria; ethanol is an antiseptic that kills the bacteria that cause most water-borne diseases. (And water-borne disease used to be very common.)
That's a good second way it's an out of date optimization.
You might find this paper interesting. In a sentence, it suggests that people drink to signal trustworthiness.

Today, while I was attending an honors banquet, a girl in my class and her boyfriend were arguing over whether or not black was a color. When she had somewhat convinced him that it wasn't (I say somewhat because the argument was more-or-less ending and he didn't have a rebuttal), I asked "Wait, are you saying I can't paint with black paint?" She conceded that, of course, black paint can be used to paint with, but that black wasn't technically a color. At which point I explained that we were likely using two different definitions of color, and that we should explain what we mean. I gave two definitions: 1] The various shades which the human eye sees and the brain processes. 2] The specific wavelengths of light that a human eye can pick up. The boyfriend and I were using definition 1, whereas she was using definition 2. And with that cleared up, the debate ended.

Note: Both definitions aren't word for word, but somewhat close. I was simply making the distinction between the wavelength itself and the process of seeing something and placing it in a certain color category.

One could argue that definition 2 is Just Wrong, because it implies that purple isn't a color (purple doesn't have a wavelength, it is non-spectral).
By her definition, the yellow color you see on a computer screen is not a color at all, since it's made up of two wavelengths of light which happen to stimulate the red and green cone cells in your retina in approximately the same way that yellow light would.
This will replace Eliezer's tree falling in a forest sound as my go-to example of how an algorithm feels on the inside about wrong questions.
Huzzah! That's all too common a problem... sometimes the main problem...

I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.

Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.

A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.

A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.

I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"

Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.

The most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.

I've also read that people with bipolar disorder are more likely to commit suicide as their depression lifts. But antidepressant effects can be very complicated. I know someone who says one med made her really really want to sleep with her feet where her head normally went. I once reacted to an antidepressant by spending three days cycling through the thoughts, "I should cut off a finger" (I explained to myself why that was a bad idea) "I should cut off a toe" (ditto) "I should cut all the flesh from my ribs" (explain myself out of it again), then back to the start. The akrasia-lifting explanation certainly seems plausible to me (although "mood" may not be the other relevant variable--it may be worldview and plans; I've never attempted suicide, but certainly when I've self-harmed or sabotaged my own life it's often been on "autopilot", carrying out something I've been thinking about a lot, not directly related to mood--mood and beliefs are related, but I've noticed a lag between one changing and the other changing to catch up to it; someone might no longer be severely depressed but still believe that killing themself is a good course of action). Still, I would also believe an explanation that certain meds cause suicidal impulses in some people, just as they can cause other weird impulses.
My antidepressant gave me a sweet tooth.
Interesting. Are you sure that is going on when antidepressants have paradoxical effects?
Not absolutely certain. It's an impression I've picked up from mass media accounts, and it seems reasonable to me. It would be good to have both more science and more personal accounts. Thanks for asking.
My mom is a psychiatrist, and she's given an explanation basically equivalent to that one - that people with very severe depression don't have the "energy" to do anything at all, including taking action to kill themselves, and that when they start taking medication, they get their energy back and are able to act on their plans.
Good observations. Sometimes I procrastinate for weeks about doing something, generally non-urgent, only to have something happen that would have made the doing of it unnecessary. (For instance, I procrastinate about getting train tickets for a short trip to visit a client, and the day before the visit is due the client rings me to call it off.) The useful notion here is that it generally pays to defer action or decision until "the last responsible moment"; it is the consequence of applying the theory of options valuation, specifically real options, to everyday decisions. A top-level post about this would probably be relevant to the LW readership, as real options are a non-trivial instance of a procedure for decision under uncertainty. I'm not entirely sure I'm qualified to write it, but if no one else steps up I'll volunteer to do the research and write it up.
I work in finance (trading) and go through my daily life quantifying everything in terms of EV. I would just caution that, yes, procrastinating provides you with some real option value as you mentioned, but you need to weigh this against the probability of you exercising that option value as well as the other obvious costs of delaying the task. Certain tasks are inherently valuable to delay as long as possible and can be identified as such beforehand. As an example, work-related emails that require me to make a decision or choice I put off as long as is politely possible, in case new information comes in which would influence my decision. On the other hand, certain tasks can be identified as possessing little or no option value when weighted with the appropriate probabilities. What is the probability that delaying the payment of your cable bill will have value to you? Perhaps if you experience an emergency cash crunch. Or the off chance that your cable stops working and you decide to try to withhold payment (not that this will necessarily do you any good).
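To make the weighing concrete, here is a toy expected-value comparison in the spirit of the two examples above; all of the probabilities and dollar figures are invented purely for illustration:

```python
def value_of_waiting(p_new_info, gain_if_info, cost_of_delay):
    """Toy EV of deferring a task: the chance that new information
    arrives and improves the decision, times the value of the improved
    decision, minus the cost of delaying."""
    return p_new_info * gain_if_info - cost_of_delay

# Work email: a real chance (30%) that new info changes a $50-stakes
# choice, against a small politeness cost of waiting -> positive EV.
email_ev = value_of_waiting(0.30, 50.0, 2.0)

# Cable bill: almost no chance that waiting helps, and a late fee
# looms -> negative EV, so just pay it now.
bill_ev = value_of_waiting(0.01, 20.0, 5.0)

print(email_ev, bill_ev)  # 13.0 -4.8
```

The point is only that the same delay habit can be positive-EV for one task and negative-EV for another, depending on those three estimates.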
I'd be interested in reading it.
Continuing on the "last responsible moment" comment from one of the other responders - would it not be helpful to consider the putting off of a task until the last moment as an attempt to gather the largest amount of information pursuant to the task without incurring any penalty? Having poor focus and attention span, I use an online to-do list for work and home life where I list every task as soon as I think of it, whether it is to be done within the next hour or year. The list soon mounts up, occasionally causing me anxiety, and I regularly have cause to carry a task over to the next day for weeks at a time - but what I have found is that a large number of tasks get removed because a change makes the task no longer necessary, and a small proportion get notes added to them while they stay on the list, so that by the time the task gets actioned it has been enhanced by the extra information. By having everything captured I can be sure no task will be lost, but by procrastinating I can ensure the highest level of efficiency in the tasks that I do eventually perform. Thoughts?
I suspect it’s just a figure of speech, but can you elaborate on what you meant by “evil” above?

Question: Which strongly held opinion did you change in a notable way, since learning more about rationality/thinking/biases?


Theism. Couldn't keep it. In the end, it wasn't so much that the evidence was good -- it had always been good -- as that I lost the conviction that "holding out" or "staying strong" against atheism was a virtue.

Standard liberal politics, of the sort that involved designing a utopia and giving it to people who didn't want it. I had to learn, by hearing stories, some of them terrible, that you have no choice but to respect and listen to other people, if you want to avoid hurting them in ways you really don't want to hurt them.

I just listened to UC Berkeley's "Physics for Future Presidents" course on iTunes U (highly recommended) and I thought, "Surely no one can take theism seriously after experiencing what it's like to have real knowledge about the universe."
Disagreed. My current opinion is that you can be a theist and combine that with pretty much any other knowledge. Eliezer points to Robert Aumann as an example. For someone who has theism hardcoded into their brain and treats it as a different kind of knowledge than physics, there can be virtually no visible difference in everyday life from a normal atheist. I think the problem is not so much the theism, but that people use it to base decisions on.
oh it's true. I know deeply religious scientists. Some of them are great scientists. Let's not get unduly snide about this.
There seems to be a common thought-pattern among intelligent theists. When they learn a lot about the physics of the Universe, they don't think "I should only be satisfied with beliefs in things that I understand in this deep way." Instead, they think, "As smart as I am, I have only this dim understanding of the universe. Imagine how smart I would have to be to create it! Truly, God is wonderful beyond comprehension."
"Wonderful" I could believe, but I don't think John Horton Conway is actually wonderful beyond comprehension. To make an analogy.
If Conway used the Turing-completeness of Life to create within it a universe like our own, he would be wonderful beyond my comprehension :).
If Flatland would do, he could do it 'naturally' given enough scale and time. (:
could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.
Personal stories, from a friend, so no, there's no place to link them. Well-meaning liberals have either hurt, or failed to help, him and people close to him.
Communism is one utopia that ended in disaster, see Rummel's Death by Government
I recommend reading Blank Slate to get a good perspective on the Utopian issues; the examples (I was born in USSR) are trivial to come by, but the book will give you a mental framework to deal with the issues.
I'm no longer a propertarian/Lockean/natural rights libertarian. Learning about rationality essentially made me feel comfortable letting go of a position that I honestly didn't have a good argument for (and I knew it). The ev-psych stuff scared the living hell out of me (and the libertarianism* apparently). *At least that sort of libertarianism
I stopped being a theist a few years ago. That was due more to what Less Wrong people would call "traditional rationalism" than the sort often advocated here (I actually identify as closer to a traditionalist rationalist than a strict Bayesianism but I suspect that the level of disagreement is smaller than Eliezer makes it out to be). And part of this was certainly also emotional reactions to having the theodicy problem thrown in my face rather than direct logic. One major update that occurred when I first took intro psych was realizing how profoundly irrational the default human thinking processes were. Before then, my general attitude was very close to humans as the rational animal. I'm not sure how relevant that is, since that's saying something like "learning about biases taught me that we are biased." I don't know if that's very helpful. My political views have updated a lot on a variety of different issues. But I suspect that some of those are due to spending time with people who have those views rather than actually getting relevant evidence. I've updated on how dangerous extreme theism is. It may sound strange, but this didn't arise as much out of things like terrorism, but rather becoming more aware of how many strongly held beliefs about the nature of the world there were out there that were motivated by religion and utterly at odds with reality. This was not about evolution which even in my religious phases I understood and was annoyed at by the failure of religious compatriots to understand. Rather this has included geocentrism among the Abrahamic religions, flat-Earthism among some Islamic extremists, spontaneous generation among ultra-Orthodox Jews (no really. Not a joke. And not even microscopic spontaneous generation but spontaneous generation of mice), belief among some ultra-Orthodox Jews that the kidneys are the source of moral guidance (which they use as an argument against kidney transplants). My three most recent major updates (last six mon
It is! I am repeatedly surprised by a) basic-level insights that are not widespread, b) insights that other people consider basic that I do not have, and c) applications of an idea I understand in an area I did not think of applying it to. To list a few: People are biased => I am biased! Change is possible. Understanding is possible. I am a brain in a vat. Real life rocks :-) Even after learning about cached thoughts, happy death spirals and many others, I still managed to fall into the traps of those. So I consider it helpful to see where someone applies biases. That statement in itself looks like a warning sign.
Yeah, being aware that there are biases at play doesn't always mean I'm at all sure I'm able to correct for all of them. The problem is made more complicated by the fact that for each of the views in questions, I can point to new information leading to the updates. But I don't know if in general that's the actual cause of the updates.
I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place. Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.
To answer my own question: * changed political and economic views (similar to Matt). * changed views on the effects of nutrition and activity on health (including the actions that follow from that) * changed view on the dangers of GMOs (yet again) * I became aware of areas where I am very ignorant of opposing arguments, and try to counterbalance * I finally understand the criticisms of the skeptics movement * I repeatedly underestimated the amount of ignorance in the world, and got shocked when discovering that And on the funnier side: last week I found out that I had learned a minor physics fact wrong. That was not a strongly held opinion, just a fact I never looked up again till now. For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.
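The ~9% figure follows from the standard handbook densities of water and ice near 0 °C; freezing conserves mass, so the volume ratio is just the inverse of the density ratio:

```python
rho_water = 0.99984  # g/cm^3, liquid water at 0 degrees C
rho_ice = 0.9167     # g/cm^3, ice at 0 degrees C

# Mass is conserved on freezing: V_ice / V_water = rho_water / rho_ice.
expansion = rho_water / rho_ice - 1
print(f"volume increase on freezing: {expansion:.1%}")  # ~9.1%
```

A 10x expansion would imply ice with a density of about 0.1 g/cm³, which would float almost entirely above the waterline, contrary to everyday icebergs and ice cubes.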
As a result of reading this post, I uninstalled a 10-year old habit -- drinking a cup of strong coffee every morning. Now I drink coffee only when I feel that I need a short-term boost.
Coffee and concentration experiment. Article about self-measurement. This doesn't mean you don't get a boost, but it might be worth checking.
My experience is quite similar to what is described in the first article -- no coffee leads to better concentration for me. The caffeine 'boost' I was talking about reduces my concentration but makes me more inclined to action -- I found it useful for breaking through procrastination periods. The effect of Red Bull on me is similar but more pronounced. The effect seems to be physical, but I don't rule out placebo (and frankly, it's fine with me either way.)
Have you never made ice cubes?
Very interesting. If you find time, could you elaborate on these. I am particularly interested in hearing more on the criticism of the skeptics movement.
I think it was mentioned here before. Skeptics do a decent job of raising the sanity waterline, and that's all nice and important. I watched all of Randi's YouTube videos, Penn & Teller's Bullshit!, Dawkins, Derren Brown and whatever else looked interesting. But as some keep pointing out: Randi is not a scientist! He talks about stuff that should be obvious to elementary school kids. P&T get stuff wrong on their show (I have identified 2 topics so far). And they use a style of edutainment that might make you think a bit, or move in-groups. But you don't learn more about reasoning from it. I am not sure, but you might be able to compare it to any standard theist shout-out show. (To be fair, they generally do a decent job of representing opposing views. But they might have learned some tricks from a certain Michael Moore.) All those skeptics push saner beliefs into the public and make it cool to have those in their respective subcultures. As a fellow rationalist I sometimes feel smug listening to them. But telling me stuff I already know is not too effective, and I don't have any indicators of whether they reach a target audience where an opinion shift is really mandated. And: skeptics are not particularly rational. (I don't think they are even into the concept of learning more about thought processes or how science works.) When you spend your time battling idiots you might not notice when you are wrong yourself. Find a skeptic who will listen to your criticism of the traditional scientific method, and/or about how awesome Bayesianism is :-) On a personal note: there is a distinct line of highly accidental circumstances that led me to become involved in this particular group here. Each step involved people I learned from, who knew more than my general surroundings. But each of those people got stuck at their personal level of thought (and field of interest respectively), and didn't follow me any further. Becoming an atheist, and reading skeptics' stuff, was one of the steps. But I am v
Not to hit you over the head with this, as I've noticed before how common it is that someone learns a random fact or two much later than they should. But, you never, say, made frozen popsicles? I mean a whole lot of havoc would get wreaked... imagine frozen pipes... water in cracks in the road... Related to this subject, my sister was 14 before someone corrected her belief that "North" on a map corresponded to the sky above her head (which if you think about it is the intuitive interpretation when maps are placed vertically on classroom walls).
Both numbers serve as an explanation for why pipes crack. I never did any visualization of it. (It's not that uncommon for people to have inconsistent beliefs.) IIRC I read that fact in the Mickey Mouse magazine at the appropriate age, but never tried it myself. Since reading about memory bias I am deeply afraid of having false or corrupted memories, while also wanting to experience such an effect. Finding minor mistakes in my knowledge of physics is similarly disturbing. The content of the example itself doesn't really change anything about my life. But I am left wondering how many other mistakes I carry around.
Do you have any scientific/engineering training? A habit I note that people with such training tend to develop is to do a little mental arithmetic when confronted with some new numerical 'fact' and do some basic sanity checking against their existing beliefs. I often find when I am reading a news story that I notice some inconsistency in the numbers presented (something as simple as percentages for supposedly mutually exclusive things adding up to more than 100 for example) that I am amazed slipped past both the writer and the editor. The fact that most journalists lack any real scientific or engineering training is probably the reason for this. This ice 'fact' should have been immediately obviously wrong to someone applying this habit. It's perfectly understandable if this is just one of those things you picked up as a child and never had any cause to examine but it is indicative of a common failing and I would suggest that as a rule developing this 'engineering mindset' is valuable for any aspiring rationalist regardless of whether their job involves the routine application of such habits.
I am in the final stages of becoming a computer scientist, so: 'no'. In school I had physics as one of my main subjects. I don't think I saw any actual science training anywhere in my education. But that might be due to my own ignorance. I still do not do math as often as I should, but sometimes. What might have contributed to sustaining the mistake is my very early knowledge of the mistakes in intuitive judging of scaling volumes. I should really milk this mistake for systematic causes....
Unfortunately this is not something that is generally taught well in high school science classes even though it would be of much more practical use to most students than what they are actually being taught. It is conveyed better in university science courses that have a strong experimental component and in engineering courses.
It might not be too surprising that I totally agree. In CS we don't do that much experimentation. And I have some beef with the lack of teaching good ways to actually make software. I don't think the words 'version control' were ever uttered anywhere.
Additional side note: I am deeply troubled by the fact that all of the important things in my life happened by pure accident. I am generally happy with the development of the ideas I hold true and dear so far, but wouldn't have minded some shortcuts. There is no clear-cut path that has me ending up in the place I would want to be in, and I do not see anything systematic I can do about that. I don't 'choose' to become a rationalist or not; instead I get sucked in by interesting articles that carry ideas I find pleasant. But it would have been equally likely that I spent the weeks I initially spent reading OB/LW on TVTropes instead. I recently checked an atheist board for good recommendations on rational thought (considering that my path down to science started with the reasoned-atheism bit) and was shocked by the lack of anything that resembled even a reasonable recommendation. I don't like accidental developments.
Just because you weren't aware of any conscious reasoning behind your choices doesn't imply that they were fully accidents. The mind manages some very important things subconsciously, especially in emotionally charged domains where explicit awareness of motivations might hurt someone else's feelings or one's own self-image.
Other examples

Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.

After a brief 10 word discussion on #lesswrong, I've made a lesswrong team :p

Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.

Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don't understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I'm still not sure what to think about which project I should prefer to run.
Personally I run Rosetta@home because, based on my research, it could be more useful for designing new proteins and computationally predicting the function of proteins. Folding seems to be more about understanding how proteins fold, which can help with some diseases, but isn't nearly as game-changing as in silico design and shape prediction would be. I also think that the SENS Foundation (Aubrey de Grey & co) have some ties to Rosetta, and might use it in the future to design some proteins. I'm a member of the Lifeboat Foundation team, but we could also create a Less Wrong team if there's enough interest.
So I think I have it working but... there's nothing to tell me if my CPU is actually doing any work. It says it's running but... is there supposed to be something else? I used to do SETI@home back in the day and they had some nice feedback that made you feel like you were actually doing something (of course, you weren't, because your computer was looking for non-existent signals, but still).
The existence of ET signals is an open question. SETI is a fully legitimate organization run according to a well-thought-out plan for collecting data to help answer this question.
I think the probability they ever find what they're looking for is extraordinarily low. But I don't have anything against the organization.
Right on, but just so you know, other (highly informed) people think that we may find a signal by 2027, so there you go. For an excellent short article (explaining this prediction), see here.
I don't think the author deals with the Fermi paradox very well, and the paradox is basically my reason for assigning a low probability to SETI finding something.
The Fermi paradox also struck me as a big issue when I first looked into these ideas, but now it doesn't bother me so much. Maybe this should be the subject of another open thread.
I use the origami client manager thingie; it handles deploying the folding client, and gives a nice progress meter. The 'normal' clients should have similar information available (I'd expect that origami is just polling the clients themselves).
What is this?
I wrote a quick introduction to distributed computing a while ago: My favorite project (the one which I think could benefit humanity the most) is Rosetta@home.
Donating money to scientific organizations (in the form of a larger power bill). You run your CPU (otherwise idle) to crunch difficult, highly parallel problems like protein folding.
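The mechanics are simple: a client repeatedly fetches an independent work unit, crunches it locally, and reports the result back for credit. A toy sketch of that loop, with illustrative function names (not any real project's API) and repeated hashing standing in for the actual science:

```python
# Toy sketch of a distributed-computing client loop. fetch_work_unit,
# crunch, and report_result are illustrative stand-ins, not any real
# project's API; repeated hashing plays the role of the expensive,
# embarrassingly parallel computation.
import hashlib

def fetch_work_unit(i):
    # A real client downloads a job from the project's server;
    # here each "unit" is just some bytes to hash repeatedly.
    return f"work-unit-{i}".encode()

def crunch(unit, rounds=10_000):
    # Stand-in for the CPU-heavy science; each unit is independent,
    # which is what makes the problem easy to distribute.
    digest = unit
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def report_result(i, result):
    # A real client uploads the result, crediting the user's team.
    return (i, result)

results = [report_result(i, crunch(fetch_work_unit(i))) for i in range(3)]
```

Because the units don't depend on each other, thousands of volunteer machines can each grab one and work in parallel with no coordination beyond the server handing out jobs.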
Granted that in many cases, it's donating money that you were otherwise going to burn.
No, modern CPUs use considerably less power when they are idle. A computer running folding at home will be drawing more power than if it were not.
But you've already paid for the hardware, you've already paid for the power to run the CPU at baseload, and the video card, and the hard disk, and all the other components; if you turn the machine off overnight, you're paying for wear and tear on the hardware turning it off and on every day, and paying for the time you spend booting up, reloading programs and reestablishing your context before you can get back to work. In other words, the small amount of money spent on the extra electricity enables the useful application of a much larger chunk of resources. That means if you run Folding@home, your donation is effectively being matched not just one for one but severalfold, and not by another philanthropist, but by the universe.
I've seen numerous discussions about whether it's better / more economical to turn off your machine or to leave it running all the time, and I have never seen a satisfactory conclusion based on solid evidence.
That's because it depends on the design. On the lifetime point, for example: if the machine tends to fail based on time spent running (solder creep, perhaps), leaving it running more often will reduce the life, but if the machine tends to fail based on power cycling (low-cycle fatigue, perhaps), turning it on and off more often will reduce the life. Given that I've dropped my MacBook from a height of four feet onto a concrete slab, I figure the difference is roundoff error as far as I am concerned.
A severalfold match isn't very impressive if the underlying activity is at least several orders of magnitude less efficient than alternatives, which seems likely here.
It seems highly unlikely to me. Biomedical research in general and protein folding in particular are extremely high leverage areas. I think you will be very hard put to it to find a way to spend resources even a single order of magnitude more efficiently (let alone make a case that the budget of any of us here is already being spent more efficiently, either on average or at the margin).
1. Moore's Law means that the cost of computation is falling exponentially. Even if one thought that providing computing power was the best way to spend money (on electricity) it would likely be better to save the money spent on the electric power and buy more computing power later, unless the computation is much much more useful now. 2. Biomedical research already gets an outsized portion of all R&D, with diminishing returns. The NIH budget is over $30 billion. 3. Slightly accelerating protein folding research doesn't benefit very much from astronomical waste considerations compared to improving the security of future progress with existential risk reduction.
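For the first point, a rough back-of-the-envelope comparison (the 1.5-year doubling time and the $100 figure are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: compute bought per dollar under an assumed
# Moore's-Law doubling time. The 1.5-year doubling time and the $100
# figure are illustrative assumptions, not measured numbers.

def compute_per_dollar(years_from_now, doubling_time_years=1.5):
    """Relative compute a fixed sum buys, normalized to 1.0 today."""
    return 2 ** (years_from_now / doubling_time_years)

now = 100 * compute_per_dollar(0)    # spend $100 on compute today
later = 100 * compute_per_dollar(5)  # bank the $100, buy compute in 5 years
print(later / now)  # roughly 10x as much compute by waiting
```

On this toy model, computing now only wins if the computation is on the order of ten times more valuable today than in five years; that is the trade-off the comment is pointing at.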
In principle, this is true; in practice, saying things like this seems more likely to make the people in question simply cease donating electricity, rather than cease donating electricity and donate the saved money to something more useful. Installing a program and running it all the time doesn't really feel like you're spending money, but explicitly donating money requires you to cross the mental barrier between free and paid in a way that running the program doesn't. For those reasons, I'd be very hesitant about arguing against running programs like Folding@Home; it seems likely to cause more harm than good.
But on the other hand, it doesn't seem clear to me which effect dominates, so we should be careful about drawing inferences based on that. Furthermore, it seems to me like things like F@H are rather unlikely to cause a "good deed of the day" effect for very long: by their nature, they're continuing processes that rather quickly fade into the background of your consciousness and you partially forget about. If F@H automatically starts up whenever you boot your computer, then having it running wouldn't count for a day's good deed for most people. Constantly seeing the icon might boost a cached self effect of "I should do useful things", though.
1. In practice, it is worth doing the computation now -- we can easily establish this by looking at the past, and noting that the people who performed large computations then, would not have been better off waiting until now. 2. $30 billion is a lot of money compared to what you and I have in our pockets. It's dirt cheap compared to the trillions being spent on unsuccessful attempts to treat people who are dying for lack of better biotechnology. 3. By far the most important way to reduce real life existential risks is speed. 4. Even if you could find a more cost effective research area to finance, it is highly unlikely that you are actually spending every penny you can spare in that way. The value of spending resources on X, needs to be compared to the other ways you are actually spending those resources, not to the other ways you hypothetically could be spending them.
Whether it makes sense in general to do a calculation now or just wait isn't always so clear cut. Also, at least historically, there hasn't always been a choice. For example, in the 1940s and 1950s, mathematicians studying the Riemann zeta function really wanted to do hard computations to look at more of the non-trivial zeros, but this was given very low priority by the people who controlled computers and by the people who programmed them. The priority was so low that by the time it advanced up the queue the computer in question would already be labeled as obsolete and thus would not be maintained. It wasn't until the late 1950s that the first such calculation was actually performed.
They have high-performance GPU clients that are a lot faster than CPU-only ones.
Assuming whatever gets learned through folding@home has applications they should offer users partial ownership of the intellectual property.
It's scientific research, the results are freely published.
I'm not saying it isn't a net gain, it may well be according to your own personal weighing of the factors. I'm just saying it is not free. Nothing is.
Many != all. My desktop is old enough that it uses very little more power at full capacity than it does at idle. Additionally, you can configure (may be the default, not sure) the client to not increase the clock rate.
It is also not equal to 'some'. The vast majority of computers today will use more power when running folding at home than they would if they were not running folding at home. There may be some specific cases where this is not true, but it will generally be true. You've measured that, have you? Here's an example of some actual measurements for a range of current processors' power draw at idle and under load. It's not a vast difference, but it is real, ranging from about a 30W / 40% increase in total system power draw to around a 100W / 100% increase. I couldn't find mention of any such setting on their site. Do you have a link to an explanation of this setting?
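To put that extra draw in money terms, a rough sketch (the 30-100W range matches the measurements discussed here; the per-kWh price is an assumed figure, so substitute your local rate):

```python
# Rough annual cost of folding's extra power draw if run 24/7.
# The 30-100 W extra load matches the measurements discussed in this
# thread; the price per kWh is an assumed figure -- use your local rate.

def annual_cost(extra_watts, price_per_kwh=0.12):
    hours_per_year = 24 * 365
    kwh = extra_watts / 1000 * hours_per_year
    return kwh * price_per_kwh

print(round(annual_cost(30), 2))   # low end: 31.54 dollars/year
print(round(annual_cost(100), 2))  # high end: 105.12 dollars/year
```

So the donation is real but modest: on these assumptions, tens of dollars per machine per year, which is the quantity the "matched by the universe" argument above is weighing against the value of the computation.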
On further consideration, my complaint wasn't my real/best argument, consider this a redirect to rwallace's response above :p That said, I personally don't take 'many' as meaning 'most', but more in the sense of "a significant fraction", which may be as little as 1/5 and as much as 4/5. I'd be somewhat surprised if the number of old machines (5+ years old) in use wasn't in that range. re: scaling, the Ubuntu folding team's wiki describes the approach.
Idle could also mean 'off', which would be significant power savings even (especially?) for older CPUs.
One who refers to their powered-off computer as 'idle' might find themselves missing an arm.
Except I'm talking about opportunity cost rather than redefining the word. You can turn off a machine you aren't using, a machine that's idle.

Self-forgiveness limits procrastination

Wohl's team followed 134 first-year undergrads from their first mid-term exams to just after their second lot of mid-terms. Before the initial exams, the students reported how much they'd procrastinated with their revision and how much they'd forgiven themselves. Next, midway between these exams and the second lot, the students reported how positive or negative they were feeling. Finally, just before the second round of mid-terms, the students once more reported how much they had procrastinated in their exam prep.

... (read more)

I recalled the strangest thing an AI could tell you thread, and I came up with another one in a dream. Tell me how plausible you think this one is:

Claim: "Many intelligent mammals (e.g. dogs, cats, elephants, cetaceans, and apes) act just as intelligently as feral humans, and would be capable of human-level intelligence with the right enculturation."

That is, if we did to pet mammals something analogous to what we do to feral humans when discovered, we could assimilate them; their deficiencies are the result of a) not knowing what assimilation re... (read more)

I don't know that we've ever successfully assimilated a feral human either.
Sounds plausible to me. I suspect people aren't able to develop enculturation for animals-- the sensory systems and communication methods are too different. I also believe people have been unintentionally selecting wild animals for intelligence.
Next step: "Okay, what should I do to test this?"
Find some method of communication that the mammal can use, and raise it in a society of children that use that method of communication. See if its behavior tracks that of the children in terms of intelligence. I believe such an experiment has already been performed, involving deaf children and using sign language as the communication method, and some kind of ape as the mammal. It supposedly actually adapted very comfortably, behaving just as the children (except that they taught it to ask for drugs), but they had to cut off the experiment on the grounds that, after another year of growth, the ape would be too strong and therefore too dangerous to risk leaving in the presence of children. I can't find a cite at the moment, but I remember a friend telling me about this and it checked out in an online search.
Human languages (including sign) are adapted for human beings. While there's some flexibility, I wouldn't expect animals using human language to be at their best.
What they need to do is include like 5 or 6 apes with the children and then when they're removed they can continue socializing with each other. The problem is coming up with methods of communication. Aside from apes and sign language I can't think of any...
One major difference between humans and apes is this: Humans teach each other. When we discover something new, we immediately go and tell everybody. Apes don't. If an ape discovers something, it doesn't spread to the other members of its social group until they happen to watch the ape using its discovery. And apes that are taught sign language don't pass it on to their children.
Which means apes don't get the benefit of cultural evolution (or gene-culture co-evolution). I wonder if that was a key barrier to the development of ape culture.
Hm, I thought I had a counterexample, but it looks like it was just a case of learning by imitation. Also, vervet monkeys teach their proto-language (of "eagles!", "snakes!", and "leopards!") to their young by smacking them when they give the wrong call. As for other mammals, there are cases of them teaching each other when they learn something new, for example when an elephant learned how to unlock her cage.
African gray parrots and spoken language.

Yes, and there's been a lot of work with African Greys already. Irene Pepperberg and her lab have done most of the really pioneering work. They've shown that Greys can recognize colors, small numbers and in some cases produce very large vocabs. There's also evidence that Greys sometimes overcorrect. That is, they apply general grammatical rules to conjugate/decline words even when the words are irregular. This happens with human children as well. Thus for example, human children will frequently say "runned" when they mean "ran" or "mouses" when they mean "mice" and many similar examples. This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard. Since Greys do the same thing, we can conclude that parrots aren't just parroting.

Yes, it is! I hadn't heard that before. Is there a journal article somewhere?
I'm not aware of any journal articles for overcorrection and a quick Google search doesn't turn any up. I'll go bug my ornithology friends. In the meantime, here's a BBC article that discusses the matter. They give the example of N'kisi using "flied" for the past tense of "fly" rather than "flew." Edit: Fixed link. Edit: Link's accuracy is questionable. See Mass Driver's remarks below.
The link seems to be dead or misspelled.
Misspelled. Edited for correct link.
The BBC appears to have at least partially withdrawn their article about the parrot in question: New BBC News Article
skeptic article about the parrot
Hmm, that's very interesting. I think I've seen the overcorrection claim before but then definitely don't have anything resembling a good citation.

I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?

I will vote it down unless you say something that I have not seen before. I think that it was a good idea to not make LW a site for rehearsing political arguments, but if you have thought of something that hasn't been said before and if you can explain how you came up with it then it might be a good reasoning lesson.
I will only vote it up if there's something I haven't seen before, but will only vote it down if I think it's dreadful. We may not be ready for it yet, but at some point we need to be able to pass the big test of addressing hard topics.
I will vote it up to cancel the above downvote, to encourage you to make the post in case the threat of downvoting scares you off.
I'd love to read it, though I may well disagree with a lot of it. I'd prefer it if it were kept more abstract and philosophical, as opposed to discussing current political parties and laws and so forth: I think that would increase the light-to-heat ratio.
Upvoted your comment for asking in the first place. If your post was a novel explanation of some aspect of rationality, and wasn't just about landing punches on libertarianism, I'd want to see it. If it was pretty much just about criticizing libertarianism, I wouldn't. I say this as someone very unsympathetic to libertarianism (or at least what contemporary Americans usually mean by 'libertarianism') - I'm motivated by a feeling that LW ought to be about rationality and things that touch on it directly, and I set the bar high for mind-killy topics, though I know others disagree with me about that, and that's OK. So, though I personally would want to downvote a top-level post only about libertarianism, I likely wouldn't, unless it were obnoxiously bare-faced libertarian baiting.
I agree on most counts. However, I'd also enjoy reading it if it were just a critique of libertarianism but done in an exceptionally rational way, such that if it is flawed, it will be very clear why. At minimum, I'd want it to explicitly state what terminal values or top-level goals it is assuming we want a political system to maximize, consider only the least convenient possible interpretation of libertarianism, avoid talking about libertarians too much (i.e. avoid speculating on their motives and their psychology; focus as much as possible on the policies themselves), separate it from discussion of alternatives (except insofar as is necessary to demonstrate that there is at least one system from which we can expect better outcomes than libertarianism), not appear one-sided, avoid considering it as a package deal whenever possible, etc.
That standard sounds pretty weird. If it were so clear that it was flawed, wouldn't you expect that to be clear to the author too, and thus the post never to be made? Perhaps you mean it would be clear what your core disagreement is?
I'm interested.
Not enough information to answer. I will upvote your post if I find it novel and convincing by rationalist lights. Try sending draft versions to other contributors that you trust and incorporate their advice before going public. I can offer my help, if being outside of American politics doesn't disqualify me from that.
ergh.... after the recent flamewar I was involved in, I had resolved to not allow myself to get wrapped up in another one, but if there's going to be a top level post on this, I don't realistically see myself staying out of it. I'm not saying don't write it though. If you do, I'd recommend you let a few people you trust read it over first before you put it up, to check for anything unnecessarily inflammatory. Also, what Blueberry said.

The Cognitive Bias song:

Not very good, but, you know, it's a song about cognitive bias, how cool is that?

Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.

Edit: They are tagged as "siai interviews".

Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...

Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.

I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.

Is there any reason to think the common intuition is right?

TLDR: “War” is the inter-group version of “duel” (i.e., lawful conflict). “Assassination” is the inter-group version of “murder” (i.e., unlawful conflict).

My first “intuition about the intuition” is that it’s a historical consequence: during most of history, things like freedom, and the power and responsibility for enforcing rules when conflicts (freedom vs. freedom) occurred, were stratified. Conflicts between individuals in a family were resolved by the family (e.g. by the head thereof), conflicts between families (or individuals in different families) by tribal leaders or the like. During feudalism the “scale” was formalized, but even before that we had a long series: family → group → tribe → city → barony → kingdom → empire.

The key feature of this system is that attempts to “cross the borders” in it (for instance, punishing someone from a different group directly rather than invoking punishment from that group’s leadership) are seen as an intrusion into that group’s affairs. So assassination comes to be seen as the between-group version of murder: going around the established rules of society. That’s something that is selected against in social environments (and has been discussed elsewhere). By contrast, war is the “normal” result when there is no higher authority to appeal to in a conflict between groups.

Note that, analogously, for much of history duels were considered correct methods of conflict resolution between certain individuals, as long as they respected some rules. So as long as there are, at least in theory, laws of war, war is considered a direct extension of that instinct. Assassination is seen as breaking the rules, so it’s seen differently.

A few other points:

* war is very visible, so you can expend a lot of signaling to dehumanize the adversary;
* but assassination is supposed to be done in secret, so you can’t use propaganda as well (assassinating opposing leadership during a war is not seen as that much of a big problem; they’re all infidels/drug lords/terror
What an excellent analysis. I voted up. The only thing I can think of that could be added is that making a martyr can backfire.
Who thinks assassination is worse than war? I could make an argument for it, though: If countries engaged regularly in assassination, it would never come to a conclusion, and would not reduce (and might increase) the incidence of war. Phrasing it as "which is worse" makes it sound like we can choose one or the other. This assumes that an assassination can prevent a war (and doesn't count the cases where it starts a war).
It seems to me that the vast majority of people think of war as a legitimate tool of national policy, but are horrified by assassination.
I've always assumed that the norm against assassination, causally speaking, exists mostly due to historical promotion by leaders who wanted to maintain a low-assassination equilibrium, now maintained largely by inertia. (Of course, it could be normatively supported by other considerations.) It makes sense to me that people would oversimplify the effect of assassination in basically the way you describe, overestimating the indispensability of leaders. I know I've seen a study on the effects of assassination on terrorist groups, but can't find a link or remember the conclusions.
Whoooohooo! Awesomest thing in the last ten years of genetic news for me! YAAY! WHO HOO!!! /does a little dance/ I want to munch on that delicious data! Ahem. Sorry about that. But people, 1 to 4% admixture! This is big! This gets an emotional response from me! That survived more than a thousand generations of selection; the bulk of it is probably neutral, but think about how many perfectly useful and working alleles we may have today (since the Neanderthals were close to us to start with). 600,000 or so years of separation: these guys evolved separately from us for nearly as long as the fictional vampires in Blindsight. It seems some of us have in our genes a bit our ancestors picked up from another species! Could this have anything to do with behavioural modernity, which started off at about the same time the populations crossbred in the Middle East ~100,000 years ago? Which adaptations did we pick up? Think of the possibilities! Ok, I'll stop the torrent of downvote-magnet words and get back to reading about this. And then everything else my grubby little paws can get on Neanderthals; I need to brush up!

Edit: I just realized part of the reason why I got so excited is that it shows I may have a bit of exotic ancestry. Considering how much people, all else being equal, like to play up their "foreign" or "unusual" semimythical ancestors or roots in conversation, national myths, or on the census, instead of the ethnicity of the majority of their ancestors, this may be a more general bias. I could of course quickly justify it with an evo-psych "just so" story, but I'll refrain from that and search for what studies have to say about it.
I definitely think this is top-level post material, but I didn't have enough to say to avoid pissing off the people who think all top-level posts need to be at least 500 words long.
I think this is very interesting but I'm not sure it should be a top-level post. Not due to the length but simply because it isn't terribly relevant to LW. Something can be very interesting and still not the focus here.
There is interesting discussion to be had that is relevant to LW.
How so? I'm not seeing it.
That's because there isn't a top-level post yet! :P The point being that many, many more people read /new than read 400 comments deep in the open thread.
It is easier to convince people that there is an interesting discussion to be had relevant to LW if you can discuss its relevance to LW in an interesting fashion when you post it. More seriously, if there isn't some barrier to posting, /new will suffer a deluge of marginally interesting material, and after the transients die out nobody will be watching the posts there, either. I read most new posts because most new posts are substantive.
And the paper:

I have a request. My training is in science & engineering, but I am totally ignorant of basic economics. I have come to see this as a huge blind spot. I feel my views on social issues are fairly well-reasoned, but when it comes to anything fiscal, it's all very touchy-feely at present.

Can anyone recommend intro material on economics (books, tutorials)? I ask on LW because I have no idea where to start and who to trust. If you offer a recommendation of a book pushing some particular economic "school of thought," that's fine, but I'd like to know what that school is.


Economics in One Lesson by Henry Hazlitt is a good slim introduction to the economic mindset. For a different approach focused on the application of economic thinking to everyday life The Logic of Life by Tim Harford is worth a look. Neither book covers much of the math of economics but I think that is a good thing since most of the math heavy parts of economics are the least useful and relevant. ETA: Economics In One Lesson is a heavily free market / free trade 'classical' economic slant.
The book I used in my college Econ 101 class was this one.
MIT OpenCourseWare has a lot of material. I also like Bryan Caplan's lecture notes (these sometimes have a libertarian slant).
Pick up a copy of Basic Economics by Thomas Sowell. It is, by far, the best introduction to economics I've ever read. Don't be put off by the size of the thing — it's a very easy read and requires no background in math. There's a follow-up book by the same author, too: Applied Economics.
I believe a textbook is always a better first step.
A good textbook is a better first step. Unfortunately, there are some truly dire textbooks out there.

So, I'm somewhat new to this whole rationality/Bayesianism/(nice label that would describe what we do here on LessWrong). Are there any podcasts or good audiobooks that you'd recommend on the subjects of LessWrong? I have a large amount of time at work that I can listen to audio, but I'm not able to read during this time. Does anyone have any suggestions for essential listening/reading on subjects similar to the ones covered here?

I know you said you don't have a ton of time to read but Gary Drescher's Good and Real has been called Less Wrong in book form on occasion. If nothing else, I found it an enjoyable read that gives a good start to getting into the mindset people have in this community.

This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists

Scientists in the US have succeeded in developing the first synthetic living cell.

The researchers constructed a bacterium's "genetic software" and transplanted it into a host cell.

The resulting microbe then looked and behaved like the species "dictated" by the synthetic DNA.

Given that this now opens the door to artificially designed and deployed harmful viruses, perhaps unfriendly AI falls a few notches on the existential risk ladder.

I remember hearing a few anecdotes about abstaining for food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered interest in this topic again, but a quick Google search did not return much on the topic (fasting is drowned in religious references).

I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?

Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.

I agree that AI deterrence will necessarily fail if: 1. all AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and 2. any deterrence simulation counts as a threat. Why do you believe that both or either of these statements is true? Do you have some concrete definition of 'threat' in mind?
I don't believe statement 1 and don't see why it's required. After all, we are quite rational, and so is our future FAI.
The notion of "first mover" is meaningless, where the other player's program is visible from the start.

In another comment I coined (although not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not produce druggie losers, wholesale killers, or other sorts of paperclipper. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying goes, if not now, when, and if not this, what?

I don't have children and don't intend to. I ... (read more)

I think that question is a conversation stopper because those who do not have children do not feel qualified, and those who do have children know what a complex and tricky question it is. Personally I don't think there is a method that fits all children and all relationships with them. But... you might try activities rather than presents. 'Oh cool, uncle is going to make a video with us and we're going to do it at the zoo.' If you get the right activity (it depends on the child), they will remember it and what you did and said for years. I had an uncle that I only saw a few times, but he showed me how to make and throw a boomerang. He explained why it returned. I have thanked him for that day for 60 years.
I don't have children and I didn't try answering the question because I knew what a complex and tricky question it is - I don't expect it to be much different than the bigger question of how to improve human rationality for people in general.
Possibly relevant, from the archives: * "On the Care and Feeding of Young Rationalists" * "On Juvenile Fiction"

Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.

What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.

"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"

"Senator, I've made a thorough investigation, and I'm afraid there has been sin in th... (read more)

That sounds like the kind of remark that goes out of its way to offend several categories of people at once. :) But in that category the gold standard remains Evelyn Waugh's “now that they no longer defrock priests for sexual perversities, one can no longer get any decent proofreading.”

You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.

Awwwww :D PS: If this endorsement of house food quality encourages anyone to apply for an SIAI fellowship, note your inspiration in the e-mail! We receive referral rewards!
Would you be willing to post the recipe?
Alicorn: I have taken to also adding two or three parsnips per batch.
Can you describe that “better than bouillon” thing, for us non-US (I assume) readers? Also, how much cream do you use, and what’s “a ton” of garlic? (In my kitchen, that could mean half a pound — we use garlic paste as ketchup around here...)
Better than Bouillon is paste-textured reduced stock. It's gloopy, not very pretty, and adds excellent flavor to just about any savory dish. Instead of water and BTB, you could use a prepared stock, or instead of just the BTB, use a bouillon cube, but I find they have dramatically inferior flavors unless you make your own stock at home. I haven't tried cooking down a batch of homemade stock to see if I could get paste, but I think it would probably take too long. I guess on the cream until the color looks about right. I use less cream if I overshot on the water when I was cooking the veggies, more if it's a little too thick. "A ton" of garlic means "lots, to taste". I'd put one bulb in a batch of cauliflower soup mostly because it's convenient to grab one bulb out of a bag of garlic bulbs. If you're that enthusiastic about garlic, go ahead and use two, three, four - it's kind of hard to overdo something that wonderful.
How long are the fellowships for?
As long as three months (and the possibility of sticking around after if everything goes swimmingly), but you could come for considerably shorter if you have scheduling constraints. We've also been known to have people over for a day or two just to visit and see how cool we are. Totally e-mail Anna if you have any interest at all! Don't be shy! She isn't scary!
Does Alicorn's presence prohibit me from applying for an SIAI fellowship?
Nope. All applications are welcome.
I second Anna, but I will also note that we plan on moving into a biggg house or possibly two big houses, and this would hopefully minimize friction in the event that two Visiting Fellows don't quite get along. I hope you apply!

If we get forums, I'd like a projects section. A person could create a project, which is a forum centered around a problem to work on with other people over an extended period of time.

This seems like the sort of activity Google Wave is (was?) meant for.

Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax-free ("'cause it's to provide for the spouse and kids"), withdraw from your premiums, and borrow against yourself (and pay yourself back).

Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?

Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.

Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...

Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue titled The Great Singularity Debate. After Yudkowsky at the beginning gives three different definitions of "the singularity", they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that they discuss whether simulated intelligence is actually intelligence.

Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between those and say that "well, it may then just simulate intelligence, but maybe it is not actually having it". (Too difficult for me to summarize it very well; you have to look for yourself if you want it more accurately.) There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail, though.

Pigliucci's point seemed to be something like this: for the only intelligence that we know so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. And for consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we of course do not know whether or not, e.g., a computer simulating the brain on a different substrate will also be conscious. Yudkowsky seemed to think this very likely while Pigliucci seemed to think it very unlikely. But what I lacked in that discussion is: what do we know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right in that for the only inte
I found this diavlog entertaining, but not particularly enlightening - the two of them seemed to mostly just be talking past each other. Pigliucci kept on conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness, but not intelligence, and Eliezer would respond by explaining why that doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been more strict about forcing him to explicitly differentiate between intelligence and consciousness. Frustrating... but worth watching regardless. Note that I'm not saying I agree with Pigliucci's photosynthesis analogy, even when applied to consciousness, just that it seems at least to be coherent in that context, unlike in the context of intelligence, in which case it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident in asserting that it isn't, just because I don't really know what consciousness is, so it seems more arrogant to make any definitive pronouncement about it.
That diavlog was a total shocker! Pigliucci is not a nobody: he is a university professor, has authored several books, and holds three PhDs. Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially when it comes to hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(
I'm going to have to remember to use the word cishumanism more often.
Welcome back.
SIAI may have built an automaton to keep donors from panicking
You should post this as a top-level post for +10x karma.
random, possibly off-topic question: Is there an index somewhere of all of Eliezer's appearances on BHTV? Or a search tool on the BHTV site that I can use to find them?
Direct link:,%20Eliezer
Thanks! I had tried using the search tool before, but I guess I hadn't tried searching for "Yudkowsky, Eliezer" ... oh, and it turns out that there was a note right beside the search box saying "NAME FORMAT = last, first". oops... anyway, now I know, thanks :)
In general, Google's site: operator is great for websites that have missing or uncooperative search functionality: site:bloggingheads.tv eliezer
Orange button called "search" in the upper right hand corner.
You can tell he's alive and well because he's posted several chapters in his Harry Potter fanfiction in that time; his author's notes lead me to believe that, as he stated long ago, he's letting LW drift so he has time to write his book.
Anyway, he can't be hurt; "Somebody would have noticed."
Well, he would've noticed, but he's not us...
He's writing his book.
and Harry Potter fanfiction. Unless that was what you meant by "his book".
Question: Who is moderating if Eliezer isn't?
The other moderators appear to be Robin Hanson, matt, and wmoore. None of them have posted in the past few days, but maybe at least one of them has been popping in to moderate from time to time. And/or maybe Eliezer is too, just not posting.
Harry Potter and the Methods of Rationality updated on Sunday; it could be that writing that story is filling much of his off time.

Recycling an email I wrote in an Existential Risk Reduction Career Network discussion. The topic looked at various career options, specifically with an eye towards accumulating wealth - the two major fields recognized being finance and software development.

Frank Adamek enquired as to my (flippant) vanilla latte comments, which revealed a personal blind-spot. Namely, that my default assumption for people with an interest in accumulating wealth is that they're motivated by an interest in improving the quality of their own life (e.g., expensive gadgets, etc.)... (read more)

In a thread called Actuarial vs. Software Engineering - what pays best?, somebody wrote: My response...

I encourage most people to pursue a math or science degree, rather than comp.sci., even if their long-term goals are in the field of software engineering. My opinion is based on personal hindsight (having majored in computer science, I often wish my ability to absorb and apply fundamental math or hard physics was stronger) and on eleven years of industry experience (where I've noticed an inverse correlation between the amount of formal comp.sci. training a person has had and his or her strength as a software engineer).

In regards to my personal hindsight: it could well be that had I studied math or physics, I'd feel my comp.sci. expertise needed brushing up. That's probably true to some extent, but there's another factor; namely, that many comp.sci. programs are a less-than-ideal blend of theoretical math (better obtained through a dedicated program[1]) and practical engineering (most definitely useful[2], but because of its nature easily accessible in your spare time). That last point is critical; anybody who can afford a university education has access to a computer and a compiler. So why not tinker at home - you're passionate, right? Compare with programs like mechanical engineering, chemistry, and most hard physics programs - you probably don't have access to a particle accelerator or DNA extraction lab at home. Not yet anyway... :-)

That brings me to my observation from industry experience, namely that the best programmers I've worked with often hadn't majored in comp.sci. The point is of course not that a comp.sci. education makes for worse programmers. Rather, that people with the audacity and discipline to pursue hard physics or math who also have a passion for programming have a leg up on those who are only passionate about programming. I'm sure there's the occasional failed particle physicist applying for a hundred programming gigs without success,
Having shared my view on comp.sci. education, I do wish to throw in a recommendation for pursuing a career in software development (beyond the years of formal education), specifically in contrast to one alternative discussed earlier in this thread, namely a career in finance.

Full disclaimer: my perspective on "jobs that involve working with money" stems mostly from how the mainstream portrays them and is likely to be extremely naive. Despite what I'm about to say, I actually have a great deal of respect for money-savvy people. Considering my personal financial situation is a constant source of akrasia, I'm often envious of people who are able to wield money itself as a tool to generate more of it.

I'm realistic enough to admit that income potential is a valid factor in deciding what kind of career to pursue - like most of us, I enjoy food, shelter, and expensive gadgets. Meanwhile, I also believe nobody treats money as the only factor in choosing a career - we'd all rather work in fields we're passionate about. So really, we have a realistic assessment of various career options - all of which promise at least a decent living.

Even agreeing with comments made earlier, that programming is prole and finance has a higher likelihood of fast-tracking prestige (and as a programmer, I actually must admit there's some truth to this sentiment), my gut says that your passion and interest far outweigh these observations. I mean, we're not talking about whether you'll become a high-school janitor versus United States president. If you like money and you have a knack for using it for growth and your benefit, go to Wall Street. If you like computers and have a knack for using them for innovation, go to Silicon Valley. In both cases you'll be able to afford a grande sugar-free vanilla low-fat soy latte every morning - if that's your cup of tea.

Now all of this is fairly generic advice, nothing you weren't told already by your parents. My reason for chiming in on this discussion has (obv

Some people have curious ideas about what LW is; from :

"HO-ley **! That was awesome! You might also be interested to know that my brother, my father and I all had a wonderful evening reading that wikipedia blog on rationality that you are named for. Thank you for this, most dearly and truly."

I'm not sure I even know how to parse "wikipedia blog on rationality". But at least in some sense, we apparently are Wikipedia. Congrats.
The link to Less Wrong in Eliezer's profile takes you to the wiki page for the sequences. So they were in a wiki, which I guess they figured was part of Wikipedia.
A lot of people think that every wiki is a part of Wikipedia or the same thing as Wikipedia, or that "Wikipedia" is a common noun meaning "wiki", or that every Wiki has to be a 'Pedia of some sort. And most people don't know that the first wiki predated Wikipedia by six years, so they assume Wikipedia invented the concept.
I'm not sure what that third hypothesis means, but the first two seem very different to me, and it seems worth knowing how popular those two beliefs are.
By "every Wiki has to be a 'Pedia of some sort", I was referring to this observation: Over the first six years of wikis, most were informally-organized, and mixed discussion in with factual content. They gathered information, but focused on (or at least allowed) discussion and socializing; they did not resemble encyclopedias. (The original wiki, Ward's Wiki AKA WikiWikiWeb AKA the Portland Pattern Repository, is a good surviving example of this format, as is CocoaDev, the first wiki I encountered, plus the SL4 Wiki.) But people often assume Wikipedia-like rules, norms, and structure apply to every wiki. I own and (now only occasionally) edit a wiki about a little-known band, documenting its albums, songs, concerts, etc., and its fan culture. Early on, a few people mistakenly assumed that rules like NPOV, notability, and encyclopedicness applied there. I've seen this elsewhere too, but it's admittedly getting harder to find incorrect examples of such assumptions, because most wikis these days (at least the big ones) are modeled after Wikipedia, just within fictional universes or within more specific domains than "all human knowledge". (Also, to "...or that 'Wikipedia' is a common noun meaning 'wiki'", let me add "or that 'wiki' is an abbreviation for 'Wikipedia'". I'm not the sort who will cling to old definitions as the True Essence of normally evolving words, but given the wide historical and current use of "wiki" for sites unrelated to Wikipedia, I reserve the right to get mildly annoyed at people who say "wiki" as shorthand for "Wikipedia".)
For what it is worth, the Wikimedia foundation doesn't like people using "wiki" for Wikipedia. Most Wikipedians don't like it either. And neither does Wikipe-tan as she makes clear here Edit: Apparently the software interprets a closing parenthesis in a URL as the end of the URL string. This is an unfun bug. Using a short url to avoid the problematic parsing.
I think you can backslash-escape parentheses in URLs to avoid that bug (or that unexpected-but-correct-according-to-the-spec behaviour, rather). Testing it: blah.png)
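Another workaround that doesn't depend on the parser's escaping rules is to percent-encode the parentheses before pasting the URL; a quick sketch (the URL itself is made up):

```python
from urllib.parse import quote

# A URL whose path ends in a closing parenthesis, which some Markdown
# parsers mistake for the end of the inline link.
url = "http://example.com/wiki/Bracket_(disambiguation)"

# Percent-encode just the parentheses so the rest stays readable.
safe_url = url.replace("(", "%28").replace(")", "%29")
print(safe_url)  # http://example.com/wiki/Bracket_%28disambiguation%29

# urllib.parse.quote does the same more generally; '/' and ':' are kept
# unescaped here so the scheme and path separators survive.
print(quote(url, safe="/:"))
```

Browsers treat `%28`/`%29` identically to literal parentheses, so the link still resolves.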
Although it doesn't apply in this case, do you think the common use of WikiMedia (which defaults to the same dignified blue and gray look that Wikipedia has) contributes to the problem? Do people expect TvTropes to be like Wikipedia?
It probably contributes to it. It's pretty easy to assume a site is identical or similar to Wikipedia when they look almost identical. (Nitpick: Wikimedia is the foundation that owns Wikipedia and its related sites. The wiki software that they develop and run is called MediaWiki.) TvTropes probably doesn't suffer from this problem too much because 1) it doesn't have "wiki" in its name; 2) it doesn't run MediaWiki or look like it; and 3) the home page has a paragraph that starts "We are not Wikipedia. We're a buttload more informal. . . ."
I wonder if that sort of thing should be added to the list of biases-- it's being so influenced by the most prominent example that one no longer perceives the range of possibility. It seems something like framing, but not exactly it.
I don't know if that's quite what's happening here. It's probably more that Wikipedia (and maybe a few other heavily Wikipedia-inspired, MediaWiki-based sites) is the only exposure most people will have to the wiki concept. The range of possibility didn't exist in their minds in the first place. I'm not sure if the effect whereby it skews people's expectations of other later-discovered wikis is something like a qualitative rather than numeric version of anchoring (is there any research on that? Does it have a name?), or if it's just an unsurprising and possibly rational result of people originally seeing "wiki" associated with a single site and not a larger category. If a person is casually familiar with Wikipedia, and they hear their friends call it "wiki", and they've never heard of the general wiki concept... and then they happen upon CocoaDev, see that it describes itself as a wiki (which, to them, was previously not even a one-element category but just a single website; it would seem analogous to Bing calling itself "a google"), and import their expectations about "wiki"... then is that really a bias if they find many aspects of CocoaDev's structure very surprising? Maybe it's a bias specifically if they fail to update their understanding of the concept "wiki" and instead assume that CocoaDev is doing something wrong.
Fair enough. It could be described as a sort of group bias. People would have been capable of seeing a range of possibility except that a strong example channels their minds.
The first one is not at all uncommon. Although I don't have any citations off the top of my head, as a (not very active) admin for Wikinews I can say that very often news sources credit us as "Wikipedia."
I have to wonder where they think Wikipedia got its name from...
With all the nonsensical “cool” prefixes (see iPod, XBox), “cool” etymologically-challenged names (Skype, Google), and “cool” weird-spelling-that-kind-of-suggests something (Syfy) going on, I don’t blame people for thinking any new name they encounter is simply made up for no reason.

"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"

Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.

Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth that had had one psychotic episode or similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears ... (read more)

What about snake oil?
I don't know if you're kidding or scoffing, but I will give a straight answer. Richard Kunin, M.D., once analyzed snake oil and found that it is 20 or 25% omega-3 fatty acids.
The link is giving me trouble. Can you paste the whole abstract?
Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners

ABSTRACT

Objective: In an earlier study, improvement of dietary status with food supplements led to a reduction in antisocial behavior among prisoners. Based on these earlier findings, a study of the effects of food supplements on aggression, rule-breaking, and psychopathology was conducted among young Dutch prisoners.

Methods: Two hundred and twenty-one young adult prisoners (mean age=21.0, range 18-25 years) received nutritional supplements containing vitamins, minerals, and essential fatty acids or placebos, over a period of 1-3 months.

Results: As in the earlier (British) study, reported incidents were significantly reduced (P=.017, one-tailed) in the active condition (n=115), as compared with placebo (n=106). Other assessments, however, revealed no significant reductions in aggressiveness or psychiatric symptoms.

Conclusion: As the incidents reported concerned aggressive and rule-breaking behavior as observed by the prison staff, the results are considered to be promising. However, as no significant improvements were found in a number of other (self-reported) outcome measures, the results should be interpreted with caution. Aggr. Behav. 36:117-126, 2010. © 2009 Wiley-Liss, Inc.

Is it possible to change the time zone in which LW displays dates/times?


Would it be reasonable to request a LW open thread digest to accompany these posts? A simple bullet list of most of the topics covered would be nice.

On the Wiki, perhaps? A bit of a pain to update it, admittedly...
Would there need to be titles for open thread discussions? Open tagging?
"Open Thread Digest: May 2010" seems fine to me. And I think we're supposed to make top-level posts if we get anything tag-worthy out of a thread.
It's too late for me. It's the second of May over here. :' (

Question: How many of you, readers and contributors here on this site, actually do work on some (nontrivial) AI project?

Or have an intention to do that in the future?

Yes, I have an intention to do so, because I'm convinced that it is very important to the future of humanity. I don't quite know how I'll be able to contribute yet, but I think I'm smart and creative enough that I'll be able to acquire the necessary knowledge and thinking habits (that's the part I'm working on these days) and eventually contribute something novel, if I can do all that soon enough for it to matter.
I'm working on one as part of a game, where I'm knocking off just about every concept I've run into - goal systems, eurisko-type self-modifying code, AIXI, etc. I'll claim it's nontrivial because the game is, and I very much intend to make it unusually smart by game standards. But that's not really true AI. It's for fun, as much as anything else. I'm not going to claim it works very well, if at all; it's just interesting to see what kind of code is involved. (I have, nevertheless, considered FAI. There's no room to implement it, which was an interesting thing to discover in itself. Clearly my design is insufficiently advanced.)
I happened to see this today, which you might find interesting. He's using genetic algorithms to make the creatures that the player has to kill evolve. At one point they evolved to exploit bugs in the game.
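For anyone unfamiliar with the technique, a genetic algorithm is just repeated scoring, selection, and mutation over a population. A bare-bones sketch (the bit-counting fitness function and every parameter here are arbitrary toy choices, nothing to do with the linked game):

```python
import random

random.seed(0)  # deterministic toy run

GENOME_BITS = 20

def fitness(genome):
    # Toy fitness: creatures "win" by having more 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ (random.random() < rate) for b in genome]

# Start with a random population of 30 genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(30)]

for generation in range(100):
    # Selection: keep the fitter half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

print(fitness(max(population, key=fitness)))
```

Because the fittest individuals survive each round unchanged, the best score never decreases, and the population climbs toward the maximum. The "exploit bugs in the game" behavior happens when the fitness function (here, bit-counting; in a game, killing the player) has loopholes the designer didn't anticipate.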
As a programmer, I'm curious exactly how there is no room to implement it. (I understand the “no room” concept, but want details.)
The most obvious problem, which I suspect most games would have in common, is that it has no notion that it's a game. As far as the AI is concerned, the game world is all there is. It wants to win a war, and it has no idea that there's a player on the other side. Building up its understanding to the point where that is not the case would be, well, both way too much work and probably beyond my abilities.
May I ask what game?
You can ask, but at the moment it's more of a design document plus some proof-of-concept algorithms. 99% incomplete, in other words, and I don't really want to get people excited over something that might never come to fruition. I can't really describe the game, because that'd be mostly wishful thinking, but perhaps some design criteria will satisfy your curiosity. So, some highlights I guess:

* 4X space-based RTS. Realism is important: I want this to look somewhat like reality, with the rule of fun applied only where it has to be, not as an attempt to justify bad physics.
* Therefore, using non-iterative equations where possible (and some places they really shouldn't be used) to allow for drastic changes in simulation speed - smoothly going from slower than realtime during battles to a million times faster for slower-than-light interstellar travel. That is to say, using equations that do a constant amount of work to return the state at time T, instead of doing work proportional to the amount of in-game time that has passed.
* Therefore, having a lot of systems able to work with (and translate between) multiple levels of abstraction. Things that require an iterative simulation to look good when inspected in real-time may be unnoticeably off as a cheaper abstraction if time moves a thousand times faster.
* To support this, I'm using an explicit cause-effect dependency graph, which led me to..
* Full support for general relativity. Obviously that makes it a single-player game, but the time compression pretty much requires that already. Causality, FTL, Relativity - pick any two. I'm dropping causality. The cause-effect graph makes it relatively (ha ha, it is to laugh - theoretically it's just looking for cycles, but the details are many) simple to detect paradoxes. What happens if there are paradoxes, though... that, I don't know yet. Anything from gribbly lovecraftian horrors to wormholes spontaneously collapsing will do. Hopefully, I'll find the time to work o
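The "theoretically it's just looking for cycles" part can be sketched as an ordinary depth-first search over cause -> effect edges; the event names and edges below are invented for illustration:

```python
# Minimal cycle detection over a cause -> effect dependency graph --
# the "just looking for cycles" core of paradox detection.

def find_cycle(edges):
    """Return a list of events forming a causal loop, or None."""
    graph = {}
    for cause, effect in edges:
        graph.setdefault(cause, []).append(effect)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {}
    path = []

    def dfs(node):
        color[node] = GRAY
        path.append(node)
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:      # back edge: a paradox
                return path[path.index(succ):] + [succ]
            if color.get(succ, WHITE) == WHITE:
                cycle = dfs(succ)
                if cycle:
                    return cycle
        color[node] = BLACK
        path.pop()
        return None

    for node in list(graph):
        if color.get(node, WHITE) == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# A ship jumps through a wormhole and destroys its own launch site:
edges = [("launch", "ftl_jump"), ("ftl_jump", "strike"), ("strike", "launch")]
print(find_cycle(edges))  # ['launch', 'ftl_jump', 'strike', 'launch']
```

As the comment above says, the hard part isn't this search; it's deciding what the game does with the reported loop (the gribbly-horrors question), and keeping the graph incremental as simulated time is compressed and expanded.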
Ambitious time travel (or anomalous causality) game mechanics are fun. There's the Achron RTS game which involves some kind of multiplayer time travel. As far as I can tell, they deal with paradoxes by cheating with a "real" time point that progresses normally as the players do stuff. There is only a window of a few minutes around the real time point to do time-travel stuff in, and things past the window get frozen into history. Changes in the past also propagate slowly into the rest of the timeline as the real time point advances. So paradoxes end up as oscillations of timewaves until some essential part moves out of the time-travel window and gets frozen in an arbitrary state. I'm not sure how well something similar could work with a relativistic space game where you end up displaced in time just by moving around, instead of using gamedev-controllable magic time-travel juice. Your concept also kinda reminds me of a very obscure Amiga game called Gravity. Based on the manual, it had relativistic space-time, programmable drones, terraforming planets from gas giants, and all sorts of hard-SF spacegame craziness not really seen in games nowadays.
I've been playing Achron, but it's not really an inspiration. How should I put it.. My understanding of physics is weak enough without trying to alter it. If I stick as closely as possible to real-life physics, I know I won't run into any inconsistencies. Therefore, there will be no time-travel. I might do something cute with paradoxes later, but the immediate solution for those is to blow the offending ship or wormhole up, as real-life wormholes have been theorized to do via virtual particles.
Blow up the paradox-causing FTL? Sounds like that could be weaponized. I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a "Relativity and FTL travel" FAQ. I love the idea of strategically manipulating FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks and how severely if a paradox is detected?
I think the only possible answer to that is "through play-testing". As I understand it, real-life wormhole physics gives enormous advantages to a defender. However, this is a wargame, so I will have to limit that somewhat. Exactly how, and to what degree - well, that's something I will be confronting in a year or two. (And yes, it could be weaponized. Doing so might not be a good idea, depending on the lovecraft parameter, but you can certainly try.)
Did this ever get made? I have had (what feel like separate) intentions to make a game with most of these bullet points (minus relativity). I have an (at least skill-implicit) understanding of how one would account for causality (essentially meta-time).
I am writing a book about a new approach to AI. The book is a roadmap, after I'm finished, I will follow the roadmap. That will take many years. I have near-zero belief that AI can succeed without a major scientific revolution.
I'm interested in what sort of scientific revolution you think is needed and why.
Well... you'll have to read the book :-) Here's a hint. Define a scientific method as any process by which reliable predictions can be obtained. Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don't make controlled experiments. So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.
Danger! You're not looking at the whole system. Children's knowledge doesn't just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution. So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge "boost", and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world. If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using. It's not that there's a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge. You seem to be endorsing the discredited "blank slate" paradigm. A better strategy would be to look at how evolution "learned" and "encoded" that data, and how to represent such assumptions about this environment, which is what I'm attempting to do with a model I'm working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, and see what "intelligence" means in such a model, and how to get it. (By the way, I made essentially this same point way back when. I think the same point holds here.)
Re: "If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using." That hasn't been demonstrated - AFAIK. Children are not blank slates - but if they were highly intelligent agents with negligible a-priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact that would be strongly expected - for a sufficiently smart agent.
If large amounts of our knowledge base was encoded through evolution, we would see people with weird, specific cognitive deficits - say, the inability to use nouns - as a result of genetic mutations. Or, more obviously, we would observe people who have functioning eyes but nevertheless can't see because they failed to learn how. Now, we do see people with strange specific deficits, but only as a result of stroke or other brain damage. The genetic deficits we do see are all much more general. How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?
That's not necessarily the case. You are assuming a much more narrow encoding system than necessary. One doesn't need a direct encoding of specific genes going to nouns or the like. Remember, evolution is messy and doesn't encode data in the direct fashion that a human would. Moreover, some problems we see are in fact pretty close to this. For example, many autistic children have serious problems handling how pronouns work (such as some using "you" to refer to themselves and "I" to refer to anyone else). Similarly, there's a clear genetic distinction in language processing between humans and other primates in that many of the "sentences" constructed by apes which have been taught sign language completely lack verbs and almost never have any verb other than an imperative.