A recent entry from the West Hunter blog (written by Gregory Cochran and Henry Harpending, with whom most LWers are probably already familiar) caught my eye:
People who grow up in a small town, or an old and stable neighborhood, often know their neighbors. More than that, they know pretty much everything that’s happened for the past couple of generations, whether they want to or not. For many Americans, probably most, this isn’t the case. Mobility breeds anonymity. Suburban kids haven’t necessarily been hanging out with the same peers since kindergarten, and even if they have, they probably don’t know much about their friends’ sibs and parents.
If you do have that thick local knowledge, significant trait heritability is fairly obvious. You notice that the valedictorians cluster in a few families, and you also know that those families don’t need to put their kids under high pressure to get those results. They’re just smart. Some are smart but too rebellious to play the game – and that runs in families too. For that matter, you know that those family similarities, although real and noticeable, are far from absolute. You see a lot of variation within a family.
If you don’t have it, it’s easier to believe that cognitive or personality traits are generated by environmental influences – how your family toilet trained you, whether they sent you to a prep school, etc. Easier to believe, but false.
So it isn’t all that difficult to teach quantitative genetics to someone with that background. They already know it, more or less. Possession of this kind of knowledge must have been the norm in the human past. I’m sure that Bushmen have it.
The loss of this knowledge must have significant consequences, not just susceptibility to nurturist dogma. In the typical ancestral situation, you knew a lot about the relatives of all potential mates. Today, you might meet someone in college and know nothing about her family history. In particular, you might not be aware that schizophrenia runs in her family. You can’t weigh what you don’t know. In modern circumstances, I suspect that the reproductive success of people with a fair-sized dose of alleles that predispose to schiz has gone up – with the net consequence that selection is less effective at eliminating such alleles. The modern welfare state has probably had more impact, though. In the days of old, kids were likely to die if a parent flaked out. Today that does not happen.
Seems quite coherent. It meshes well with findings that the more children parents have, the less they subscribe to nurture, since they finally, possibly for the first time ever, get some hands-on experience with the nurture versus nature issue (nurture as in upbringing, not nurture as in lead paint). Note that today urban, educated, highly intelligent people are less likely to have children than possibly ever before. How is this likely to affect intellectual fashions?
Perhaps somewhat related to this is the transition over the past 150 years (the time frame depending on where exactly you live) from agricultural communities, which often raised livestock, to urban living. Intuitions about what "variation" and "heredity" mean thus lose another source, with no clear replacement.
People immersed in the life of a small town see a much smaller amount of environmental variation than those accustomed to cosmopolitan living, so relatively more of the observed phenotypic variation should be a result of genetic variation. So, ceteris paribus, cosmopolitans are biased in an environmentalist direction relative to provincials, or to say the same thing, provincials are biased in an innatist direction relative to cosmopolitans (who are biased in an innatist direction relative to interdimensional travellers who've seen all sorts of logically possible human societies that haven't occurred in our timeline).
It's not clear at all from this where the "unbiased" point would be (without a Bayesian incorporation of all other relevant information etc) or what that would even mean.
Oligopsony, you imply that there is symmetry between the state of knowledge possessed by the small town dwellers and the city dwellers. I disagree.
The small town dwellers generally know who is related to whom, amongst the humans that they encounter. They also possess somewhat detailed information about the kind of upbringing that many of the people they encounter have experienced.
The cosmopolitans on the other hand simply lack knowledge about the genetic background, and the upbringing, of most of the people they encounter. Where is the symmetry?
I don’t see the question “Who is likely to form the most accurate estimate regarding the importance of nurture vs nature, supposing that these folks are exposed to no other relevant information?” as being most salient. In reality, they are exposed to other information on the subject.
Rather, what Harpending and Cochran appear to be discussing is the idea that the small town dwellers are less easily fooled by inaccurate but “clever” theoretical arguments in favour of the idea that nurture is a far more powerful influence than nature. Ceteris paribus, I also think that the converse applies: the small town dwellers should also be less easily fooled by inaccurate technical arguments stating the opposite extreme. They simply possess more evidence than the city dwellers do, ceteris paribus.
As it happens, in this society it is the former false idea (nurture dominating nature) that is promoted by the media-education system, which is why Cochran and Harpending focused on how the move away from small town living has facilitated that.
Yes but I think they do a disservice by making what sounds like an off-hand and very question-begging reference to a possible consequence with regard to selection of schizophrenics.
They may also see less genetic variation.
While he does make some points, the valedictorian comment seems potentially off-base. There is obviously a fair amount of genetic contribution to intelligence, but other things can cause inheritance in this fashion. For example, young children will see active older siblings as role models to emulate. Similarly, different cultural norms in different families will impact how children and the families treat learning.
Consider for a minute the hypothetical of the same comment being made about the decline of the nobility in England. The same basic argument could be made, but there really isn't much that they had that was genetically advantageous.
Moreover, I suspect that most people won't look at the evidence for genetic intelligence anyways but will rather simply emphasize/adopt whatever view is most politically and ideologically convenient. This piece assumes a much higher degree of correlation between evidence and beliefs than is normally present, and also assumes that humans in the past were doing a decent job of taking subtle sorts of data and actually integrating it accurately into their world view.
You may want to look at the book Blink. People are fairly good at noticing subtle patterns in things they observe directly, even if they can't consciously explain how they've come to believe it.
There are contexts where humans can take evidence and unconsciously process it to get good results. However, most of those contexts are contexts where they are taking their experience and applying it to individual cases. One example in that book which sort of fits with this is doctors diagnosing heart attacks.
This is a very different circumstance than having people take in a wide variety of different sorts of data and come up with a set of rules that actually explains it. Empirically, humans are overactive pattern seekers with confirmation bias issues. Thus, one sees all sorts of superstitions crop up. Moreover, empirically, folk genetics has generally been awful, arguably even worse than folk psychology. For example, look at how many cultures believed that what a female was thinking about or looking at would influence the offspring. (This one dates at least to Biblical times, judging from the story of Jacob.) Similarly, many cultures have believed that once a female mated with a given male, all her later offspring could potentially inherit properties from that male.
Well, as Konkvistador pointed out, what happens to a pregnant woman does influence the offspring. As for what she was thinking or looking at, especially if it caused her to be flooded with adrenaline or other hormones, the idea that it could affect the baby certainly doesn't strike me as absurd. (Do you know of any research in this area?)
Sure, that sort of effect could maybe occur. But the versions in classical cultures aren't that. For example, the referenced example in Genesis has Jacob apparently using speckled sticks to make the offspring of the cattle become speckled. Similarly, some cultures believed that if a woman was thinking of another man when she conceived a child, then the child would be more likely to look like the other man.
That almost sounds like the type of "polite fiction" that developed to avoid dealing with the consequences of embarrassing affairs.
This might be an excellent example of trends that can make a society grow more wrong over time or locked in a Red Queen's race situation where more and more data is needed just to keep changing intuitions from shifting further away from reality, despite nifty things like say the scientific method or computers.
One argument against the pure nurture point of view is that infants have different temperaments at birth, and this is something you're only likely to know if you've dealt with infants (or, like me, if you're a compulsive reader).
It's conceivable that all the temperamental variation among infants is caused by prenatal environment, but this doesn't seem likely.
This is very interesting, but there is one obvious loose end that would be impossible to tie up: nobody had any local knowledge about the people who drifted off. My aunt has pages and pages of our family tree compiled. A large fraction of the males in the tree (best guess from memory, since I haven't looked at it in a while: a fourth to a third) went off to work, or went off to the army, or mysteriously disappeared at a very young age, and nobody ever heard a word from them again. So your natured cohort is biased toward those who never left home. This is a fundamental human difference and would probably be a useful axis of distinction for a Big Five-type classification scheme. A lot of people never leave home. A lot of people return every Christmas and go to their high school reunion every five years.
And a lot of people just cannot be bothered to do so. A lot. And you will never capture them in your sample if you want to do a study like this. Even that famous 50 year longitudinal Harvard study which is supposedly the greatest trove of social science research we have has missing members due to this.
A strong case does not imply an important effect size.
It is far more likely that today you will be directly affected by the gravity of Pluto than that today you will trip on the sidewalk. A case explaining why I thought you likely to trip today would be weak and full of holes, while my case for gravity is much stronger.
The OP seems to me like a fixation on a very small thing, a story of why a factor should have an effect, totally ignoring most other factors on the effect and most other effects of the factor.
Why is this down voted? It is a decent point.
However I feel obliged to point out that most of the stuff posted on LW/OB written in this style ("saying complex things with simple words") doesn't much bother to deal with effect size.
Edit: Can downvoters of these two posts please explain why this particular comment is wrong? I really can't see it.
His points are individually cogent, but something about the tone of the piece makes me suspect Dark Arts at work. I'm generally rather suspicious of arguments deriving a broad range of evolutionary consequences (particularly gloomy ones) from some social trend; it's too easy to privilege the hypothesis in such cases, and hard to prove when it happens. And nature vs. nurture is a topic I'm particularly suspicious of, since it bears directly on a number of social policy issues.
The post tells a plausible story, but within the space of plausible stories I don't see much that privileges it. If it was supported by data, that'd be another thing; perhaps you could look at the reproductive success of the relatives of people with schizophrenia or other externally obvious problems with heritable components, and compare between rural and urban settings. Or just take a survey on attitudes towards nature vs. nurture, although in that case you'd probably have to control for age and politics (rural areas skew older and more conservative).
(Anecdotally, I did grow up in a small town, and while I knew the parents and siblings of most of my friends growing up I don't think I had enough data to put together a clear picture of family traits. Upstate California in the 1990s is a far cry from the Kalahari, though.)
Did your parents and grandparents grow up in the same town? Those of your friends?
I'm going to go on record saying that the blog post and subsequent thread have made no sense to me so far. This is the sort of confusion that isn't common for me when I read things here.
Try reading and approaching this as you would a more cryptic and seemingly not-too-clever Robin Hanson post. Skimming through some of the posts, they seem to be employing signalling by not using big words even if it increases ambiguity. The person who misunderstands the point or considers it plain silly implicitly doesn't belong in the conversation in this style of writing. At the same time it gives some insight even to complete outsiders; basically it is a way to write to other metacontrarians to signal you are one of them and exclude the pesky dull contrarians, while tolerating some of the better behaved "uneducated". It is, I suspect, also just plain fun to write that way, since it lends itself easily to mocking regular contrarian opinions.
Those are wrong more often than they fail to be clever.
I think it is basically storytelling with truth constrained according to rules of Aristotelian inference. Whenever anyone tries to make an implication from that to reality, and actually make predictions, they can be sniped at by the game-players for failing to understand biology. Nothing useful about biology can be learned from this sort of thing.
It is basically inverse Talmudic exegesis.
That piles untrue assumptions atop each other according to elaborate reasoning until fantastical conclusions are reached - conclusions that would be important if true - and protects the merchants of such conclusions from incisive criticism except for by those who invested enough to be able to play the game (and meta-criticism).
This conjoins mundane observations to each other according to an entertaining narrative until a logically true and subjectively interesting influence in biology is discovered - regardless of the fact that the method used to reach the random-vectored conclusion, minus the constraint of having to be entertaining, would endorse countless other truths of similar magnitude and random vector.
Interesting - where are these findings reported?
I agree - though since most people have gross misunderstandings of genetics, then they might also think - "Well - they have the same parents and yet they're still so different!"- so then they might ascribe less to heredity too (and more to birth order or certain other environmental influences)
That's an insightful post, linking ideas that were previously unrelated in my mind - cultural differences between the city and the countryside; different opinions on heritability of traits; and possibly genetic profiling (as an alternative for knowing about the whole family history of someone).
Maybe an alternative way of phrasing it is that as society gets more complex, the effect of single variables is harder and harder to discern. Add to that that the "accumulated knowledge of the ancients" loses value as life changes faster and faster (one's grandfather's career advice is less and less useful), and it becomes harder and harder for individuals to predict their future (education, literacy and media work against that, but are probably not strong enough).
Consider a separate possibility: competition and opportunity abounds in urban areas, placing additional value on intelligence and skill acquisition. Since there is nothing which can be done about intelligence, really, focusing on skill acquisition is a better strategy. Parents who believe very thoroughly in the nurture argument may be much more willing to invest heavily in their child's education, expecting far greater benefits than are actually possible. Because the perceived value of success is higher it succeeds more often in the face of discounting.
In this case, the false belief is highly adaptive socially, with people adhering to it acquiring better positions in society. While this does not really lend itself to much genetic replication, meme spread should accelerate. I think that, prima facie, we should prefer this explanation; because it does not rely on stories LessWrongians may find aggrandizing, it is less likely that we will be accepting this narrative through bias.
The problem is that this is one out of uncountable equally plausible stories that, if true, explains a tiny effect on a meme's spread that varies in direction depending on the story. The effect of offspring's eventual financial success on this sort of child-raising meme's spread is negligible. Isolating it and finding the direction doesn't tell me about the important factors behind such memes.
I'm going to say there are four general problems with that, giving me one chance to be right and many to be wrong. Privileging the hypothesis, reversed stupidity is not intelligence, cultish countercultishness, the tragedy of group selectionism.
I'm not sure what it is that is (or is not) being explained. Phlogiston had fire, for example. There needs to be an unexplained phenomenon, or one has a fake fake explanation.
Phlogiston was a substance hypothesized to explain fire, my comment supposes an architecture of pre-existing mechanisms which appear just as plausible as what the OP proposes.
You've aggressively chopped from my comment relevant details, for example, the qualifier "prima facie", which negates your objections.
You're overly presumptive about memes, presuming that we need to personally observe a complete trajectory from baby to success. This is not so; it is sufficient that we observe highly skilled people who are financial successes and ask about their trajectory.
I'm going to make an aggressive assertion beyond what is relevant in this context to increase the chances for me to be wrong.
"Prima facie" isn't a statement that ever saves a person from privileging the hypothesis; rather, recognized stupidity avoided plus "prima facie" is a hallmark of privileging the hypothesis. And one has only literally, technically saved one's argument from succumbing to reversed stupidity if one is physically writing about a random thing with the justification that it is better than the stupid thing; one has not saved oneself from it, because the time spent on it remains spent.
The most well-established case in the world showing one car, chosen at random, is going north on the freeway at time t does not enable one to say anything important about the average direction of traffic on all freeways at time t.
Group selection is real, mathematically real, and present in all selection among sexually reproducing creatures. Its effect is not observable because that effect is swamped by the countless other paradigms that humans aren't programmed to (over) attribute. That's what is meant by "group selection is not real", that it is never a predominant explanation of any phenomenon.
The blog post is a random story crafted to appeal to humans and not be logically false. It can be defended by saying that all that is meant is logical truth of its stories being factors, but by Gricean implication, if one writes a blog post about an effect being real, one is claiming that this could be used to make a prediction and has more of an effect than the influence of Pluto's gravity on mating patterns of the Buffy-Tufted Marmoset.
If schizophrenia is skyrocketing due to its maladaptiveness diminishing, what's so adaptive about it that it's taking over the gene pool?
A popular answer to that nowadays is something like "creativity".
What? When something becomes less deadly, its frequency increases. That trend does not continue to the point of taking over the gene pool, and it shouldn't surprise you that the trend doesn't continue to that point.
Not necessarily. If an allele has no selection pressure on it then we should expect the frequency to be as likely to go up as it will be to go down. The situation is more complicated when one has multiple alleles interacting, but as a rough approximation in non-pathological contexts this should still be true. Since schizophrenia is largely genetic, we should expect the frequency to stay about the same.
There are some exceptional cases to this sort of logic. If for example one has a trait that is often inflicted by the environment (say deafness or blindness) then one should expect as it becomes less deadly that more people will survive with the trait and so the percentage with it will go up. But schizophrenia doesn't seem to act that way.
So, without a very detailed analysis, if the deleterious effects of schizophrenia have been reduced we should expect the percentage of the population that has it to stay roughly constant. However, if there are alleles which have positive selection effects by themselves but which have deleterious effects when found together (a likely scenario for a complicated mental trait like schizophrenia), then it may be that reduced negative selection pressure on schizophrenia makes those alleles have an equilibrium ratio in the population that has moved up. In that context, one would expect to see more schizophrenics.
The upshot is that without a lot more data about the underlying genetics, predicting an increase seems unjustified.
ETA: Curious about cause for downvote. Everything above is essentially what one will get in an intro genetics course. Nothing above should be controversial. Is this being downvoted as too trivial?
If an allele exists currently at frequency X, and the selection pressure on it changes upwards, what should we expect? The frequency to increase. Of course it is possible for the frequency to decrease, and I made no comments on the variance of that expectation.
Why is this the case? The deleterious effects of schizophrenia are that schizophrenics, and those suspected of sharing their genes, have fewer grandchildren. If those effects are reduced, that means there are more grandchildren.
I didn't downvote the comment, but I suspect someone saw it as trivially wrong.
Barring the rare cases where new copies of the allele are being generated from individuals who do not have copies of that allele (such as the example JoshuaZ gave), if the selection pressure on an allele is negative, we should expect its frequency to go down, although for rare recessive alleles this rate will tend to be extremely slow. If the selection pressure goes up, but continues to be negative, then we should expect the frequency to continue to decrease, but more slowly than before. The rate at which the frequency goes down will depend on the strength of the negative selection pressure, not on the negative selection pressure relative to whatever it used to be.
Having thought about this for a week, I think I've realized what the disagreement was about, and where I went wrong in expressing what I was thinking. I didn't distinguish between long-term averages and short-term averages, which was a mistake. My statements were wrong if we only knew short-run frequency but I believe were correct if we knew long-run frequency.
Consider the prevalence of a gene in a finite population in each generation as a Markov chain. If you start off in state i, that is, i individuals having the gene, the sum of the transition probabilities to lower numbers represents the chance that there are fewer individuals with that gene in the next generation, the sum of the transition probabilities to higher numbers represents the chance that there are more individuals with that gene in the next generation, and the remainder (the transition probability back to i) is the chance that nothing changes.
Selection pressure is related to the chance that the gene becomes less frequent compared to the chance that it becomes more frequent, and depends on the frequency. One can easily imagine situations in which a gene is pushed towards a frequency that isn't 0 or 1, but instead, say, .3, and so has positive selection pressure below that and negative selection pressure above it. (Frequency = Prevalence / Population Size)
For a deleterious gene, it can easily be the case that for all positive states the chance of transitioning downwards is greater than the chance of transitioning upwards. Even in that case, the stationary distribution of states on that chain will have a positive mean (because there are no negative states). Consider a two-state system, with p(1|0)=.1 and p(0|1)=1. The stationary distribution is (10/11, 1/11), with mean 1/11.
As the gene becomes less deleterious- the selection pressure becomes less negative- the stationary distribution will spread upwards. We expect the long-term mean will increase, and are less surprised to find the system in a state where a larger population is carrying that gene than we were before. In the same example, if we change p(0|1) to .9 then the stationary distribution is now (.9,.1), with mean .1.
Under such a view, it's obvious that when you decrease the selection pressure on a rare, deleterious condition, the long-term average of individuals with such a condition will increase, but will not grow to dominate the population.
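The two-state chain above can be checked numerically. This is a minimal sketch (the `stationary` helper and the row-stochastic matrix convention are my own framing, not from the thread), using the exact transition probabilities given in the example:

```python
import numpy as np

def stationary(P, iters=2000):
    """Approximate the stationary distribution of a row-stochastic
    transition matrix P by power iteration."""
    pi = np.ones(len(P)) / len(P)
    for _ in range(iters):
        pi = pi @ P
    return pi

# Strongly deleterious gene: p(1|0) = .1, p(0|1) = 1
P_harsh = np.array([[0.9, 0.1],
                    [1.0, 0.0]])   # stationary distribution (10/11, 1/11)

# Less deleterious gene: p(0|1) reduced to .9
P_mild = np.array([[0.9, 0.1],
                   [0.9, 0.1]])    # stationary distribution (.9, .1)

# Long-run mean number of carriers (state value 0 or 1)
mean_harsh = stationary(P_harsh) @ np.array([0.0, 1.0])  # ~1/11
mean_mild = stationary(P_mild) @ np.array([0.0, 1.0])    # ~0.1
```

As claimed, weakening the downward transition raises the long-run mean without the gene ever dominating.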
No. This doesn't follow. Consider for example an allele that is normally recessive and in the homozygous case is nearly lethal. Such an allele will generally be pushed to a very low frequency. The only way such an allele stays at a substantial fraction of the population is if it has a constant influx of new copies. (For example, Huntington's disease is sort of this way. The allele is dominant and extremely negative in that form, and is homozygous lethal, but Huntington precursor alleles are constantly mutating into new cases of Huntington's, and the specific biochemistry of the allele in question makes this much more likely.) Now, suppose an allele has no impact in the heterozygous case. As the allele becomes extremely rare, the selection pressure will drop more and more, to the point where it becomes negligible. Now, consider what happens if we discover a cure for this very rare disease that occurs in the homozygous case, or we make it much easier to survive. What should we expect to happen to the frequency in the population? We should expect it to stay roughly constant, because there's no positive selection pressure.
In general, decreasing negative selection effects does not increase the frequency of an allele.
I suspect I'm being unclear. I'm not discussing a state where we have good knowledge of the underlying mechanics, but one where we have some original frequency of a heritable condition, and then we make people with that condition / their relatives more likely to procreate than they were before. The equilibrium has shifted, and it has shifted upwards. We don't need to know the strength of the selection pressures (positive and negative) or their mechanisms to make that prediction; we just know that the scales were probably balanced before, and we pulled some weight off of one side. The scales should tip away from the side we pulled weight off of.
Yes, you are being clear, and this doesn't follow. It might help to reread my example. If we reduce a negative selection pressure it doesn't mean that things will shift. In the example I gave there's no real equilibrium, the allele just gets to stay under the radar of evolution because it is so rare evolution doesn't get a chance to act on it. (This is by the way a well-known ev-bio issue, that bad recessive alleles can easily stay at low levels in a population.) Making the allele have a less negative selection pressure won't necessarily change that state. If the pressure is moved to close to zero then one then expects neutral drift to occur as usual which can move things up or down, and if the pressure is still negative then it should stay about where it is unless neutral drift moves it a bit downwards.
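The "under the radar" point can be illustrated with the textbook infinite-population model of selection against a recessive homozygote (a sketch; the fitness scheme AA = Aa = 1, aa = 1 - s and random mating are standard assumptions, not anything stated in the thread):

```python
def next_freq(q, s):
    """One generation of selection against a recessive homozygote
    under random mating: q is the allele frequency, s the selection
    coefficient against the aa genotype."""
    mean_fitness = 1.0 - s * q * q
    return q * (1.0 - s * q) / mean_fitness

q = 0.01                     # rare recessive allele
for _ in range(100):
    q = next_freq(q, s=1.0)  # s = 1: the homozygote is lethal

# Even with a lethal homozygote, 100 generations only halve the
# frequency: the closed form is q_n = q0 / (1 + n*q0), so q ≈ 0.005 here.
```

Because selection only "sees" the allele in the vanishingly rare homozygotes, the decline slows to a crawl exactly as described.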
If there is still an influx of new copies due to mutation, then the frequency will increase because there's now less selection pressure driving the mutations out.
Influx of new copies for most alleles is generally negligible for any specific allele. Examples like Huntington's are extremely rare. The probability that any mutation will arise more than once in the population is generally extremely small. Standard genetic models often don't even bother taking into account the chance that a mutation will be matched because the chance is so small.
That's a reason for it to stop being selected against. For it to actively spread as is being claimed above, it's got to be contributing something, or very very lucky.
Suppose you have two types, A and B, and each type gives birth to itself with fidelity .999. (That is, out of a thousand births from As, you have 999 As and 1 B, and the same but reversed for Bs.) As have 1 child each on average; Bs have .5 children each on average. There is an equilibrium ratio of As to Bs in the population that you can figure out pretty easily.
Now suppose the number of children the Bs have rises to .9. What happens to the ratio of As to Bs?
The selection pressure is still in favor of As, but even when it's strong Bs are still around because of the mutations. (With sexual selection, and multiple genes, it becomes easier for B contributors to stick around in the gene pool.) When you decrease the edge that As have over Bs, the equilibrium number of Bs increases.
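The toy model can be iterated directly to find that equilibrium. A sketch (the function name and the choice to iterate to convergence rather than solve algebraically are mine; the fitnesses and mutation fidelity are taken from the example above):

```python
def equilibrium_B(w_A=1.0, w_B=0.5, mu=0.001, generations=20000):
    """Iterate a haploid mutation-selection model to its equilibrium
    frequency of type B. w_A and w_B are expected offspring counts;
    mu is the per-birth chance of producing the other type."""
    a, b = 1.0 - mu, mu
    for _ in range(generations):
        a_next = a * w_A * (1 - mu) + b * w_B * mu
        b_next = b * w_B * (1 - mu) + a * w_A * mu
        total = a_next + b_next
        a, b = a_next / total, b_next / total
    return b

b_low = equilibrium_B(w_B=0.5)   # roughly mu/s = 0.001/0.5 ≈ 0.002
b_high = equilibrium_B(w_B=0.9)  # roughly 0.001/0.1 ≈ 0.010
```

The equilibrium frequency of B comes out near mu/s, the classical mutation-selection balance, so shrinking A's edge from 0.5 to 0.1 raises the equilibrium number of Bs about fivefold.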
That assumes a ridiculously high mutation rate. For the vast majority of alleles the mutation rate isn't what matters but the selection rate.
Sure, it's a toy example without sex. Even so, if the mutation rate were one in a million instead of one in a thousand, you would still have an equilibrium ratio of As to Bs. When you add in sex and precursor genes (that is, you don't have schizophrenia unless you have two copies of an allele, or you need multiple different alleles, etc.), then the selective pressure depends on the prevalence: as the condition gets rare, the selection pressure on the precursors lowers, because potential mates are unlikely to have the other half necessary to produce the condition. (This gives you another equilibrium ratio for the number of people with the condition.)
The real incidence of schizophrenia is about one in two hundred. That suggests there's something going on beyond mutation- either some of the schizophrenia precursors are positive, or it's caused by a virus, or genes just determine susceptibility, or the heredity has to do with prenatal environments, or so on. (With inclusive 'or's.)
Sure, those are all possibilities. But there are other possibilities also. For example, it could be that the selection pressure was really only high fairly recently. Most people who get schizophrenia get it sometime between around 15 and 35 years of age, and for a large fraction the symptoms come and go. So in many classical societies they would have had time to reproduce. Moreover, in some societies people with symptoms became things like shamans. So the selection pressure would likely have been not nearly as negative in the past, possibly to the point where neutral drift could account for a decent fraction of the alleles.
All of that said, I agree that it is likely that some of the alleles which produce a likelihood of schizophrenia at some point had positive selection pressures on them for other reasons. But humans are so far from our ancestral environment that alleles which once had positive selection pressure don't necessarily have much or any today.
In that case why is the allele still around at all?
Consider a deck of cards that is randomly shuffled. It must come to some arrangement. Now consider the chance that shuffling another deck gives the same result. That's only 1 / 52!, which is around 10^-68. But if someone said that therefore no deck of cards is ever shuffled, they'd be wrong. Similarly, consider a gene of 600 base pairs coding for a protein. The chance that a mutation occurs in that specific gene at any given time is pretty small, and the chance that the exact same mutation occurs will be much smaller, by roughly two further orders of magnitude (assuming just single substitution errors).
The key is that mutations occur but repeated mutations don't generally occur unless there's something very weird going on like in the case of Huntington's where there's a whole family of bad alleles and there's a biochemical quirk which makes the mutations much more likely.
Pardon me if I'm being obtuse, but wouldn't we expect "a whole family of bad alleles" to be the usual case, since you can break a protein in any number of different ways?
I've heard that some fairly high percentage of hemophilia A and B cases are de novo mutations (a quick Google turned up this). I'm sure it's because hemophilia is pretty lethal and often doesn't get the chance to be inherited, but it's another case where mutation rates do seem to matter.
Yes, hemophilia is an example like Huntington's where there's a family of alleles. And of course, in that case, the allele is extremely lethal, killing a large fraction of males, and killing any female that is homozygous. So the allele has to stay really rare.
In general, though, for most proteins it is surprisingly difficult to break them. Most mutations will actually be neutral. They will be neutral either because the mutated codon codes for the same amino acid, or for a chemically similar one, or because it falls in a section of the protein that provides something like structural support, where any phenotypic effect it might actually have just doesn't matter much either way. Many other mutations might have a negative effect, but it won't be the same negative effect, or it will be a negligible one. Moreover, some of the negative effects from mutated proteins arise not because the protein is now broken at what it normally does, but because the protein, in addition to what it is supposed to do, now gums something else up, or isn't as easily broken down, or something like that. Those sorts of things also require specific mutations to occur. So in general, it is very rare for a mutation to get repeated.
Not necessarily, see Eliezer's post Evolving to Extinction.
Just posted a link to this thread in a comment on the blog itself, hope I didn't violate any netiquette by doing so.
gcochran commented on the LW discussion on the blog: