Edit: Q&A is now closed. Thanks to everyone for participating, and thanks very much to Harpending and Cochran for their responses.

In response to Kaj's review, Henry Harpending and Gregory Cochran, the authors of The 10,000 Year Explosion, have agreed to a Q&A session with the Less Wrong community.

If you have any questions for either Harpending or Cochran, please reply to this post with a question addressed to one or both of them. Material for questions might be derived from their blog for the book, which includes stories about hunting animals in Africa with an eye towards evolutionary implications (which came to Jennifer's attention via Steve Sailer's prior coverage).

Please do not kibitz in this Q&A... instead go to the kibitzing area to talk about the Q&A session itself. Eventually, this post will be edited to note that the process has been closed, at which time there should be no new questions.



I haven't read your book yet, so forgive me if you discuss this there. But I’ve been wondering:

Simple traits (such as an organism's height) are probably relatively easy to alter via genetic mutations, without needing to combine many different genes chosen from huge populations. So, e.g., dog breeding altered dogs’ size relatively easily.

Complex adaptations aren’t nearly so easy to come by.

If intelligence is a conceptually simple thing, there might be simple mutations that create “more intelligence” -- it might be possible to make smarter people/mice/etc. by tuning a setting on an adaptation we already have. (E.g., “make more brain cells”).

If intelligence is instead something that requires many information-theoretic bits to specify, e.g. because “intelligence” is a matter of fit between an organism’s biases and the details of its environment, it shouldn’t be easy to create much more intelligence from a single mutation. (Just as if the target was a long arbitrary string in binary, and the genetic code specified that string digit by digit, simple mutations would increase fit by at most one digit.)

From the manner in which modern human intelligence evolved, what’s your guess at how simple human (or animal) intelligence is?

You are even meaner than Shulman. We don't know how human intelligence evolved, and I think we need to know that in order to answer your question. This is where evolutionary psychology and differential psychology (am I using that term right?) must come together to work this out.

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way, the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence. BRCA1 is another early CNS growth suppressor, and we speculate in the book that a mildly broken BRCA1 is an IQ booster even though it gives you cancer later. BTW Greg tells me that there is a high correlation between IQ and the risk of brain cancer, perhaps because of the same mechanism.

But these ways of boosting IQ are Red Green engineering. (Red Green is a popular North American comedy on television. The hero is a do-it-yourselfer who does everything shoddily.)

On the other hand IQ seems to behave like a textbook quantitative trait and it ought to respond rapidly to selection. We suggest that it did among Ashkenazi Jews and probably Parsis. IQ does not seem to have a downside in the general population, e.g. it is positively correlated with physical attractiveness, health, lifespan, and so on. Do we get insight into the costs of high IQ by looking at Ashkenazi Jews? Do they have overall higher rates of mental quirks? Cancer? I don't know.


You are even meaner than Shulman.

They're engaged. :)

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence.

There is now substantial evidence that there is a causal link between prolonged focusing on close objects - of which probably the most common case is reading books (it appears that monitors are not close enough to have a substantial effect) - and nearsightedness/myopia, though this is still somewhat controversial. This is the typical explanation for the correlation between myopia and IQ and academic achievement.

A genetic explanation is possible, and would be fascinating, but I wouldn't want to accept that without further evidence. If the genetic explanation is true and environment makes no contribution, then I think one should find that IQ is more highly correlated with myopia than academic achievement -- I don't know if this has been found or not.

It's like saying "if evolution is true, crocoducks should exist". You are (deliberately?) misrepresenting your opponent's views. He meant that of all the genetic variation affecting IQ, only a small, but non-negligible, subset affects both myopia and IQ. However, I still don't quite get how a larger brain can cause myopia rather than hyperopia.
Maybe the larger brain leads to more intelligence, and people with more intelligence read more, and reading more leads to myopia. (Whether reading actually leads to myopia can be questioned, but that doesn't affect the point.)
More correlated than academic achievement is correlated with IQ, or with myopia? Your comment is a very good point. But IQ may be more-closely correlated with academic achievement than academic achievement is with reading books; so this comparison might not help. (And you want to talk about the variance in X accounted for by Y but not by Z, rather than place a bet on whether Y or Z has a higher correlation with X.)
Yes, of course. But remember that in science we are not in the business of "accepting" one thing or another. That is the domain of religion and politics. The only thing that matters is finding good hypotheses and testing them. HCH
Wei Dai:
That's interesting. I found a 2006 paper which argued that a genetic mutation is responsible for myopia, and that it also increases intelligence, but the specific gene and mechanism involved were apparently still unknown at that time. Has there been some more recent research results on this topic?
There is apparently a research group in China that has some solid results but I have not seen them and do not know if they are out yet. HCH

From The 2% Difference, an article by Robert Sapolsky:

Given the outward differences, it seems reasonable to expect to find fundamental differences in the portions of the genome that determine chimp and human brains—reasonable, at least, to a brainocentric neurobiologist like me. But as it turns out, the chimp brain and the human brain differ hardly at all in their genetic underpinnings. Indeed, a close look at the chimp genome reveals an important lesson in how genes and evolution work, and it suggests that chimps and humans are a lot more similar than even a neurobiologist might think.


... Still, chimps and humans have very different brains. So which are the brain-specific genes that have evolved in very different directions in the two species? It turns out that there are hardly any that fit that bill. This, too, makes a great deal of sense. Examine a neuron from a human brain under a microscope, then do the same with a neuron from the brain of a chimp, a rat, a frog, or a sea slug. The neurons all look the same: fibrous dendrites at one end, an axonal cable at the other. They all run on the same basic mechanism: channels and pumps that move sodium, potassium, and calcium arou

... (read more)
If that's actually correct, we should be able to just breed a superintelligence. Maybe not one as powerful as an AI gone foom, but still orders of magnitude higher than us mortals. Unless he claims at some point that humans reached some sort of hard limit, but it seems vastly more likely that huge brains are costly and we're the point where the tradeoffs balanced.
Supposedly human brain size is limited by the skulls that will fit out of our mothers, and human babies are actually born premature relative to other species because it's only when we are premature that our skulls will still fit out. Of course, we have cesarean births now, so...
Great points. And, since we're born premature as you said, there's already a partial workaround even if you need "natural" births for some reason (potential complications from the surgery?)
That's not really a new idea :P All those sci-fi worlds with brain bugs and future humans worshiping the Morlock king knew that.
It must be simple in some way, since it is so heritable. People with IQs of 90 and IQs of 140 both prosper and do fine, although there are lots of statistical differences between two such groups. On the other hand, if we take a trait like "propensity to learn language in childhood", this seems to me to be relatively invariable and fixed, and so probably very complex. Certainly one could breed for IQ and raise the population mean a lot. But what would we be doing to our children? People with 140 IQ seem to do all right, but I would worry a lot about the kind of life a kid with an IQ of 220 would have.

Do you see any difficulties for very high IQ children other than isolation?

It's a little much to expect people to have so much patience, but making moderate IQ increases generation by generation, with large numbers of increased-IQ children in each generation, would do a lot to solve the social problems.

An IQ 220 kid will do just fine in the company of other IQ 220 kids and teachers.
Moved to the kibitzing thread.

Michael Vassar is having trouble accessing this site right now, so asked me to relay this question:

You mention in your book (p. 69) that from 100,000 BC to 12,000 BC, the human population increased from half a million to six million thanks to better hunting tools and techniques. On the other hand, from page 100 onwards, you discuss Malthusian limits to population, implying that the sizes of primitive populations were proportional to the amount of food available. In other words, you seem to be saying that from 100,000 BC to 12,000 BC, the human population grew because better hunting techniques increased the availability of food.

But better hunting technologies won't generally tend to raise Malthusian limits strongly. While hunting better will mean that new prey become exploitable, it also means that old prey are continually hunted to extinction. The net result isn't a systematic trend. How strong is the evidence for any prehistoric population sizes? How do the implied population densities compare to those for other large omnivores, such as black bears and pigs, in their territories, or to the population densities at which Chimps live? Why would human densities have been much lower?

 Better hunting techniques can significantly raise Malthusian limits. 

First, you have to remember that old-fashioned humans were one predator among many: improved hunting techniques could raise our share of the pot, as well as decreasing other predators' tendency to eat us. Also, modern humans seem to have used carcasses more efficiently than Neanderthals: they had permafrost storage pits and drying racks, so could have preserved meat for long periods. Neanderthals didn't, and I think they must have wasted a lot. Next, moderns used snares, traps, nets, bows etc to catch smaller game not much harvested by Neanderthals: they also made more use of fish and molluscs. And lastly, more plant foods. Altogether, their innovations gave them a larger share of the game, used that share more efficiently, tapped marine resources (lots of salmon in Europe), and harvested resources at a lower trophic level ( plants for example), which are always more abundant.

Hunting to extinction happened in some places, but not everywhere: it hardly happened in Africa at all. It happened most in places with no previous hominid occupation.

Implied population densities are, I think, e... (read more)

If a trait is being selected for, the alleles with large positive effects will compound with a faster growth rate than those with small effects (even if there are initially many more small-effect alleles) and tend to account for a large portion of the heritability of that trait (at least until they have almost swept the population).
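The compounding in this claim can be illustrated with a toy deterministic model (the starting frequency, selection coefficients, and generation count below are made-up numbers, not from the book): under simple genic selection, the odds p/(1-p) of an allele multiply by (1+s) each generation, so growth compounds exponentially in the effect size s.

```python
def sweep(p0, s, generations):
    """Deterministic allele-frequency change under genic selection.
    The odds p/(1-p) multiply by (1+s) each generation, so the
    closed form below is exact for this model."""
    odds = p0 / (1 - p0) * (1 + s) ** generations
    return odds / (1 + odds)

# Two new mutations, both starting at 1-in-20,000 (illustrative):
p_large = sweep(5e-5, 0.05, 500)   # 5% advantage: essentially fixed
p_small = sweep(5e-5, 0.005, 500)  # 0.5% advantage: barely moved
```

After 500 generations the large-effect allele is essentially at fixation while the small-effect one is still rare, which is why large-effect variants can dominate heritability mid-sweep even if small-effect variants vastly outnumber them at the start.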

You suggest that psychological traits such as personality and cognition have been subject to recent positive selection, so why haven't GWAS (or targeted investigations, e.g. microcephalin) found much in the way of common large effect alleles for psychological traits? What are your best guesses on the genetic architectures of personality and cognition?

Yikes! This is worse than my PhD orals.

There have been some (tentatively) identified like the 7-repeat version of the D4 dopamine receptor, the serotonin transporter, and others that Greg will be able to dredge up from his memory.

We may have found others but not identified them. Imagine that it would be highly beneficial to have a little bit less of substance s. If so then a mutation that broke the gene producing s would be favored a lot and would sweep until people with two copies of broken s started being born. How likely is it now that two broken copies of s will still work? A lot of the sweeps identified from SNP scans seem to have stalled out at intermediate frequencies (as opposed to going to fixation) suggesting that heterozygote advantage is widespread.
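Harpending's stalled-sweep scenario corresponds to the textbook overdominance model; here is a minimal sketch with illustrative fitness values (my own numbers, not from the book): one broken copy helps, two broken copies hurt, so the allele rises and then stalls at an intermediate equilibrium instead of fixing.

```python
def next_freq(p, w11, w12, w22):
    """One generation of viability selection at one locus.
    p = derived-allele frequency; w11/w12/w22 = fitnesses of
    derived homozygote, heterozygote, ancestral homozygote."""
    q = 1 - p
    wbar = p * p * w11 + 2 * p * q * w12 + q * q * w22
    return p * (p * w11 + q * w12) / wbar

# Heterozygote advantage: het best, derived homozygote worst.
p = 0.01
for _ in range(5000):
    p = next_freq(p, w11=0.90, w12=1.05, w22=1.00)
# p converges to the balance point
# (w12 - w22) / ((w12 - w11) + (w12 - w22)) = 0.05 / 0.20 = 0.25
```

The frequency stalls at 25% rather than sweeping to fixation, which is the intermediate-frequency signature Harpending describes in SNP scans.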

If so, genome-wide association studies ought to find them, and they do find a lot, but many of the findings are not replicable. So after all the above I have no coherent answer to your question!

Do you have an overall view on the feasibility and timeline for genetic engineering of human intelligence?

For example, at what odds would you bet that we will have the ability to create hundreds of IQ +6 sigma super-geniuses by 2020 (for a reasonable cost, e.g. total project cost <$1bn)? 2030? 2040? 2050? 2075?

This is quite relevant for people interested in the singularity, because if it is highly feasible (and there are some who think it is), then it could provide a route to singularity that is independent of software AI progress, thereby forcing a rational observer to include an additional factor in favor of extreme scientific progress in the 21st century.

I would say that it is in some sense obvious that higher intelligence is possible, because the process that led to whatever intelligence we have was haphazard (path-dependent, stochastic, and all that) and because what optimization did occur was under severe constraints - some of which no longer apply. Clearly, the best possible performance under severe constraints is inferior to the best possible with fewer constraints.

So, if C-sections allow baby heads to get bigger, or if calories are freely available today, changes in brain development that take advantage of those relaxed constraints ought to be feasible. In principle this does not have to result in people who are damaged or goofy, although they would not do well in ancestral environments. In practice, since we won't know what the hell we are doing... of course it will.

Still, that's too close to an existence proof: it doesn't really tell you how to do it.

You could probably get real improvements by mining existing genetic variation: look at individuals and groups with unusually high IQs, and search for causal variants. Plomin and company haven't had any real success (in terms of QTLs that explain much of the variance) but f... (read more)

I think that this comment highlights the fact that SIAI has a major brand management problem: SIAI is not concerned with "acceleration" of "progress", but with the development of smarter-that-human AI -- which could occur at a point in time where technology and economic indicators show growth, stagnation or even decline. But those who push the "acceleration" of "progress" brand, have about 10^3 times our marketing budget. No disrespect to Gregory -- it is simply the case that the marketing and info that's out there has turned the "Singularity" brand sour -- the term has lost any precise meaning.
If the problem is Kurzweil's message, then it probably doesn't help SIAI's brand that he's listed second. Anecdotally, I'd say you're absolutely right and that SIAI's prospects could be substantially improved by jettisoning the term "singularity". I'm someone whom SIAI should want to target as a supporter, and I've mostly come around, but the term "singularity" just radiates bad juju for me. I think I'm going to apply for a visiting fellow spot, but frankly, I'm not especially comfortable telling friends and family that I'm planning to work at a place called the Singularity Institute for Artificial Intelligence and not get paid for it (I'm hoping they don't have the same reaction to the word that I did). I suspect I would have been more supportive earlier if SIAI had been called something else.
I concur. Whenever I describe what I would be doing if I volunteered for SIAI, I avoid mentioning its name entirely and just say that they deal in "robotics" (which I tend to use instead of AI) at the "theoretical level" and that they want to bring to the "level of human intelligence" and that they study "risks to humanity". Of course, this is all "counting chickens 'fore they're hatched" at this point, because I haven't sent my email/CV to Anna Salamon yet...
Ah, go on Silas. I'm especially sure Alicorn will be delighted to meet you at the SIAI Benton house ;-)
But current predictions of what happens when smarter than human AI is made, somewhat rely on there being a positive relation between brain/processing power and technological innovation. The brain power and processing power of humanity is ever increasing, more human population, more educated humans and more computing power. We can crunch ever bigger data sets. The science we are trying to do requires us to use these bigger data sets as well (LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and we are left with the increasingly complex, and similar problems will happen to AI if it tries to self-improve. The question would be whether the rate of self-improvement would be greater than or less than the rate of increasing difficulty of the problems it had to solve to self-improve.
Thanks for the response. (Consider the following question in a Bayesian spirit, i.e. the spirit of giving a probability to any event, even if you don't have an associated frequency for it) If you had to bet on whether the technology for these genetic engineering efforts (NOT the political will) will be ready by e.g. 2030, 2040, 2050, 2075, 2125, what kind of odds/probabilities would you bet at?
I have heard of the theory that a human with the "consensus" genome would be way above average in phenotype. Any idea how much?
I have heard discussion about the singularity on the web but I have never had any idea at all what it is, so I can't say much about that. I do not think there is much prospect for dramatic IQ elevation without producing somewhat damaged people. We talk a lot in our book about the ever-present deleterious consequences of the strong selection that follows any environmental change. Have a look for example at the whippet homozygous for a dinged version of myostatin. Even a magic pill is likely to do the same thing. OTOH scientists don't have a very good track record at predicting the future. Now, I am going to hop into my flying car and go to the office -:) HCH
You could contact Anna Salamon or Carl Shulman for a well-written introductory piece on the singularity. Very short summary: if we humans manage to scientifically understand intelligence, then the consequences would be counter-intuitively extreme. The counter-intuitiveness comes from the fact that humans struggle to see our own intelligence in perspective:

* both how extreme and sudden its effects have been on the biosphere,
* and the fact that it is not the best possible form of intelligence, not the final word, but more like a messy first attempt.

If one accepts that intelligence is a naturalistic property of computational systems, then it becomes clear that the range of possible kinds or levels of intelligence probably extends both to much narrower and dumber systems than humans and to much more able, general systems.
Interesting. Would these people be so damaged that they would be unable to do science? Or would you be expecting super-aspergers types? (Or, to put it more rigorously, what probability would you assign to dead/severely disabled vs. super-aspergers/some other non-showstopping deleterious effect?)

I don't know but I can give you some candidates. One is torsion spasm (Idiopathic Torsion Dystonia). It will give you about a ten point IQ boost just by itself. Most of the time the only effect of the disease is vulnerability to writer's cramp, but 10% of the time it puts you in a wheelchair. So you could do science just fine.

Similarly, the Ashkenazi form of Gaucher's disease is not ordinarily all that serious, but it also gives a hefty IQ boost. Asperger-like stuff would probably also increase: many super-bright people seem to be a bit not quite. Of course, lots of other super-brights seem to be completely normal.

I am just babbling, I have no special insight at all...


That is very interesting, thanks. The only question that remains in my mind is what the timescale for this is: both the "when will it become technically feasible" and "when will political and economic factors actually cause it to happen".

The Neanderthal genomics work showing a few percent of non-African human genomes inherited from Neanderthals suggests that any individual handy Neanderthal alleles would have needed only a few doublings to reach fixation. Any news on whether the Neanderthal variants show more or less post-mixture selection than you would have expected?

Hi Carl:

No word on that yet. They identified regions of the genome where there are (1) deep gene trees in Europe and/or Asia, (2) we share variants with Neanderthals, and (3) these shared variants are absent in Africa, and they found a lot of them. But if some variants in Neanderthals were positively selected in humans very early on then they would have spread through all humanity, and no one has scanned for those yet.

Our favorite candidate is the famous FOXP2 region, without which one has no speech. Every human has it, and the diversity near it on the chromosome suggests that it is 42,000 years old in humans. Neanderthals have the human version (so far), so a likely scenario is that we stole it from Neanderthals.


Paabo seems to think it unlikely that any of these introgressed alleles had a significant selective advantage in humans, but that's unlikely. I'll bet money on this.

To be fair, I should explain why that is a sucker bet. John Hawks and I discussed a situation with just a few tens of matings over all time: we were making the point that even in that minimal scenario, alleles with large advantages (on the order of 5%) could jump over to modern humans. The Max Planck estimate of 2% Neanderthal admixture is far more favorable to introgression: with that much of a start, and with at least 50,000 years to grow in, any allele with a selective advantage > 0.2% is likely to be over 50% today. Many such Neanderthal alleles should be fixed in Eurasians - or in some Eurasian populations in the right environments - or even in Africans, if the allele conferred global advantages. Of course we'd have trouble proving this in Africans: the Science study really shows how much more Neanderthal ancestry Eurasians have than Africans, not the absolute amount in either population.
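Cochran's arithmetic here can be checked with a one-function deterministic model (the 25-year generation time is my assumption): under genic selection the odds p/(1-p) grow by a factor of (1+s) per generation, so 2% admixture plus a 0.2% advantage over ~2,000 generations lands just above 50%.

```python
def sweep(p0, s, generations):
    """Deterministic allele-frequency growth under genic selection:
    p' = p(1+s) / (1 + p*s), i.e. the odds grow by (1+s) per generation."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

# 2% starting frequency, s = 0.2%, 50,000 years at ~25 years/generation:
p_final = sweep(0.02, 0.002, 50_000 // 25)
```

p_final comes out at roughly 53%, consistent with the "likely to be over 50% today" claim; alleles with larger s would be at or near fixation on the same timescale.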

Note that the Fisher-wave velocity goes as the square root of the selective advantage: a Neanderthal allele with an... (read more)

What are your thoughts on the Flynn effect?

It is an interesting puzzle. This was a secular rise in cognitive test scores, well documented in a number of countries during the 20th century. It has stopped and even reversed in the last few decades. There seem to be several plausible ideas out there. One is that social changes have had the effect of "training" people for cognitive tests: more magazines, radio, chatter everywhere, advertising, etc. It's a hard idea to test. I do fieldwork in Southern Africa. Forty years ago there were no radios in the backcountry, no books, no magazines. Today radio, newspapers, and magazines are everywhere. I expect that this changes people a lot, but I have no evidence. Flynn himself thinks nutrition got better, but the data are not clear about that. I would favor as an explanation vaccination and antibiotics. Infectious disease, and the inflammation associated with it, does seem to damage people (Caleb Finch, Eileen Crimmins, others). We have cut the intensity of childhood insults way down everywhere. My two cents...
I doubt Flynn thinks much of the nutrition hypothesis any more; his recent paper 'Requiem for nutrition as the cause of IQ gains' argues against nutrition as a major cause of IQ gains in developed nations. He would likely agree with you that the kinds of social changes you're thinking of had a big impact; I seem to remember him writing in his book from three years back that contemporary people make more of a habit of thinking about things abstractly, and learn more of the mental tools needed to do well on IQ tests.

When I did fieldwork in the late 1960s in backcountry Botswana I hit upon the idea of asking my sister (a dairy farmer) to send me a box of back issues of American cattle magazines. It was unbelievable: I could have made a fortune selling pictures from them, not to mention whole issues, to the local cattle people. At that time people carefully hoarded little scraps of paper to use for writing messages.

In the late 1980s I brought some more such magazines with me, and no one was interested at all. The media storm had penetrated and everyone had school textbooks, magazines, radios, etc.

My main interest is how language barriers control how information, like cattle-farming best practices, bounces around.
Interesting. If mass media have only started to penetrate parts of Southern Africa in the last 40 years or so, I wonder if the Flynn effect is still happening there. Editing this comment to add: I did a quick Google Scholar search and didn't find Flynn effect studies for Southern Africa. The best I could get were papers on IQ rises in Sudan and rural Kenya.
A similar argument was made in the book Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter.
I suspect that people want more complex popular culture because they've gotten smarter at least as much as the more complex culture making them smarter by accident. Anyone have any actual knowledge of why tv shows started doing longer, more complex story arcs?
I have no such knowledge, but allow me to add "better recording and rewatching options" to the list of candidates. Ready access to the backlog is certainly a factor in the success of serials in webcomics over newspaper comics, for example. (Yes, there are serials in both, but they are the norm in webcomics and the exception in print.)
Not to mention viewer base fragmentation. There is less need to appeal to the so-called lowest common denominator when there are hundreds or thousands of avenues for transmission. Those without patience for long story arcs can watch a different program more easily today than they could before cable, satellite, and the internet.
I think there's some evidence that the Flynn effect isn't just about IQ tests: for example, I think it's only been within the past 30 years that there have been popular books about popular culture.
How are popular books about popular culture an indicator of rising IQ? You mean, e.g., a book about Michael Jackson? Science fiction blossomed in the 1930s. Educational books became big in the 1950s, I think. Self-help books became huge 40 or 50 years ago. Parenting books became huge in the 1960s. Popular sociology books date back to before Future Shock, printed 40 years ago. I have the impression of a big increase in IQ when I listen to old radio comedy shows, pre-World War II. The humor is so simple and repetitive and uninteresting that I get the feeling the US must have consisted of adult-sized children. Maybe it's because radio was a new medium; but a lot of it was just a restaging of vaudeville humor that had been successful for decades.


I have the impression of a big increase in IQ when I listen to old radio comedy shows, pre-World War II. The humor is so simple and repetitive and uninteresting that I get the feeling the US must have consisted of adult-sized children. Maybe it's because radio was a new medium; but a lot of it was just a restaging of vaudeville humor that had been successful for decades.

I have the same impression, though it could be partly due to the growth and specialization in the pop-culture market, so that the sample you happen to see today is mainly from the output targeted at smarter audiences. But the difference seems too large to explain just by that effect; the old shows are often truly mind-numbingly dull, as you describe. There was a post about this topic a few years ago on Marginal Revolution with some striking diagrams: http://www.marginalrevolution.com/marginalrevolution/2005/04/tv_and_the_flyn.html

What makes it even more puzzling is that these apparent huge increases in average folks' sharpness were not accompanied by anything similar at the higher levels of intellectual accomplishment. In many countries, a teacher or professor who taught for, say, 30 years during the se... (read more)

The explanation that Flynn describes in his book, What is Intelligence?, is basically that modern culture gives us extra practice in many of the subskills that require a lot of intelligence. That, however, doesn't increase intelligence itself - it only makes us better at doing tasks that require those subskills.

This doesn't mean that IQ tests would have lost their value, either. If, say, everyone in the population ends up exercising an additional five hours per week, then everyone's athletic ability does go up, but it's still the ones who were the most athletically talented in the beginning who end up having the best results. The same principle applies for IQ: "general intelligence" + "domain-specific talent" + "amount of practice had" is probably a pretty good formula for figuring out how good you are at something, and if everyone gets roughly the same amount of extra practice, the tests remain a good way of distinguishing the one with the highest IQ. In practice, the IQ tests' validity might be even better than this alone would imply.

The obvious question this raises is, "but does the whole population get the same amount of extra practice?". In all likelihood, the answer is no - but it's very possible that for a lot of things, those with the highest IQ get the largest amount of extra practice, since they will naturally find simple things boring and seek out the most complex things. Thus the amount of practice, itself, likely correlates with IQ.
One possibility is that our educational systems haven't caught up to the increase in general intelligence. Another is that people who could be making major contributions are distracted by the complexity of popular culture. :-/
I'd generalise that: maybe a more complex and IQ-oriented culture means people have to run faster just to stay in the same place, intellectually.
That may be the case, but I still don't find the explanation satisfactory from the point of view of the classic general intelligence theory (not that I have a better alternative, though).

To clarify, the traditional theory of general intelligence, which is taken as a background assumption in most IQ-related research, assumes that general intelligence is normally distributed in the general population, and that any reasonable measure of it will be highly correlated with IQ test scores (which are themselves artificially crafted to produce a normal distribution of scores). Moreover, it assumes that people whose intellects stand out as strikingly brilliant are drawn -- as a necessary condition, and not too far from sufficient -- from the pool of those whose general intelligence is exceptionally high.

Now, if the scores on IQ tests are rising, but there is no visible increase in outstanding genius, it could mean one or more of these things (or something else I'm not aware of?):

* We're applying higher criteria for genius. But are we really? Has the number of people at the level of von Neumann, Ramanujan, or Goedel really increased by two orders of magnitude since their time, as it should have if the distribution of general intelligence has simply moved up by 2SD? (Note that for any increase in the average, ceteris paribus, the increase in the rate of genius should be greater the higher the threshold we're looking at!)
* The average has moved up, but the variance has shrunk. But this would have to be implausibly extreme shrinkage, since the average of IQ scores today is roughly at the z-score of +2 from two generations ago.
* The modern culture is making common folks smarter, but it drags geniuses down. I believe there might be some truth to this. The pop culture everyone's supposed to follow, however trashy, has gotten more demanding mentally, but true intellectual pursuits have lost a lot of status compared to the past.
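The tail-sensitivity point above (that a fixed upward shift multiplies rare-genius rates far more than merely-bright rates) can be checked directly with the standard normal tail; the thresholds below are illustrative choices of mine.

```python
import math

def tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# If the whole distribution shifts up by 2 SD, the share of people above
# a fixed absolute threshold z multiplies by tail(z - 2) / tail(z):
ratios = {z: tail(z - 2) / tail(z) for z in (2, 3, 4)}
```

The multiplier is roughly 22x at a +2 SD threshold, about 120x at +3 SD, and over 700x at +4 SD: exactly the pattern that makes the absence of a visible genius explosion puzzling under a simple whole-distribution shift.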

This study (which HughRistik originally pointed to here) suggests that IQ distribution might be better modeled as two overlapping normal distributions, one for people who are not suffering from any conditions disrupting normal intelligence development (such as disease, nutritional problems, maternal drug or alcohol use during pregnancy, etc.) and the other for those who suffered developmental impairment. If this model has some validity the Flynn effect could perhaps be explained as a reduction in the number of people falling into the 'impaired' distribution due to improved health and nutrition in the population. This would seem to explain an increase in the average score without a corresponding increase in the number of 'geniuses'.
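If that two-distribution picture is right, the arithmetic works out as described: shrinking the impaired share raises the average substantially while barely moving the far right tail. A rough sketch -- the means, SDs, and impaired fractions below are made-up illustration values, not estimates from the study:

```python
from statistics import NormalDist

healthy = NormalDist(mu=100, sigma=15)
impaired = NormalDist(mu=80, sigma=15)  # hypothetical impaired distribution

def mixture_mean(p):
    """Population mean when a fraction p falls in the impaired group."""
    return (1 - p) * healthy.mean + p * impaired.mean

def frac_above(p, threshold):
    """Fraction of the mixed population scoring above a threshold."""
    return ((1 - p) * (1 - healthy.cdf(threshold))
            + p * (1 - impaired.cdf(threshold)))

# Improved health and nutrition: impaired share drops from 25% to 5%.
print(mixture_mean(0.25), mixture_mean(0.05))        # mean rises from 95 to 99
print(frac_above(0.05, 145) / frac_above(0.25, 145))  # tail barely moves
```

With these toy numbers the mean gains 4 IQ points, but the fraction above 145 grows only about 1.3x, far less than a uniform 4-point shift of a single normal distribution would predict, since nearly everyone above 145 was in the healthy distribution all along.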

I think this is more likely than not, but I couldn't quantify it. I think it's more likely for the simple reason that what earlier geniuses (like von Neumann etc.) did has already been done. To me, that implies the genius bar has been raised, in absolute terms, at least in the hard sciences and math. Agree. Agree. It's hard for me to imagine many geniuses getting derailed just by trash TV and ostracism. I believe IQ still correlates positively with performance among very high-achievers, just not as well as for normal people. The biggest factor here might be touched on in your second paragraph: I would bet that the standouts you're talking about would have higher average IQ, but would not actually be 'exceptionally' high, because IQ doesn't correlate that well with success. Also, many of the geniuses we're thinking of would probably be specialists, and it's harder to track specialized performance with the (relatively) generalist metric of IQ. If the IQ threshold for genius is lower than you think, an upward shift in the mean makes less difference. (Of course it can't explain the effect away entirely; something else is happening. But it could be a part.)
cupholder: That could well be the case. However, it fails to explain the lack of apparent genius at lower educational stages. For example, if you look at a 30 year period in the second half of the 20th century, the standard primary and high school math programs probably didn't change dramatically during this time, and they certainly didn't become much harder. Moreover, one could find many older math teachers who worked with successive generations throughout this period -- in which the Flynn IQ increase was above 1SD in many countries. If the number of young potential von Neumanns increased drastically during this period, as it should have according to the simple normal distribution model, then the teachers should have been struck by how more and more kids find the standard math programs insultingly easy. This would be true even if these potential von Neumanns have subsequently found it impossible to make the same impact as him because all but the highest-hanging fruit is now gone. Yes, that's basically what I meant when I speculated that IQ might be significantly informative about intellectually average and below-average people, but much less about above-average ones. Unfortunately, I think we'll have to wait for further major advances in brain science to make any conclusions beyond speculation there. Psychometrics suffers from too many complications to be of much further use in answering such questions (and the politicization of the field doesn't help either, of course).
Well, as discussed above, there are many interpretations of the Flynn effect, and it's not clear that the IQ increase actually corresponds to a gain in intelligence. From what Flynn has written, it seems most likely to be a measurement problem of sorts, in which case the number of "potential Von Neumanns" would not increase.
I think the idea that education hasn't become harder in the earlier grades is a serious misconception. My parents covered punctuation in their grade 5 curriculum, I did it in grade 3, and it's currently done in kindergarten or grade 1; many other topics have similar track records. As for high school math programs, many parts of the world have shifted from a 13-grade program to a 12-grade program, which compresses a lot of material. I think a bigger factor may be that we are better at recognizing and marketing talent. The kids who find high school mathematics a complete joke in grade 8 are getting scholarships elsewhere. Many of my peers in undergraduate mathematics had done work with a professor at a university in their home city during their high school years, and a sizable number had private school scholarships based on their talents. So perhaps these individuals are seldom present in ordinary standard math programs.
I'm not so sure. Here's a 2005 paper ('Rising mean IQ: Cognitive demand of mathematics education for young children, population exposure to formal schooling, and the neurobiology of the prefrontal cortex') suggesting that the 'cognitive demands of mathematical curricula' in the US increased from about 1950. Anecdotally, I remember occasionally surprising my parents by telling them about what I was learning in math - my schools' math syllabuses apparently went faster than my parents'. The question is whether this effect (and/or effects like it in other school subjects) would be enough to mask the Flynn effect at younger ages; I guess it could be enough to partly mask it but not wholly mask it, in which case there are other explanations at work too. Maybe the Flynn effect is smaller in children than adults as well. Neuroscience could certainly help, but I would think one could make a good start just by repeatedly IQ-testing a huge number of kids through childhood, tracking them into middle age, plotting child IQ against adult achievement, and drawing a lowess regression line through it. If the line starts out relatively steep but flattens out with increasing IQ, you and I are right: IQ isn't that informative about high flyers. I wouldn't be that surprised if someone has already done something like this with the Project Talent data or some other big database.
Sputnik was a huge shock to the U.S., causing fear that the Soviet Union would eventually overwhelm or eclipse the U.S. One of the results of that fear was the enrollment of math professors in the design of a new model math curriculum called the New Math, which was widely deployed and in most places where it was deployed represented a sharp break with past math curricula. Elementary-school children were taught things like how to do addition in bases other than ten. The "laws of algebra" (e.g., the commutativity property) were introduced much earlier than they had been in the past. The New Math was a frequent topic of popular news articles and news segments in the late 1960s, probably because of the bewilderment of parents who attempted to help their children with math homework. I was an elementary-school student in Massachusetts public schools in the 1960s, and this New Math was my favorite part of an otherwise uninspired factory-style elementary-school education, so I salute the Soviet space program of the 1950s for shocking certain elements of the educational establishment of my country out of its complacency.
Do we know if the early start actually led to more talent in math and science when children of this age became adults? Or did we just end up with a lot of lawyers who learned and then forgot Calculus?
All I can tell you is that I am very good at math and science and that I would have been significantly less likely to turn out that way if, in elementary school, I had been taught a lot of calculational arithmetic and elementary-algebra skills with no coherent and thoughtful attempt to teach the "concepts" or the "broader understanding". My formal education was pretty crappy, and I would have been much better off if someone'd just given me a small office or a desk and a chair in a quiet place and access to books at the end of elementary school, so I could have skipped the whole secondary-school experience like Eliezer did, but the elementary-school math was very well done, not because the teachers were particularly inspired but rather because of the design and integrated nature of the whole curriculum or plan of tuition. Also, let us not lose sight of my reason for writing, which is to present evidence that at least in the U.S., math education for the average child changed drastically during the 20th century.
It's conceivable that there are institutional barriers to genius expressing itself-- partly that there really is more knowledge to be assimilated before one can do original work, and partly that chasing grants just sucks up too much time and makes it less likely for people to work on unfashionable angles.
Still, it's not like historical geniuses all grew up as pampered aristocrats left to pursue whatever they liked. Many of them grew up as poor commoners destined for an entirely unremarkable life, but their exceptional brightness as kids caught the attention of the local teacher, priest, or some other educated and influential person who happened to be around, and who then used his influence to open an exceptional career path for them. Thus, if the distribution of kids' general intelligence is really going up all the way, we'd expect teachers and professors to report a dramatic increase in the number of such brilliant students, but that's apparently not the case. Moreover, many historical geniuses had to overcome far greater hurdles than having to chase grants and learn a lot before reaching competence for original work. Here I mean not just the regular life hardships, like when Tesla had to dig ditches for a living or when Ramanujan couldn't afford paper and pencil, but also the intellectual hurdles like having to become professionally proficient in the predominant language of science (whether English today or German, French, or Latin in the past), which can take at least as much intellectual effort as studying a whole subfield of science thoroughly. So, while your hypothesis makes sense, I don't think it can fully explain the puzzle.
It could also be communications. Many high-intelligence individuals have conditions that also produce anti-social behavior. Academia is highly geared against this, in some cases going so far as to evaluate people's chances of success in a PhD based on their ability to form working relationships with a peer group during their MSc. Travel is easier and correspondence is far more personal now. Would the mathematicians of the past have been as interested in this model? Perhaps some of them were the type of people who were happy to correspond by mail but found communicating face to face awkward. This wasn't a big barrier to success in the past, but it is very difficult in modern academia (particularly with most positions in most fields being teaching + research).
Fair enough, and I'm not even sure the "more knowledge required" argument is that strong for some parts of math. A scary possibility is that there are fewer people at the far right end of the bell curve. I have no idea what could cause that effect, but we don't know what makes for genius of the sort which does significant creative work. It's conceivable but unlikely that teachers' ability to recognize extraordinary minds has declined.
Perhaps genius requires extraordinary effort, which is only worthwhile if you already have nothing to lose. So maybe the hardships and obstacles that previous highly intelligent people faced actually contributed to their eventual success.
There are still plenty of poor people, so lack of hardship doesn't seem to be the problem. IIRC, there's a theory that you get more genius when political entities are small and competing-- hence the Renaissance. However, that's generalizing from one example-- any clues plus or minus for the theory? There are always people with nothing to lose-- it may be less common to have elites with something to win.
I've explained, but I was thinking about books looking at the physics or philosophy implications of particular popular shows or books. It could just be that such books would have been popular a century ago, but no one thought to write and/or publish them.
Can you elaborate your comment--sounds fascinating. HCH
I don't have titles handy, but I think the first one I noticed was essays about Stephen King. Since then, there've been books about the physics of Star Trek and the ethics of Buffy. I'm curious about whether anyone knows of such books addressed to popular audiences from more than a few decades ago, or of studies of the genre.
Well, now it is four cents. Parents even teach to IQ tests. Childhood insults? I'm sure you meant childhood disease.
I think Harpending was using the word 'insults' in the (less common nowadays) sense of 'injuries.'

The discussion's been going on for a while and it's been slowing down, so I think it's time to close down the official Q&A session. Henry and Gregory, you're of course still free to check the post and write comments if you want to, but there's no "official" expectation for that. Of course, you're also free to familiarize yourself with the rest of the site, if you think it's interesting enough, but that's entirely up to you. :)

I would like to take the chance to thank you for your excellent answers. There was a lot of interesting stuff in there... (read more)

The mathematical models for an acceleration of human evolution seem like they could have been developed earlier. Would more researchers, or more 'maverick' researchers have much advanced progress in the field? Or would an increased stock of mathematical analysis have simply sat around unused until the advent of the new genomics tools and their ability to measure selection?

That is a big and interesting question. I do not think that evolutionary biology needed more math at all: they would have done better with less, I think. The only math needed (so far) in thinking about acceleration is the result that the fixation probability of a new mutant is 1/(2N) if it is neutral and 2s if it has selective advantage s. The other important equation is that the change in a quantitative trait is the product of the heritability and the selection differential (the difference between the mean of the parents and the mean of the population).
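Those two results are compact enough to sketch directly. A minimal illustration of both (the numbers plugged in below are arbitrary examples, not estimates from the book):

```python
def fixation_prob(N, s):
    """Probability that a single new mutant eventually fixes in a
    diploid population of N individuals: 1/(2N) if neutral, roughly
    2s for a small selective advantage s (Haldane's approximation)."""
    return 1 / (2 * N) if s == 0 else 2 * s

def response_to_selection(heritability, selection_differential):
    """Breeder's equation: R = h^2 * S, where S is the difference
    between the mean of the selected parents and the population mean."""
    return heritability * selection_differential

# A beneficial mutation with s = 1% fixes ~40x more often than a
# neutral one in a population of 1,000 individuals:
print(fixation_prob(1000, 0.01) / fixation_prob(1000, 0))
# And since new mutants arise in proportion to population size, a
# bigger population supplies proportionally more candidate sweeps.
```

That last point is the core of the acceleration argument: the per-mutant fixation probability stays the same, but the supply of mutants scales with N.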

The history is that there was a ruckus in the 1960s between the selectionists and the new sect of neutralism, and neutralism more or less won. Selectionists persisted but that literature has a focus on bacteria in chemostats, plants, yeast, and such. Neutralism answered lots of questions and is associated with some lovely math, but as we took it up we (many of us) lost sight of real evolutionary issues.

Milford Wolpoff, in a review of our book in the American Journal of Physical Anthropology points out that his student Dave Frayer collected a lot of data on changes in European skull size and shape that implied very rapid evolution. In other words we "knew it all along" but never paid attention. In fact Cochran and I "knew" it but never put it together with the new findings from SNP chips. John Hawks did, right away.

So fashion rules, and it is difficult to get away from it, I suppose.

Hawks and I were talking about new genetic studies that showed a surprising number of sweeps, more than you'd expect from the long-term rate of change - and simultaneously noticed that there sure are a lot more people than there used to be - all potential mutants.

As for why someone didn't point this out earlier - say in 1930, when key results were available - I blame bad traditions in biology. Biologists mostly don't believe in theory: even when its predictions come true, they're not impressed.

My advantage, at least in part, comes from having had exactly one biology course in my entire life, which I took in the summer of my freshman year of high school, in a successful effort to avoid dissecting. If I ever write a scientific autobiography, it will be titled "Avoiding the Frog".

Because theory in the field is so often wrong that they treat successes as a stopped clock being right twice a day? Or something more complex?

I think Greg's 'biologists' are a special subset of biologists. As I see it, CP Snow was right about the two cultures. But within science there are also two cultures, one of which speaks mathematics and the other organic chemistry. Speakers of organic chemistry share a view that enough lab work and enough data will answer all the questions. They don't need no silly equations.

In our field the folks who speak mathematics tend to view the lab rats as glorified techs. This is certainly not right, but it is there, and it leads to a certain amount of mutual disdain.

This kind of mutual disdain is apparently just not there in physics between the theoretical and experimental physics people. I wish evolutionary biology were more like physics.

It goes further; there are even two cultures of mathematics!
There are sub-patterns. There are facts about natural selection that every plant geneticist knows that few human geneticists will accept without a fight. I mean, really, Henry, when a prominent human geneticist says " You don't really believe that bit about lactase persistence being selected, do you?" , or when someone even more famous asks "So why would there be more mutations in a bigger population?" - their minds ain't right.
Could you expand on that?

[I waited until I could get a copy of the book and read it before making my point here.]

In the book you say that foragers had little reason to fight wars or to be patient for long-term investments. But forager wars are often about grabbing women, and they might also make long-term investments in particular women or in developing skills, like singing, that can attract women.

I don't agree with you except a little bit. And there are foragers who do have some low time preference, like on the US Northwest Coast where they harvested lots of salmon that they smoked and stored. Interior Eskimo slaughtered migrating caribou herds and stored the meat by freezing.

But in general forager life has been almost literally hand to mouth. I have spent a lot of wasted time pulling my hair out about this. We have had lots of Bushman employees in the Kalahari, well compensated. We have spent hours pointing out that since we would go back to America, they should invest in goats or cattle and build up a herd, so they would have something to live on after we left. Everyone agreed with us, but the minute Aunt Nellie got sick everything was slaughtered. Again and again and again. Aargghh......


My point was theoretical, not empirical. If you say that foragers often seem remarkably uninterested in making sacrifices for the future I'll believe you. But I'm questioning how well we understand that data, by noting that there are some aspects of their lives where they seem to make long term investments. Maybe they just don't have a consistent time preference, maybe it varies by type of behavior; for some areas like learning an art they evolved behaviors that respect future consequences, and for other areas like food storage they did not.
Yes, of course, I will give you that. You are suggesting that "time preference" is way too global and vague a concept and I can't disagree. HCH
Is your point that they couldn't imagine investing for the future, or that they had so little slack that they couldn't afford to?
They could certainly imagine investing: they have been invaded by cattle people over the last half century and they see husbandry all around. And they certainly could have afforded to keep their animals. But they just didn't (seem to) have it in them to "delay gratification". I think that our ability to invest and save resources must be new and different in our evolution.
My impression is that hunter-gatherers have a huge amount of social pressure towards short-term sharing. You mentioned "Aunt Nellie getting sick" as a reason to slaughter cattle. Was it food for her? Expensive medical care or rituals? Something else?
Food for her and to support a ritual gathering of folks for support. There is no medical care out in the bush, but if there were people would certainly chip in to help pay for it. HCH

I thought RichardKennaway's previous comment was interesting, and would appreciate hearing your comments on it. Commenting on the hypothesis that life under the rule of others may have selected for submissiveness, he wrote:

On the other hand, submissiveness is surely selected against in rulers, who as noted in the posting leave more descendants than proles. So perhaps in a society in which the strong rule and the weak submit there is some evolutionarily stable distribution along a submissive/aggressive spectrum, rather than favouring one or the other?

My feeling is that the dichotomy between societies where males are threatening and violent and societies where males are submissive and not threats to each other is the most interesting social dichotomy we have. In some societies where males are threats there is a clear alternative niche like the Berdache on the Great Plains. In urban ghettos with drug dealers and street corner males there is a significant set of males who hold down jobs and, often, bring the proceeds to support their matrifocal families. How much such males reproduce is not clear. A wonderful description of this, with a zany analysis, is (Sharff, J. W. (1981). Free enterprise and the ghetto family. Psychology Today, 15, 41-8.)

There may well be stable distributions lurking in the social system but they are likely different everywhere: that for Bushmen would be quite different from that for Mundurucu.

Rulers do not always leave more descendants than proles. I highly recommend Gregory Clark's "Farewell to Alms", in which he shows that the medieval ruling class in Britain essentially all killed each other and have no descendants today. On the other end peasants and laborers did not reproduce themselves, so almost everyone in the UK today is descended from the medieval gentry, prosperous merchants, and so on.

I'm one of their descendants. I rather assumed that every Anglo-Saxon was (excluding the royal family through Charles, whose ancestry is German, but including Diana Spencer and her children), and that I only knew how because I had wealthy ancestors who kept track. But even if that's not so, they don't have no descendants. ETA: On second thought, perhaps the scope of ‘essentially’ was meant to extend to the end of the sentence.
I don't think it's correct to assume a pure strategy (ie each male is either dominant OR submissive). It might make more sense for males to be able to switch when the opportunity arises from submissive to dominant (a mixed strategy in game theory terms). I think outsider orang-utans can become alphas (adding the distinctive cheek-flaps etc) when they find a group that will let them join, for example. We do see humans making the same transition (and in the other direction too) when they move between groups, and when opportunities arise.

When I think of evolutionary psychology I generally jump to sharp and well defined claims that "mental modules" exist that (1) enable superior cognitive performance in specific domains relative to what typical people can do when they rely on "general reasoning" faculties, (2) evolved due to positive selection on our ancestors to deal with problems we faced over and over in our evolutionary history, and (3) should be pretty much universal among humans who don't have too many deleterious mutations.

When I think of people who focus specific... (read more)

I think your perception is correct, but I am no expert. I sense that evolutionary psychologists are really interested in human universals: the famous experiments of Tooby and Cosmides go right to that point. Why are we all afraid of snakes? Why are our babies so hard to toilet train? But they generally don't have a lot to say about variation among humans in these traits. The other sort that you and I both perceive are interested in human diversity and aren't much concerned with the bigger questions of the ev psych people. No, they don't "play nice" with each other mostly. It is an exaggeration to say that each regards the phenomena of the other as nuisances. They certainly should see different things: C&T see evolved cheater detection in a logic game while psychologists of the London school see G playing itself out in the diversity of correct answers. The two areas will come together soon: they are already starting. As some of the comments here indicate, we can't really understand what "Neanderthal intelligence" might mean until we understand the evolution(s) of intelligence. We can examine data all day and still have not an iota of insight about that bigger issue.

For both/either: What are you working on right now?

I am trying to think about the genesis and maintenance of social class and about the dimensionality of class. We know from the biometricians at the end of the nineteenth century that cognitive ability is essentially a single dimension while athletic ability, for example, is multidimensional. I want to start with a purely inductive approach and do the same kind of analysis with class in North America. Fat chance, I have found, since every time I get started I get sucked back into genetics. Henry

I have always been curious about the effects of mass-death on human genetics. Is large scale death from plague, war, or natural-disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?

Too random to have much effect, I should think. And at the same time, not awful enough to reduce the population to the point where drift would become important. Unless we're talking asteroid impacts. One can imagine exceptions. For example, if alleles that gave resistance to some deadly plague had negative side effects on intelligence, then you'd see an effect. Note that negative side effects are much more likely than positive side effects. I know of some neat anecdotal exceptions. Von Neumann got out of Germany in 1930, while the getting was good. When a friend said that Germany was oh-so-cultured and that there was nothing to worry about, Von Neumann didn't believe it. He started quoting the Melian dialogue - pointed out that the Athenians had been pretty cultured. High intelligence helped save his life.
I'm also interested in Nanani's question below, with a specific emphasis on human-caused mass death selecting for specific characteristics. For example, the Cambodian purges of intellectuals or the Communist purges of successful businesspeople. Are these too tenuous a proxy for genes to cause long-term change in alleles, or did the Cambodians and Communists do long-term harm to their genetic legacy?
Seconded, but with a request for contrast, if possible, with human-caused mass death such as invasion by conquering hordes. What effect do such phenomena have at the genetic level wrt cognition, as opposed to cultural or linguistic transmission?
Scott Alexander:
And what about human-caused mass death selecting for specific characteristics? For example, the Cambodian purges of intellectuals or the Communist purges of successful businesspeople. Are these too tenuous a proxy for genes to cause long-term change in alleles, or did the Cambodians and Communists do long-term harm to their genetic legacy?
Purges in Cambodia might have changed average genotypes because they hit such a high fraction of the population. Generally it's hard to change things much in one generation, though - particularly because of loose correlations between genotypes and dreadful political fates. In the future dictators should be better at this. Now if Stalin had taken all the smartest people in the Soviet Union and forcibly paired them up, artificially inflating assortative mating for intelligence, you would have seen an effect. If you were a billionaire, you could maybe bribe people into something similar.
In AD175 Marcus Aurelius brought 5,500 Sarmatian heavy cavalry warriors to northern Britain where, after twenty years service, they "settled in a permanent military colony in Lancashire" which was "still mentioned almost 250 years later." You remind us of the possibility that the colony could have influenced the legend of King Arthur, and go on to add something new: it also "could have introduced several thousand copies of that hypothetical allele into Lancashire" and that the average Englishman "might be mostly Sarmatian in a key gene or two." I'm English, and intrigued! Are you able to expand on this? (Book pp. 146-148) I hope it is something good like increased unruliness (independence streak) and aggressiveness in battle and not something naff like Sarmatian lewdness...!!
I have no further knowledge or insight about that, but Greg might. I will call this question to his attention and we may see what he knows. HCH

Wow! I haven't got any questions (yet) but I am very eager to dive into this Q&A. Thanks to everyone involved in organizing this.

By the way, you spelled Steve SailEr's name wrong.

And thank you all for the honor of your invitation. HCH