Terrence Deacon's The Symbolic Species is the best book I've ever read on the evolution of intelligence.  Deacon somewhat overreaches when he tries to theorize about what our X-factor is; but his exposition of its evolution is first-class.

Deacon makes an excellent case—he has quite persuaded me—that the increased relative size of our frontal cortex, compared to other hominids, is of overwhelming importance in understanding the evolutionary development of humanity.  It's not just a question of increased computing capacity, like adding extra processors onto a cluster; it's a question of what kind of signals dominate, in the brain.

People with Williams Syndrome (caused by deletion of a certain region on chromosome 7) are hypersocial, ultra-gregarious; as children they fail to show a normal fear of adult strangers.  WSers are cognitively impaired on most dimensions, but their verbal abilities are spared or even exaggerated; they often speak early, with complex sentences and large vocabulary, and excellent verbal recall, even if they can never learn to do basic arithmetic.

Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.

"Both postmortem analysis and MRI analysis have revealed brains with a reduction of the entire posterior cerebral cortex, but a sparing of the cerebellum and frontal lobes, and perhaps even an exaggeration of cerebellar size," says Deacon.

Williams Syndrome's deficits can be explained by the shrunken posterior cortex—they can't solve simple problems involving shapes, because the parietal cortex, which handles shape-processing, is diminished.  But the frontal cortex is not actually enlarged; it is simply spared.  So where do WSers' augmented verbal abilities come from?

Perhaps because the signals sent out by the frontal cortex, saying "pay attention to this verbal stuff!", win out over signals coming from the shrunken sections of the brain.  So the verbal abilities get lots of exercise—and other abilities don't.

Similarly with the hyper-gregarious nature of WSers; the signal saying "Pay attention to this person!", originating in the frontal areas where social processing gets done, dominates the emotional landscape.

And Williams Syndrome is not frontal enlargement, remember; it's just frontal sparing in an otherwise shrunken brain, which increases the relative force of frontal signals...

...beyond the narrow parameters within which a human brain is adapted to work.

I mention this because you might look at the history of human evolution, and think to yourself, "Hm... to get from a chimpanzee to a human... you enlarge the frontal cortex... so if we enlarge it even further..."

The road to +Human is not that simple.

Hominid brains have been tested billions of times over through thousands of generations.  But you shouldn't reason qualitatively, "Testing creates 'robustness', so now the human brain must be 'extremely robust'."  Sure, we can expect the human brain to be robust against some insults, like the loss of a single neuron.  But testing in an evolutionary paradigm only creates robustness over the domain tested.  Yes, sometimes you get robustness beyond that, because sometimes evolution finds simple solutions that prove to generalize—

But people do go crazy.  Not colloquial crazy, actual crazy.  Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy.  (I saw that happen once, and made a classic non-Bayesian mistake; I knew that this was archetypal schizophrenic behavior, but I didn't realize that similar symptoms can arise from many other causes.  Psychosis, it turns out, is a general failure mode, "the fever of CNS illnesses"; it can also be caused by drugs, brain tumors, or just sleep deprivation.  I saw the perfect fit to what I'd read of schizophrenia, and didn't ask "What if other things fit just as perfectly?"  So my snap diagnosis of schizophrenia turned out to be wrong; but as I wasn't foolish enough to try to handle the case myself, things turned out all right in the end.)

Wikipedia says that the current main hypotheses being considered for psychosis are (a) too much dopamine in one place (b) not enough glutamate somewhere else.  (I thought I remembered hearing about serotonin imbalances, but maybe that was something else.)

That's how robust the human brain is: a gentle little neurotransmitter imbalance—so subtle they're still having trouble tracking it down after who knows how many fMRI studies—can give you a full-blown case of stark raving mad.

I don't know how often psychosis happens to hunter-gatherers, so maybe it has something to do with a modern diet?  We're not getting exactly the right ratio of Omega 6 to Omega 3 fats, or we're eating too much processed sugar, or something.  And among the many other things that go haywire with the metabolism as a result, the brain moves into a more fragile state that breaks down more easily...

Or whatever.  That's just a random hypothesis.  By which I mean to say:  The brain really is adapted to a very narrow range of operating parameters.  It doesn't tolerate a little too much dopamine, just as your metabolism isn't very robust against non-ancestral ratios of Omega 6 to Omega 3.  Yes, sometimes you get bonus robustness in a new domain, when evolution solves W, X, and Y using a compact adaptation that also extends to novel Z.  Other times... quite often, really... Z just isn't covered.

Often, you step outside the box of the ancestral parameter ranges, and things just plain break.

Every part of your brain assumes that all the other surrounding parts work a certain way.  The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like "good ideas"—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges.  And then everything goes to hell.  Why shouldn't it?  Why would the brain be designed for easy upgradability?

Even if one change works—will the second?  Will the third?  Will all four changes work well together?  Will the fifth change have all that greater a probability of breaking something, because you're already operating that much further outside the ancestral box?  Will the sixth change prove that you exhausted all the brain's robustness in tolerating the changes you made already, and now there's no adaptivity left?

Poetry aside, a human being isn't the seed of a god.  We don't have neat little dials that you can easily tweak to more "advanced" settings.  We are not designed for our parts to be upgraded.  Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are.  Idiot evolution does not look ahead, it does not design with the intent of different future uses.  We are not designed to unfold into something bigger.

Which is not to say that it could never, ever be done.

You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness.  A Friendly AI programmer could do even more arcane things to make sure the AI knew what you would-want if you understood the possibilities.  And then the AI could apply superior intelligence to untangle the pattern of all those neurons (without simulating you in such fine detail as to create a new person), and to foresee the consequences of its acts, and to understand the meaning of those consequences under your values.  And the AI could upgrade one thing while simultaneously tweaking the five things that depend on it and the twenty things that depend on them.  Finding a gradual, incremental path to greater intelligence (so as not to effectively erase you and replace you with someone else) that didn't drive you psychotic or give you Williams Syndrome or a hundred other syndromes.

Or you could walk the path of unassisted human enhancement, trying to make changes to yourself without understanding them fully.  Sometimes changing yourself the wrong way, and being murdered or suspended to disk, and replaced by an earlier backup.  Racing against the clock, trying to raise your intelligence without breaking your brain or mutating your will.  Hoping you became sufficiently super-smart that you could improve the skill with which you modified yourself.  Before your hacked brain moved so far outside ancestral parameters and tolerated so many insults that its fragility reached a limit, and you fell to pieces with every new attempted modification beyond that.  Death is far from the worst risk here.  Not every form of madness will appear immediately when you branch yourself for testing—some insanities might incubate for a while before they became visible.  And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain.

Each path has its little upsides and downsides.  (E.g.:  AI requires supremely precise knowledge; human upgrading has a nonzero probability of success through trial and error.  Malfunctioning AIs mostly kill you and tile the galaxy with smiley faces; human upgrading might produce insane gods to rule over you in Hell forever.  Or so my current understanding would predict, anyway; it's not like I've observed any of this as a fact.)

And I'm sorry to dismiss such a gigantic dilemma with three paragraphs, but it wanders from the point of today's post:

The point of today's post is that growing up—or even deciding what you want to be when you grow up—is around as hard as designing a new intelligent species.  Harder, since you're constrained to start from the base of an existing design.  There is no natural path laid out to godhood, no Level attribute that you can neatly increment and watch everything else fall into place.  It is an adult problem.

Being a transhumanist means wanting certain things—judging them to be good.  It doesn't mean you think those goals are easy to achieve.

Just as there's a wide range of understanding among people who talk about, say, quantum mechanics, there's also a certain range of competence among transhumanists.  There are transhumanists who fall into the trap of the affect heuristic, who see the potential benefit of a technology, and therefore feel really good about that technology, so that it also seems that the technology (a) has readily managed downsides, (b) is easy to implement well, and (c) will arrive relatively soon.

But only the most formidable adherents of an idea are any sign of its strength.  Ten thousand New Agers babbling nonsense do not cast the least shadow on real quantum mechanics.  And among the more formidable transhumanists, it is not at all rare to find someone who wants something and thinks it will not be easy to get.

One is much more likely to find, say, Nick Bostrom—that is, Dr. Nick Bostrom, Director of the Oxford Future of Humanity Institute and founding Chair of the World Transhumanist Association—arguing that a possible test for whether a cognitive enhancement is likely to have downsides, is the ease with which it could have occurred as a natural mutation—since if it had only upsides and could easily occur as a natural mutation, why hasn't the brain already adapted accordingly?  This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?  Maybe you're using up a limited memory capacity, or forgetting something else...

And that may or may not turn out to be a good heuristic.  But the point is that the serious, smart, technically minded transhumanists do not always expect that the road to everything they want is easy.  (Where you want to be wary of people who say, "But I dutifully acknowledge that there are obstacles!" but stay in basically the same mindset of never truly doubting the victory.)

So you'll forgive me if I am somewhat annoyed with people who run around saying, "I'd like to be a hundred times as smart!" as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture; and as if a change of that magnitude in one shot wouldn't amount to erasure and replacement.  Or asking, "Hey, why not just augment humans instead of building AI?" as if it wouldn't be a desperate race against madness.

I'm not against being smarter.  I'm not against augmenting humans.  I am still a transhumanist; I still judge that these are good goals.

But it's really not that simple, okay?

 

Part of The Fun Theory Sequence

Next post: "Changing Emotions"

Previous post: "Failed Utopia #4-2"


Well, one earlier limit on the evolution of the human brain is one that most definitely no longer applies to future human augmentation: the skull of a human baby needs to be able to pass through the birth canal without killing the mother, and it just barely does so. Humans have more difficult births than most other animals (at least, that's the impression I get). Today, we can perform Cesarean deliveries in relative safety, so that gives at least one "freebie" when it comes to improving on the work of the blind idiot god.

The birth canal is actually one of Bostrom's examples.

Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals - including certain social emotions - dominate more than they should.

Having not read the book, I don't know if Deacon deals with any alternative hypotheses, but one alternative I know of is the idea that WSers get augmented verbal and social skills because those are the only cognitive skills they are able to practice. In short, WSers are (postulated to be) geniuses at social interaction because of practice, not because of brain signal imbalance. This is analogous to the augmented leg and foot dexterity of people lacking arms.

How could we test these alternatives? I seem to recall that research has been done on the temporary suppression of brain activity using EM fields (carefully, one would hope). If I haven't misremembered, then the effects of the brain signal imbalance might be subject to experimental investigation.

Nitpick for Doug S.: that's actually two coupled evolutionary limits. Babies' heads need to fit through the women's pelvises, which also have to be narrow enough for useful locomotion.

Incidentally, a not very well known drawback to "photographic memory" is that people with photographic memories have trouble fitting what they remember into context; their memories tend to end up as disconnected trivia that they don't quite understand the significance of.

Carl, I did look, I just managed to miss it somehow. Oh well. Fixed.

Cyan, is that a standard hypothesis? I'm not sure how "practice" would account for a very gregarious child lacking an ordinary fear of strangers.

A late-breaking follow-up to my original reply: if I read about this instead of confabulating it, then I probably found it in The Brain that Changes Itself.

"Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy."

I don't think that this is at all crazy, assuming that "they" refers to you (people are staring at me because I'm part of the conspiracy), rather than everyone else (people are staring at me because everyone in the room is part of the conspiracy). Certainly it's happened to me.

"Poetry aside, a human being isn't the seed of a god."

A human isn't, but one could certainly argue that humanity is.

Maybe this is tangential to the post, but even if it is too difficult to use biological tinkering to make modified humans with greater-than-human intelligence, it does not seem difficult to use biological tinkering to fairly reliably produce humans with substantially greater than current average intelligence. It doesn't seem difficult to do simpleminded statistical studies to find out which combinations of genes currently existing within the human population usually lead to high intelligence, and then start splicing these into new humans on a wide scale (i.e. selling them to those who plan to become parents). It seems worthwhile to try to estimate 1) what statistical distribution of IQ could be achieved this way 2) what would happen to economic growth, scientific progress, politics, etc. if a large fraction of births were drawn from this distribution. Casually, it seems that 2) might be very hard to analyze in any detail, but basically the world would probably be radically transformed. 1) seems like it should be pretty easy to analyze quantitatively and with fairly high confidence using existing data about the probability distribution of IQ conditional on the IQ of the parents.
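For what it's worth, here is a minimal sketch (mine, not the commenter's) of the simplest version of estimate 1): simulating the child IQ distribution conditional on parental IQ under an assumed midparent regression-to-the-mean model. The regression coefficient and residual spread below are illustrative assumptions, not fitted values.

```python
# Toy model of child IQ given parental IQs (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)

POP_MEAN = 100.0
MIDPARENT_REGRESSION = 0.6   # assumed regression of child IQ on midparent IQ
RESIDUAL_SD = 12.0           # assumed spread of child IQ around its expectation

def child_iq_samples(parent_a_iq, parent_b_iq, n=100_000):
    """Draw child IQ samples for two given parental IQs under the toy model."""
    midparent = (parent_a_iq + parent_b_iq) / 2.0
    expected_child = POP_MEAN + MIDPARENT_REGRESSION * (midparent - POP_MEAN)
    return rng.normal(expected_child, RESIDUAL_SD, size=n)

# Example: two IQ-130 parents.  The model predicts regression toward the mean
# plus substantial spread, not a guaranteed IQ-130 child.
samples = child_iq_samples(130, 130)
print(f"mean child IQ: {samples.mean():.1f}")
print(f"fraction above 130: {(samples > 130).mean():.2%}")
```

Under these assumed numbers, even strong selection on parental IQ shifts the mean of the child distribution by much less than the parental deviation, which is the kind of constraint that estimate 1) would quantify.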

Eliezer: "And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain."

This is yet another example of Eliezer's disagreement with the human race about morality. This actually happens to us all the time, without any modification at all, and we don't care at all, and in fact we tend to be happy about it, because according to the new goal system, our goals have improved. So this suggests that we still won't care if it happens due to upgrading.

Just so. But not many of us become full-blown psychotic sadists, and few indeed of those have godlike superpowers.

So I think that the not-particularly harmfulness of the usual range of moral self-modification is not a strong argument for letting rip with the self-enhancing drugs.

Learning new values as people naturally do is a very different thing than, say, deleting the empathy part of your brain and becoming a psychopath. The first are changes that we accept voluntarily for the most part, whereas the second no one would choose for themselves, and you would be horrified at your future self if you did so.

The point is, just because future-you doesn't care doesn't mean it isn't a bad thing. An extreme example: if you were to just delete your intelligence entirely, you wouldn't regret it. But I don't think you want that for yourself.

There are less obvious cases, like deleting your value for social interaction and withdrawing from society completely. It's not an obviously bad thing, but I don't think it's something you would choose voluntarily.

High functioning autism might in part be caused by an "overclocking" of the brain.

My evidence:

(1) Autistic children have on average larger brains than neurotypical children do.

(2) High IQ parents are more likely than average to have autistic children.

(3) An extremely disproportionate number of mathematical geniuses have been autistic.

(4) Some children learn to read before they are 2.5 years old. From what I know all of these early readers turn out to be autistic.

This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?

There's considerable scope for the answer to this question being: "because of resource costs". Resource costs for nutrients today are radically different from those in the environment of our ancestors.

We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are.

That's true - but things are not quite as bad as that makes it sound. Evolution is concerned with things like modularity and evolvability. Those contribute to the modularity of our internal organs - and that helps explain why things like kidney transplants work. Evolution didn't plan for organ transplant operations - but it did arrange things in a modular fashion. Modularity has other benefits - and ease of upgrading and replacement is a side effect.

People probably broke in the ancestral environment too. Organisms are simply fragile, and most fail to survive and reproduce.

Another good popular book on the evolution of intelligence is "The Runaway Brain". I liked it, anyway. I also have time for Sue Blackmore's exposition on the topic, in "The Meme Machine".

"Hm... to get from a chimpanzee to a human... you enlarge the frontal cortex... so if we enlarge it even further..." The road to +Human is not that simple.

Well, we could do that. Cesarean sections, nutrients, drugs, brain growth factor gene therapy, synthetic skulls, brains-in-vats - and so on.

It would probably only add a year or so onto the human expiration date, but it might be worth doing anyway - since the longer humans remain competitive for, the better the chances of a smooth transition. The main problem I see is the "yuck" factor - people don't like looking closely at that path.

You're implying that the functioning of biological systems is highly sensitive to maintaining the "right" concentration of several active compounds. This is generally not correct. Quite to the contrary, "Robustness is one of the fundamental characteristics of biological systems." (Hiroaki Kitano, Molecular Systems Biology, 3:137, 2007; a good review article that contains multiple pointers to relevant publications.)

Note that this finding is quite in line with your point that functional modification of human brains is highly nontrivial - a system that is robust to failure in the sense used above is also resistant to any deliberately induced imbalance.

The layman's impression of Williams-Beuren individuals' verbal skills doesn't quite hold up to closer linguistic analysis. For example, WBs are prone to make certain syntactical and morphological errors, and cannot pick up a new language faster than everyone else. If exposed to a new language, however, they will quickly pick up some words and try to wrap them around their native syntax, which may amaze bystanders who only superficially know the language in question.

As for Bostrom's "Algernon light" approach to SNP targeting, let me say I'm more concerned about my intellectual ability than about my eventual reproductive success. There could, in principle, be polymorphisms that make you a genius with a craving for a single child.

It seems pretty obvious that time-scaling should work - just speed up the operation of all parts in the same proportion. A good bet is probably size-scaling, adding more parts (e.g. neurons) in the same proportion in each place, and then searching in the space of different relative sizes of each place. Clearly evolution was constrained in the speed of components and in the number of parts, so there is no obvious evolutionary reason to think such changes would not be functional.

Robin, it sounds as though you are thinking about the changes that could be made after brain digitalisation.

That seems like a pretty different topic to me. Once you have things in a digital medium, it is indeed much easier to make changes - even though you are still dealing with a nightmarish mess of hacked-together spaghetti code.

Time scaling is not unproblematic. We don't have a single clock in the brain, clocks must be approximated by neurons and by neural firing. Speeding up the clocks may affect the ability to learn from the real world (if we have a certain time interval for associating stimuli).

We might be able to adapt, but I wouldn't expect it to be straightforward.

@James

"overclocking"

Forgive me, I don't see how any of your list displays overclocking, or increased speed.

I was speaking just Friday to a shrink acquaintance of mine on the subject of Asperger's. He in fact argues these autism spectrum disorders are due to underclocking and poor brain region synchronization, based on recent discoveries from brain imaging studies. That is, autism spectrum people may have a lot of stuff up there, and parts of it may seem overconnected, but those links seem weak and underperforming, while other parts of the brain are underconnected and underperforming.

High functioners and idiot-savants have lucked out in that the parts that are overconnected for them perform normally - thus their ability - but the rest is still underwired and underperforming. Or so he argues.

In contrast, I ponder about all the truly overclocked I have known. These are people who really do think much faster than the rest of us. Due to my background, they have tended to be physicists and applied mathematicians. Since it seems unlikely that their wetware actually has higher hertz, I wonder if what we term "overclocking" is really the re-use of certain brain areas for calculation.

For example, they may repurpose areas other people use for short-term memory, resulting in what is often called "absent-mindedness." Or they may not have as well-developed visual senses, again repurposing that giant area of our brains for calculation. They may have also found a way to improve their pattern skills. Several in fact have suggested this to me as the key to the way they think when I have asked them.

The majority of the overclocked do seem to be male, but I have been introduced to a few female overclockers, who were mostly in the biological sciences, such as pharma research. Thus I speculate there is some link to testosterone in very early utero development, but of course no one knows. Intelligence is only moderately valued in our society, as OB readers will themselves attest, so I doubt we will solve this mystery soon.

Agreed that upgrading humans is hard. Nick Bostrom's suggestion is another version of Algernon's Principle ("every genetically easy change is a net evolutionary disadvantage in the EEA"). (Strangely, this principle does not show up on Google - am I spelling it wrong? Has the Algernon's Principle meme failed to spread on the internet, for some odd reason?)

Tim's "resource costs" is a general counter to this, since resources are much cheaper now than in the EEA, but it is unlikely to actually be the reason in all cases. And since aging seems to partly be caused by byproducts of metabolism, using more energy (the primary resource) is problematic, at least until mitoSENS.

Also agreed with Robin that upload and speed up sidesteps this...but at the cost of normal physical existence, which has its own problems. Personally I would much rather upgrade w/o uploading, at least at the beginning.

Anyway, my main reason for commenting is: I dunno if you were joking about how your thoughts on hunter gatherer mental illness were pure speculation, but in fact you are exactly right. Countries which eat more fish have much less mental illness (depression, bipolar, schizophrenia), the relationship is strong. So lack of robustness against insufficient omega 6 does indeed cause much mental illness. (One reason my son has been raised on lots of fish oil.)

Cyan, is that a standard hypothesis? I'm not sure how "practice" would account for a very gregarious child lacking an ordinary fear of strangers.

I don't know if it's a standard hypothesis -- it's just floating there in my brain as background knowledge sans citation. It's possible that I read it in a popular science book on neuroplasticity. I'd agree that "practice" doesn't plausibly account for the lack of ordinary fear; it's intended as an explanation for the augmentations, not the deficits.

"You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness."

Really? Explain how? It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel's first incompleteness theorem) fairly directly. It might be possible to make a billion sequential upgrades with probabilistic guarantees of correctness, but only in a low entropy environment, and even then it's dicey, and I have no idea how you'd prove it.

Patri, try "Algernon's Law"

(4) Some children learn to read before they are 2.5 years old. From what I know all of these early readers turn out to be autistic.

I'm a living counterexample to this, as I learned to read at basically the same time that I picked up spoken language. I might have slight tendencies toward behavior consistent with autism, but I'm well within the range of "healthy" human variation, at least where the autistic spectrum is concerned. (My mental illnesses tend to run in other directions.)

It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel's first incompleteness theorem) fairly directly.

Er, no, it doesn't.

Doug S.: do you have any links to that? As described, it sounds like you're plagiarizing Borges...

Patri: Yes, I too thought of the Algernon principle when reading Bostrom's paper; I've never seen that exact phrase used in any formal work (although the Red Queen principle is similar), but I know I've seen people reference informally the 'Flowers for Algernon principle' or just 'the Algernon principle'.

Doug S.

I'm interested in learning more about extremely early readers. I would be grateful if you contacted me at

EconomicProf@Yahoo.com

@Danielle: Rice's Theorem says that you can't decide, for every possible computation, whether or not it outputs 5; but this doesn't mean you can't know that 2 + 3 = 5. We work with special chosen cases of code that we do understand.
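A tiny illustration of that distinction (mine, not from the comment): there is no general decider for "does an arbitrary program output 5?", but for a deliberately restricted class of programs -- here, straight-line integer arithmetic -- we can simply evaluate and check.

```python
# Deciding "outputs 5" for a restricted language where evaluation always halts.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def outputs_five(expr: str) -> bool:
    """True iff the given +, -, * integer expression evaluates to 5."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("outside the restricted language")
    return ev(ast.parse(expr, mode="eval").body) == 5

print(outputs_five("2 + 3"))      # True
print(outputs_five("2 * 3 - 2"))  # False
```

The point is only that a system improving its own code can confine itself to code it can analyze, rather than needing to decide properties of arbitrary programs.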

Patri, I coined the phrase "Algernon's Law" some time ago but that was as part of an even stranger phase of my earlier wild and reckless youth, age fifteen or thereabouts. I'd probably prefer to talk about Algernon's principle these days, just to avoid the connotations of that earlier "Law". Or just reference Bostrom's paper.

Eliezer: Sometimes changing yourself the wrong way, and being murdered or suspended to disk, and replaced by an earlier backup.

Uh, no. If restoration from backup happens shortly after the wrong change I'd think of it as a day you wasted and don't remember, definitely not a murder. Something only weakly to be avoided.

Unknown: This actually happens to us all the time, without any modification at all, and we don't care at all, and in fact we tend to be happy about it,

Most don't care. I for one am kind of worried about my personal goal drift and what my (natural) future brain neurochemistry changes could do to current values.

"So lack of robustness against insufficient omega 6 does indeed cause much mental illness. (One reason my son has been raised on lots of fish oil.)"

Patri, did you mean Omega 3?

@Eliezer

Sure, there are upgrades one can make where one can more or less prove deterministically how they change a subsystem in isolation. Things like adding the capability for zillion-bit math, or adding a huge associative memory. But it's not clear that such a subsystem would actually be an upgrade once it's interacting with the rest of the AI and with an unpredictable environment. I guess the word I'm getting hung up on is 'correctness.' Sure, the subsystems could be deterministically correct, but would it necessarily be a system-wide upgrade?

It's also especially plausible that there are certain 'upgrades' (or at least large cognitive system changes) which can't be arrived at deterministically, even by a super human intelligence.

"Doug S.: do you have any links to that? As described, it sounds like you're plagiarizing Borges..."

Huh? Links to what?

/me is confused

@Doug S

"Borges"

Gwern is referring to the famous story by Borges, Funes the Memorious, I believe. It's in Ficciones.

Oh, about the photographic memory. I'm not sure exactly where I heard it first, but my high school history teacher supported it with a personal anecdote: she once had a student who had what seemed to be a photographic memory, and would frequently answer questions on quizzes with lengthy, direct quotes from the textbook on completely irrelevant subjects.

Anyway, for whatever reason, the brain has a capacity to ignore and forget details it considers unimportant; as one Cesare Mondadori puts it, "maximal memory" and "optimal memory" are not synonymous.

The evolutionary argument against human enhancements is at the same time completely true and really, really weak when you look closer at it.

In the most explicit form it would go something like "there are no easy ways without significant side effects to change a human being in a way that would make him produce more children while raised in a hunter-gatherer tribe in a Pleistocene savanna". Making kids in hunter-gatherer environment is what evolution optimized for, it didn't care about intelligence, health or anything unless it significantly contributed to making more kids in this particular environment.

Now we have a different environment, different goals, different costs, and different materials to work with. Humans are not even close to being optimized for this environment. Evolution barely did a few quick patches (like lactose tolerance) to make humans good at making kids in primitive agricultural villages; not only did it not adapt humans to the current environment, it never adapted anything to a goal other than "making as many kids as possible in a resource-constrained environment", hardly what we're trying to do.

For example, using any improvement whatsoever is possible without breaking this argument if it uses more resources, relative to their availability in the Stone Age. Which happens to be the case with pretty much every single proposed enhancement.

Maybe you're using up a limited memory capacity, or forgetting something else...

Maybe humans forget the stuff that's not important for hunter-gatherers to remember. I mean, the brain doesn't create more acetylcholine specifically so our ancestors wouldn't waste time remembering the wrong stuff.

And I just realized the entire point of this article is that if that is so then it's still not a safe thing; thanks for making me think.

A subtle side effect: I tried taking Lexapro, and found that it greatly improved my energy level and mood... but it also made me apt to get into pointless head-banging arguments. I don't think I'd have noticed the increased stubbornness if I weren't more introspective than most people.

Well, the brain does seem remarkably adaptable; people who have suffered extreme brain damage can sometimes learn to compensate. Also people who don't use a certain part of the brain for years are sometimes able to re-purpose it for other things. Given time, the brain seems able to re-wire itself to get better at doing tasks using whatever neural resources it has available.

I understand the concerns here, and there certainly are risks, but I think the brain is able to adapt to moderate enhancements fairly well, so long as you did so slowly and gave the brain time to properly adapt to and learn to use its new resources. If you give a human a slightly larger frontal cortex, I think the brain would be able to adapt to the change, and then you could probably make another small enhancement a few years later.

The way evolution seems to have worked with the brain is designing new systems and then letting those systems freely interact with older brain structures, and while it's a kludgy solution, it seems to be a fairly robust one over evolutionary periods of time.

I think the main limiting factor in human brain evolution has been that people with heads larger than a certain size were more likely to die during childbirth during pre-technological times.

I just want to mention that the thing about a human trying to self-modify their brain in the manner described and with all the dangers listed could make an interesting science fiction story. I couldn't possibly write it myself and am not even sure what the best method of telling it would be; probably it would at least partially include something like journal entries or just narration from inside the protagonist's head, to illustrate what exactly was going on.

Especially if the human knew the dangers perfectly well, but had some reason they had to try anyway, and also a good reason to think it might work; presumably this would require it to be an attempt at some modification other than "runaway intelligence" (and also a context where a modified self would have very little chance of thereafter achieving runaway superintelligence); if things went wrong they might spend the rest of their life doing very weird things, or die for one reason or another, or at the very worst go on a killing spree and kill a couple dozen people before being caught, but wouldn't convert the entire world into smiley faces. That way they would be a sympathetic viewpoint character taking perfectly reasonable actions, and the reader/viewer watches as their sanity teeters on the edge and is genuinely left wondering whether they'll last long enough to accomplish their goal.