I've tended to think that bioethics is maybe the most profoundly useless field in mainstream philosophy. I might sum it up by saying that it's superficially similar to machine ethics except that the objects of its warnings and cautions are all unambiguously good things, like cognitive enhancements and life extension. In an era when we should by any reasonable measure be making huge amounts of progress on those problems—and in which one might expect bioethicists to be encouraging such research and helping weigh it against yet another dollar sent to the Susan G. Komen foundation or whatever—one mostly hears bioethicists quoted in the newspaper urging science to slow down. As if doubling human lifespans or giving everyone an extra 15 IQ points would in some way run the risk of "destroying that which makes us human" or something.

Anyway, this has basically been my perspective as a newspaper reader; I don't read specialty publications in bioethics. And perhaps it should come as no surprise that bioethics' main use in mainstream discourse is to reinforce status quo bias, whether or not that's a true reflection of the field. In any case, it was a welcome surprise to see an interview in The Atlantic with Allen Buchanan, apparently an eminent bioethicist (Duke professor, President's Council on Bioethics), devoted entirely to refuting common objections to cognitive enhancement.

Some points Buchanan makes, responding to common worries:

  • There's no good reason to think the human body and its capabilities are anywhere near their maximum.
  • Technologies that make human lives better tend to have egalitarian effects in the long run (he mentions cell phones), even if they're at first available only to the wealthy.
  • A much smarter human population will probably be morally, as well as cognitively, enhanced—the "evil genius" problem isn't necessarily a realistic one to worry about.
  • Many people worry that the use of cognitive enhancement by people who are willing to self-experiment is unfair to those who don't want to self-experiment or who fear the risks. Buchanan points out that this problem could be largely alleviated by more research into the safety and efficacy of drugs with cognitive-enhancement potential. The atmosphere of fear, dubious legal status, and unwillingness to do large-scale testing that currently surrounds cognitive enhancement is counterproductive in this regard.
  • As cool as it would be to be a cognitively enhanced person in today's world, it would be so much cooler to be a cognitively enhanced human in a world of other enhanced humans.

I doubt any of these points will be at all surprising or novel to LW readers, but I was really pleased to see them covered in a mainstream publication, and to know that bioethics has people like Buchanan who are more interested in what we stand to gain from technology than in what we stand to lose.

6 comments

A much smarter human population will probably be morally, as well as cognitively, enhanced—the "evil genius" problem isn't necessarily a realistic one to worry about.

This is something I think I've noticed. If it is true, then why is it true? Some hypotheses:

  • Smart people spend more time reading and less time watching movies and television shows - the ethics in books is superior, or at least better recalled
  • Smart people read the works of other smart people - only morally sound writing survives over time, so less exposure to unethical stuff
  • Smart people are more likely to notice cognitive dissonance, and more likely to do something about it when they are about to commit a questionable act
  • Smart people spend more time thinking about their actions
  • Smart people have more recall of bad things others have done to them and are unwilling to put others through similar situations
  • Smart people spend more time considering the consequences of their actions

And, in the interest of the virtue of evenness (I could just be inventing ways to confirm my preconceptions, after all), some hypotheses about why smart people might be less moral than non-smart people:

  • Smart people are isolated and tormented as youngsters and this makes them cynical and bitter
  • Smart people are "too far above" most people to empathize with them
  • Smart people can overcome petty obstacles like "empathy" and "guilt"
  • Smart people only care about the advancement of science, not what's actually good for people
  • Thinking about ethics for too long makes you reject all ethical systems?

That second list was much harder to come up with than the first one. Here's hoping I'm just plain right, and that's the reason why.

Anyone want to do some science to figure out which, if any, of these guesses is true?

Interesting ideas -- I can think of a few more. On the "smart = more moral" side:

  • Smart people are more likely to be able to call on Kahneman's System 2 when necessary, which correlates with utilitarian judgments (see this paper by Joshua Greene et al.). Similarly, they're more likely to have the mental resources to resist their worst impulses, if they want to resist them.
  • Note that some of your "smart = less moral" proposals concern a world in which some people are much smarter than others. If cognitive enhancement were widespread, we might get its moral benefits without the drawbacks of smart people suffering social stigmas of various kinds (your first two bullets in the second set).
  • Being much smarter might include being much better at interpersonal skills, increasing empathy for others.
  • Likewise, if there are morality network effects -- as in the tendency for well-organized societies to be less violent -- then a smarter overall population might be very much more moral.

On the "smart = less moral" side:

  • If cognitive enhancement happens such that some people are much, much smarter than others, the temptation for the much smarter people to use their intelligence to take advantage of the less-smart people may be simply too great to resist. Presumably even very, very smart people will have their price.

By and large, I think I'd agree with you that it seems right that a smarter human population would be more moral, but it's by no means certain.

Smart people tend to be more cooperative and more accepting of economic deals, IIRC; see the references linked from http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r37

  • Smart people are more moral because they have a greater ability to recognise what is in their self-interest, and being moral is in their self-interest a significant proportion of the time (in other words, morality is instrumentally rational).

Note: I am not affirming this hypothesis; I merely think it is worth considering.

Dmytry:

IMO it's fairly straightforward. Morality requires intelligence, just as constructing buildings that don't fail requires intelligence. To decide on an action based on some high-level moral imperative, one needs to think a fair amount.

Most people are just too stupid to be moral. Given the right position in the Third Reich, they were murdering minorities with their own hands. They were burning witches. They aren't even facing a choice between being moral and not; as far as the big picture goes, they are a hundred percent amoral. They need to obey very direct, simple rules made by others, and the end result might be moral-ish given a good set of rules. They can't reason from their actions to any high-level moral imperative. In a discussion they'll say that you can't either. Hell, they'll say it with emphasis, seeing it as a virtue - the 'it's wrong' kind of can't.

The intelligent people... a normal kid who grows up presumed to be mentally disabled, among the mentally disabled, will play mentally disabled to get slack. Many intelligent people grow up with the same habit, spoiled by the plausible deniability of intent that playing stupid gives them.

In light of this, I think that intelligence enhancement, in a culture that is making progress towards improved morality, would improve morality.

As a nerd, I have a (usually socially unacceptable) impulse to offer 16 possible ways that some plan could go wrong. It's fun, and on occasion useful. It seems very possible to me that your impression of "the state of bioethics" comes from a selection effect, where bioethicists show off their coolest objections to an obviously good thing.

Actually, in engineering school, I learned the same notion -- "shoot lame puppies early". It's a good plan to look for every possible (for a reasonably narrow definition of "possible") way your design could fail before you move further.

All I'm trying to say is that just because these philosophers are talking about cases that probably don't matter doesn't mean that no-one should think about them. On the very small chance that they do matter, the payoffs for having thought about them are large.