From Michael Eisen's blog:

Yuval Levin, former Executive Director of the President's Council on Bioethics, has an op-ed in Tuesday's Washington Post arguing that Obama's new stem cell policy is dangerous. Levin does not argue that stem cell research is bad. Rather he is upset that Obama did not dictate which uses of stem cells are appropriate, but rather asked the National Institutes of Health to draft a policy on which uses of stem cells are appropriate:

It [Obama's policy] argues not for an ethical judgment regarding the moral worth of human embryos but, rather, that no ethical judgment is called for: that it is all a matter of science.

This is a dangerous misunderstanding. Science policy questions do often require a grasp of complex details, which scientists can help to clarify. But at their core they are questions of priorities and worldviews, just like other difficult policy judgments.

Lost in this superficially unobjectionable - if banal - assertion of the complexity of ethical issues involving science is Levin's (and many other bioethicists') credo: that the moral complexity of scientific issues means that scientists should not make decisions about them.


Scientists also have highly unrepresentative personalities, high in openness to experience, and tend not to care about conservative values like respect for authority, group loyalty, and various taboos. Delegation of decision-making power to representative samples of elite scientists will thus favor those values more than the policies that would be adopted by a set of comparably informed people with values representative of the population.

Scientists also have highly unrepresentative personalities, high in openness to experience, and tend not to care about conservative values like respect for authority, group loyalty, and various taboos. Delegation of decision-making power to representative samples of elite scientists will thus favor those values more than the policies that would be adopted by a set of comparably informed people with values representative of the population.

This is a good summary of the bioethicists' argument; but I find their argument unconvincing. My suspicion is that the values of "comparably informed people" would inevitably tend to resemble those of scientists -- at least for practical purposes.

Concretely, for instance, it seems that much if not most of the opposition to embryonic stem-cell research is based on a failure to grasp the empirical fact that personhood resides in brain structure: no neurons, no person.

Maybe in principle there could still be moral arguments worth having that don't directly depend on the science; and maybe scientists would be biased toward certain stances in such arguments. But I don't think that's what's really going on here.

[Pedant Alert:]

...the empirical fact that personhood resides in brain structure...

Which specific experiments have shown that there is such a thing as personhood and that it somehow resides in the brain?

The notion of personhood is a philosophical concept, not a scientific one.

Bryan Caplan's research on differences of opinion between expert economists and others finds (in his datasets) that there are big effects of education and IQ, bigger than liberal or conservative ideological effects, but the latter still remain: people with graduate degrees agree more with economists, but conservative PhDs in industry and liberal PhDs in academia tend to disagree with each other.

"a failure to grasp the empirical fact that personhood resides in brain structure: no neurons, no person."

Do you think that personhood is really an 'empirical fact'? How would you empirically measure when a developing fetus or infant's (or toddler's, depending on your view of personhood) brain becomes a person without a value-laden definition? Likewise for temporary or permanent brain damage.

Do you think that personhood is really an 'empirical fact'?

I wouldn't claim that current science easily resolves all questions about personhood; but it does locate the phenomenon within the brain as opposed to anywhere else. Neurons (or, more broadly, things with a similar function) are a necessary condition that may or may not be sufficient. The extent to which a fetus, toddler, or Alzheimer's patient possesses personhood may be legitimately debatable -- but the question of whether or not an embryo is a person is surely settled: it isn't.

I think you have a different concept of 'person' in mind than is needed. We can define 'person' as "that which can think, reason, and has personality" or something similar (this is roughly what I think you mean by 'person'), but that isn't really relevant to the question. Like Carl said, we are looking for a value-laden definition here - something to tell us whether we should use those embryos or not.

Honestly, all of this definition nonsense is misleading. We don't really care about the definition of 'person.' What we want is to sort out our values. Embryos certainly aren't in my utility function, and that's all that matters. Defining 'person' is superfluous.

Do you think that personhood is really an 'empirical fact'? How would you empirically measure when a developing fetus or infant's (or toddler's, depending on your view of personhood) brain becomes a person without a value-laden definition? Likewise for temporary or permanent brain damage.

Is personhood really a binary proposition at all, or a matter of degree?

Of course, for almost any non-incoherent definition of personhood, the degree of personhood during the first trimester is roughly nil.

We need laws that incorporate continuous functions.

If personhood resides in brain structure then a brain-in-a-vat would be a person. Presumably its personhood would be postulated on the grounds of it having some sort of subjective experience. But that's not an empirical fact so I don't think personhood residing in brain structure can be classed as an empirical fact either.

If you're treating "brain in a vat = person" as a reductio, you've either got a lot to learn, or you've got a lot of explaining to do before this crowd's going to take you seriously.

It's not an empirical fact that a brain-in-a-vat has subjective experience. It's a thought experiment. Thought experiments don't establish empirical facts.

"It's not an empirical fact that a brain-in-a-vat has subjective experience."

If we could watch what the BIAV gets up to in its simulated world, we could see it interacting with its simulated environment. This would give us the same level of confidence in its having subjective experience as we have for any normal person.

Delegating this power to politicians has a poor track record.

Are you speaking from within a rationalist perspective, or are you defaulting to speaking from within a populist framework?


"Are you speaking from within a rationalist perspective, or are you defaulting to speaking from within a populist framework?"

I made what I think are some true factual claims about the world. What do you mean and why is it relevant?

Your comment assumes that policies should be set by people with values representative of the population.

Representative democracy is not designed to follow values representative of the population. That would be direct democracy. Representative democracy is supposed to be a way of finding representatives who are wiser than the general population. So if we speak from just the slightly-more-elitist framework of representative democracy that the US founders intended, this assumption is wrong.

the slightly-more-elitist framework of representative democracy that the US founders intended

The US founders intended several different, opposing things. Some were much more elitist than others.

Representative democracy is not designed to follow values representative of the population.

That's an open question. Some prefer representative democracy because it lets ordinary people spend time on things other than politics - in which case one might still prefer to elect people who would most likely have made the same decision as you would, and oust them when they do something you wouldn't have done.

Representative democracy doesn't necessarily diverge from the general populace, but it can.

A strong case can be made that this feature is a large part of why it was chosen by the Founders in the first place. Even if it wasn't, it clearly permits forms of elitism that other systems would rule out immediately. This fact is significant even if it wasn't intended.

"Your comment assumes that policies should be set by people with values representative of the population."

No, it doesn't. One might do best to delegate power to someone pursuing different and partially opposed goals (at least in part because of a different personality rather than expertise) because of outweighing advantages like scientific knowledge, or because the values have special practical use in the case (e.g. a long time horizon in a central banker). But my comment just acknowledged that from the perspective of any particular values, it can be a mistake to delegate power to someone opposed to some of those values for reasons other than knowledge.

It is possible to say that a policy has a drawback relative to a utilitarian, or egalitarian, or tribalist, or U.S. founder perspective without being any of those things.

"So if we speak from just the slightly-more-elitist framework of representative democracy that the US founders intended, this assumption is wrong."

I mentioned 'comparably informed' expertise, so 'wiser' seems to just mean people with certain basic values and personalities.


Scientists also have highly unrepresentative personalities, high in openness to experience, and tend not to care about conservative values like respect for authority, group loyalty, and various taboos. Delegation of decision-making power to representative samples of elite scientists will thus favor those values more than the policies that would be adopted by a set of comparably informed people with values representative of the population.

This is true. For my part, however, I would not speak out about this. I believe that if a tribe delegates, for whatever reason, their ethical decision making to a group with that sort of personality bias, then the morality that results is perfectly valid.

The very nature of morality is that it is determined not by consensus but by a dance of power and priorities. I suggest that decisions made by a group that tends not to care about conservative values like respect for authority, group loyalty, and various taboos would be far better than those usually made. This is partly because it would better suit my own preferences but also because each of those differences from the norm tends towards deciding what is best for the group rather than best for the leader or best for signalling allegiance to the leader.

My main reluctance about giving scientists this influence is that it is bad for science. The more political power you give a group, the more political the group becomes.

"I believe that if a tribe delegates, for whatever reason, their ethical decision making to a group with that sort of personality bias, then the morality that results is perfectly valid."

By what standard? Morally conservative people who make such a delegation without understanding the bias and its effects may be making a serious mistake with respect to their own values.

I agree that the broad liberal-intellectual moral personality that permeates academia, media, and Less Wrong is better by my (liberal-intellectual) standard and yours, but if we don't understand this process it will be difficult to avoid similar mistakes on our part. I wouldn't worry too much about letting slip the well-published 'secret' that most journalists, scientists, and other academics are politically liberal. The only special danger here is letting slip that a portion of these groups' support is due to personality differences rather than knowledge.

I suggest that having decisions made by a group that tends not to care about conservative values like respect for authority, group loyalty, and various taboos is far better than those usually made. This is partly because it would better suit my own preferences but also because each of those differences from the norm tends towards deciding what is best for the group rather than best for the leader or best for signalling allegiance to the leader.

Given that you are explicitly disregarding the group's ethical standards, how are you defining "best for the group"?

Scientists also have highly unrepresentative personalities, high in openness to experience, and tend not to care about conservative values like respect for authority, group loyalty, and various taboos.

What evidence is this assertion based on?

Levin's (and many other bioethicists) credo: that the moral complexity of scientific issues means that scientists should not make decisions about them.

Wouldn't they be out of a job if people believed that asking a professional bioethicist wasn't important?

One thing that talking heads never do is try to convince people that they shouldn't listen to talking heads. It's not just a matter of consistency: why would people try to put themselves out of a job?

How do we know that it's not just another priesthood: a profession that is useful only because people believe it's useful?

I don't understand -- are you claiming that scientists are people and therefore they're as much experts on ethics as anyone? Current bioethicists may suck, but the idea of having some people specialize at bioethics seems sound.

I don't understand -- are you claiming that scientists are people and therefore they're as much experts on ethics as anyone?

Yes. Actually, I would say scientists are better ethicists in their area of expertise, because

  • moral reasoning is reasoning, and smarter people are better at reasoning

  • they know what the heck they're talking about.

Current bioethicists may suck, but the idea of having some people specialize at bioethics seems sound.

Can you specialize in ethics? Or is it - to use the ever-popular reason-as-martial-arts metaphor - like specializing in kata? You sometimes see schools that strongly emphasize kata. IMHO their kata is weak, because they don't understand the purpose of their movements. To answer this question, you need to ask whether moral reasoning within a domain is qualitatively different from any other kind of reasoning in that domain.

Perhaps if our debates on ethics used esoteric concepts from category theory and the writings of German philosophers, it would be of some benefit to specialize in ethics. But they have never risen to that level.

Scientific training is specifically training in reasoning to a much greater extent than is, say, political training. Smarter people are better than dumber people at reasoning on average, but the advantage of scientists over politicians is less that they are smarter (they are, but only modestly) than that they are selected for and trained in reasoning well while politicians are selected for and trained in reasoning poorly.

moral reasoning is reasoning, and smarter people are better at reasoning

Philosophers are pretty smart.

If they were that smart they would be avoiding politics; then again, maybe the smart ones are and that's why the gov't ethicists seem so incredibly dumb.

Philosophers are pretty smart.

They get good scores on IQ tests. But in terms of dealing with reality, and producing real knowledge, they're incredibly dumb.

High INT, low WIS.

Generalizations, ahoy! That being said,

High INT, low WIS.

And sometimes way-too-high CHA. If you're naive and looking for wisdom, it's too easy to listen to someone talking nonsense about philosophy and be completely taken in. Witness the success of the irritatingly wrong postmodern thinking which holds that science is just another cultural opinion with no more validity than any other. If that were true then transistors would work about as well as rain dances or ancient Hindu theurgy, and yet people continue to spread the meme.

I would trust someone who understood and could use utilitarianism to solve ethical issues better than someone who didn't. Of course, modern bioethicists don't, so this is hardly a point in their favor. But I think in a perfect world people could specialize in ethics and gain unusual competence in that field.

The one real worry I have about scientists is that they're too personally invested. I wouldn't trust the guy who'd spent ten years of his life inventing a stem cell technique to determine when the technique probably shouldn't be used because of ethical issues. And I think that carries over to entire fields; biologists, in general, will have a personal investment in biological discoveries.

The optimal solution is smart people with scientific training specializing in utilitarian ethics. In our own world, I trust scientists about as much as anyone else, maybe a little more.

The one real worry I have about scientists is that they're too personally invested.

I have this same worry about a lot of bioethicists. Their whole shtick is telling scientists what they are and aren't allowed to do, and getting public support for their own actions. That's a recipe for fearmongering and being more restrictive than they should be in order to justify their own existence.

Obviously there are ethical decisions to be made in the field of biology, and it would probably be nice to have people who specialize in hashing out those issues, but the way the system is being set up seems dangerously dependent on -- and compliant to -- unfounded public fears.