tl;dr: Relativism bottoms out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. In particular, it implies that morality should think about science, and science should think about morality.

Sam Harris attacks moral uber-relativism when he asserts that "Science can answer moral questions". Countering the counterargument that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health.

What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:

  1. "Teachers should be allowed to physically punish their students."
  2. "Children should be raised not to commit violence against others."

First of all, note that questions of causality are significantly more accessible to science than was thought possible before 2000. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.

So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?

First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of (2) might make (1) slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.

Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n² − n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called "morals" fades into the background, and we see a throbbing circulatory system of moral relations: objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like "human suffering is undesirable" looks like a major organ: important on its own to a lot of people, with lots of connections in and out for science to examine.
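The quadratic growth is easy to check for yourself; here is a quick illustrative sketch (mine, not part of the original argument), counting the unordered pairs among n morals:

```python
# Each unordered pair of morals is one relation for science to examine:
# (n^2 - n) / 2, i.e. "n choose 2".
def num_relations(n):
    return (n * n - n) // 2

for n in (2, 5, 10, 50):
    print(n, "morals ->", num_relations(n), "pairwise relations")
```

Already at 10 morals there are 45 pairwise relations, and at 50 morals there are 1225 — the relations dominate the list itself almost immediately.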

Treating relativistic vertigo

To my best recollection, I have never heard the phrase "it's all relative" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling the entire discourse has been and will continue to be "meaningless" or "arbitrary". Once this happens, it can be very difficult to persuade them to keep thinking, let alone to think productively.

To rebut this sort of conceptual nihilism, it's natural to respond with analogies to other relative concepts that are clearly useful to think about:

"Position, momentum, and energy are only relatively defined as numbers, but we don't abandon scientific study of those, do we?"

While an important observation, this inevitably evokes the "But that's different" analogy-immune response. The real cure is in understanding explicitly what to do with relative notions:

If belief is subjective, let us examine objective relations between beliefs.
If morality is relative, let us examine absolute relations between morals.
If beauty is in the eye of the beholder, let us examine the eyes of the beholders.

To use one of these lines of argument effectively — and it can be very effective — one should follow up immediately with a specific example in the case you're talking about. Don't let the conversation drift in abstraction. If you're talking about morality, there is no shortage of objective moral relations that science can handle, so you can pick one at random to show how easy and common it is:

  • "Birth control should be discouraged."
    "Teen pregnancy / the spread of STDs is undesirable."
    Question: Does promoting the use of condoms increase or decrease teen pregnancy rates / the spread of STDs?  
  • "Masturbation should be frowned upon."
    "Married couples should do their best not to cheat on each other."
    Question: Does masturbation increase or decrease adulterous impulses over time?  
  • "Gay couples should not be allowed to adopt children."
    "Children should not be raised in psychologically damaging environments."
    Question: What are the psychological effects of being raised by gay parents?

I'm not advocating here any of these particular moral claims, nor any particular resolution between them; I'm simply saying that the answer to the given question — and many other relevant ones — puts you in a much better position to reflect on these issues. Your opinion after you know the answer is more valuable than before.

"But of course science can answer some moral questions... the point is that it can't answer all of them. It can't tell us ultimately what is good or evil."

No. That is not the point. The point is whether you want teachers to beat their students. Do you? Well, science can help you decide. And more importantly, once you do, it should help you in leading others to the same conclusion.

A lesson from history: What happens when you examine objective relations between subjective beliefs? You get probability theory… Bayesian updating… we know this story; it started around 200 years ago, and it ends well.
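As an illustrative sketch (my own, not from the post) of what "objective relations between subjective beliefs" means: two people can hold different subjective priors, yet the way evidence moves each belief is governed by the same objective rule, Bayes' theorem:

```python
# Bayes' rule: the objective relation between a prior and a posterior,
# given the likelihood of the evidence under each hypothesis.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    joint = prior * p_e_given_h
    return joint / (joint + (1 - prior) * p_e_given_not_h)

# Alice and Bob start from different subjective priors...
alice, bob = 0.2, 0.8
# ...but the same evidence (likelihoods 0.9 vs 0.1 under the two
# hypotheses) updates both beliefs lawfully, pulling them together.
print(posterior(alice, 0.9, 0.1))  # rises well above 0.2
print(posterior(bob, 0.9, 0.1))    # rises further toward 1
```

The priors are subjective; the update rule relating belief-before to belief-after is not.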

Now it's morality's turn.

Between the subjective and the subjective lies the objective.
Relative does not mean structureless.
It does not mean arbitrary.
It does not mean meaningless.
Let us not discard the compass along with the map.


80 comments

Also, this line of argument struck me as a sneaky piece of Dark Arts, though in all likelihood unintentional:

Countering the counterargument that morality is too imprecise to be treated by science, he [Sam Harris] makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health.

Actually, in the overwhelming majority of cases, "healthy" is a very precisely and uncontroversially defined concept. Nobody would claim that I became healthier if I started coughing blood, lost control of a limb, or developed chronic headaches.

However, observe one area where the concept of "health" is actually imprecise and controversial, namely mental health. And guess what: there are many smart and eminently sane people questioning whether, to what extent, and in what situations medicine can legitimately answer questions of health in this area. (I recommend this recent interview with Gary Greenberg as an excellent example.) Moreover, in this area, there are plenty of questions where both ideological and venal interests interfere with the discussion, and as a result, it's unden...

Nobody would claim that I became healthier if I started coughing blood, lost control of a limb, or developed chronic headaches.

Nobody would claim that I became more moral if I started stealing, killed two people for money, or turned into a notorious liar. That there are conditions uncontroversially classified as disease doesn't mean that the boundary is strict and precise.

I don't see how this answers my objection. I'll try to restate my main point in a clearer form. The claim that "'healthy' is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health" is, while superficially plausible, in fact false under the interpretation relevant for this discussion. Namely, the claim is true only for those issues where the concept of "health" is precise and uncontroversial. In situations where the concept of "health" is imprecise and a matter of dispute, there are sane and knowledgeable people who plausibly dispute that medicine can legitimately answer questions of health in those particular situations. Thus, what superficially looks like a lucid analogy is in fact a rhetorical sleight of hand. (Also, I'd say that by any reasonable measure, questions of health vs. disease are typically much more clear-cut than moral questions. The appearance of coughing or headaches, ceteris paribus, represents an unambiguous reduction of health; on the other hand, even killing requires significant qualifications to be universally recognized as evil. But my main objection stands regardless of whether you agree with this.)
It's easier to tell that something is unhealthy than whether it's optimally healthy. Coughing up blood is worse than not doing so, but is good stamina better than increased alertness? I'd posit that most moral arguments are over whether something is immoral or not, and that a lot of the time those can be related to facts.
You're right that people often wonder whether something is moral as if it were a binary question, but they should be concerned about precisely how good or bad various actions or policies are, because all actions have opportunity costs. It makes little sense to say "it is immoral for teachers to beat schoolchildren" without considering the effects of not beating schoolchildren.
I am not sure whether I can fully agree, although I see your point more clearly now. To give one example, we had a discussion about deafness recently. One of the disputed questions was whether the deaf are "sick" or "a linguistic minority". If deafness could be easily cured in all instances (and this is purely a question of medicine), then the "linguistic minority" stance would be hardly defensible. Anyway, there are questions which medicine certainly can answer (typically: what are the causes, can the condition be cured, what are the side effects of the treatment) pertaining to conditions whose qualification as disease is disputed by reasonable people.
The idea that someone who is fat is unhealthy isn't obvious. Science has shown that they're more likely to suffer from heart disease, among other things. Because of this, nearly everyone agrees that being fat is bad.
I'm confused. It looks like the original post is arguing that science can answer some moral questions, and using the health analogy to advance this claim. In that case, pointing out that science can't answer all health issues but only some, even if true, does not undercut the original argument.
Perhaps the fields of psychology and ethics both exhibit a continuum of objectivity of a similar nature. If this is the case, then as surely as psychology is helpful, so could be a well-constructed formal theory of ethical action. Certainly moral solutions are not clear-cut, and many factors can play into choosing how to act.

An ethical system qua normative claims is effectively a system of heuristics for effecting an outcome. The normative claims represent our physical (neurological) response to external consequences, and there is definite interplay between situational parameters that weight the decision to act in one way or another. Many people, for instance, claim it is wrong to murder one person to save another, but various factors can come into play that alter the weight of that conviction. For instance, it is generally considered acceptable to kill an attacker when it is necessary to prevent him/her from killing you.

I am not convinced that it is not possible to effectively model average (or any augmentation of) human morality, and I think it is also likely that if we could do this we might be able to more effectively sort out which actions to take given certain parameters. However, like a healthy psyche, a healthy morality is defined via social standards. Due to that, it will not be absolute, but rather goal-relative. As far as I can tell, a healthy psyche is most generally one that allows for adherence to the most commonly held social conventions for what is of value and how that which is valuable is acceptably obtained. As long as certain basic reactions to certain consequences of one's actions are nearly universally accepted (and this seems to be the case when it comes to very basic questions of morality), I think it is reasonable in theory (though I am fuzzy about how one might work out the details) to think that we could model moral decision making in such a way that it could effectively help us to make practical decisions to yield optimal
Yes. Morals are made of a completely different substance from anything else, including concepts about the empirical world like "health." Fuzzy concepts about "morality" and fuzzy concepts about how to classify things based on their empirical features are not even the same type of fuzziness. This is philosophy 101.
It probably is Philosophy 101. But in Philosophy 202 you go back and review the overlap and interaction of the two.
May I just remark that we are not libertarian deontologists, but rather determinist consequentialists; mental illness can be bad in many ways: the patient zerself can express that it is undesirable (many developmentally handicapped people are aware of their disability), the patient's peers and loved ones can express that it is undesirable (my uncle is manic depressive and only admitted so to himself in his early forties), the mental illness can have negative repercussions for society (treatment costs, damages caused by the patient), a prospective mother can express that having a child with a disorder is undesirable, etc. Mental illness is illness; right there in the name is the first clue. Most patients will, upon realising they have a disorder, want it gone, if for no other reason than to fit in. Classifying which things are disorders and which aren't is just looking at the consequences and making a cost-benefit analysis.
Thanks for pointing this out. I'm sorry to say that I was fooled by this.

Bad tactics: mentioning Sam Harris (who got a pretty bad reception here) and choosing somewhat political examples.

Your point seems so true as to be obvious. Even a deontologist cares about the state of the world; if you have a duty to, say, not kill people, it is relevant to know what kills people. That may involve only common-sense knowledge, but it may sometimes require the kind of science done by specialists.

(Even if your duty is to have a "good will," your good will must be somehow connected to the state of the real world; how can you be said to have a "good will" if you feed your wife a liquid without being remotely concerned as to whether it's poison or not?)

Bad tactics: mentioning Sam Harris (who got a pretty bad reception here) and choosing somewhat political examples.

I didn't want to choose issues people already agreed upon or ignored, including Harris himself.

Your point seems so true as to be obvious. ...

Have you not had a conversation that was ended or degraded with "Well morality is subjective anyway, this is all a pointless question."? The goal of the post is to respond as effectively as possible to this disorientation, and unsurprisingly, the most convincing response is an obviously true one... what I'm offering is which obviously true response is most effective. That's what I was getting at when I wrote

Though perhaps obvious, this idea has some seriously persuasive consequences

though maybe I should expand on that in the OP?

I didn't mean it as a criticism of you -- I meant more that I was shocked that people in the comments disagreed with your argument. I mean, no matter how you form your moral values, they're going to be affected by factual claims, and people will change their opinions on moral issues based on learning new objective facts. Actually, that's probably the predominant way that people change their minds on moral issues. "X is good." "What's that? You say X kills vast numbers of people? You have strong evidence for that? Oops, X is bad."

All of your examples dealing with morality take a consequentialist stance with regard to ethics. I don't think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions. So, I don't think you are saying anything fundamentally new here by applying science to pairs of ethical maxims rather than to one at a time.

But a lot of people are not consequentialists - they are deontologists (i.e. believers in moral duties). That duties may be in conflict on occasion has also been known for a long time - I'm told this theme was common in Greek tragedy. I'm curious as to whether and how your methodology can find a toehold for science in a duty-based account of morality.

For example:

  • Everyone has a duty not to masturbate.
  • Every married person has a duty not to commit adultery.

Where is the conflict, even if science is brought in?

Perplexed: Actually, my impression is that the overwhelming majority of people are practitioners of folk virtue ethics in their own personal lives. (This typically applies to the self-professed consequentialists and deontologists too, including those who have made whole academic careers out of advocating these ideas in the abstract.) I expanded on this thesis once in a long and somewhat rambling comment, which I should rewrite in a more systematic way sometime. It mostly boils down to maintaining and enforcing an elaborate system of tacit-agreement focal points in one's interactions with other people, and priding oneself on being the sort of person who does this with consistent high skill, which is one of the basic elements of what the ancients called "virtue." (Of course, when it comes to views that don't have practical relevance for one's personal life, it's mostly about signaling games instead.)
Indeed. Put differently, science bears upon instrumental issues but not terminal ones. What would falsify this idea would be an example of new factual knowledge changing someone's perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action. Neither Harris nor Academian seems to have provided such an example, and I'm not sure one exists. Following are two examples of a slightly different type that also seem to fail.

1. Alice thinks homosexuality is immoral because it's unnatural. Bob tells her that there are cases of animal homosexuality. Alice decides that it's not unnatural and that it isn't wrong. (But isn't being natural the end, with sexuality being merely a means, such that what we see here is still just a revaluation of instruments?)

2. Alice thinks it's wrong to X until Bob tells her about an evopsych theory under which condemning X was adaptive before people invented farming. Condemning X is not obviously adaptive or maladaptive today. Alice stops condemning X because she thinks her disapproval of it was just a mind trick and she'd rather not expend effort condemning things that aren't "really wrong." (Again, the end here is some sort of mental energy economy, while the instrument is her moral belief set?)

That said, I'm not too comfortable with the idea that new knowledge has no effect on terminal values. This is because the other contenders for influence on terminal values (e.g. ancient instinct) seem decidedly less open to my control.

P.S. I'm rather new here, and have not finished the sequences. If I've missed something that's already been covered, I'd love a point in the correct direction.
For what I consider non-obvious reasons, I disagree. As you say (and thanks for pointing this out explicitly), I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal value that I stopped considering terminal upon realizing something factual about it. I'm guessing LucasSloan and Jayson_Virissimo are referring to similar experiences in these comments.

You could argue that its changing means "it wasn't really terminal to begin with". However, the separation of a given utility function into values and balancing operations is non-unique, so my current opinion is that the terminal/instrumental distinction is at best somewhat nominal. In other words, the change that it stopped feeling terminal may be the only sort of change worth calling "not being terminal anymore". So I think you should more precisely demand an example of a person's utility function changing in response to knowledge.

On the day of the factual realization I mentioned above, while it's clear that my description of my utility function to myself and others changed, it's not clear to me that the function itself changed much right away. But it does seem to me that over time, expressing it differently has gradually changed the function, though I can't be sure. I only hinted at all this when I added

When I first made the utility function/description distinction, it was for abstract reasons (I was making a toy model of human morality for another purpose), and I didn't quite notice the implications it would have for how people think of moral progress. Now in response to your demand for explicit examples, I'm a lot more motivated to sort this out. Thanks!
Changing terminal values in response to learning is not only possible, but downright normal. We pursue one goal or another and find the life thus lived to be good or bad in our experience. We learn more about the goal-state or goal object, and it deepens or loses its attraction. This needn't mean that "the true terminal value" is pleasure or other positive emotion, even though happiness does play a role in such learning. Most people reject wire-heading: clearly pleasure is not their overarching "true terminal value."
True, it wouldn't mean that pleasure was the actual terminal value, and the fact that many people reject wire-heading is evidence that pleasure is indeed not a terminal value for those people. However, what role could "happiness" or feelings of well-being play, if not as true terminal values, if it's in response to those feelings that people change (what they thought were) their terminal values?
Do what results in the smallest amount of duty-breaking.

Trouble is, when moral conclusions (and thus also the political and ideological positions that follow from them) depend on the conclusions of science, what force is going to keep scientists objective when they're faced with the resulting biases and perverse incentives? It's not like we have an oracle that would be guaranteed to provide objective and accurate scientific answers regardless of the moral, ideological, and political controversies for which the questions are relevant.

The evidence from the history of science, both past and current, clearly shows...

Science is involved in moral controversies even if the scientists aren't aware they're participating in a moral debate. Any moral question refers to a state of the real world, and so whenever scientists discover something about the state of the world, their knowledge could be used for a moral question. For instance, the discovery that fish can feel pain has implications for bioethics, but I'm not sure the scientists involved were thinking about bioethics. Science is involved in moral questions necessarily, in exactly the same way that ordinary perception and knowledge are involved in moral questions. The question "Is it moral for me to shoot this gun at you?" has something to do with the state of the world: is the gun loaded? are you shooting at me? Obviously in making a moral decision you would use your knowledge of such matters, right? You would not prefer to remain agnostic on factual questions? So likewise, "Should I vaccinate my child?" is a moral question that depends on the scientific questions "Does the vaccine prevent disease?" and "Does it have side effects?" Would you prefer science to remain agnostic on those questions because they are related to a moral issue? Would you prefer never to use scientific evidence in making this decision?
What you write is true, but these facts should be seen as imposing practical limitations on science. Sometimes, scientific inquiry will stumble onto ideologically charged questions, and the less aware the scientists are of the ideological implications, the greater the chance that their work will be sound. If the ideological implications are clear, the partisan opinions impassioned, and the consequences for practical power politics undeniable, we can't realistically expect that the results will not be influenced by these considerations, whether consciously or not. And if the scientific work is specifically motivated by the fact that the question is interesting for reasons of ideology or policy, the confidence we can have in its quality is very low indeed.

For all practical purposes, this imposes limitations on the efficacy of institutional science of the sort we have today, and this must be recognized by anyone whose interest is finding truth rather than ideological ammunition. There are already many research areas where the ideological influences are so strong that their output can be trusted only after a very careful examination, and there are those whose output is almost pure bullshit, yet nevertheless gets to be adorned with the most prestigious academic affiliations. Therefore, it seems pretty clear to me that in the present situation, science is already excessively engaged in ideologically sensitive areas, and encouraging further such engagement will result only in additional corruption of science, not bringing clarity and rigor to the discussions of these areas.

Take your example of vaccination. In a situation where researchers consider it a moral imperative to dispel the crackpot conspiracy theories and pseudoscientific claims about vaccination, I have very little confidence that their research will provide an accurate picture of the risks and negative consequences of vaccination if their magnitude actually is non-negligible, for fear of providing ammunit
That's fair, insofar as science doesn't give you correct answers when it isn't working properly. When science isn't working properly, the results of science are no better, or barely better, than random. A few questions: One, are you saying that scientists should strive to be ignorant of the existence of widely discussed ideological and moral issues? Is this one of the cases where less knowledge is better than more knowledge? Two, what is an ideology? (Of course, I know how to use the word in a sentence, but you use it so often on LW that I wonder if you have a precise definition.) For example, would you describe yourself as having any ideology? Three, of the possible means one could use to achieve one's desires, would you say that writing biased scientific papers is an immoral means? What about persuasive essays?
SarahC: Well, first, it depends on what they're working on. Many things are remote enough from any conceivable issues of ideology and power politics that this is not a problem; for example, Albert Einstein's very silly ideology didn't seem to interfere with his physics. However, topics that have bearing on such issues would indeed be best done by space aliens who'd feel complete disconnect from all human concerns. This seems to me like an entirely obvious corollary of the general principle that in the interest of objectivity, a judge should have no personal stakes in the case he presides over.

If scientists could somehow remain ignorant of the ideological implications of their work, this would indeed have a positive effect on their objectivity. But of course this is impossible in practice, so it would make no sense to strive for it. This is a deep problem without a solution in sight. (Except for palliative measures like increasing public awareness that in ideologically sensitive areas, one should be skeptical even towards work with highly prestigious affiliations.)

My favorite characterization was given by James Burnham: “An ‘ideology’ is similar in the social sphere to what is sometimes called ‘rationalization’ in the sphere of individual psychology. [...] It is the expression of hopes, wishes, fears, ideals, not a hypothesis about events -- though ideologies are often thought by those who hold them to be scientific theories.” (From The Managerial Revolution.)

Taken in the broadest possible sense, therefore, every person has an ideology, which encompasses all their beliefs, ideas, and attitudes that are not a matter of exact scientific or practical knowledge, and which are at least partly concerned with the public matters of social order (with the implications this has on the practical relations of power and status, although these are rarely stated and discussed openly and explicitly). In a more narrow sense,
Vladimir, have you read Spreading Misandry and Legalizing Misandry by Nathanson & Young? They've done some of the best work I've read on the subject of ideology. Here is their description of ideological feminism:

Most of their criticism is aimed at feminism, but if you think about their description of ideology, it's not difficult to see the same problems in any political movement. Here are the features they relate to ideologies:

* Dualism (see above)
* Essentialism ("calling attention to the unique qualities of women")
* Hierarchy ("alleging directly or indirectly that women are superior to men")
* Collectivism ("asserting that the rights of individual men are less important than the communal goals of women")
* Utopianism ("establishing an ideal social order within history")
* Selective cynicism ("directing systematic suspicion only toward men")
* Revolutionism ("adopting a political program that goes beyond reform")
* Consequentialism ("asserting the belief that ends can justify means")
* Quasi-religiosity ("creating what amounts to a secular religion")

I would be interested to know how these features relate to your experiences with ideologies. Other notable sections in Spreading Misandry: "Making the World Safe for Ideology", the use of deconstructionism by ideologies, and "Film Theory and Ideological Feminism".

I recommend these books to anyone who is interested in biases, group psychology, and ideologies; their books give excellent
I haven't read the books by Nathanson & Young, but looking at their tables of contents, I can say that I am well familiar with these topics. However, it's important to immediately note that the notion of ideology that you (and presumably N&Y) have in mind is narrower than what I was writing about. This might sound like nitpicking about meanings of words, and clearly neither usage can claim to be exclusively correct, but it is important to be clear about this to avoid confusion. Ideology in the broader sense also includes the well-established and uncontroversial views and attitudes that enable social cohesion in any human society. (This follows the usage in Burnham's text I cited; for example, in that same text, shortly after the cited passage, Burnham goes on to discuss individualism and belief in property rights as key elements of the established ideologies of capitalist societies.) In contrast, your meaning is narrower, covering a specific sort of more or less radical ideologies that have played a prominent role in modern history, which all display the traits you listed to at least some extent. One book you might find interesting, which discusses ideology in this latter sense, is Alien Powers: The Pure Theory of Ideology by the LSE political theorist Kenneth Minogue. I only skimmed through a few parts of the book, but I would recommend it based on what I've seen. Minogue is upfront about his own position (i.e. ideology, in Burnham's sense, but not his), which might be described as intellectual and moderate libertarianism; in my opinion, this is the kind of topic where authors of this sort usually shine at their brightest. You can find an excerpt presenting the basic ideas from the book here. I'll check out these books by Nathanson & Young in more detail, and perhaps post some more comments later.
Thanks for replying. So, I guess you'd say that true statements of scientific fact are different in kind from statements of wishes, dreams, beliefs, attitudes and so on. And, additionally, that it's in the interest of human beings to have true statements of scientific fact which are not contaminated with wishes, dreams, beliefs, and attitudes, or falsified by bias or forgery. Hmm. That seems plausible, but I'm not certain of it. It's close enough, of course, that I don't intend to practice or condone scientific fraud in real life.

And ideology is, for you, basically about conflict and incapacity to be rational. By that definition, you're probably not an ideologue. I'm probably not either, but I know I have points where I cannot continue a rational discussion (in particular, if someone makes an unkind personal remark). But sometimes a person can care more about one of the things he or she values than about being patient and tolerant with everyone. Sometimes, some value takes precedence over peacemaking and discussion. Then conflict will happen, and rational discussion will not.

I can think of situations where I would sympathize with the "ideologue" in that case. I am not sure that it's a good person who believes that nothing is more important than rational inquiry and the absence of conflict. Would I patiently entertain the notion that, say, it might be better for society for someone to kill my sister? (Imagine that there was some argument in favor of it.) Would I strive to be evenhanded about it? Or would I be in "conflict ... perhaps even physical" and "fatally biased" and "incapable of rational argument"?
I agree with all this. In all sorts of human conflicts, even if all the relevant questions of fact and logic have been addressed to the maximum extent achievable by rational inquiry, there is still the inevitable clash of power and interest, which can be resolved only by finding a modus vivendi, or by the victory of one side, which then gets to impose its will on the other. Among the available tactics in various types of conflicts, it is ultimately a judgment of value and taste which ones you'll see as legitimate, and which ones depraved. This is especially true when it comes to propaganda aimed at securing the coherence of one's own side in a conflict, and at swaying the neutrals (and potential converts from the enemy camp) in one's favor.

It so happens that I have a particularly strong loathing for propaganda based on claims that one side's pretensions to power are somehow supported by "science." I see this as the most debased sort of ideological warfare, the propagandistic equivalent of a war crime, especially if the effort succeeds in attracting people with official institutional scientific affiliations to actively join and drag their own "science" into it. (It is also my factual belief that this phenomenon tends to make ideological conflicts more intense, more destructive, and less likely to end in a tolerable compromise, but let's not get sidetracked there.)

Yet, while the intensity of my dislike is a matter of my own values and tastes, the question of whether such corruption of science has taken place in some particular instance is still an objective question of fact and logic, because it is a special (even if difficult) case of the objective question of discerning valid science from invalid. Therefore, people can be objectively and demonstrably wrong in seeing themselves on the side of science and truth against superstition and falsity, when they are in fact just engaged in a pure contest for power, whether in their own interest or as someone else's
Thanks. I do understand better now, and I think the world probably needs people like you who are vigilant about keeping science unbiased -- it would be a much worse world, from my perspective, if we ceased to have science at all. I also appreciate your courtesy over the past few days. I sometimes have trouble accepting and listening to skeptical perspectives; I'm learning to accept that ideas with a skeptical/critical/realist tone can be very valuable, but it does run against the grain for me, and I think I didn't handle myself very well. At any other web forum, this would have been a feud between us. So I appreciate your patience and your explanations.
If you're inclined to write about it, I would be interested in reading more about what your personal values/tastes are. This would help me place your comments in context.
You probably understand that a full answer to this question would require an enormous amount of space (and time), and that it would involve all kinds of diversions into controversial topics. But since you're curious, I will try to provide a cursory outline of my views that are relevant in this context.

About a century ago -- and perhaps even earlier -- one could notice two trends in the public perception of science, caused by its immense practical success in providing all sorts of world-changing technological marvels. First, this success had given great prestige to scientists; second, it had opened hopes that in the future science would be able to provide us with foolproof guidance in many areas of human concern that had theretofore been outside the realm of scientific investigation.

The trouble with these trends was that around this time, the dreams and hopes they fueled started to drift seriously away from reality, and as might be expected, a host of pseudo-scientific bullshit-artists, as well as political and bureaucratic players with ready use for their services, quickly arose to exploit the opportunities this situation opened. This has led to a gradually worsening situation that I described in an earlier LW comment []:

This, in my view, is one of the worst problems with the entire modern system of government, and by far the greatest source of dangerous falsity and nonsense in today's world. I find it tragicomic when I see people worrying about supposedly dangerous anti-scientific trends like creationism or postmodernism [], without realizing that these are entirely marginal phenomena compared to the corruption that happens within even the most prestigious academic institutions due to the fatal entanglement of science with ideology and power politics, to which they are completely oblivious, and in which they might even be blindly taking part.
Thanks for your response. Actually, my question was broader in intent -- I was expressing curiosity about your personal values/tastes in general rather than about the matter at hand in particular. But from the way you took my question, I imagine that the matter at hand figures prominently :-).

Concerning

I understand that doing so would require a lot of time and energy, and I wouldn't want to divert your attention from things that are more important to you, but I will express interest in reading a carefully argued, well-referenced top-level posting from you on a relatively uncontroversial topic, expressing some small fraction of your views on science and government, so that I can have a more detailed idea of what you're talking about. Most of what you've said so far has been allusive in nature, and while I can guess at some of what you might have in mind, I strain to think of examples that would provoke such a strong reaction. Of course, this may be rooted in a personality difference rather than an epistemological difference, but you've piqued my curiosity and I wonder whether there might be something that I'm missing.

At present: I think that various sectors of science have in fact become debased by politicization. This may have made the situation in certain kinds of science worse than it has been in the past, but I don't think that this has made the political situation worse than it has been in the past. As far as I know, there have always been issues of people putting manipulative spin on the truth for political advantage, and I suspect that manipulative appeals to the authority of science are no more problematic than other sorts of manipulative appeals to authority were hundreds of years ago.

Incidentally, I was drawn toward math in high school by the fact that the truth seemed to me to be much more highly valued there than in most other subjects. I soon came to appreciate Beauty in Mathematics [] but a larg


Most of what you've said so far has been allusive in nature and while I can guess at some of what you might have in mind, I strain to think of examples that would provoke such a strong reaction.

Well, to fully explain my opinions on the role of institutional science and pseudoscience in modern governments, I would first have to explain my overall view of the modern state, which, come to think of it, I did sketch recently in a reply to an earlier question from you. So I'll try to build my answer from there (and ask other readers to read that other comment first if they're confused by this one).

The permanent bureaucracies that in fact run our modern governments, in almost complete independence from the entire political circus we see on TV, are intimately connected with many other, nominally private or "independent" institutions. These entities are formally not a part of the government bureaucracy, but their structure is, for all practical purposes, not separable from it, due to both formal and informal connections, mutual influences, and membership overlaps. (The workings of this whole system are completely outside the awareness of the typical citizen, w...

Thanks for writing this; upvoted. I'm not in a position to assess your comment's accuracy, as I don't know very much about either the workings of the government or the state of the field of macroeconomics, but you've offered me some food for thought. If I find Carl's subsequent postings potentially convincing grounds for political involvement, I'll look more closely into the aforementioned topics and may ask you some more questions. Up until now I haven't had reason to carefully research and think about these things.
multifoliaterose: If you're interested in these topics, as an accompaniment to my fervent philippics, you should check out some more mainstream materials on the issues of administrative rulemaking [] and the Chevron doctrine []. Googling about these topics will uncover some fascinating discussions and examples of the things I've been writing about, all from unimpeachable official and respectable sources. (I'm sticking to the U.S. law and institutions because it's by far the easiest to find good online materials about them. However, if you live anywhere else in the developed world, you can be pretty sure that you have close local equivalents of all these things I've been talking about.)
Thank you for the references. I live in the U.S. so these should be relevant.
Oh, and here's one more fascinating link. Before you click on it, think about the average citizen's idea of how the laws of the land come into being. And then behold the majesty of this chart: [] (Though it should be noted that there are still visible vestigial influences of traditions from the old times when the de facto constitution of the U.S. resembled the capital-C one much more closely. Notice how the process is described as rulemaking, and by no means as legislation. It would still be unacceptable to use the latter name for something that doesn't come directly from the formally designated legislative branch, even if their practical control over the law has long since disappeared in favor of the bureaucracies and courts.)
In some ways, things have gotten better, not worse. Both communism and Nazism claimed scientific backing. I don't see anything like that on the horizon. On the other hand, people became disenchanted with them because of disastrous results-- I don't think there's any public recognition of the poor quality of science they used.
NancyLebovitz: These political systems, however, are now distant in both time and space, and their faults can be comfortably analyzed from the outside. The really important question is in what ways, and to what degree, our present body of official respectable knowledge and doctrine deviates from reality, which is far more difficult to answer with any degree of accuracy. This is both because for us it's like water for fish, and because challenging it is apt to provoke accusations of crackpottery (and perhaps even extremism), with all their status-lowering implications.
The vaccination controversy isn't a particularly good example of the damage science takes from discussing morals. Although I agree that the rigour of research and the objectivity of publications suffer from the controversy, it isn't about morality. The anti-vaccination crackpots don't claim that vaccination is somehow ethically unjustifiable; they simply claim that it doesn't work and furthermore causes autism.
That's not entirely true. In recent years, at least in North America, HPV vaccines [] have become a significant ideological issue, mostly for purely moral reasons. (Though the media exposure of this controversy seems to have died down somewhat recently.) I haven't followed this issue in much detail; however, I've noticed that it has involved not only moral disputes, but also disputes about factual questions that are in principle amenable to scientific resolution, where the discourse is hopelessly poisoned by ideological passions. What you write is true about the majority of the historical vaccination controversies, though.
I hadn't known that; thanks.

The problem with this whole line of reasoning is that people really don't change their beliefs even if their reasons for their beliefs are shown to be contradictory with other values or internally logically incoherent. So even if you prove to someone that gay parents are not bad for kids with a huge longitudinal study with random assignment and causal control, a lot of people will simply say it still is inherently immoral for kids to be raised by gays. You can't say they're wrong.

People aren't optimizing for some coherent set of values, we just have a set of purely non-rational feelings about moral issues.

I think your point here is correct. However, the people who believe it's inherently morally wrong for gays to raise kids put a lot of money into convincing other people that it's wrong, and some of the convincees may then share the belief, but not the moral. Relevant studies can then change the belief back.

I have changed my mind about my values due to noticing that my values were inconsistent.

Same here (at least twice).
Yeah, but that makes you really really weird.
For which I am truly grateful.
First of all, that was intended as a general statement, not an absolute description of every case. Experiments have been done on people to see if, for example, they stop being opposed to incest in fictional scenarios where the incest is stated outright to be harmless. Before the scenario was presented, people offered utilitarian justifications for the incest taboo, but even when those were stripped away, they insisted that incest is still "just wrong". My point is that this is what generally happens when someone points out incoherence in a moral system. People generally switch to offering an axiomatic rationalization for their moral sentiments instead of a utilitarian one.

Also, I have to say: Do you mean that you made a judgement elevating one value above another you had in cases where they conflict? Or do you mean you actually gained a new value? It seems like you must have used some sort of higher-level value preference to make that meta-level moral judgement.
I noticed that my values were inconsistent, and I decided that one of them needed to be expunged. I removed a "value" that had been created at too high a level of abstraction, one which conflicted with the rest of my values and whose actual, important content could be derived from lower level moral concepts.
Such a person isn't going against Academian's advice. They've been led through the correct procedure of analysis, though they've only gone part of the way. They've found evidence that, all else being equal, it's better not to give in to a desire to commit incest. The incest itself is what they find bad, not some consequence of it. You haven't identified an incoherence in their final position. To continue the analysis, they should see what bad consequences would follow from not doing the incest. They should check whether this badness outweighs the badness of doing the incest. They should be able to identify the hypothetical scenarios where it's worse not to commit the incest than to do it. In the end, they may decide that people shouldn't commit incest in most typical situations, even when there are no distinct bad consequences of the incest. Whether or not you agree with them, they would still be vastly more reflective about their morality than most people are. It would be great if more people were so reflective, even if they ended up disagreeing with you about which things are harms-in-themselves.
The gay parents example jumped out at me as a bad example as well (the two morals stated aren't contradictory in light of a study showing gays to be good parents). The first two examples illustrate Academian's point well, though. Contradictions actually do change people's minds, I think. Look at birth control in Catholicism. Despite the pope himself saying it is wrong, many Catholics use it and support it (because to do otherwise would contradict other morals/desires they have).
What if being raised by gay parents improved a lot of cognitive functions, and had no significant effect on other personality traits? Or some other uncompromisingly positive effect? I don't know how that would happen, but I don't really know much about having gay parents anyway. The point is that science would help me have a better opinion than whatever I have so far.

Just randomly stumbled upon this old post and found it to be one of the cleanest, most useful takes on the "is morality relative? What do we do with that? What role should science play in this?" debate.

Is not being hypocritical a moral value in itself, or is it above morality? Either way, why?

If my values contradict, but I don't care about hypocrisy, should it matter to me?

If your values contradict, then what are you going to do, lie on the floor flopping around trying to do multiple contradictory things at once? You want to sort out exactly how much you value each relative to the other, and to what extent they contradict each other, so that, well, you can act in accordance with your values. Hypocrisy is more about giving lip service to one set of values while acting on others.
I may act in accordance with different values without resulting in undirected floppiness. For instance, I could value both animal life and wearing traditional Bavarian lederhosn, and act on these values by producing, buying and wearing lederhosn while donating money to a save-the-cows fund. But I guess I could just donate an amount relative to how much I value the cows over/under lederhosn. Hm. Okay.
Yeah. When, at any specific moment your values produce different suggestions as to what action to take, you have to balance or alter them somehow.
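The balancing described in this exchange can be sketched as a toy calculation. Every number below is invented purely for illustration (the value names and scores are my own hypothetical choices, not anything from the thread): once conflicting values are given explicit weights, picking an action reduces to comparing weighted sums rather than flopping around.

```python
# Toy sketch of weighing conflicting values. All weights and scores are
# made up for illustration; only the mechanism is the point.

# Hypothetical weights: how much the agent cares about each value.
weights = {"animal_welfare": 0.7, "wearing_lederhosn": 0.3}

# Hypothetical actions, scored per value on an arbitrary -1..1 scale.
actions = {
    "buy_leather_lederhosn":   {"animal_welfare": -0.8, "wearing_lederhosn": 1.0},
    "buy_synthetic_lederhosn": {"animal_welfare":  0.0, "wearing_lederhosn": 0.8},
    "buy_and_donate_to_cows":  {"animal_welfare": -0.3, "wearing_lederhosn": 1.0},
}

def utility(scores):
    """Weighted sum of how well an action serves each value."""
    return sum(weights[v] * s for v, s in scores.items())

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # with these made-up numbers: buy_synthetic_lederhosn
```

With different (equally subjective) weights, a different action wins; the arithmetic doesn't make the values objective, it only makes the trade-off between them explicit.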
Yes, otherwise someone might call you a 'donkey'!
hah hah hah. :P

"Teachers should be allowed to physically punish their students." "Children should be raised not to commit violence against others."

These two are contradictory. If a child is taught not to commit violence, they won't be able to become teachers who commit violence against children.

Presumably "commit violence" means "inflict harm in a non-socially-sanctioned manner", while "physically punish" means "inflict harm in a socially-sanctioned manner". Cynicism is fun! :) []
No, there's a non-contradictory solution: Arrange for there to be no teachers in the following generation. ;-)
I'd rather arrange for there to be no further children to teach after the current generation.
No contradiction. Allowing a teacher to physically punish children =/= requiring a teacher to do same. If the next generation of teachers all choose not to physically punish children, but have the option, both morals are conserved.

I'm not sure what's the best thread to link this, but a blog here purports to be written by a sociopath. Hat-tip to Chip Smith.

I feel slightly embarrassed now: I ran into that blog two months ago, skimmed a few posts, said "huh" and closed it. It never occurred to me as a potential Less Wrong link.
I think it should have been linked in one of the Open Threads they add periodically. Too late now, I guess, unless someone can move it.
At this point, the only new open threads are in the discussion section. On the other hand, there's nothing stopping people from starting a top-level open thread to see what happens.

Am I correct in assuming this is a lead-up to introducing Desirism? If so - huzzah! I'm a big fan myself!


"Birth control should be discouraged." "Teen pregnancy / the spread of STDs is undesirable." Question: Does promoting the use of condoms increase or decrease teen pregnancy rates / the spread of STDs?
"Masturbation should be frowned upon." "Married couples should do their best not to cheat on each other." Question: Does masturbation increase or decrease adulterous impulses over time?
"Gay couples should not be allowed to adopt children." "Children should not be raised in psychologically damaging environments." Question: What are the psychological effects of being raised by gay parents?