Policy Debates Should Not Appear One-Sided

by Eliezer Yudkowsky · 3 min read · 3rd Mar 2007 · 184 comments



Robin Hanson proposed stores where banned products could be sold.1 There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television.

I was just making a factual observation. Why did some people think it was an argument in favor of regulation?

On questions of simple fact (for example, whether Earthly life arose by natural selection) there’s a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called “balance of evidence” should reflect this. Indeed, under the Bayesian definition of evidence, “strong evidence” is just that sort of evidence which we only expect to find on one side of an argument.
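The Bayesian point can be made concrete with a toy likelihood-ratio calculation. This is only an illustrative sketch; the probabilities below are invented for the example and do not come from the post:

```python
# Toy illustration: "strong evidence" as a large likelihood ratio.
# Suppose some observation occurs with probability 0.9 if a hypothesis
# is true, but only 0.01 if it is false. (Invented numbers.)

def posterior_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_given_h / p_evidence_given_not_h)

odds = 1.0  # start indifferent: 1:1 odds
for _ in range(3):  # three independent pieces of such evidence
    odds = posterior_odds(odds, 0.9, 0.01)

print(round(odds))  # 729000 -- the balance of evidence ends up lopsided
```

Each piece of evidence multiplies the odds by 90, so a factual question with strong evidence quickly becomes a one-sided battle, which is exactly what we should not expect from a policy tradeoff.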

But there is no reason for complex actions with many consequences to exhibit this one-sidedness property. Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

One should also be aware of a related failure pattern: thinking that the course of Deep Wisdom is to compromise with perfect evenness between whichever two policy positions receive the most airtime. A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.

If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.

We live in an unfair universe. Like all primates, humans have strong negative reactions to perceived unfairness; thus we find this fact stressful. There are two popular methods of dealing with the resulting cognitive dissonance. First, one may change one’s view of the facts—deny that the unfair events took place, or edit the history to make it appear fair.2 Second, one may change one’s morality—deny that the events are unfair.

Some libertarians might say that if you go into a “banned products shop,” passing clear warning labels that say Things In This Store May Kill You, and buy something that kills you, then it’s your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn’t just be a net benefit, it would be a one-sided tradeoff with no drawbacks.

Others argue that regulators can be trained to choose rationally and in harmony with consumer interests; if those were the facts of the matter then (in their moral view) there would be no downside to regulation.

Like it or not, there’s a birth lottery for intelligence—though this is one of the cases where the universe’s unfairness is so extreme that many people choose to deny the facts. The experimental evidence for a purely genetic component of 0.6–0.8 is overwhelming, but even if this were to be denied, you don’t choose your parental upbringing or your early schools either.

I was raised to believe that denying reality is a moral wrong. If I were to engage in wishful optimism about how Sulfuric Acid Drink was likely to benefit me, I would be doing something that I was warned against and raised to regard as unacceptable. Some people are born into environments—we won’t discuss their genes, because that part is too unfair—where the local witch doctor tells them that it is right to have faith and wrong to be skeptical. In all goodwill, they follow this advice and die. Unlike you, they weren’t raised to believe that people are responsible for their individual choices to follow society’s lead. Do you really think you’re so smart that you would have been a proper scientific skeptic even if you’d been born in 500 CE? Yes, there is a birth lottery, no matter what you believe about genes.

Saying “People who buy dangerous products deserve to get hurt!” is not tough-minded. It is a way of refusing to live in an unfair universe. Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.
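The "cost-benefit calculation" that hypothetical tough-minded economist performs might be sketched as follows. Every figure here is invented purely for illustration; the post specifies no numbers, and `value_of_statistical_life` is just a standard policy-analysis device, not anything endorsed in the text:

```python
# Toy cost-benefit sketch for keeping banned-products shops open.
# All numbers are invented for illustration only.

deaths_per_year = 100              # assumed harm from dangerous products sold
value_of_statistical_life = 10e6   # dollars; a common policy-analysis device
consumer_surplus = 2e9             # assumed yearly benefit of the unbanned trade

cost = deaths_per_year * value_of_statistical_life  # 1e9 dollars
benefit = consumer_surplus                          # 2e9 dollars
net = benefit - cost

# A positive net favors keeping the shops open -- while still admitting,
# rather than denying, the deaths on the cost side of the ledger.
print(f"net benefit: ${net:,.0f}")
```

The point of the sketch is not the arithmetic but the honesty: the deaths appear explicitly as a cost instead of being defined away.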

I don’t think that when someone makes a stupid choice and dies, this is a cause for celebration. I count it as a tragedy. It is not always helping people, to save them from the consequences of their own actions; but I draw a moral line at capital punishment. If you’re dead, you can’t learn from your mistakes.

Unfortunately the universe doesn’t agree with me. We’ll see which one of us is still standing when this is over.

1Robin Hanson et al., “The Hanson-Hughes Debate on ‘The Crack of a Future Dawn,’” Journal of Evolution and Technology 16, no. 1 (2007): 99–126, http://jetpress.org/v16/hanson.pdf.

2This is mediated by the affect heuristic and the just-world fallacy.


Comments

Like much of Eliezer's writings, this is dense and full of interesting ideas, so I'll just focus on one aspect. I agree that people advocating positions should fully recognize even (or especially) facts that are detrimental to their side. People advocating deregulation need to accept that things exactly like Eliezer describes will happen.

I'm not 100% sure that in a public forum where policy is being debated, people should feel obligated to advance arguments that work to their side's detriment. It depends on what the ground rules are (possibly implicit…

Hal, I don't favor regulation in this context - nor would I say that I really oppose it. I started my career as a libertarian, and gradually became less political as I realized that (a) my opinions would end up making no difference to policy and (b) I had other fish to fry. My current concern is simply with the rationality of the disputants, not with their issues - I think I have something new to say about rationality.

I do believe that people with IQ 120+ tend to forget about their conjugates with IQ 80- when it comes to estimating the real-world effects of policy - either by pretending they won't get hurt, or by pretending that they deserve it. But so long as their consequential predictions seem reasonable, and so long as I don't think they're changing their morality to try to pretend the universe is fair, I won't argue with them whether they support or oppose regulation.

bio_logical (7y, -3): I favor the thesis statement here ("Policy debates should not appear one-sided"), but I don't favor the very flawed "argument" that supports it. One-sided policy debates should, in fact, appear one-sided, GIVEN one participant with a superior intelligence. Two idiots from two branches of the same political party arguing over which way to brutalize a giant body of otherwise uninvolved people (what typically amounts to "policy debate") should not appear "one sided"... except to the people who know that there's only one side being represented (typically, the side that assumes that coercion is a good solution to the problem).

This is a life or death issue, and you don't have a moral opinion? What purpose could you possibly have for calling yourself a "libertarian" then? If the libertarian philosophy isn't consistent, or doesn't work, then shouldn't it be thrown out? Or, if it doesn't pertain to the new circumstances, then shouldn't it be known as something different than "libertarianism"? (Maybe you'd be a "socialist utopian" post-singularity, but pre-singularity when lots of people have IQs of <2,000, you're a libertarian. In this case, it might make more sense to call yourself a Hayekian "liberal," because then you're simply identifying with a historical trend that leads to a certain predicted outcome.)

Gosh, I'm glad that Timothy Murphy, Lysander Spooner, and Frederick Douglass didn't feel that way. I'm also glad that they didn't feel that way because they knew something about how influential they could be, they understood the issues at a deep level, and they were highly motivated. Just because the Libertarian Party is ineffectual and as infiltrated as the major parties are corrupt doesn't mean it has to be. Moreover, there are far more ways to influence politics than by getting involved with a less-corrupt third party. This site itself could be immensely influential, and actually could obtain a more rational, laissez-faire outcome from politics (although it coul…
frankybegs (8mo, +1): I think you need to read more of the writings here re: scepticism of one's own beliefs.
HungryHobo (6y, 0): I think you may be conflating 2 common meanings of the word "deserve" which may be magnifying some of the conflict over your statement: deserve [moral], ie that someone deserves it like someone might deserve prison for a terrible crime, and deserve as in [events that happen to them as a result of their own actions], which need not have any moral elements.

Someone who goes walking into the Sahara without water, shelter or navigation equipment has done no moral wrong. They don't deserve [moral] bad things to happen to them, but bad things may happen to them as a result of their unwise choices and may be entirely their own doing. In that sense they get what they have earned or "deserve" [non-moral, as in "have a claim to" or outcome they have brought about]. It's not something malicious that's been forced upon them by others.

Someone who steps in a hidden bear trap has been unfairly maimed by a cruel uncaring universe and does not deserve [moral] or deserve [reaping earned results of their own actions] it. Someone who, against the advice of others, ignoring all safety warnings, against even their own better judgement uses a clearly marked bear trap as a sex toy has done no moral wrong. They don't deserve [moral] bad things to happen to them, but bad things may happen to them as a result of their unwise choices.

Nobody chooses their genes or their early environment. The choices they make are determined by those things (and some quantum coin flips). Given what we know of neuroscience how can anyone deserve anything?

BlueAjah (8y, +7): "Nobody chooses their genes or their early environment. The choices they make are determined by those things (and some quantum coin flips)." All true so far... but here comes the huge logical leap... "Given what we know of neuroscience how can anyone deserve anything?"

What does neuroscience showing the cause of why bad people choose to do bad things have to do with whether or not bad people deserve bad things to happen to them? The idea that bad people who choose to do bad things to others deserve bad things to happen to them has never been based on an incorrect view of neuroscience, and neuroscience doesn't change that even slightly.
Chrysophylax (8y, +4): The point TGGP3 is making is that they didn't choose to do bad things, and so are not bad people - they're exactly like you would be if you had lived their lives. Always remember that you are not special - nobody is perfectly rational, and nobody is the main character in a story. To quote Eliezer, "You grew up in a post-World-War-Two society where 'I vas only followink orders' is something everyone knows the bad guys said. In the fifteenth century they would've called it honourable fealty." Remember that some Nazis committed atrocities, but some Nazis were ten years old in 1945. It is very difficult to be a "good person" (by your standards) when you have a completely different idea of what being good is.

You are displaying a version of the fundamental attribution error - that is, you don't think of other people as being just like you and doing things for reasons you don't know about, so you can use the words "bad person" comfortably. The idea "bad people deserve bad things to happen to them" is fundamentally flawed because it assumes that there is such a thing as a bad person, which is unproven at best - even the existence of free will is debatable. There are people who consider themselves to be bad people, but they tend to be either mentally ill or people who have not yet resolved the conflict between "I have done X" and "I think that it is wrong to do X" - that is, they have not adjusted to having become new people with different morals since they did X (which is what criminal-justice systems are meant to achieve).
twanvl (8y, +5): I can only interpret a statement like this as "they are exactly like you would be if you were exactly like them", which is of course a tautology. If you first accept a definition of what is good and what is bad, then certainly there are bad people. A bad person is someone who does bad things. This is still relative to some morality, presumably that of the speaker.
MugaSofer (8y, -1): No. If they were, say, psychopaths, or babyeater aliens in human skins, then living their life - holding the same beliefs, experiencing the same problems - would not make you act the same way. It's a question of terminal value differences and instrumental value differences. The former must be fought (or at most bargained with), but the latter can be persuaded. So anyone whose actions have negative consequences "deserves" Bad Things to happen to them?
twanvl (8y, +1): I am not saying that. I was only replying to the part "... is fundamentally flawed because it assumes that there is such a thing as a bad person".
MugaSofer (8y, 0): My point is that the distinction between "Bad Person" and "Good Person" seems ... well, arbitrary. Anyone's actions can have Bad Consequences. I guess that didn't come across so well, huh?
Peterdjones (8y, +3): This is a flaw with (ETA: simpler versions of) consequentialism: no one can accurately predict the long-range consequences of their actions. But it is unreasonable to hold someone culpable, to blame them, for what they cannot predict. So the consequentialist notion of good and bad actions doesn't translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise. This line of thinking can lead to a kind of fusion of deontology and consequentialism: we praise someone for following the rules ("as a rule, try to save a life where you can") even if the consequences were unwelcome ("the person you saved was a mass murderer").
TheOtherDave (8y, +1): I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about the reasonableness of such a framework. So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models. It's not clear to me that either of those premises is necessary.
ArisKatsaris (8y, +4): There's a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs about which acts have good consequences). People can only act according to their model of the consequences, not according to the consequences themselves.
TheOtherDave (8y, +1): I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
ArisKatsaris (8y, +3): A consequentialist considers the moral action to be the one that has good consequences. But that means moral behaviour is to perform the acts that we anticipate to have good consequences. And moral blame or praise on people is likewise assigned on the consequences of their actions as they anticipated them... So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again.
Peterdjones (8y, 0): And how do we anticipate or project, save on the basis of relatively tractable rules?
ArisKatsaris (8y, +3): We must indeed use rules as a matter of practical necessity, but it's just that: a matter of practical necessity. We can't model the entirety of our future lightcone in sufficient detail, so we make generic rules like "do not lie", "do not murder", "don't violate the rights of others", which seem to be more likely to have good consequences than the opposite. But the good consequences are still the thing we're striving for - obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently.

A consequentialist is perhaps a bit scarier in the sense that you don't know if they'll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping them to break. In the case of super-intelligent agents that shared my values, I'd hope them to be consequentialists. As the intelligence of the agent decreases, there's assurance in some limited type of deontology: "For the good of the tribe, do not murder even for the good of the tribe..." [http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/]
Peterdjones (8y, 0): That's the kind of Combination approach I was arguing for.
DaFranker (8y, 0): My understanding of pure Consequentialism is that this is exactly the approach it promotes. Am I to understand that you're arguing for consequentialism by rejecting "consequentialism" and calling it a "combination approach"?
MugaSofer (8y, +1): That would be why he specified "simpler versions", yes?
Peterdjones (8y, 0): Yes.
JGWeissman (8y, +5): What I want out of a moral theory is to know what I ought to do. As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
Peterdjones (8y, 0): So you don't want to be able to understand how punishments and rewards are morally justified - why someone ought, or not, be sent to jail?
[anonymous] (8y, +5): It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions: does it produce good consequences or follow good rules or whatever. I don't think a moral theory has to have special cases built in for judging other people's actions and then prescribing rewards/punishments. It should describe constraints on what is right, and then let you derive individual cases like the righteousness of jail from what is right in general.
Peterdjones (8y, -1): But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do. Universalisability rides again.
JGWeissman (8y, +4): The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn't seem very meaningful. In general, I don't want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go to jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well-defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don't see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right: judging other people's actions is just another sort of action you can choose; it is not fundamentally a special case.
Peterdjones (8y, -1): So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They're either in jail or they are not.

But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
DaFranker (8y, +4): No. It's about what JGWeissman in general ought to do, including "JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman". Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we're having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate "Give fish or not?"

This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me - unless you completely exclude transitivity and instrumentality from your entire model of the world. Basically, most actions I can think of will either increase or decrease the probability of a ton of possible futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones, even if the action doesn't directly impact them or impacts them in a non-obvious way. For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less; and since lying is much more likely to be hurtful than beneficial and economies of scale apply, you might be consequentially better off prescribing yourself the no-lying policy even in this particular instance where it will be immediately negative.

Also note that "judging something good" and "giving praise and rewards", as well as "judging something bad" and "attributing blame and giving punishment", are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do. Your mental judgments are…
Peterdjones (8y, -2): Is it? That isn't relevant to me. It isn't relevant to interaction between people, it isn't relevant to society as a whole, and it isn't relevant to criminal justice. I don't see why I should call anything so jejune "morality". Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don't know what you think is blocking that off.

Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone's wallet although the money is morally neutral. That is not a fact about morality that is an implication of the naive consequentialist theory of morality - and one that is often used as an objection against it.

Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality). Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
DaFranker (8y, +2): Indeed. "Judge actions of Person X" leads to better consequences than not doing it, as far as they can predict. "Judging past actions of others" is an action that can be taken. "Judging actions of empirical cluster Y" is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include "punish the idiot who did that" and "blame the person" and whatever other moral judgments are appropriate).

Did I somehow communicate that something was blocking that off? If you hadn't said "I don't know what you think is blocking that off", I'd have assumed you were perfectly agreeing with me on those points.

If you want to put your own labels on everything, then yes, that's exactly what my theory is and that's exactly how it works. It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not. So yes, by your words, I'm being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible [http://lesswrong.com/lw/ks/the_wonder_of_evolution/] coincidence [http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/], that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans. How incredibly coincidental and curious!

Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you'll agree that someone who acts in a manner that instrume…
Peterdjones (8y, -1): The point being what? That moral judgments have an instrumental value? That they don't have a moral value? That morality collapses into instrumentality? Yes, but the idiosyncratic disposition of your values doesn't make egoism into standard c-ism. That was meant sarcastically: so it isn't coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea. What is the point of that comment? That is not obvious. That is incomplete.
DaFranker (8y, +1): Oh, sorry. I was jumping from place to place. I've edited the comment; what I meant to say was: "To return to your previous words, I believe you'll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it's more moral. I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called 'morally good' themselves."

For me, it's a good heuristic that judgments and thoughts also count as actions when I'm thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly. So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they're better for.

Mu, yes, no, yes. Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the "considered better" is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects). I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
Peterdjones (8y, -1): That isn't a reduction that can be performed by real-world agents. You are using "reduction" in the peculiar LW sense of "ultimately composed of" rather than the more usual "understandable in terms of". For real-world agents, morality does not reduce to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
DaFranker (8y, +1): Errh, could you reduce/taboo/refactor "instrumental concerns" here? If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just "considered moral now" but would result in lots of moral bad later. One weird example here is making computer programs. Isn't it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI? I'm not sure I understand your line of reasoning for that last part of your comment.

On another note, I agree that I was using "reduction" in the sense of describing a system according to its ultimate elements and rules, rather than... "understandable in terms of"? What do you even mean? How is this substantially different? The wikipedia article [http://en.wikipedia.org/wiki/Reductionism]'s "an approach to understanding the nature of complex things by reducing them to the interactions of their parts" definition seems close to the sense LW uses.

In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to "ideal" world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as "worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred", etc. etc., and then you get the standard Evolution Theory statements.
Peterdjones (8y, -2): If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in "near" (or "real") mode.

Huh? I don't think "instrumental" means "actually will work from an omniscient PoV". What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, "don't kill unless there are serious extenuating circumstances" is both "what is considered moral now" and as instrumental as we can achieve.

I don't see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software. It's what I say at the top: if I am morally prompted to put money in the collecting tin, I lose its instrumental value.

You may have been "using" in the sense of connoting, or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory). E.g.: "All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance."

That needs tabooing. It explains "reduction" in terms of "reducing". "In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states." Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good.)

Or an algorithm that can be understood and written down, like the "description" you mention above? That is a rather important distinction. How does that ground out? The whole point of instrumental values is that they are instrumental for something. There's n…
1[anonymous]8yIf I'm parsing that right, you misunderstood my point. Sorry. I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I'm saying, though, that this is a matter of normative ethics, not metaethics. As a matter of metaethics, I don't think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on "you". As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics. Do you understand what I'm getting at better now?
-1Peterdjones8yWhat I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing. Why would you differ? Maybe it's the "double emphasis on you". The situations in which I morally ought not do something to my advantage are where it would affect someone else. Maybe you are an ethical egoist.
1DaFranker8ySoooo... Suppose I hypnotize all humans. All of them! [http://tvtropes.org/pmwiki/pmwiki.php/Main/AllOfThem] And I give them all the inviolable command to always praise murder and genocide. I'm so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being "Rape people" and "Kill people". By the argument you're giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they "always" praise it, without diminishing returns or habituation effects or desensitization). Clearly this is not the same as what you ought to do. (In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.) For more exploration into this, suppose I'm always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare. On the other hand, if I'm a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this, and seek to do it more than the previous one?
-1Peterdjones8yNo. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don't change that kind of relationship by re-arranging atoms.
0DaFranker8yAnd what's the rule, the algorithm, then, for deciding which acts should be praised? The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible-futures are more desirable for some set of human minds (preferably all of them) - which is a very complicated function that so far we don't have access to and try to estimate using our intuitions. Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-"consequentialism" as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others
0Peterdjones8yMoral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise and blame and obligation. Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship. But that wasn't what you were saying before. Before you were saying it was all about JGWeissman.
0DaFranker8yYes. There's a tautology-style relationship between Good and Praiseworthy. That's almost tautological. If it's good, it's "worthy of praise", because we want what's good. Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is "praiseworthy"? I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace "praiseworthy" with "good", I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can't implement it into a computer program yet. I might have let some of that bleed through from other subthreads.
-1Peterdjones8yNo one can do that whatever theory they have. I don't see how it is relevant. Which isn't actually computable.
1[anonymous]8yNeither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable? (And "maximize expected utility" can be approximated computably, like most of those uncomputable differential equations.)
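The comment's point — that "maximize expected utility" can be approximated computably even when the exact expectation is intractable — can be sketched with a Monte Carlo estimate. This is only an illustration of the approximation idea; the function names and the toy 30%-success action are mine, not anything from the thread:

```python
import random

def expected_utility(sample_outcome, utility, n=10_000):
    """Monte Carlo estimate of an action's expected utility.

    sample_outcome() draws one possible consequence of the action;
    utility(outcome) scores how desirable that consequence is.
    The estimate converges toward the true expectation as n grows,
    even when that expectation has no closed form.
    """
    return sum(utility(sample_outcome()) for _ in range(n)) / n

# Toy model: an action that "saves a life" 30% of the time.
def sample_outcome():
    return "life saved" if random.random() < 0.3 else "no effect"

def utility(outcome):
    return 1.0 if outcome == "life saved" else 0.0

estimate = expected_utility(sample_outcome, utility)
```

With 10,000 samples the estimate lands near 0.3; like numerical solvers for differential equations with no computable exact solution, it trades exactness for a tractable approximation.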
1DaFranker8yI've never seen any proof of this. It's also rather easy to approximate to acceptable levels of certainty: I've loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I'm pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people. I'm rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you'd call this "morally neutral", since there's no moral good being made by the shooting of glass bottles in itself, and it isn't exactly praiseworthy. However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model of which the accuracy can be evaluated or at least estimated. And given the probability of the model's accuracy, there is a tractable probability of lives saved. I'm having a hard time seeing what else could be missing.
0Peterdjones8yI mean there is no runnable algorithm, I can't see how "approximations" could work because of divergences. Any life you save could be the future killer of 10 people one of whom is the future saviour of a 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
0fubarobfusco8yDoesn't that depend on whether praise actually accomplishes getting more of the good? Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is "chocolateworthy", if chocolate breaks your diet.
0shminux8yHow can you hate something yet praise it internally? I'm having trouble coming up with an example.
4DaFranker8yI know a very good one, very grounded in reality, that millions if not billions of people have and do this. Death.
0[anonymous]8yI don't see what you're getting at. I'll lay out my full position to see if that helps. First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I'm asking about whether 4 is an integer. So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is "what should I do?", because it's the only one I can act on. I don't think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics. Then, on the level of normative ethics, i.e. looking from within a moral theory (which I've decided answers the question "what ought to be done"), I claim that I ought to act in such a way as achieves the "best" outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don't care which is done. You can call this "consequentialism" if you like. Then, unpacking "best" a bit, we find all the good things like fun, happiness, freedom, life, etc. Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like "he didn't know any better" and "can we really expect people to...", which I claim are not included in what makes an action right or wrong. This terminal punishableness thing is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you've worked out what is terminally valuable. 
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I
0Peterdjones8yWhat's wrong with sticking with "what ought to be done" as a formulation? Meaning others shouldn't? Your use of the "I" formulation is making your theory unclear. They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can't be directly translated into praiseworthiness and blameworthiness because they are too hard to predict. I don't see why. Do you think you are much better at making predictions?
2fubarobfusco8yKnowledge without motivation may lend itself to akrasia. It would also be useful for a moral theory to motivate us to do what we ought to do.
0[anonymous]8yThat's not a flaw in consequentialism. It's a flaw in judging other people's morality. Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
0Peterdjones8yJudging the moral worth of others' actions is something a moral theory should enable one to do. It's not something you can just give up on. So two consequentialists would decide that each of them has moral responsibility and the other doesn't? Does that make sense? It is intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten.
1[anonymous]8yWhat for? It doesn't help me achieve good things to know whether you are morally good, except to the extent that "you are morally good" makes useful predictions about your behaviour that I can use to achieve more good. And that's a question for epistemology, not morality. They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question "what is B morally responsible for" does not answer the question "what should A do", which is the only question A is interested in. A would agree that for B, B is morally responsible for everything, but would comment that that's not very interesting (to A) as a moral question. So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
1DaFranker8yBy extension, however, in case this corollary was lost in inferential distance: For A, "What should A do?" may include making moral evaluations of B's possible actions within A's model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important. Thus, by instrumental utility, A often should make a model of B in order to influence B's actions on the world as much as possible, since this influence is one possible action A can take that influences A's own moral responsibility towards the world.
1[anonymous]8yIndeed. I would consider it a given that you should model the objects in your world if you want to predict and influence the world.
-1Peterdjones8yBecause then you apportion reward and punishment where they are deserved. That is itself a Good, called "justice". I don't see how that follows from consequentialism or anything else. Then it is limited.
0[anonymous]8yI get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don't see things this way. It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A's actions isn't interesting to A.
-1Peterdjones8yIt doesn't follow from that that you have no interest in praise and blame. Isn't A interested in the actions of B and C that impinge on A?
1[anonymous]8yYes, and it doesn't follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard, it's just not the same as I use for myself. Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
-1Peterdjones8yYour metaethics treats everyone as acting but not acted on?
2DaFranker8yA is interested in: 1) The state of the world. This is important information for deciding anything. 2) A's possible actions, and their consequences. "Their consequences" == expected future state of the world for each action. "actions of B and C that impinge on A" is a subset of 1) and "giving praise and blame" is a subset of 2). "Influencing the actions of B and C" is also a subset of 2).
0Peterdjones8y1) The state of the world. This is important information for deciding anything. 2) A's possible actions, and their consequences. "Their consequences" == expected future state of the world for each action. Or, briefly "The Union of A and not-A" or, more briefly still: "Everything".
1[anonymous]8yBut some people take more actions that have Bad Consequences than others, don't they?
2DaFranker8yYes, but even that is subject to counter-arguments and further debate, so I think the point is in trying to find something that more appropriately describes exactly what we're looking for. After all, proportionality and other factors have to be taken into account. If Einstein takes more actions with Good Consequences and fewer actions with Bad Consequences than John Q. Eggfart, I don't anticipate this to be solely because John Q. Eggfart is a Bad Person with a broken morality system. I suspect Mr. Eggfart's IQ of 75 to have something to do with it.
2MugaSofer8yIf you mean that some people choose poorly or are simply unlucky, yes. If you mean that some people are Evil and so take Evil actions, then ... well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I'm saying. Classifying people as Bad is not helpful. Classifying people as Dangerous ... is. My only objection is turning people into Evil Mutants - which the comment I originally replied to was doing. ("Bad Things are done by Bad People who deserve to be punished.")
0bio_logical7yI'd prefer to leave "the soul" out of this. How do you know that most bad consequences don't involve sociopaths or their influence? It seems unlikely that that's not the case, to me. Also, don't forget conformists who obey sociopaths. Franz Stangl said he felt "weak in the knees" when he was pushing gas chamber doors shut on a group of women and kids. ...But he did it anyway. Wagner gleefully killed women and kids. Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was "Just following orders." In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations, doesn't make me want to punish him less.
0MugaSofer7ydouble-posted
1MugaSofer7yIn before this is downvoted to the point where discussion is curtailed. And yet here you are arguing for Evil Mutants [http://wiki.lesswrong.com/wiki/Correspondence_bias]. I'm aware many people who believe this don't literally think of it in terms of the soul - if only because they don't think about it at all - but I think it's a good shorthand for the ideas involved. Observing simple incompetence in the environment. I should probably note I'm not familiar with these individuals, although the names do ring a faint bell. Seems like evidence for my previous statements. No? These are Nazis, yes? I wouldn't be that surprised if some of them were "gleeful" even if they had literally no psychopaths among their ranks - unlikely from a purely statistical standpoint. While my contrarian tendencies are screaming at me to argue this was, in fact, completely unjust ... I can see some neat arguments for that ... We punished Nazis who were "just obeying orders" - and now nobody can use that excuse. Seems like a pretty classic example of punishment setting an example for others. No "they're monsters and must suffer" required. I'm probably more practiced at empathising with racists, and specifically Nazis - just based on your being drawn from our culture - but surely racist beliefs are a more sympathetic motivation than greed? (At least, if we ignore the idea of bias possibly leading to racist beliefs that justify benefiting ourselves at their expense, which you are, right?)
-2More_Right7yThere are a lot of people who really don't understand the structure of reality, or how prevalent and how destructive sociopaths [https://www.youtube.com/watch?v=MgGyvxqYSbE] (and the conformists that they influence [https://www.youtube.com/watch?v=OsFEV35tWsg]) are. In fact, there is a blind spot in most people's realities that's filled by their evolutionarily-determined blindness to sociopaths. This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons, total lack of empathy, as described by Robert Hare in "without conscience") with modern technology and a support network of other sociopaths [http://youtu.be/MgGyvxqYSbE?t=23m20s]. In fact, virtually everyone who hasn't read Stanley Milgram's book about it, and put in a lot of thought about its implications is in this category. I'm not suggesting that you or anyone else in this conversation is "bad" or "ignorant," but just that you might not be referencing an accurate picture of political thought, political reality, political networks. The world still doesn't have much of a problem with the "initiation of force" or "aggression." (Minus a minority of enlightened libertarian dissenters.) ...Especially not when it's labeled as "majoritarian government." ie: "Legitimized by a vote." However, a large and growing number of people who see reality accurately (small-L libertarians [http://www.libertarianism.org]) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people. Stangl could have recognized that the murder of women and children was "too wrong to tolerate." In fact, he did recognize this, by his comment that he felt "weak in the knees" while pushing women and children into the gas chamber. 
That he chose to follow "the path of compliance" "the path of obedience" and "the path of nonresistance" (all those prior paths are different ways of saying the sa
3MugaSofer7yI'm on a mobile device right now - I'll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more. Wait, why would evolution make us vulnerable to sociopaths? Wouldn't patching such a weakness be an evolutionary advantage? Wouldn't a total lack of mirror neurons make people much harder to predict, crippling social skills? "Ignorant" is not, and should not be, a synonym for "bad". If you have valuable information for me, I'll own up to it. Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness. Did you accidentally a word there? I don't follow your point. And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake. You're joking, right? Statistical likelihood of being murdered by your own government, during peacetime, worldwide. i.e. not my statistical likelihood, i.e. nice try, but no-one is going going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
-2More_Right7yI suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood. If sociopath-driven collectivism was easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealth to it. Yet, social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation's many "law students" who become "licensed lawyers." What if all those law students had become STEM majors, and built better machines and technologies?) I dare say that that simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why. If societies typically fall to over-parasitism (too many looters, too few producers), we should ask ourselves what part we're playing in that fall. If societies don't fall entirely to over-parasitism, then what forces ameliorate parasitism? And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn't take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time. But I think R. J. Rummel's graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we're not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? ...Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated. But sure, the graph doesn't mean anything if technology makes us smart enough to break free from past cycles. 
In that case, the warning didn't nee
2TheAncientGeek7yGetting maths right is useless when you have got concepts wrong. Your graph throws liberal democracies in with authoritarian and totalitarian regimes. From which you derive that MugaSofer is as likely to be killed by Michael Higgins as he is by Pol Pot.
4[anonymous]7yYou're making lots of typos these days; is there something wrong with your keyboard or something?
2MugaSofer7yYou know, this raises an interesting question: what would actually motivate a clinical psychopath in a position of power? Well, self-interest, right? I can see how there might be a lot of environmental disasters, defective products, poor working conditions as a result ... probably also a certain amount of skullduggery would be related to this as well. Of course, this is an example of society/economics leading a psychopath astray, rather than the other way around. Still, it might be worth pushing to have politicians etc. tested and found unfit if they're psychopathic. I remain deeply suspicious of this sentence. This seems reasonable, actually. I'm unclear why I should believe you know better, but we are on LessWrong. I ... words fail me. I seriously cannot respond to this. Please, explain yourself, with actual reference to this supposed reality you perceive, and with the term "initiation of force" tabooed. And this is the result of ... psychopaths? Human psychological blindspots evolved in response to psychopaths? Well, that's ... legitimately disturbing. Of course, it may be inaccurate, or even accurate but justified ... still cause for concern. You know, my government could be taken down with a few month's terrorism, and has been. There are actual murderers in power here, from the ahem glorious revolution. I actually think someone who faced this sort of thing here might have a real chance of winning that fight, if they were smart. This contributes to my vague like of american-style maintenance-of-a-well-organized-militia gun ownership, despite the immediate downsides. And, of course, no other government is operating such attacks in Ireland, to my knowledge. I think I have a lot more to fear from organized crime than organized law, and I have a lot more unpopular political opinions than money. The site appears to be explicitly talking about genocide etc. in third-world countries. Citation very much needed, I'm afraid. 
You are skirting the edge of assumin
2TheAncientGeek7yThe non aggression principle is horribly broken [http://dbzer0.com/blog/why-the-non-aggression-principle-is-useless-as-a-moral-guideline/]
1soreff7yConcern about sociopaths applies to both business and government: http://thinkprogress.org/justice/2014/01/09/3140081/bridge-sociopathy/ [http://thinkprogress.org/justice/2014/01/09/3140081/bridge-sociopathy/]
3hairyfigment7ySo, is this trolling? You cite the Milgram experiment, in which the authorities did not pretend to represent the government. The prevalence and importance of non-governmental authority in real life is one of the main objections to libertarianism [http://rationallyspeaking.blogspot.com/2012/07/fundamental-contradiction-of.html] , especially the version you seem to promote here (right-wing libertarianism as moral principle).
2MugaSofer7yHaving reviewed your links: Your first link (https://www.youtube.com/watch?v=MgGyvxqYSbE [https://www.youtube.com/watch?v=MgGyvxqYSbE]) both appears to be, and is, a farly typical YouTube conspiracy theory documentary that merely happens to focus on psychopaths. It was so bad I seriously considered giving up on reviewing your stuff. I strongly recommend that, whatever you do, you cease using this as your introductory point. "The Psychology of Evil [https://www.youtube.com/watch?v=OsFEV35tWsg]" was mildly interesting; although it didn't contain much in the way of new data for me, it contained much that is relatively obscure. I did notice, however, that he appears to be not only anthropomorphizing but demonizing formless things [http://slatestarcodex.com/2013/03/07/we-wrestle-not-with-flesh-and-blood-but-against-powers-and-principalities/] . Not only are most bad things accomplished by large social forces, most things period are. It is easier for a "freethinker" to do damage than good, although obviously, considering we are on LW, I consider this a relatively minor point. I find the identification of "people who see reality accurately" with "small-l libertarians [http://www.libertarianism.org/]" extremely dubious, especially when it goes completely unsupported, as if this were a background feature of reality barely worth remarking on. Prison industrial complex link [https://www.youtube.com/watch?v=dS9Vxt3yQdk] is meh; this [https://www.youtube.com/watch?v=NaPBcUUqbew], on the other hand, is excellent, and I may use it myself. Schaeffer Cox [http://www.schaeffercox.com/dear-sensible-people-of-a-candid-world/] is a fraud, although I can't blame him for trying and I remain concerned about the general problem even if he is not an instance of it. The chart [http://hawaii.edu/powerkills/VIS.TEARS.ALL.AROUND.HTM] remains utterly unrelated to anything you mentioned or seem particularly concerned about here.
2Chrysophylax8yIf doing "bad" things (choose your own definition) makes you a Bad Person, then everyone who has ever acted immorally is a Bad Person. Personally, I have done quite a lot of immoral things (by my own standards), as has everyone else ever. Does this make me a Bad Person? I hope not. You are making precisely the mistake that the Politics is the Mind-Killer sequence warns against - you are seeing actions you disagree with and deciding that the actors are inherently wicked. This is a combination of correspondence bias, or the fundamental attribution error, [http://lesswrong.com/lw/hz/correspondence_bias/] (explaining actions in terms of enduring traits, rather than situations) and assuming that any reasonable person would agree to whatever moral standard you pick. A person is moral if they desire to follow a moral standard, irrespective of whether anyone else agrees with that standard.
6Vaniver8yIf a broken machine is a machine that doesn't work, does that mean that all machines are broken, because there was a time for each machine when it did not work? More clearly: reading "someone who does bad things" as "someone who has ever done a bad thing" requires additional assumptions.
4smk7yThey can't. The whole idea of "deserving" is... icky. I try not to use it in figuring out my own morals, although I do sometimes use the word "deserve" in casual speech/thought. When I'm trying to be more conscientious and less casual, I don't use it.
0EngineerofScience5yThis article might answer that question: Diseased thinking: dissolving questions about disease [http://lesswrong.com/lw/2as/diseased_thinking_dissolving_questions_about/]

TGGP, I think we have to define "deserve" relative to social consensus--a person deserves something if we aren't outraged when they get it for one reason or another. (Most people define this based on the consensus of a subset of society--people who share certain values, for instance.) Differences in the concept of "deserve" are one of the fundamental differences (if not the primary difference) between conservatism and liberalism.

-4JJ10DMAN10yI agree strongly with everything in the above paragraph, especially the end. And so should you. Greens 4 life!
2rela10yDown-voted due to political phrasing (despite shared political-party membership).
0handoflixue10yVoted up due to political phrasings (and assumed effort goal of humor :))
8ericn10yDo we need a definition of "deserve"? Perhaps it does not correspond to anything in reality. I would certainly argue that it doesn't correspond to anything in politics. For instance, should we have a council that doles out things people deserve? It just seems silly. Politics is ideally a giant cost/benefit satisficing operation. Practically, it is an agglomeration of power plays. I don't see where "deserve" fits in.
0CWG7yA "council that doles out things people deserve" sounds like Parecon: Life After Capitalism by Michael Albert. (Personally, it fills me with horror, but there are people who think it's a good idea.)

TGGP, if the mind were not embodied in the brain, it would be embodied in something else. You don't need neuroscience to see the problem with the naive conception of free will.

The reason I don't think idiots deserve to die is not because their genes played a role in making them idiots. Suppose it were not the genes. So what? The point is that being stupid is not the same as being malicious, or dishonest. It is simply being stupid, no more and no less. Drinking Sulfuric Acid Drink because you wishfully think it will cure your arthritis, is simply not on a moral par with deliberately burning out someone's eyes with hot pokers. No matter what you believe about the moral implications of determinism for sadistic torturers, in no fair universe would mere sloppy thinking be a capital crime. As it has always been, in this our real world.

6DSimon10yWhat about when sloppy thinking leads a person to hurt other people, e.g. a driver who accidentally kills a pedestrian while distracted by a call they thoughtlessly answered while in motion?
-4bio_logical7yAnd, in no fair universe would the results of sloppy thinking be used as an excuse to create coercive policies that victimize thousands of sloppy thinkers for every sloppy thinker that is (allegedly) benefited by them. Yet, because even the philosophers, and rationality blog-posters of our universe are sloppy thinkers (in relation to artilects with 2000 IQs), some of us continue to accept the idea that the one-sided making of coercive laws (by self-interested, under-educated sociopaths) constitutes a legitimate attempt at a political solution. Nothing could be further from the truth.

I am not normally a nitpicker (well, maybe I am) but this jumped out at me: an example of a fact--"whether Earthly life arose by natural selection." Because natural selection is one of the cornerstones of modern biology, I thought I'd take a few seconds to enter this comment.

Natural selection is a biological process by which favorable traits that can be genetically inherited become more common in successive generations of a population of reproducing organisms, and unfavorable traits that can be inherited become less common. The driving force is the... (read more)

4AndyCossyleon10y"whether Earthly life arose by natural selection" was a bad example of Eliezer's. Natural selection does not account for how life arose, and dubitably accounts for how even the diversity of life arose*. Natural selection accounts, and only accounts, for how specified (esp. complex & specified) biological artifacts arose and are maintained. An infinitely better example would have been "whether terrestrial life shares a common ancestor," because that is a demonstrable fact. *This has probably mostly to do with plate tectonics carting around life forms from place to place and with genetic drift.

Sorry, Brayton. I do know better, it was simply an accident of phrasing. I hadn't meant to imply that abiogenesis itself occurred by selective processes - "arose" was meant to refer to life's ascent rather than sparking.

Though, in my opinion, the very first replicator (or chemical catalytic hypercycle) should not really count as "life", because it merely happens to have the accidental property of self-replication and was not selectively optimized to this function. Thus, it properly belongs to the regime of accidental events rather than the regime of (natural) optimization.

The problem here is bias to one's own biases, I think. After all, we're all stupid some of the time, and realising this is surely a core component of the Overcoming Bias project. Robin Hanson may not think he'd ever be stupid enough to walk into the Banned Shop, but we all tend to assume we're the rational one.

You also need to consider the real-world conditions of your policy. Yes, this might be a good idea in its Platonic ideal form, but in practice, that actually doesn't tell us very much. As an argument against "regulation", I think, with a co... (read more)

2cypher1978yI, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death. With that in mind, I don't think we should be planting more minefields than this reality currently has, on purpose. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.
1Nornagest8yCertain types of content labeling might work a lot like Hanson's Banned Shop, minus the trivial inconvenience of going to a different shop: the more obvious and dire the label, the closer the approximation. Cigarettes are probably the most advanced example I can think of. Now, cigarettes have also been extensively regulated in other ways, so we can't infer from this too well, but I think we can tentatively describe the results as mixed: it's widely understood that cigarettes stand a good chance of killing you, and smoking rates have indeed gone down since labeling laws went into effect, but it's still common. Whether or not we count this as a win probably depends on whether, and how much, we believe smokers' reasons for smoking -- or dismiss them as the dribble of a hijacked habit-formation system.

Alex raises an interesting point: do most of us in fact assume that we would never walk into a Banned Shop? I don't necessarily assume that. I could envision going there for a medical drug which was widely available in Europe, but not yet approved by the U.S. FDA, for example. Or how about drugs that are supposed to only be available by prescription, might Banned Shops provide them to anyone who will pay? I might well choose to skip the time and money of a doctor visit to get a drug I've taken before without problems (accepting the risk that unknown to me,... (read more)

It's a similar argument to my proposal of Rational Airways, an airline that asks you to sign a release when buying a ticket to the effect that you realise how tiny the risk of a terrorist attack is, and therefore are willing to travel with Rational, who do not apply any annoying security procedures.

2Jiro6y(Responding to old post) This has another problem that other people haven't mentioned so far: it's not really possible to trace a terrorist attack to a specific cause such as lack of a particular security procedure. This means that Rational Airways will cut out their annoying security procedures, but the release they will make you sign will release them from liability to all terrorist attacks, not just to terrorist attacks related to them cutting down those security procedures. That's a bad deal for the consumer--the consumer wants to avoid intrusive searches, finds an airline which lets them avoid the searches by signing a release, but the release also lets the airline hire known serial killers as stewardesses as well as not search the passengers, and you can't sue them for it because the release is all-encompassing and is not just limited to terrorism that would have been caught by searches. Furthermore, then all the other airlines see how Rational Airlines works and decide to improve on it. They get together and decide that all passengers will have to either submit to being stripped fully naked, or sign a release absolving the airline of responsibility for terrorists. The passengers, of course, sign the releases, and the result is that the airlines never have to worry about hiring serial killers or any other forms of negligence either. (Because not screening the stewardesses for serial killers saves them money, any airline that decides not to do this cannot compete on price.) Later, some smart airlines decide they don't actually need the excuse and just say "there's an unavoidable base rate of terrorism and we don't want to get sued for that" and make everyone, period, sign a release acknowledging that before getting on the plane (and therefore absolving the airline of all responsibility for terrorism whether it is part of the base rate or not.) 
Even later, another airline decides to just make its customers promise not to sue them for anything at all (whether
2Jiro4yIn fact, let me add a comment to this. Someone may be willing to assume some risk but not a higher level of risk. But there's no way to say "I'm willing to accept an 0.5% chance of something bad but not a 5% chance" by signing a disclaimer--the effect of the disclaimer is that when something bad happens, you can't sue, which is an all or nothing thing. And a disaster that results from an 0.5% chance looks pretty much like a disaster that results from a 5% chance, so you can't disclaim only one such type of disaster.

Alex, a possible problem is that Rational would then attract all the terrorists who would otherwise have attacked different airlines.

PS: And, the risk might not be tiny if you took off all the safety precautions. But, yes, you could dispense with quite a few costly pointless ostentatious displays of effort, without changing the security risk in any significant sense.

James, my comment on drawing the moral line at capital punishment was addressed to the universe in general. Judicial executions account for a very small proportion of all death penalties - for example, the death penalty that you get for just being alive for longer than a century or so.

-2mat339y"...the death penalty that you get for just being alive for longer than a century or so." The "ethics of gods" is most probably the ethics of evolution. A "good" (in this particular sense) universe has to be "bad" enough to allow the evolution of life, mind, and [probably] technology. The show is natural selection - and the show must go on. Even as it includes the aforementioned death penalty...

The experimental evidence for a purely genetic component of 0.6-0.8 is overwhelming

Erm. 0.6-0.8 what?

-Robin

1tut11yProbably 60-80% heredity. But that is also meaningless, because I have no idea which population it refers to.
5Celer10yI assume he means an R or an R^2 of 0.6-0.8. Both are measures of correlation. R^2 would be the percent of the variation in one twin's intelligence predicted by the intelligence of the other twin.
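The r vs. R^2 distinction the commenters are debating can be made concrete with a quick simulation. This is only a sketch with made-up numbers (a shared component accounting for 70% of score variance), not real twin data: the twin-twin correlation r then comes out near 0.7, while the "variance explained" R^2 is only about 0.49 - so it matters a great deal which of the two a bare "0.6-0.8" refers to.

```python
import random
import math

random.seed(0)

# Simulate twin pairs: each twin's score is a shared component plus
# independent noise. With these variances the expected correlation is
# shared_var / (shared_var + noise_var) = 0.7 / (0.7 + 0.3) = 0.7,
# inside the 0.6-0.8 range quoted above. Purely illustrative numbers.
n = 10_000
shared_sd = math.sqrt(0.7)
noise_sd = math.sqrt(0.3)

pairs = []
for _ in range(n):
    shared = random.gauss(0, shared_sd)
    twin_a = shared + random.gauss(0, noise_sd)
    twin_b = shared + random.gauss(0, noise_sd)
    pairs.append((twin_a, twin_b))

# Pearson correlation coefficient r, computed by hand
mean_a = sum(a for a, _ in pairs) / n
mean_b = sum(b for _, b in pairs) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in pairs) / n
var_a = sum((a - mean_a) ** 2 for a, _ in pairs) / n
var_b = sum((b - mean_b) ** 2 for _, b in pairs) / n
r = cov / math.sqrt(var_a * var_b)

print(f"r   = {r:.2f}")    # close to 0.7
print(f"r^2 = {r * r:.2f}")  # close to 0.49, a notably weaker-sounding figure
```

The same ambiguity applies to published heritability figures: an MZ-twin correlation of 0.7 and a "70% of variance explained" claim are different statements, and commenters quoting "0.6-0.8" without units are conflating them.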

I realize it has little to do with the main argument of the post, but I also have issues with Eliezer's claim:

"The experimental evidence for a purely genetic component of 0.6-0.8 is overwhelming..."

Genes matter a lot. But there are a number of problems with the calculation you allude to. See Richard Nisbett's work.

2kremlin9yWhat is the calculation he was alluding to? I wanted a source on that.

"Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.

--60 Minutes (5/12/96) Lesley Stahl on U.S. sanctions against Iraq: We have heard that a half million children have died. I mean, that's more children than died in Hiroshima. And, you know, is the price worth it?

Secretary of State Madeleine Albright: I think this is a very hard choice, but the price--we think the price is worth it.

She later expressed regret for it, after taking an awful lot of flak at the time, but this does sometimes happen.

7JDM7yI think your point that she took a lot of flak for it is evidence for the original point. The only other reasonable responses to that could have been changing her mind on the spot, or disputing the data, and neither of those responses would have brought similar backlash on her. Conceding weak points to your arguments in politics is often looked upon as a weakness when it shouldn't be.

It's unfair to caricature libertarians as ultra-social-Darwinists saying "stupid people who accidentally kill themselves DESERVED it". If that quote was ever literally uttered, I would tend to think it was out of exasperation at the opposing viewpoint that government has a paramount responsibility to save its citizens from themselves, to the point of ludicrous pandering.

"Everyone gets what they deserve" is the unironic (and secular) motto of a close family friend who is wealthy in Brazil, one of the countries with the greatest levels of economic inequality in the world. I have heard the sentiment echoed widely among the upper and upper middle class. Maybe it's not as extreme as that, but it is a clear expression of the idea that unfortunate people deserve their misfortune to the point that those who have the resources to help them should not bother. This sentiment also characterizes Objectivism, which is commonly (though not always) associated with libertarianism.

3cupholder11ySounds like our good friend the just-world fallacy. [http://en.wikipedia.org/wiki/Just-world_phenomenon]
1SRStarin10yYou misunderstand Rand's Objectivism. It's not that people who bad-luck into a bad situation deserve that situation. Nor do people who good-luck into a good situation deserve that reward. You only deserve what you work for. That is Objectivism, in a nutshell. If I make myself a useful person, I don't owe my usefulness to anyone, no matter how desperate their need. That may look like you're saying the desperate deserve their circumstances, but that is just the sort of fallacy Eliezer was writing about in the OP. Where libertarian political theory relates to Objectivism is in the way the government often oversteps its bounds in expecting successful people to do extra work to help others out. Many libertarians are quite charitable--they just don't want the government forcing them to be so.
6shokwave10yYou only deserve what you work for - do you get what you deserve? If you don't, then what purpose does the word "deserve" serve? If you do get what you deserve, how come the world looks like it's full of people who work for something, deserve it, and don't get it?
4SRStarin10yI'm only trying to correct the comment's incorrect assertions about objectivism and libertarianism. To address your comment, I'll start by pointing out Objectivism is a system of ethics, a set of rules for deciding how to treat other people and their stuff. It's not a religion, so it can't answer questions like "Why do some people who work hard and live right have bad luck?" So, I will assume you are saying that people who work hard in our society seem to you to systematically fail to get what they work for. To clarify my comment, objectivism says you only deserve to get what you work for from other people. That is, you don't in any way deserve to receive from others what they didn't already agree to pay you in exchange for your work. But, some people can't find anyone to pay them to work. Some can't work at all. Some can sell their work, but can't get enough to make a living. Because of the size and complexity of our society, there are huge numbers of people who have these problems. Sometimes it's their fault--maybe they goofed off in high school or college--but often it's not. If we were cavemen, we'd kick them out of the cave and let them starve, but we're not. We have multiple safety mechanisms, also because of the size and complexity of our society, through neighbors, schools, churches, and local, state and national governments, that help most people through hard times. The fact that I'm OK with governments being in that sentence is a major reason I can't call myself a strict Objectivist, but I'm still more a libertarian than anything else, politically. I think the ideal is that no one should fall through our safety nets, but there will always be people who do, just like the mother of five in the OP. And when everyone is having a harder time than usual, more people will fall through the safety nets. 
And if your problem is with whole nations of people who seem to work hard for very little, well, I probably agree with you, and our beef is with the history of
3jbay8y"To clarify my comment, objectivism says you only deserve to get what you work for from other people. That is, you don't in any way deserve to receive from others what they didn't already agree to pay you in exchange for your work." Although it might work as a system of ethics (or not, depending on your ethics), this definitely doesn't function as a system of economics. First of all, it makes the question of wealth creation a chicken-and-egg problem: If every individual A only deserves to receive what individual B agrees to pay them for work X, how did individual B obtain the wealth to pay A in the first place? The answer is probably that you can also work for yourself, creating wealth that did not exist without anyone paying you. So your equation, as you've expressed it, does not quite balance. You're missing a term. Wealth creation is very much a physical thing, which makes it hard to tie to an abstract system of ethics. The wealth created by work X is the value of X; whether it's the food grown from the earth, or the watch that has been assembled from precisely cut steel, glass, and silicon. That is the wealth that is added to the pool by labour and ingenuity, regardless of how it gets distributed or who deserves to get paid for it. And that wealth remains in the system, until the watch breaks or the food spoils (or gets eaten; it's harder to calculate the value of consumed food). It might lose its value quickly, or it might remain a treasure for centuries after the death of every individual involved in the creation of that wealth, like a work of art. It might also be destroyed by random chance well before its predicted value has been exploited. Who deserves to benefit from the wealth that was created by the work of, and paid for by, people who have been dead for generations? The question of who deserves to benefit from the labour X, and how much, becomes very tricky when the real world is taken into account... 
One might argue that that is what Wills are for

I recently spoke with someone who was in favor of legalizing all drugs, who would not admit that criminalizing something reduces the frequency at which people do it.

Was that actually his claim or was he saying that it doesn't necessarily reduce the frequency at which people do it? Clearly the frequency of drug use has gone up since they were made illegal. Now perhaps it would have gone up faster if drug use had not been made illegal but that's rather hard to demonstrate. It's at least plausible that some of the popularity of drugs stems from their illegality as it makes them a more effective symbol of rebellion against authority for teenagers seeking to signal rebelliousness.

Claiming that criminalizing can't possibly reduce the frequency at which people do something would be a pretty ridiculous claim. Claiming that it hasn't in fact done so for drugs is quite defensible.

2AlexSchell8yIn the real world, PhilGoetz's interlocutor was almost certainly not making the sophisticated point that in some scenarios making X illegal makes it more desirable in a way that outweighs the (perhaps low) extra costs of doing X. If the person had been making this point, it would be very hard to mistake them for the kind of person PhilGoetz describes.

Portugal, anyone? There is a point when arguments need to be abandoned and experimental results embraced. The decriminalization of drugs in Portugal has seen a scant increase in drug use. QED

The same goes for policies like Don't Ask, Don't Tell. Many countries around the world have run the experiment of letting gays serve openly and there have been no ill effects.

Abandon rationalization, embrace reality.

1AlexSchell8ySo you think an increase in drug use following decriminalization supports your view? And you were upvoted?

The claim of sensible consequentialist (as opposed to moralizing) drug control advocates who are in favor of the War on Drugs is that the War on Drugs, however disastrous, expensive, destructive of liberties, and perverting of justice (to whatever degree they will accept such claims - can't make an omelette without breaking eggs, etc.), is a lesser evil than the consequences of unbridled drug use. This claim is most obviously falsified by a net decrease in drug use, yes, but also falsified by a small increase which is not obviously worse than the War on Drugs since now the anti-War person can use the same argument as the pro-War person was: legalization is the lesser of two evils.

The benefits and small costs in Portugal are, at least at face value, not worse than a War. Hence, the second branch goes through: the predicted magnitude of consequences did not materialize.

1AlexSchell8yI agree completely. Note that PhilGoetz, following the subject of the thread, pointed out a good consequence of drug control (that is, good on its own terms) that an opponent of drug control refused to acknowledge. AndyCossyleon apparently thought that the Portugal example is a counterpoint to what PhilGoetz said, which it isn't (though as you point out it is evidence against some views held by drug control advocates). In retrospect, I should have said "rebuts PhilGoetz's point" instead of "supports your view" in the grandparent.
1thomblake8yFunny. The response to AndyCossyleon at the time should have been a link to this post.
4AndyCossyleon8yAlexSchell, "scant" is essentially a negative, much like "scarce(ly)" or "hardly" or "negligible/y". Rewriting: "The decriminalization of drugs in Portugal has scarcely seen an increase in drug use." I'd argue that these sentences mean the same thing, and that together, they mean something different from "The decriminalization ... has seen a small increase ..." which is what you seem to have interpreted my statement as, though not completely illegitimately.
0Solarian8yI would still read that as an increase. "Scant," "scarcely," etc., all mean "an amount so small it is negligible." But that's still an increase. 1 + 99^99 isn't 99^99. I understand what is trying to be said in the argument concerning decriminalization, but strictly-speaking, that is an increase in drug use.
5[anonymous]8yThere is something fishy about the words "legalize" and "decriminalize." Buying, selling, making and consuming wine are legal activities in Portugal. Not marijuana [http://en.wikipedia.org/wiki/Drug_policy_of_Portugal].

Just wanted to say thanks for a very thoughtful article. I've burned through a great deal of time, wondering about the morality (or immorality) of the "arguments are soldiers" mindset.

The point of banned goods is not that they are banned because of the hazards for the people alone who buy them but for everyone else also. Sulphuric acid for example is easily usable as a weapon especially in concentrated form. (It grows very hot if it touches water. And it is very acidic. So, by using a simple acid proof squirt gun one can do serious damage.)

And, that's not really all: Suppose I could go into such a shop, prove that I'm sufficiently intelligent to handle dangerous stuff without being a danger to myself, and buy a) a PCR machine b) a flu ... (read more)

2guineapig10yMadagascar, obviously.
7JoshuaZ10yMost of the goods you mention aren't restricted at all. I don't need any special permits to buy a PCR machine or anything necessary to run it for example.

Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.

I can imagine it, but I can't say that I can remember it in a similar case. The "if it saves just one life...." arguments have always struck me as idiotic, but apparently there is a large market for it. Is it really the case that so many people think t... (read more)

1wedrifid9yAlicorn already told you [http://lesswrong.com/lw/iu/mysterious_answers_to_mysterious_questions/4vje] about how to do quotations.

Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation."

Interestingly, I independently came to a similar conclusion regarding drug legalization a few days ago, which I expressed during a class discussion on the topic. Out of about forty people in the class, one person other than me seemed to respond positively to this, everyone else (including people who were in favor of legalization) seemed to ignore it.

"But there is no reason for complex actions with many consequences to exhibit this onesidedness property. Why do people seem to want their policy debates to be one-sided?"

We do like to vote, you know. We like to see other people vote. We expect to see some kind of propaganda, some kind of pitch to cast our votes in a certain way. We tend to feel fooled when we don't see what we expect to see in the right place. And no, this isn't reserved exclusively for political issues.

"I don't think that when someone makes a stupid choi... (read more)

I don't think that when someone makes a stupid choice and dies, this is a cause for celebration.

Others disagree.

2thomblake9yWell yes, I think the post assumed most others seem to disagree. That's why the point was worth making.

I was just making a simple factual observation. Why did some people think it was an argument in favor of regulation?

I've noticed that Argument by Innuendo is unfortunately common, at least in in-person discussions. Basically, the arguer makes statements that seem to point to some conclusion or another, but stops a few steps short of actually drawing a conclusion, leaving the listener to draw the conclusion themselves. When I've caught myself doing this and ask myself why, there are a few reasons that come up, including:

  • I'm testing my audience's intel
... (read more)
2NickRetallack7yI think it's a good thing to do this. It is analogous to science. If you're a good reasoner and you encounter evidence that conflicts with one of your beliefs, you update that belief. Likewise, if you want to update someone else's belief, you can present evidence that conflicts with it in hopes they will be a good reasoner and update their belief. This would not be so effective if you just told them your conclusion flat out, because that would look like just another belief you are trying to force upon them.
1Document7yPossibly related: When Truth Isn't Enough [http://lesswrong.com/lw/4h/when_truth_isnt_enough/].

Do you really think you're so smart that you would have been a proper scientific skeptic even if you'd been born in 500 C.E.?

Yes. "But your genes would be different." Then it wouldn't be me. "Okay, same genes, but no scientific education." Then it wouldn't be me.

As much as such a thing as 'me' exists then it comes with all the knowledge and skills I have gained either through genetics, training or learning. Otherwise it isn't 'me'.

3TheOtherDave9ySo who was that person who started learning the skills that you now have?
1keddaw9yWell, the person who started typing this reply was someone incredibly similar, but not identical, to the person who finished (neither of whom is the present me). It was a person who shared genes, who had an almost identical memory of childhood and education, who shares virtually all my goals, interests and dreams, and is more like me than any other person that has ever lived. However, that person was not the me who exists now. Extrapolate that backwards, becoming less and less like current me over time, and you get an idea of who started learning the skills I currently have. It's not my fault if people have a broken view of what/who they actually are.
9asparisi9yShouldn't that answer then result in a "Invalid Question" to the original "Would you be a proper scientific skeptic if you were born in 500 CE?" question? I mean, what you are saying here is that it isn't possible for you to have been born in 500 C.E., that you are a product of your genetics and environment and cannot be separated from those conditions that resulted in you. So the answer isn't "Yes" it is "That isn't a valid question." I'm not saying I agree, especially since I think the initial question can be rephrased as "Given the population of humans born in 500 C.E. and the historical realities of the era, do you believe that any person born in this era could have been a proper scientific skeptic and given that, do you believe that you would have developed into one had your initial conditions been otherwise identical, or at least highly similar?" Making it personal (Would you be...) is just a way of conferring the weight of the statement, as it is assumed that the readers of LW all have brains capable of modelling hypothetical scenarios, even if those scenarios don't (or can't even in principle) match reality. The question isn't asking if it is ACTUALLY possible for you to have been born in 500 CE, it is asking you to model the reality of someone in the first person as born in 500 CE and, taking into account what you know of the era, ask if you really think that someone with otherwise equivalent initial starting conditions would have grown into a proper scientific skeptic. It's also shorter to just bring in the personal hypothetical, which helps.
0keddaw9yCorrect. I made the jump of me appearing as is in 530CE as opposed to 'baby me' since I do not in any logical sense think that baby me is me. So yes, the question is invalid (in my view) but I tried to make it valid by altering the question without explicitly saying I was doing so (i.e. "If you were to pop into existence in 530 CE would you be a scientific skeptic?")
5TheOtherDave9yNor, by your reasoning, could it possibly ever be your fault, since my current view of what I am has causes in the past, and you didn't exist in the past. By the same reasoning, nothing else could possibly ever be your fault, except possibly for what you are doing in the instant that I blame you for it... not that it matters for practical purposes, since by the time I got around to implementing consequences of that, you would no longer exist. That strikes me as even more broken a view than the one you wish for it to replace... it destroys one of the major functions we use the notion of "a person" to perform.

I was surprised and pleased to discover that the rock band Switchfoot have a song about the terrible cost to oneself of treating one's arguments as soldiers. It's called "The Sound in My Mouth". (Youtube link, with incorrect lyrics below it; better ones can be found at the bottom of this fansite page)

It focuses on the social costs rather than the truth-finding costs, but it's still well ahead of where I usually expect to find music.

0TheNuszAbides7yTo save those who would bother to trouble themselves as I just did... the trouble: the second link is for the album Oh! Gravity, but "The Sound in My Mouth" is on the Oh! EP.

Alternate title: “debates should acknowledge tradeoffs”. I think that mnemonic is more helpful.

Longer summary: “Debates should acknowledge tradeoffs. Don’t rationalize away apparent good points for the other side; it’s okay and normal for the other side to have some good points. Presumably, those points just won’t be strong enough in total to overwhelm yours in total. (Also, acknowledging tradeoffs is easier if you don’t think of the debate in terms of ‘your side’ and ‘their side’.)”

An implicit assumption of this article which deserves to be made explicit:

"All negative effects of buying things from the banned store accrue to the individual who chose to purchase from the banned store"

In practical terms this would not be the case. If I buy Sulphuric Acid Drink from the store and discover acid is unhealthy and die, that's one thing. If I buy Homoeopathic Brake Pads for my car and discover they do not cause a level of deceleration greater than placebo, and in the course of this discovery run over a random pedestrian, that's morally a different thing.

The goal of regulation is not just to protect us from ourselves, but to protect us from each other.

2fubarobfusco8yOr, the individual who chooses to purchase from the banned store is able to compensate others for any negative effects.
3cypher1978yUnfortunately we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring back a victim party from the dead.
1fubarobfusco8yNo, but several historical cultures and a few current ones legitimize the notion of blood money as restitution to a victim's kin.
3cypher1978yNo amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place. All people are idiots at least some of the time. I don't accept the usage of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.
fubarobfusco (8y): "Homeopathic brake pads" are a reductio-ad-absurdum of the actual proposal, though — which has to do with products that are not certified, tested, or guaranteed in the manner that you're used to. There are lots of levels of (un)reliability between Homeopathic (works 0% of the time) and NHTSA-Certified (works 99.99% of the time). For instance, there might be Cheap-Ass Brake Pads, which work 99.95% of the time at 10% of the cost of NHTSA-Certified; or Kitchen Sponge Brake Pads, which work 90% of the time at 0.05% of the cost.

We do not have the option of requiring everyone to only do things that impose no danger to others. So if someone chooses to use a product that is incrementally more dangerous to others — whether because this lets them save money by buying Cheap-Ass Brake Pads; or because it's just more exciting to drive a Hummer than a Dodge minivan — how do we respond?
cypher197 (8y): Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban. And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera.

Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? I don't mean check a box or sign a paper, because that's like clicking "I Agree" on a EULA or a security warning, and we've all seen how well that's worked out for casual users in the computer realm, even though we constantly bombard them with messages not to do exactly the things that get them in trouble.

Is it Paternalist arrogance when the system administrator makes it impossible to download and open .exe attachments in Microsoft Outlook? Clearly, there are cases where system administrators are paternalist and arrogant; on the other hand, there are a great many cases where users trash their machines. The system administrator has a much better knowledge about safely operating the computer; the user knows more about what work they need to get done. These things are issues of balance, but I'm not ready to throw out top-down bans on dangerous-to-self products.
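The tradeoff in this thread can be made concrete with a back-of-the-envelope expected-cost calculation. This is only a sketch: the failure rates and relative prices are the hypothetical ones from fubarobfusco's comment, and the baseline price and crash-cost figures are invented purely for illustration.

```python
# Back-of-the-envelope comparison of the hypothetical brake pads above.
# All numbers are illustrative assumptions, not real data.

def expected_cost(price, failure_rate, crash_cost):
    """Purchase price plus the expected (probability-weighted) cost of a brake failure."""
    return price + failure_rate * crash_cost

CERTIFIED_PRICE = 100.0   # assumed baseline price of certified pads
CRASH_COST = 1_000_000.0  # assumed total cost of one brake-failure crash

# (price, failure rate) per the comment's hypothetical reliability tiers
pads = {
    "NHTSA-Certified": (CERTIFIED_PRICE, 0.0001),          # works 99.99% of the time
    "Cheap-Ass": (CERTIFIED_PRICE * 0.10, 0.0005),         # works 99.95%, 10% of cost
    "Kitchen Sponge": (CERTIFIED_PRICE * 0.0005, 0.10),    # works 90%, 0.05% of cost
}

for name, (price, rate) in pads.items():
    print(f"{name}: expected cost ${expected_cost(price, rate, CRASH_COST):,.2f}")
```

Under these made-up numbers the cheap pads look rational to the buyer only if the crash cost they personally internalize is far below the full social cost — which is exactly the externality point the thread is arguing about.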

I think it is useful here to distinguish politics as a consequence of morality from politics as an agreed set of methods of public decision-making. With the first kind, or politics(A), yes, one has to present all facts as they are regardless of whether they favor one's stance, if one believes there is a moral duty to be rational. In a world where humans all share that particular view on morality, there won't be a need for the second kind, or politics(B). Because, in that world, the set of methods for rational decision making suffice as t... (read more)

Debates can easily appear one-sided, for each side. For example, some people believe that if you follow a particular conduct in life, you will go to heaven. To these people, any policy decision that results in sending fewer people to heaven is a tragedy. But to people who don't believe in heaven, this downside does not exist.

This is not just an arbitrary example. This shows up all the time in US politics. Until people can agree on whether or not heaven exists, how can any of these debates not seem one-sided?

There is so much wrong with this example that I don't know where to start.

You make up a hypothetical person who dies because she doesn't heed an explicit warning that says "if you do this, you will die". Then you make several ridiculous claims about this hypothetical person:

1) You claim this event will happen, with absolute certainty.
2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.
3) You claim this event is a tragedy.

I disagree with all o... (read more)

I'd like to point out that the statistical value of a human life is used by economists for calculations such as the one Eliezer mentions, so at some point someone has managed to do the math.

"I was just making a simple factual observation. Why did some people think it was an argument in favor of regulation?"

A (tiny) note of dissonance here. As noted earlier, any knowledge/understanding naturally constrains anticipation. Won't it follow that a factual observation concentrates the probability density in favour of one side of the debate (assuming, of course, that the debate is viewed as having only two possible outcomes, even if each outcome is very broad and contains many variants)?

In this particular example, ... (read more)

An interesting perspective to take. Bravo!

There are two problems with making stores that can sell banned things: hurting the public, and hurting people who are uneducated. I could go into one of these stores, buy poison, and fill my brother's glass with it. That would be a drawback because it would affect my brother, who did not go into a store, ignore a safety warning, and pick up a bottle of poison to drink. This would be a problem. An uneducated mother of five children who drinks poison doesn't deserve to die, her children don't deserve to be orphans, and that is assuming that she drinks it herse... (read more)

[This comment is no longer endorsed by its author]
Wes_W (5y): But... you can already buy many items that are lethal if forcefully shoved down someone's throat. Knives, for example. It's not obvious to me that a lack of lethal drugs is currently preventing anyone from hurting people, especially since many already-legal substances are very dangerous to pour down someone's throat. From the Overcoming Bias link, "risky buildings" seem to me the clearest example of endangering people other than the buyer.
EngineerofScience (5y): I can see that, and I realize that there are advantages to having a store that can sell illegal things. I would now say that such a store would be beneficial. There would have to be some restrictions on what that type of store could sell. Explosives like fireworks could still be sold for use by a licensed person, and nukes would not be sold at all.

I found this post particularly ironic. The statement that a mother of five would drink sulfuric acid but for government regulation is not "a simple factual observation." How could it be? Since we are imagining an alternative world and the statement is not based on any universal law of human action (nor even historical precedent, in which case it would be a probabilistic statement, not a statement of fact), it is speculation. And a very debatable speculation at that. That is, why would anyone bother to market such a product? Surely it would not be... (read more)

gjm (5y): No, but I'm pretty sure it's shorthand for something like this: which is a simple factual observation, plus this: which, while in principle it's "speculation", seems about as speculative as "if we set up a stall in the street offering free cake, some people would eat it". (I take it it's obvious that "Sulfuric Acid Drink" was intended as hyperbole, to indicate something not quite so transparently harmful, masquerading as a cure. If it isn't, you might want to consider why Eliezer called it "Dr Snakeoil's" [https://en.wikipedia.org/wiki/Snake_oil].)

Apparently you disagree on the grounds that actually no one would be selling such things even if such shops existed. I think they very decidedly might. Selling fake cures for real diseases (or in some cases fake diseases) has historically been very profitable for some people, and some of those fake cures have been poisonous [https://en.wikipedia.org/wiki/Patent_medicine#Actual_ingredients].

That's a stronger argument. I think Robin may have been envisaging -- and, whether or not he was, Eliezer may have taken him to be envisaging -- that selling in the Banned Products Store exempts you from more than just standard-issue regulatory red tape. I am not an expert on US tort law, so I'll take your word for it that Dr Snakeoil would not be able to get out of trouble just by protesting that he honestly thought his Sulfuric Acid Drink was good against arthritis; if so, then indeed the Banned Products store might be substantially less dangerous than Eliezer suggests.
tdb (4y): Maybe we need a banned products store and a tort-proof banned products store, both. I don't quite follow. Even when people "deserve" what they get, if what they "deserve" is death, their loved ones see that as a negative. Does this mean there are no moral truths, since every choice has a downside? Or am I overgeneralizing when I interpret it as "moral truths have no downside."
gjm (4y): I'm not certain I understand Eliezer's argument there, but I think he simply made a mistake: I agree with you that if you do something that deserves a bad outcome and the bad outcome happens, it can still be bad that that happened and that can be a downside to whatever may have made it easier for you to do the bad thing.
ChristianKl (4y): Tort law means that the decision about which products are dangerous enough to warrant being effectively banned isn't made by scientifically literate experts but by laypeople on a jury. Uncertainty about what is and isn't allowed is also bad for business.

“Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.

This speaks to a very significant issue we face today. Vast sw... (read more)