by [anonymous]
1 min read · 3rd Oct 2012 · 33 comments

-31

Just a thought I had the other day: what do you think the political ideas of conservatism have to do with cognitive bias? I mean, how much are people willing to change naturally, without anyone arguing any points?
I know very little about all of these things, so forgive me if this is a silly thought.


Wow, I've clearly made some sort of mistake here, but thanks to all for your replies!

[This comment is no longer endorsed by its author]

"Conservatism" is a political-shouting-match trigger word in the United States.

We generally don't take well to Reddit-style politics. It usually degrades quickly into a "circlejerk", which is exactly what we aim to avoid here.

In addition to "discussing politics is dangerous, because it usually leads to non-rationality", this is what I consider problematic in this specific article:

First, I have no idea what exactly you are talking about. The word "conservatism" means different things in different countries, and probably also different things to different people within the same country. I don't know what "political ideas of conservatism" means to you. You could have provided a few examples in the article. Then we would know we are all thinking about the same thing (even if some of us would give it a different label).

Second, beware of connotations, especially in sensitive topics. (Connotations are the things you did not write, but people read them in your text anyway, because they pattern-match with arguments typically made by other people, or otherwise provide weak evidence.) For example, even if you did not write it, a reader might assume that you think the "political ideas of conservatism" are more biased than the political ideas of other movements (especially those that come first to mind as an opposition to conservatism). Because, if you did not mean it, why did you choose conservatism for your example, instead of something else, or instead of speaking generally about biases in politics? If you meant it, you should be explicit, and if you did not mean it, a small disclaimer could help.

Third, this article is really short. This is what Open Thread topics are for.

Disclaimer: I am not saying that if you fix these three things, then the article will be OK. I am just saying that in addition to speaking about politics, these three things make it worse. (In addition to political taboo, you have also violated some topic-independent standards of LW discussion.)

I hope you will not be discouraged by the fate of this article.

political ideas of conservatism

Why privilege conservatism? Or "political"? Why without arguing? What exactly do you mean by "naturally"?

Belongs in an open thread, also seems like a blatant request for fuel for political shouting matches.

[-]TimS · 12y · 110

There is a strong community norm against talking about politics, particularly political electioneering (community organizer v. rich business executive, etc).

It's not so much that we aren't interested in political theory as that we have observed that people go funny in the head when talking about politics.

It might be a good idea to take this down before the entire community downvotes it. It actually sounds like you could rephrase the question without the political spin. It seems like you're really inquiring about the degree to which people will blindly hold on to tradition. If that's what you're asking, then you can easily divorce that idea from the conservative political faction.

Cognitive biases abound on both ends of the political spectrum. Recently a test of UK MPs showed they can't do basic probability, let alone deal with the kind of biases we discuss here on LessWrong. In the US, compare global climate change denial on one end of the spectrum with GM scares on the other. At first glance, neither group seems more likely to be susceptible to bias than the other.

For reference, I vote mostly with the Green party in the US, despite their idiotic views on homeopathy, pseudoscience, nuclear power, and several other talking points. There is no such thing as a Technical Rationality Party, and even if there were, I'm unsure what positions it would take on several issues that differ greatly in their ethical assumptions (and hence Bayesian priors).

For example, I'm sure most of you eat meat because you value the feelings of nonhuman animals much less than I do. As a vegan, my ethical assumption is that there is nothing special about humans that makes their preferences matter more, and so I compare the benefit of good taste against the suffering involved in factory farms, concluding that it is not ethical for me to eat meat. Yet I completely understand and accept that several LessWrong members will think there is nothing wrong with eating meat, and will not be suffering from bias in coming to that conclusion, merely because they go into the Bayesian calculation with a completely different prior: they prefer humans qualitatively, rather than quantitatively as I do.

Different ethical assumptions result in different political positions, even when no bias is present. Since ethics is not an independent part of the world but rather something we impart into it, there is no basis on which any of us can conclusively convince another to change their initial ethical assumptions, except by exposing that one's current view is inconsistent or flawed in some way. Yet there is a huge gulf between being able to say "your ethical view is inconsistent with logic" and "my ethical view is the preferred one". Just because they're wrong doesn't make you right.

This is completely unrelated to your main point, but for the record, eating meat isn't mutually exclusive with caring about the feelings of animals. I personally don't feel that my diet has enough of an impact on food production to warrant the inconvenience and discomfort that would come with a vegan lifestyle.

I personally don't feel that my diet has enough of an impact on food production to warrant the inconvenience and discomfort that would come with a vegan lifestyle.

On average, one more person buying an easily-produced resource results in the production of an extra person's worth of that resource. The large number of other people mainly just makes the effect harder to feel, since the human brain sucks at large numbers.

I disagree.

Under the assumption that I am a recluse and have zero capacity to influence anyone else's dietary choices, my ability to affect animal welfare through buying choices is strongly quantized. Purchasing a burger at a busy restaurant in a large city will not affect how many burgers they purchase from their distributor. Assuming they buy by the case (what restaurant wouldn't?), affecting how much they purchase would require either eating there extremely often or being part of a large group of people who eat there and who all cease buying burgers at once.

However, despite disagreeing with the specifics of what you posted here, I do agree with the spirit. As a compassionate person who has the capacity to influence others, it is important that I be vigilant with veganism, if for no other reason than that it makes me less persuasive if I appear to be hypocritical. Even if buying the occasional burger does not cause any additional harm in the world by itself, it would lessen my credibility, and my ability to influence others into making more ethical choices would be harmed.

Average. Average. On average.

I'm so sorry. On rereading I see that you said average; I guess I was reading too quickly when I posted this reply.

I will use this as an opportunity to remind myself to always slowly reread any comments I plan to reply to at least once. It was sloppy of me to reply after a single read through, especially when missing that one word made me misunderstand the key point I found disagreement with.

It's OK. Automatically thinking about the average is a slightly unusual local convention.

Purchasing a burger at a busy restaurant in a large city will not affect how many burgers they purchase from their distributor.

It will, by exactly 1 burger. More specifically, if their unit of buying is a box of 100 frozen burgers, and they use the surplus from each day to start the next, then in the long run they will have bought exactly 1 more burger than they would if you had not bought yours: one in 100 of the boxes they get through will have been purchased 1 day earlier than it would have been.
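A quick simulation makes the long-run claim concrete (my sketch, not part of the original comment; the box size of 100 and Poisson-distributed demand are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def burgers_ordered(demands, box_size=100):
    """Burgers ordered by a shop that buys whole boxes and
    carries each day's surplus stock into the next day."""
    stock = ordered = 0
    for d in demands:
        while stock < d:            # restock until today's demand is covered
            stock += box_size
            ordered += box_size
        stock -= d
    return ordered

trials, days, diffs = 20_000, 365, []
for _ in range(trials):
    demand = rng.poisson(80, size=days)   # assumed daily demand
    base = burgers_ordered(demand)
    demand[0] += 1                        # ...plus my one burger on day 1
    diffs.append(burgers_ordered(demand) - base)

print(np.mean(diffs))   # ≈ 1: one extra burger ordered per burger bought
```

Each individual trial shows a difference of either 0 or 100 burgers; only the average comes out to about 1.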

This is a common fallacy: saying that if a large change in X produces a large change in Y, then a small change in X will produce no change at all in Y. Stated like that it's obviously absurd, but in concrete situations people apply the same wrong thinking as you have just done.

Compare the marketing parable (I don't know if the exact scenario ever happened) of the manager at a burger chain who suggested putting just 5 sesame seeds less on every bun. No-one would notice and they'd save money over millions of buns. Repeat until they have no customers left.

Here's another example. You are about to leave home to drive somewhere. There are many junctions with traffic lights on the way, and you will probably have to stop at some of them. If you are delayed by one second leaving home, by how much is your expected arrival time delayed?

[-]satt · 12y · 40

Purchasing a burger at a busy restaurant in a large city will not affect how many burgers they purchase from their distributor.

It will, by exactly 1 burger.

What EricHerboso said wasn't true in general but neither is that. I can well imagine that fast food places just buy a specific number of burgers periodically and discard the surplus. If there's slack from this, buying 1 burger can have a far smaller effect upstream.

Might as well check this line of argument works with a toy example. Suppose the number of would-be burger buyers X at my local McDonald's each day (discounting myself) is Poisson-distributed with mean 80. The McDonald's buys either 100 or 120 burgers per day: if it had >100 customers the previous day, it buys 120, otherwise just 100. Then, on average, it buys 100 P(X ≤ 100) + 120 P(X > 100) = 100.26 burgers. Now suppose I turn up and buy a burger. Then the expected number of burgers the restaurant buys the next day is 100 P(X+1 ≤ 100) + 120 P(X+1 > 100) = 100.34 burgers. My buying 1 burger makes the restaurant buy only 0.08 burgers more (on average).
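These figures are easy to reproduce (my check of the comment's arithmetic, assuming scipy is available):

```python
from scipy.stats import poisson

X = poisson(80)                      # would-be customers per day, excluding me

without_me = 100 + 20 * X.sf(100)    # 20 extra iff yesterday's customers > 100
with_me    = 100 + 20 * X.sf(99)     # X + 1 > 100  iff  X > 99

print(without_me, with_me)           # ≈ 100.26, 100.34
print(with_me - without_me)          # ≈ 0.08 extra burgers, on average
```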

This is a common fallacy: saying that if a large change in X produces a large change in Y, then a small change in X will produce no change at all in Y.

There is an analogous fallacy: assuming that if a large change in X causes a large change in Y, a small change in X causes a proportionally small change in Y.

Compare the marketing parable (I don't know if the exact scenario ever happened) of the manager at a burger chain who suggested putting just 5 sesame seeds less on every bun. No-one would notice and they'd save money over millions of buns. Repeat until they have no customers left.

I hadn't heard of that parable before, but I had heard the more upbeat business story of American Airlines saving $40,000 a year by putting one less olive in each salad it served in first class.

You are about to leave home to drive somewhere. There are many junctions with traffic lights on the way, and you will probably have to stop at some of them. If you are delayed by one second leaving home, by how much is your expected arrival time delayed?

Once, when I was younger, I found I could shave 5 minutes off my commute by leaving for the train station 5 minutes later in the morning!

Might as well check this line of argument works with a toy example.

The argument needs to look at the wider situation. How did the burger shop decide on their restocking algorithm? By looking at demand. They will continue to look at demand and review their algorithm from time to time. Buying one burger contributes to that, so the situation is that one more customer may result in them changing the numbers from 100 and 120 to 110 and 130. Ten extra burgers a day until they review the numbers again. The probability that your burger pushes them into ordering more is smaller than in the original example, but the number of extra burgers is proportionately larger. Slack in the chain doesn't affect the mean effect, only the variance.

I hadn't heard of that parable before, but I had heard the more upbeat business story of American Airlines saving $40,000 a year by putting one less olive in each salad it served in first class.

A crucial detail: "most passengers did not eat the olives in their salads". At least, in that telling of the story. So there was reason to think that the olives weren't earning their place in terms of passenger satisfaction.

Once, when I was younger, I found I could shave 5 minutes off my commute by leaving for the train station 5 minutes later in the morning!

You knew the timetable and caught the same train?

[-]satt · 12y · 20

The argument needs to look at the wider situation. How did the burger shop decide on their restocking algorithm? By looking at demand. They will continue to look at demand and review their algorithm from time to time. Buying one burger contributes to that, so the situation is that one more customer may result in them changing the numbers from 100 and 120 to 110 and 130. Ten extra burgers a day until they review the numbers again.

Incorporating this into the toy model shows this isn't enough to guarantee proportionality either.

My hypothetical burger shop is now in Sphericalcowland, a land where every month is 30 days. It also has a new burger-buying policy. On day 1 of each month, it buys 100 burgers for that day, then uses a meta-decision rule to decide the burger-buying decision rule for the month's remaining 29 days. Let Y be the number of customers in the previous month. If Y was no more than, say, 2500, the shop uses my earlier 100/120 decision rule for the remaining 29 days. But if Y > 2500, it uses your upgraded decision rule (buy 130 burgers if there were >100 customers the previous day, otherwise buy 110). X ~ Po(80) as before, so Y ~ Po(2400). (I've deliberately held constant the burgers bought for day 1 of each month to avoid applying a previous day-based decision rule for day 1 and causing inter-month dependencies.)

With the 100/120 decision rule, the shop buys an average of 3007.639 burgers a month. So with the 110/130 decision rule, it buys an average of 3297.639 a month.

If I don't buy a burger, E(burgers bought next month) = (3007.639 × P(Y ≤ 2500)) + (3297.639 × P(Y > 2500)) = 3013.624 burgers, by my working.

If I buy a burger, E(burgers bought next month) = (3007.639 × P(Y+1 ≤ 2500)) + (3297.639 × P(Y+1 > 2500)) = 3013.920 burgers.

Hence in this example, the upstream marginal effect of my buying 1 burger is only 0.296 burgers. The presence of feedback doesn't suffice to guarantee proportionality.
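Again, the arithmetic is straightforward to verify (my sketch of the comment's numbers, assuming scipy):

```python
from scipy.stats import poisson

X = poisson(80)      # customers per day
Y = poisson(2400)    # customers per month (30 days x 80)

# Expected burgers per month: 100 on day 1, then 29 days of a
# previous-day rule (base burgers, +20 if yesterday had >100 customers).
def monthly(base):
    return 100 + 29 * (base + 20 * X.sf(100))

low, high = monthly(100), monthly(110)    # ≈ 3007.639 and 3297.639

without_me = low * Y.cdf(2500) + high * Y.sf(2500)
with_me    = low * Y.cdf(2499) + high * Y.sf(2499)  # Y+1 > 2500 iff Y > 2499

print(with_me - without_me)   # ≈ 0.296 extra burgers next month
```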

A crucial detail: "most passengers did not eat the olives in their salads". At least, in that telling of the story. So there was reason to think that the olives weren't earning their place in terms of passenger satisfaction.

For all I know, neither do sesame seeds on buns! In any case, the American Airlines story might be apocryphal in itself. I just bring it up to illustrate that there's a countervailing anecdote to the parable.

You knew the timetable and caught the same train?

Exactly.

Forgive me if I'm misunderstanding, but doesn't the fallacy you bring up apply specifically to continuous functions only? For step functions, a sufficiently small change in input will usually produce no change in output at all.

Assuming the unit is a 100-burger box (100BB), my purchase of a burger only affects their ordering choices if I bring total burger sales over some threshold. I'm guesstimating, but I'd guess the margin is around 1/3 of a box, or 33 burgers in a 100BB. So if I'm the 33rd additional customer, it might affect their decision to buy an extra box; but if I'm one of the first 32, it probably won't. This puts a very large probability on my action having no effect.

Is my reasoning here flawed? I've gone over it again in my head as I wrote this comment, and it still seems to apply to me, even after reading your above comment. But perhaps I'm missing something?

This puts a very large probability on my action having no effect.

However, the complementary probability is the probability of a correspondingly large effect. The smallness of the probability and the largeness of the effect exactly cancel, giving an expected effect of 1 burger bought from the distributor for every burger bought by you.

The fact that the effect is nigh invisible due to the high level of stochastic noise does not mean it is not there.
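(To make the cancellation concrete with the 100-burger box from upthread: a roughly 1-in-100 chance of triggering an extra box of 100 burgers gives an expected effect of (1/100) × 100 = 1 burger per burger bought.)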

To eat meat without it having been killed for your benefit, you should raid supermarket waste bins for the time-expired stuff they throw out.

When I first thought about this, I was fairly confident of my belief; after reading your first comment, I rethought my position but still felt reasonably confident; yet after reading this comment, you've completely changed my position on the issue. I had completely neglected to take into account the largeness of the effect.

You're absolutely correct, and I retract my previous statements to the contrary. Thank you for pointing out my error. (c:

That may be, but by becoming a vegan, you don't make the problem any better; you just decline to make it worse. And since I have a very limited ability to make this issue worse and no power at all to make it better, the only ethical reason not to eat meat is to protect my conscience.

Besides, extra production when it comes to meat means raising another animal where previously one would not have been allowed to mature. Its life may be short and unhappy, but I value a life of suffering more than no life at all.

So you are nearing the end of your life due to some disease, and the doctor tells you: "At the moment when you would otherwise die, we have a procedure that can give you an extra two minutes of life. Unfortunately, you will spend all of those two minutes in utter agony, as if your entire skin were flayed and bathed in salt and red-hot barbs had been simultaneously thrust into your throat, genitals and anus. Then, after two minutes of this, you will die." Will you opt for the procedure?

No, what's your point?

Ciphergoth's point was to show that you did not really believe the statement "I value a life of suffering more than no life at all."

Now that the point is made, your justification of extra production falls apart. Saying that extra production means more lives which means more good is not a good argument. If you honestly felt this way, then you'd accept ciphergoth's deal -- and you'd also be morally obligated to forcibly impregnate as many women as possible to boot.

If you read my comment below, you will see that I was not referring to my own life. Also, as I said before, that statement is a generalization. I am not hard-programmed to absolutely value the maximization of life, but, as a general rule, I feel bad if something that could be alive is not.

Also, I never said that extra production is a good thing; I said that there was moral value to be found in it, which can compensate for the overall end of the process. And valuing additional life does not obligate you to impregnate women if you feel a stronger moral obligation towards not forcibly impregnating people. I made a statement about a characteristic of my utility function; I did not make a statement about the driving force behind my utility function. The desire to optimize life in others does not override most of my other desires.

By the way, it's interesting that you automatically seem to assume I'm male. You happen to be right, and the odds were on your side, but still.

My apologies on the male assumption. By sheer chance, when I first wrote the comment referencing ciphergoth, I noticed myself using the pronoun "he" and took steps to rephrase appropriately. Yet I did not do the same with you.

I really need to spend more time checking my assumptions before I post, but old habits are tough to break in a short period of time. Your above post will reinforce the need for me to check assumptions before hitting the "comment" button.

As for extra production, I can see that a stronger moral obligation would override in circumstances like rape. But what about culture-wide influences? It isn't obvious to me that a stronger moral obligation would override your desire to have a culture-wide policy of reproducing as often as possible. Wouldn't a major goal of yours be to somehow help guide civilization toward some optimal human saturation in your light cone? I don't mean paperclip-maximization style, as obviously after a certain density overall good would be lessened, not increased. But surely an increase in human density up to some optimal saturation?

I know you say that "the desire to optimize life in others does not override most of my other desires", but surely this applies mostly to principles like "don't rape", and not to principles like "don't institute strong societal encouragement for procreation".

edit: Added a missing "don't institute" on the final line.

I value a life of suffering more than no life at all.

Just so I'm clear... are you saying that you predict you would never want to end your life if you predicted that it would be a life of suffering? Or that you might want to end your life in that case, but you currently believe it would be better if you were unable to? Or something else?

I never said anything about the life in question being mine. To be honest, I don't value personally experiencing life all that much. I meant that I generally value the lives of other beings even if much of their lives involve suffering. Of course that is a ridiculously generalized statement, but the probability of a creature's happiness will usually be infinitely greater if that creature actually exists.

In any case, my main point was that a person can value the feelings of a being and still rationally decide to allow that being to be slaughtered so that the person can eat it. I don't think I would have that much more of a problem eating tasty human meat than tasty chicken meat, but I could be wrong, seeing as how I've never eaten human.

I agree you didn't say the life in question was yours. You said that a life of suffering was more valuable than no life, from which I inferred (apparently incorrectly) that your life of suffering is more valuable than you not having life at all.

We occasionally have threads specifically for politics-related discussions. I would suggest waiting for the next one and reposting this as a comment there, possibly rewritten to be less vague (I am not sure what exactly you are asking). Also, be careful, because even in a politics thread you will be downvoted if you seem to be putting partisanship above constructive argument.