The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”

    The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”

    —Marc Stiegler, David’s Sling

    I don’t know if the Sophisticate’s mistake has an official name, but I call it the Fallacy of Gray. We saw it manifested in the previous essay—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant “there was still a chance.” All probabilities, to him, were simply “uncertain” and that meant he was licensed to ignore them if he pleased.

    “The Moon is made of green cheese” and “the Sun is made mostly of hydrogen and helium” are both uncertainties, but they are not the same uncertainty.

    Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black. Or even if not, we can still compare shades, and say “it is darker” or “it is lighter.”

    Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from The Player of Games by Iain M. Banks, especially the sentence in bold:

    A guilty system recognizes no innocents. As with any power apparatus which thinks everybody’s either for it or against it, we’re against it. You would be too, if you thought about it. The very way you think places you amongst its enemies. This might not be your fault, because every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it. You come from one of the latter and you’re being asked to explain yourself to one of the former. Prevarication will be more difficult than you might imagine; neutrality is probably impossible. You cannot choose not to have the politics you do; they are not some separate set of entities somehow detachable from the rest of your being; they are a function of your existence. I know that and they know that; you had better accept it.

    Now, don’t write angry comments saying that, if societies impose fewer of their values, then each succeeding generation has more work to start over from scratch. That’s not what I got out of the paragraph.

    What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me.

    It was the whole notion of the Quantitative Way applied to life-problems like moral judgments and the quest for personal self-improvement. That, even if you couldn’t switch something from on to off, you could still tend to increase it or decrease it.

    Is this too obvious to be worth mentioning? I say it is not too obvious, for many bloggers have said of Overcoming Bias: “It is impossible, no one can completely eliminate bias.” I don’t care if the one is a professional economist, it is clear that they have not yet grokked the Quantitative Way as it applies to everyday life and matters like personal self-improvement. That which I cannot eliminate may be well worth reducing.

    Or consider an exchange between Robin Hanson and Tyler Cowen.1 Robin Hanson said that he preferred to put at least 75% weight on the prescriptions of economic theory versus his intuitions: “I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.” Tyler Cowen replied:

    In my view there is no such thing as “straightforwardly applying economic theory” . . . theories are always applied through our personal and cultural filters and there is no other way it can be.

    Yes, but you can try to minimize that effect, or you can do things that are bound to increase it. And if you try to minimize it, then in many cases I don’t think it’s unreasonable to call the output “straightforward”—even in economics.
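    Putting “at least 75% weight” on theory can be read quite literally as a mixture of probability estimates. A minimal sketch, with numbers and a function name that are illustrative, not Hanson’s:

```python
def weighted_estimate(theory_p, intuition_p, theory_weight=0.75):
    """Blend a theoretical prediction with an intuitive one,
    putting most of the weight on theory."""
    return theory_weight * theory_p + (1 - theory_weight) * intuition_p

# Hypothetical: theory says an event has probability 0.9,
# gut intuition says 0.5.  0.75*0.9 + 0.25*0.5 = 0.8.
blended = weighted_estimate(0.9, 0.5)
print(blended)
```

    The point is not the particular weights but that the theory-versus-intuition question is quantitative: you can turn the dial toward theory without pretending intuition contributes nothing.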

    “Everyone is imperfect.” Mohandas Gandhi was imperfect and Joseph Stalin was imperfect, but they were not the same shade of imperfection. “Everyone is imperfect” is an excellent example of replacing a two-color view with a one-color view. If you say, “No one is perfect, but some people are less imperfect than others,” you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all.

    (Whenever someone says to me, “Perfectionism is bad for you,” I reply: “I think it’s okay to be imperfect, but not so imperfect that other people notice.”)

    Likewise the folly of those who say, “Every scientific paradigm imposes some of its assumptions on how it interprets experiments,” and then act like they’d proven science to occupy the same level with witchdoctoring. Every worldview imposes some of its structure on its observations, but the point is that there are worldviews which try to minimize that imposition, and worldviews which glory in it. There is no white, but there are shades of gray that are far lighter than others, and it is folly to treat them as if they were all on the same level.

    If the Moon has orbited the Earth these past few billion years, if you have seen it in the sky these last years, and you expect to see it in its appointed place and phase tomorrow, then that is not a certainty. And if you expect an invisible dragon to heal your daughter of cancer, that too is not a certainty. But they are rather different degrees of uncertainty—this business of expecting things to happen yet again in the same way you have previously predicted to twelve decimal places, versus expecting something to happen that violates the order previously observed. Calling them both “faith” seems a little too un-narrow.

    It’s a most peculiar psychology—this business of “Science is based on faith too, so there!” Typically this is said by people who claim that faith is a good thing. Then why do they say “Science is based on faith too!” in that angry-triumphal tone, rather than as a compliment? And a rather dangerous compliment to give, one would think, from their perspective. If science is based on “faith,” then science is of the same kind as religion—directly comparable. If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars. It would make sense to say, “The priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests’ faith can’t do the same.” Are you sure you wish to go there, oh faithist? Perhaps, on further reflection, you would prefer to retract this whole business of “Science is a religion too!”

    There’s a strange dynamic here: You try to purify your shade of gray, and you get it to a point where it’s pretty light-toned, and someone stands up and says in a deeply offended tone, “But it’s not white! It’s gray!” It’s one thing when someone says, “This isn’t as light as you think, because of specific problems X, Y, and Z.” It’s a different matter when someone says angrily “It’s not white! It’s gray!” without pointing out any specific dark spots.

    In this case, I begin to suspect psychology that is more imperfect than usual—that someone may have made a devil’s bargain with their own mistakes, and now refuses to hear of any possibility of improvement. When someone finds an excuse not to try to do better, they often refuse to concede that anyone else can try to do better, and every mode of improvement is thereafter their enemy, and every claim that it is possible to move forward is an offense against them. And so they say in one breath proudly, “I’m glad to be gray,” and in the next breath angrily, “And you’re gray too!”

    If there is no black and white, there is yet lighter and darker, and not all grays are the same.

    The commenter G2 points us to Asimov’s “The Relativity of Wrong”:

    When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

    1. Hanson (2007), “Economist Judgment”; Cowen (2007), “Can Theory Override Intuition?”


    I suggest this post for the "start here" list. It's unusually close to perfection.

    This post is unusually white. The two arguments -- all shades of gray being seen as the same shade and science being a demonstrably better "religion" -- have seriously expanded my mind. Thank you!

    That which I cannot eliminate may be well worth reducing.

    I wish this basically obvious point were more widely appreciated. I've participated in dozens of conversations which go like this:

    Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc." Me: " $%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"

    Great post. I encourage you to expand on the idea of the Quantitative Way as applied to areas such as self improvement and everyday life.

    Seeing Dan_Burfoot's comment from four years ago, I felt compelled to join the discussion. I would put it like this. Libertarian: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Me: "Coercive violence is dissuading me from killing you. So maybe coercive violence is not so bad, after all." Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests. "Government" is not "coercive violence", it is the agreement between rational people that they will allow their

    Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests.

    I was nodding along until: "The ground upon which all rationality rests".

    You seem to have fallen into the same trap of self-defeating hyperbole that the quoted straw-libertarian has fallen into. It is enough to make your point that government, and the implied threat of violence, is not all bad and is even useful. Don't try to make ridiculous claims about "all rationality". Apart from being a distasteful abuse of 'rational' as an applause light it is also false. With actual rational agents all sorts of alternative arrangements not fitting the label "government" would be just as good---it is the particular quirks of humans that make government more practical for us right now.

    I am embarrassed that I accidentally clicked "close" before I was done writing my comment. While I was off composing it in the sandbox, you saw the first draft and commented on it. And you are correct, I think. Is my face red, or what? I have retracted my original comment. My browser shows it as struck out, anyway. So, yeah, saying that government is "coercive violence" is a straw argument. I think we agree. What are "actual rational agents"? I am new here, so maybe I should do some more reading. I'm sure Eliezer has published extensively on defining that term. My prejudice would be that "actual rational agents" are entities which "rationally" would want to protect their own existence. I mean, they may be "rational", but they still have self-interest. So what I'm saying is that "government" is a system for settling claims between competing rational agents. It's a set of game rules. Game rules enshrined by rational agents, for the purpose of protecting their own rational self-interests, are rational. Rational debate, without the existence of these game rules, which is what government is, is impossible. That's what I'm saying. Here's another way to look at it. The Laws of Logic (A is A, etc.) are also game rules. We don't think of them that way because we don't accept the Laws of Logic voluntarily. We are forced to accept them because they are necessarily true. Additional rules, which we call government, are also necessary. We write our own Constitution, but we still need to have one.
    We are using approximately the same meaning. (I would only insist that they value something, it doesn't necessarily have to be their own existence but that'll do as an example.) I'm disagreeing that government is actually necessary. It is a solution to cooperation problems but not the only one. It just happens to be the one most practical for humans.
    Well, for sufficiently large groups of humans.
    Bringing party politics into a discussion about rationality makes you the straw man, my friend. Attacking a philosophy of limited government would imply that every government action is the same shade of grey, and all must be necessary, because a group of people voted on a policy, therefore it must be thought out. Politics in itself is not the product of careful examination and rational thinking about public issues, but rather a way of conveying one's interests in a manner that appears to benefit the target audience and gain support. Not all rules are necessary or of the same necessity, simply because they are written. I would also add that we do, in fact, accept the Laws of Logic voluntarily, but only if we are not indoctrinated to do otherwise. To believe that we don't would suggest that the first philosophers had to have been taught, perhaps by some supernatural or extraterrestrial deity, or perhaps the first logical thought was triggered by a concussion.
    Doesn't "coercive violence is bad" beg the question in a way that would only be deemed natural if one were implicitly invoking the noncentral fallacy?
    No, many people think coercion qua coercion is wrong - for example, philosophers of a Kantian bent, which is very common in political philosophy.
    Point taken, but I would advance the view that the popularity of such a categorical point stems from the fallacy. It seems to be the backbone that makes deontological ethics intuitive. In any event, it's still clearly an instance of begging the question. But my goal was to cast a shadow on the off-topic point, not to derail the thread.
    I'm not sure it is; that government involves coercion is a substantive premise. Unfortunately, people who agree with the off-topic point can hardly accept such behaviour without response.
    Many libertarians think that. I'm not so sure about that. I don't think he would have wished "no criminals should be captured" or "Everyone should dodge taxes" to be the Universal Law.
    I'm not referring to Kant, I mean contemporary philosophers, like Michael Blake, who is not a libertarian.

    Agreed - best post in ages, many thanks. That is all.

    All who love this post, do you love it because it told you something you didn't know before, or because you think it would be great to show others who you don't think understand this point? I worry when our readers' favorite posts are based on how much they agree with the post, instead of how much they learned from it.

    It's possible both are true: that the reader understood the point already, but learned a better way to articulate it in an effort to advance another conversation.

    I already knew it, but this post made me understand it.

    For me, the main point is that incremental advancement towards perfection means expending resources and creating other consequences. The questions ultimately have to be 'how much is it worth to move closer to perfection? What other consequences probably will happen?' This question obviously depends on your context. It appears that some kinds of perfectionism, as far as I can tell, have negative effects on the holder of perfectionistic standards, in the view of psychologists, the relevant experts on the matter, and that costs have to be considered when moving in...

    Robin, I think people tend to be enthusiastic when an idea they've known on a more or less intuitive level for a long time is laid out eloquently, and in a way they could see relaying to their particular audience. It's a form of relief, maybe.

    So it's not so much "I like it because I agree with it," it's more "I like it because I knew it before but I could never explain it that well."

    /unscientific guessing


    I'm with LG, the answer to your question is 'neither'. I also enjoy posts which reinforce my way of thinking, but a straight account of what I already think myself wouldn't draw praise. Crystallization of a hitherto-unclear concept can be invaluable - I quote:

    "What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me."

    Mike, any action or updating of beliefs will have a net effect on 'whiteness' ...

    Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?

    When used appropriately, the "science is based on faith too" point is meant to cast doubt upon specific non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer). Scientific evidence doesn't distinguish between these h...

    Utilitarian, you said:

    non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer).

    How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

    How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

    Not much; it's possible that these hypotheses are falsifiable (in the sense of having a likelihood ratio < 1 compared against the other corresponding hypothesis). I was assuming this wasn't true given only the evidence currently available, but I'd be glad to hear if you think otherwise.

    It's easy to think of potential observations that would very strongly favor dualism or intelligent design, and the absence of those observations counts as falsifying evidence.

    I think it's worth keeping the distinction between falsification (a likelihood ratio of 0) and disconfirmation (a likelihood ratio < 1). Usually when people say "unfalsifiable" they really mean "undisconfirmable" or "unstronglydisconfirmable".
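    In odds form, Bayes's theorem makes this distinction concrete: a likelihood ratio of exactly zero annihilates a hypothesis outright, while any ratio below one merely darkens its shade. A small sketch with made-up numbers:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes's rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (for:against) to a probability."""
    return odds / (1 + odds)

prior = 1.0  # even odds: P = 0.5

# "Disconfirmation": the evidence is 4x more likely if the hypothesis is false.
print(odds_to_prob(update_odds(prior, 0.25)))  # probability falls from 0.5 to 0.2

# "Falsification": the evidence is impossible if the hypothesis is true.
print(odds_to_prob(update_odds(prior, 0.0)))   # probability falls to exactly 0
```

    Repeated likelihood ratios below one can push a probability arbitrarily close to zero without ever reaching it; only a ratio of exactly zero gets you all the way there.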

    Dan Burfoot, permit me to join in those conversations:

    Me: "No, coercive violence is merely a shade of gray. Another harm of the status quo, like sick children, may be a darker shade of gray, in which case I'm willing to become a little darker so I can gain more lightness overall. For example, I don't think there's much opposition to using coercive violence to protect the life of infants (criminalizing infanticide, taxation to support wards of the state, etc.). Of course, opinions on the relative light/darkness of coercive violence vs. other 'bads' differ, and therein lies the popular contention between 'big govt' vs. 'small govt': not whether government is based on coercive violence, or whether coercive violence is bad."

    This post reminds me of Isaac Asimov's The Relativity of Wrong, which is excellent. Wikipedia page

    It reminded me of that as well. Here is the full article; I'm glad it's online, because the errors he (and Yudkowsky, above) clears up are astonishingly prevalent. I've had cause to link to it many times.

    LG, Doesn't that mean you like the post specifically because it appeals to confirmation bias, one of the known biases we should be seeking to overcome?

    In other words, "numbers matter". But I suppose mentioning numbers eliminates most of your audience.

    Ah, I love the way the cheap shots just keep on coming...

    Arthur Koestler has some thoughts that are relevant here.

    Thanks, Eliezer, for an excellent article. Some of my favorite quotables:

    • the Quantitative Way

    • Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black.

    • If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars.

    • "Everyone is imperfect" is an excellent example of replacing a two-color view with a one-color view.

    Then there's the fallacy of shades of gray: that every space can be reasonably modeled as 1-dimensional.

    I'm trying to imagine the other dimension we could add to this. If we have "more right" and "less right" along one axis, what's orthogonal to it? I initially felt this comment was silly (the post isn't saying every space can be reasonably modeled as one-dimensional, is it?), but my brain is telling me we actually could come up with a more precise way to represent the article's concept with a Cartesian plane... but I'm not actually able to think of one. False intuition based on my experience with the "Political Compass" graph, perhaps.
    Direction of divergence? Neither (1, 5) nor (5, 1) may be "more wrong" when the answer is (2, 2), but may still be quite meaningfully distinct for some purposes.
    That's true. They could be wrong in different ways (or "different directions", in our example), which could be important for some purposes. But as you say, that depends on said purposes; I'm still uncertain as to the fallacy that dspeyer refers to. If our only purpose is determining some belief's level of correctness, absent other considerations (like in which way it's incorrect), isn't the one dimension of the "shades of grey" model sufficient? Although -- come to think of it, I could be misunderstanding his criticism. I took it to mean he had an issue with the original post, but he could just be providing an example of how the shades-of-grey model could be used fallaciously, rather than saying it is fallacious, as I initially interpreted.
    I meant my comment more as a warning to readers than as a criticism of the article. When you've upgraded your mental model, don't stop and be satisfied -- see if there are more low-hanging upgrades. This is especially important if having recently improved your model biases you toward overconfidence (which I suspect is common).

    To address your actual challenge... Probability of correctness may actually be one-dimensional. Though in practice it's worth keeping around what the big hunks of uncertainty are, so you can update them easily if needed (i.e., P(my_understanding) = P(I_understood_what_I_read) × P(the_author_was_honest) × ... is easier to update if you later learn the author was a troll).

    Degrees of correctness are more complex. "The geography of the Earth is as shown on a Mercator map" and "The geography of the Earth is as shown on a Peters map" are both false. They are both useful approximations. Is one more useful than the other? That depends on what you want to do with it.

    There were other examples in the article besides correctness. "Every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it" -- and some maximize it with regard to their perspective on murder and minimize it with regard to their perspective on shellfish. "No one is perfect, but some people are less imperfect than others" -- and some people are imperfect in different ways from others, which are more or less harmful in different circumstances.
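    To make dspeyer's two-dimensional point concrete: the hypothetical answers (1, 5) and (5, 1) are exactly as far from the true answer (2, 2), yet they err in different directions. A quick sketch:

```python
import math

def wrongness(estimate, truth):
    """Magnitude of error as Euclidean distance from the truth."""
    return math.hypot(estimate[0] - truth[0], estimate[1] - truth[1])

truth = (2, 2)
print(wrongness((1, 5), truth))  # sqrt(10), about 3.16
print(wrongness((5, 1), truth))  # the same magnitude of error...

# ...but in different directions:
print((1 - truth[0], 5 - truth[1]))  # error vector (-1, 3)
print((5 - truth[0], 1 - truth[1]))  # error vector (3, -1)
```

    A one-dimensional "shade of gray" keeps only the magnitude and throws away the direction, which is exactly the information that matters for some purposes.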

    This was a very useful post and one I will be adding into my daily dossier I know. I agree this is a good "start post" because it is lucid, clear, and useful. There's little I feel to add at the moment as doing so would simply be glorifying the item itself rather than using the knowledge gained, so thank you for the post.

    I'm glad this post is here! Today, I came across this lovely little statement on Xanga: "Richard Dawkins admitted recently that he can't be sure that God does not exist. He is generally considered the World's most famous Atheist. So this question is for Atheists. Can you be sure that God does not exist?"

    It made me cranky right away (I promise, I was more patient many many instances of this sentiment ago), and my first response was to link here in a comment. Well, I'm glad this post is here to link to. Grr.

    Surprised no-one's yet noted that the proper name for this is the continuum fallacy or sorites fallacy.

    i don't follow the relevance of article, as it seems quite obvious. the real problem with the black and white in the world of rationality is the assumption there is a universal answer to all questions. the idea of "grey" helps highlight that many answers have no one correct universal answer. what i dont understand about rationalists (LW rationalists) is that the live in a world in which everything is either right or wrong. this simplifies a world that is not so simple. what am i missing?

    Offtopic: Have you considered running your comments through a spell- and grammar-checker? It might help with legibility and signalling competence. Ontopic: Rationalists, or at least Bayesians, use probabilities, not binary right-or-wrong judgments. There is, mathematically, only one "correct" probability given the data; is that what you mean?
    Ok, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgements may still hold in the abstract, they are often not practical to use (or so I would argue). Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world? You don't have to sign on completely to Nietzsche's theory to see its potential application, even if limited in scope. For example, a number of communication techniques employ a type of perspectivism because different people view issues through an "individual lens". In either case, seeing the world as constructed of shades of grey seems more practical and accurate relative to using probabilities. This seems at odds with Bayesian judgments that assume that probabilities yield one correct answer AND that a person can and should be able to derive that correct answer. The point i raise about communication techniques relates to your "offtopic" point. I assume you are a rationalist, and thus believe yourself to have superior decision making skills (at least relative to those that are not students (or masters) of rationality). If so, what is the value of your "off topic" point -- you clearly were able to answer my question despite its shortcomings -- why belittle someone that is trying to understand an article that is well-received by LW? Is the petty victory of pointing out my mistakes, from your perspective, the most rational way to answer my comment? I'm not insulted personally (this type of pettiness always makes me smile), but I'm interested in understanding the logic of your comments. From my perspective, rationality failed you in communicating in an effective way. 
It seems your arrogance could keep many from following and learning from LW -- unless of course the goal is to limit the ranks of th
    Unreliable evidence, biased estimates etc. can, in fact, be taken into account. This. Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes. Oh, I'm not going to downvote your comments or anything. I just thought you might prefer your comments to be easier to read and avoid signalling ... well, disrespect, ignorance, crazy-ranting-on-the-internet-ness, and all the other low status and undesirable signals given off. Of course, I'm giving you the benefit of the doubt, but people are simply less likely to do so when you give off signals like that. This isn't necessarily irrational, since these signals are, indeed, correlated with trolls and idiots. Not perfectly, but enough to be worth avoiding (IMHO.)
    If all you're looking for is confidence, why must you assign probabilities? I'm pushing you in hopes of understanding, not necessarily disagreeing. If I'm very religious and use that as my life-guide, I could be extremely confident in a given answer. In other words, the value of using probabilities must extend beyond confidence in my own answer -- confidence is just a personal feeling. Being "right" in a normative sense is also relevant, but as you point out, we often don't actually know what answer is correct. If your point instead is that probabilities will result in the right answer more often then not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance -- this is simply not practical in many situations precisely because the world is so complex. I guess it boils down to this -- what is the value of being "right" if what is "right" cannot be determined? I think there are decisions where what is right can be determined -- and rationality and the bayesian model works quite well. I think far more decisions (social relationships, politics, economics -- particularly decisions that do not directly affect the decision maker) are too subjective to know what is "right" or accurately model inputs. In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one. I think I'm the only one on LessWrong that finds EY's writing maddening -- mostly the style -- I keep screaming to myself, "get to the point!" -- as noted, perhaps its just me. His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.
    I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to. Yes, this community is generally concerned with methods for, as you say, getting "the right answer more often than not." And, sure, sometimes a marginal increase in my chance of getting the right answer isn't worth the cost of securing that increase -- as you say, sometimes "accurately identifying the proper inputs and valuing them correctly [...] is simply not practical" -- so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get. To say that "rationality falls short" in these cases suggests that it's being compared to something. If you're saying it falls short compared to perfect knowledge, I absolutely agree. If you're saying it falls short compared to something humans have access to, I'm interested in what that something is. I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it's "the best answer" has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C. I have no idea what you mean by the truth being "relative".
    I referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right. If it gives confidence/ego/comfort that you've derived the right answer, being "right" in actuality is not necessary to have those feelings.

    Fair. The use of rationality and the belief in its merits generally biases the decision maker toward forming a belief that rationality will yield a correct answer, even if it does not -- it seems rationality always errs on the side of applying probabilities (and forming a judgment), even if they are flawed (or you don't know whether they are accurate). To say it differently, to the extent a question has no clear answer (for example, because we don't have enough information or it isn't worth the cost), I think we'd be better off withholding judgment altogether than forming a judgment for the sake of having an opinion.

    Rumsfeld had this great quote -- "we don't know what we don't know" -- we also don't know the importance of what we don't know relative to what we do know when forming judgments. From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know. Rationality cannot take into account information that is not known to be relevant -- what is the value of forming a judgment in this case?

    To be clear, I'm not "throwing my hands up" for all of life's questions and saying we don't know anything -- I'm trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means). Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue.
This is because our notions of truth are man-made, even if we account for the possibility that there are certain universal truths (what relevance do those truths hav
    Because it helps us make decisions. Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful 'round here for producing fruitful discussions - there's no point arguing about whether the tree in the forest made a sound if I mean "auditory experience" and you mean "vibrations in the air". This is known as "Rationalist's Taboo", after a game with similar rules, and replacing a word with (your) definition is known as "tabooing" it.
    I actually don't think we're using the word differently -- the question was premised solely on issues where the answer cannot be known after the fact. In that case, our use of "confidence" is the same -- it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not the ultimate "correctness" of the decision (because it cannot be known), rationality is important only if you believe it to be the correct way to make decisions.
    Indeed. And probability is confidence, and Bayesian probability is the correct amount of confidence.
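    The claim that Bayesian probability is "the correct amount of confidence" can be sketched in a few lines of Python: Bayes' rule tells you exactly how much a piece of evidence should move you. The prior and likelihoods below are hypothetical numbers chosen purely for illustration:

```python
# A minimal sketch of Bayesian updating. All numbers are made up
# for illustration; nothing here comes from the thread itself.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start 20% confident; observe evidence that is 3x as likely
# if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.20, likelihood_if_true=0.60, likelihood_if_false=0.20)
print(round(posterior, 3))  # 0.429
```

    The point is that the posterior is forced by the inputs: given that prior and that evidence, 0.429 is the correct amount of confidence, neither more nor less.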
    Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

    I'm not quite sure what "right" means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there's no value in knowing whether A or B is true.

    Yes, pretty much. I wouldn't say "errs", but semantics aside, we're always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem. There are many decisions I'm obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question "is the world A or B?" has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective. But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)

    Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.

    As above, there are many situations where I'm obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.

    I'd say LW is willing to push rationality as the best "theory" in all cases short of perfect knowledge right up until the point that a better one comes along, where "better" and "best" refer to their ability to reliably obtain benefits. That's why I asked.
    How is this different than being "comfortable" on a personal level? If it isn't, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer -- the "truth" is irrelevant. For example, as previously noted in the thread, if I'm super religious, I could use scripture to guide a decision and have the same confidence (in a subjective, personal way). Once the correctness of the belief cannot be determined as right or wrong, the manner in which the belief is created becomes irrelevant, EXCEPT to the extent laws/norms change because other people agree. I've taken the idea of absolute truth and simply converted it to social truth because I think it's a more appropriate term (more below).

    You are suggesting that rationality provides the "best way" to get answers short of perfect knowledge. Reflecting on your request for a comparatively better system, I realized you are framing the issue differently than I am. You are presupposing the world has certainty, and are only concerned with our ability to derive that certainty (or answers). In that model, looking for the "best system" to find answers makes sense. In other words, you assume answers exist, and only the manner in which to derive them is unknown.

    I am proposing that there are issues for which answers do not necessarily exist, or at least do not exist within the world of human comprehension. In those cases, any model by which someone derives an answer is equally ridiculous. That is why I cannot give you a comparison. Again, this is not to throw up my hands; it's a different way of looking at things. Rationality is important, but a smaller part of the bigger picture in my mind.

    Is my characterization of your position fair? If so, what is your basis for your position that all issues have answers?
I am only talking about the relevance of truth, not the absolute truth, because the absolute truth cannot
    Yes. The vial is either poisoned or it isn't, and my task is to decide whether to drink it or not. Do you deny that?

    Yes, I agree. Indeed, looking for systems to find answers that are better than the one I'm using makes sense, even if they aren't best, even if I can't ever know whether they are best or not.

    Sure. But "which vial is poisoned?" isn't one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.

    This is where we disagree. Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way. And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.

    Again, this is where we disagree. The relevance of "Truth" (as you're referring to it... I would say "reality") is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.

    Sure, that's true. But it's far more useful to better entangle our decisions (our "subjective truths," as you put it) with reality ("Truth") before we make those decisions.
    With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that knowledge of which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.

    But as noted above, if we cannot derive the truth, it is just as good as not existing. If the "vial picker" knows the truth beforehand, or is able to derive it, so be it, but immediately before he picks the vial, the Truth, as the vial picker knows it, is of limited value -- he is unsure and everyone around him thinks he's an idiot. After the fact, everyone's opinion will change accordingly with the results.

    By creating your own example, you're presupposing (i) that an answer exists to your question AND (ii) that we can derive it -- we don't have that luxury in real life, and even if we know an "answer" exists, we don't know whether the vial picker can accurately pick the appropriate vial based on the information available. The idea of subjective truth (or subjective reality) doesn't rely solely on the claim that reality doesn't exist; most generally, it is based on the idea that there may be cases where a human cannot derive what is real even where there is some answer. If we cannot derive that reality, the existence of that reality must also be questioned. We of course don't have to worry about these subtleties if the examples we use assume an answer to the issue exists.

    The upshot is that rationality, in my mind, is helpful only to the extent that (i) an answer exists and (ii) it can be derived. If the answers to (i) and (ii) are yes, rationality sounds great. If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we're going about things in the best way.
In such a world, there
    Or at least approximated. Yes. Lovely.

    I would say, rather, that it has no purpose at all in the context of that question. Having a false belief is not a useful purpose. And, as I've said before, I agree that there exist questions without answers, and questions whose answers are necessarily beyond the scope of human knowledge, and I agree that rationality doesn't provide much value in engaging with those questions... though it's no worse than any approach I know of, either.

    As above, I submit that in all cases the approach I describe either works better than (if there are answers, which there often are) or as well as (if not) any other approach I know of. And, as I've said before, if you have a better approach to propose, propose it!

    I don't know that. But I have to make decisions anyway, so I make them using the best approach I know. If you think I should do something different, tell me what you think I should do. OTOH, if all you're saying is that my approach might be wrong, then I agree with you completely, but so what? My choice is still between using the best approach I know of, or using some other approach, and given that choice I should still use the best approach I know of. And so should you.

    For the record, that's also the consensus position here. The interesting question is, given that we don't have 100% certainty, what do I do now?
    Inasmuch as subjectivism is a form of relativism, those comments seem to contradict each other.
    Perspectivism provides that all truth is subjective, but in practice, this characterization has no relevance to the extent there is agreement on any particular truth. For example, "Murder is wrong," even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but agree that it was not clear.
    Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there's a sizable minority here who won't.
    What do you disagree with? That "truth is relative" applies only to moral questions? Or that it applies to more than moral questions? If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read... even EY :)
    My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective". Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise - see e.g. Nazis. EDIT: Regarding EY writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".
    I don't dispute the possibility that your conclusion may be correct; I'm wondering about the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation -- I'm comfortable with that as the reason, but wondering if you are, or if you have a different basis for the position. From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I've chosen to operate as if they are relative because (i) if moral truths exist but I don't know what they are, I'm in the same position as if they didn't exist or were relative, and (ii) moral truths may not exist. This doesn't mean you don't use morality in your life; it's just that you need to have a belief, without substantiation, that the morals you subscribe to conform with universal morals, if they exist. OK, I'll try to search for those EY writings, thanks.
    I, ah ... I'm not seeing anything here. Have you accidentally posted just a space or something?
    Thanks for the clarification.
    Indeed. One of the purposes of this site is to help people become more rational - closer to a mathematically perfect reasoner - in everyday life. In math problems, however - and every real problem can, eventually, be reduced to a math problem - we can always make the right choice (unless we make a mistake with the math, which does happen). Unfortunately for you, most of the basic introductory-level stuff - and much of the really good stuff generally - is by him. So I'm guessing there's a certain selection effect for people who enjoy/tolerate his style of writing. I'm still not sure how truth could be "relative" - could you perhaps expand on what you mean by that? - although obviously it can be obscured by biases and simple lack of data. In addition, some questions may actually have no answer, because people are using different meanings for the same word or the question itself is contradictory (how many sides does a square triangle have?) EDIT: A lot of people here - myself included - practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
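    The estimate-testing practice mentioned in the EDIT above is commonly scored with a calibration metric such as the Brier score. A minimal sketch, using made-up predictions and outcomes:

```python
# Brier score: mean squared error between stated probabilities and
# what actually happened. 0.0 is perfect calibration and resolution;
# always saying 50% earns 0.25. Data below is invented for illustration.

def brier_score(predictions, outcomes):
    """predictions: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

preds = [0.9, 0.7, 0.8, 0.3]   # your stated confidence that each event occurs
actual = [1, 1, 0, 0]          # what actually happened
print(brier_score(preds, actual))
```

    Tracking this score over many predictions reveals whether your "90% confident" really comes true about nine times in ten.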

    Science is not based on faith, nor on anything else. Scientific knowledge is created by conjecture and criticism. See Chapter I of "Realism and the Aim of Science" by Karl Popper.

    I came across a good example of this. I recently graduated from a coding bootcamp and am looking for jobs. I applied to a selective company and was declined. They said, "unfortunately we won't be able to move forward with your candidacy at this time". They didn't say anything about the actual reason why I was rejected.

    (paraphrased conversation with my friend)

    • Me: I hate when people sugarcoat. I wish they just said, "you don't seem as smart as the other candidates".
    • Him: It isn't necessarily true that they don't think you're as smart. M
    ...
    Is there any reason you couldn't email back saying something along the lines of "I'd appreciate your pointing out what specific weaknesses made you rule out my application, so that I can improve to become a stronger candidate for later or for other similar companies, and possibly so that I can send candidates your way that better fit the profile?"
    Adam Zerner:
    I figured that they're really busy and don't have time to address that. Like if they did have time, I figure that they would have addressed it in the rejection email. Plus, I feel pretty confident that it's because they don't think I'm as smart as the other candidates. But you're the second person to recommend this, so perhaps I'm wrong in my assumptions. So I'm going to send them an email doing what you say.

    My favorite part of this post directly after reading was the highlighting of the apparent contradiction between the faithist's pride in their faith and the condemnation in their accusation of faith's use by science.

    But I noticed I didn't feel I totally understood the dynamics in play in such a mind, and decided to think about it over pasta.

    My tentative conclusion:

    This is not, I think, a case of bare-faced irrationality per se, as per "What would you do with immortality" when conjoined with "I have an immortal soul."

    The condemnation in t...

    When I first read this post back in ~2011 or so, I remember remembering a specific scene in a book I had read that talked about this error and even gave it the same name. I intended to find the quote and post it here, but never bothered. Anyway, seeing this post on the front page again prompted me to finally pull out the book and look up the quote (mostly for the purpose of testing my memory of the scene to see if it actually matched what was written).

    So, from Star Wars X-Wing: Isard's Revenge, by Michael A Stackpole (page 149 of the paperback edition):


    ...

    I think one important problem, elided here, is that when problems are highly multidimensional then shades of grey will be harder to distinguish. At the extremes, yes, we can say that Gandhi and Stalin are imperfect in quantitatively different amounts. But most of the important life decisions we make can be evaluated on so many different dimensions of value that discriminating and integrating across them feels intractable. Even 3 or 4 dimensions makes the problem so effortful (and perhaps impossible if the dimensions are not commensurable) that falling back to intuition becomes the only pragmatic solution.

    A related pattern I noticed recently:

    • Alice asks, "What effect does X have on Y?"
    • Bob, an expert in Y, replies, "There are many variables that impact Y, and you can't reduce it to simply X."

    Alice asked for a one-variable model with limited but positive predictive power, and Bob replied with a zero-variable model with no predictive power whatsoever.
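    Alice's point can be made concrete with a toy comparison. The data below are invented for illustration: even a crude one-variable model of Y from X predicts better than the zero-variable model ("Y just varies") that Bob's reply implicitly offers:

```python
# Hypothetical data, roughly y = 2x plus noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

# Zero-variable model: always predict the mean of Y.
mean_y = sum(ys) / len(ys)
err_zero = sum((y - mean_y) ** 2 for y in ys)

# One-variable model: least-squares line through the origin,
# slope = sum(x*y) / sum(x^2).
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
err_one = sum((y - slope * x) ** 2 for x, y in zip(xs, ys))

print(err_one < err_zero)  # True: the one-variable model has far lower error
```

    "There are many variables" is true, but it does not make the one-variable model worthless; it only bounds how good that model can be.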

    Necro but maybe I can add something to the debate....

    A problem I see is there are common cases where it is rational to be irrational, for example if being rational causes you emotional distress due to circumstances beyond your control.

    And this is a big problem if one's will to be "rational" is at root based on an emotional will to be "less wrong" for the purpose of improving internal feelings of one's own value.

    Because if that is the naked honest goal, then that rationalism is Hedonism by yet another name.

    But realizing that might be destabilizing to the ra...

    “I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.”

    Another problem with this is that "economic theory" is not monolithic. There are different schools of thought within economics, and applying economic theory No. 1 from X school might imply completely different things than applying it from Y school. Economics is a fractured, competitive field of concepts, to say the least. Go listen to an argument between Neoclassical economists and Post-Keynesian economists and see what they agree on.