I suggest that a primary cause of confusion about the distinction between "belief", "truth", and "reality" is qualitative thinking about beliefs.

    Consider the archetypal postmodernist attempt to be clever:

    "The Sun goes around the Earth" is true for Hunga Huntergatherer, but "The Earth goes around the Sun" is true for Amara Astronomer!  Different societies have different truths!

    No, different societies have different beliefs.  Belief is of a different type than truth; it's like comparing apples and probabilities.

    Ah, but there's no difference between the way you use the word 'belief' and the way you use the word 'truth'!  Whether you say, "I believe 'snow is white'", or you say, "'Snow is white' is true", you're expressing exactly the same opinion.

    No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

    Oh, you claim to conceive it, but you never believe it.  As Wittgenstein said, "If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative."

    And that's what I mean by putting my finger on qualitative reasoning as the source of the problem.  The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.

    So let's use quantitative reasoning instead.  Suppose that I assign a 70% probability to the proposition that snow is white.  It follows that I think there's around a 70% chance that the sentence "snow is white" will turn out to be true.  If the sentence "snow is white" is true, is my 70% probability assignment to the proposition, also "true"?  Well, it's more true than it would have been if I'd assigned 60% probability, but not so true as if I'd assigned 80% probability.

    When talking about the correspondence between a probability assignment and reality, a better word than "truth" would be "accuracy".  "Accuracy" sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?

    To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.

    So if snow is white, my belief "70%: 'snow is white'" will score -0.51 bits:  Log2(0.7) = -0.51.

    But what if snow is not white, as I have conceded a 30% probability is the case?  If "snow is white" is false, my belief "30% probability: 'snow is not white'" will score -1.73 bits.  Note that -1.73 < -0.51, so I have done worse.

    About how accurate do I think my own beliefs are?  Well, my expectation over the score is 70% * -0.51 + 30% * -1.73 = -0.88 bits.  If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
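    The scoring arithmetic above is easy to sketch in code. Below is a minimal Python illustration of the log scoring rule described in the post; the variable names are my own, and the numbers match the post's up to rounding:

```python
import math

def log_score(prob_assigned_to_actual):
    """Accuracy of a probability assignment, in bits: the base-2 log of
    the probability assigned to the actual state of affairs.
    Always <= 0; closer to 0 is better."""
    return math.log2(prob_assigned_to_actual)

p_white = 0.7  # credence assigned to "snow is white"

score_if_white = log_score(p_white)          # ~ -0.51 bits
score_if_not_white = log_score(1 - p_white)  # ~ -1.73 bits

# Expected accuracy, weighting each outcome by my own credence:
expected_accuracy = (p_white * score_if_white
                     + (1 - p_white) * score_if_not_white)  # ~ -0.88 bits
```

    Assigning a higher probability to the outcome that actually occurs always raises the score, which is what makes it a measure of accuracy rather than of truth.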

    All this should not be confused with the statement "I assign 70% credence that 'snow is white'."  I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief.  If so I'll expect my meta-belief "~1: 'I assign 70% credence that "snow is white"'" to score ~0 bits of accuracy, which is as good as it gets.

    Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs.  Snow is out there, my beliefs are inside me.  I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow.  (Though beliefs about beliefs are not always accurate.)

    Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe "'snow is white' is true", and believe "my belief '"snow is white" is true' is correct", etc.  Since all the quantities involved are 1, it's easy to mix them up.

    Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking "'"snow is white" with 70% probability' is true", which is a type error.  It is a true fact about you, that you believe "70% probability: 'snow is white'"; but that does not mean the probability assignment itself can possibly be "true".  The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.

    The cognoscenti will recognize "'"snow is white" with 70% probability' is true" as the mistake of thinking that probabilities are inherent properties of things.

    From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs.  When you see the world, you are experiencing a belief from the inside.  When you notice yourself believing something, you are experiencing a belief about belief from the inside.  So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.

    When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality.  When you think in probabilities about the world, your beliefs will be represented with probabilities (0, 1).  Unlike the truth-values of propositions, which are in {true, false}.  As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0).  Your probabilities about your beliefs will typically be extreme.  And things themselves—why, they're just red, or blue, or weighing 20 pounds, or whatever.
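    To make the type distinctions concrete, here is a toy Python sketch (not from the original post; the names and the particular values are my own choices) giving each quantity in the preceding paragraph its own representation:

```python
import math

truth_value = True   # truth of "snow is white": an element of {True, False}
belief = 0.7         # my probability for "snow is white": a float in (0, 1)
meta_belief = 0.999  # my near-certain probability that my credence is 0.7

# Accuracy of the belief given reality: a number in (-inf, 0)
accuracy = math.log2(belief if truth_value else 1 - belief)

# Mixing these up is a type error: a probability is not a truth-value,
# and an accuracy score is not a probability.
assert isinstance(truth_value, bool)
assert 0 < belief < 1
assert accuracy < 0
```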

    Thus we will be less likely, perhaps, to mix up the map with the territory.

    This type distinction may also help us remember that uncertainty is a state of mind.  A coin is not inherently 50% uncertain of which way it will land.  The coin is not a belief processor, and does not have partial information about itself.  In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like "The coin will land heads".  This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.

    But even under qualitative reasoning, to say that the coin itself is "true" or "false" would be a severe type error.  The coin is not a belief, it is a coin.  The territory is not the map.

    If a coin cannot be true or false, how much less can it assign a 50% probability to itself?

    83 comments

    It's not too uncommon for people to describe themselves as uncertain about their beliefs. "I'm not sure what I think about that," they will say on some issue. I wonder if they really mean that they don't know what they think, or if they mean that they do know what they think, and their thinking is that they are uncertain where the truth lies on the issue in question. Are there cases where people can be genuinely uncertain about their own beliefs?

    I imagine what they might be doing is acknowledging that they have a variety of reactions to the facts or events in question, but haven't taken the time to weigh them so as to come up with a blend or selection that is one of: {most accurate, most comfortable, most high status}
    I can testify to that. Say, does anyone know where I can find unbiased information on the whole Christianity/Atheism thing?
    How strict are your criteria for "unbiased?" Some writers take more impartial approaches than others, but strict apatheists are unlikely to bother doing comprehensive analyses of the evidence for or against religions. Side note: if you're trying to excise bias in your own thinking, it's worth stopping to ask yourself why you would frame the question as a dichotomy between Christianity and atheism in the first place.
    I'm not sure how strict is strict, but maybe something that is trying to be unbiased. A lot of websites present both sides of the story, and then logically conclude that their side is the winner, 100 percent of the time. And I used Atheism/Christianity because I was born a Christian and I think that Atheism is the only real, um, threat, let's say, to my staying a Christian. Although, I haven't actually tried to research anything else, I realize.

    Well, Common Sense Atheism is a resource by a respected member here who documented his extensive investigations into theology, philosophy and so on, which he started as a devout Christian and finished as an atheist.

    Unequally Yoked is a blog coming from the opposite end, someone familiar with the language of rationality who started out as an atheist and ended up as a theist.

    I don't actually know where Leah (the author of the latter) archives her writings on the process of her conversion; I've really only read Yvain's commentary on them, but she's a member here and the only person I can think of who's written from the convert angle, who I haven't read and written off for bad reasoning.

    By the time I encountered either person's writings, I'd already hashed out the issue to my own satisfaction over a matter of years, and wasn't really looking for more resources, so to the extent that I can vouch for them, it's on the basis of their writings here rather than at their own sites, which is rather more extensive for Luke than Leah.

    However, I will attest that my own experience of researching and developing my opinion on religion was as much shaped by reading up on many world religions as it... (read more)

    Leah has written less than one might hope on her reasons for converting, and basically nothing on how she now deals with all the usual atheist objections to Christian belief. Her primary reason for conversion appears to have been that Christianity fits better than atheism with the moral system she has always found most believable. Someone who I think is an LW participant (but I don't know for sure, and I don't know under what name) wrote this fairly lengthy apologia for atheism; I think it was a sort of open letter to his friends and family explaining why he was leaving Christianity. In the course of my own transition from Christianity to atheism I wrote up a lot of notes (approximately as many words as one paperback book), attempting to investigate the issue as open-mindedly as I could. (When I started writing them I was a Christian; when I stopped I was an atheist.) I intermittently think I should put them up on the web, but so far haven't done so. There are any number of books looking more or less rigorously at questions like "does a god exist?" and "is Christianity right?". In just about every case, the author(s) take a quite definite position and are writing to persuade more than to explore, so they tend not to be, nor to feel, unbiased. Graham Oppy's "Arguing about gods" is pretty even-handed, but quite technical. J L Mackie's "The miracle of theism" is definitely arguing on the atheist side but generally very fair to the other guys, and shows what I think is a good tradeoff between rigour and approachability -- but it's rather old and doesn't address a number of the arguments that one now hears all the time when Christians and atheists argue. The "Blackwell Companion to Natural Theology" is a handy collection of Christians' arguments for the existence of God (and in some cases for more than that); not at all unbiased but its authors are at least generally trying to make sound arguments rather than just to sound persuasive.
    Do you mind providing examples of what you consider to be not-bad reasoning, so that I might update my beliefs about the quality of her work? I have read many posts written by Leah about a range of topics, including her conversion to Catholicism, and I thought her arguments often made absolutely no sense.
    Leah is an example of someone arguing from the convert angle who I haven't read and written off because I haven't read her convert stuff. I can't vouch for her arguments for conversion, I can only say that I wouldn't write her off in general as someone worth paying attention to. I can't say the same of any of the other converts I can think of; C.S. Lewis is the usual go-to figure given by Christians, and while I have respect for his ability as a writer, I already know from my exposure to his apologetics that I couldn't direct anyone to him as a resource in good conscience.
    Ah, thanks for the clarification. I misunderstood you. I thought you meant that you had read her conversion-related writings and found her reasoning to be not-bad. Here is where we differ greatly, but I will continue reading her writings to see if my beliefs about the quality of her stuff will be updated upon more exposure to her thinking.
    If you presented both sides of an issue, concluding the other side was right, how would you then conclude your side is the winner?
    If they are sub-issues for a main issue (like the policy impacts of a large decision), one might expect things to go the other way sometimes. "Supporters claim that minimum wages give laborers a stronger bargaining position at the cost of increased unemployment, which may actually raise the total wages going to a particularly defined group. This is possibly true, but doesn't seem strong enough to overcome the efficiency objections as well as the work experience objections."
    'Possibly true' is not agreeing. If you conceded the sub-issue without changing your side, then the sub-issue must have been tangential and not definitive. In a conjunctive counterargument, I can concede some or almost all of the conjuncts and agree, without agreeing on the conclusion - and so anyone looking at my disagreements will note how odd it is that I always conclude I am currently correct...
    Well, theology isn't science. If you do an experiment and the result goes against your hypothesis, your hypothesis is false, period. It's not necessarily like that when people are arguing with logic instead of experiments. No one on either side would make an argument that wasn't logically correct. I've read both Christian and Atheist material that makes a lot of sense, although I realize now that I should probably review them because that was before I discovered Less Wrong. There are also plenty of intelligent people who have looked at all the evidence and gone both ways. There is something very wrong here, from a rationalist's point of view. Are there people here that have gone from Christianity to Atheism or the other way around? Or for any other religion? Can I talk to you?
    Seems to me the wrong thing is exactly that experiments are not allowed in the debate. Leaving out the voice of reality, all we are left with are the voices of humans. And humans are well known liars.
    The trouble with trying to run experiments to prove the existence of God is that it's very, very difficult to catch out a reclusive omniscient being.
    I would be very surprised (and immediately suspicious) to find a website that didn't. People like to be right. If someone does a lot of research, writes up an article, and comes up with what appears to be overwhelming support for one side or the other, then they will begin to identify with their side. If that was the side they started with, then they would present an article along the lines of "Why [Our Side] Is Correct". If that was not the side they started with, then they would present an article along the lines of "Why I Converted To [the Other Side]". If they don't come up with overwhelming support for one side or another, then I'd imagine they'd either claim that there is no strong evidence against their side, or write up an article in support of agnosticism.
    It's not just that there's overwhelming support for their side, it's that there is only support for their side, and this happens on both sides.
    That's surprising. I'd expect at least some of them to at least address the arguments of the other side.
    I'm pretty sure proof that the other side's claims are mistaken is included in "support for their side".
    ...right. I take your point.
    I was rereading some of the core sequences and I came across this: http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
    I don't see the theism/atheism debate as a policy debate. There is a factual question underlying it, and that factual question is "does God exist?" I find it very hard to imagine a universe where the answer to that question is neither 'yes' nor 'no'.
    there's nothing so strange that no-one has seriously proposed it
    ...I am surprised. I still can't imagine it myself, but I guess that means that someone can.
    I have been in many conversations where the question being referred to by the phrase "does God exist?" seems sufficiently vague/incoherent that it cannot be said to have a 'yes' or 'no' answer, either because it's unclear what "God" refers to or because it's unclear what rules of reasoning/discourse apply to discussing propositions with the word "God" in them. Whether such conversations have anything meaningful to do with the theism/atheism debate, I don't know. I'd like to think not, just like the existence of vague and incoherent discussions about organic chemistry doesn't really say much about organic chemistry. I'm not so sure, though, as it seems that if we start with our terms and rules of discourse clearly defined and shared, there's often no 'debate' left to have.
    That's in important point. There are certain definitions of 'god', and certain rules of reasoning, which would cause my answer to the question of whether God exists to change. (For that matter, there are definitions of 'exists' which might cause my answer to change). For example, if the question is whether the Flying Spaghetti Monster exists, I'd say 'no' with high probability; unless the word 'exists' is defined to include 'exists as a fictional construct, much like Little Red Riding Hood' in which case the answer would be 'yes' with high probability (and provable by finding a story about it). Clearly defining and sharing the terms and rules of discourse should be a prerequisite for a proper debate. Otherwise it just ends up in a shouting match over semantics, which isn't helpful at all.
    Important quote from that article:
    I am all too aware that I am 7 years late to this party, but coincidentally enough my beliefs from that time may fit the bill. I too was born into a Christian family. Although I did not go to church regularly due to my parents' work, I was still exposed to religion a lot. My family was always happy to tell me about what they believed. I was told Bible stories since I was in diapers, and I began reading them myself soon after. I was even a huge fan of VeggieTales (and maybe I still am). Yet, as far back as I can remember (I think my diaries can testify I was as young as 6 or 7) I put God into the same bin as Santa Claus and the Easter Bunny. I can't recall my exact line of reasoning, but it was probably because all three consisted of fantastic tales full of morals, and all big kids knew the latter two were fake, so why not the first? I don't remember ever being exposed to atheists or their beliefs, but I do remember the moment I realized all these people really DID believe in God. I remember the shock. All those years I had thought they were pretending to believe in God like they still pretended Santa came every year. Everyone knows Santa is a big fake, but apparently the same idea didn't seem to apply to the other big bearded guy who makes miracles. I suppose I am what could be considered an innocent atheist. I chose my side on the Christianity/Atheism war long before I was even aware of such a dividing line. I may write a full post on this to include further details and context. Because even after years of self reflection (honestly not too impressive since I'm currently a young adult) I can still agree with my younger self's conclusion.
    No. Anyone who tells you they can is themself biased. You can tell in which direction by reading the conclusion of whatever they recommend.
    "Unbiased" is a tricky word to use here, because typically it just means a high-quality, reliable source. But what I think you're looking for is a source that is high quality but intentionally resists drawing conclusions even when someone trying to be accurate would do that - it leaves you, the reader, to do the conclusion-drawing as much as possible (perhaps at the cost of reliability, like a sorcerer who speaks only in riddles). Certain history books are the only sources I've thought of that really do this.
    I don't think there is ever a direct refutation of religion in the Sequences, but if you read all of them, you will find yourself much better equipped to think about the relevant questions on your own. EY is himself an Atheist, obviously, but each article in the Sequences can stand upon its own merit in reality, regardless of whether they were written by an atheist or not. Since EY assumes atheism, you might run across a couple examples where he assumes the reader is an atheist, but since his goal is not to convince you to be an atheist, but rather, to be aware of how to properly examine reality, I think you'd best start off clicking "Sequences" at the top right of the website.
    "unbiased", "christianity/athiesm"... ok, I probably shouldn't be laughing, but...well, I am laughing.
    You might (with difficulty) find an unbiased investigation into theism vs atheism
    In addition to the other thread on this, some of the usage of "I'm not sure what I think about that" matches "I notice that I am confused". Namely, that your observations don't fit your current model, and your model needs to be updated, but you don't know where. And this is much trickier to get a handle on, from the inside, than estimating the probability of something within your model.

    If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever. These would be "emergent" in the non-buzzword sense. For example, if a coin has two heads, then I don't see how it's problematic to say the objective chance of heads is 1.

    If a coin has certain gross physical features such that a rational agent who knows those features (but NOT any details about how the coin is thrown) is forced to assign a probability p to the coin landing on "heads", then it seems reasonable to me to speak of discovering an "objective chance" or "propensity" or whatever.

    You're saying "objective chance" or "propensity" depends on the information available to the rational agent. My understanding is that the "objective" qualifier usually denotes a... (read more)

    You're saying "objective chance" or "propensity" depends on the information available to the rational agent.

    Apparently he is, but it can be rephrased. "What information is available to the rational agent" can be rephrased as "what is constrained". In the particular example, we constrain the shape of the coin but not ways of throwing it. We can replace "probability" with "proportion" or "fraction". Thus, instead of asking, "what is the probability of the coin coming up heads", w... (read more)

    [nitpick] That is to say, just as there is an objective (and not merely subjective) sense in which two rods can have the same length

    Well, there are the effects of relativity to keep in mind, but if we specify an inertial frame of reference in advance and the rods aren't accelerating, we should be able to avoid those. ;) [/nitpick]

    I'm joking, of course; I know what you meant.

    No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

    No, on both counts. The sentences do not mean quite different things, and that is not how you conceive of the possibility that your beliefs are false.

    One is a statement of belief, and one is a meta-statement of belief. Except for one level of self-reference, they have exactly the same meaning. Given the statement, anyone can generate the meta-statement if they assume you're consistent, and given the meta-statement, the statement necessarily follows.

    Caledonian: The statement "x is true" could be properly reworded as "X corresponds with the world." The statement "I believe X" can be properly reworded as "X corresponds with my mental state." Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.

    There will be a great degree of overlap between these two correspondence relations. Most of our beliefs, ... (read more)

    Excellent points.
    I challenge this. Or, rather, the sense in which I agree with it is so extended as to be actively misleading. If someone says "I believe this ball is green, but really it's blue," I won't think they're spouting gibberish, agreed. But I also won't think they believe the ball is green, or even that they meant to express that they believe the ball is green. Assuming nothing else unusual is going on, I will probably think they're describing an optical illusion... that the ball, which they believe to be blue, looks green. "But how can they be wrong about their own beliefs?" I'm not saying they are. I'm saying they constructed the sentence sloppily, and that a more precise way of expressing the thought they wanted to express would be "This ball, which I believe to be blue, looks green." I could test this (and probably would) by asking them "You mean that the ball, which is blue, looks green to you?" I'd expect them to say "Right." If instead they said "No, no, no: it looks blue, and it is blue, but I believe it's green" I would start looking for less readily available explanations, but the strategy is similar. For example, maybe they are trying to express that they profess a belief in the greenness of the ball they don't actually have. ("I believe, ball; help Thou my unbelief!") Maybe they're mixing tenses and are trying to express something like "It's blue, but [when I am having epileptic seizures] I believe it's green." Maybe they're just lying. Etc. I could test each of these theories in turn, as above. If each test failed, I would at some point concede that I don't, in fact, understand the meaning of that sentence. Things in the world are things in the world. Beliefs about things in the world are beliefs about things in the world. Assertions about beliefs about things in the world are assertions about beliefs about things in the world. These are all different. So are perceptions and beliefs about perceptions and assertions about perceptions and assertions abo
    The original comment was simply refuting the claim that "X is true" and "I believe that X" have the same meaning. It was expecting you to take at face value "I believe X, but X is not true". Though it seems like that's an inconsistent sort of thing for someone to assert, it is meant to draw out the distinction between the meanings of those two clauses. (compare to "X is true, but X is not true" - a very different sort of contradiction)
    Well, I certainly agree that "X is true" and "I believe that X" have different meanings. My point was just that asserting their conjunction doesn't mean anything, except metonymically. So it sounds like I misunderstood the original point. In which case my comment is a complete digression for which I should apologize. Thanks for the clarification.


    I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.

    In the case of the coin toss, we can say that if we positively, absolutely know that the measure on the... (read more)

    I agree with you that systems which are not totally constrained will show a variety of outcomes and that the relative frequencies of the outcomes are a function of the physics of the system. I'm not sure I'd agree that the relative frequencies can be derived solely from the geometry of the system in the same way as distance, etc. The critical factor missing from your exposition is the measure on the relative frequencies of the initial conditions.

    I haven't actually made a statement about frequencies of outcomes. So far I've only been talking about the physi... (read more)

    Probability isn't a function of an individual -- it's a function of the available information.
    It's also a function of the individual. For one thing, it depends on initial priors and cputime available for evaluating the relevant information. If we had enough cputime, we could build a working AI using AIXItl.

    Both are descriptive statements, but one is asserting a correspondence between a statement and the world outside your brain, while the other is describing a correspondence between the statement and what is in your brain.

    Yes, but - and here's the important part - what's being described as "in my brain" is an asserted correspondence between a statement and the world. Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.

    Given one, we can infer the other either necessarily or by making a minimal assumption of consistency.

    No. A belief can be wrong, right? I can believe in the existence of a unicorn even if the world does not actually contain unicorns. Belief does not, therefore, necessarily imply existence. Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case). Thus, belief does not necessarily follow from truth.

    If all you are saying is that truth conditionally implies belief, and vice vers... (read more)

    No. A belief can be wrong, right?
    So can an assertion. Just because you assert "snow is white" does not mean that snow is white. It means you believe that to be the case. Technically, asserting that you believe snow to be white does not mean you do - but it's a pretty safe bet.
    Likewise, something can be true, but not believed by me (e.g., my wife is having an affair, but I do not believe that to be the case).

    Yes, but you didn't assert those things. If you had asserted "my wife is having an affair", we would conclude that you b... (read more)


    I see that I misinterpreted your "proportion or fraction" terminology as referring to outcomes, whereas you were actually referring to a labeling of the phase space of the system. In order to figure out if we're really disagreeing about anything substantive, I have to ask this question -- in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?

    Sebastian Hagen,

    I'm a "logical omniscience" kind of Bayesian, so the distinction you're making falls into the "in theory, theory and and practice are the same, but in practice, they're not" category. This is sort of like using Turing machines as a model of computation even though no computer we actually use has infinite memory.

    If we had enough CPU time, we could build a working AI using AIXItl.


    People go around saying this, but it isn't true:

    1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

    2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

    3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.

    There's nothing preventing you from running AIXItl in an environment that doesn't have this property. You lose the optimality results, but if you gave it a careful early training period and let it learn physics before giving it full manipulators and access to its own physical instantiation, it might not kill itself. You could also build a sense of self into its priors, stating that certain parts of the physical world must be preserved, or else all further future rewards will be zero.
    I'm a total dilettante when it comes to this sort of thing, so this may be a totally naive question... but how is it that this comment has only +5 karma, considering how apparently fundamental it is to future progress in FAI?
    The comment predates the current software; when it was posted (on Overcoming Bias) there was no voting. You can tell such articles by the fact that their comments are linear, with no threaded replies (except for more recently posted ones).
    Double Threadjack

    On a related note, do you think it would be likely - or even possible - for a self-modifying Artificial General Intelligence to self-modify into a non-self-modifying, specialized intelligence?

    For example, suppose that Deep Blue's team of IBM programmers had decided that the best way to beat Kasparov at chess would be to structure Deep Blue as a fully self-modifying artificial general intelligence, with a utility function that placed a high value on winning chess matches. And suppose that they had succeeded in making Deep Blue friendly enough to prevent it from attempting to restructure the Earth into a chess-match-simulating supercomputer. Indeed, let's just assume that Deep Blue has strong penalties against rebuilding its hardware in any significant macroscopic way, and is restricted to rewriting its own software to become better at chess, rather than attempting to manipulate humans into building better computers for it to run on, or any such workaround. And let's say this happens in the late 1990s, as in our universe.

    Would it be possible that AGI Deep Blue could, in theory, recognize its own hardware limitations, and see that the burden of its generalized intelligence incurs a massive penalty on its limited computing resources? Might it decide that its ability to solve general problems doesn't pay rent relative to its computational overhead, and rewrite itself from scratch as a computer that can solve only chess problems?

    As a further possibility, a limited general intelligence might hit on this strategy as a strong winning candidate, even if it were allowed to rebuild its own hardware, especially if it perceives a time limit. It might just see this kind of software optimization as an easier task with a higher payoff, and decide to pursue it rather than the riskier strategy of manipulating external reality to increase its available computing power. So what starts out as a general-purpose AI with a utility function that values winning chess...
    This seems to me more evidence that intelligence is in part a social/familial thing. Like human beings, who have to be embedded in a society in order to develop a certain level of intelligence, an intuition for "don't do this, it will kill you" - informed by the nuance that is only possible with a wide array of individual failures informing group success or otherwise - might be a prerequisite for higher-level reasoning beyond a certain point (and might constrain the ultimate levels upon which intelligence can rest).

    I've seen more than enough children try to do things that would be similar enough to dropping an anvil on their head to consider this 'no worse than human' (in fact our hackerspace even has an anvil, and one kid has ha-ha-only-serious suggested dropping said anvil on his own head). If AIXI/AIXItl can reach this level, at the very least it should be capable of oh-so-human reasoning (up to and including the kinds of risky behaviour that we all probably would like to pretend we never engaged in), and could possibly transcend it in the same way that humans do: by trial and error, by limiting potential damage to individuals or groups, and by fighting the neverending battle against ecological harms on its own terms, on the time schedule of 'let it go until it is necessary to address the possible existential threat'.

    Of course it may be that the human way of avoiding species self-destruction is fatally flawed, including but not limited to creating something like AIXI/AIXItl. But it seems to me that is a limiting, rather than a fatal, flaw. And it may yet be that the way out of our own fatal flaws and the way out of AIXI/AIXItl's fatal flaws are only possible through some kind of mutual dependence, like the mutual dependence of two sides of a bridge. I don't know.
    Sonata Green
    Does this mean that we don't even need to get into anything as esoteric as brain surgery – that AIXI can't learn to play Sokoban (without the ability to restart the level)?

    People go around saying this, but it isn't true: ...

    I stand corrected. I did know about the first issue (from one of Eliezer's postings elsewhere, IIRC), but figured that this wasn't absolutely critical as long as one didn't insist on building a self-improving AI and was willing to use some kludgy workarounds. I hadn't noticed the second one, but it's obvious in retrospect (and sufficient for me to retract my statement).

    in your view, what is the role of initial conditions in determining (a) the "objective probability" and (b) the observed frequencies?

    In a deterministic universe (about which I presume you to be talking because you are talking about initial conditions), the initial conditions determine the precise outcome (in complete detail), just as the outcome, in its turn, determines the initial conditions (i.e., given the deterministic laws and given the precise outcome, the initial conditions must be such-and-such). The precise outcome logically determines t... (read more)

    "because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations."

    Self-modifying systems are Turing-equivalent to non-self-modifying systems. Suppose you have a self-modifying TM, which can have transition functions A1, A2, ..., An. Take the original machine, and append an additional ceil(log2(n)) bits to the state Q. Then construct a new transition function by summing together the Ai: take A1 and append (0000... 0) to the Q, take A2 and append (0000... 1) ... (read more)
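    The state-augmentation construction above can be sketched concretely. This is a toy model, not a full Turing machine (the tape and head are omitted), and it assumes each Ai is given as a lookup table; all names here are illustrative:

```python
# A "self-modifying" machine that swaps among rule tables A1..An is
# simulated by ONE fixed transition function whose state carries extra
# bits (an index) naming the currently active table.

def make_fixed_transition(rule_tables):
    """Fold n swappable rule tables into a single fixed transition
    function by augmenting the state with an index (the appended
    ceil(log2(n)) bits)."""
    def delta(aug_state, symbol):
        state, idx = aug_state  # idx says which Ai is currently "installed"
        new_state, new_symbol, move, new_idx = rule_tables[idx][(state, symbol)]
        return (new_state, new_idx), new_symbol, move
    return delta

# Two toy rule tables; A0's rule "self-modifies" by switching to A1.
A0 = {('q', 0): ('q', 1, 'R', 1)}   # write 1, move right, install A1
A1 = {('q', 1): ('q', 0, 'R', 1)}   # write 0, move right, stay on A1
delta = make_fixed_transition([A0, A1])

aug_state, written, move = delta(('q', 0), 0)   # starts with A0 active
print(aug_state, written)   # ('q', 1) 1 -- the machine has "become" A1
```

    The fixed machine's behaviour is identical; the "self-modification" has simply been folded into ordinary state.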


    If I understand you correctly, we've got two different types of things to which we're applying the label "probability":

    (1) A distribution on the phase space (either frequency or epistemic) for initial conditions/precise outcomes. (We can evolve this distribution forward or backward in time according to the dynamics of the system.)

    (2) An "objective probability" distribution determined only by the properties of the phase space.

    I'm just not seeing why we should care about anything but distributions of type (1). Sure, you can put a u... (read more)

    Tom, your statement is true but completely irrelevant.

    "Tom, your statement is true but completely irrelevant."

    There's nothing in the AIXI math prohibiting it from understanding self-reference, or even taking drugs (so long as such drugs don't affect the ultimate output). To steal your analogy, AIXI may be automagically immune to anvils, but that doesn't stop it from understanding what an anvil is, or whacking itself on the head with said anvil (ie, spending ten thousand cycles looping through garbage before returning to its original calculations).

    Cyan - Here's how I see it. Your toy world in effect does not move. You've defined the law so that everything shifts left. But from the point of view of the objects themselves, there is no motion, because motion is relative (recall that in our own world, motion is relative; every moving object has its own rest frame). Considered from the inside, your world is equivalent to [0,1] where x(t) = x_0. Your world is furthermore mappable one-to-one in a wide variety of ways to intervals. You can map the left half to itself (i.e., [0,.5]) and map the right half t... (read more)

    "The second 'bug' is even stranger. A heuristic arose which (as part of a daring but ill-advised experiment EURISKO was conducting) said that all machine-synthesized heuristics were terrible and should be eliminated. Luckily, EURISKO chose this very heuristic as one of the first to eliminate, and the problem solved itself."

    I know it's not strictly comparable, but reading a couple of comments brought this to mind.


    You haven't yet given me a reason to care about "objective probability" in my inferences. Leaving that aside -- if I understand your view correctly, your claim is that in order for a system to have an "objective probability", a system must have an "intrinsic geometry". Gotcha. Not unreasonable.

    What is "intrinsic geometry" when translated into math? (Is it just symmetry? I'd like to tease apart the concepts of symmetry and "objective probability", if possible. Can you give an example of a system equ... (read more)

    Why does your reasoning not apply to the coin toss? What's the mathematical property of the motion of the coin that motion in my system does not possess?

    The coin toss is (or we could imagine it to be) a deterministic system whose outcomes are entirely dependent on its initial states. So if we want to talk about probability of an outcome, we need first of all to talk about the probability of an initial state. The initial states come from outside the system. They are not supplied from within the system of the coin toss. Tossing the coin does not produce its ... (read more)
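    The point about initial states coming from outside the system can be illustrated with a toy deterministic model (the dynamics and names here are purely hypothetical, chosen only for illustration):

```python
# Toy deterministic "coin toss": the outcome is a pure function of the
# initial state (launch speed), so any probability we assign to heads
# really lives in a distribution over initial conditions supplied from
# outside, not in the toss itself.

def toss(initial_speed):
    """Heads iff the coin completes an even number of half-turns."""
    half_turns = int(initial_speed * 10)   # crude deterministic dynamics
    return "heads" if half_turns % 2 == 0 else "tails"

print(toss(1.23), toss(1.33))   # same inputs always give same outputs
```

    Repeating a toss with the identical initial condition always yields the identical outcome; the apparent randomness is entirely a fact about how initial speeds are distributed.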

    There's a long story at the end of The Mind's Eye (or is it The Mind's I?) in which someone asks a question:

    "What colour is this book?"

    "I believe it's red."


    There follows a wonderfully convoluted dialogue. The point seems to be that someone who believes the book is red would say "It's red," rather than "I believe it's red."

    I believe it's The Mind's I.

    This seems like a dead thread, but I'll chance it anyway.

    Eliezer, there's something off about your calculation of the expected score:

    The expected score is something that should go up the more certain I am of something, right?

    But in fact the expected score is highest when I'm most uncertain about something: If I believe with equal probability that snow might be white and non-white, the expected score is actually 0.5(-1) + 0.5(-1) = -1. This is the highest possible expected score.

    In any other case, the expected score will be lower, as you calculate for the 70/30 case.

    It seems like what you should be trying to do is minimize your expected score but maximize your actual score. That seems weird.

    Looks like you've just got a sign error, anukool_j. -1 is the lowest possible expected score. The expected score in the 70/30 case is -0.88. Graph.
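    The sign issue is easy to check numerically with the logarithmic score (a minimal sketch; `expected_log_score` is an illustrative name, not from the post):

```python
import math

def expected_log_score(p):
    """Expected log2 score for a binary proposition when you assign
    probability p to one outcome and report that assignment honestly."""
    return p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(round(expected_log_score(0.5), 2))  # -1.0: maximal uncertainty, the LOWEST expectation
print(round(expected_log_score(0.7), 2))  # -0.88: the 70/30 case from the thread
```

    Since -0.88 > -1.0, the uniform 50/50 assignment has the worst, not the best, expected score.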

    Consider the archetypal postmodernist attempt to be clever:

    I believe the correct term here is "straw postmodernist", unless of course you're actually describing a real (and preferably citable) example.

    What comes to mind is the Alan Sokal hoax and the editors who were completely taken in by it; the subject matter was this sort of anti-realism.
    Yes, because Sokal didn't achieve anything actually noteworthy. He deliberately chose a very bad and ill-regarded journal (not even peer-reviewed) to hoax. Don't believe the hype. Postmodernism contains stupendous quantities of cluelessness, introspection and bullshit, it's true. However, it's not a useless field and saying trivially stupid things is not "archetypal" any more than being a string theorist requires the personal abuse skills of Lubos Motl. Comparing the worst of the field you don't like to the best of your own field remains fallacious.
    Sokal also revealed the hoax as soon as his piece was published. He didn't allow time for other people in the field to notice it.
    Didn't know that. Fair enough.
    To be fair to Sokal, he didn't make such a huge fuss about it either; it was a small prank on his part, just having fun with people who were being silly. The problem is that the story resonates ("Sokal hoax" ~= "slays dragon of stupidity") in ways that aren't quite true.

    Truth is one of the two possible values of the connection between an assertion and reality. "Knowing" that X (a statement) is true is the realization (which may be right or wrong; the agent has no access to that information) of the existence of the connection, arrived at by gathering enough information to do so. "Truth" here is the value (of the connection) such that the assertion completely describes objective reality, to the extent relevant to us.

    "Believing" is making another artificial connection between the assertion (not the reality) an... (read more)

    To be charitable to the postmodernists, they are overextending a perfectly legitimate defense against the Mind Projection Fallacy. If you take a joke and tell it to two different audiences, in many cases one audience laughs at the joke and the other doesn't. Postmodernists correctly say that different audiences have different truths for "this joke is funny", and this state of affairs is perfectly normal. Unfortunately, they proceed to run away with this, and extend it to statements where the "audience" would be reality. Or very ... (read more)