Value Theory
Metaethics
Personal Blog


    Created Already In Motion

    by Eliezer Yudkowsky
    1st Jul 2008
    3 min read
24 comments
Ari · 17y

    I think this just begs the question:

Dynamic: When the belief pool contains "X is fuzzle", send X to the action system.

Ah, but the tortoise would argue that this isn't enough. Sure, the belief pool may contain "X is fuzzle," and this dynamic, but that doesn't mean that X necessarily gets sent to the action system. In addition, you need another dynamic:

    Dynamic 2: When the belief pool contains "X is fuzzle", and there is a dynamic saying "When the belief pool contains 'X is fuzzle', send X to the action system", then send X to the action system.

    Or, to put it another way:

    Dynamic 2: When the belief pool contains "X is fuzzle", run Dynamic 1.

    Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum -- and we're back to the original problem.

    I think the real point of the dialogue is that you can't use rules of inference to derive rules of inference -- even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they're the machines that you feed the axioms into. Then one naturally starts to ask questions about how you can "program" the machines by feeding in certain kinds of axioms, and what happens if you try to feed a program's description to itself, various paradoxes of self-reference, etc. This is where the connection to Gödel and Turing comes in -- and probably why Hofstadter included this fable.

    Cheers, Ari

Peter_de_Blanc · 17y

    Ari, dynamics don't say things; they do things.

Ari · 17y

The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.

To add to my previous comment, I think there's a more rigorous way to express this point. (The "motion" analogy seems pretty vague.)

    A non-universal Turing machine can't simulate a universal Turing machine. (If it could, it would be universal after all -- a contradiction.) In other words, there are computers that can self-program and those that can't, and no amount of programming can change the latter into the former.

    Cheers, Ari

Eliezer Yudkowsky · 17y

    Well, at least I can't be accused of belaboring a point so obvious that no one could possibly get it wrong.

Vladimir_Nesov · 17y

Within our "anything can influence anything" (more or less) physics, the distinction between communicating the proposition and just physically "setting in motion" is not clear-cut. A programmable mind can assume the dynamics encoded in some weak signals; a rock can also assume different dynamics, but you'll have to build a machine from it first, applying more than weak signals.

Latanius2 · 17y

I think the moral is that you shouldn't try to write software for which you don't have the hardware to run it on, not even if the code could run itself by emulating the hardware. A rock runs on physics; Euclid's rules don't. We have morality to run on our brains, and... isn't FAI about porting it to physics?

    So shouldn't we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, me reading the word "dynamic" uses more computing power than running any Java applet could on current computers...)

Schizo · 17y

This is why it's always seemed so silly to me to try to axiomatize logic. Either you already "implement" logic, in which case it's unnecessary, or you don't, in which case you're a rock and there's no point in dealing with you.

I think this also has deeper implications for the philosophy of math -- the desire to fully axiomatize is still deeply ingrained despite Gödel, but in some ways this seems like a more fundamental challenge. You can write down as many rules as you want for string manipulation, but the realization of those rules in actual manipulation remains ineffable on paper.

Luke_A_Somers · 13y

    Axiomatizing logic isn't to make us implement logic in the first place!

    It's to enable us to store and communicate logic.

Kenny · 12y

I wouldn't describe any typical human mind as implementing logic. Even those that are logical don't seem to think that way naturally or innately. But particular human minds have had much success thinking with 'axiomatized' logic.

ME3 · 17y

    Isn't a silicon chip technically a rock?

    Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

DanielLC · 13y

    Isn't a silicon chip technically a rock?

    Rocks are naturally formed. It's not physically impossible for natural processes to form silicon into a working computer, but it's certainly not likely.

IL · 17y

    Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

I used that as an argument against timeless physics: if you could have consciousness in a timeless universe, then this means that you could simulate a conscious being without actually running the simulation; you could just put the data on the hard drive. I'm still waiting for an answer on that one!

DanielLC · 13y

In order for it to be analogous, you'd have to put the contents of memory for every step of the program as it's running on the hard drive. The program itself isn't sufficient.

Since there's no way to get the memory at every step without actually running the program, it doesn't seem that paradoxical.

Also, if time were an explicit dimension, that would just mean that the results of the program are spread out on a straight line aligned along the t-axis. I don't see why making it a curvy line makes it any different.

Kenny · 12y

    Huh? A "timeless universe" still contains 'time'; it's just not fundamental. Consciousness may be a lot of things, but it's definitely not static in 'time', i.e. it's dynamic with respect to causality.

Nick_Tarleton · 17y

    IL, isn't the difference the presence or absence of causality?

michael_vassar3 · 17y

    "And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as... (A) If a toddler is on the train tracks, then pulling them off is fuzzle. (B) There is a toddler on the train tracks. ...unless the mind also implements: Dynamic: When the belief pool contains "X is fuzzle", send X to the action system."

    It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens and the Dynamic: "When the belief pool contains "X is right/desired/maximizing-my-utility-function/good", send X to action system"

These days my thoughts are largely occupied with considering what causal dynamic could cause modus ponens and the above Dynamic to be implemented in a human mind.

    IL: Timeless physics retains causality. Change some of the data on the hard drive and the other data won't change as an inferential result. There are unsolved issues in this domain, but probably not easy ones. The process of creating the data on the hard drive might be necessarily conscious, for instance, or might not. I think that this was discussed earlier when we discussed giant look-up tables.

David Althaus · 14y

    It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens and the Dynamic: "When the belief pool contains "X is right/desired/maximizing-my-utility-function/good", send X to action system"

    This is so true

poke · 17y

    You can fully describe the mind/brain in terms of dynamics without reference to logic or data. But you can't do the reverse. I maintain that the dynamics are all that matters and the rest is just folk theory tarted up with a bad analogy (computationalism).

Unknown · 17y

    "Fuzzle" = "Morally right."

Only in terms of how this actually gets into a human mind, there is a dynamic first: before anyone has any idea of fuzzleness, things are already being sent to the action system. Then we say, "Oh, these things are fuzzle!", i.e. these are the type of things that get sent to the action system. Then someone else tells us that something else is fuzzle, and right away it gets sent to the action system too.

constant3 · 17y

    "Fuzzle" = "Morally right."

Hm... As described, "fuzzle" = "chosen course of action", or "I choose". Things labelled "fuzzle" are sent to the action system -- this is all we're told about "fuzzle". But anything and everything that a system decides, chooses, or sets out to do is sent to the action system. Not just moral things.

    If we want to distinguish moral things from actions in general, we need to say more.

Liron · 15y

    I just want to note that back in 2008, even though I had already read this dialogue and thought I understood it, this was one of Eliezer's posts that made me go: "Holy shit, I didn't realize it was possible to think this clearly."

Arkanj3l · 12y

Going down to the bottom of the post for the TL;DR, I was pleasantly surprised to find I needed to go back up again.

Carinthium · 12y

Minor note -- when trying to prove Strong Foundationalism (on which I have since given up), I came up with the idea of founding logic not on something anybody must accept, but on something that must be true in any possible universe (e.g. 1+1=2 according to traditional logic, or reductionism, if I understand Eliezer correctly). This gets around the tortoise's problem and reestablishes logic.

Of course, this isn't so relevant, because the tortoise can in response suggest the possibility that Achilles is insane in either his reasoning or his memory (or both, but that's superfluous), being so far off-track that he can't trust them to perform proper reasoning.

azergante · 3mo

    It sometimes takes me a long time to go from "A is true", "B is true", "A and B implies C is true" to "C is true".

    I think this is a common issue with humans, for example I can see a word such as "aqueduct", and also know that "aqua" means water in Latin, yet fail to notice that "aqueduct" comes from "aqua". This is because when I see a word it does not trigger a dynamic that searches for a root.

    Another case is when the rule looks a bit different, say "a and b implies c" rather than "A and B implies C" and some effort is needed to notice that it still applies.

I think an even more common reason is that the facts are never brought into working memory at the same time, and so the inference never happens.

All this hints at a practical epistemological-fu: we can increase our knowledge simply by actively reviewing our facts, say every morning, and trying to infer new facts from them! This might even create a virtuous circle, as the more facts one infers, the more facts one can combine to generate more inferences.

    On the other hand there is a limit to the number of facts one can review in a given amount of time, so perhaps a healthy epistemological habit to have is to trigger one's inference engine every time one learns a new (significant?) fact.
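
This "review your facts and run the inference engine" habit can be sketched as a forward-chaining closure. A toy illustration (the function and rule names are mine, not the commenter's): keep applying rules whose premises are already believed until no new facts appear.

```python
# Toy sketch: repeatedly apply modus ponens over a fact set
# until no new facts appear (a forward-chaining closure).
def close_under_modus_ponens(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when every premise is already believed.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

beliefs = close_under_modus_ponens(
    {"A", "B"},
    [({"A", "B"}, "C"), ({"C"}, "D")],
)
# Both "C" and "D" are inferred, even though "D" needed "C" first --
# the virtuous circle the comment describes.
```

The outer loop is the "actively reviewing" step: a single pass would miss "D", because "C" wasn't in the pool yet when the second rule was first checked.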


    Followup to:  No Universally Compelling Arguments, Passing the Recursive Buck

    Lewis Carroll, who was also a mathematician, once wrote a short dialogue called What the Tortoise said to Achilles.  If you have not yet read this ancient classic, consider doing so now.

    The Tortoise offers Achilles a step of reasoning drawn from Euclid's First Proposition:

    (A)  Things that are equal to the same are equal to each other.
    (B)  The two sides of this Triangle are things that are equal to the same.
    (Z)  The two sides of this Triangle are equal to each other.

    Tortoise:  "And if some reader had not yet accepted A and B as true, he might still accept the sequence as a valid one, I suppose?"

    Achilles:   "No doubt such a reader might exist.  He might say, 'I accept as true the Hypothetical Proposition that, if A and B be true, Z must be true; but, I don't accept A and B as true.'  Such a reader would do wisely in abandoning Euclid, and taking to football."

    Tortoise:  "And might there not also be some reader who would say, 'I accept A and B as true, but I don't accept the Hypothetical'?"

    Achilles, unwisely, concedes this; and so asks the Tortoise to accept another proposition:

    (C)  If A and B are true, Z must be true.

But, asks the Tortoise, suppose that he accepts A and B and C, but not Z?

Then, says Achilles, he must ask the Tortoise to accept one more hypothetical:

    (D)  If A and B and C are true, Z must be true.

    Douglas Hofstadter paraphrased the argument some time later:

    Achilles:  If you have [(A⋀B)→Z], and you also have (A⋀B), then surely you have Z.
    Tortoise:  Oh!  You mean <{(A⋀B)⋀[(A⋀B)→Z]}→Z>, don't you?

    As Hofstadter says, "Whatever Achilles considers a rule of inference, the Tortoise immediately flattens into a mere string of the system.  If you use only the letters A, B, and Z, you will get a recursive pattern of longer and longer strings."
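
The Tortoise's flattening move can be mimicked mechanically (a toy sketch, not anything from Carroll or Hofstadter): each rule of inference Achilles appeals to is absorbed as one more conjunct in an ever longer conditional, and Z is never detached.

```python
# Toy sketch of the Tortoise's regress: each rule of inference Achilles
# offers gets flattened into a longer string of the system.
def flatten(steps):
    antecedent = "(A&B)"
    strings = []
    for _ in range(steps):
        rule = f"{antecedent}->Z"                 # the rule Achilles appeals to
        antecedent = f"({antecedent}&[{rule}])"   # ...flattened into a premise
        strings.append(f"{antecedent}->Z")
    return strings

for s in flatten(3):
    print(s)
# The first string is ((A&B)&[(A&B)->Z])->Z -- Hofstadter's
# <{(A.B).[(A.B)->Z]}->Z> -- and each successor is strictly longer.
# None of these strings *is* the act of concluding Z.
```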

    By now you should recognize the anti-pattern Passing the Recursive Buck; and though the counterspell is sometimes hard to find, when found, it generally takes the form The Buck Stops Immediately.

    The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool.  If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
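
A minimal sketch of the contrast (illustrative Python; the names are mine): a rock-like store just accumulates strings, while a mind in motion has a built-in step that actually detaches Y.

```python
# A belief pool with no dynamics: adding X, X->Y, (X&(X->Y))->Y, ...
# just accumulates strings. Y never appears, however long we continue.
rock = {"X", "X->Y", "(X&(X->Y))->Y"}
# "Y" not in rock -- nothing fires; the pool just sits there.

# A mind "created already in motion": modus ponens is a built-in step
# that runs over the pool, not yet another string stored inside it.
def modus_ponens_step(pool):
    new = set(pool)
    for belief in pool:
        if "->" in belief:
            antecedent, consequent = belief.split("->", 1)
            if antecedent in pool:
                new.add(consequent)
    return new

mind = modus_ponens_step({"X", "X->Y"})
# "Y" is now in the pool, because the dynamic ran.
```

Adding the string `"(X&(X->Y))->Y"` to `rock` changes nothing; only the hard-coded `modus_ponens_step` gets from premises to conclusion.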

    The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion.  There is no argument so compelling that it will give dynamics to a static thing.  There is no computer program so persuasive that you can run it on a rock.

    And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as...

    (A)  If a toddler is on the train tracks, then pulling them off is fuzzle.
    (B)  There is a toddler on the train tracks.

    ...unless the mind also implements:

    Dynamic:  When the belief pool contains "X is fuzzle", send X to the action system.

    (Added:  Apparently this wasn't clear...  By "dynamic" I mean a property of a physically implemented cognitive system's development over time.  A "dynamic" is something that happens inside a cognitive system, not data that it stores in memory and manipulates.  Dynamics are the manipulations.  There is no way to write a dynamic on a piece of paper, because the paper will just lie there.  So the text immediately above, which says "dynamic", is not dynamic.  If I wanted the text to be dynamic and not just say "dynamic", I would have to write a Java applet.)
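
The data/dynamic distinction can be made concrete in code (a hypothetical sketch; the variable and function names are mine, not the post's): storing a sentence that describes the rule does nothing, while the running loop is the rule.

```python
# Storing a *description* of the dynamic is just data; it does nothing.
belief_pool = {
    "toddler-rescue is fuzzle",
    'when the pool contains "X is fuzzle", send X to the action system',
}
actions = []  # the action system's queue stays empty: the paper just lies there

# The dynamic itself is not an entry in the pool; it is code that runs.
def fuzzle_dynamic(pool, action_queue):
    for belief in pool:
        if belief.endswith(" is fuzzle"):
            action_queue.append(belief[: -len(" is fuzzle")])

fuzzle_dynamic(belief_pool, actions)
# Only now does "toddler-rescue" reach the action system; the second
# belief -- the rule written out as a sentence -- never fired anything.
```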

    Needless to say, having the belief...

    (C)  If the belief pool contains "X is fuzzle", then "send 'X' to the action system" is fuzzle.

    ...won't help unless the mind already implements the behavior of translating hypothetical actions labeled 'fuzzle' into actual motor actions.

    By dint of careful arguments about the nature of cognitive systems, you might be able to prove...

    (D)   A mind with a dynamic that sends plans labeled "fuzzle" to the action system, is more fuzzle than minds that don't.

    ...but that still won't help, unless the listening mind previously possessed the dynamic of swapping out its current source code for alternative source code that is believed to be more fuzzle.

    This is why you can't argue fuzzleness into a rock.

     

    Part of The Metaethics Sequence

    Next post: "The Bedrock of Fairness"

    Previous post: "The Moral Void"

Mentioned in
Morality is Awesome
Where Recursive Justification Hits Bottom
The genie knows, but doesn't care
On attunement
Don't Double-Crux With Suicide Rock