What Program Are You?

by RobinHanson · 1 min read · 12th Oct 2009 · 43 comments



I've been trying for a while to make sense of the various alternate decision theories discussed here at LW, and have kept quiet until I thought I understood something well enough to make a clear contribution.  Here goes.

You simply cannot reason about what to do by referring to what program you run, and considering the other instances of that program, for the simple reason that there is no unique program corresponding to any physical object.

Yes, you can think of many physical objects O as running a program P on data D, but there are many, many ways to decompose an object into program and data, as in O = &lt;P,D&gt;.  At one extreme, you can think of every physical object as running exactly the same program, i.e., the laws of physics, with its data being its particular arrangement of particles and fields.  At the other extreme, one can think of each distinct physical state as a distinct program, with an empty, unused data structure.  In between, there is an astronomical range of other ways to break you into your program P and your data D.
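To make the point concrete, here is a toy sketch of my own (not anything from the post or from TDT): the same input-output behavior carved into &lt;program, data&gt; in two very different ways.

```python
# A toy illustration: one observable behavior, two <program, data> decompositions.

def behavior(x):
    """The object's observable behavior: what it outputs on input x."""
    return 2 * x + 1

# Decomposition 1: the "program" is a universal interpreter (think: the laws
# of physics), and the "data" is a complete description of this particular object.
def universal_interpreter(description, x):   # program P1
    return eval(description)(x)

description = "lambda x: 2 * x + 1"          # data D1

# Decomposition 2: the "program" is the specialized function itself, and the
# "data" is empty.
specialized = behavior                       # program P2
empty_data = None                            # data D2

# Both decompositions produce the same behavior on every input.
assert universal_interpreter(description, 3) == specialized(3) == 7
```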

Eliezer's descriptions of his "Timeless Decision Theory", however, often refer to "the computation" as distinguished from "its input" in this "instantiation", as if there were some unique way to divide a physical state into these two components.  For example:

The one-sentence version is:  Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.

The three-sentence version is:  Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.

And also:

Timeless decision theory, in which the (Godelian diagonal) expected utility formula is written as follows:  Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))  ... which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal descendants of the computation, and are recalculated accordingly inside the counterfactual. ...  Timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation.  It does not update any physical or logical parent of the logical output - rather, it conditions on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made.
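For concreteness, the quoted expected-utility formula can be rendered roughly as follows.  This is only my sketch, assuming finite action and outcome sets and a supplied stand-in for P(this computation yields A []-&gt; O | rest of universe); it is not Eliezer's code.

```python
# A rough sketch of the quoted formula: argmax over actions of the sum over
# outcomes of utility times the probability that "this computation yields A"
# counterfactually leads to O.

def tdt_choice(actions, outcomes, utility, p_outcome_given_output):
    """p_outcome_given_output(o, a) stands in for
    P(this computation yields A []-> O | rest of universe),
    i.e. the probability of O after surgery setting the logical output to A."""
    def expected_utility(a):
        return sum(utility(o) * p_outcome_given_output(o, a) for o in outcomes)
    return max(actions, key=expected_utility)

# Newcomb-style usage with hypothetical numbers: Omega's prediction is treated
# as a logical descendant of the same computation, so surgery on the logical
# output also fixes the box contents.
actions = ["one-box", "two-box"]
outcomes = [0, 1_000, 1_000_000, 1_001_000]       # dollar payoffs

def p(o, a):
    if a == "one-box":
        return 1.0 if o == 1_000_000 else 0.0     # opaque box full, taken alone
    return 1.0 if o == 1_000 else 0.0             # opaque box empty, both taken

print(tdt_choice(actions, outcomes, lambda o: o, p))   # -> "one-box"
```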

These summaries give the strong impression that one cannot use this decision theory to figure out what to decide until one has first decomposed one's physical state into one's "computation", as distinguished from one's "initial state" and the follow-up data structures eventually leading to an "output."  And since there are many, many ways to make this decomposition, there can be many, many decisions recommended by this decision theory.

The advice to "choose as though controlling the logical output of the abstract computation you implement" might have you choose as if you controlled the actions of all physical objects, if you view the laws of physics as your program, or as if you controlled only the actions of the particular physical state that you are, if every distinct physical state is a different program.
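Here is a toy sketch of that ambiguity, again in my own framing with made-up agents: which "other instantiations" you count as controlling depends entirely on the decomposition you pick.

```python
# Toy framing: which "other instantiations of your computation" exist depends
# on how each object is split into <program, data>.
agents = [("alice", "state_A"), ("bob", "state_B"), ("you", "state_Y")]
you = ("you", "state_Y")

def program_coarse(agent):
    """Coarse decomposition: everyone runs the same program, the laws of physics."""
    return "laws_of_physics"

def program_fine(agent):
    """Fine decomposition: each distinct physical state is its own program."""
    return agent[1]

def instances_controlled(decompose, agents, you):
    """Agents whose 'program' matches yours under the given decomposition."""
    return [a for a in agents if decompose(a) == decompose(you)]

print(instances_controlled(program_coarse, agents, you))  # all three agents
print(instances_controlled(program_fine, agents, you))    # only ("you", "state_Y")
```

Under the coarse decomposition, every object's act counts as an output of "your" computation; under the fine one, only your own act does, and so the two readings can recommend different decisions.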
