In my recent post, I outlined 5 conditions that I'd like a decision theory to pass; TDT, UDT and ADT pass them, while CDT and EDT don't. I called decision theories that passed those conditions "advanced decision theories", but that's probably not an optimal name. Can I ask you to brainstorm some other suggestions for me? (I may be writing a follow-up soon.)

As usual, it's best to brainstorm on your own before reading any of the comments. You can write down your ideas, then check if any have already been suggested, then comment with the new ones.

Thanks!

Robust decision theory?

I think "TDT-like decision theories" is adequate...

That's what I've used so far, but I'd prefer something better.

"Logical decision theories"?

One quality shared by these theories is that they are concerned about the logical implications (within some mathematical model of the world) that would follow from the hypothesis that the agent chooses a given action.

Not necessarily: a variant of TDT could work without logical specification of decisions; it only needs some way of referring to the decision as part of the environment, and tools for figuring out what other facts about the environment follow from the fact that is the decision. So "self-referential", "reflexive" and "non-independent" could work, but "logical" doesn't seem to capture what's going on. See this discussion between Wei Dai and myself.

Not necessarily: a variant of TDT could work without logical specification of decisions ...

Would you give a concrete example of what you mean? Right now, I seem to be in the same place where Wei Dai was when he wrote

And you're still using it as a logical fact, i.e., deducing logical consequences from it, right?

I feel like you must be making a point that I'm not getting...

I'm also not seeing the relevance of your reply to him.

When you write:

... it only needs some way of referring to the decision as part of the environment, and tools for figuring out what other facts about the environment follow from the fact that is the decision ...

... in what sense do you understand the "other facts" to "follow from" the "fact that is the decision"? What does the agent work with to determine whether one "fact" "follows from" another?

I agree that the agent needn't be working with strings in some formal language. But, in the decision theories under consideration, the agent is working with some mathematical model of the world, which the agent uses to infer what follows logically from the premise that the agent decides to do X *. I agree also that the agent need not be using a first-order predicate logic to make this inference. Nonetheless, it still seems correct to me to say that what the agent is inferring is a relationship of logical implication.

By analogy, Euclid didn't use a formal first-order predicate logic, but he was still inferring relationships of logical implication.


* I am least familiar with TDT among the decision theories being considered, so, if this statement is wrong, it is most likely wrong about TDT.

Renormalized DTs?

Results of my brainstorming, threaded into separate comments for voting. Note karma sink at the end.

Reflexive decision theories

After googling Reflexive, the only problem I have with using "Reflexive decision theories" is that it may prevent someone from using Reflexive Decision Theory for an actual instance of a well-developed decision theory. The name is perfect. (And the name UDT is just STUPID! It only makes sense by redefining 'update'.)

And the name UDT is just STUPID!

I hope you don't blame me too much for that, because it was Eliezer who gave it that name, and I was only complicit to the extent that I didn't object.

It only makes sense by redefining 'update'.

I think "updateless" was meant to refer to the fact that UDT doesn't do explicit Bayesian updating, which doesn't seem unreasonable to me...

(BTW I remember you once said you also didn't like the language I used to describe UDT, but didn't say why when I asked. I'm still curious about that.)

I hope you don't blame me too much for that

Not at all, and your work on the theory was excellent. I just hope that by the time the descendant decision theories evolve to a stable state, they have picked up a new moniker. The reflexive term sounds much more significant!

BTW I remember you once said you also didn't like the language I used to describe UDT, but didn't say why when I asked.

I think I would put less emphasis on subtracting updates and more on just which kind of information should be used. After all, not doing updates isn't the important thing (or a sufficient thing); it is that the right piece of information is used at the end.

I'm afraid I've been negligent in my decision theory reading. I've forgotten a lot since I originally read your work from (what was it?) two years ago or so. I wouldn't really have high confidence in my words if I tried to explore the issues in detail these days.

Edit: Also, wedrifid_today considers the wording (and punctuation) used by wedrifid_last_month to be rather more hyperbolic than he would endorse.

After all, not doing updates isn't the important thing; it is that the right piece of information is used at the end.

It's an important thing, in that we don't know how to do updates without getting misled in some strange situations. UDT uses other sources of information, and shows how that's sufficient in principle, but the current puzzle is how to make use of the information that UDT doesn't use, avoiding UDT's logical cornucopia (so that one has to deal with resulting logical uncertainty, and resolve it to a limited extent based on observations).

Snap!

Most of the others just say "better" instead of saying what the desired sort of theory actually does.

Then what do we call decision theories capable of proving statements about self-modification?

You could include them in reflexive decision theories. When formalised, I suspect there may not be much of a distinction between theories that can know about themselves and theories that can modify themselves.

As a mathematician, I prefer using different notations for things until I've proved they're identical. And in this case, I rather suspect they're not.

Most of the others just say "better" instead of saying what the desired sort of theory actually does.

Actually, it's precisely for this reason that "reflexive" doesn't work for what I originally wanted: I wanted to use a word to denote TDT, UDT, ADT, etc., not in terms of how they work but in terms of satisfying the five conditions, the most important of which is that they outperform CDT. How a decision theory works is a secondary consideration to how well it works.

Superrational decision theories

(By the way, I worry that this phrase would sound cultish to anyone who wasn't familiar with Hofstadter's superrationality.)

Yeah. I had thought about the bad connotations, then remembered that the usual superlative of "rationality" around here is "x-rationality". But I hadn't considered that this wouldn't change how an outsider would perceive the phrase.

By the way, I worry that this phrase would sound cultish to anyone who wasn't familiar with Hofstadter's superrationality.

I learned superrationality in school. Not the name Hofstadter or anything about why superrationality is sane though.

The only problem with 'superrationality' is the connotation that CDT is rational. It's just not!

While I'm not 100% sure this is the best possible name (not that I can think of anything better myself right now), at least it has precedent: Hofstadter's superrationality is the only 'mainstream' name (i.e. notable enough for a Wikipedia article) for something similar, unless you count the categorical imperative, but IMO the name "categorical decision theory" would be better for one specific theory than for a class.

Rational decision theories

It's three-value range voting (-1, 0, 1), right? Or am I supposed to choose only one? (I've downvoted three and upvoted two plus the karma sink.)

Yes, with this many options I think three-value range voting is better than approval (0, 1) voting. We don't want to settle on one that 2/3 of people love and 1/3 of people hate, rather than one that 1/2 of people love and 1/2 of people are indifferent toward.

The problem is that the karma sink doesn't work if abs(number of upvotes - number of downvotes) > 1...

Non-independent decision theories

Agreeable decision theories

Advanced decision theories

Acute decision theories

Good decision theories

Simulation-stable decision theories.

Okay, I have a question: what will you call the decision theory where x is the action, I write down f(x), which is the payoff, as in applied mathematics, and then I solve for the x that maximizes f(x) using regular school algebra? I use that a lot when writing AIs for computer games, really (when I want to find the direction in which to shoot, for instance, or want to turn while minimizing the 3rd derivative).

Then I don't need to go into any recursion whatsoever if I have conditionals on x inside the payoffs (as in Newcomb's). I don't do some update cycle to solve x=x; I know it doesn't fix x, and I don't recurse if I find x=1+x/2.

BTW, an observation on Newcomb's problem: it seems to me that one-boxing people write the payoff function as

f(1)=box1(1)

f(2)=box2(2)+box1(2)

box1(1)=1000000

box1(2)=0

box2(1)=1000

box2(2)=1000

and one-box, other people write it as:

f(1)=box1

f(2)=box2+box1

box1>=0

box2>=0

and ignore the fact about prediction (boxes as a function of x) altogether, because they trust a world model where this is forbidden more than they trust the problem statement, which is a kind of silly thing to do when you're solving a hypothetical anyway. Or maybe because they listen harder to 'the contents of the boxes are fixed' than to 'prediction'. In any case, to me the one-boxing vs. two-boxing dispute now looks like a trivial case of 'people can disagree about how to transform a verbal sentence into a model'. Given that there are as many versions of English as there are people speaking English, it's not very interesting. One can postulate both boxes being transparent to make an even more nonsensical version.
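For concreteness, here is a minimal sketch (all names and the specific box amounts are chosen for illustration, not taken from any canonical formalization) of the two readings described above: in the first, box 1's contents are a function of the choice x, taking the prediction clause at face value; in the second, the contents are treated as fixed constants. Taking the argmax over x then gives one-boxing in the first model and two-boxing in the second.

```python
def payoff_prediction(x):
    # box 1 depends on the (assumed perfect) prediction of x
    box1 = 1_000_000 if x == 1 else 0
    box2 = 1_000
    return box1 if x == 1 else box1 + box2

def payoff_fixed(x, box1=1_000_000, box2=1_000):
    # box contents treated as constants, independent of x
    return box1 if x == 1 else box1 + box2

print(max((1, 2), key=payoff_prediction))  # -> 1 (one-box)
print(max((1, 2), key=payoff_fixed))       # -> 2 (two-box)
```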

Basically, all of the decision theories are just deducing payoffs and calculating argmax, but there's a subtle complication with regard to the deduction of payoffs. I'm almost done with the post that explains it.

Well, you guys, instead of using x for the choice and doing algebra to handle x on both sides of the equations, start going meta and considering yourselves inside simulators, which, albeit intellectually stimulating, is unnecessary and makes it hard for you to think straight.

If I needed to calculate the ideal orientation of a gun, assuming that the enemy can predict the gun's orientation perfectly, I'd just use x for the orientation and solve for both the ballistics and the enemy's evasive action.

Also, Newcomb's now sounds to me like a simple case of alternative English-to-math conversions when processing the problem statement, not even a case of calculating anything differently. There's the prediction, but there's also the box contents being constant; you can't put both into the math. You can in English, but human languages are screwy and we all know it.

I finished the post that explains the problem with the decision theory you proposed: calculating payoffs in the most direct way risks spurious counterfactuals. (I hope you don't mind that I called it "naive decision theory", since you yourself said it seemed like the obvious straightforward thing to do.)

"Strong" decision theories, using the word "strong" as in, "strong principle of charity" or "strong atheism".

[This comment is no longer endorsed by its author]

"Hard" decision theories, as in you can't break them with silly problems like newcomb's.

[This comment is no longer endorsed by its author]

"orthonormal-complete"

Haha no, that's #25 on this list.

I wonder what sort of decision theory would try to generate a bunch of decision theories, evaluate the relative performance of those decision theories on the problems identified as, well, problematic, and come up with the decision theory that performs best. You can probably even formalize this with a bounded form of Solomonoff induction (iterate over the decision theories, pick the best), except that you need to really formalize what a decision theory is.
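A toy sketch of that idea, under heavy simplifying assumptions: problems are encoded directly as action-to-utility tables and candidate theories as action-picking rules, which quietly skips the hard part (formalizing what a decision theory is). Everything named here is invented for illustration.

```python
# A "problem" maps each available action to a utility.
problems = [
    {"one-box": 1_000_000, "two-box": 1_000},
    {"cooperate": 3, "defect": 1},
]

# A "decision theory" is just a rule picking an action from a problem.
candidates = {
    "take-the-max": lambda p: max(p, key=p.get),
    "take-the-min": lambda p: min(p, key=p.get),  # deliberately bad baseline
}

def total_score(theory):
    return sum(problem[theory(problem)] for problem in problems)

best = max(candidates, key=lambda name: total_score(candidates[name]))
print(best)  # -> "take-the-max"
```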

Still not quite it.

BTW, an observation: if I want to maximize the distance at which a thrown stone lands, assuming constant initial speed and zero launch height, I work out the algebra: I have an unknown x, the launch angle; I have the laws of physics that express distance as a function of x; and I find the best x. In Newcomb's, I have x = my choice, and I have been given the rules of the world, whereby the payoff formula includes x itself; I calculate the best x, which is one-box (not surprisingly). The smoking lesion also works fine. Once you stop invoking your built-in decision theory on confusing cases, things are plain and clear.
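A quick numerical check of the thrown-stone case (a sketch only; the speed value is an arbitrary assumption): with constant initial speed and zero launch height, the distance is v² sin(2x)/g, and the best x comes out at 45 degrees.

```python
import math

v, g = 10.0, 9.8  # arbitrary illustrative speed, standard gravity

def distance(angle_degrees):
    x = math.radians(angle_degrees)
    return v ** 2 * math.sin(2 * x) / g

print(max(range(0, 91), key=distance))  # -> 45
```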

At this point, how well you perform depends on what sort of axiom system you are using to solve for x, and by Gödel's theorem, there will be some problem that is going to get ya, i.e. cause failure.

At this point, how well you perform depends on what sort of axiom system you are using to solve for x, and by Gödel's theorem, there will be some problem that is going to get ya, i.e. cause failure.

This doesn't seem like something that needs to be solvable. You can use diagonalization to defeat any decision theory; just award some utility iff the agent chooses the option not recommended by that decision theory. A different decision theory can choose the other option, but that decision theory has acausal influence over the right answer that prevents it from winning.
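A minimal sketch of that diagonalization move (the two-option setup and all names are invented for illustration): given any decision theory, build the problem that pays off only the option it does not recommend.

```python
def diagonal_problem(theory):
    """Return a payoff function that punishes exactly `theory`'s choice."""
    options = ("A", "B")
    doomed = theory(options)
    return lambda choice: 0 if choice == doomed else 1

def always_a(options):
    return options[0]

def always_b(options):
    return options[1]

payoff = diagonal_problem(always_a)
print(payoff(always_a(("A", "B"))))  # -> 0: the targeted theory loses
print(payoff(always_b(("A", "B"))))  # -> 1: a different theory collects the utility
```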

Yep. Just wanted to mention that every theory where you can do diagonalization, i.e. every formal one, can be defeated.

My point is that one could just make the choice be x, then express the payoff in terms of x, then solve for the x that gives the maximum payoff using the methods of algebra, instead of trying to redefine algebra as some stupid iteration over values of x until finding an equality (then, omg, it fails at x=x), and trying to reinvent already existing reasoning (in the form of theorem proving).
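For instance, a self-referential-looking constraint like x = 1 + x/2 is just an equation to be solved, not something to iterate on. A one-line sketch using sympy (one possible tool, assumed to be available) gives the single solution x = 2:

```python
from sympy import Eq, solve, symbols

x = symbols("x")
print(solve(Eq(x, 1 + x / 2), x))  # -> [2]
```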

Improved decision theories. This only works as a temporary name (hopefully) but people could still use it in historical accounts. And it has fewer syllables than "superrational". That matters, even in Newton's Experimental Philosophy.

Karma balance.

Less Wrong Decision Theories

(in both senses of the phrase, perhaps)

Except that some of the variants were developed before Less Wrong existed. Gary Drescher, as much as he likes this site, might rightly object.

Drescherian Decision Theories

(unfortunately you can't easily mix "Less Wrong" with his name)

Jokingly:

"Good, Real, and Less Wrong Decision Theories"