I see two reasons not to treat every measurement from the survey as having zero weight.

First, you'd like an approach that makes sense when you haven't considered any data samples previously, so you don't ignore the first person to tell you "humans are generally between 2 and 10 feet tall".

Second, in a different application you may not want to rule out a causal mechanism by which a new study provides unique information about the effect size. Then there's value in a model that updates a little on each new study but doesn't update infinitely on infinitely many studies.
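
To illustrate that second point, here's a minimal sketch (my own illustration, not something from the thread) of a model with a shared systematic bias term; the specific numbers are hypothetical. Averaging more studies stops shrinking the uncertainty once the bias term dominates, so the estimate never collapses to a point:

```python
import math

# Minimal sketch (my own illustration): each study's estimate shares a common
# systematic bias b in addition to its own noise,
#   x_i = theta + b + eps_i,   b ~ N(0, sigma_b^2),   eps_i ~ N(0, sigma^2).
# The pooled mean then has variance sigma_b^2 + sigma^2 / n, which is bounded
# below by sigma_b^2 no matter how many studies you average.

sigma_b = 0.5   # hypothetical shared-bias standard deviation
sigma = 2.0     # hypothetical per-study noise standard deviation

for n in (1, 10, 100, 10_000):
    pooled_sd = math.sqrt(sigma_b**2 + sigma**2 / n)
    print(f"n = {n:>6}: sd of pooled estimate = {pooled_sd:.3f}")

# The sd approaches sigma_b = 0.5 instead of zero: the model updates a little
# on each new study but not infinitely on infinitely many studies.
```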

Thanks gwern! Jaynes is the original source of the height example, though I read it years ago and did not have the reference handy. I wrote this recently after realizing (1) the fallacy is standard practice in meta-analysis and (2) there is a straightforward better approach.

You can stagger the bets and offer either a 1A -> 1B -> 1A circle or a 2B -> 2A -> 2B circle.

Suppose the bets are implemented in two stages. In stage 1 you have an 89% chance of the independent payoff ($1 million for bets 1A and 1B, nothing for bets 2A and 2B) and an 11% chance of moving to stage 2. In stage 2 you either get $1 million (for bets 1A and 2A) or a 10/11 chance of getting $5 million (for bets 1B and 2B).

Then suppose someone prefers a 10/11 chance of $5 million (bet 3B) to a sure $1 million (bet 3A), prefers 2A to 2B, and currently has 2B in this staggered form. You do the following:

  1. Trade them 2A for 2B+$1.
  2. Play stage 1. If they don't move on to stage 2, they're down $1 from where they started. If they do move on to stage 2, they now have bet 3A.
  3. Trade them 3B for 3A+$1.
  4. Play stage 2.

The net effect of those trades is that they still played gamble 2B but gave you a dollar or two. If they prefer 3A to 3B and 1B to 1A, you can do the same thing to get them to circle from 1A back to 1A. It's not the infinite cycle of losses you mention, but it is a guaranteed loss.
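
To make the bookkeeping concrete, here's a rough simulation of the staggered trades above (my own sketch; the dollar amounts and probabilities are the ones already used in this thread, and the $1 side payments are tracked as fees):

```python
import random

# Rough simulation of the staggered trades described above (my own sketch).
# The $1 side payment from each trade is tracked as a "fee".

def play_pumped_2B(rng):
    fees = 1  # trade 1: they prefer 2A to 2B, so they swap and pay $1
    # Stage 1: 89% chance of the independent payoff, which is nothing for 2A/2B.
    if rng.random() < 0.89:
        return 0, fees  # game over; they are down $1 overall
    # Stage 2 reached: holding 2A is now a sure $1 million, i.e. bet 3A.
    fees += 1  # trade 2: they prefer 3B to 3A, so they swap and pay another $1
    payoff = 5_000_000 if rng.random() < 10 / 11 else 0  # bet 3B
    return payoff, fees

def play_plain_2B(rng):
    # Gamble 2B played directly: 10% chance of $5 million, 90% chance of nothing.
    return 5_000_000 if rng.random() < 0.10 else 0

rng = random.Random(0)
trials = 200_000
total_payoff = total_fees = total_plain = 0
for _ in range(trials):
    p, f = play_pumped_2B(rng)
    total_payoff += p
    total_fees += f
    total_plain += play_plain_2B(rng)

print(f"pumped 2B: mean payoff ${total_payoff / trials:,.0f}, mean fees ${total_fees / trials:.2f}")
print(f"plain  2B: mean payoff ${total_plain / trials:,.0f}")
# The two payoff distributions match (89% nothing, 11% * 10/11 = 10% of $5M),
# but the pumped agent also hands over $1 or $2 with certainty.
```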

Yeah, I don't think it makes much difference in high dimensions. It's just more natural to talk about smoothness in the continuous case.

A note on notation: [0,1] with square brackets generally refers to the closed interval between 0 and 1. X is a continuous variable, not a Boolean one.

Why does UDT lose this game? If it knows anti-Newcomb is much more likely, it will two-box on Newcomb and do just as well as CDT. If Newcomb is more common, UDT one-boxes and does better than CDT.

You seem to be comparing SMCDT to a UDT agent that can't self-modify (or commit suicide). The self-modifying part is the only reason SMCDT wins here.

The ability to self-modify is clearly beneficial (if you have correct beliefs and act first), but it seems separate from the question of which decision theory to use.

This is a good example. Thank you. A population of 100% CDT, though, would get 100% DD, which is terrible. It's a point in UDT's favor that "everyone running UDT" leads to a better outcome for everyone than "everyone running CDT."

Ok, that example does fit my conditions.

What if the universe cannot read your source code, but can simulate you? That is, the universe can predict your choices but it does not know what algorithm produces those choices. This is sufficient for the universe to pose Newcomb's problem, so the two agents are not identical.

The UDT agent can always do at least as well as the CDT agent by making the same choices as a CDT agent would. It will only give a different output if that would lead to a better result.

Can you give an example where an agent with a complete and correct understanding of its situation would do better with CDT than with UDT?

An agent does worse by giving in to blackmail only if that makes it more likely to be blackmailed. If a UDT agent knows opponents only blackmail agents that pay up, it won't give in.

If you tell a CDT agent "we're going to simulate you and if the simulation behaves poorly, we will punish the real you," it will ignore that and be punished. If the punishment is sufficiently harsh, the UDT agent that changed its behavior does better than the CDT agent. If the punishment is insufficiently harsh, the UDT agent won't change its behavior.

The only examples I've thought of where CDT does better involve the agent having incorrect beliefs. Things like an agent thinking it faces Newcomb's problem when in fact Omega always puts money in both boxes.
