

> After all, if there’s a demon who pays a billion dollars to everyone who follows CDT or EDT then FDTists will lose out.

How does the demon determine which decision theory (DT) a person follows?

If it's determined by simulating the person's behavior in a Newcomb-like problem, then once the FDTist learns about that, they should two-box (since a billion dollars from the demon is more than a million dollars from Omega).

If it's determined by mind introspection, then the FDTist will likely self-modify to believe they are a CDTist, and checking their actual DT becomes a problem akin to detecting deceptive alignment in an AI.

I guess that the people who downvoted this would like to see more details on why this "court" would work and how it would avoid being sued when it misjudges (and the more cases there are, the higher the probability of a misjudgment).

(meta: I neither downvoted nor upvoted the proposal)

Second question: how does this work across different axiom systems? Do we need separate markets, or can they be tied together well? How does the market deal with "provable from ZFC but not from Peano arithmetic"? "Theorem X implies corollary Y" is something we can prove, and if there's a price on shares of "Theorem X", that makes perfect sense; but does it make sense to put a "price" on the "truth" of the ZFC axioms themselves?

Actually, I don't think that creates any problem? Just create shares of "ZFC axioms" and "not ZFC axioms" via logical share splitting. If you are unable to sell "not ZFC axioms", that only means the price of the main share is $1 (though it's likely possible to prove something fun if we take those axioms as false).
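A minimal toy model of what logical share splitting means here, under the usual convention that $1 can always be exchanged for one share of a claim plus one share of its negation (the class and names are illustrative, not any real market's API):

```python
# Toy model of logical share splitting: $1 can always be split into
# one share of "X" plus one share of "not X" (and merged back for $1).

class Portfolio:
    def __init__(self, cash):
        self.cash = cash
        self.shares = {}  # claim name -> share count

    def split(self, claim):
        """Exchange $1 for one share of `claim` and one of its negation."""
        assert self.cash >= 1
        self.cash -= 1
        self.shares[claim] = self.shares.get(claim, 0) + 1
        neg = "not " + claim
        self.shares[neg] = self.shares.get(neg, 0) + 1

p = Portfolio(cash=1)
p.split("ZFC axioms")
# If nobody will buy "not ZFC axioms" at any positive price, the
# "ZFC axioms" share alone must trade at (close to) $1.
print(p.shares)
```

If the negation share is worthless, splitting and selling only the positive share recovers your dollar, which is exactly the sense in which the positive share is "priced at $1".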

> Once the person already exists, it doesn’t matter what % of agents of a certain type exist. They exist—and as such, they have no reason to lose out on free value. Once you already exist, you don’t care about other agents in the reference class.

This means that you cannot credibly precommit to paying in a gamble (if the coin comes up tails, you pay $1; otherwise you receive $20), since if the coin comes up tails, "you don't care about other variants" and refuse to pay.
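The numbers from the gamble make the cost of reneging concrete: an agent known to honor the precommitment gets a strongly positive expected value, while an agent known to refuse on tails is never offered the bet at all.

```python
# Expected value of the gamble from the comment: on tails you pay $1,
# on heads you receive $20. Honoring the precommitment is worth
# 0.5 * 20 - 0.5 * 1 = $9.50 per play in expectation.
p_heads = 0.5
ev = p_heads * 20 + (1 - p_heads) * (-1)
print(ev)  # 9.5
```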

Welcome to LessWrong! Have you also read about Bayes' Theorem?

  1.  is not right since it's trivially true (a false statement can imply anything, and a true statement always implies True); were you talking about ?
  2. > how different do we expect the conclusions of those developments to be compared to the moral frameworks we have today?
    Rationalists usually can't coherently expect the beliefs of another rational system to differ from their own in a pre-known direction, since that would be a reason to update their own beliefs. See also: the third virtue of rationality, lightness.

It seems that your argument's models will require a way of updating weights in one of the next steps, so I'd recommend reading the Sequences.

The experiment is commonly phrased in a non-anthropic way by statisticians (the German tank problem): there are many items bearing sequential unique numbers, starting from 1. You get to see a single item's number k and have to guess how many items there are, and the answer is 2k − 1. (There are also ways to estimate the count if you've seen more than one index.)
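A quick Monte Carlo check of the single-observation estimator, under the stated setup (one index drawn uniformly from 1..N):

```python
import random

# Monte Carlo check of the single-observation estimator for the
# German tank problem: after seeing one uniformly drawn index k
# out of N items, the estimate 2k - 1 is unbiased for N.
random.seed(0)
N = 100
trials = 200_000
estimates = [2 * random.randint(1, N) - 1 for _ in range(trials)]
mean_estimate = sum(estimates) / trials
print(round(mean_estimate))  # close to 100
```

The unbiasedness is easy to verify analytically too: E[2k − 1] = 2·(N + 1)/2 − 1 = N.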

I've noticed that I'm no longer confused about anthropics, and that a prediction-market-based approach works.

  1. Postulate: anticipating (expecting) something is relevant only to decision making (for instance, expected utility calculations).
  2. Expecting something can be represented by betting on a prediction market (with liquidity large enough that your bet doesn't move the price, and with no visible trade history).
  3. If merging copies is considered, the correct probability to anticipate depends on the merging algorithm. If it sums purchased shares across all copies, then the probability is influenced by splitting; if all copies except one are ignored, it is not.
  4. If copies are not merged, then what to anticipate depends on the utility function.
  5. "Quantum suicide", i.e. rewriting arbitrary parts of the utility function with zeroes, is possible, but don't you actually care about the person in the unwanted scenario? Also, if an AGI learns this trick, it can run arbitrarily risky experiments...

Sleeping Beauty: if both trades go through in the case where she is woken up twice, she should bet at probability 1/3. If not (for example, betting on living in the future: that opportunity will be presented to her only once), it's coherent to bet at probability 1/2.
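This can be checked by simulation: each awakening, Beauty may buy a HEADS share at price p that pays $1 if the coin was heads. Whether the break-even price is 1/3 or 1/2 depends only on whether every awakening's bet is collected or just one per experiment.

```python
import random

# Betting version of Sleeping Beauty: on tails she is woken twice,
# on heads once. A HEADS share costs `price` and pays $1 on heads.
# If a bet is collected at every awakening, break-even is p = 1/3;
# if only one bet per experiment counts, break-even is p = 1/2.
random.seed(0)

def profit(price, per_awakening, trials=100_000):
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        bets = awakenings if per_awakening else 1
        total += bets * ((1.0 if heads else 0.0) - price)
    return total / trials

print(abs(profit(1/3, per_awakening=True)) < 0.02)   # ~zero profit at p = 1/3
print(abs(profit(1/2, per_awakening=False)) < 0.02)  # ~zero profit at p = 1/2
```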

I've heard a comment claiming that betting odds are something different from probability:

> ... what makes you think it [probability] should have a use? You can feel sure something will happen, or unsure about it, whether or not that has a use.

Well, if you feel sure about an event with an incorrect probability, you may end up in a suboptimal state with respect to instrumental rationality (since your expected utility calculations will be flawed), so it's perhaps more useful to have correct intuitions. (Eliezer may want to check this out and make fun of people with incorrect intuitions, by the way :-))
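A minimal illustration of that suboptimality, with made-up numbers: an agent who feels sure (believes p = 0.9) of an event whose true frequency is 0.5 will accept bets that look profitable to them but lose money on average.

```python
# A bettor who believes p = 0.9 when the true frequency is 0.5 will
# accept "pay $0.80 for a share worth $1 if the event happens":
believed_p, true_p, price = 0.9, 0.5, 0.80
believed_ev = believed_p * 1.0 - price   # looks like about +$0.10 to the bettor
true_ev = true_p * 1.0 - price           # actually -$0.30 per bet
print(believed_ev > 0, true_ev < 0)  # True True
```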

New problems are welcome!

Anthropics based on prediction markets - Part 2

Follow-up to: Which Questions Are Anthropic Questions?

1. The Room Assignment Problem

You are among 100 people waiting in a hallway. The hallway leads to a hundred rooms numbered from 1 to 100. All of you are knocked out by sleeping gas and each put into a random/unknown room. After waking up, what is the probability that you are in room No. 1?

2. The Incubator

An incubator enters the hallway. It will enter room No. 1 and create a person in it, then do the same for the other 99 rooms. It turns out you are one of the people the incubator has just created. You wake up in a room and are made aware of the experimental setup. What is the probability that you are in room No. 1?

3. Incubator + Room Assignment

This time the incubator creates 100 people in the hallway; you are among the 100 people created. Each person is then assigned to a random room. What is the probability that you are in room No. 1?

In all of these cases, betting YES at probability 1% is coherent in the sense that it leads to zero expected profit: each of the 100 people buys one "ROOM-1" share at a price of 1/100, and one of them wins, getting back 1 unit of money.
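The zero-expected-profit claim is just conservation of money across the market (working in cents to keep the arithmetic exact):

```python
# The 1% bet in all three room problems: each of 100 people buys one
# "ROOM-1" share at 1 cent; exactly one of them (whoever lands in
# room 1) redeems it for 100 cents. Total stakes equal total payout,
# so expected profit per person is zero.
n = 100
price_cents = 1
total_paid = n * price_cents   # 100 cents collected from participants
total_payout = 100             # 100 cents paid to the single winner
expected_profit_each = (total_payout - total_paid) / n
print(expected_profit_each)  # 0.0
```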

Wouldn't the tape unbalance the fan so that it breaks faster / starts making noise after a few months? Not sure, but these failure modes seem plausible.

> If you're being simulated by Omega, then opening the second box ends the simulation and kills you.

Is making decisions based on being simulated really coherent? One may also imagine a magic function `is_counterfactual_evaluated`: you can query the universe with it at an arbitrary time and learn whether you're in a simulation. However, Omega has the power to simulate you in the case where you don't understand that you're in a simulation; that is, where `is_counterfactual_evaluated` returns false.
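A sketch of that objection in code (the agent and oracle here are illustrative, not any standard formalization): a strategy that branches on the oracle is useless against a simulator that controls what the oracle returns, because the simulated run becomes indistinguishable from the real one.

```python
# An agent that defects only when it believes it is "just a simulation".
def agent(is_counterfactual_evaluated):
    if is_counterfactual_evaluated():
        return "two-box"   # defect inside the (detected) simulation
    return "one-box"

# In reality the oracle truthfully returns False:
real_choice = agent(lambda: False)
# Omega simulates the agent but forces the oracle to report False,
# so the simulated run behaves exactly like the real one:
simulated_choice = agent(lambda: False)
print(real_choice == simulated_choice)  # True
```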

Also, by simulating this scenario you kill both the simulated you and the simulated Omega. Doesn't that have extremely negative utility? :-)
