I guess that people who downvoted this would like to see more details on why this "court" would work, and on how it would avoid getting sued when it misjudges a case (and the more cases there are, the higher the probability of a misjudgment).
(meta: I neither downvoted nor upvoted the proposal)
Second question: how does this work in different axiom systems? Do we need separate markets, or can they be tied together well? How does the market deal with "provable from ZFC but not Peano"? "Theorem X implies corollary Y" is a thing we can prove, and if there's a price on shares of "Theorem X" then that makes perfect sense, but does it make sense to put a "price" on the "truth" of the ZFC axioms?
Actually, I don't think that creates any problem? Just create shares of "ZFC axioms" and "not ZFC axioms" via logical share splitting. If you are unable to sell "not ZFC axioms", that only means the price of the main share is $1 (though it's likely possible to prove something fun if we take those axioms as false).
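A minimal sketch of how the bookkeeping could work, assuming a simple split/sell interface (the ClaimMarket class, the prices, and the claim strings are illustrative, not part of the original proposal):

```python
# Toy sketch of logical share splitting in a claim market.
# Claim names and prices are illustrative assumptions, not a real protocol.

from dataclasses import dataclass, field

@dataclass
class ClaimMarket:
    # price[claim] is the market price of a share paying $1 if the claim is judged true
    price: dict = field(default_factory=dict)
    holdings: dict = field(default_factory=dict)

    def split(self, claim: str, collateral: float) -> float:
        """Deposit $collateral and receive that many shares of both the claim
        and its negation; together they always pay out exactly $1 per pair."""
        neg = f"not ({claim})"
        self.holdings[claim] = self.holdings.get(claim, 0) + collateral
        self.holdings[neg] = self.holdings.get(neg, 0) + collateral
        return -collateral  # cash spent

    def sell(self, claim: str, amount: float) -> float:
        """Sell shares at the current market price (if anyone is buying)."""
        self.holdings[claim] -= amount
        return amount * self.price.get(claim, 0.0)

market = ClaimMarket(price={"ZFC axioms": 0.999, "not (ZFC axioms)": 0.001})
cash = 0.0
cash += market.split("ZFC axioms", 10.0)       # -$10, get 10 shares of each side
cash += market.sell("not (ZFC axioms)", 10.0)  # almost nobody pays for these
print(cash)  # ~ -$9.99: holding "ZFC axioms" effectively costs ~$1 per share
```

If nobody buys the "not ZFC axioms" side, the main share ends up costing essentially the full $1, which is just the market's way of saying it treats the axioms as true.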
Once the person already exists, it doesn’t matter what % of agents of a certain type exist. They exist—and as such, they have no reason to lose out on free value. Once you already exist, you don’t care about other agents in the reference class.
This means that you cannot credibly precommit to paying in a gamble (if the coin comes up tails you pay $1, otherwise you receive $20), since if the coin does come up tails "you don't care about other variants" and refuse to pay.
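A quick expected-value check of why that inability is costly (assuming a fair coin, and assuming the gamble is only offered to agents who will actually pay on tails):

```python
# Fair-coin gamble: tails -> you pay $1, heads -> you receive $20.
p_tails = 0.5
ev_if_you_always_pay = p_tails * (-1) + (1 - p_tails) * 20  # +$9.50 per offer
ev_if_you_renege_on_tails = 0.0  # assumption: nobody offers the gamble to a known reneger
print(ev_if_you_always_pay, ev_if_you_renege_on_tails)
```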
Welcome to LessWrong! Have you also read about Bayes' Theorem?
It seems that the models in your argument will require a way of updating weights at one of the next steps, so I'd recommend reading the Sequences.
The experiment is commonly phrased in a non-anthropic way by statisticians: there are many items, each given a sequential unique number starting from 1. You get to see a single item's number m and have to guess how many items there are, and the answer is 2m - 1. (There are also ways to estimate the count if you've seen more than one index.)
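For concreteness, a small sketch of the classic "German tank" estimator and a quick sanity-check simulation (the function and variable names are mine):

```python
import random

def estimate_total(observed_indices):
    """Serial-number estimator: N_hat = m * (1 + 1/k) - 1,
    where m is the largest observed index and k is the sample size."""
    m, k = max(observed_indices), len(observed_indices)
    return m * (1 + 1 / k) - 1

# Single observation: the estimate is 2*m - 1.
print(estimate_total([37]))  # 73.0

# Sanity check: the estimator is close to the true count on average.
true_n = 100
estimates = []
for _ in range(10_000):
    sample = random.sample(range(1, true_n + 1), 5)
    estimates.append(estimate_total(sample))
print(sum(estimates) / len(estimates))  # ~100
```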
I've noticed that I'm no longer confused about anthropics, and a prediction-market-based approach works.
Sleeping Beauty: if both trades go through in the case where she is woken up twice, she should bet at probability 1/3. If not (for example, when it's a decision about how to live her future, which will be presented to her only once), it's coherent to bet at probability 1/2.
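A sketch of the bookkeeping behind both answers; the function below prices a $1 share of "the coin landed Heads" under the two settlement rules (the framing as a share purchase is mine):

```python
# Sleeping Beauty buys a share of "the coin landed Heads" at price p
# (the share pays $1 if Heads). Heads -> she is woken once; Tails -> twice.

def expected_profit(p, both_tails_trades_count):
    heads = 0.5 * (1 - p)                 # one awakening, share pays $1
    tails_trades = 2 if both_tails_trades_count else 1
    tails = 0.5 * tails_trades * (0 - p)  # each counted trade loses p
    return heads + tails

# If both trades made during Tails awakenings go through, p = 1/3 breaks even...
print(expected_profit(1/3, both_tails_trades_count=True))   # ~0 (float rounding)
# ...while if only one of them counts, p = 1/2 breaks even.
print(expected_profit(1/2, both_tails_trades_count=False))  # 0.0
```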
I've heard a comment claiming that betting odds are something different from probability:
... what makes you think it [probability] should have a use? You can feel sure something will happen, or unsure about it, whether or not that has a use.
Well, if you feel sure about an event with an incorrect probability, you may end up in a suboptimal state with respect to instrumental rationality (since your expected-utility calculations will be flawed), so it's perhaps more useful to have correct intuitions. (Eliezer may want to check this out and make fun of people with incorrect intuitions, by the way :-))
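A small numerical illustration (the numbers are arbitrary): if an event actually happens 20% of the time but you feel 60% sure of it, bets that look fair to you are actually money-losers.

```python
true_p, believed_p = 0.2, 0.6
price = believed_p  # a $1 share priced at your credence looks fair to you
ev = true_p * (1 - price) + (1 - true_p) * (0 - price)
print(ev)  # about -0.40: you expect to break even but lose $0.40 per share
```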
New problems are welcome!
Follow-up to https://www.lesswrong.com/posts/xG98FxbAYMCsA7ubf/programcrafter-s-shortform?commentId=ySMfhW25o9LPj3EqX.
You are among 100 people waiting in a hallway. The hallway leads to a hundred rooms numbered from 1 to 100. All of you are knocked out by sleeping gas and each put into a random, unknown room. After waking up, what is the probability that you are in room No. 1?
An incubator enters the hallway. It will enter room No. 1 and create a person in it, then do the same for the other 99 rooms. It turns out you are one of the people the incubator has just created. You wake up in a room and are made aware of the experimental setup. What is the probability that you are in room No. 1?
This time the incubator creates 100 people in the hallway, and you are among the 100 people created. Each person is then assigned to a random room. What is the probability that you are in room No. 1?
In all of those cases, betting YES at probability 1% is coherent in the sense that it leads to zero expected profit: each of the people buys 1 "ROOM-1" share at a price of 1/100, and one of them wins, getting back 1 unit of money.
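A quick simulation of that claim (the "ROOM-1" share and the 1/100 price follow the comment above; everything else is my framing):

```python
import random

def my_profit(n_people=100, price=0.01):
    """You and 99 others each buy one 'ROOM-1' share at `price`;
    whoever lands in room 1 gets a $1 payout."""
    my_room = random.randint(1, n_people)  # uniform room assignment
    return (1.0 if my_room == 1 else 0.0) - price

trials = 100_000
print(sum(my_profit() for _ in range(trials)) / trials)  # ~0: break-even at 1/100
```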
Wouldn't the tape unbalance the fan, so that it breaks faster or starts making noise after a few months? Not sure, but these problems seem plausible.
If you're being simulated by Omega, then opening the second box ends the simulation and kills you.
Is making decisions based on being simulated really coherent?
One may also imagine a magic function is_counterfactual_evaluated: you can make this query to the universe at an arbitrary time and learn whether you're in a simulation. However, Omega has the power to simulate you in the case where you don't realize that you're in a simulation - that is, where is_counterfactual_evaluated returns false.
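For illustration, here's what relying on that function would look like, and why it doesn't help: Omega can simply run you with the oracle stubbed out to return false (everything below is a toy sketch, not a real API):

```python
# Toy sketch: an agent that tries to act differently inside Omega's simulation.
# `is_counterfactual_evaluated` is the magic function from the comment above.

def agent(is_counterfactual_evaluated):
    if is_counterfactual_evaluated():
        return "one-box"  # play nice while being predicted
    return "two-box"      # defect in the "real" run

# Omega runs the agent with the oracle forced to False, so the prediction it
# records is exactly the agent's real-world behaviour.
prediction = agent(lambda: False)  # what Omega writes down
actual = agent(lambda: False)      # what the agent does outside the simulation
print(prediction, actual)          # 'two-box two-box' - the trick gains nothing
```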
Also, by simulating this scenario you kill both simulated-you and Omega. Doesn't this have extremely negative utility? :-)
How does the demon determine which decision theory a person follows?
If it's determined by simulating the person's behavior in a Newcomb-like problem, then once the FDTist learns about that, they should two-box (since a billion dollars from the demon is more than a million dollars from Omega).
If it's determined by introspecting the person's mind, then the FDTist will likely self-modify to believe they are a CDTist, and checking their actual decision theory becomes a problem like detecting deceptive alignment in an AI.