All of KingSupernova's Comments + Replies

Looking for information on scoring calibration

Wouldn't an observed mismatch between assigned probabilities and observed frequencies count as Bayesian evidence of miscalibration?
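For concreteness, here is a toy version of that update in Python, with made-up numbers and a deliberately oversimplified comparison of just two hypotheses about the forecaster:

```python
from scipy.stats import binom

# Made-up example: 100 predictions stated at 90% confidence, of which 80 came true.
n, hits = 100, 80

# Likelihood of that outcome under two hypotheses about the forecaster:
# H1: well calibrated (true hit rate matches the stated 90%)
# H2: overconfident (true hit rate is only 80%)
p_calibrated = binom.pmf(hits, n, 0.90)
p_overconfident = binom.pmf(hits, n, 0.80)

# The likelihood ratio is the Bayes factor -- the strength of the evidence the
# observed mismatch provides for miscalibration over calibration.
print(f"Bayes factor, overconfident vs. calibrated: {p_overconfident / p_calibrated:.1f}")
```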

Strategic ignorance and plausible deniability

I think you're confusing an agent's ignorance with other people's beliefs about that agent's ignorance. In your examples of the police and the STD test, there is no benefit gained from the person actually being ignorant of the information; there is, however, a benefit to other people thinking the person was ignorant. If someone is able to find out whether they have an STD without anyone else knowing they've had the test, that's purely a benefit for them. (Not counting the internal cognitive burden of having to explicitly lie.)

Test Your Calibration!

An open-ended probability calibration test is something I've been planning to build. I'd be curious to hear your thoughts on how the specifics should be implemented. How should the test-taker grade their own answers in a way that avoids bias and still gives useful results?
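One rough sketch of how the scoring could work, assuming the test-taker self-grades each answer as right or wrong after the fact; the data below are placeholders:

```python
from collections import defaultdict

# Placeholder records: (stated probability, self-graded correct?) for each answer.
answers = [(0.6, True), (0.6, False), (0.8, True), (0.8, True),
           (0.9, True), (0.9, False), (0.99, True)]

# Group answers by stated confidence and compare to the observed hit rate.
buckets = defaultdict(list)
for stated, correct in answers:
    buckets[stated].append(correct)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    print(f"stated {stated:.0%}: observed {sum(outcomes) / len(outcomes):.0%} "
          f"over {len(outcomes)} answers")

# A proper scoring rule (the Brier score here) penalizes over- and underconfidence
# alike, which removes the incentive to shade the stated probabilities; it doesn't
# by itself remove the bias from grading one's own answers leniently.
brier = sum((stated - correct) ** 2 for stated, correct in answers) / len(answers)
print(f"Brier score: {brier:.3f} (lower is better)")
```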

"Rational Agents Win"

Whether Omega ended up being right or wrong is irrelevant to the problem, since the players only find out whether it was right after all decisions have been made. It has no bearing on which decision is correct at the time; only our prior probability that Omega will be right matters.
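To make the role of that prior concrete, here is the usual back-of-the-envelope expected-value comparison with the standard $1,000,000 / $1,000 payoffs; this naive framing deliberately ignores the decision-theoretic subtleties argued over below:

```python
def expected_values(p, payout=1_000_000, bonus=1_000):
    """Naive expected payoffs, where p is the prior probability that Omega predicts correctly."""
    one_box = p * payout                # the opaque box is full iff Omega foresaw one-boxing
    two_box = (1 - p) * payout + bonus  # the opaque box is full only if Omega got it wrong
    return one_box, two_box

for p in (0.5, 0.5005, 0.9, 0.999):
    one, two = expected_values(p)
    print(f"p = {p}: one-box EV = {one:,.0f}, two-box EV = {two:,.0f}")

# One-boxing pulls ahead as soon as p exceeds roughly 0.5005.
```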

1 JBlack · 4mo · It is extremely relevant to the original problem. The whole point is that Omega is known to always be correct. This version weakens that premise, and with it the whole point of the thought experiment. In particular, note that the second decision was based on a near-certainty that Omega was wrong. There is some ordinarily strong evidence in favour of it, since the agent is apparently in possession of a million dollars with nothing to prevent getting the thousand as well. Is that evidence strong enough to cancel out the previous evidence that Omega is always right? Who knows? There is no quantitative basis given on either side. And that's why this thought experiment is so much weaker and less interesting than the original.
2 Vladimir_Nesov · 4mo · If you observe Omega being wrong [https://www.lesswrong.com/posts/psyhmuDhazzFJKjXf/oracle-predictions-don-t-apply-to-non-existent-worlds], that's not the same thing as Omega being wrong in reality, because you might be making observations in a counterfactual. Omega is only stipulated to be a good predictor in reality, not in the counterfactuals generated by Omega's alternative decisions about what to predict. (It might be the right decision principle to expect Omega to be correct in the counterfactuals generated by your decisions [https://www.lesswrong.com/posts/psyhmuDhazzFJKjXf/oracle-predictions-don-t-apply-to-non-existent-worlds?commentId=ANme7qLDivbwRGfow], even though it's not required by the problem statement either.)
"Rational Agents Win"

I think you have to consider what winning means more carefully.

A rational agent doesn't buy a lottery ticket because it's a bad bet. If that ticket ends up winning, does that contradict the principle that "rational agents win"?

That doesn't seem at all analogous. At the time they had the opportunity to purchase the ticket, they had no way to know it was going to win.

An Irene who acts like your model of Irene will win slightly more when Omega makes an incorrect prediction (she wins the lottery), but will be given the million dollars far less commonly because

...
1 Yair Halberstadt · 4mo · I'm showing why a rational agent would not take the 1000 dollars, and that doesn't contradict "rational agents win".
"Rational Agents Win"

I think you're missing my point. After the $1,000,000 has been taken, Irene doesn't suddenly lose her free will. She's perfectly capable of taking the $1000; she's just decided not to.

You seem to think I'm making some claim like "one-boxing is irrational" or "Newcomb's problem is impossible", which is not at all what I'm doing. I'm trying to demonstrate that the idea of "rational agents just do what maximizes their utility and don't worry about having to have a consistent underlying decision theory" appears to result in a contradiction as soon as Irene's decision has been made.

2 Yoav Ravid · 4mo · I understood your point. What I'm saying is that Irene is indeed capable of also taking the $1,000, but if Omega isn't wrong, she only gets the million in cases where for some reason she doesn't (and I gave a few examples). I think your scenario is just too narrow. Sure, if Omega is wrong, and it's not a simulation, and it's a complete one-shot, then the rational decision is to then also take the $1,000; but if any of these aren't true, then you'd better find some reason or way not to take those $1,000, or you'll never see the million in the first place, or you'll lose them in reality, or you'll never see them in the future.
-1 TAG · 4mo · How can you know what maximises your utility without having a sound underlying theory? (But NOT, as I said in my other comment, a sound decision theory. You have to know whether free will is real, or whether predictors are impossible. Then you might be able to have a decision theory adequate to the problem.)
"Rational Agents Win"

Some clarifications on my intentions in writing this story.

Omega being dead and Irene having taken the money from one box before having the conversation with Rachel are both not relevant to the core problem. I included them as a literary flourish to push people's intuitions towards thinking that Irene should open the second box, similar to what Eliezer was doing here.

Omega was wrong in this scenario, which departs from the traditional Newcomb's problem. I could have written an ending where Rachel made the same arguments and Irene still decided against doing i...

2 Yair Halberstadt · 4mo · I think you have to consider what winning means more carefully. A rational agent doesn't buy a lottery ticket because it's a bad bet. If that ticket ends up winning, does that contradict the principle that "rational agents win"? An Irene who acts like your model of Irene will win slightly more when Omega makes an incorrect prediction (she wins the lottery), but will be given the million dollars far less commonly, because Omega is almost always correct. On average she loses. And rational agents win on average. By average I don't mean the average within a particular world (repeated iteration), but the average across all possible worlds. Updateless Decision Theory helps you model this kind of thing.
1 JBlack · 4mo · Eliezer's alteration of the conditions very much strengthens the prisoner's dilemma. Your alterations very much weaken the original problem, both in reducing the strength of evidence for Omega's hidden prediction and in allowing a second decision after (apparently) receiving a prize.
1 TAG · 4mo · I don't see how winning can be defined without making some precise assumptions about the mechanics: how Omega's predictive abilities work, whether you have free will anyway, and so on. Consider trying to determine what the winning strategy is by writing a programme. Why would you expect one decision theory to work in any possible universe?
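One way to take that suggestion literally; this toy programme bakes in exactly the kind of mechanical assumption being pointed at, namely that Omega reads the agent's fixed policy and predicts it correctly with probability `accuracy`:

```python
import random

def play(policy, accuracy, payout=1_000_000, bonus=1_000):
    """One round of Newcomb's problem against a noisy predictor.

    Modelling assumption (not part of the problem statement): Omega reads the
    agent's fixed policy directly and predicts it correctly with probability
    `accuracy`, flipping its prediction otherwise.
    """
    other = "two-box" if policy == "one-box" else "one-box"
    predicted = policy if random.random() < accuracy else other
    opaque = payout if predicted == "one-box" else 0
    return opaque if policy == "one-box" else opaque + bonus

def average_winnings(policy, accuracy, trials=100_000):
    return sum(play(policy, accuracy) for _ in range(trials)) / trials

for accuracy in (0.5, 0.9, 0.99):
    print(f"accuracy {accuracy}: "
          f"one-box {average_winnings('one-box', accuracy):,.0f}, "
          f"two-box {average_winnings('two-box', accuracy):,.0f}")
```

Under a different assumption about how the prediction works (say, Omega simulating the agent's deliberation), the programme would have to be structured quite differently, which is the point.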
"Rational Agents Win"

I just did that to be consistent with the traditional formulation of Newcomb's problem; it's not relevant to the story. I needed some labels for the boxes, and "box A" and "box B" are not very descriptive and make it easy for the reader to forget which is which.

"Rational Agents Win"

I don't find the simulation argument very compelling. I can conceive of many ways for Omega to arrive at a prediction with high probability of being correct that don't involve a full, particle-by-particle simulation of the actors.

[This comment is no longer endorsed by its author]
7 Vladimir_Nesov · 4mo · Consider the distinction between a low-level, detailed simulation of a world where you are making a decision, and high-level reasoning about your decision making. How would you know which one is being applied to you, from within? If there is a way of knowing that, you can act differently in these scenarios, so that the low-level simulation won't show the same outcome as the prediction made with high-level reasoning. A good process of making predictions by high-level reasoning won't allow there to be a difference. The counterfactual world I'm talking about does not have to exist in any way similar to the real world, such as by being explicitly simulated. It only needs the implied existence of the worldbuilding of a fictional story. The difference from a fictional story is that the story is not arbitrary; there is a precise purpose that shapes the story needed for prediction. And for a fictional character, there is no straightforward way of noticing the fictional nature of the world.
"Rational Agents Win"

In the case where you find yourself holding the $1,000,000 and the $1,000 is still available, sure, you can pick it up. That only happens if either Omega failed to predict what you would do, or if you somehow set things up such that you couldn't break your precommitment, or had to pay a big price to do so.

I don't think that's true. The traditional Newcomb's problem could use the exact setup that I used here; the only difference would be that either the opaque box is empty, or Irene never opens the transparent box. The idea that the $1,000 is always "available" to the player is central to Newcomb's problem.

2 Yoav Ravid · 4mo · In my comment, "that" in "That only happens if" referred to you taking the $1,000, not to it being available. So to clarify: if we assume that Omega's predictions are perfect, then you only find $1,000,000 in the box in cases where for some reason you don't also take the $1,000:

* Maybe you have some beliefs about why you shouldn't do it
* Maybe it's against your honor to do it
* Maybe you're programmed not to do it
* Maybe before you met Omega you gave a friend $2,000 and told him to give them back to you only if you don't take the $1,000, and otherwise burn them.

If you find yourself going out with the contents of both boxes, either you're in a simulation or Omega was wrong. If Omega is wrong (and it's a one-shot, and you know you're not in a simulation) then yeah, you have no reason not to take the $1,000 too. But the less accurate Omega is, the less the problem is Newcomblike.
"Rational Agents Win"

I don't find the simulation argument very compelling. I can conceive of many ways for Omega to arrive at a prediction with high probability of being correct that don't involve a full, particle-by-particle simulation of the actors.

[This comment is no longer endorsed by its author]
4 Dagon · 4mo · The underlying question remains the accuracy of the prediction and what sequences of events (if any) can include Omega being incorrect. In the "strong Omega" scenarios, the opaque box is empty in all the universes where Irene opens the transparent box (including after Omega's death). Yoav's description seems right to me: Irene opens the opaque box, and is SHOCKED to find it empty, as she only planned to open the one box. But her prediction of her behavior was incorrect, not Omega's prediction. In "weak Omega" scenarios, who knows what the specifics are? Maybe Omega's wrong in this case.
Noticing Frame Differences

making piece

should be

making peace

Long Covid Is Not Necessarily Your Biggest Problem

so it includes both asymptomatic cases

I think that "includes" should be "excludes"?

2 Elizabeth · 4mo · You're right, thank you. I fixed it on my blog and thought LW had picked it up but apparently not.
Agency in Conway’s Game of Life

This is an interesting question, but I think your hypothesis is wrong.

Any pattern of physics that eventually exerts control over a region much larger than its initial configuration does so by means of perception, cognition, and action that are recognizably AI-like.

In order not to count things like an exploding supernova as "controlling a region much larger than its initial configuration", we would want to require that such patterns be capable of arranging matter and energy into an arbitrary but low-complexity shape, such as a giant smiley face in Life.

If ...
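For readers unfamiliar with the domain, here is a minimal sketch of the standard B3/S23 update rule that all such Life patterns evolve under; the glider below is just an illustrative initial configuration, not anything specific to the post:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation of Conway's Game of Life (standard B3/S23 rule)."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: a 5-cell pattern that travels far beyond its initial bounding box.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    pattern = step(pattern)
print(sorted(pattern))  # the same shape, translated by one cell diagonally
```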