Comments

What principles? It doesn’t seem like there’s anything more at work here than “Humans sometimes become more confident that other humans will follow through on their commitments if they, e.g., repeatedly say they’ll follow through”. I don’t see what that has to do with FDT any more than with any other decision theory.

If the idea is that Mao’s forming the intention is supposed to have logically-caused his adversaries to update on his intention, that just seems wrong (see this section of the mentioned post).

(Separately, I’m not sure what this has to do with not giving in to threats in particular, as opposed to preemptive commitment in general. Why were Mao’s adversaries not able to coerce him by committing to nuclear threats, using the same principles? See this section of the mentioned post.)

I don't think FDT has anything to do with purely causal interactions. Insofar as threats were actually deterred here, this can be understood in standard causal game theory terms. (I.e., you claim in a convincing manner that you won't give in -> people assign high probability to you being serious -> a standard EV calculation says not to commit to a threat against you.) Also see this post.
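
To put illustrative numbers on that chain (these are my own, purely to make the structure explicit, and not taken from any of the linked posts): suppose carrying out a threat costs the threatener 5, a successful threat extracts a concession worth 1, and the target gives in with probability q. Then committing to the threat has expected value q·1 + (1 − q)·(−5), which is negative whenever q < 5/6, while not threatening yields 0. So a target who convincingly drives q down makes the threat unprofitable on an entirely ordinary causal EV calculation.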

Awesome sequence!

I wish that discussions of anthropics were clearer about metaphysical commitments around personal identity and possibility. I appreciated your discussions of this, e.g., in Section XV. I agree with you, though, that it is quite unclear what justifies the picture “I am sampled from the set of all possible people-in-my-epistemic situation (weighted by probability of existence)”. I take it the view of personal identity at work here is something like “‘I’ am just a sequence of experiences S”, and so I know I am one of the sequences of experiences consistent with my current epistemic situation E. But the straightforward Bayesian way of thinking about this would seem to be: “I am sampled from all of the sequences of experiences S consistent with E, in the actual world”.

(Compare with: I draw a ball from an urn, which either contains (A) 10 balls or (B) 100 balls, 50% chance each. I don’t say “I am indifferent between the 110 possible balls I could’ve drawn, and therefore it’s 10:1 that this ball came from (B).” I say that with 50% the ball came from (A) and with 50% it came from (B). Of course, there may be some principled difference between this and how you want to think about anthropics, but I don’t see what it is yet.)
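
Here is the urn arithmetic spelled out, just as a throwaway sketch (the numbers simply mirror the example above):

```python
# Urn A has 10 balls, urn B has 100 balls; a fair coin decides which urn I draw from.
p_A, p_B = 0.5, 0.5
n_A, n_B = 10, 100

# "Indifference over all 110 possible balls" (the reasoning I am rejecting):
# weight each urn by how many balls it contains.
w_A, w_B = p_A * n_A, p_B * n_B
print(w_B / w_A)   # 10.0, i.e. 10:1 odds in favor of B

# Straightforward Bayesian reasoning: I draw exactly one ball whichever urn was chosen,
# so "I drew a ball" carries no evidence about the urn.
likelihood_A = likelihood_B = 1.0
post_A = p_A * likelihood_A / (p_A * likelihood_A + p_B * likelihood_B)
print(post_A)      # 0.5, still 50/50
```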

This is just minimum-reference-class SSA (mrcSSA), which you reject because of its verdict in God’s Coin Toss with Equal Numbers. I agree that this result is counterintuitive. But I think it becomes much more acceptable if (1) we get clear about the notion of personal identity at work and (2) we try to stick with standard Bayesianism. mrcSSA also avoids many of the apparent problems you list for SSA. Overall I think mrcSSA's answer to God's Coin Toss with Equal Numbers is a good candidate for a "good bullet" =).

(Cf. Builes (2020), part 2, who argues that if you have a deflationary view of personal identity, you should use (something that looks extensionally equivalent to) mrcSSA.)

But it's true that if you had been aware from the beginning that you were going to be threatened, you would have wanted to give in.

To clarify, I didn’t mean that if you were sure your counterpart would Dare from the beginning, you would’ve wanted to Swerve. I meant that if you were aware of the possibility of Crazy types from the beginning, you would’ve wanted to Swerve. (In this example.)

I can’t tell whether you think that (1) being willing to Swerve when you’re fully aware from the outset (because you might have a sufficiently high prior on Crazy agents) is a problem, or (2) that this somehow only becomes a problem in the open-minded setting (even though, once their awareness grows, the EA-OMU agent is acting on exactly the same prior as they would have if they had started out fully aware).

(The comment about regular ol’ exploitability suggests (1)? But does that mean you think agents shouldn't ever Swerve, even given arbitrarily high prior mass on Crazy types?)

What if anything does this buy us?

In the example in this post, the ex ante utility-maximizing action for a fully aware agent is to Swerve. The agent starts out not fully aware, and so doesn’t Swerve unless they are open-minded. So it buys us the ability to take actions that are ex ante optimal for our fully aware selves when we otherwise wouldn’t have, due to unawareness. And being ex ante optimal from the fully aware perspective seems preferable to me to being, e.g., ex ante optimal from the less-aware perspective.
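
To make that concrete with some invented numbers (the payoffs and the prior here are mine, for illustration only, not the ones in the post): say Daring against a Swerver is worth +1, Swerving is worth −1, mutual Dare is worth −10, and the fully-aware prior puts probability p on Crazy (always-Dare) counterparts. Committing to Dare yields +1 against a Normal counterpart (who Swerves when facing a committed Darer) and −10 against a Crazy one, for ex ante value (1 − p)·1 + p·(−10) = 1 − 11p, while Swerving guarantees −1. So Swerve is ex ante optimal whenever p > 2/11. The unaware agent has not conceived of Crazy types at all, so they effectively act as if p = 0 and commit to Dare.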

More generally, we are worried that agents will make commitments based on “dumb” priors (because they think it’s dangerous to think more and make their prior less dumb). And EA-OMU says: No, you can think more (in the sense of becoming aware of more possibilities), because the right notion of ex ante optimality is ex ante optimality with respect to your fully-aware prior. That's what it buys us.

And revising priors based on awareness growth differs from updating on empirical evidence because it only gives other agents incentives to make you aware of things you would’ve wanted to be aware of ex ante.

they need to gradually build up more hypotheses and more coherent priors over time

I’m not sure I understand—isn't this exactly what open-mindedness is trying to (partially) address? I.e., how to be updateless when you need to build up hypotheses (and, as mentioned briefly, better principles for specifying priors).

If I understand correctly, you’re making the point that we discuss in the section on exploitability. It’s not clear to me yet why this kind of exploitability is objectionable. After all, had the agent in your example been aware of the possibility of crazy agents from the start, they would have wanted to swerve, and non-crazy agents would want to take advantage of this. So I don’t see how the situation is any worse than if the agents were making decisions under complete awareness.

Can you clarify what “the problem” is and why it “recurs”?

My guess is that you are saying: Although OM updatelessness may work for propositions about empirical facts, it’s not clear that it works for logical propositions. For example, suppose I find myself in a logical Counterfactual Mugging regarding the truth value of a proposition P. Suppose I simultaneously become aware of P and learn a proof of P. OM updatelessness would want to say: “Instead of accounting for the fact that you learned that P is true in your decision, figure out what credence you would have assigned to P had you been aware of it at the outset, and do what you would have committed to do under that prior”. But, we don’t know how to assign logical priors.

Is that the idea? If so, I agree that this is a problem. But it seems like a problem for decision theories that rely on logical priors in general, not OM updatelessness in particular. Maybe you are skeptical that any such theory could work, though.

The model is fully specified (again, sorry if this isn’t clear from the post). And in the model we can make perfectly precise the idea of an agent re-assessing their commitments from the perspective of a more-aware prior. Such an agent would disagree that they have lost value by revising their policy. Again, I’m not sure exactly where you are disagreeing with this. (You say something about giving too much weight to a crazy opponent — I’m not sure what “too much” means here.)

Re: conservation of expected evidence, the EA-OMU agent doesn’t expect to increase their chances of facing a crazy opponent. Indeed, they aren’t even aware of the possibility of crazy opponents at the beginning of the game, so I’m not sure what that would mean. (They may be aware that their awareness might grow in the future, but this doesn’t mean they expect their assessments of the expected value of different policies to change.) Maybe you misunderstand what we mean by "unawareness"?

For this to be wrong, the opponent must be (with some probability) irrational - that's a HUGE change in the setup

For one thing, we’re calling such agents “Crazy” in our example, but they need not be irrational. They might have weird preferences such that Dare is a dominant strategy. And as we say in a footnote, we might more realistically imagine more complex bargaining games, with agents who have (rationally) made commitments on the basis of as-yet unconceived of fairness principles, for example. An analogous discussion would apply to them.

But in any case, it seems like the theory should handle the possibility of irrational agents, too.

You can't just say "Alice has wrong probability distributions, but she's about to learn otherwise, so she should use that future information". You COULD say "Alice knows her model is imperfect, so she should be somewhat conservative, but really that collapses to a different-but-still-specific probability distribution".

Here’s what I think you are saying: In addition to giving prior mass to the hypothesis that her counterpart is Normal, Alice can give prior mass to a catchall that says “the specific hypotheses I’ve thought of are all wrong”. Depending on the utilities she assigns to different policies given that the catchall is true, she might not commit to Dare after all.

I agree that Alice can and should include a catchall in her reasoning, and that this could reduce the risk of bad commitments. But that doesn’t quite address the problem we are interested in here. There is still a question of what Alice should do once she becomes aware of the specific hypothesis that the predictor is Crazy. She could continue to evaluate her commitments from the perspective of her less-aware self, or she could do the ex-ante open-minded thing and evaluate commitments from the priors she should have had, had she been aware of the things she’s aware of now. These two approaches come apart in some cases, and we think that the latter is better.
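
As a cartoon of how the two approaches can come apart (every number below is invented for illustration and is not meant to reproduce the actual model in the post):

```python
# Alice's original prior: she has only conceived of Normal counterparts, plus a catchall.
less_aware_prior = {"Normal": 0.95, "catchall": 0.05}

# After seeing the predictor Dare, she becomes aware of the specific Crazy hypothesis.
# EA-OMU asks: what prior should she have had at the outset, given her current awareness?
more_aware_prior = {"Normal": 0.70, "Crazy": 0.25, "catchall": 0.05}

# Illustrative values of sticking with a Dare-commitment under each hypothesis
# (Crazy counterparts Dare back, which is very costly; the catchall is scored pessimistically).
value_of_daring = {"Normal": 1.0, "Crazy": -10.0, "catchall": -2.0}
value_of_swerving = -1.0

def ev_of_daring(prior):
    return sum(p * value_of_daring[h] for h, p in prior.items())

print(ev_of_daring(less_aware_prior))  # 0.85: the less-aware perspective keeps the commitment
print(ev_of_daring(more_aware_prior))  # -1.90: the more-aware prior prefers Swerving (-1.0)
```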

You don't need to bring updates into it, and certainly don't need to consider future updates. https://www.lesswrong.com/tag/conservation-of-expected-evidence means you can only expect any future update to match your priors.

I don’t see why EA-OMU agents should violate conservation of expected evidence (well, the version of the principle that is defined for the dynamic awareness setting).

Thanks Dagon:

Any mechanism to revoke or change a commitment is directly giving up value IN THE COMMON FORMULATION of the problem

Can you say more about what you mean by “giving up value”?

Our contention is that the ex-ante open-minded agent is not giving up (expected) value, in the relevant sense, when they "revoke their commitment" upon becoming aware of certain possible counterpart types. That is, they are choosing the course of action that would have been optimal according to the priors that they believe they should have set at the outset of the decision problem, had they been aware of everything they are aware of now. This captures an attractive form of deference — at the time it goes updateless / chooses its commitments, such an agent recognizes its lack of full awareness and defers to a version of itself that is aware of more considerations relevant to the decision problem.

As we say, the agent does make themselves exploitable in this way (and so “gives up value” to exploiters, with some probability). But they are still optimizing the right notion of expected value, in our opinion.

So I’d be interested to know what, more specifically, your disagreement with this perspective is. E.g., we briefly discuss a couple of alternatives (close-mindedness and awareness growth-unexploitable open-mindedness). If you think one of those is preferable I’d be keen to know why!

This model doesn't seem to really specify the full ruleset that it's optimizing for

Sorry that this isn’t clear from the post. I’m not sure which parts were unclear, but in brief: It’s a sequential game of Chicken in which the “predictor” moves first; the predictor can fully simulate the “agent’s” policy; there are two possible types of predictor (Normal, who best-responds to their prediction, and Crazy, who Dares no matter what); and the agent starts off unaware of the possibility of Crazy predictors, and only becomes aware of the possibility of Crazy types when they see the predictor Dare.
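
For what it's worth, here is a minimal sketch of that setup as I've just described it; the payoff numbers and the priors fed to it are illustrative stand-ins of mine, not the values used in the post:

```python
D, S = "Dare", "Swerve"
MOVES = [D, S]

def payoff(my_move, their_move):
    # Illustrative Chicken payoffs: out-Daring a Swerver is +1, backing down is -1,
    # mutual Swerve is 0, mutual Dare is a crash at -10.
    if my_move == D and their_move == S:
        return 1
    if my_move == S and their_move == D:
        return -1
    if my_move == S and their_move == S:
        return 0
    return -10

def predictor_move(ptype, agent_policy):
    # The predictor moves first. Crazy Dares no matter what; Normal can fully simulate
    # the agent's policy and best-responds to it.
    if ptype == "Crazy":
        return D
    return max(MOVES, key=lambda m: payoff(m, agent_policy(m)))

def ex_ante_value(agent_policy, p_crazy):
    # Expected value of an agent policy under the fully-aware prior over predictor types.
    value = 0.0
    for ptype, prob in [("Normal", 1 - p_crazy), ("Crazy", p_crazy)]:
        pm = predictor_move(ptype, agent_policy)
        value += prob * payoff(agent_policy(pm), pm)
    return value

# Two of the agent's possible policies (maps from the predictor's observed move to the agent's move):
def commit_to_dare(pred_move):
    return D

def swerve_if_dared(pred_move):
    return S if pred_move == D else D

for p in (0.1, 0.3):
    print(p, ex_ante_value(commit_to_dare, p), ex_ante_value(swerve_if_dared, p))
# With these numbers, committing to Dare is ex ante optimal for small p_crazy, but once
# p_crazy exceeds ~2/11 the policy that Swerves against a Daring predictor does better,
# which is the situation discussed in the post and in this thread.
```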

If a lack of clarity here is still causing confusion, maybe I can try to clarify further.

I also suspect you're conflating updates of knowledge with strength and trustworthiness of commitment. It's absolutely possible (and likely, in some formulations about timing and consistency) that a player can rationally make a commitment, and then later regret it, WITHOUT preferring at the time of commitment not to commit.

I’m not sure I understand your first sentence. I agree with the second sentence.

Thanks for sharing, I'm happy that someone is looking into this. I'm not an expert in the area, but my impression is that this is consistent with a large body of empirical work on "procedural fairness", i.e., people tend to be happier with outcomes that they consider to have been generated by a fair decision-making process. It might be interesting to replicate studies from that literature with an AI as the decision-maker.
