Nominated Posts for the 2019 Review

Posts need at least 2 nominations to continue into the Review Phase.
Nominate posts that you have personally found useful and important.
Sorted by fewest nominations:

- Calibrating With Cards (32 karma) — 1 nomination, 0 reviews
- Dual Wielding (60 karma) — 2 nominations, 3 reviews

2019 Review Discussion

[Epistemic status: Strong claims vaguely stated and weakly held. I expect that writing this and digesting feedback on it will lead to a much better version in the future. EDIT: So far this has stood the test of time. EDIT: As of September 2020 I think this is one of the most important things to be thinking about.]

This post attempts to generalize and articulate a problem that people have been thinking about since at least 2016. [Edit: 2009 in fact!] In short, here is the problem:

Consequentialists can get caught in commitment races, in which they want to make commitments as soon as possible. When consequentialists make commitments too soon, disastrous outcomes can sometimes result. The situation we are in (building AGI and letting it self-modify) may be...

Martín Soto · 10d · 4 points
The normative pull of your proposed procedure seems to come from a preconception that "the other player will probably best-respond to me" (and thus my procedure is correctly shaping their incentives). But the other player can instead try to get us to best-respond to them by jumping up a meta-level: they check whether I am playing your proposed policy with a certain notion of fairness $X (which in your case is $5), and punish according to how far their notion of fairness $Y is from my $X, so that I (if I were to best-respond to their policy) would be incentivized to adopt notion of fairness $Y. It seems clear that, for the exact same reason your argument might have some normative pull, this other argument has some normative pull in the opposite direction.

It then becomes unclear which has stronger normative pull: trying to shape the incentives of the other (because you think they might play a policy one level of sophistication below yours), or trying to best-respond to the other (because you think they might play a policy one level of sophistication above yours). I think this is exactly the deep problem, the fundamental trade-off, that agents face in both empirical and logical bargaining.

I am not convinced all superintelligences will resolve this trade-off in similar enough ways to allow for Pareto-optimality (instead of falling into trapped priors, i.e., commitment races), due to the resolution's dependence on the superintelligences' early priors.
Eliezer Yudkowsky · 10d · 5 points
I am denying that superintelligences play this game in a way that looks like "Pick an ordinal to be your level of sophistication, and whoever picks the higher ordinal gets $9."  I expect sufficiently smart agents to play this game in a way that doesn't incentivize attempts by the opponent to be more sophisticated than you, nor will you find yourself incentivized to try to exploit an opponent by being more sophisticated than them, provided that both parties have the minimum level of sophistication to be that smart. If faced with an opponent stupid enough to play the ordinal game, of course, you just refuse all offers less than $9, and they find that there's no ordinal level of sophistication they can pick which makes you behave otherwise.  Sucks to be them!

I agree most superintelligences won't do something which is simply "play the ordinal game" (it was just an illustrative example), and that a superintelligence can implement your proposal, and that it is conceivable most superintelligences implement something close enough to your proposal that they reach Pareto-optimality. What I'm missing is why that is likely.

Indeed, the normative intuition you are expressing (that your policy shouldn't in any case incentivize the opponent to be more sophisticated, etc.) is already a notion of fairness (although in the fi... (read more)
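The kind of threshold policy being debated above can be sketched in a few lines. This is only an illustrative toy, and the specific probabilistic-rejection rule below (accept unfair offers just rarely enough that the proposer's expected take stays under the fair split) is my assumption about how such a policy might be implemented, not a rule quoted from either commenter:

```python
# Toy responder policy for a $10 ultimatum game with a $5 notion of
# fairness (the "$X" in the thread). Fair-or-better offers are always
# accepted; greedier proposals are accepted only with a probability low
# enough that greed does not pay in expectation. This is a hypothetical
# illustration, not the exact rule proposed in the discussion.

TOTAL = 10.0
FAIR_SHARE = 5.0  # the responder's notion of fairness

def acceptance_probability(offer: float) -> float:
    """Probability of accepting `offer` (the responder's cut)."""
    if offer >= FAIR_SHARE:
        return 1.0
    proposer_keep = TOTAL - offer
    # Choose p so the proposer's expected take p * proposer_keep is
    # just under FAIR_SHARE: demanding more than the fair split earns
    # strictly less than the fair split in expectation.
    return max(0.0, FAIR_SHARE / proposer_keep - 1e-9)

def proposer_expected_gain(offer: float) -> float:
    """Proposer's expected payoff against this responder."""
    return acceptance_probability(offer) * (TOTAL - offer)

# A proposer who offers $1 (keeping $9) expects less than the fair $5,
# while an even split is always accepted:
assert proposer_expected_gain(1.0) < FAIR_SHARE
assert proposer_expected_gain(5.0) == FAIR_SHARE
```

Note that this sketch only captures the first level of the regress Soto describes: it hard-codes one notion of fairness, and does nothing about an opponent who punishes according to how far that notion is from their own.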

Here's a pattern that shows up again and again in discourse:

A: This thing that's happening is bad.

B: Are you saying I'm a bad person for participating in this? How mean of you! I'm not a bad person, I've done X, Y, and Z!

It isn't always this explicit; I'll discuss more concrete instances in order to clarify. The important thing to realize is that A is pointing at a concrete problem (and likely one that is concretely affecting them), and B is changing the subject to be about B's own self-consciousness. Self-consciousness wants to make everything about itself; when some topic is being discussed that has implications related to people's self-images, the conversation frequently gets redirected to be about these self-images, rather than the concrete issue. Thus, problems


Certainly many people do the sort of thing you're describing, but I think you're fighting the hypothetical. The post as I understand it is talking about people who fail to live up to their own definitions of being a good person.

For example, someone might believe that they are not a racist, because they treat people equally regardless of race, while in fact they are reluctant to shake the hands of black people in circumstances where they would be happy to shake the hands of white people. This hypothetical person has not consciously noticed that this is a pa... (read more)
