Jessica Taylor. CS undergrad and Master's at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license.
So free software would be effectively banned? Both free-as-in-beer (because that can't pay for liability) and free-as-in-speech (because that doesn't allow controlling distribution).
Your graph shows that expected utility peaks at p=1. Does that mean that, based on your analysis, people should always take the bet?
What this is saying is that if everyone other than you always takes the bet, then you should as well. Which is true: if the other 19 people coordinated to always take the bet, and you get swapped in as the last person and your shirt is green, you should take the bet, because you're definitely pivotal and there's a 9/10 chance there are 18 greens (conditional on your shirt being green, the 18-green world is nine times as likely as the 2-green world, since 18/20 rather than 2/20 of the people are green in it).
If 19 always take the bet and one never does, the team gets a worse expected utility than if they all always took the bet. This is easy to check; see the numerical sketch below.
Another way of thinking about this is that if green people in general take the bet 99% of the time, that's worse than if they take it 100% of the time. So past some point, taking the bet more often is better on the margin.
Globally, the best strategy is for no one to take the bet. That's what 20 UDTs would coordinate on ahead of time.
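To make the check concrete, here's a minimal sketch. The payoffs are my assumption about which version of the problem is meant: a fair coin chooses between an 18-green world (team payoff +12 if the bet is taken) and a 2-green world (team payoff -52), the bet requires unanimous consent from the greens, and the team gets 0 whenever the bet is refused.

```python
# Minimal check of the claims above, under the assumed payoffs.

WORLDS = [(18, 12), (2, -52)]  # (number of greens, team payoff if bet taken)

def team_eu(p_yes):
    """Team EU when each person independently says yes (given a green shirt)
    with probability p_yes; the bet needs unanimous green consent."""
    return sum(0.5 * p_yes ** n_green * payoff for n_green, payoff in WORLDS)

def team_eu_one_refuser():
    """Team EU when 19 people always say yes and 1 fixed person always
    says no; the bet fails exactly when that person's shirt is green."""
    return sum(0.5 * ((20 - n_green) / 20) * payoff
               for n_green, payoff in WORLDS)

print(team_eu(1.0))           # -20.0  everyone always takes the bet
print(team_eu_one_refuser())  # -22.8  worse: a single permanent refuser hurts
print(team_eu(0.99))          # ~-20.5 taking it 99% of the time is also worse
print(team_eu(0.0))           #   0.0  the global optimum: no one takes it
```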
If just one has to agree, then you should say "yes" a small percentage of the time, because that makes it more likely that at least one person says "yes" in the 18-green case than in the 2-green case, since there are 18 chances rather than 2. E.g., if each green says yes 1% of the time, someone says "yes" about 18% of the time in the 18-green case and about 2% of the time in the 2-green case, which is worth it. I think that when your policy says yes the optimal percentage of the time, you as a CDT individual are indifferent between the two options (as implied by my CDT+SIA result). Given this indifference, the mixed strategy is compatible with CDT rationality.
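Here's a sketch of that variant, under the same assumed payoffs; now the bet goes through if at least one green says yes.

```python
# The "just one has to agree" variant, under the assumed payoffs
# (+12 with 18 greens, -52 with 2 greens, fair coin).

WORLDS = [(18, 12), (2, -52)]

def team_eu_at_least_one(q):
    """Team EU when each green independently says yes with probability q;
    the bet goes through if at least one green says yes."""
    return sum(0.5 * (1 - (1 - q) ** n) * payoff for n, payoff in WORLDS)

best_q = max((i / 10000 for i in range(10001)), key=team_eu_at_least_one)
print(best_q, team_eu_at_least_one(best_q))
# ~0.045, EU ~ +1.09: a small yes-rate beats never saying yes

# CDT indifference at the optimum: as a green (SIA gives 9:1 odds on the
# 18-green world), your yes is pivotal only if every other green says no.
marginal_gain = (0.9 * (1 - best_q) ** 17 * 12
                 + 0.1 * (1 - best_q) * (-52))
print(marginal_gain)  # ~0: the marginal value of saying yes vanishes
```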
With the discoordination penalty, always saying "no" is CDT-rational, because you expect others to say "no" and so should say "no" along with them.
You can analyze problems like this in the framework of my UDT/CDT/SIA post, to work out how CDT with Bayesian updating and SIA is compatible with (but does not necessarily imply) the policy you would get from UDT-style policy selection. (Note: SIA is irrelevant for the non-anthropic version of the problem.)
Consider the policy of always saying "no", which is what UDT policy selection gives you. If this is your policy in general, then on the margin, as a "random" green person (under SIA), your decision makes no difference: you are pivotal with probability 0. Therefore it's CDT-compatible to say "no" ("locally optimal" in the post).
Consider alternatively the policy of always saying "yes". If this is your policy in general, then on the margin, as a "random" green person (under SIA), you should say yes: you're definitely pivotal, and conditional on being pivotal it's good in expectation to say "yes" (see the calculation below). This means it's also "locally optimal" to always say yes. But that's compatible with the general result, because the result only says that every globally optimal policy is also locally optimal, not the reverse.
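Concretely, with the payoffs I'm assuming (+12 in the 18-green world, -52 in the 2-green world) and the 9:1 SIA odds a green assigns to the 18-green world:

$$\Pr(\text{pivotal}) = 1, \qquad \mathbb{E}[\text{yes}] - \mathbb{E}[\text{no}] = 0.9 \cdot 12 + 0.1 \cdot (-52) = 5.6 > 0.$$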
Let's also consider "trembling hand" logic, where your policy is to almost always say no (say, with 99% probability). In this case, the probability that you are pivotal if there are 18 greens is $0.01^{17}$ (the other 17 greens must all say yes), whereas if there are 2 greens it's $0.01$. So you're much, much more likely to be pivotal conditional on the second. Given the second, you shouldn't say yes. So under trembling hand logic you'd move from almost always saying "no" to always saying "no", as is compatible with UDT.
If on the other hand you almost always say "yes" (say, with 99% probability), you'd move towards saying yes more often (since you're probably pivotal, and you're probably in the first scenario), which is compatible with the result, since the result only says UDT is globally optimal.
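A sketch of both trembling-hand cases, again under the assumed payoffs and the unanimity rule (your vote is pivotal only when all other greens say yes):

```python
# SIA-weighted marginal value of saying yes, as a green, when each green
# says yes with probability q (unanimity rule, assumed payoffs +12 / -52).

def marginal_value_of_yes(q):
    p_pivotal_18 = q ** 17  # the 17 other greens must all say yes
    p_pivotal_2 = q ** 1    # the 1 other green must say yes
    return 0.9 * p_pivotal_18 * 12 + 0.1 * p_pivotal_2 * (-52)

print(marginal_value_of_yes(0.01))  # ~-0.052 < 0: near "always no",
                                    # saying no is strictly better
print(marginal_value_of_yes(0.99))  # ~+3.96 > 0: near "always yes",
                                    # saying yes is strictly better
```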
The overall framework of the post can be converted to normal-form game theory in the finite case (such as this one). In the language of normal-form game theory, what I am saying is that always saying "no" is a trembling-hand perfect equilibrium of the Bayesian game.
Thanks for the comment! I do think DACs (dominant assurance contracts) are an important economics idea. This post details the main reason why I don't think they can raise a lot of money (compared with copyright etc.) under most realistic conditions, where it's hard to identify lots of people who value the good above some floor. AGI might have an easier time with this sort of thing, through better predictions of agents' utility functions and open-source agent code.
Reconciling free will with physics is a basic part of the decision theory problem. See MIRI's work on the topic, and my own theoretical write-up.
Moral agents are as in standard moral philosophy.
I do think that "moral realism" could be important even if moral realism is technically false: if the world is mostly what would be predicted if moral realism were true, then that has implications, e.g. agents becoming convinced of moral realism, and bounded probabilistic inference leading to moral realist conclusions.
Wouldn't the individual developers on the project be personally liable if they didn't do it through an LLC?