The LessWrongy framework I'm familiar with would say that value = expected utility, so it takes potential downsides into account. You're not risk-averse wrt your VNM utility function, but computing that utility function is hard in practice, and EV calculations can benefit from some consideration of tail risks.
Schelling's The Strategy of Conflict seems very relevant here; a major focus is precommitment as a bargaining tool. See here for an old review by cousin_it.
Iterated chicken seems fine to test, just as a spinoff of the IPD that maps to slightly different situations. (I believe that the iterated game of mutually modeling each other's single-shot strategy is different from iterating the game itself, so I don't think Abram's post necessarily implies that iterated chicken is relevant to ASI blackmail solutions.)
Speaking of iterated games, one natural form of blackmail is for the blackmailee to pay an income stream to the blackmailer; that way, at each time-step they're paying their fair price for the good of [not having their secret revealed between time t and time t+1]. Here's a well-cited paper that discusses this idea in the context of nuclear brinksmanship: Schwarz & Sonin 2007.
It's true the net effect is low to first order, but you're neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart the proxies used to set them, until those proxies carry less meaning.
Given the linked siderea post, maybe this is not very true for insurance in particular. I agree that wasn't a great example.
Slack-wise, uh, choices are bad. Really bad. Keep the sabbath. These are some intuitions I suspect are at play here. I'm not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi's position.
The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If "judge" just meant "make inferences about", why would it be bad?
As Raemon says, knowing that others are making correct inferences about your behavior means you can't relax. No watching soap operas, say, because that's an indicator of being less likely to repay your loans, and your premia go up. There's an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi's explored in his previous posts, and that's part of how I'm interpreting what he's saying here.
But it also helps in knowing who's exploiting them! Why does it give more advantages to... (read more)
I found this pretty useful--Zvi's definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you're somewhat exaggerating the level of gloom-and-doom implicit in the post.
>Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.
No, this isn't bare repetition. I agree with Raemon that "judge" here means something closer to one of its standard usages, "to make inferences about". Though it also fits with the colloquial "deem unworthy for baring [understandable] flaws",... (read more)
The CHAI reading list is also fairly out of date (last updated April 2017) but has a few more papers, especially if you go to the top and select [3] or [4] so it shows lower-priority ones.
(And in case others haven't seen it, here's the MIRI reading guide for learning agent foundations.)
Oh wait, yeah, this is just an example of the general principle "when you're optimizing for xy, and you have a limited budget with linear costs on x and y, the optimal allocation is to spend equal amounts on both."
Formally, you can show this via Lagrange-multiplier optimization, using the Lagrangian $\mathcal{L}(x, y, \lambda) = xy - \lambda(ax + by - c)$. Setting the partials equal to zero gets you $y = \lambda a$ and $x = \lambda b$, and you recover the linear constraint function $ax + by = c$. So $ax = by = \lambda ab$, and the constraint forces $ax = by = c/2$. (Alternatively, just substituting in the constraint and optimizing $x \cdot \frac{c - ax}{b}$ directly works, but I like Lagrange multipliers.)
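A quick numeric sanity check of the equal-spend rule (just a sketch; the values of $a$, $b$, $c$ below are arbitrary examples, not from the post):

```python
import numpy as np

# Sanity check: maximize x*y subject to a*x + b*y = c (arbitrary example values).
# Claim: the optimum spends equal amounts on each input, i.e. a*x = b*y = c/2.
a, b, c = 2.0, 5.0, 10.0

x = np.linspace(0, c / a, 100_001)   # all feasible x given the budget
y = (c - a * x) / b                  # spend the remainder of the budget on y
i = np.argmax(x * y)                 # index of the best allocation on the grid

print(a * x[i], b * y[i])            # both ~= c/2 = 5.0
print(x[i], c / (2 * a))             # grid optimum ~= analytic optimum x* = c/(2a)
```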
In this case, we want to maximize , which is equivalent to optimizing . Let's define , so we're optimizing .
Our constraint function is defined by the tradeoff between and . , so . , so .
Rearranging gives the constraint function, which is indeed linear, with a total 'budget' of .5 and a p-coefficient of 1. So by the above theorem we should have $p = 0.25$.
I think your solution to "reckless rivals" might be wrong? I think you mistakenly put a multiplier of q instead of a p on the left-hand side of the inequality. (The derivation of the general inequality checks out, though, and I like your point about discontinuous effects of capacity investment when you assume that the opponent plays a known pure strategy.)
I'll use slightly different notation from yours, to avoid overloading p and q. (This ends up not mattering because of linearity, but eh.) Let be the initial probabilities for winning and safety|winning. Let be the capacity variable, and without loss of generality let start at and end... (read more)
Yeah, I worry that competitive pressure could convince people to push for unsafe systems. Military AI seems like an especially risky case. Military goals are harder to specify than "maximize portfolio value", but there are probably reasonable proxies, and as AI gets more capable and more widely used there's a strong incentive to get ahead of the competition.
The Kitty Genovese Equation
Someone's in trouble. You can hear them from your apartment, but you can't tell if any of your neighbors are already rushing down, or already calling the police. It's time sensitive, and you've got to decide now: is it worth spending those precious minutes, or not?
Let's define our variables:
- Cost to victim of nobody helping: $C$
- Cost to each bystander of intervening: $k < C$
- Number of bystanders: $N \ge 2$. (Since $k < C$, for $N = 1$ it's always right to intervene.)
Analysis:
Suppose the bystanders all simultaneously decide whether to intervene or not, each intervening with probability $p$. Then expected world-utility is $U_{C,k,N}(p) = -C(1-p)^N - kpN$.
Utility is maximized when $0 = dU/dp = NC(1-p)^{N-1} - kN$; in other words, when $(1-p)^{N-1} = k/C$. Let $\alpha = k/C$ .... (read more)
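As a sanity check on that first-order condition, here's a small numeric sketch (the values of $C$, $k$, $N$ are made up for illustration); it compares a grid-search optimum of $U$ against the closed-form $p^* = 1 - \alpha^{1/(N-1)}$ implied above:

```python
import numpy as np

# Made-up example values: cost to victim C, cost to intervene k, N bystanders.
C, k, N = 100.0, 5.0, 10
alpha = k / C

def U(p):
    # Expected world-utility if each bystander independently intervenes with probability p.
    return -C * (1 - p) ** N - k * p * N

p_grid = np.linspace(0, 1, 100_001)
p_numeric = p_grid[np.argmax(U(p_grid))]
p_closed_form = 1 - alpha ** (1 / (N - 1))   # from (1-p)^(N-1) = alpha

print(p_numeric, p_closed_form)              # should agree to ~1e-5
```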