The malaria story has fair face validity if one observes the wider time series (e.g.). Further, the typical EA 'picks' for net distribution are generally seen as filling around the edges of the mega-distributors.
FWIW: I think this discussion would be clearer if framed in last-dollar terms.
If Gates et al. are doing something like last-dollar optimisation (trying to save as many lives as they can by allocating across opportunities both now and in the future), then leaving the currently-best marginal interventions on the table implies they expect to exhaust their last dollar on more cost-effective interventions in the future.
This implies the right-now marginal price should be higher than the (expected) last-dollar cost-effectiveness (if not, they should be reallocating some of those 'last dollars' to interventions right now). Yet this in turn does not imply we should see $50Bn of marginal-price lifesaving lying around right now. So we can explain Gates et al. not availing themselves of the (non-existent) opportunity to (say) halve communicable disease worldwide for $2Bn a year (extrapolating from right-now marginal prices) without the right-now marginal price being a lie or a manipulation. (Obviously, even if we forecast the Gates et al. last-dollar EV to be higher than the current marginal price, we might venture alternative explanations of this discrepancy besides them screwing us.)
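A toy sketch of this last-dollar logic (every number here is invented for illustration): a funder with a fixed pot, allocating greedily by cost per life across both current and expected future opportunities, can rationally leave the best currently-available intervention entirely unfunded, with no deception about its real marginal price.

```python
# Toy last-dollar model; all figures invented for illustration.
budget = 10_000_000
opportunities = [  # (name, $ per life saved, $ of room for more funding)
    ("bednets, available now",    3500, 5_000_000),
    ("expected future program A", 2000, 8_000_000),
    ("expected future program B", 2500, 6_000_000),
]

lives, spent_on = 0.0, {}
# Fund the cheapest lives first, current or future alike.
for name, cost_per_life, room in sorted(opportunities, key=lambda o: o[1]):
    spend = min(room, budget)
    budget -= spend
    lives += spend / cost_per_life
    spent_on[name] = spend

# The 'now' opportunity is left on the table, even though its $3500/life
# marginal price is perfectly genuine.
print(spent_on)
print(f"{lives:.0f} lives saved in expectation")
```

Here the future programs absorb the whole pot, so "bednets, available now" gets nothing despite remaining a real, currently-purchasable opportunity.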
I also buy the econ story here (and, per Ruby, I'm somewhat pleasantly surprised by the amount of reviewing activity given this).
General observation suggests that people won't find writing reviews that intrinsically motivating (compare just writing posts, which all the authors are doing 'for free' with scant chance of reward; also compare academia - I don't think many academics find peer review/refereeing one of the highlights of their job). With apologies for the classic classical econ joke: if reviewing were so valuable, how come people weren't doing it already? [It also looks like ~25%? of reviews, especially the most extensive, are done by authors on their own work.]
If we assume there's little intrinsic motivation (I'm comfortably in the 'you'd have to pay me' camp), the money doesn't offer much incentive either. Given Ruby's numbers, suppose each of the 82 reviews takes an average of 45 minutes or so (factoring in (re)reading time and similar). If the nomination money is roughly allocated by person-time spent, the marginal expected return on my taking an hour to review is something like $40. Facially, this isn't a bad hourly rate, but the real value is significantly lower:
Sure - there's a fair bit of literature on 'optimal stopping' rules for interim results in clinical trials to try and strike the right balance.
It probably wouldn't have helped much with Salk's dilemma: polio is seasonal and the outcome of interest substantially lags the intervention - which has to precede the exposure - so the 'window of opportunity' is quickly lost; I doubt the statistical methods for conducting this were well-developed in the 50s; and the polio studies were already some of the largest trials ever conducted, so even if available these methods might have imposed still more formidable logistical challenges. So there probably wasn't a neat Pareto improvement of "Let's run an RCT with optimal statistical control governing whether we switch to universal administration" that Salk and his interlocutors could have agreed to pursue.
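To illustrate why such interim-analysis rules matter (a sketch I'm adding, not part of the original exchange): naively peeking at accumulating data with the usual p < 0.05 threshold inflates the false-positive rate well above 5%, which is what adjusted boundaries fix. The ~2.41 z threshold below is approximately the Pocock boundary for five looks at overall α = 0.05.

```python
import random

def z_stat(treat, control):
    """Two-sample z statistic, assuming unit variance per observation."""
    n = len(treat)
    diff = sum(treat) / n - sum(control) / n
    return diff / (2 / n) ** 0.5

def stopped_early(rng, looks=5, batch=50, z_crit=1.96):
    """Run one null trial (no true effect), peeking after every batch."""
    treat, control = [], []
    for _ in range(looks):
        treat += [rng.gauss(0, 1) for _ in range(batch)]
        control += [rng.gauss(0, 1) for _ in range(batch)]
        if abs(z_stat(treat, control)) > z_crit:
            return True  # trial stopped for (spurious) 'efficacy'
    return False

rng, trials = random.Random(0), 2000
naive = sum(stopped_early(rng, z_crit=1.96) for _ in range(trials)) / trials
pocock = sum(stopped_early(rng, z_crit=2.41) for _ in range(trials)) / trials
print(f"false-positive rate, naive 1.96 boundary:    {naive:.3f}")
print(f"false-positive rate, adjusted 2.41 boundary: {pocock:.3f}")
```

With five looks the naive rate comes out somewhere around triple the nominal 5%, while the adjusted boundary keeps it near 5%.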
> Mostly I just find it fascinating that as late as the 1950s, the need for proper randomized blind placebo controls in clinical trials was not universally accepted, even among scientific researchers. Cultural norms matter, especially epistemic norms.
This seems to misunderstand the dispute. Salk may have had an overly optimistic view of the efficacy of his vaccine (among other foibles your source demonstrates), but I don't recall him being a general disbeliever in the value of RCTs.
Rather, his objection is consonant with consensus guidelines for medical research, e.g. the Declaration of Helsinki (article 8):

> While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.

[See also the Nuremberg Code (art. 10), relevant bits of the Hippocratic Oath, etc.]
This cashes out in a variety of ways. The main one is the principle of clinical equipoise: one should only conduct a trial if there is genuine uncertainty about which option is clinically superior. A consequence is that trials are often stopped early if the panel supervising them finds clear evidence of (e.g.) the treatment outperforming the control (or vice versa), as continuing would keep those in the 'wrong' arm in harm's way - even though this comes at an epistemic cost, as the resulting data are poorer than what could have been gathered had the trial run to completion.
I imagine the typical reader of this page will tend to be unsympathetic to the virtue-ethicsy/deontic motivations here, but there is also a straightforward utilitarian trade-off: better information may benefit future patients, at the cost of harming (in expectation) those enrolled in the trial. Although RCTs are the ideal, one can make progress with less (though I agree it is even more treacherous), and the question of the right threshold is fraught. (There are also natural 'slippery slope' worries about taking a robust 'longtermist' position, holding that the value of the evidence for all future patients is worth much more than the welfare of the much smaller number of individuals enrolled in a given trial - the genesis of the Nuremberg Code need not be elaborated upon.)
A lot of this ethical infrastructure post-dates Salk, but it suggests his concerns were forward-looking rather than retrograde (even if he was overconfident in the empirical premise, 'the vaccine works', which drove these commitments). I couldn't in good conscience support a placebo-controlled trial for a treatment I knew worked against a paralytic disease either. Similarly, it seems very murky to me what the right call was given knowledge-at-the-time - but if Bell and Francis were right, that likely owed more to their having a more reasonable (if ultimately mistaken) scepticism about the vaccine's efficacy than Salk, rather than to him just 'not getting it' about why RCTs are valuable.
I'm afraid I couldn't follow most of this, but do you actually mean 'high energy' brain states in terms of aggregate neural activity (i.e. the parentheticals which equate energy to 'firing rates' or 'neural activity')? If so, this seems relatively easy to assess for proposed 'annealing prompts': whether psychedelics/meditation/music/etc. tend to provoke greater aggregate activity seems open to direct calorimetry, let alone proxy indicators.
Yet the indications here are very equivocal: the evidence on psychedelics looks facially 'right', things look a lot more uncertain for meditation and music, and identifying sleep as a possible 'natural annealing process' looks discordant with a 'high energy state' account, as brains seem to consume less energy asleep than awake. Moreover, natural 'positive controls' don't seem supportive: cognitively demanding tasks (e.g. learning an instrument, playing chess) seem to increase brain energy consumption, yet presumably aren't promising candidates for this hypothesised neural annealing.
My guess from the rest of the document is that the proviso about semantically-neutral energy would rule out a lot of these supposed positive controls: the elevation needs to be general rather than well-localised. Yet this is much harder to use as an instrument with predictive power: the neural activity meditation/music/etc. provoke has foci too.
Thanks for this excellent write-up!
I don't have relevant expertise in either AI or SC2, but I wonder whether precision might still be a bigger mechanical advantage than the write-up suggests. Even if humans can (say) max out at 150 'combat' actions per minute, they might misclick, fail to pick out the right unit in a busy and fast battle to focus fire/trigger abilities/etc., and so on. The AI presumably won't have this problem. So even with similar EAPM (and subdividing out 'non-combat' EAPM, which need not be so accurate), AlphaStar may still have a considerable mechanical advantage.
I'd also be interested in how important, beyond some (high) baseline, 'decision making' is at the highest levels of SC2 play. One worry I have is that although decision-making matters (build orders, scouting, etc.), what decides many (?most) pro games is who can more effectively micro in the key battles, or who can best juggle all the macro/econ tasks (considerations in favour: APM is very important, and a lot of SC2's units are implicitly balanced around 'human' limits on unit control). If so, unlike Chess and Go, there may not be deep strategic insights for AlphaStar to uncover to give it the edge, and 'beating humans fairly' essentially becomes an exercise in getting the AI to fall within the band of 'reasonably human' mechanics while still subtly exploiting enough 'microable' advantages to prevail.
Combining the two doesn't solve the 'biggest problems of utilitarianism':
1) We know from Arrhenius's impossibility theorems that you cannot get an axiology which avoids the repugnant conclusion without incurring other large costs (e.g. violations of transitivity, dependence on irrelevant alternatives). Although you don't spell out 'balance utilitarianism' in enough detail to tell what it violates, we know it - like any other population axiology - will have very large drawbacks.
2) 'Balance utilitarianism' seems a long way from the frontier of ethical theories in terms of its persuasiveness as a population ethic.
a) The write-up claims that only actions that increase both sum and median wellbeing are good, those that increase one but not the other are sub-optimal, and those that decrease both are bad. Yet what if we face choices where no option increases both sum and median welfare (such as Parfit's 'mere addition'), and we have to choose between them? How do we balance one against the other? The devil is in these details, and a theory's silence on these cases shouldn't be counted in its favour.
b) Yet even as it stands we can construct nasty counter-examples to the rule, based on very benign versions of mere addition. Suppose Alice is in her own universe at 10 welfare (benchmark this as a very happy life). She can press button A or button B. Button A boosts her up to 11 welfare. Button B boosts her to 10^100 welfare, and brings into existence 10^100 people at (10 - 10^-100) welfare (say, a life as happy as Alice's but with a pinprick). Balance utilitarianism counts pressing button A as good (it increases both total and median) but pressing button B as merely suboptimal (it raises the total while slightly lowering the median). Yet pressing button B is much better for Alice, and also instantiates vast numbers of happy people.
c) The 'median criterion' is going to be generally costly, as it is insensitive to changes in cardinal welfare levels away from the median person/pair so long as the ordering is unchanged (and vice versa).
d) Median views (like average ones) also incur costs due to their violation of separability. It seems intuitive that the choiceworthiness of our actions shouldn't depend on whether there is an alien population on Alpha Centauri who are happier/sadder than we are (e.g. if there's lots of them and they're happier, any act that brings more humans into existence is 'suboptimal' by the lights of balance util).
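To make the counter-example in (b) concrete, here is a small sketch of the verdicts the stated rule delivers (with smaller stand-in numbers, since 10^100 people won't fit in a list):

```python
from fractions import Fraction
from statistics import median

def verdict(old, new):
    """The stated rule: good if sum and median both rise, bad if both
    fall, otherwise suboptimal."""
    d_total = sum(new) - sum(old)
    d_median = median(new) - median(old)
    if d_total > 0 and d_median > 0:
        return "good"
    if d_total < 0 and d_median < 0:
        return "bad"
    return "suboptimal"

baseline = [Fraction(10)]   # Alice alone at welfare 10
button_a = [Fraction(11)]   # Alice boosted to 11
# Stand-ins: 10**6 for Alice's 10^100 welfare, 1001 extra people for
# 10^100 of them, and a 10^-6 pinprick rather than 10^-100.
button_b = [Fraction(10**6)] + [Fraction(10) - Fraction(1, 10**6)] * 1001

print(verdict(baseline, button_a))  # "good"
print(verdict(baseline, button_b))  # "suboptimal"
```

Button B raises the total enormously but nudges the median just below 10, so the rule ranks it behind the trivial improvement of button A.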
(Very minor inexpert points on military history, I agree with the overall point there can be various asymmetries, not all of which are good - although, in fairness, I don't think Scott had intended to make this generalisation.)
1) I think you're right the German army was considered one of the most effective fighting forces on a 'man for man' basis (I recall pretty contemporaneous criticism from allied commanders on facing them in combat, and I think the consensus of military historians is they tended to outfight American, British, and Russian forces until the latest stages of WW2).
2) But it's not clear how much Germany owed this performance to fascism:
3) Per others, it is unclear that 'punching above one's weight' is the right standard for being 'better at violence'. Even if the US had worse infantry, they leveraged their industrial base to give their forces massive materiel advantages. If the metric for being better at violence is winning violent contests, the Germans being better at one aspect matters little if they lost overall.
It's perhaps worth noting that if you add in some chance of failure (e.g. even if everyone goes stag, there's a 5% chance of ending up -5, so Elliott might be risk-averse enough to decline even if they knew everyone else was going for sure), or some unevenness in allocation (e.g. maybe you can keep rabbits to yourself, or the stag-hunt-proposer gets more of the spoils), this further strengthens the suggested takeaways. People often aren't defecting/being insufficiently public spirited/heroic/cooperative if they aren't 'going to hunt stags with you', but are sceptical of the upside and/or more sensitive to the downsides.
One option (as you say) is to try and persuade them the value prop is better than they think. Another worth highlighting is whether there are mutually beneficial deals one can offer them to join in. If we adapt Duncan's stag hunt to have a 5% chance of failure even if everyone goes, there's some efficient risk-balancing option A-E can take (e.g. A-C pool together to offer some insurance to D-E if they go on a failed hunt with them).
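A numerical sketch of both points (all payoffs and the CARA utility function are my invention, not Duncan's): with a 5% failure chance, a sufficiently risk-averse hunter rationally declines the higher-EV hunt, and a side-payment from less risk-averse hunters can bring them back in.

```python
import math

# Invented payoffs: a successful five-person stag hunt pays +10 per
# hunter, a failed one costs -5, and hunting rabbit guarantees +2.
P_FAIL, STAG_WIN, STAG_LOSE, RABBIT = 0.05, 10.0, -5.0, 2.0

def eu(lottery, a):
    """Expected CARA utility u(w) = -exp(-a*w); larger a = more risk-averse."""
    return sum(p * -math.exp(-a * w) for p, w in lottery)

stag = [(1 - P_FAIL, STAG_WIN), (P_FAIL, STAG_LOSE)]
rabbit = [(1.0, RABBIT)]

# The hunt has much higher expected value...
assert (1 - P_FAIL) * STAG_WIN + P_FAIL * STAG_LOSE > RABBIT
# ...but a risk-averse Elliott (a=1.0) still rationally declines it:
assert eu(stag, a=1.0) < eu(rabbit, a=1.0)

# The deal: three less risk-averse hunters (a=0.1) each chip in so that,
# on a failed hunt, the two risk-averse hunters still net the rabbit payoff.
top_up = 2 * (RABBIT - STAG_LOSE) / 3   # each insurer's payment on failure
insurer = [(1 - P_FAIL, STAG_WIN), (P_FAIL, STAG_LOSE - top_up)]
insured = [(1 - P_FAIL, STAG_WIN), (P_FAIL, RABBIT)]

# Now everyone prefers the (insured) hunt to hunting rabbit:
assert eu(insured, a=1.0) > eu(rabbit, a=1.0)
assert eu(insurer, a=0.1) > eu(rabbit, a=0.1)
print("hunt declined without insurance; accepted with it")
```

The design choice here is just that the insurers guarantee the insured hunters their rabbit payoff as a floor; any split that leaves all five better off in expected-utility terms would do.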
[Minor: one of the downsides of 'choosing rabbit/stag' talk is it implies the people not 'joining in' agree with the proposer that they are turning down a (better-EV) 'stag' option.]
> A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.
Happily, this factor has not been missed by either my profile or 80k's work here more generally. Among other things, we looked at:
I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine.
I still think trying to get a handle on the average case is a useful benchmark.