MIRI announces new "Death With Dignity" strategy

Yeah that's essentially the example I mentioned that seems weirder to me, but I'm not sure, and at any rate it seems much further from the sorts of decisions I actually expect humanity to have to make than the need to avoid Malthusian futures.

MIRI announces new "Death With Dignity" strategy

I'm happy to accept the sadistic conclusion as normally stated, and in general I find "what would I prefer if I were behind the Rawlsian Veil and going to be assigned at random to one of the lives ever actually lived" an extremely compelling intuition pump. (Though there are other edge cases that I feel weirder about, e.g. is a universe where everyone has very negative utility really improved by adding lots of new people of only somewhat negative utility?)
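The edge case in that parenthetical can be put in numbers. This is a minimal sketch under pure average utilitarianism; the specific utility values are arbitrary illustrations, not anything from the original discussion:

```python
# A world of 10 people each at utility -100, versus the same world
# plus 1000 new people each at utility -1. Under average
# utilitarianism the second world scores better, even though every
# added life is negative.
existing = [-100] * 10
added = [-1] * 1000

avg_before = sum(existing) / len(existing)
world = existing + added
avg_after = sum(world) / len(world)

print(avg_before)              # -100.0
print(round(avg_after, 2))     # -1.98
```

So on the averagist view the addition is a large improvement, which is the intuition the comment flags as weird.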

As a practical matter though I'm most concerned that total utilitarianism could (not just theoretically but actually, with decisions that might be locked-in in our lifetimes) turn a "good" post-singularity future into Malthusian near-hell where everyone is significantly worse off than I am now, whereas the sadistic conclusion and other contrived counterintuitive edge cases are unlikely to resemble decisions humanity or an AGI we create will actually face. Preventing the lock-in of total utilitarian values therefore seems only a little less important to me than preventing extinction.

MIRI announces new "Death With Dignity" strategy

I think
- Humans are bad at informal reasoning about small probabilities since they don't have much experience to calibrate on, and will tend to overestimate the ones brought to their attention, so informal estimates of the probability of very unlikely events should usually be adjusted even lower.
- Humans are bad at reasoning about large utilities, due to lack of experience as well as issues with population ethics and the mathematical issues with unbounded utility, so estimates of large utilities of outcomes should usually be adjusted lower.
- Throwing away most of the value in the typical case for the sake of an unlikely case seems like a dubious idea to me even if your probabilities and utility estimates are entirely correct; the lifespan dilemma and similar results are potential intuition pumps about the issues with this, and go through even with only single-exponential utilities at each stage. Accordingly I lean towards overweighting the typical range of outcomes in my decision theory relative to extreme outcomes, though there are certainly issues with this approach as well.

As far as where the penalty starts kicking in quantitatively, for personal decisionmaking I'd say somewhere around "unlikely enough that you expect to see events at least this extreme less than once per lifetime", and for altruistic decisionmaking "unlikely enough that you expect to see events at least this extreme less than once in the history of humanity". For something on the scale of AI alignment I think that's around 1/1000? If you think the chances of success are still over 1% then I withdraw my objection.

The Pascalian concern aside, I note that the probability of AI alignment succeeding doesn't have to be *that* low before its worthwhileness becomes sensitive to controversial population ethics questions. If you don't consider lives averted to be a harm, then spending $10B to decrease the chance of 10 billion deaths by 1/10000 is worse value than AMF. If you're optimizing for the average utility of all lives eventually lived, then increasing the chance of a flourishing future civilization to pull up the average is likely worth more, but plausibly only ~100x more (how many people would accept a 1% chance of postsingularity life for a 99% chance of immediate death?), so it'd still be a bad bet below 1/1000000. (Also if attempts to decrease xrisk inadvertently increase it, or if the future ends up run by total utilitarians, it might actually pull the average down.)
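The back-of-the-envelope comparison above can be made explicit. The AMF cost-per-life figure below is an assumed round number for illustration, not a sourced estimate; the other inputs come from the comment:

```python
# Cost-effectiveness sketch for the x-risk bet described above.
xrisk_budget = 10e9          # $10B spent on alignment
lives_at_stake = 10e9        # 10 billion deaths averted if it works
risk_reduction = 1 / 10_000  # reduction in extinction probability purchased

expected_lives = lives_at_stake * risk_reduction   # 1 million lives in expectation
cost_per_life = xrisk_budget / expected_lives      # $10,000 per life

amf_cost_per_life = 5_000    # hypothetical round figure for AMF-style interventions

print(f"x-risk: ${cost_per_life:,.0f}/life  vs  AMF: ${amf_cost_per_life:,.0f}/life")
```

On these assumed numbers the x-risk spend comes out roughly 2x worse than bednets per life saved, before any averagist multiplier for the value of a flourishing future.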

MIRI announces new "Death With Dignity" strategy

I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)

I'd do this to save ten planets' worth of thriving civilizations, but doing it to produce ten planets' worth of thriving civilizations seems unreasonable to me. Nobody is harmed by preventing their birth, and I have very little confidence either way as to whether their existence would wind up increasing the average utility of all lives ever eventually lived.

MIRI announces new "Death With Dignity" strategy

There's some case for it but I'd generally say no. Usually when voting you are coordinating with a group of people with similar decision algorithms whom you have some ability to communicate with; the chance of your whole coordinated group changing the outcome is fairly large, and your own contribution to it is pretty legible. This is perhaps analogous to being one of many people working on AI safety if you believe that the chance that some organization solves AI safety is fairly high (it's unlikely that your own contributions will make the difference, but you're part of a coordinated effort that likely will). But if you believe it is extremely unlikely that anybody will solve AI safety, then the whole coordinated effort is being Pascal-Mugged.

MIRI announces new "Death With Dignity" strategy

This is Pascal's Mugging.

Previously comparisons between the case for AI xrisk mitigation and Pascal's Mugging were rightly dismissed on the grounds that the probability of AI xrisk is not actually that small at all. But if the probability of averting the xrisk is as small as discussed here then the comparison with Pascal's Mugging is entirely appropriate.

They Don’t Know About Second Booster

The cost of Covid is not just unlikely chronic effects, nor vanishingly-unlikely-with-three-shots severe/fatal effects, but also making you feel sick and obliging you to quarantine for ~five days (and probably send some uncomfortable emails to people you saw very recently). With the understandable abandonment of NPIs and the need to get on with life, the chance that you will catch Covid in a given major wave if not recently boosted seems pretty high, perhaps 50%? (There were 30M confirmed US cases during the Omicron wave, and for most of the pandemic confirmed cases seemed to undercount true cases by about 3x, which implies ~90M infections, about 27% of the US population, despite recent boosters and NPIs.) A 100% chance of losing one predictable day (plus perhaps a 5% chance of losing five days) seems much better than a 50% chance of losing five unpredictable days.
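The trade-off in that last sentence, as a quick expected-value sketch using the rough guesses above (all inputs are the comment's estimates, not data):

```python
# Expected days lost: boosting now vs risking infection unboosted.
p_infection_unboosted = 0.50   # rough chance of catching Covid in a major wave
days_sick = 5                  # sick + quarantine days per infection

booster_days = 1               # predictable day lost to side effects
p_breakthrough = 0.05          # rough chance of catching it despite the booster

expected_boosted = booster_days + p_breakthrough * days_sick   # 1.25 days
expected_unboosted = p_infection_unboosted * days_sick         # 2.5 days

print(expected_boosted, expected_unboosted)  # 1.25 2.5
```

And the boosted case's lost day is scheduled at your convenience, while the unboosted losses arrive unpredictably.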

Covid 3/4: Declare Victory and Leave Home

- Is there any reason to think research that could lead to malaria vaccines is funding-constrained? There doesn't seem to be any shortage of in-mice studies, and in light of Eroom's Law the returns on marginal biomedical research investment seem low.
- Malaria is preventable and curable with existing drugs, so vaccines for it only make sense if their cost (including required research) works out lower than preventing it in other ways, which means some strategies that made sense for something like Covid won't make sense here.
- That's not how international waters works, you're still subject to the jurisdiction of the flag country and if they're okay with your trial you could do it more cheaply on land there.
- If you attempt an end-run of the developed-country regulators with your trial they will just refuse to approve anything based on your trial data, which is why pharma companies don't jurisdiction-shop much at present.
- That said developed country regulators do in fact approve challenge trials for malaria vaccines (as I noted) and vaccines for other curable diseases. Regulatory & IRB frameworks no doubt still add a bunch of overhead but this does further bound the potential benefits of attempting to work outside them.
- I don't know what "focusing on epistemics" could possibly entail in terms of concrete interventions. Trying to develop prediction markets, I suppose? I have updated away from the usefulness of those based on their performance over the past year though, and it seems like they are more constrained by policy than by lack of marginal funding (at retail donor levels).
- Policy change is still intractable.
- In general there are lots of margins on which the world might be improved, but the vast majority of them are not plausibly bottlenecked on resources that I or most EAs I know personally control. Learning about a few more such margins is not a significant update. I focus on bednets not because I think it's unusually much more important than other world-improving margins, nor because I think it will be a margin where unusually much improvement happens in coming years, but because it's a rare case of a margin where I think decisions I can make personally (about what to do with my disposable income dollars) are likely to have a nontrivial impact.

Covid 3/4: Declare Victory and Leave Home

It’s plausible that the Covid-19 pandemic could end up net massively saving lives, and a lot of Effective Altruists (and anyone looking to actually help people) have some updating to do. It’s also worth saying that 409k people died of malaria in 2020 around the world, despite a lot of mitigation efforts, so can we please please please do some challenge trials and ramp up production in advance and otherwise give this the urgency it deserves?

What update is this supposed to cause for Effective Altruists? We already knew that policy around all sorts of global health (and other) issues is very far from optimal, but there's nothing we can do about that. Even a global pandemic wasn't enough to get authorities to treat trials and approvals with appropriate urgency and consideration of the costs of inaction, so what hope would a tiny number of advocates have? We can fantasize all day about what we'd do if we ran the world, but back in reality policy change is intractable and donating to incrementally-scalable interventions like bednets remains the best most of us can personally do. Or am I misunderstanding what you meant here?

(Note also that malaria vaccine human challenge trials were already a thing; Effective Altruist John Beshir participated as a subject in one in 2019.)

$1,000 Bounty for Pro-BLM Policy Analysis
Yes, I'm conflating "BLM movement" and "individual Americans who want to help BLM achieve its goals" because isn't it the same thing.

No? I want to help BLM achieve its goals, but "launch a nationwide discussion" and "come to a consensus policy" are not actions I can personally take. If I post policy proposals on Facebook it seems unlikely to me that many people will read or be influenced by them; it also seems unlikely that they would be better than many other policy ideas already out there. If you actually do think that lack of policy ideas is the most important bottleneck for BLM and that personal Facebook posts by non-experts is a promising way of addressing it then that's a possible answer, but if so I'd like to see your analysis for why you believe that.

find solutions that both sides support

Note that at the national level this is inherently very difficult because for any proposal made by one party, the other party has an incentive to oppose it in order to deny the proposing party a victory (and the accompanying halo of strength and efficacy). But fortunately this is not necessarily a problem for at least some approaches to the police reform issue, because police are mostly controlled by state & city governments, and as noted many states and cities are under undisputed Democratic Party control, so the relevant politics are within rather than between parties.

defend shops from looters so people have more sympathy for your side

This seems to have already been done; reports of looting have become increasingly rare and polls report public sympathy for BLM is very high.
