SHUT UP AND MULTIPLY
You must wager. It is not optional.
I work on prioritization research and donation advising, with a focus on AI safety in US politics. I'm really excited about this work, but a big downside is that almost none of it can be shared publicly. I used to write ~25 LW posts per year, but I haven't written any in the past six months! So I'll publish one short post per day for several days. It'll mostly be abstract stuff on prioritization and grantmaking, plus some object-level prioritization takes.
Note: I collaborate closely with Eric Neyman. But these posts will be my personal takes, not a "house view." And Eric has a higher bar for shipping stuff than I do, so these posts will be a lower bound on the quality of stuff he'd write :).
I'm interested in feedback or requests.
Call to action: submit this form to maybe receive AI safety donation recommendations.[1]
Update: I'm done posting daily. I wrote some good posts, but I failed to write some posts that I really want to exist:
- Donations are super important
- The US government is super important
- Don't diversify your impact[2]
- Buying galaxies is not cost-effective
I hope to cause most of these posts to exist in April, but now I'm going back to focusing on non-LW work.
[1] No promises; we may be focused on research, writing, communicating with existing donors, and/or meta stuff rather than onboarding new small donors.
[2] This is too strong; some limited diversification is defensible. But many scope-sensitive people underprioritize scope-sensitive impact, especially donations, for confused reasons.