From the MIRI announcement:
Our big ask for you is: If you have any way to help this book do shockingly, absurdly well — in ways that prompt a serious and sober response from the world — then now is the time.
sober response from the world
sober response
Uh... this is debatably a lot to ask of the world right now.
I said "one of the best movies about", not "one of the best movies showing you how to".
The punchline is "alignment could productively use more funding". Many of us already know that, but I felt like putting a mildly opinionated spin on which things, at the margin, might help top researchers. (Also, I spent several minutes editing/hedging the joke.)
Virgin 2030s [sic] MIRI fellow:
- is cared for so they can focus on research
- has staff to do their laundry
- soyboys who don't know *real* struggle
- 3 LDT-level alignment breakthroughs per week
CHAD 2010s Yudkowsky:
- founded a whole movement to support himself
- "IN A CAVE, WITH A BOX OF SCRAPS"
- walked uphill both ways to the Lightcone offices
- alpha who knows *real* struggle
- 1 LDT-level alignment breakthrough per decade
EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs, IMHO, point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".
https://www.lesswrong.com/posts/tpZciMYCXN49FYWnS/nicholaskross-s-shortform?commentId=f4PxFp8LkKKxdCXxh