This is a special post for quick takes by gustaf. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
In 2015 Sam Altman wrote to Elon Musk:
Been thinking a lot about whether it's possible to stop humanity from developing AI. I think the answer is almost definitely not. If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.
This seems to me like another example of what Vitalik calls Inevitabilism in his essay against Galaxy Brain Arguments.
Here are some things I did after reading If Anyone Builds It, Everyone Dies:
I'd attend a march if 1,000 other people also pledged to march in Germany.
I might contact my representatives (How to email your politician).
Overview of Eliezer Yudkowsky's writing: