"No argument is so compelling that it can be wrong about every claim of fact and still convince skeptics, because skeptics do not believe that they are wrong about the facts." - I don't understand this claim, sorry. You said "about every claim of fact", but I didn't say that. Or you mean something different?
"If you believe something like this about any topic..." - what exactly?
I think my plan E fits neatly in this framework: https://www.lesswrong.com/posts/2xHhe4EBHAFofkQJf/plan-e-for-ai-doom
Thanks, yes, there was a typo!
Noted. I did not have this association myself, but I received feedback along these lines from some people.
I changed the name. Do you think there is a better word (if we limit ourselves to just one)?
Thank you all for your comments and feedback! However pleased I am with the active reception of my idea, that very reception also makes me sad, for obvious reasons.
I agree that sending transmissions of sufficient intensity can be challenging and may be a dealbreaker. It would be great if someone did the proper calculations; perhaps I will do them myself.
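In case anyone wants a starting point for those calculations, here is a minimal back-of-envelope sketch in Python. Every number in it is my own assumption for illustration (an Arecibo-class transmitter, a 100-light-year target, a 1 Hz beacon bandwidth), not a claim about what plan E would actually require; it just applies the inverse-square law to get the flux density at a distant receiver.

```python
import math

# Back-of-envelope link budget for an interstellar beacon.
# All numeric values are illustrative assumptions, not results.

P_tx = 1e6        # transmitter power in watts (assumed: planetary-radar class)
G_tx = 1e7        # transmitter antenna gain, dimensionless (assumed: large dish, narrow beam)
LY = 9.4607e15    # metres per light-year
d = 100 * LY      # assumed target distance: 100 light-years
bandwidth = 1.0   # assumed signal bandwidth in Hz (very narrowband beacon)

# Flux density at the receiver (W / m^2 / Hz) from the inverse-square law:
# the effective radiated power P_tx * G_tx spread over a sphere of radius d.
flux = P_tx * G_tx / (4 * math.pi * d**2) / bandwidth

# Express in janskys (1 Jy = 1e-26 W / m^2 / Hz) for comparison with radio astronomy.
flux_jy = flux / 1e-26
print(f"Received flux density: {flux_jy:.3e} Jy")
```

Under these assumptions the beacon comes out on the order of 100 Jy, far above what our own radio telescopes can detect, but the answer scales with the inverse square of distance and linearly with antenna gain, so the real question is which assumptions are defensible.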
However, I want to emphasise one thing which I probably did not emphasise enough in the article itself: for me, at this point, it is more about acknowledging that something useful can be done even if AI doom is imminent, and about creating a list of ideas, rather than about discussing and implementing a selected few. I gave specific ideas mostly for the sake of illustration, although it is, of course, good if they pan out.
It may just be that suggesting additional ideas for plan E is genuinely hard, so no one has done it, but maybe I did not make a proper call to action, so I am making one now.
As a group of people who compiled and sent this message.
I mean, I can give a concrete sequence:
And so on; this list could be extended for a long time. What I took from the book is that the authors intentionally relax the conservative assumptions about the difficulty of the problem, and it still looks very likely to remain unsolved.