If your goal is to discourage violence, I feel you've missed a number of key considerations that would need to be addressed to actually discourage people. Specifically, I find myself confused by several things:
So, by all means, we at LessWrong condemn any attempt to use violence to solve the race to ASI that kills everyone. But if you were attempting to prevent some group from splintering off to pursue violent means of resistance, I think you've somewhat missed the mark.
This type of post has bad epistemics, because people who support terrorism are probably not going to come and correct you. You will receive no feedback, and your points, even if incorrect, will get amplified.
That is a valid point. I did ask two AIs to point out mistakes in the article, so I got some criticism. One AI wanted me to steelman the position in favor of violence, which I didn't do because I feared it would be taken out of context, and that some might think I was really advocating violence and including the anti-violence arguments as cover.
But isn't a hunger strike a borderline violent thing? If a person is allowed to die from hunger, it will place enormous guilt on the AI creators, which they will perceive as a very personal attack.
Doomers are already claiming that those building AI are threatening everyone's lives, so that in itself is an attempt to put a lot of guilt on the builders.
Violence against AI developers would increase rather than reduce the existential risk from AI. This analysis shows how such tactics would catastrophically backfire and counters the potential misconception that a consequentialist AI doomer might rationally endorse violence by non-state actors.