If your goal is to discourage violence, I feel you've missed a number of key considerations that any such argument needs to address in order to be persuasive. Specifically, I find myself confused by several things:
So, by all means: we at LessWrong condemn any attempt to use violence to stop the race to ASI, a race that may kill everyone. But if you were attempting to prevent some group from splintering off to pursue violent means of resistance, I think you've somewhat missed the mark.
Violence against AI developers would increase rather than reduce the existential risk from AI. This analysis shows how such tactics would catastrophically backfire, and it counters the misconception that a consequentialist AI doomer might rationally endorse violence by non-state actors.