Thanks for this.
A couple of minor comments and a major one:
The nits: the section on the Israeli military's use of AI against Hamas could use some tightening to avoid getting bogged down in the particularities of the Palestine situation. The line "some of the surveillance tactics Israeli *settlers* tested in Palestine" (my emphasis) suggests to me the interpretation that all Israelis are "settlers," which is not the conventional use of that term. Conventionally, "settlers" refers only to those Israelis living over the Green Line, and particularly those doing so with the ideological intent of expanding Israel's de facto borders. Similarly but separately, the discussion of Microsoft's response seemed to me to take as fact what I believe are still only allegations.
The major comment: I feel you could go further in connecting the dots between the "enshittification" of Anthropic and the issues you raise about the potential of AI to help enshittify democratic regimes. The idea that there are "exogenously" good and bad guys, with the former being trustworthy to develop A(G)I and the latter being the ones "we" want to stop from winning the race, is really central to AI discourse. You've pointed out the pattern in which participating in the race turns the "good" guys into bad guys (or at least untrustworthy ones).
I think this is the right response to the piece, but it calls for a more explicit challenge to the conclusion that underdog bias is maladaptive (@Garrett Baker offers both pre-modern tribal life and modern international relations as spheres in which this behavior is sensible).
One ought to be careful of the "anti-bias bias," which leads one to accept evolutionary explanations for biases but then to make up reasons why they're maladaptive, so as to fit the (speculative) narrative that the world can be perfected by increasing the prevalence of objectively true beliefs.
But being equally against both requires a positive program to prevent Option 1 other than the default of halting the technological development that can lead to it (and thus taking Option 2, or at least a delay in immortality, since human-only research is slower)! Conversely, without committing to finding such a program, pursuing the avoidance of Option 2 is an implicit acceptance of Option 1. Are you committing to this search? And if it fails, which option will you choose?
Well, it doesn't sound like I misunderstood you so far, but just so I'm clear: are you not also saying that people ought to prefer being annihilated by a small number of people controlling an AGI aligned to them (and granting them immortality) over dying naturally with no immortality-granting AGI ever being developed? Perhaps even that this is an obviously correct position?
Can you speak to the difficulties of addressing risks from development in the national defense sector, which tends to be secret and therefore exposes us to the streetlight problem?
For this to be true, parliamentarians would have to be like ducklings who imprint on whoever gets to them first, or else perversely run away from protecting The Greater Good merely because it is The Greater Good. That level of cynicism goes far beyond what falls out of the standard Interest Group model (e.g. Mancur Olson's). By that model, given that ControlAI represents a committed interest group, there is no reason to believe they can't win.
First let me say that, with respect to the world of alignment research or the AI world in general, I am nothing. I don't have a job in those areas, and I am physically remote from where the action is. My contribution consists of posts and comments here.
This assertion deserves a lot of attention IMO; it's worthy of a post of its own, something along the lines of "Why Rationalists Aren't Winners" (meant not to mock, but to put it in terms of what rationalism is supposed to do). The gist is that morality is useful for mass coordination to solve collective action problems. When you participate in deliberation about what is good for the group, help arrive at shared answers to the question "how ought we to behave?", and then commit to following those answers, that is power and effectiveness. Overcoming biases that help with coordination so you can, what, win at poker? That is not winning. Nassim Nicholas Taleb covers this quite well.
Thanks for working through your thinking. And thanks for bringing my/our attention to "The Peace War"; I was not aware of it until now. My only caveat is that one must discount the verisimilitude of science fiction, because it demands conflict to be interesting to read: it creates oppressive conditions for the protagonists to overcome, when rational antagonists would have eschewed those oppressive conditions in the first place and never needed to protect themselves from plucky protagonists.
The same kind of reasoning applies to bringing about AI overlords when you don't have to. @Mars_Will_Be_Ours covers this well in their comment.
The egoist/nihilist categories aren't mutually exclusive. "For the environment" is neither nihilistic nor non-egoist when the environment is the provider of everything you need to live a good, free, peaceful, albeit finite, life.
Thanks, Seth. What troubles me at the meta-level is the assumption of exclusive privilege implied by rationalist/utilitarian arguments: that of getting to make choices between extreme outcomes. It's not just "I, as a rationalist, have considered the trade-offs between X and Y and, if forced to, will choose X." It's "I, a rationalist, believe that rationalism is superior to heuristic-based and otherwise inconsistent reasoning, and therefore assume the responsibility of making choices on behalf of those inferior reasoners." From there it's not much further to "I will conceal the 'mild s-risk' of the deaths of billions from them, to get them to ally with me against the x-risks that I am concerned about (but to which they, in their imperfect reasoning, are relatively indifferent)."