"You said it's mutually exclusive here and you said the opposite there" is adversarial in the sense that pointing out any sort of error or inconsistency is adversarial. In other words, it is literally adversarial, but it's a good kind of adversarial. You can't point out an inconsistency without being adversarial!
Why can’t there be a rock-paper-scissors–like structure, where in some position, 12. …Ne4 is good against positional players and bad against tactical players?
I would say that in that situation, the move is bad, but being a positional player after your opponent makes that move is also bad.
A human computer programmer would read your code after you submit it and decide whether your program chooses 1-boxing or 2-boxing.
The human computer programmer is not immune to the halting problem, so he can't always do this.
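To make the halting-problem point concrete, here is a minimal sketch (in Python, with names invented for illustration, not taken from any real Newcomb setup) of a submitted strategy whose box choice is exactly as hard to predict as whether some other program halts. A reviewer who could always classify such submissions could also decide the halting problem, which is impossible in general.

```python
# Hypothetical illustration: the names and setup are invented for this
# sketch, not taken from any actual tournament or library.

def submitted_strategy(program, argument):
    """A strategy whose choice depends on whether program(argument) halts.

    If program(argument) runs forever, this strategy never returns a
    choice; if it halts, the strategy two-boxes. A reviewer who could
    always say what this strategy chooses (or that it chooses nothing)
    could therefore decide whether program(argument) halts.
    """
    program(argument)   # may never return
    return "two-box"    # reached only if program(argument) halts


if __name__ == "__main__":
    # A trivially halting program: prediction is easy in this easy case...
    print(submitted_strategy(lambda x: x + 1, 41))  # -> "two-box"
    # ...but no reviewer can correctly classify every possible submission.
```

So the programmer can often predict a submission's choice, but not always; the claim "would read your code and decide" only holds for programs simple enough to analyze.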
EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I’d argue CAN not be) objectively chosen.
That implies that if I want to make things better for Americans specifically, that would be EA.
Your answer depends heavily on what the rule says you could be swapped with (and on what it even means to be swapped with something of different intelligence, personality, or circumstances--are you still you?). Saying "every human on Earth" isn't getting rid of a nitpick; it's forcing an answer.
Some ideas inherently affect a lot of people. Anything involving government or income redistribution, including Marxism, falls into that category. Anything that's about what all people should do, such as veganism, also does.
You are inherently going to be arguing with a lot of stupid people, or a lot of "super fired up" people, when you argue for ideas that affect such people. And you should have to. Most people wouldn't be able to correctly and logically articulate why you shouldn't steal their car, let alone anything related to Marxism or veganism, but I would say that their objections should still have some bearing on whether you steal it.
Minimum acceptable outcome.
That's a key point that a lot of people are missing when it comes to AI alignment.
The scenarios that people are most worried about, such as the AI killing or enslaving everyone, or making paperclips with no regard for anyone who is made of resources and may be impacted by that, are immoral by pretty much any widely used human standard. If the AI disagrees with some humans about morality, but the disagreement falls within the range over which modern Western humans already disagree, the AI is for all practical purposes aligned.
Nobody means literally nobody when they say "nobody says X".
People don't want to save X number of birds as a terminal goal. They want to save a significant number of birds at a reasonable cost. And the fact that the question asks about saving X number of birds would, in most contexts outside a poll specifically designed not to do so, provide information about what number is significant and what number can be saved at a reasonable cost. Since people asked the question with 2,000 and people asked it with 200,000 are given what would normally be different information, different answers are to be expected.
Also, the phrase "anthropic principle" appears nowhere in this post. Having nothing important happen until humans came around is completely expected: being able to make a graph and being important are strongly correlated, and by the anthropic principle, the time when the graph is made is always centered around now.