A human computer programmer would read your code after you submit it and decide whether your program chooses 1-boxing or 2-boxing.
The human computer programmer is not immune to the halting problem, so he can't always do this.
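To make that concrete, here is a minimal sketch (the Goldbach example and function names are my illustration, not anything from the original setup): a submitted player whose classification as a one-boxer or two-boxer is exactly as hard as an open mathematical question, so no reviewer can settle it just by reading the source.

```python
# Illustrative sketch only: a submitted Newcomb "player" that two-boxes
# if and only if Goldbach's conjecture is false. Deciding whether it
# one-boxes or two-boxes means settling the conjecture, so reading the
# source isn't enough in general.

def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for an illustration."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n: int) -> bool:
    """Can the even number n be written as p + q with both p and q prime?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def choose_boxes() -> str:
    """Two-box iff a counterexample to Goldbach's conjecture exists.

    If the conjecture is true, the search below never terminates and the
    player never takes the second box. Classifying this program is as hard
    as the conjecture itself, and in the general case the classification
    task runs into the halting problem.
    """
    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            return "two-box"  # counterexample found
        n += 2
```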
EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I’d argue CAN not be) objectively chosen.
That implies that if I want to make things better for Americans specifically, that counts as EA.
Your answer depends heavily on what the rule says you could be swapped with (and on what it even means to be swapped with something of different intelligence, personality, or circumstances--are you still you?). Saying "every human on Earth" isn't getting rid of a nitpick; it's forcing an answer.
Some ideas inherently affect a lot of people. Anything involving government or income redistribution, including Marxism, falls into that category. Anything that's about what all people should do, such as veganism, also does.
You are inherently going to be arguing with a lot of stupid people, or a lot of "super fired up" people, when you advocate ideas that affect that many people. And you should have to. Most people wouldn't be able to correctly and logically articulate why you shouldn't steal their car, let alone anything related to Marxism or veganism, but I would say that their objections should still have some bearing on whether you do so.
Minimum acceptable outcome.
That's a key point that a lot of people are missing when it comes to AI alignment.
The scenarios people are most worried about, such as the AI killing or enslaving everyone, or making paperclips with no regard for anyone who is made of resources and may be impacted by that, are immoral by pretty much any widely used human standard. If the AI disagrees with some humans about morality, but the disagreement stays within the parameters about which modern, Western humans already disagree, then the AI is for all practical purposes aligned.
Nobody means literally nobody by "nobody says X".
I didn't mean that there's literally no such thing whatsoever. But "be selfish and ignore the greater good" is constantly derided and rarely even accepted, let alone held up as morally good. The whole reason the rationalist community is tied to EA is its rejection of selfishness.
Obviously self-help books are an exception, in the same way that pro-murder books are an exception to "murder isn't widely accepted".
It would have given my mom the wrong impression about AI extinction risk (that it sounds crazy)
"It sounds crazy" is a correct impression, by definition. I assume you mean "the wrong impression (that it is crazy)".
But there's a fine line between "I won't mention this because people will get the wrong impression (that it's crazy)" and "I won't mention this because people will get the wrong impression (that it's false)". The former is a subset of the latter; are you going to do the latter and conceal all information that might call your ideas into doubt?
(One answer might be "well, I won't conceal information that would lead to a legitimate disagreement based on unflawed facts and reasoning. Thinking I'm crazy is not such a disagreement". But I see problems with this. If you believe in X, you by definition think that all disagreement with X is flawed, so this doesn't restrict you at all.)
I would say that in that situation, the move is bad, but being a positional player after your opponent makes that move is also bad.