Your answer depends heavily on what the rule says you could be swapped with (and what it even means to be swapped with something of different intelligence, personality, or circumstances--are you still you?). Saying "every human on Earth" isn't getting rid of a nitpick; it's forcing an answer.
Some ideas inherently affect a lot of people. Anything involving government or income redistribution, including Marxism, falls into that category. Anything that's about what all people should do, such as veganism, also does.
When you argue for ideas that affect that many people, you are inherently going to be arguing with a lot of stupid people, or a lot of "super fired up" people. And you should have to. Most people wouldn't be able to correctly and logically articulate why you shouldn't steal their car, let alone anything related to Marxism or veganism, but I would say their objections should still have some bearing on whether you steal it.
Minimum acceptable outcome.
That's a key point that a lot of people are missing when it comes to AI alignment.
The scenarios people are most worried about, such as the AI killing or enslaving everyone, or making paperclips with no regard for anyone who is made of usable resources and may be impacted by that, are immoral by pretty much any widely used human standard. If the AI disagrees with some humans about morality, but this disagreement is within the moral parameters about which modern, Western humans already disagree, the AI is for all practical purposes aligned.
Nobody means literally nobody by "nobody says X".
I didn't mean that there's literally no such thing whatsoever. But "be selfish and ignore the greater good" is constantly derided and rarely even accepted, let alone presented as a moral ideal. The whole reason the rationalism community is tied to EA is its rejection of selfishness.
Obviously self-help books are an exception, in the same way that pro-murder books are an exception to "murder isn't widely accepted".
It would have given my mom the wrong impression about AI extinction risk (that it sounds crazy)
"It sounds crazy" is a correct impression, by definition. I assume you mean "the wrong impression (that it is crazy)".
But there's a fine line between "I won't mention this because people will get the wrong impression (that it's crazy)" and "I won't mention this because people will get the wrong impression (that it's false)". The former is a subset of the latter; are you going to do the latter and conceal all information that might call your ideas into doubt?
(One answer might be "well, I won't conceal information that would lead to a legitimate disagreement based on unflawed facts and reasoning. Thinking I'm crazy is not such a disagreement". But I see problems with this. If you believe in X, you by definition think that all disagreement with X is flawed, so this doesn't restrict you at all.)
"People will draw conclusions that harm me" and "people will draw conclusions that weaken my argument" are very different things. Yelling that you shit your pants is in the first category. Saying things that make people less likely to believe in AI danger is in the second.
Hiding information in the second category may help you win, but your goal is to find the truth, not to win regardless of truth. Prosecutors have to turn over exculpatory evidence, and there is a reason for this.
I would disagree that people in the real world act based on what's cheaper. None of the cancellations we already see over other things are done because they're financially optimal for the cancellers. Even companies don't act based on what's financially optimal; if Google was willing to fire James Damore, Google certainly would be willing to fire people for not selling their vote to Google. If Disney is willing to lose millions, maybe billions, of dollars through woke Marvel and Star Wars, I'm pretty sure they'd be willing to fire people who won't sell their votes to them, even if it "isn't cheaper".
It's true that the market hurts companies that do this sort of thing, but there's a long gap between when a company starts losing money by acting against the market and when it actually goes out of business. Disney isn't about to die soon.
And punishing people often does benefit the punisher financially anyway: even though firing an employee costs money, it also intimidates the other employees, reducing the sale price of their votes.
There's also the issue that some people won't sell their vote for the financially optimal price either, so the company or the mob will threaten to fire them to force them to. Many people wouldn't, absent coercion, sell their vote for any price, just like many people won't sell sex. Or they will, but only for a life-changing amount that is many times the market price.
If there are a hundred thousand people like you being asked to sell their votes, and you stand to lose $1000 from the wrong person being elected, your vote can only be 1/100000 of the reason that person gets elected, so on a naive expected-value calculation you should be willing to sell it for $1000/100000 = 1 cent. Under the proper decision theory, you shouldn't sell it for less than $1000, but the number of people in the real world who understand (let alone both understand and agree with) such decision theories is negligible on the scale of voting.
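(For concreteness, here's a minimal sketch of the two calculations above. The numbers come straight from the paragraph; the variable names are mine, and the "correlated" case simply bakes in the assumption that all hundred thousand sellers decide the same way you do.)

```python
# Numbers from the example above: 100,000 people in your position,
# each losing $1,000 if the wrong candidate wins.
voters = 100_000
loss_if_wrong_winner = 1_000.00  # dollars per person

# Naive / causal view: your single vote is only 1/voters of the outcome,
# so the expected cost of selling it is tiny.
naive_floor = loss_if_wrong_winner / voters
print(f"Naive minimum sale price: ${naive_floor:.2f}")            # $0.01

# Correlated view (the "proper decision theory" above): everyone like you
# decides the same way, so selling your vote means they all sell theirs
# and you eat the full loss.
correlated_floor = loss_if_wrong_winner
print(f"Correlated minimum sale price: ${correlated_floor:.2f}")  # $1000.00
```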
That implies that if I want to make things better for Americans specifically, that would be EA.