Besides the scope of a person's boundaries, there's also variance in how bad a boundary violation feels. Those of us who experience boundary violations as particularly negative might prefer others not to try to find benign violations, even if the violator is well-intentioned and sincerely promises to never do that specific thing again. For these people, would-be violators' fear of punishment is a feature. The same goes for people unlikely to experience a benign violation because their gap between social and personal boundaries is small.
One exercise you can try is imagining a world where your currently popular belief is as unpopular as eugenics is now. Almost no one thinks your belief is plausible; most people are dumbfounded or angered by your sincere assertions, and ascribe bad motives to you. Some get mad just because you make an argument that might indirectly support that view. Take 5 minutes to think about what it would be like to experience such a world. If you find yourself less attached to the belief, you might be unduly influenced by its current popularity. (If you're inclined to contrarianism, imagine the opposite.)
The noncentral fallacy is about inappropriately treating a noncentral member of a category as if it were a central member. But your argument is that taxation isn't a member of the category "theft" at all. "Taxation is theft, but that's okay, because it's not the common, bad kind of theft" would be more in line with Scott's responses.
I think the person-affecting view shouldn't be dismissed so quickly. For example, when we talk about poverty-alleviation or health interventions in EA, we talk about how that's good because it makes actual people better off. Similarly, when something is bad, we point to people for whom it's bad, e.g. those who suffer as a consequence of an action. Saving a life isn't consequentially equivalent to creating one, because the counterfactuals are different: in the former, a life would've been nonconsensually terminated, which is bad for that person, but in the latter, there's no one for whom it would be bad. Nor does the person-affecting view endorse human extinction, though it evaluates it less negatively than total utilitarianism does.
So even if, from a total or average utilitarian view, it would be better for you to eventually be replaced by new lives, those new lives wouldn't miss out on anything if they were never created, so they shouldn't count for anything in the decision not to create them; those who already exist, by contrast, count either way.
Upon further consideration, it seems to me that while enforcement can make it worse, much of the prosociality cluster (e.g. guess culture) is oppressive in itself.
The maintenance of already existing cultural traits that are off-putting to outsiders may be more effective than intentionally designing filters: the former are already part of the community, so keeping them doesn't dilute the culture, whereas the process of designing filters is likely to cause contestation within the community about which of its traits are essential and which are peripheral.
It's hard to explicitly describe what the current barriers to entry are, but they include familiarity with LW ideas (and agreement with a lot of them), enjoying the analytical style of discussion and thought, etc. I occasionally see someone come across rationalistsphere and respond with something like "Ugh, a community of robots/autists started by essays written for aliens" - I want to keep whatever it is that repulses them.
I'm a peripheral member of the Berkeley rationalist community, and some of this sounds highly concerning to me. Specifically, in practice, trying to aim at prosociality tends to produce oppressive environments, and I think we need more of people making nonconforming choices that are good for them and taking care of their own needs. I'm also generally opposed to reducing barriers to entry because I want to maintain our culture and not become more absorbed into the mainstream (which I think has happened too much already).
I think you mean ethics and not morals.
Those terms are synonymous under standard usage.
Moral responsibility is related to but not the same thing as moral obligation, and it's entirely possible for a utilitarian to say that one is morally forbidden to stand by and let a murder happen while admitting that doing so doesn't make one responsible for it. This is because responsibility is about causation, while obligation is about what one ought to do. Murderers cause murders and are therefore responsible for them, while bystanders are innocent. The utilitarian should say not that the bystander is as morally responsible as the murderer (because they aren't), but that moral responsibility isn't what ultimately matters.
I don't agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that that feature doesn't exist, while non-cognitivism holds that moral statements only express emotional attitudes ("Yay for X!") or commands ("Don't X!"), which can be neither true nor false. The difference between error theory and subjectivism is that subjectivists believe that some moral statements are true, but that they are made true by something mind-dependent (though what counts as mind-dependent turns out to be quite complicated).