A common response I have observed from Less Wrongers to the challenge of interpersonal utility comparison (although not one I can find an example of via the search feature) is the claim that "we do it all the time". I take this to mean that when we make decisions we often consider the preferences of our friends and family (and sometimes strangers or enemies), and that whatever is going on in our minds when we do this approximates interpersonal utility calculations (in some objective sense). This, to me, seems like legerdemain, for basically this reason:
...One stand restoring to utilitarianism its role of judging policy, is that interpersonal comparisons are obviously possible since we are making them all the time. Only if we denied "other minds" could we rule out comparisons between them. Everyday linguistic usage proves the logical legitimacy of such statements as "A is happier than B" (level-comparison) and, at a pinch, presumably also "A is happier than B but by less than B is happier than C" (difference-comparison). A degree of freedom is, however, left to interpretation, which vitiates this approach. For these everyday statements can,
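As I read it, the "degree of freedom left to interpretation" in the quoted passage is the standard one: a von Neumann-Morgenstern utility function is only determined up to a positive affine transformation, so nothing in the agents' choices pins down a common scale for comparing them. A minimal statement (my notation, not the quoted author's):

\[
U_A \mapsto a\,U_A + b \;(a>0) \quad\text{and}\quad U_B \mapsto c\,U_B + d \;(c>0)
\]

represent exactly the same preferences, so a claim like \(U_A(x) > U_B(y)\) can be made true or false just by rescaling one of the functions, unless some further normalization is stipulated.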
The other day, I forgot my eyeglasses at home, and while walking I got a good-sized piece of dust or dirt lodged in my eye. My eye was incapacitated for the better part of a minute until tears washed it out. I had a bit of an epiphany: 3^^^3 dust specks suddenly seem a lot scarier, something you obviously need to aggregate and assign a monstrous pile of disutility to. So basically, I have updated my position on torture vs. specks.
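(The aggregation step there is just arithmetic; a rough sketch, with \(\varepsilon\) standing in for whatever small but nonzero disutility I now assign a single speck:

\[
3\uparrow\uparrow\uparrow 3 \cdot \varepsilon \gg D_{\text{torture}}
\]

for any disutility \(D_{\text{torture}}\) of fifty years of torture one could plausibly write down, since \(3\uparrow\uparrow\uparrow 3\) dwarfs any realistic ratio \(D_{\text{torture}}/\varepsilon\).)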
I have a pill that will make you a psychopath. You will retain all your intellectual abilities and all understanding of moral theory, but your emotional reactions to others' suffering will cease. You will still have the empathy to understand that others are suffering, but you won't feel automatic sympathy for it.
Do you want to take it?
I am having a discussion on reddit (I am TheMeiguoren), and I have a moral quandary that I want to run by the community.
I'll highlight the main point (the context is a discussion about immortality):
...imbecile: For someone to have several lifetimes to be considered a good thing, it must be conclusively shown that this person improves the life of others more and faster than several other people could achieve in their lifetime together with the resources he has at his disposal.
me: If my existence really was harming the human race by not being as efficient a
I have a question: what is akrasia exactly?
Say I have to finish a paper, but I also enjoy wasting time on the internet. All things considered, I decide it would be better for me to finish the paper than to waste time on the internet. And yet I waste time on the internet. What's going on there? It can't just be a reflex or a tic: my reflexes aren't that sophisticated. Given how complicated wasting time on the internet is, and that I decidedly enjoy it, it looks like an intentional action, something which is the result of my reasoning. Yet I reasoned...
Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".
I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how goo...
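One way to make S precise, on my own reading and with \(m_A\) standing for the complete mental state of agent \(A\):

\[
U_A = f(m_A) \quad\text{for some function } f,
\]

i.e. any two possible worlds that agree on \(A\)'s mental state must assign \(A\) the same utility, whatever else differs between them.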
Love - an increase in her utility causes an increase in your utility.
Hate - an increase in her utility causes a decrease in your utility.
Indifference - a change in her utility has no influence on your utility.
Love = good.
Hate = evil.
Indifference = how almost everyone feels towards almost everyone.
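A minimal way to put those three relations in symbols, assuming (my assumption, not the original poster's) that her utility enters your utility function as an argument:

\[
\text{Love: } \frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} > 0, \qquad
\text{Hate: } \frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} < 0, \qquad
\text{Indifference: } \frac{\partial U_{\text{you}}}{\partial U_{\text{her}}} = 0.
\]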
[...]and related-to-rationality enough to deserve its own thread.
I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former. So they may not even be completely distinguishable. We've got lots of commonalities between the two, noting that both are very difficult for humans due to our haphazard makeup, and both have imaginary Ideal versions (respectively: God, and the agent who only has true beliefs and optimal decisions and infinite comp...
Question: What is the definition of morality? What is morality? For what do humans use this concept, and what motivates humans to better understand morality, whatever it is?
So what does "adding up to normality" mean?
It means that if in your branch you are the first one to whistle the tune, there is no one else in your branch to contradict you. (Just as you would expect in One World.) In some other branch someone else was first, and in that branch you don't think that you were the first, so again no conflict.
if my normal is Newtonian physics
Then "adding up to normal" means that even when Einstein ruins your model, all things will behave the same way as they always did. Things that within given precision obeyed the Newtonian physics, will continue to do it. You will only see exceptions in unusual situations, such as GPS satellites. (But if you had GPS satellites before Einstein invented his theory, you would have seen those exceptions too. You just didn't know that would happen.)
In the case of morality, it means that if you had a rule "X is good" because it usually has good consequences (or because it follows the rules, or whatever), then "X is good" even under Many Worlds. The exception is if you try to attach moral significance to a photon moving through a double slit.
An explanation may change: for example it was immoral to say "if the coin ends this side up, I will kill you", and it is still immoral to do so, but the previous explanation was that "it is bad to kill people with 50% probability" and the new explanation is "it is bad to kill people in 50% of branches" (which means killing them with 50% probability in a random branch).
Okay, so on reflection, I think the idea that it all adds up to normality is just junk. It doesn't mean anything. I'll try to explain:
A: MW comes into conflict with this ethical principle.
B: It can't come into conflict. Physics always adds up to normality.
A: Really? Suppose I see an apple falling, and you discover that there's no such thing as an apple, but that what we called apples are actually a sub-species of blueberries. Now I've learned that I've in fact never seen an apple fall, since by 'apple' I meant the fruit of an independent species of plant. ...
I figure morality as a topic is popular enough and important enough and related-to-rationality enough to deserve its own thread.
Questions, comments, rants, links, whatever are all welcome. If you're like me you've probably been aching to share your ten paragraph take on meta-ethics or whatever for about three uncountable eons now. Here's your chance.
I recommend reading Wikipedia's article on meta-ethics before jumping into the fray, if only to get familiar with the standard terminology. The standard terminology is often abused. This makes some people sad. Please don't make those people sad.