@J Thomas: "Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"
Because they have a specific argument which leads them to believe that?
You know, there's no reason why one couldn't consider one language more efficient at communication than others, at least by human benchmarks, all else being equal (how well people know the language, etc.). Ditto for morality.
Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about what is fair versus what is not fair if you don't explicitly define "fair" in the first place. It's more than a little surprising that this isn't very obvious.
"Giving N people each 1/Nth is nonetheless a fair sort of thing to do"
How can we know this unless we actually define what "fair" is, or what its bedrock is? Or are we just assuming that roughly, "fair" means "equal proportions"?
@Eliezer: "As this is what I identify with the meaning of the term, 'good'..."
I'm still a little cloudy about one thing though, Eliezer, and this seems to be the point Roko is making as well. Once you have determined what physically has happened in a situation, and what has caused it, how do you inarguably decide that it is "good" or "bad"? Based on what system of preferring one physical state over another?
Obviously, saving a child from death is good, but how do you decide in trickier situations where intuition can't do the work for you, and where people just can't agree on anything, like say, abortion?
Are we really still beating up on group selectionism here, Eliezer?
I think this fallacy needs to be corrected. Yes, group selection is real. Maybe not in the anthropomorphic sense of organisms "voluntarily" restraining their breeding, but in terms of adaptation, yes: individual genomes will adapt to survive better as required by the group. They have no choice BUT to do this, or else they go extinct.
The example Eliezer gave of insect populations being selected for low population actually proves group selectionism. Why? Because it doesn't matter that the low group population was achieved by cannibalism, so long as the populations stayed low enough that their prey population would not crash.
Saying group selection isn't real is as fallacious as saying a “Frodo” gene cannot exist, despite the fact that it does, in reality.
Can we correct these misconceptions yet?
This post is called "The Meaning of Right", but it doesn't spend much time actually defining which situations should be considered right instead of wrong, other than a bit at the end that seems to define "right" as simply "happiness". Rather, it's a lesson in how to take your preferred world state and causally link it to what you'd have to do to get to that state. But that world state is still ambiguously right/wrong, in any absolute sense, as of this post.
So does this post say what "right" means, other than simply "happiness" (which sounds like generic utilitarianism), or am I simply missing something?
Eliezer: Wiseman, if everyone were blissed-out by direct stimulation of their pleasure center all the time, would that by definition be moral progress?
Compared to today's state of affairs in the world? Yes, I think that would be enormous moral progress compared to right now (so long as the bliss was not short-term and would not burn out eventually and leave everyone dead; so long as the bliss was of an individual's choice; so long as it really was everyone in bliss, and others didn't have to suffer for it; etc.).
I don't get this side debate between Eliezer and Caledonian.
Caledonian's original comment was "Deeper goals and preferences can result in the creation and destruction of shallower ones", which cites a common and accepted belief in cognitive science that there is such a thing as hierarchical goal systems, which might explain human behavior. Nothing controversial there.
Eliezer responds by saying that emotions, not goals, have to be flat, and further, that "each facet of ourselves that we judge, is judged by the whole", which is only ambiguously related to both goals and emotions.
Now Caledonian, did you mean something other than just generic goals to explain this conflict?
Or Eliezer, do you really believe that a goal system is necessarily flat, or that emotions == goals? If so, on what grounds?
In that case I don't think MWI says anything we didn't already know: specifically, that 'stuff happens' outside of our control, which is something we have to deal with even in non-quantum lines of thought. Trying to make choices differently when acknowledging that MWI is true will probably result in no utility gain at all, since saying that x number of future worlds out of the total will result in some undesirable state is the same as saying, under Copenhagen, that the chance it will happen to you is x out of the total. And that lack of meaningful difference should be a clue as to MWI's falsehood.
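The decision-theoretic equivalence claimed above can be sketched numerically. This is a minimal illustration with made-up outcomes and utilities (not anything from the post): reading "3 of 10 future worlds turn out badly" as a fraction of branches (MWI) or as a 0.3 probability in a single world (Copenhagen) yields the same expected utility, so no choice would change.

```python
# Hypothetical outcomes and utilities, chosen only for illustration.
outcomes = {"undesirable": -10.0, "acceptable": 2.0}

# MWI reading: 3 of 10 future branches contain the undesirable outcome.
branches = ["undesirable"] * 3 + ["acceptable"] * 7
eu_branches = sum(outcomes[b] for b in branches) / len(branches)

# Copenhagen reading: a 0.3 chance of the undesirable outcome happening to you.
probs = {"undesirable": 0.3, "acceptable": 0.7}
eu_probs = sum(p * outcomes[o] for o, p in probs.items())

print(eu_branches, eu_probs)  # both evaluate to -1.6
```

Since the two readings always agree on expected utility, an agent ranking actions by expected utility behaves identically under either interpretation.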
In the end the only way to guide our actions is to abide by rational ethics, and seek to improve those.