Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
No evidence is given for the central claim: that humans can converge, and are converging, towards a true morality we would all agree about if only we understood more true facts.
We're told that people in the past disagreed with us about some moral questions, but that we know more, so we changed our minds, and we are right while they were wrong. But no direct evidence is given for us being more correct than they were.
2hairyfigment4yI'm getting really sick of this claim that Eliezer says all humans would agree
on some morality under extrapolation. That claim is how we get garbage like this
[http://lesswrong.com/lw/7wu/open_thread_october_2011/dgc1?context=1#comments].
At no point do I recall Eliezer saying psychopaths would definitely become moral
under extrapolation. He did speculate about them possibly accepting
modification. But the paper linked here
[http://lesswrong.com/lw/le5/welcome_to_less_wrong_7th_thread_december_2014/c6yi]
repeatedly talks about ways to deal with disagreements which persist under
extrapolation:
(Naturally, Eugine Nier as "seer" downvoted all of my comments.)
The metaethics sequence does say IMNSHO that most humans' extrapolated volitions
(maybe 95%) would converge on a cluster of goals which include moral ones. It
furthermore suggests that this would apply to the Romans if we chose the 'right'
method of extrapolation, though here my understanding gets hazier. In any case,
the preferences that we would loosely call 'moral' today, and that also survive
some workable extrapolation, are what I seem to mean by "morality".
One point about the ancient world: the Bhagavad Gita, produced by a warrior
culture though seemingly not by the warrior caste, tells a story of the hero
Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn't
change his mind simply because of arguments about duty. In the climax, Krishna
assumes his true form as a god of death with infinitely many heads and jaws,
saying, 'I will eat all of these people regardless of what you do. The only deed
you can truly accomplish is to follow your warrior duty or dharma.' This view
seems plainly environment-dependent.
1Bound_up4yNo, you're totally right.
I've simplified it a bit for the sake of brevity and comprehension of the
central idea, but yeah, it's probably right to say that humans are all born with
ABOUT the same morality equation. And also true that psychopaths' equation is
further away than most people's.
0Raiden4yI don't think its being unfalsifiable is a problem. I think this is more of a
definition than a derivation. Morality is a fuzzy concept that we have
intuitions about, and we like to formalize these sorts of things into
definitions. This can't be disproven any more than the definition of a triangle
can be disproven.
What needs to be done instead is show the definition to be incoherent or that it
doesn't match our intuition.
0Bound_up4yYou're right; I've provided no evidence.
Do you think the idea is sufficiently coherent and non-self-contradictory that
the way to find out if it's right or wrong is to look for evidence?
If it was incoherent or contradicted itself, it wouldn't even need evidence to
be disproven; we would already know it's wrong. Have I avoided being wrong in
that way?
(By the way, understanding slavery might be necessary, but not sufficient, to get
someone to be against it. They might also need to figure out that people are
equal. Good point; I might need to add that note to the post.)
2Lumifer4yYou do understand that debates about objective vs relative morality have been
going on for millennia?
No, they don't if they themselves are in danger of becoming slaves. Notably, a
major source of slaves in the ancient world was defeated armies. Slaves weren't
a clearly different group of people (as blacks were in America); anyone could become
a slave if his luck turned out to be really bad.
0Bound_up4yRight. Someone could be against slavery for THEM personally without being
against slavery in general if they didn't realize that what was wrong for them
was also wrong for others. That's all I'm getting at, there.
Or do you mean that they should have opposed slavery for everybody as a sort of
game theory move to reduce their chance of ever becoming a slave?
"You do understand that debates about objective vs relative morality have been
going on for millennia?"
What I'm getting at here is that most moral theories are so bad you don't even
need to talk about evidence. You can show them to be wrong just because they're
incoherent or self-contradictory.
It's a pretty low standard, but I'm asking if this theory is at least coherent
and consistent enough that you have to look at evidence to know if it's wrong,
instead of just pointing at its self-defeating nature to show it's wrong. If so,
yay, it might be the best I've ever seen. :)
4Lumifer4yHuh? I'm against going to jail personally without being against the idea of jail
in general. In any case, wasn't your original argument that ancient Greeks and
Romans just didn't understand what it means to be a slave? That clearly does
not hold.
Do you mean descriptive or prescriptive moral theories? If descriptive, humans
are incoherent and self-contradictory.
Which moral theories do you have in mind? A few examples will help.
0Bound_up4yMmm, that's not quite the right abstraction. You're probably against innocents
going to jail in general, no?
Whereas some Roman might not care, as long as it's no one they care about.
All I'm getting at is that the Romans didn't think certain things were wrong,
but if they were shown in a sufficiently deep way everything we know, they would
be moved by it, whereas if we were shown everything they know, we would not find
it persuasive of their position. Neither would they, after they had seen what
we've seen.
I'm talking metaethics, what makes something moral, what it means for something
to be moral. Failed ones include divine command theory, the "whatever
contributes to human flourishing" idea, whatever makes people happy, whatever
matches some platonic ideals out there somehow, whatever leads to selfish
interest, etc.
2Lumifer4yThat doesn't seem obvious to me at all.
Let's try it on gay marriage. Romans certainly knew and practiced homosexuality,
same for marriage. What knowledge exactly do you want to convey to them to
persuade them that gay marriage is a good thing?
So, prescriptive. I am not sure in what sense you consider the theories
"failed" -- in the sense that they have not risen to the status of physics,
being able to empirically prove all their claims? That doesn't look like a
viable criterion. In the sense of not having taken over the world? I don't
know; divine command theory is (or, at least, has been) pretty good at that.
You probably wouldn't want a single theory to take over the world, anyway.
-1hairyfigment4yKind of a weird example
[https://en.wikipedia.org/wiki/History_of_same-sex_unions#Old_World], but I'll
assume we're talking about the Praetorian Guard. The Romans seem to have had
very little respect for women and for being penetrated. So right off the bat,
having them read a lot of women's minds might change their views. (I'm not sure
if I want to classify that as knowledge, though.) They likely also have false
beliefs not only about women but about the gods and stable societies. None of
this seems like a cure-all, but it does seem extremely promising.
3Lumifer4yI don't understand what that means.
You think no male Roman actually knew what women think? The Roman matrons were
entirely voiceless?
0Carinthium4yI think hairyfigment is of the belief that the Romans (and in the most coherent
version of his claim you would have to say male and female) were under
misconceptions about the nature of male and female minds, and believes that "a
sufficiently deep way" would mean correcting all these misconceptions.
My view is that we really can't say that as things stand. We'd have to know a
lot more about the Roman beliefs about the male and female minds, and compare
them against what we know to be accurate about male and female minds.
0Lumifer4yAnd what evidence do you have that they laboured under such major misconceptions
which we successfully overcame?
0Carinthium4yI was trying to say with my second paragraph that we specifically cannot be sure
about that. My first paragraph was simply my best effort at interpreting what I
think hairyfigment thinks, not a statement of what I believe to be true.
From my vague recollections, I think the idea is worth looking up one way or the
other. After all, a massive portion of modern culture is under the impression
that there are no gender differences, and there are other instances of clear major
misconceptions I can actually attest to throughout history. But I don't have any
idea with the Romans.
0Lumifer4yThat's the stupid portion of modern culture, and I'm not sure they actually, um,
practice that belief. Here's a quick suggestion: make competitive sports
sex-blind :-/
I don't think it's massive, either.
0DanArmak4yYes, I think it is coherent.
Ideological Turing test: I think your theory is this: there is some set of
values, which we shall call Morals. All humans have somewhat different sets of
lower-case morals. When people make moral mistakes, they can be corrected by
learning or internalizing some relevant truths (which may of course be different
in each case). These truths can convince even actual humans to change their
moral values for the better (as opposed to values changing only over
generations), as long as these humans honestly and thoroughly consider and
internalize the truths. Over historical time, humans have approached closer to
true Morals, and we can hope to come yet closer, because we generally collect
more and more truths over time.
If you mean you don't have any evidence for your theory yet, then how or why did
you come by this theory? What facts are you trying to explain or predict with
it?
Remember that by default, theories with no evidence for them (and no unexplained
facts we're looking for a theory about) shouldn't even rise to the level of
conscious consideration. It's far, far more likely that if a theory like that
comes to mind, it's due to motivated reasoning -- for example, wanting to
claim your morality is better by some objective measure than that of other
people, like slavers.
That's begging the question. Believing that "people are equal" is precisely the
moral belief that you hold and ancient Romans didn't. Not holding slaves is
merely one of many results of having that belief; it's not a separate moral
belief.
But why should Romans come to believe that people are equal? What sort of
factual knowledge could lead someone to such a belief, despite the usually
accepted idea that "should" cannot be derived from "is"?
0Bound_up4yThis is an explanation of Yudkowsky's idea from the metaethics sequence. I'm
just trying to make it accessible in language and length with lots of concept
handles and examples.
Technically, you could believe that people are equally allowed to be enslaved.
All people equal + it's wrong to make me a slave = it's wrong to make anyone a
slave.
"All men are created equal" emerges from two or more basic principles people are
born with. You might say: "Look, you have value, yah? And your loved ones? Would
they stop having value if you forgot about them? No? They have value whether or
not you know them? How did you conclude they have value? Could that have
happened with other people, too? Would you then think they had value? Would they
stop having value if you didn't know them? No? Well, you don't know them; do
they have value?"
You take "people I care about have value" (born with it) and combine it with "be
consistent" (also born with), and you get "everyone has value."
That's the idea in principle, anyway. You take some things people are all born
with, and they combine to make the moral insights people can figure out and
teach each other, just like we do with math.
1DanArmak4yIn a sense, the ancient Romans did believe this. Anyone who ended up in the same
situation - either taken as a war captive or unable to pay their debts - was
liable to be sold as a slave. So what makes you think your position is
objectively better than theirs?
This assumes without argument that "value" is something people intrinsically
have or can have. If instead you view value as value-to-someone, i.e. I value my
loved ones, but someone else might not value them, then there is no problem.
And it turns out that yes, most people did not have an intuition that anyone has
intrinsic value just by virtue of being human. Most people throughout history
assigned value only to ingroup members, to the rich and powerful, and to
personally valued individuals. The idea that people are intrinsically valuable
is historically very new, still in the minority today globally, and for both
these reasons doesn't seem like an idea everyone should naturally arrive at if
they only try to universalize their intuitions a bit.
0TheAncientGeek4yYou realise that's a reinvention of Kant?
Would this be an accurate summary of what you think is the meta-ethics sequence? I feel that you captured the important bits, but I also feel that we disagree on some aspects:
values that motivate actions (the set of concepts that agents care about) are two-place computations, one place for the class of beings (and possibly other parameters locating them) and the other for individual beings.
0Bound_up4yI think that's right.
Except that something is moral whether any being cares about morality or not,
just like something is prime regardless of whether or not anyone cares about
primality.
It's not that morality is there because of evolution, but that beings who CARE
about morality are there because of evolution.
I'm not sure what you mean by fragile morality, but since you've gotten pretty
much everything right, I suspect you've got the right idea, there, too.
0TheAncientGeek4yAnd what happens when you plug in MrMind's claim that there are multiple
species-specific moralities? Doesn't that mean that every action is both moral and
immoral from multiple perspectives?
0Bound_up4yI think we've ceased to argue about anything but definitions.
Cut out "morality" and get:
Different species have different sets of values they respond to. Every action is
valued according to some such sets of values and not valued, or negatively valued,
by other sets of values.
You can call any set of values "a" morality if you want, but I think that ceases
to refer to what we're talking about when we say something is moral whether
anybody values it or not.
0TheAncientGeek4yI'm not advocating the idea that morality is value, I am examining the
implications of what other people have said.
You wrote an article purporting to explain the Yudkowskian theory of morality,
and, indeed the one true theory of morality, since the two are the same.
Hypothetically, making a few comments about value, and nothing but value,
doesn't do what is advertised on the label. The reader would need to know how
value relates back to morality.
And in fact you supplied the rather definitional-sounding statement that
Morality is Values.
If you base an argument on a definition, don't be surprised if people argue
about it. The alternative, where someone can stipulate a definition but no one
can challenge it, is a game that will always be won by the first to move.
0TheAncientGeek4yAnd what happens when you plug in MrMind's claim that there are multiple
species-specific moralities? Doesn't that mean that every action is both moral and
immoral from multiple perspectives?
Unpacking "should" as "morally obligated to" is potentially helpful, insofar as you can give separate accounts of "moral" and "obligatory".
The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff
That doesn't generalise to the point that non-humans have no morality. You have m...
0Bound_up4yOkay. By saying "If they have failed to grasp that morality is obligatory, have
they understood it at all? They might continue caring more about eggnog, of
course. That is beside the point... morality means what you should care about,
not what you happen to do."
it seems you have not understood the idea. Were there any parts of the post
that seemed unclear that you think I might make clearer?
Because the whole point is that to say something is moral = you should do it =
it is valued according to the morality equation.
For an Elf to agree something is moral is also to agree that they should do it.
When I say they agree it's moral and don't care, that also means they agree they
should do it and don't care.
Something being Christmas-Spiritey = you spiritould do it. Humans might agree
that something is Christmas-Spiritey, and agree that they spiritould do it;
they just don't care about what they spiritould do, they only care about what
they should do.
Moral is to Christmas-Spiritey what "should" is to (make up a word like)
"spiritould".
Obligatory is just a kind of "should." Elves agree that some things are
obligatory, and don't care, they care about what's ochristmastory.
Likewise, to say that today's morality equation is the "best" is to say that
today's morality equation is the equation which is most like today's morality
equation. Tautology.
Best = most good, and good = valued by the morality equation.
0TheAncientGeek4yAlmost everything. You explain morality by putting forward one theory. Under
those circumstances, most people would expect to see some critique of other
theories, and an explanation of why your theory is the One True Theory. You don't
do the first, and it is not clear that you are even trying to do the second.
And you say that only humans have morality. But if there is something the Elves
should do, then morality applies to them, contradicting that claim.
That doesn't help. For one thing, humans don't exactly want to be moral... their
moral fibre has to be buttressed by various punishments and rewards. For
another, "should" and "want to" are not synonyms... but "moral" and "what you
should do" are. So if there is something the Elves should do, at that point you
have established that morality applies to the Elves, and the fact that they
don't want to do it is a side-issue. (And of course they could tweak their own
motivations by constructing punishments and rewards.)
OK. Now you seem to be saying... without quite making it explicit, of course...
that morality is by definition unique to humans, because the word "moral" just
labels what motivates humans, in the way that "Earth" or "Terra" labels the
planet where humans live. That claim isn't completely incomprehensible, it's
just strange and arbitrary, and what is stranger still is the way you feel
no need to defend it against alternative theories -- the main alternative being
that morality is multiply instantiable, that other civilisations could have
their own versions, in the way they could have their own versions of houses or
money.
You state it as though it is obvious, yet it has gone unnoticed for thousands of
years.
Suppose I were to announce that dark matter is angels' tears. Doesn't it need
some expansion? That's how your claim reads; that's the outside view.
Obligatory is a kind of "should" that shouldn't be overridden by other
considerations. (A failure to do what
1Bound_up4yNo, no, no...
Every possible creature, and every process of physics SHOULD do XYZ. But
practically nothing is moved by that fact.
This sentence means: It is highly valued in the morality equation for XYZ to be
the state of affairs, independently of who/what causes it to be so.
Likewise, everything Spiritould do ABC, but only Elves are moved by that fact.
These are objective equations which apply to everything. To say should,
spiritould, clipperould, etc., is just to say about different things that they
are valued by this equation or that one. It's an objective truth that they are
valued by this equation or that one.
It's just that humans are not moved by almost any of the possible equations.
They ARE moved by the morality equation.
Humans and Elves should AND spiritould do whatever. They are both equally
obligated and ochristmasated. But one species finds one of those facts moving
and not the other, and the other finds the other moving and not the one.
Perhaps now it is clear?
1TheAncientGeek4yIt is not a clear expression of something that can be seen to work.
Version 1.
I am obligated both to do and not to do any number of acts by any number of
shouldness-equations.
If that is the case, anything resembling objectivism is out of the window. If I
am obligated to do X, and I do X, then my action is right. If I am obligated not
to do X, and I do X, my action is wrong. If I am both obligated and not
obligated to do X, then my action is somehow both right and wrong... that is, it
has no definite moral status.
But that's not quite what you were saying.
Version 2.
There are lots of different kinds of morality, but I am only obligated by human
morality.
That would work, but it's not what you mean. You are explicitly embracing...
Version 3.
There are lots of different kinds of morality, but I am only motivated by human
morality
There's only one word of difference between that and version 2, which is the
substitution of "motivated" for "obligated". As we saw under version 1, it's the
existence of multiple conflicting obligations which stymies ethical objectivism.
And motivation can't fix that problem, because it is a different thing from
obligation. In fact it is orthogonal, because:
You can be motivated to do what you are not obligated to do. You can be
obligated to do what you are not motivated to do. Or both. Or neither.
Because of that, version 3 implies version 1, and has the same problem.
-1Bound_up4yIf you are interested, I might recommend trying to write up what you think this
idea is, and see if you find any holes in your understanding that way. I'm not
sure how to make it any clearer right now, but, for what it's worth, you have my
word that you have not understood the idea.
We are not disagreeing about something we both understand; you are disagreeing
with a series of ideas you think I hold, and I am trying to explain the original
idea in a way that you find understandable and, apparently, not yet succeeding.
-1TheAncientGeek4yI believe I just did something like that. Of course, I attributed the holes to
the theory not working. If you want me to attribute them to my not having
understood you, you need to put forward a version that works.
-1entirelyuseless4yAll of this is why Eliezer's morality sequence is wrong. Version 2 is basically
right. The Baby-Eaters were not immoral, but moral, according to a different
morality. That is not subjectivism, because it is an objective fact that
Baby-Eaters are what they are, and are obligated by Baby-Eater morality, and
humans are humans, and are obligated by human morality.
But Eliezer (and Bound-Up) do not admit this, nonsensically asserting that
non-humans should be obligated by human morality.
0MrMind4yTo be honest, Eliezer made a slightly different argument:
1) humans share (because of evolution) a psychological unity that is not
affected by regional or temporal distinctions;
2) this unity entails a set of values that is inescapable for every human
being; its collective effect on human cognition and actions we dub "morality";
3) Clippy, Elves and Pebblesorters, being fundamentally different, share a
different set of values that guide their actions and what they care about;
4) those are perfectly coherent and sound for those who entertain them, but we
should not call them "Clippy's, Elves' or Pebblesorters' morality",
because words should be used in such a way as to maximize their usefulness in
carving reality: since we cannot go outside our programming and conceivably find
ourselves motivated by eggnog or primality, we should not use the term
"morality" for them and should instead use "primality" or other words.
That's it: you can debate any single point, but I think the difference is only
formal. The underlying understanding, that "motivating set of values" is a
two-place predicate, is the same; Yudkowsky, though, preferred to use different
words for different partially applied predicates, on the grounds of points 1 and 4.
0TheAncientGeek4ySo my car is a car because it motor-vates me, but your car is no car at all,
because it motor-vates you around, but not me. And yo mama ain't no Mama cause
she ain't my Mama!
Yudkowsky isn't being rigorous; he is instead appealing to an imaginary rule,
one that is not seen in any other case.
And it's not like the issue isn't important, either... obviously the
permissibility of imposing one's values on others depends on whether they are
immoral, amoral, differently moral, etc. Differently moral is still a
possibility, for the reason that you are differently mothered, not unmothered.
0MrMind4yThe difference is not between two cars, yours and mine, but between a passenger
ship and a cargo ship, built for two different purposes and two different
classes of users.
On this we surely agree, I just find the new rule better than the old one. But
this is the least important part of the whole discussion.
This is well explored in "Three Worlds Collide". Yudkowsky's vision of morality is
such that it assigns different moralities to different aliens, and the same
morality to the same species (I'm using your convention). When different worlds
collide, it is moral for us to stop the Babyeaters from eating babies, and it is
moral for the Superhappies to happify us. I think Eliezer is correct in showing
that the only solution is avoiding contact at all.
1TheAncientGeek4yThat seems different to what you were saying before.
There's not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People
keep repeating it as though it's a great revelation, but it's equally true that
babyeater morality motivates babyeaters, so the situation comes out looking
symmetrical and therefore relativistic.
0Carinthium4yMaybe we should be abandoning the objectivity requirement as impossible. As I
understand it this is in fact core to Yudkowsky's theory -- an "objective"
morality would be the tablet he refers to as something to ignore.
I'm not entirely on Yudkowsky's side in this. My view is that moral desires,
whilst psychologically distinct from selfish desires, are not logically distinct
and so the resolution to any ethical question is "What do I want?". There is the
prospect of coordination through shared moral wants, but there is the prospect
of coordination through shared selfish wants as well. Ideas of "the good of
society" or "objective ethical truth" are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His
stone tablet analogy, if I remember correctly, sums it up.
"I think Eliezer is correct in showing that the only solution is avoiding
contact at all.": Assumes that there is such a thing as an objective solution,
if implicitly.
"The difference is not between two cars, yours and mine, but between a passenger
ship and a cargo ship, built for two different purposes and two different
classes of users.": Passenger and cargo ships both have purposes within human
morality. Alien moralities are likely to contradict each other.
"There's not much objectivity in that.": What if objectivity in the sense you
describe is impossible?
"Why is it so important that our morality is the one that motivates us? People
keep repeating it as though it's a great revelation, but it's equally true that
babyeater morality motivates babyeaters, so the situation comes out looking
symmetrical and therefore relativistic.": If it isn't, then it comes back to the
amoralist challenge. Why should we even care?
0TheAncientGeek4yMaybe we should also consider in parallel the question of whether objectivity is
necessary. If objectivity is both necessary to morality and impossible, then
nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics
is that it is connected to practices of reward and punishment, which either
happen or not.
The essential problem with the tablet is that it offers conclusions as a fait
accompli, with no justification or argument. The point does not generalise
against objective morality [http://lesswrong.com/lw/rr/the_moral_void/dd8k].
If you are serious about the unselfish bit, then surely it boils down to "what
do they want" or "what do we want".
I don't accept the Moral Void argument, for the reasons given. Do you have
another?
The idea that humans are uniquely motivated by human morality isn't put forward
as an answer to the amoralist challenge; it is put forward as a way of
establishing something like moral objectivism.
0entirelyuseless4y"words should be used in such a way to maximize their usefulness in carving
reality"
That does not mean that we should not use general words, but that we should have
both general words and specific words. That is why it is right to speak of
morality in general, and human morality in particular.
As I stated in other replies, it is not true that this disagreement is only
about words. In general, when people disagree about how words should be used,
that is because they disagree about what should be done. Because when you use
words differently, you are likely to end up doing different things. And I gave
concrete places where I disagree with Eliezer about what should be done, ways
that correspond to how I disagree with him about morality.
In general I would describe the disagreement in the following way, although I
agree that he would not accept this characterization: Eliezer believes that
human values are intrinsically arbitrary. We just happen to value a certain set
of things, and we might have happened to value some other random set. In
whatever situation we found ourselves, we would have called those things
"right," and that would have been a name for the concrete values we had.
In contrast, I think that we value the things that are good for us. What is
"good for us" is not arbitrary, but an objective fact about relationships
between human nature and the world. Now there might well be other rational
creatures and they might value other things. That will be because other things
are good for them.
0TheAncientGeek4yBut not everything people value is actually good for them. You are retaining the
problem of equating morality with values.
1entirelyuseless4yI agree that not everything in particular that people value is good for them. I
say that everything that they value in a fundamental way is good for them. If
you disagree, and think that some people value things that are bad for them in a
fundamental way, how are they supposed to find out that those things are bad for
them?
0TheAncientGeek4yYou are currently saying that the good is what people fundamentally value, and
what people fundamentally value is good... for them. To escape vacuity, the
second phrase would need to be cashed out as something like "aids survival".
But whose survival? If I fight for my tribe, I endanger my own survival; if I
dodge the draft, I endanger my tribe's.
Real-world ethics has a pretty clear answer: the group wins every time. Bravery
beats cowardice, generosity beats meanness... these are human universals. If you
reverse-engineer that observation back into a theoretical understanding, you get
the idea that morality is something programmed into individuals by communities
to promote the survival and thriving of communities.
But that is a rather different claim to The Good is the Good.
0Carinthium4yClarification please. How do you avoid this supposed vacuity applying to
basically all definitions? Taking a quick definition from a Google Search: A: "I
define a cat as a small domesticated carnivorous mammal with soft fur, a short
snout, and retractile claws." B: "Yes, but is that a cat?"
Which could eventually lead back to A saying that:
A: "Yes you've said all these things, but it basically comes back to the claim a
cat is a cat."
0TheAncientGeek4yDefinitions are at best a record of usage. Usage can be broadened to include
social practices such as reward and punishment. And the jails are full of people
who commit theft (selfishness) , rape (ditto), etc. And the medals and plaudits
go to the brave (altruism), the generous (ditto), etc.
0entirelyuseless4yI'm not sure how you're addressing what I said. What do you mean by escaping
vacuity? I used "good for them" in that comment because you did, when you said
that not everything people value is good for them. I agree with that, if you
mean the particular values that people have, but not in regard to their
fundamental values.
Saying that something is morally good means "doing this thing, after considering
all the factors, is good for me," and saying that it is morally bad means "doing
this thing, after considering all the factors, is bad for me." Of course
something might be somewhat good, without being morally good, because it is good
according to some factors, but not after considering all of them. And of course
whether or not it will benefit your communities is one of the factors.
0hairyfigment4yI'm going to assume you mean what you say and are not just arguing about
definitions. In that case:
You would be an apologist for HP Lovecraft's Azathoth
[http://www.hplovecraft.com/writings/texts/fiction/dwh.aspx], at best, if you
lived in his universe. There's no objective criterion you could give to explain
why that wouldn't be moral, unless you beg the question and bring in moral
criteria to judge a possible 'ground of morality.' Yes, I'm saying Nyarlathotep
should follow morality instead of the supposed dictates of his alien god. And
that's not a contradiction but a tautology.
While I'm on the subject, Aquinian theology is an ugly vulgarization of
Aristotle's, the latter being more naturally linked to HPL's Azathoth or the
divine pirates of Pastafarianism.
-4entirelyuseless4yI'm pretty sure this is not an attempt at discussion, but an attempt to be
insulting, so I won't discuss it.
0MrMind4yI prefer Eliezer's way because it makes evident, when talking to someone who
hasn't read the Sequence, that there are different sets of self-consistent
values, but it's an agreement that people should have before starting to debate
and I personally would have no problem in talking about different moralities.
But does he? Because that would be demonstrably false. Maybe arbitrary in the
sense of "occupying a tiny space in the whole set of all possible values", but
since our morality is shaped by evolution, it will surely contain some
historical accidents, but also a lot of useful heuristics.
No human can value drinking poison, for example.
If you were to unpack "good", would you insert other meanings besides "what
helps our survival"?
0entirelyuseless4y"There are different sets of self-consistent values." This is true, but I do not
agree that all logically possible sets of self-consistent values represent
moralities. For example, it would be logically possible for an animal to value
nothing but killing itself; but this does not represent a morality, because such
an animal cannot exist in reality in a stable manner. It cannot come into
existence in a natural way (namely by evolution) at all, even if you might be
able to produce one artificially. If you do produce one artificially, it will
just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently
they hope to accomplish different things. I speak of morality in general, not to
mean "logically consistent set of values", but a set that could reasonably exist
in the real world with a real intelligent being. In other words, restricting
morality to human values is an indirect way of promoting the position that human
values are arbitrary.
As I said, I don't think Eliezer would accept that characterization of his
position, and you give one reason why he would not. But he has a more general
view where only some sets of values are possible for merely accidental reasons,
namely because it just happens that things cannot evolve in other ways. I would
say the contrary -- it is not an accident that the value of killing yourself
cannot evolve, but this is because killing yourself is bad.
And this kind of explains how "good" has to be unpacked. Good would be what
tends to cause tendencies towards itself. Survival is one example, but not the
only one, even if everything else will at least have to be consistent with that
value. So e.g. not only is survival valued by intelligent creatures in all
realistic conditions, but so is knowledge. So knowledge and survival are both
good for all intelligent creatures. But since different creatures will produce
their knowledge and survival in different ways, different things will
3TheAncientGeek4yAny virulently self-reproducing meme would be another.
-3entirelyuseless4yThis would be a long discussion, but there's some truth in that, and some
falsehood.
0Bound_up4yThey eat innocent, sentient beings who suffer and are terrified because of it.
That's wrong, no matter who does it.
It may not be un-baby-eater-ey, but it's wrong.
Likewise, not eating babies is un-baby-eater-ey, no matter who does it. It might
not be wrong, but it is un-baby-eater-ey.
We have two species who agree on the physical effects of certain actions. One
species likes the effects of the action, and the other doesn't. The difference
between them is what they value.
"Right" just means "in harmony with this set of values." Baby-eater-ey means "in
harmony with this other set of values."
There's no contradiction in saying that something can be in harmony with one set
of values and not in harmony with another set of values. Hence, there's no
contradiction in saying that eating babies is wrong, and is also baby-eater-ey.
You can also note that the action is found compelling by one species and not
compelling by another, and there is no contradiction in this, either.
What could "right" mean if we have "right according to these morals" AND "right
according to these other, contradictory morals?"
I see one possibility: "right" is taken to mean " in harmony with any set of
values." Which, of course, makes it meaningless. Do you see another possibility?
0entirelyuseless4yI disagree that it is wrong for them to do that. And this is not just a
disagreement about words: I disagree that Eliezer's preferred outcome for the
story is better than the other outcome.
"Right" is just another way of saying "good", or anyway "reasonably judged to be
good." And good is the kind of thing which naturally results in desire. Note
that I did not say it is "what is desired" any more than you want to say that
what someone values at a particular moment is necessarily right. I said it is what
naturally results in desire. This definition is in fact very close to yours,
except that I don't make the whole universe revolve around human beings by
saying that nothing is good except what is good for humans. And since different
kinds of things naturally result in desire for different kinds of beings (e.g.
humans and babyeaters), those different things are right for different kinds of
beings.
That does not make "right" or "good" meaningless. It makes it relative to
something. And this is an obvious fact about the meaning of the words; to speak
of good is to speak of what is good for someone. This is not subjectivism, since
it is an objective fact that some things are good for humans, and other things
are good for other things.
Nor does this mean that right means "in harmony with any set of values." It has
to be in harmony with some real set of values, not an invented one, nor one that
someone simply made up -- for the same reasons that you do not allow human
morals to be simply invented by a random individual.
Returning to the larger point, as I said, this is not just a disagreement about
words, but about what is good. People maintaining your theory (like Eliezer)
hope to optimize the universe for human values. I have no such hope, and I think
it is a perverse idea in the first place.
0TheAncientGeek4yNo, moral rightness and wrongness have implications about rule following and
rule breaking, reward and punishment, that moral goodness and badness don't.
Giving to charity is virtuous, but not giving to charity isn't wrong and doesn't
deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
0entirelyuseless4yI'm not sure what you're saying. I would describe giving to charity as morally
good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume
means pleasure), but I would describe that by saying that pleasure is good in a
certain way, but may or may not be good all things considered, while moral
goodness means what is good all things considered.
0TheAncientGeek4yI'm saying it's a bad idea to collapse together the ideas of moral obligation,
moral advisability and pleasure.
0entirelyuseless4yI agree.
0Bound_up4yI think I get it.
You're saying that "right" just means "in harmony with any set of values held by
sentient beings?"
So, baby-eating is right for baby-eaters, wrong for humans, and all either of
those statements means is that they are/aren't consistent with the fundamental
values of the two species?
1entirelyuseless4yThat is most of it. But again, I insist that the disagreement is real. Because
Eliezer would want to stomp out baby-eater values from the cosmos. I would not.
0Bound_up4yMetaethically, I don't see a disagreement between you and Eliezer. Ethically, I
do.
Eliezer says he values babies not being eaten more than he values letting a
sentient being eat babies just because it wants to.
You say you don't, that's all. Different values.
Are you serious, though? What if you had enough power to stop them from eating
babies without having to kill them? Can we just give them fake babies?
-1entirelyuseless4yI do not support "letting a sentient being eat babies just because it wants to"
in general. So for example if there is a human who wants to eat babies, I would
prevent that. But that is because it is bad for humans to eat babies. In the
case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some
species do sometimes eat babies, and it is possible that such a species could
develop reason. But it is likely that the very process of developing reason
would impede the eating of babies, and eating babies would become unusual, much
as cannibalism is unusual in human societies. And just as cannibalism is wrong
for humans, eating babies would become wrong for that species. But Eliezer makes
the stipulation because, as I said, he believes that human values are
intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that
reality is fundamentally good, and therefore actually existing species will have
fundamentally good values. Eliezer thinks that reality is fundamentally
indifferent, and therefore actually existing species will have fundamentally
indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those
solutions, unless those solutions were acceptable to them anyway -- which would
prove my point that eating babies was not actually good for them, and not
actually a true part of their values.
2Bound_up4yWhen you say reality is fundamentally "good," doesn't that translate (in your
terms) to just a tautology?
Aren't you just saying that the desires of sentient beings are fundamentally
"the desires of sentient beings?"
It sounds like you're saying that you personally value sentient beings
fulfilling their fundamental desires. Do you also value a sentient being
fulfilling its fundamental desire to eliminate sentient beings that value
sentient beings that fulfill their fundamental desires?
That is, if it wants to kill you because you value that, are you cool with that?
What do you do, in general, when values clash? You have some members of a
species who want to eat their innocent, thinking children, and you have some
innocent, thinking children who don't want to be eaten. On what grounds do you
side with the eaters?
1entirelyuseless4y"When you say reality is fundamentally "good," doesn't that translate (in your
terms) to just a tautology?" Sort of, but not quite.
"Aren't you just saying that the desires of sentient beings are fundamentally
"the desires of sentient beings?"" No.
First of all, the word "tautology" is vague. I know it is a tautology to say
that red is red. But is it a tautology to say that two is an even number? That's
not clear. But if a tautology means that the subject and predicate mean the same
thing, then saying that two is even is definitely not a tautology, because they
don't mean the same thing. And in that way, "reality is fundamentally good" is
not a tautology, because "reality" does not have the same meaning as "good."
Still, if you say that reality is fundamentally something, and you are right,
there must be something similar to a tautology there. Because if there is
nothing even like a tautology, you will be saying something false, as if you
were to say that reality is fundamentally blue. That's not a tautology at all,
but it's also false. But if what you say is true, then "being real" and "being
that way" must be very deeply intertwined, and most likely even the meaning will
be very close. Otherwise how would it turn out that reality is fundamentally
that way?
I have remarked before that we get the idea of desire from certain feelings, but
what makes us call it desire instead of a different feeling is not the
subjective quality of the feeling, but the objective fact that when we feel that
way, we tend to do a particular thing. E.g. when we are hungry, we tend to go
and find food and eat it. So because we notice that we do that, we call that
feeling a desire for food. Now this implies that the most important thing about
the word "desire" is that it is a tendency to do something, not the fact that it
is also a feeling.
So if we said, "everyone does what they desire to do," it would mean something
like "everyone does what they tend to do." That is not a tautology,
0entirelyuseless4y"It sounds like you're saying that you personally value sentient beings
fulfilling their fundamental desires." Yes.
"Do you also value a sentient being fulfilling its fundamental desire to
eliminate sentient beings that value sentient beings that fulfill their
fundamental desires?"
No sentient being has, or can have (at least in a normal way) that desire as a
"fundamental desire." It should be obvious why such a value cannot evolve, if
you consider the matter physically. Considered from my point of view, it cannot
evolve precisely because it is an evil desire.
Also, it is important here that we are speaking of "fundamental" desires, in
that a particular sentient being sometimes has a particular desire for something
bad, due to some kind of mistake or bad situation. (E.g. a murderer has the
desire to kill someone, but that desire is not fundamental.)
"You have some members of a species who want to eat their innocent, thinking
children, and you have some innocent, thinking children who don't want to be
eaten. On what grounds do you side with the eaters?"
As I said in another comment, the babyeater situation is contrived, and most
likely it is impossible for those values to evolve in reality. But stipulating
that they do, then the desires of the babies are not fundamental, because if the
baby grows up and learns more about reality, it will say, "it would have been
right to eat me."
I am pretty sure that people even in the original context brought attention to
the fact that there are a great many ways that we treat children in which they
do not want to be treated, to which no one at all objects (e.g. no one objects
if you prevent a child from running out into the street, even if it wants to.
And that is because the desires are not fundamental.)
Your objection is really something like, "but that desire must be fundamental
because everything has the fundamental desire not to be eaten." Perhaps. But as
I said, that simply means that the situation is contrived and f
0Bound_up4yI don't know. I wonder if some extra visualization would help.
Would you help catch the children so that their parents could eat them? If they
pleaded with you, would you really think "if you were to live, you would one day
agree this was good, therefore it is good, even though you don't currently
believe it to be?"
Why say the important desire is the one the child will one day have, instead of
the one that the adult used to have?
0entirelyuseless4yI would certainly be less interested in aliens obtaining what is good for them,
than in humans obtaining what is good for them. However, that said, the basic
response (given Eliezer's stipulations), is yes, I would, and yes I would really
think that.
The adult has not only changed his desire, he has changed his mind as well, and
he has done that through a normal process of growing up. So (again given
Eliezer's stipulations), it is just as reasonable to believe the adults here as
it is to believe human adults. It is not a question of talking about whose
desire is important, but whose opinion is correct.
0TheAncientGeek4y....a word which means a number of things, which are capable of conflicting with
each other. Moral good refers to things that are beneficial at the group level,
but which individuals tend not to do without encouragement.
0entirelyuseless4yI think it is perfectly obvious that this usage of "should" and so on is wrong.
A paperclipper believes that it should make paperclips, and it means exactly the
same thing by "should" that I do when I say I should not murder.
And when I say it is obvious, I mean it is obvious in the same way that it is
obvious that you are using the word "hat" wrong if you use it for a coat.
0Bound_up4yI think you're using "should" to mean "feels compelled to do."
Yes, a paperclipper feels compelled to make paperclips, and a human feels
compelled to make sentient beings happy.
But when we say "should," we don't just mean "whatever anyone feels compelled to
do." We say "you might drug me to make me want to kill people, but I still
shouldn't do it."
"Should" does not refer to compelling feelings, but rather to a certain set of
states of beings that we value. To say we "still shouldn't kill people," means
it "still isn't in harmony with happy sentient beings (plus a million other
values) to kill people."
A paperclipper wouldn't disagree that killing people isn't in harmony with happy
sentient beings (along with a million other values), it just wouldn't care. In
other words, it wouldn't disagree that it shouldn't kill people, it just doesn't
care about "should;" it cares about "clipperould."
Likewise, we wouldn't disagree that keeping people around instead of making them
into paperclips is not in harmony with maximizing paperclips, we just wouldn't
care. We know we clipperould turn people into paperclips, we just don't care
about clipperould, we care about should.
0entirelyuseless4yNo, I am not using "should" to mean "feels..." anything (in other words,
feelings have nothing to do with it.) But you are right about compulsion. The
word "ought" is, in theory, just the past tense of "owe", and what is owed is
something that needs to be paid. Saying that you ought to do something, just
means that you need to do it. And should is the same; that you should do it just
means that there is a need for it. And need is just necessity. So it does all
have to do with compulsion.
But it is not compulsion of feelings, but of a goal. And to that degree, your
idea is actually correct. But you are wrong to say that the specific goal sought
affects the meaning of the word. "I should do it" means that I need to do it to
attain my goal. It does not say what that goal is.
-1Carinthium4yThe Open Question argument is theoretically flawed because it relies too much on
definitions (see this website's articles on how definitions don't work that way,
more specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/
[http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/]).
The truth is that humans have an inherent instinct towards seeing "Good" as an
objective thing, that corresponds to no reality. This includes an instinct
towards doing what, thanks to both instinct and culture, humans see as "good".
But although I am not a total supporter of Yudkowsky's moral theory, he is right
in that humans want to do good regardless of some "tablet in the sky". Those who
define terms try to resolve the problem of ethical questions by bypassing this
instinct and referencing instead what humans actually want to do. This is
contradictory to human instinct, hence the philosophical force of the Open
Question argument but it is the only way to have a coherent moral system.
The alternative, as far as I can tell, would be that ANY coherent formulation of
morality whatsoever could be countered with "Is it good?".
0TheAncientGeek4yTrue but not very interesting. The interesting question is whether the
operations of intuitive black boxes can be improved on.
The tablet argument is entirely misleading.
I don't see what you mean by that. If the function of the ethical black box can
be identified, then it can be improved on, in the way that physics
improves on folk physics.
1entirelyuseless4y"ANY coherent formulation of morality whatsoever could be countered with "Is it
good?".
Exactly, if you think morality is different from goodness. That is why I said
"morally right" just means "what it is good for me to do."
That is not the same as what I want at the moment. Humans have an inherent
instinct towards seeing good as objective rather than as "what I want" for the
same reason that we have an instinct towards seeing dogs and cats as objectively
distinct, instead of just saying "dog is what I call dog, and cat is what I call
cat, and if I decide to start calling them all dogs, that will be fine too."
Saying that good is just what I happen to want is just the same as saying that
dog is whatever I happen to call dog. And both positions are equally ridiculous.
0TheAncientGeek4yMoral goodness is clearly different from, e.g., hedonic goodness. Enjoying killing
doesn't mean you should kill.
It might be the case that humans have a mistaken view of the objectivity of
morality, but it doesn't follow from that that morality=hedonism. You can't
infer the correctness of one of N>2 theories from the wrongness of another.
It is possible to misuse the terms "dog" and "cat", so the theory of semantics
you are appealing to as the only possible alternative to fully objective
semantics is wrong as well. Hint: intersubjectivity, convention.
So what's the correct theory?
0entirelyuseless4yI don't know why you are bringing up hedonism. It is bad to kill even if you
enjoy it; so if morally good means what it is good to do, as I say, it will be
morally bad to kill even if it is pleasant to someone.
The fully intersubjective but non-objective theory of meaning that you are
suggesting is also false, since if everyone all at once agrees to call all dogs
and cats "dogs", that will not mean that suddenly there is no objective
difference between the things that used to be called dogs and the things that
used to be called cats.
The correct theory is this:
"Dog" means something that has what is in common to the things that are normally
called dogs. Notice that this incorporates inter-subjectivity and convention,
since "things that are normally called dogs" means normally called that by
normal people. But it also includes an objective element, namely "what is in
common."
Now someone could say, "Well, what those things have in common is that people
normally call them dogs. They don't have anything else in common. So this theory
reduces to the same thing: dogs are what people call dogs."
But they would be wrong, since obviously there are plenty of other things that
dogs have in common, and where they differ from cats, which do not depend on
anyone calling them anything.
The correct theory of goodness is analogous:
"Good" means something that has what is in common to the things that are
normally called good. Again, this incorporates the element of convention, in
"normally called good," but it also includes an objective element, in "what is
in common."
As before, someone might say that actually they have nothing in common except
the name. But again that would be wrong.
More plausibly, though, someone might say that actually what they have in common
is that people desire them. And in a sense this is Eliezer's view. But this is
also wrong. Let me explain why.
One difficulty is that people are rarely wrong about whether something is a dog,
but they are often
0TheAncientGeek4ySo what is your theory? That the morally good is the morally good? Weren't you
criticising that approach?
"The morally good is the morally good" is vacuous.
"The morally good is the good" is subject to counterexamples.
That is only true if you equate "wrong" with not capturing all the information.
But then we would always be wrong, since we never capture all the information.
There are languages where "mouse" and "rat" are translated by the same word.
Speakers of those languages are not systematically deluded.
That's rather redundant, since the idea that new usages of "dog" should have
something in common with established ones is already part of the norm.
I would say that you have the causal arrow the wrong way round there.
Also, you are, again, using "good" in a way that leads to obvious counterexamples
of things that are desired or desirable but not morally good.
If you could work out the difference between the mistakes and the norm, you
would have a non-vacuous theory of what "morally" means in "morally good".
However, I don't know if you are even trying to do that, since you seem wedded
to the idea that the morally good is the good, period.
If you want the word "good" to do all the work in your theory of moral good, you
would have that problem. If you allow the word "moral" to do some work, you
don't. The morally good has features in common, such as being co-operative and
prosocial, that the unqualified "good" does not, and that is still the case if
the good is not an objective feature of the world.
You don't need objectivity, intersubjectivity is enough.
0entirelyuseless4yAlso, I did not say that people would be wrong if they started calling all cats
and dogs "dogs." I said that this would not mean that there were not objective
differences between the things that used to be called dogs, and the things that
used to be called cats. In fact, the only reason we are able to call some dogs
and some cats is that there are objective differences that allow us to
distinguish them.
0TheAncientGeek4yNot all semantics is based on objective differences. There's no objective
feature that makes someone a senator, or a particular piece of paper money... we
just have social conventions, coupled with memorising the members of the set
"money" or "senator". So if you are arguing that "good" must have objective
characteristics because all meaningful words must denote something objective,
that doesn't work. But it is not clear you are arguing that way.
0entirelyuseless4yObjective differences doesn't have to mean physical differences of the thing at
the time. It is an objective fact that certain people have won elections and
that others have not, for example, even if it doesn't change them physically.
In this sense, it is true that every meaningful distinction is based on
something objective, since otherwise you would not be able to make the
distinction in the first place. You make the distinction by noticing that some
fact is true in one case which isn't true in the other. Or even if you are
wrong, then you think that something is true in one case and not in the other,
which means that it is an objective fact that you think the thing in one case
and not in the other.
0TheAncientGeek4yNo, it's intersubjective. Winning and elections aren't in the laws of physics.
You can't infer objective from not-subjective.
You need to be more granular about that. It is true that you can't recognise
novel members of an open-ended category (cats and dogs) except by objective
features, and you can't do that because you can't memorise all the members of
such a set. But you can memorise all the members of the set of Senators. So
objectivity is not a universal rule.
1entirelyuseless4yI think you might be arguing about words, in relation to whether the election is
an objective fact. I don't see what the laws of physics have to do with it.
There is no rule that objective facts have to be part of the laws of physics. It
is an objective fact that I am sitting in a chair right now, but the laws of
physics say nothing about chairs (or about me, for that matter).
Even if you memorize the set of Senators, you cannot recognize them without them
being different from other people.
0entirelyuseless4yI do not know why you keep saying that I am saying that morally good is the same
as good.
According to me (and this is what I think they are, not an argument) : "Morally
good" is "what is good to do."
So morally good is not the same as good. Good is general, and "Good TO DO" is
morally good. So morally good is a kind of goodness, just as everyone believes.
0TheAncientGeek4yNot helping. Good to do can be hedonistically good to do, selfishly good to do,
etc. If I sacrifice the lives of 100 people to save my life, that is a good thing
to do from some points of view, but not what most people would call morally
good.
0entirelyuseless4ySaying that a thing is "hedonistically good to do" means that it is good to some
extent. It does not tell us whether it is good to do, period. If it is good to
do, period, it is morally good. If there are other considerations more important
than the pleasure, it won't be good to do, period, and so will be morally wrong.
0TheAncientGeek4yIt's not helpful to define the morally good as the "good, period", without an
explanation of "good, period". You are defining a more precise term using a less
precise one, which isn't the way to go.
0entirelyuseless4ySuppose there is a blue house with a red spot on it. You ask, "Is that a red
house?" Someone answers, "Well, there is a red spot on it."
There is no difference if there is something bad that you could do which would
be pleasant. You ask, "Is that something good to do?" Someone answers, "Well, it
is hedonistically good."
But I don't care if there is a red spot, or if it is pleasant. I am asking if
the house is red, and if it would be good to do the thing.
Those are answered in similar ways: the house is red if it is red enough that a
reasonable person would say, "yes, the house is red." And the action is morally
good if a reasonable person would say, "yes, it is good to do it."
0TheAncientGeek4yI think that's a fairly misleading analogy. For instance, a house's being red is
not exclusive of another one's... but my goods can conflict with another person's.
Survival is good, you say. If I am in a position to ensure my survival by
sacrificing Smith, is it morally good to do so? After all Smith's survival is
just as Good as mine.
0entirelyuseless4yAs I said, we are asking whether it is good to do something overall. So there is
no definite answer to the question about Smith. In some cases it will be good to
do that, and in some cases not, depending on the situation and what exactly you
mean by sacrificing Smith.
0TheAncientGeek4ySo what you call goodness cannot be equated with moral goodness, because moral
goodness does need to put an overall value on an act, does need to say that an act
is permitted, forbidden, or obligatory.
0entirelyuseless4yI don't understand what you are trying to say here. Of course in a particular
situation it will be good, and thus morally right, to sacrifice Smith, and in
other particular situations it will not be. I just said that you cannot say in
advance, and I see no reason why moral goodness would have to judge these
situations in advance without taking everything into account.
Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do immoral things. Objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations to the benefit of one's ingroup, or to post hoc thought experiments about knowledge.
Would this be an accurate summary of what you think the meta-ethics sequence says? I feel that you captured the important bits, but I also feel that we disagree on some aspects:
V(Elves, _) = Christmas spirit
V(Pebblesorters, _) = primality
V(Humans, _) = morality
If V(Humans, Alice) =/= V(Humans, _), that doesn't make morality subjective; it is rather i...
Unpacking "should" as "morally obligated to" is potentially helpful, insofar as you can give separate accounts of "moral" and "obligatory".
That doesn't generalise to the point that non-humans have no morality. You have m...