Lesswrong 2016 Survey

I'm still not certain if I managed to get what I think is the issue across. To clarify, here's an example of the failure mode I often encounter:

Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it's objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that's wrong, right?
Philosopher: No, we cannot criticize other cultures, because morality is subjective.

The mistake the Philosopher makes here is conflating two different uses of subjectivity: he switches between the claim that there is no universal system of morality in practice ("morality is subjective") and the claim that it is impossible to make moral judgements in principle (also "morality is subjective"). We agree that morality is subjective in the sense that moral preferences differ, but that should not preclude you from making object-level moral judgements (which are objectively true or false).

I think it's actually very similar to the error people make when discussing "free will": someone argues that there is no (magical, non-deterministic) free will, and then concludes from this that we can't punish criminals, because they have no free will (in the sense of their preferences affecting their actions).

Lesswrong 2016 Survey

That makes no sense to me.

I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. "just a matter of opinion". I think (though I could be mistaken) that the test described subjectivism as morality being just a matter of opinion, which I would not agree with: morality depends on individual preferences, but only in the sense that healthcare depends on an individual's health. That does not preclude a science of morality.

However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated.

Unfortunate, but understandable as that's a lot harder to prove than the philosophical argument.

I can definitely imagine that we find out that humans terminally value others' utility functions, such that U(Sophronius) = x·U(DanArmak) + ... and U(DanArmak) = y·U(otherguy) + ..., so that everyone values everybody else's utility in a roundabout way, which could yield something like a human utility function. But I don't know if it's actually true in practice.
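To make the interdependence concrete, here's a toy sketch (the two-agent setup, the linear weights, and the function name are all illustrative assumptions, not anything established in this discussion): if each agent's utility includes a linear term for the other's, the resulting system of equations has a direct fixed-point solution.

```python
def solve_mutual_utilities(base_s, base_d, x, y):
    """Solve the pair of interdependent utilities:
        U_S = base_s + x * U_D
        U_D = base_d + y * U_S
    where x and y are each agent's (hypothetical) altruism weights.
    Substituting one equation into the other gives a closed form."""
    denom = 1 - x * y
    if denom <= 0:
        # If the mutual weights are too large, utilities diverge:
        # each agent's valuation of the other amplifies without bound.
        raise ValueError("weights too large: no stable fixed point")
    u_s = (base_s + x * base_d) / denom
    u_d = (base_d + y * base_s) / denom
    return u_s, u_d

# Illustrative numbers only: equal base payoffs, 50% mutual altruism.
u_s, u_d = solve_mutual_utilities(1.0, 1.0, 0.5, 0.5)
```

Note the condition x·y < 1: a "de facto shared utility function" in this toy model only exists when the mutual-valuation weights are not too strong, which is one way the empirical question ("how much do we actually value each other's utility?") matters.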

Lesswrong 2016 Survey

Everything you say is correct, except that I'm not sure subjectivism is the right term for the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I've always heard used, says that subjectivism holds that morality is merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would be the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).

All this is disregarding the empirical question of to what extent our preferences actually overlap, and to what extent we value each other's utility functions in themselves. If the overlap/altruism is large enough, we could still end up with a de facto objective morality, depending on the degree of overlap. Has Eliezer ever tried answering this? It would be interesting.

Lesswrong 2016 Survey

I had a similar issue: none of the options seemed right to me. Subjectivism seems to imply that one person's judgment is no better than another's (which is false), but constructivism seems to imply that ethics is purely a matter of convenience (also false). I voted for the latter in the end, but am curious how others see this.

Lesswrong 2016 Survey

RE: The survey: I have taken it.

I assume the salary question was meant to be filled in as gross, not net. However, that could result in some big differences depending on the country's tax code...

Btw, I liked the professional format of the test itself. Looked very neat.

Political Debiasing and the Political Bias Test

No, it's total accuracy on factual questions, not the bias part...

More importantly, don't be a jerk for no reason.

Political Debiasing and the Political Bias Test

Cool! I've been desperate to see a rationality test that would make improvements in rationality measurable (I think the Less Wrong movement really, really needs this), so it's fantastic to see people working on this. I haven't checked the methodology yet, but the basic principle of measuring bias seems sound.

The path of the rationalist

Hm, a fair point, I did not take the context into account.

My objection there is based on my belief that Less Wrong over-emphasizes cleverness, as opposed to what Yudkowsky calls 'winning'. I see too many people coming up with clever ways to justify their existing beliefs, or being contrarian purely to sound clever, and I think that's terribly harmful.

Translating bad advice

My point was that you're not supposed to stop thinking after finding a plausible explanation, and most certainly not after finding the single most convenient possible explanation. "Worst of all possible worlds" and all that.

If you feel this doesn't apply to you, then please do not feel as though I'm addressing you specifically. It's supposed to be advice for Less Wrong as a whole.

Translating bad advice

That is a perfectly valid interpretation, but it doesn't explain why several people independently felt the need to explain this to me specifically, especially since it was worded in general terms and, at the time, I was just stating facts. This implied that something about me specifically was bothering them.

Hence the lesson: Translate by finding out what made them give that advice in the first place, and only then rephrase it as good advice.
