[ Question ]

Can you be 100% confident in your moral beliefs?

by MaxG · 1 min read · 27th Feb 2021 · 9 comments


World Optimization · Rationality

Recently, I had a discussion with a close friend about morality and, let us say, fundamentals of epistemology. This person has no connection to the aspiring rationalist community.

We started out talking about discussion norms. I had sent my friend Julia Galef's talk on scout mindset. In previous discussions, I had gotten the impression that my friend argued to persuade, while I argued to learn something. These approaches had collided, and some of those discussions had seemed quite unfruitful to me.

My friend agreed that in factual discussions, scout mindset is the way to go. In moral discussions, my friend argued, one tries to persuade because one is 100% sure of their moral beliefs. This got me wondering. For a long time, I have held a vague belief that there are no 0s and 1s when it comes to holding true beliefs. I pointed out that I do not think morality belongs in a different category. My friend confronted me by asking how I could say that I am not 100% sure that murder is wrong. Well, I said, I am something like 99.99% sure that it is wrong. Here the discussion got a bit emotional. They said they would not want to be friends with someone who gives even a slight chance that murder, racism, homophobia, or any other bad thing might be morally right.

To me, saying I am 99.99% sure feels almost the same as saying I am 100% sure. I just thought that it is not a good idea, on principle, to assign 0% or 100% to any belief. Many people have been wrong about moral issues in the past, so who am I to be so sure about them?
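The "no 0s and 1s" intuition has a precise Bayesian reading: a probability of exactly 0 or 1 is a fixed point of Bayes' rule, so no amount of evidence can ever move it, while 99.99% can still be revised. A minimal sketch (my own illustration, not from the discussion):

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after seeing evidence with the given
    likelihood ratio P(evidence | H) / P(evidence | not H)."""
    if prior in (0.0, 1.0):
        # 0 and 1 are fixed points of Bayes' rule: no evidence moves them.
        return prior
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

# A 99.99% belief can still be revised by strong counter-evidence:
print(bayes_update(0.9999, 1e-6))  # drops below 1%

# A 100% belief cannot be revised by any evidence at all:
print(bayes_update(1.0, 1e-6))     # stays 1.0
```

This is one formal reason some people refuse to assign extreme probabilities: a credence of exactly 1 is a commitment to ignore all future evidence.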

So, people of LessWrong, how sure are you that murder is wrong? Are you 100% sure? How would you go about resolving the disagreement with my friend?



3 Answers

Your "friend" is blackmailing you into professing religious faith. It's an ages-old pattern in our species.

One hypothesis is that he (subconsciously, perhaps) wants to protect against the Goodhart effect, i.e., you going against your stated ethics and then claiming "you never said murder was wrong." Another hypothesis is that he has a coalition-forming mindset, where sides must be taken and rules followed. There are many more such just-so stories we can dream up.
 

As you yourself know, people justify killing all the time. They just change the word. "Murder" is by its very definition an immoral kind of killing. Which killings are societally acceptable obviously depends on the society in question and not on some magical moral "truth." Sam Harris's analogy of moral landscapes (search for his TED talk for a short summary) is a good way to look at the validity of norms: they are "true" insofar as they create more net benefit for a society that follows them relative to other normative systems. So killing terrorists is a celebratory event, while killing random Joe just contracts the economy and undermines societal order.

My friendly advice is not to take people too seriously in debates about ethics. People prioritize signaling/reputation and norm enforcement over communicating their most objective thoughts. You're much better served by judging their attitude via the actions they take. This is also true of oneself: just having the right moral beliefs is worthless. What are you actually doing for others?

Thank you for the answer! I thought the Goodhart effect was about metrics turning bad when they become the thing you are optimising for. Does the term apply to something else here?

The term "murder" referring to something immoral by definition makes sense; I should have thought about this more explicitly.

Your advice in the last paragraph resonates with me. My experience has also been that people often seem to prioritize signaling. I am not a big fan of Jordan Peterson, but this reminded me of a quote of his: "If you can't understand why someone is doing something, look at the consequences of their actions, whatever they might be, and then infer the motivations from their consequences."

Rudi C · 1mo

The Goodhart effect is a truly deep insight, IMO. Goodhart itself states that lots of great metrics and "signals", and communicatory material in general, will lose their value when adopted by social systems. This leads to second-order effects, such as people and society defending against this effect. This can be seen easily in speech: if your friend did not depend on your speech for various purposes, your speech would be a truly great way to learn about your "true" self. But now that he does depend on your speech, he has to guard against you lying and optimizing your speech to manipulate him and shift risks onto him. So he cannot accept your noncommittal stance on, e.g., "murder," and requires you to commit to your position. I have heard there are studies (beware replication crisis and yadada) that women prefer men with virtue ethics over utilitarians; one reason might be that virtue ethics constrains people much more effectively than "consequentialism," where you're your own auditor.

Pretty much anything can be right or wrong. It depends on the context. Murder is the deliberate killing of a human being in violation of the law. A French civilian killing an SS agent in 1943 constitutes murder, but is not an open-and-shut case of immorality.

The statement "murder is wrong" is too vague to assign a truth value to. It's like saying "chemistry is immoral". It depends what you're using it for.

As for confidences…

Confidences like 100% and 99.99% are not fundamental physical things like protons and photons. They're not even as concrete as cats and dogs. Confidences are abstractions created by people to make sense of the world we inhabit. A useful first step to unconfusing your disagreement with your friend might be to define exactly what you mean by "confidence".

Yes, ethics often deals with stuff that is not clear cut and obvious. 

I guess my conception of confidences might be flawed, do you have any recommended reading or some video clearing up what confidences are exactly? It seems that a lot of disagreements boil down to using some term in a different way, right? Isn't this what Yudkowsky wrote about?

lsusr · 2mo

There are two issues: one of them is using the same term in different ways. The deeper mistake is believing words have well-defined meanings at all. I think the deeper issue is the more important one. If you solve the deeper issue, then the first issue might solve itself. Yudkowsky's Disputing Definitions is a good place to start. Other good readings along these lines are How to Do Philosophy [http://paulgraham.com/philosophy.html] by Paul Graham and The Specificity Sequence [https://www.lesswrong.com/posts/pFvZXFWbtvKvGiACJ/how-specificity-works] by Liron. A good (though not perfect) definition of confidence is what odds you would bet on. That's why so many readers of this website are interested in betting markets [https://astralcodexten.substack.com/p/metaculus-monday-2821] and the Kelly Criterion.
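The "odds you would bet on" framing above can be made concrete with the Kelly criterion it mentions. A minimal sketch (my own illustration, using the standard formula f* = (bp - q)/b for a simple binary bet, where p is your probability of winning, q = 1 - p, and b is the net odds offered):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet you win with probability p
    at net odds b (you win b for every 1 staked). Returns 0 when the
    expected edge is non-positive: never bet against your own confidence."""
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

# At even odds (b = 1), a 99.99% confidence says stake ~99.98% of bankroll,
# while a 50% confidence says stake nothing:
print(round(kelly_fraction(0.9999, 1.0), 4))  # 0.9998
print(kelly_fraction(0.5, 1.0))               # 0.0
```

On this reading, the gap between 99.99% and 100% is exactly the gap between betting almost everything and betting literally everything on a single proposition.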

There are different conceptions of what it means to be "right" or "wrong", and it's pretty easy to move uncertainty around so that parts of a moral evaluation CAN be 100%, while remaining less than 100% for any given fact behind the judgement.

An easy example: murder is 100% wrong, BY DEFINITION of "murder". If it's not wrong, it's not murder. But that just moves the uncertainty to "was this specific act murder?"

If you want to claim that nothing is 100% true, how do you react to "1 + 1 = 2"?

The clarification makes sense. As I wrote above, I should have thought of murder being defined this way. 

Well, I'm pretty sure that "1 + 1 = 2" but couldn't I be convinced otherwise? In that case, is it still reasonable to say that I am 100% confident that "1 + 1 = 2"? I feel like this is exactly what I am trying to ask here. 

1 comment

Confidence can be pretty straightforward if there is a separate outside reality to which correspondence is straightforward. Sometimes language is polymorphic in that vastly different things get covered by the same or same-sounding terms.

One potential category of moral statements is those whose target is to be communally created. If half of a movie crew thought "We are making a romantic comedy" and the other half thought "We are making a horror movie", there might be big trouble. If a creative leader has made a decision, then being informed about it can be about accuracy in the traditional sense. However, if the creative direction has never been discussed, the direction of the movie might still need to be determined. Before such a discussion has happened, neither statement can be true. One of the possible outcomes is that group A goes its separate way to make a romantic comedy and group B goes off to make a horror movie. If this happens, in a sense both groups were right.

If a group of people sets out to make an arrangement where people do not mutually backstab and kill each other, that can be a form of executive decision rather than an ethical discovery. If somebody then starts to "doubt", "maybe we should kill each other a little bit", that can be a way of founding a different society, or of reforming society into a different order. In this sense, if somebody asks for your favorite color, it would be weird to be uncertain what your favorite color is. It would be really weird to read a proof to the effect of "your favorite color is actually green rather than red" and be convinced. Such things are true not because we discover them but because we stipulate them. Saying "maybe my favorite color is blue" is not knowing what your favorite color is, or refusing to have a favorite color.

Now it seems open to me whether moral questions are such stipulative executive decisions. Under this kind of conception, "murder is wrong, 95% confidence" can sound a lot like "I reserve the right to let 5% of murderers off the hook, but generally punish them."