"He says things like scientists who think science can't answer 'should' questions very often act as if should questions have objectively right answers,"

Is that supposed to be a bad thing? In any case, the more usual argument is that I can't take "what my brain does" as the last word on the subject.

"and our brains seem to store moral beliefs in the same way as they do factual beliefs."

I'm struggling to see the relevance of that. Our brains probably store information about size in the same way that they store information about colour, but that doesn't mean you can infer anything about an object's colour from information about its size. The is-ought gap is one instance of a general rule about information falling into orthogonal categories, not special pleading.

ETA: Just stumbled on:

"Thesis 5 is the idea that one cannot logically derive a conclusion from a set of premises that have nothing to do with it. (The is-ought gap is an example of this)."

https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/

" I can't take "what my brain does" as the last word on the subject."

But what if morality is all about the welfare of brains? I think Harris would say that once you accept that human welfare is the goal, you have crossed the is-ought gap and can use science to determine what is in the best interest of humans. Yes, this is hard and people will disagree, but the same is true of generally accepted scientific questions. Plus, Harris says, lots of people have moral beliefs based on falsifiable premises ("God wants this"), so we can use science to evaluate those beliefs.

We need a better theory of happiness and suffering

by toonalfrink · 1 min read · 4th Jul 2017 · 39 comments

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away, as we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseated.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far by combining introspection with the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The introspection part is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up: if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)