N.B. This is a chapter in a planned book about epistemology. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid to high level feedback about the content. When I publish I'll have an editor help me clean up the text further.

You're walking down the street and find a lost wallet. You open it and find the owner's ID and $100. You have a few possible courses of action. You could return the wallet exactly as you found it. If you did, most people would say you did the right thing. If instead you kept the wallet and all the money in it, most people would say you did the wrong thing. But what if you returned the wallet and kept a little of the money, say $10, as a "finder's fee"? Did you do a good thing or a bad thing?

Some might say you deserve the finder's fee since the time it takes you to return the wallet is worth something. Further, at least you were willing to return most of the money. A real thief could have found it and kept all the money! The owner will be nearly as happy to get back the wallet with $90 as they would be to get back the wallet with $100, so they might not even notice. And if they do ask about the $10 you can lie and say that's how you found the wallet to spare you both an awkward conversation.

Others will say that stealing is stealing, end of story. The owner can choose to offer you a reward when you return the wallet, but they don't have an obligation to do that. And even if they ought to offer a reward, whether they actually do shouldn't matter: good actions aren't contingent on later rewards. Thus the only right thing to do is return the wallet intact. After all, wouldn't you want someone to return your lost wallet without stealing any of the money?

Deciding what to do about a lost wallet may seem like a typical moral dilemma, but consider that in the last chapter we solved the problem of how words get their meanings. In that case, shouldn't we all know what "good" and "bad" mean? And if we know what "good" and "bad" mean, shouldn't we be able to pick out which actions are good and which are bad, and then choose only the good ones? How can there be disagreement about what is right and wrong?

Recall that we learn the meanings of words through our interactions with others when they show us what words mean to them. Maybe one day you share a toy on the playground and the teacher says you were "good" to share and rewards you with a smile. Another day you hit another kid on the playground, the teacher says you were "bad", and punishes you by making you sit on the ground while the other kids play. Through many of these interactions with teachers, parents, other adults, and your fellow kids, you build up a notion of what "good" and "bad" mean, as well as learn that doing good is generally rewarded while doing bad is generally punished. So you try to do good things to get rewards and avoid bad things so you don't get punished.

But your notion of what "good" and "bad" mean is ultimately your own since it is shaped by your individual experiences. Although many people try to influence your experience so you end up with the same notion of "good" and "bad" as they have, there will inevitably be some errors in the transmission. For example, maybe your parents tried to teach you to hold the door open for others, except they thought it was okay to not hold the door open when you have a good reason to hurry. You might not have picked up on the exception, and so learned the rule "always hold the door open for others, no matter what". So now, thanks to an error in transmission, you have a slightly different idea of what counts as good behavior than the one your parents tried to teach you.

And it doesn't even require error to get variance in values. Maybe your parents fell on hard times and could only feed the family by stealing food. They were raised with the idea that all stealing is bad, so they always felt guilty about stealing to eat. However, when they teach you to steal food to feed yourself, they insist it's a good thing because they don't want you to feel the same shame they do. So you grow up thinking stealing to eat is good. Now there are at least two different ideas in the world about the goodness of stealing: that all stealing is bad, even stealing to feed yourself when there's no other option, and that stealing is fine if the alternative is hunger.

Given that these differences in values naturally arise, how can we come to agreement on what's right? If we have a city where 99% of the people are Never Stealers—they think all stealing is bad—and 1% are Food Stealers—they think stealing food is okay when that's the only way to eat—should the Food Stealers bend to the will of the majority? Or would the "right thing" be for the Never Stealers to have a bit more compassion for the hungry and add some nuance to their beliefs about stealing? Is there some means by which we can know what's really right and wrong when people end up disagreeing?

Reaching Agreement

Let's leave questions of morals and ethics aside for a moment to talk about how people come to agree at all.

Let's suppose two people, Alice and Bob, are debating. Alice claims that all phoobs are red. Bob claims that all phoobs are blue. How might they come to some agreement about the color of phoobs?

Alice: I read in a book that all phoobs are red.

Bob: I saw a blue phoob with my own eyes! In fact, every phoob I've ever seen has been blue.

Alice: Hmm, interesting. I read that all phoobs are red in National Geographic. Are you saying the writer lied and the photographer doctored the images?

Bob: Oh, that's strange. I've definitely only ever seen blue phoobs.

Alice: Wait! Where did you see these phoobs?

Bob: I saw them in a zoo.

Alice: I wonder if phoobs change color in captivity?

Bob: Yeah, or maybe there's more than one species of phoob!

Alice: Okay, so taking our experiences together, I think we can say that all phoobs are either red or blue.

Bob: Yeah, I agree.

Carroll: Hey, you're not going to believe it! I just saw a green phoob!

What happened here? Alice and Bob shared information with each other. Each one updated their beliefs about the world based on the information they learned. After sharing, they were able to come to agreement about phoob colors—at least until Carroll showed up with new information!

If Alice and Bob share information like this, should they always come to agreement? That is, with enough time and effort, could they always resolve their disagreements?

Turns out yes—at least if they are sufficiently "rational".

What does it mean to be sufficiently "rational", though? Most people would describe a person as rational if they make decisions and form beliefs without letting their emotions dominate. That's not the kind of rational necessary to get people to always agree, though. It might help, sure, but it's not enough to logically guarantee agreement. For that we're going to need people who are Bayesian rationalists.

Bayesian rationality is a precise mathematical way of describing rationality. Bayesian rationalists—or just "Bayesians"—have precise mathematical beliefs. Each of their beliefs is a combination of a statement about the world, like "the sky is blue", and a probability of how likely they think that statement is to be true, say 99.99%. They then update these beliefs based on their observations of the world using a mathematical rule known as Bayes' Theorem, hence why we call them Bayesians.

There are a few things you should know about Bayesians. First, the way they use Bayes' Theorem to update their beliefs is by running a calculation on two values. The first is the prior probability, which is the probability the Bayesian assigned to a statement before they saw the new information. The second is the likelihood, which is how likely they would have been to see that evidence if the statement were true, compared with how likely if it were false. Bayes' Theorem multiplies the prior probability and the likelihood together, then rescales by the overall probability of the evidence, to generate a posterior probability, which is what the Bayesian should update their belief to.
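To make that concrete, here is a minimal sketch in Python of a single update, using invented numbers for a toy disease-test scenario (nothing from this chapter, just an illustration):

    # A toy Bayesian update with made-up numbers.
    # Hypothesis: "this patient has the disease". Evidence: a positive test.

    prior = 0.01                  # P(hypothesis) before seeing the test result
    likelihood_if_true = 0.95     # P(positive test | hypothesis true)
    likelihood_if_false = 0.05    # P(positive test | hypothesis false)

    # Bayes' Theorem: multiply each prior by its likelihood, then rescale so
    # the two possibilities still sum to 1.
    unnormalized_true = prior * likelihood_if_true
    unnormalized_false = (1 - prior) * likelihood_if_false
    posterior = unnormalized_true / (unnormalized_true + unnormalized_false)

    print(round(posterior, 3))    # 0.161

A single piece of strong evidence moved the belief from 1% to about 16%, not all the way to 95%, because the prior still carries weight.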

Second, Bayesian beliefs always have a probability between 0% and 100% but are never 0% or 100%. Why? Because if they were ever 100% certain about one of their beliefs they'd be stuck with it forever, never able to change it based on new information. This is a straightforward consequence of how Bayes' Theorem is calculated, but also matches intuitions: total certainty in the truth or falsity of a statement is to take the statement "on faith" or "by assumption" and so be unable or unwilling to consider alternatives. So if a Bayesian were ever 100% sure the sky is blue, they'd keep believing it even if they moved to Mars and the sky was clearly red. They'd see red and keep believing the sky is blue.
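To see why in the arithmetic (a small worked example, not a formal proof), write out the update for a statement S and some evidence E:

    P(S | E) = P(E | S) × P(S) / [ P(E | S) × P(S) + P(E | not S) × P(not S) ]

If P(S) = 100%, then P(not S) = 0%, the second term in the denominator vanishes, and the whole expression collapses to 1 no matter what the evidence was. The mirror image happens at 0%. Only beliefs strictly between the two extremes can be moved by evidence.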

Third, Bayesians are optimal reasoners. That is, they never leave any evidence on the table; they always update their beliefs as much as possible based on what they observe. If you present evidence to a Bayesian that they've moved to Mars, they'll believe they are on Mars with exactly the probability permitted by the evidence, no more and no less. This means you can't trick Bayesians, or at least not in the normal sense. At most you can fool a Bayesian by carefully filtering what evidence they see, but even then they'll have accounted for the probability that they're being deceived and updated in light of it! Think of them like Sherlock Holmes but with all the messy opportunity for human error removed.

Combined, these facts mean that Bayesians' beliefs are always in a state of fluctuating uncertainty, yet they are the most accurate beliefs possible given a Bayesian's priors and the evidence they've seen. And that means Bayesians can pull off some impressive feats of reasoning!

For instance, returning to the claim that two sufficiently rational people can always agree: if two people are Bayesian rationalists, there's a theorem—Aumann's Agreement Theorem—which proves that they will always agree under special conditions. Those conditions are that they must have common prior beliefs—things they believed before they encountered any of the evidence they know that supports their beliefs—and they must share all the information they have with each other. If they do those two things, then they will be mathematically forced to agree about everything!
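If you want to see the flavor of how that plays out, here is a toy sketch in Python. The worlds, the common prior, and the two agents' private information are all invented for illustration, and rather than sharing everything, the agents just announce their posteriors back and forth (the simpler exchange studied by Geanakoplos and Polemarchakis), which is enough to force agreement here:

    from fractions import Fraction

    # Toy sketch: two Bayesians with a common prior converge by repeatedly
    # announcing their posterior for one event. Everything below is invented.
    worlds = [1, 2, 3, 4]
    prior = {w: Fraction(1, 4) for w in worlds}   # common prior: uniform
    event = {1, 4}                                # the claim under dispute

    # Private information: each agent learns which cell of their partition
    # of the worlds contains the true world.
    alice_partition = [{1, 2}, {3, 4}]
    bob_partition = [{1, 2, 3}, {4}]
    true_world = 1

    def cell_of(partition, world):
        return next(cell for cell in partition if world in cell)

    def posterior(info):
        """P(event | the true world lies somewhere in info)."""
        return sum(prior[w] for w in info & event) / sum(prior[w] for w in info)

    def refine(partition, announcer_partition):
        """Split each cell by what the announcer would have said in each world."""
        refined = []
        for cell in partition:
            groups = {}
            for w in cell:
                announced = posterior(cell_of(announcer_partition, w))
                groups.setdefault(announced, set()).add(w)
            refined.extend(groups.values())
        return refined

    for round_number in range(1, 6):
        alice_says = posterior(cell_of(alice_partition, true_world))
        bob_says = posterior(cell_of(bob_partition, true_world))
        print(f"round {round_number}: Alice {alice_says}, Bob {bob_says}")
        if alice_says == bob_says:
            break
        # Each agent refines their information using the other's announcement.
        alice_partition, bob_partition = (
            refine(alice_partition, bob_partition),
            refine(bob_partition, alice_partition),
        )

In this run Alice opens at 1/2 and Bob at 1/3. The announcements themselves carry information about what each agent must have seen, and by the third round both have settled on 1/2.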

That's pretty cool, but humans disagree all the time. What gives? Well, humans aren't Bayesian rationalists! Most of us don't have precise probabilities assigned to all of our beliefs, and even if we did we wouldn't succeed at always applying Bayes' Theorem correctly to update them. Bayesians are more like a theoretical ideal we can compare ourselves against. We instead look more like Bayesians with a lot of error mixed in: we have vague ideas of how likely the things we believe are to be true, and we do a somewhat fuzzy job of updating those beliefs when we learn new information, relying heavily on heuristics and biases along the way. With a lot of training we can get a little closer to being Bayesians, but we'll always make mistakes, ensuring we'll disagree at least some of the time.

We also don't meet one of the other requirements of Aumann's Agreement Theorem: we don't have the same prior beliefs. This is likely intuitively true to you, but it's worth proving. For us to all have the same prior beliefs we'd need to all be born with the same priors. This seems unlikely, but for the sake of argument let's suppose it's true that we are. As we collect evidence about the world we update our beliefs, but we don't remember all the evidence. Even if we have photographic memories, childhood amnesia ensures that by the time we reach the age of 3 or 4 we've forgotten things that happened to us as babies. Thus by the time we're young children we already have different prior beliefs and can't share all our evidence with each other to align on the same priors because we've forgotten it. Thus when we meet and try to agree, sometimes we can't because even if we have common knowledge about all the information each other has now, we didn't start from the same place and so may fail to reach agreement.

So in theory we cannot all agree about everything, but in practice some people agree about some things. This is because our everyday agreement is fuzzy. Unlike Bayesians, we don't get hung up on disagreements about things like whether to be precisely 99.5% or 99.6% sure the sky is blue. We just agree that the sky is blue and leave open the vague possibility that something crazy happens like waking up on Mars and seeing a red sky.

Given that we can reach fuzzy agreement about some things, can we reach fuzzy agreement about morals and ethics? Can we agree about what is right and wrong in practice even if we cannot in theory?

Disagreeing on Priors

If we want to see if we can reach some fuzzy agreement about what's good and bad, we need to consider in more depth what it means when we disagree. Previously when we talked about disagreements about what words mean we did so in terms of error. This makes sense for many types of disagreements where there's broad agreement about what the right meaning is. For example, if I think all orange-colored citrus fruits are oranges, I might accidentally serve a reddish variety of grapefruit to guests at my breakfast table. Their surprise when they take a bite of these "oranges" will quickly inform me of my mistake.

But other disagreements don't look so much like errors. To continue the fruit example, maybe I do know the difference between oranges and grapefruits, but I happen to think grapefruits taste better than oranges, so I intentionally serve them. My guests disagree. There's not really an error here, though, but rather a difference in preferences. It's similar with disagreements about what clothes to wear, art to view, music to listen to, and so on: these are differences in individual preferences rather than errors. Sure, we can find the likes of busybodies and professional critics who make a career of telling others they have the wrong preferences, but the errors they see only exist from their point of view. If I happen to like music that's derivative and simplistic because I have fond memories associated with it, then that's my business and no one else's—as long as I wear headphones!

But what's the deal with good and bad? Are differences in morals and ethics a matter of error or preference? On the one hand, it seems to be a matter of error because our actions can have serious consequences for other people. If I'm a Food Stealer who thinks it's okay to steal to eat and you're a Never Stealer who thinks all stealing is wrong, you'll be upset when your family has to skip a meal because I snuck into your house and stole your dinner for mine. This seems like a case where one of us is making an error: one of us is wrong about what is right, and we need to settle the ethical question of whether stealing food to eat when hungry is good or bad.

But isn't this also kind of a difference in preference? Perhaps I and my fellow Food Stealers prefer to live in a world where the hungry get to eat even if that means others sometimes have their food stolen. Perhaps you and your fellow Never Stealers prefer a world where no one ever steals, and we can be safe in the knowledge that our food remains ours to do with as we please. So maybe this isn't an error about what's right and wrong, but a disagreement about a preference for the kind of society we'd each like to live in.

When we find a situation like this where two interpretations of the same events seem reasonable, it's worth asking if there's a way both can be true at once. That is, is there a way for differences in morals to both look to us like errors and behave like preferences?

To see, let's first return to our friends the Bayesians. How do they think about morality? For them, any beliefs they have about what's right and wrong are the same as any other beliefs they have, which is to say that those beliefs are statements with probabilities attached. So a Bayesian doesn't make categorical statements like "it's wrong to steal" the way most people do. They instead believe things like "I'm 95% certain that all stealing is wrong".

So if I'm a Bayesian, what does it mean for me to say someone else is "in error"? Well, since I'm an optimal reasoner and my beliefs are already the best ones that can be reckoned given my prior beliefs and the evidence I've seen, it would mean that they don't agree with my beliefs in some way. So if I'm 95% certain the statement "all stealing is wrong other than stealing to eat when hungry" is true and I meet another Bayesian who says that they are 95% certain that "all stealing is wrong" is true, then it looks to me like they've made a mistake since, as just noted, I already have the best possible beliefs. But because they're also a Bayesian, they think the same thing in reverse: they have the best beliefs and I am in error.

If we have the same priors we might be able to come into agreement by sharing all the evidence we have that supports our beliefs. Since we're Bayesians, Aumann's Agreement Theorem applies, so we'll be able to make the same updates on the same evidence and should come to believe the same things. But let's suppose that we're human-like with respect to our priors, which is to say that we don't share the same priors. If we figure this out, we can agree to disagree on priors, which is to say we agree that our disagreement cannot be resolved because we didn't start out with the same prior beliefs. This is analogous to the human situation of having different preferences, only it extends well beyond things we typically think of as preferences to questions of morals and ethics.

Returning to the world of humans, we're not so different from Bayesians who disagree on priors. To wit, we have deeply held beliefs about what is right and wrong. So do other people. Those beliefs were shaped by everything that happened in our lives, and out of those experiences each of us constructed our own unique concept of what is right and wrong. When we disagree on moral questions it feels like others are in error because we did our best to come up with our beliefs, but so did they. Instead of one or both of us being in error, it's reasonable to say that we have a fundamental disagreement about morals and ethics because we don't share the same deeply held beliefs about good and bad. Neither one of us is necessarily correct in some absolute sense; we both have our own reasonable claim on the truth based on what we know.

This is a pretty wild idea, though, because it implies that people with vastly different beliefs from our own might be just as justified as us in their ideas about what's right and wrong. This is true even if they believe something that, to us, seems utterly abhorrent. Rather than pushing you on any hot-button issues here—you can think about those for yourself—let's reconsider the disagreement between the Food Stealers and the Never Stealers.

If you ask Food Stealers what they think about Never Stealers, they'd likely say that Never Stealers are cold and heartless. The Never Stealers, in the same way, think the Food Stealers are unrepentant thieves freeloading off the hard work of the Never Stealers. But is either really right? The Food Stealers grew up thinking it was more important to care for others in need than to greedily hoard food. The Never Stealers grew up thinking it was more important to respect each other's property than to let beggars eat them out of house and home. They believe fundamentally different things about the world, and so they cannot agree. If they were Bayesians, they would be disagreeing on priors. But since they're humans, we might instead say they have different moral foundations.

Different Moral Foundations

The idea of moral foundations, and that people might have different ones, comes from Jonathan Haidt in his book The Righteous Mind. He argues that humans have different fundamental beliefs about what is right and wrong, and that these fundamental beliefs are built out of a few moral "foundations", or core beliefs. A person's moral beliefs can then be thought of as a kind of moral "personality", with each person identifying more or less strongly with each of the foundational moral beliefs.

He identifies six moral foundations. They are:

  • care/harm: concern for the pain and joy of others
  • fairness/cheating: the desire for everyone to be treated the same
  • loyalty/betrayal: placing the group above the individual
  • authority/subversion: deference to leaders and tradition
  • sanctity/degradation: generalized "disgust"; splits the world into pure and impure
  • liberty/oppression: the right to be free of the control of others

Under this theory, one person might believe fairness is more important than liberty and think that it's good to give up some freedom in order to treat everyone equally. Another might believe just the opposite, seeing any limits on freedom as wrong no matter what the cost in terms of other moral foundations. Haidt and his fellow researchers use this theory to explain differences in beliefs about what is right and wrong between different cultures, religions, and even political parties. For example, based on survey data it seems that political conservatives place more value on loyalty, authority, and sanctity than political liberals do, while liberals place more value on care and fairness. Given this, it seems likely that most disagreements between conservatives and liberals are not really about which specific policies will be best for society but about what "best" even means, which is to say they are disagreements about moral foundations.

Perhaps unsurprisingly, some people disagree with Haidt's theory. Many of them think he's right that humans have something like moral foundations or fundamental core beliefs about morality but wrong in what the specific moral foundations are. Others think his theory is irrelevant, because even if people have different moral foundations there's still some fact of the matter about which moral foundations are best. This only further underscores how fundamentally uncertain we are about what things are right and wrong—that we can't even agree on the theoretical framework in which to work out what things are good and bad. In this chapter we've not even begun to touch on the long history of philosophers and theologians trying to figure out what's good and bad. That's a topic one step deeper that we'll return to in Chapter 8.

For now, whether Haidt's theory is correct, has the right idea but the wrong details, or is true but irrelevant, it illustrates well the point we've been driving at in this chapter: we can disagree at a deep, fundamental level about what we believe to be true, so deeply that we may never be able to come to complete agreement with others. When we look at people from other political parties, religions, and cultures than our own and find them acting in ways that seem immoral, they look back at us and think the same. It seems that we each have approximately equally good footing to justify our beliefs and so are stuck disagreeing.

And it's not just other people we disagree with. We also disagree with ourselves all the time! For example, whether or not I think others should do the same, I think the right thing for me to do is to avoid sugary drinks. But most days I drink a Coke. What happened? Why didn't I do what I believed was right? That's the question we'll explore in the next chapter.

Comments

One classic but unpopular argument for agreement is as follows: if two agents disagreed, they would be collectively dutch-bookable; a bookie could bet intermediate positions with both of them, and be guaranteed to make money.

This argument has the advantage of being very practical. The fallout is that two disagreeing agents should bet with each other to pick up the profits, rather than waiting for the bookie to come around.
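To put toy numbers on the bookie's position (mine, not anything from the comment above): suppose Alice thinks an event has probability 0.8 and Bob thinks it's 0.3. The bookie sells Alice a ticket that pays $1 if the event happens, priced at $0.75 (a bargain by her lights), and buys the same kind of ticket from Bob for $0.35 (a great price by his):

    # Toy Dutch book against two disagreeing agents; all numbers invented.
    p_alice, p_bob = 0.8, 0.3          # their probabilities for the event

    price_sold_to_alice = 0.75         # Alice buys a "$1 if it happens" ticket
    price_bought_from_bob = 0.35       # the bookie buys the same ticket from Bob

    # Each side thinks their trade is favorable.
    assert p_alice * 1.0 > price_sold_to_alice
    assert p_bob * 1.0 < price_bought_from_bob

    for event_happens in (True, False):
        payout_to_alice = 1.0 if event_happens else 0.0   # what the bookie owes Alice
        payout_from_bob = 1.0 if event_happens else 0.0   # what Bob owes the bookie
        bookie_profit = (price_sold_to_alice - payout_to_alice
                         + payout_from_bob - price_bought_from_bob)
        print(event_happens, round(bookie_profit, 2))     # 0.4 either way

The bookie's two obligations cancel whatever happens, so the price gap is pure guaranteed profit, which is exactly the money the two agents could have kept by betting with each other directly.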

More generally, if two agents can negotiate with each other to achieve Pareto-improvements, Critch shows that they will behave like one agent with one prior and a coherent utility function. Critch also suggests that we can interpret the result in terms of agents making bets with each other -- once all the fruitful bets have been made, the agents act like they have a common prior (which treats the two different priors as different hypotheses).

So the overall argument becomes: if people can negotiate to take Pareto-improvements, then in the limit of this process, they'll behave as if they had a common prior and shared preferences.

A practical version of this argument might involve an axiom like, if you and I have different preferences, then the "right thing to do" is to take both preferences into account. You and I can eventually reach agreement about what is right-in-this-sense, by negotiating Pareto improvements. This looks something like preference utilitarianism; in the limit of everyone negotiating, a grand coalition is established in which "what's right" has reached full agreement between all participants. Any difference between our world and that world can be attributed to failures to take Pareto improvements, which we can think of as failure to approximate the ideal rationality.

This also involves behaving as if we also agree on matters of fact, since if we don't, we're Dutch-bookable, so we left money on the table and should negotiate another Pareto-improvement by betting on our disagreements.

Furthermore, everyone would agree on a common prior, in the sense that they would behave as if they were using a common prior.

Notice the relationship to the American Pragmatist definition of truth as what the scientific community would eventually agree on in the limit of investigation. "Right" becomes the limit of what everyone would agree on in the limit of negotiation.

Another argument for agreement which you haven't mentioned is Robin Hanson's Uncommon Priors Require Origin Disputes, which makes an argument I find quite fascinating, but will not try to summarize here. 

TAG:

if two agents disagreed, they would be collectively dutch-bookable;

Which is to say that if two agents disagree about something observable and quantifiable...

True, this is an important limitation which I glossed over. 

We can do slightly better by including any bet which all participants think they can resolve later -- so for example, we can bet on total utilitarianism vs average utilitarianism if we think that we can eventually agree on the answer (at which point we would resolve the bet). However, this obviously still begs the question about Agreement, and so has a risk of never being resolved.

As we collect evidence about the world we update our beliefs, but we don't remember all the evidence. Even if we have photographic memories, childhood amnesia ensures that by the time we reach the age of 3 or 4 we've forgotten things that happened to us as babies. Thus by the time we're young children we already have different prior beliefs and can't share all our evidence with each other to align on the same priors because we've forgotten it. Thus when we meet and try to agree, sometimes we can't because even if we have common knowledge about all the information each other has now, we didn't start from the same place and so may fail to reach agreement.

This part of your argument relies critically on the earlier mistake where you claimed that Aumann's theorem requires that we share all the evidence. Again, it does not - it only requires common knowledge of the specific posteriors about the question at hand, as opposed to common knowledge of all posterior beliefs.

We also don't meet one of the other requirements of Aumann's Agreement Theorem: we don't have the same prior beliefs. This is likely intuitively true to you, but it's worth proving. For us to all have the same prior beliefs we'd need to all be born with the same priors. This seems unlikely, but for the sake of argument let's suppose it's true that we are.

I want to put up a bit of a defense of the common prior assumption, although in reality I'm not so insistent on it. 

First of all, we aren't ideal Bayesian agents, so what we are as a baby isn't necessarily what we should identify as "our prior". If we think of ourselves as trying to approximate an ideal Bayesian reasoner, then it seems like part of the project is constructing an ideal prior to start with. EG, many people like Solomonoff's prior. These people could be said to agree on a common prior in an important way. (Especially if they furthermore can agree on a UTM to use.)

But we can go further. Suppose that two people currently disagree about the Solomonoff prior. It's plausible that they have reasons for doing so, which they can discuss. This involves some question-begging, since it assumes the kind of convergence that we've set out to prove, but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp rather than decisively arguing it. The point is that philosophical disagreements about priors can often be resolved, so even if two people can't initially agree on the Solomonoff prior, we might still expect convergence on that point after sufficient discussion.

In this picture, the disagreement is all about the approximation, and not at all about non-common priors. If we could approximate ideal rationality better, we could agree. 

Another argument in favor of a common-prior assumption is that even if we model people as starting out with different priors, we expect people to have experienced actually quite a lot of the world before they come together to discuss some specific disagreement. In your writing, you treat the different data as a reason for diverging opinions -- but taking another perspective, we might argue that they've both experienced enough data that they should have broadly converged on a very large number of beliefs, EG about how things fall to the ground when unsupported, what things dissolve in water, how other humans tend to behave, et cetera. 

We might broadly (imprecisely) argue that they've drawn different data from the same distribution, so after enough data, they should reach very similar conclusions.

Since "prior" is a relative term (every posterior acts as a prior for the next update), we could then argue that they've probably come to the current situation with very similar priors about that situation (that is, would have done if they'd been ideally rational bayesians the whole time) - even if they don't agree on, say the Solomonoff prior. 

The practical implication of this would be something like: when disagreeing, people actually know enough facts about the world to come to agree, if only they could properly integrate all the information. 

TAG:

But we can go further. Suppose that two people currently disagree about the Solomonoff prior. It’s plausible that they have reasons for doing so, which they can discuss.

Sure, but where does that lead? If they discuss it using basically the same epistemology, they might agree, and if they have fundamentally different epistemologies, they probably won't. They could have a discussion about their infra-epistemology, but then the same dichotomy re-occurs at a deeper level. There's no way of proving that two people who disagree can have a productive discussion that leads to agreement without assuming some measure of pre-existing agreement at some level.

This involves some question-begging,

Yep.

but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp

That doesn't imply the incoherence of the anti-agreement camp. Coherence is like that: it's a rather weak condition, particularly in the sense that it can't show there is a single coherent view. If you believe there is a single truth, you shouldn't treat coherence as the sole criterion of truth.

Another argument in favor of a common-prior assumption is that even if we model people as starting out with different priors, we expect people to have experienced actually quite a lot of the world before they come together to discuss some specific disagreement.

But that doesn't imply that they will converge without another question-begging assumption that they will interpret and weight the evidence similarly. One person regards the bible as evidence, another does not.

We might broadly (imprecisely) argue that they’ve drawn different data from the same distribution, so after enough data, they should reach very similar conclusions.

If one person always rejects another's "data" that need not happen. You can have an infinite amount of data that is all of one type. Infinite in quantity doesn't imply infinitely varied.

if only they could properly integrate all the information.

They need to agree on what counts as information (data, evidence) in the first place.

That doesn't imply the incoherence of the anti-agreement camp.

I basically think that agreement-bayes and non-agreement-bayes are two different models with various pros and cons. Both of them are high-error models in the sense that they model humans as an approximation of ideal rationality. 

Coherence is like that: it's a rather weak condition, particularly in the sense that it can't show there is a single coherent view. If you believe there is a single truth, you shouldn't treat coherence as the sole criterion of truth.

I think this is reasoning too loosely about a broad category of theories. An individual coherent view can coherently think there's a unique truth. I mentioned in another comment somewhere that I think the best sort of coherence theory doesn't just accept anything that's coherent. For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn't itself imply that there are many truths.

TAG:

An individual coherent view can coherently think there’s a unique truth

Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.

I mentioned in another comment somewhere that I think the best sort of coherence theory doesn’t just accept anything that’s coherent.

Well, I have been having to guess what "coherence" means throughout.

For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn't itself imply that there are many truths.

Bayesians don't expect that there are multiple truths, but can't easily show that there are not. ETA: The claim is not that Bayesian lack of convergence comes from Bayesian probabilism; the claim is that it comes from starting with radically different priors, and only accepting updates that are consistent with them -- the usual mechanism of coherentist non-convergence.

Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.

To put it simply: I don't get it. If meta-reasoning corrupts your object-level reasoning, you're probably doing meta-reasoning wrong.

Well, I have been having to guess what "coherence" means throughout.

Sorry. My quote you were originally responding to:

This involves some question-begging, since it assumes the kind of convergence that we've set out to prove, but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp rather than decisively arguing it.

By 'coherence' here, I simply meant non-contradictory-ness. Of course I can't firmly establish that something is non-contradictory without some kind of consistency proof. What I meant was, in the paragraph in question, I'm only trying to sketch a possible view, to show some evidence that it can't be easily dismissed. I wasn't trying to discuss coherentism or invoke it in any way.

Bayesians don't expect that there are multiple truths, but can't easily show that there are not.

Not sure what you mean here.

Taking a step back from the details, it seems like what's going on here is that I'm suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you're complaining about the idea of multiple possible views. Does this seem very roughly correct to you, or like a mischaracterization? 

TAG:

To put it simply: I don’t get it. If meta-reasoning corrupts your object-level reasoning, you’re probably doing meta-reasoning wrong

Of course, I didn't say "corrupts". If you don't engage in meta-level reasoning, you won't know what your object-level reasoning is capable of, for better or worse. So you don't get to assume your object-level reasoning is fine just because you've never thought about it. So meta-level reasoning is revealing flaws, not creating them.

Taking a step back from the details, it seems like what’s going on here is that I’m suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you’re complaining about the idea of multiple possible views.

What matters is whether there is at least one view that works, that solves epistemology. If what you mean by "possible" is some lower bar than working fully and achieving all the desiderata, that's not very interesting, because everyone knows there are multiple flawed theories.

If you can spell out an abstract rationality that achieves Agreement, and Completeness and Consistency, and so on, then by all means do so. I have not seen it done yet.

Aumann's Agreement Theorem—which proves that they will always agree…under special conditions. Those conditions are that they must have common prior beliefs—things they believed before they encountered any of the evidence they know that supports their beliefs—and they must share all the information they have with each other. If they do those two things, then they will be mathematically forced to agree about everything!

To nitpick, this misstates Aumann in several ways. (It's a nitpick because it's obvious that you aren't trying to be precise.)

Aumann does not require that they share all information with each other. This would make the result trivial. Instead, all that is required is common knowledge of each other's posterior beliefs on the one question at hand - then they must agree on the probabilities of the answers to that question.

Getting more into the weeds, Aumann also assumes partitional evidence, which means that the indistinguishability relationship between worlds (IE the relationship xRy saying you can't rule out being in world x, when in world y) is symmetric, transitive, and reflexive (so, defines a partition on worlds, commonly called information sets in game theory). However, some of these assumptions can be weakened and still preserve Aumann's theorem. 

Thanks! I should be a bit more careful here. I'm definitely glossing over a lot of details. My goal in the book is to roughly 80/20 things because I have a lot of material to cover and I don't have the time/energy to write a fully detailed account of everything, so I want to say a lot of things as pointers that are enough to point to key arguments/insights that I think matter on the path to talking about fundamental uncertainty and the inherently teleological nature of knowledge.

I view this as a book written for readers who can search for things so expect people to look stuff up for themselves if they want to know more. But I should still be careful and get the high level summary right, or at least approximately right.

Yep, makes sense.

As someone reading to try to engage with your views, the lack of precision is frustrating, since I don't know which choices are real vs didactic. As far as I've read, it still feels introductory, and I'm wondering where it becomes less so.

To some extent I expect the whole book to be introductory. My model is that the key people I need to reach are those who don't yet buy the key ideas, not those interested in diving into the finer details.

There are two sets of folks I'm trying to write to. My main audience is STEM folks who may not have engaged deeply with LW-sequences-type stuff and so have no version of these ideas (or have engaged with LW and have naive versions of the ideas). The second, smaller audience is LW-like folks who are, for one reason or another, some flavor of positivist because they've only engaged with the ideas at a level of abstraction where positivism still seems reasonable.

Curious if you have work with either of the following properties:

  1. You expect me to get something out of it by engaging with it;
  2. You expect my comments to be able to engage with the "core" or "edge" of your thinking ("core" meaning foundational assumptions with high impact on the rest of your thinking; "edge" meaning the parts you are more actively working out), as opposed to useful mainly for didactic revisions / fixing details of presentation.

Also curious what you mean by "positivism" here - not because it's too vague a term, just because I'm curious how you would state it.

For (1), my read is that you already get a lot of the core ideas I want people to understand, so possibly not. Maybe when I write chapter 8 there will be some interesting stuff there, since that will be roughly an expansion of this post to cover lots of misc things I think are important consequences or implications of the core ideas of the book.

For (2), I'm not quite sure where the edge of my thinking lies these days since I'm more in a phase of territory exploration rather than map drawing where I'm trying to get a bunch of data that will help me untangle things I can't yet point to cleanly. Best I can say is that I know I don't intuitively grasp my own embedded nature, even if I understand it theoretically, such that some sense that I am separate from the world permeates my ontology. I'm not really trying to figure anything out, though, just explain the bits I already grasp intuitively.

I think of positivism as the class of theories of truth that claim that the combination of logic and observation can lead to the discovery of universal ontology (universal in the sense that it's the same for everyone and independent of any observer or what they care for). There's a lot more I could say potentially about the most common positivist takes versus the most careful ones, but I'm not sure if there's a need to go into that here.

In the argument about phoobs, I came to the conclusion that Bob is seeing animals that laymen consider phoobs but scientists consider to be a different species.

(Also, if your phoob is red or blue, see a doctor!)

TAG:

Disputes about morality and values aren't the only ones that can't be solved by a Bayesian process of updating on evidence. Ontology can't be either, because there are always different possible interpretations of evidence. People try to solve that sort of problem by appeal to, e.g., simplicity principles, but there is a lot of disagreement about which one to use.

I'd say the biggest reason we disagree on morality is that there are no mind-independent facts about it. This is the single biggest difference between uncertainty about morality and other kinds of uncertainty about science and facts.

Or in other words, morality is at best a subjective enterprise, while understanding reality is objective.

Oh don't worry, I'm going to argue in a later chapter that there are no mind independent facts at all. 😊