For example, if you say, "The rational belief is X, but the true belief is Y," then you are probably using the word "rational" in a way that means something other than what most of us have in mind.

This was copied from here.

Surely it is obvious that there are lots of situations in which one might say this. Consider the following:

Rob looks in the newspaper to check the football scores. The newspaper says that United won 3-2, but it is a misprint because City actually won 3-2. In this case, the rational belief is that United won, but the true belief is that City won.

Am I missing something?

prase:

if you say, "The rational belief is X, but the true belief is Y" then you are probably using the word "rational" in a way that means something other than what most of us have in mind

There is nothing wrong with saying "the rational belief given incomplete information is X, but the truth is Y, as follows from more complete information", and the quote didn't try to forbid that. Rather, it warned against saying "as far as I know, Y is true, but rationality tells me to believe X".

Edit: Rationality (the epistemic sort) is a method which derives true beliefs from given information in the optimal way. Rational beliefs should depend on accessible information and so they can't always correspond to the truth. The idea of the quote is that we define rationality as the best truth-finding algorithm, and therefore we are not allowed to use any other algorithm to determine the truth. If we do that, we either deliberately use a worse tool in the presence of a better one, or we don't follow our own definition of rationality.

As Eliezer mentions here:

Jaynes used to recommend that no one ever write out an unconditional probability: That you never, ever write simply P(A), but always write P(A|I), where I is your prior information. I'll use Q instead of I, for ease of reading, but Jaynes used I. Similarly, one would not write P(A|B) for the posterior probability of A given that we learn B, but rather P(A|B,Q), the probability of A given that we learn B and had background information Q.
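To make that notation concrete (a restatement of standard Bayes' theorem, not a quote from the post): with the background information written out explicitly,

P(A|B,Q) = P(B|A,Q) * P(A|Q) / P(B|Q)

so the posterior depends on Q at every step, and two agents with different background information Q can apply the same rule to the same observation B and rationally arrive at different values of P(A|B,Q).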

The sentiment is wrong, but I wonder how common our subjective interpretation is. The quote might be right about its rarity. Else why would a contrarian like Taleb argue for it?

"I will repeat this point again until I get hoarse: a mistake is not something to be determined after the fact, but in the light of the information until that point."

I've had managers who stubbornly refused to understand this -- even in cases where I could point out that given the same evidence we would still make the same choice.

A more practical approach is to ask: What information, if we had had it, would have led us to make a better decision? Now, how do we make sure that next time, we have that information?

For the conversations I have in mind, I did bring that up.

In response they pointed to "the clear evidence of failure" (their sense of it at least). I replied that I had warned them about the possibility of that outcome -- and by itself that wasn't enough.

With this particular set of managers, my working style conflicted with their managing style: I would offer them options and warn them about potential issues. They would go for the fastest-to-implement option and treat "potential" issues as very unlikely. When something did go wrong, they would no longer remember our original constraints, or my original warning. In these cases what they wanted from me was a guilty "mea culpa", which I thought was pointless -- regardless of who was at fault. I told them that. Instead, I would talk to them about how we could make better decisions in the future. They were looking for emotion and I gave them logic. They took this as me not accepting responsibility.

Part of what we did wrong was that we did not document the decision-making process. A year later, when issues began, nobody had a clear memory of the original context for that decision. We also did not have any process for considering risk. These deficiencies made it difficult to improve our process over time.

Probability is in the mind, and beliefs are probabilities. It is perfectly rational to assign different probabilities to the same event given that you have different information about it (because probabilities are statements about your lack of information, not inherent properties of the event in question). In your example, someone who read the newspaper and did not know about the misprint has different information about who won the game than somebody who knows that there WAS a misprint; therefore, even if both apply the methods of epistemic rationality, they are bound to assign different probabilities to the event that United won, i.e., they end up with different beliefs. In other words, the function for determining whether a given agent's belief is rational doesn't just take an argument for the event it refers to, but also needs a parameter for the information the agent has available about said event.

Hence, you are confused because you are talking about situations with different states of information, where different beliefs about the same event may be rational. The quote is talking about a single situation with a single state of information, where somebody still speaks of the "rational" belief being X while the "true" belief is Y.
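As a rough illustration of that two-argument view (my own sketch with made-up numbers, not something from the thread), the same Bayesian update gives Rob and the misprint-aware reader different rational beliefs about the same event:

```python
# A minimal sketch (my own numbers, not from the thread): the "is this belief
# rational?" function takes both the event and the agent's information.

def p_united_won(paper_says_united, knows_of_misprint,
                 prior=0.5, misprint_rate=0.01):
    """Posterior probability that United won, given what this agent knows."""
    if knows_of_misprint:
        # This agent knows the printed winner is wrong, so the paper's claim flips.
        return 0.0 if paper_says_united else 1.0
    if paper_says_united:
        # Bayes' theorem with an assumed 1% chance the paper misprints the winner.
        p_report_if_won = 1 - misprint_rate
        p_report_if_lost = misprint_rate
        numerator = p_report_if_won * prior
        return numerator / (numerator + p_report_if_lost * (1 - prior))
    return prior  # no report seen; fall back on the prior

# Same event, different information, different (but equally rational) beliefs:
print(p_united_won(paper_says_united=True, knows_of_misprint=False))  # Rob: ~0.99
print(p_united_won(paper_says_united=True, knows_of_misprint=True))   # informed reader: 0.0
```

Both outputs come from the same rule; only the information parameter differs.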

Am I missing something?

Short answer: No.

See, in particular: Probability is Subjectively Objective.

This is related to a mistake that crops up when people say "Rationalists Should Win" and mean it literally. (So I try to avoid the phrase myself.)

This is related to a mistake that crops up when people say "Rationalists Should Win" and mean it literally.

Agree that that has similar problems.

I had forgotten the extent to which Eliezer stuck to his guns in backing that claim. Ick.

The charitable interpretation of Eliezer's position here is that he doesn't want to tie the word 'rational' to any particular methodology. He wants to tie it to "winning". So that, if someone comes up with a better decision theory (for example), he wants to evaluate the 'rationality' of that theory by the criterion of whether it wins, rather than by the criterion of whether it matches the orthodox methodology.

Personally, I think he is pissing into the wind in this attempt to hijack the meaning of a word. People in economics and related fields tend to take 'rational' as a term that can be freely equated to their own preferred methodology, and they use various hyphenated rationalities when they need to compare methodologies.

The charitable interpretation of Eliezer's position here is that he doesn't want to tie the word 'rational' to any particular methodology. He wants to tie it to "winning". So that, if someone comes up with a better decision theory (for example), he wants to evaluate the 'rationality' of that theory by the criterion of whether it wins, rather than by the criterion of whether it matches the orthodox methodology.

I don't have a problem with that definition. It seems the most useful one. It is just that being maximally rational doesn't make you win all the time. It maximizes winning. Or expected winning. Something like that. It doesn't make you guess lotto numbers perfectly or generally act like you're save scumming the universe.

The rational belief isn't that United won. It's that United has some high probability of having won.

I think the rational is the closest to true you can possibly [edit: justifiably] get from where you are.

Notice that in your example you introduce a third-person perspective which is aware of the misprint. From Rob's perspective the rational is to believe the newspaper. If he knew of the misprint, as the third-person perspective seems to, the rational would be not to believe the newspaper.

However, to reach a position where you can say "The rational belief is X, but the true belief is Y", you have to be making this distinction from the same perspective, which you shouldn't be able to do if you're defining 'rational' as most of us do.

"I think the rational is the closest to true you can possibly get from where you are."

The truth is the closest you can get to the truth. Suppose Rob reads the newspaper but then believes that City won because they're his team and it would make him happy if they won. His belief would be closer to the truth, but it would not be rational.

That can be patched with editing to "the rational is the closest to true you can justifiably get from where you are".

I'm taking a page from the definition of knowledge as 'justified true belief'. The belief that his team won would be true but not justified. Just as a broken clock is right twice a day but that still doesn't make it a reliable time measurement apparatus.

So I can use "unjustifiable" methods to get even closer to the truth? Screw "rationality", then!

Not consistently, which is the point.

So I can use "unjustifiable" methods to get even closer to the truth? Screw "rationality", then!

Yes, you can. But with a lower probability. In the limiting case you can add or subtract an epsilon from the 'rational' probability assignment, and you will be closer to the truth (0.5 - some other epsilon) of the time.

This applies to instrumental rationality too. A lucky guess will win you the lottery. An expected utility calculation will not (except when extremely desperate and similarly lucky).
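One way to see the trade-off in numbers (my own framing and made-up parameters, not the commenter's exact claim): if your assignment already matches the true frequency, a blind nudge sometimes lands nearer the realized outcome, but on average it costs you.

```python
import random

# A minimal sketch (my own framing, made-up parameters): the event happens with
# true frequency p, and the calibrated forecast is p itself. A blind nudge of
# size eps sometimes lands nearer the realized outcome, but its average squared
# error is worse than the calibrated forecast's.
random.seed(0)
p, eps, trials = 0.7, 0.1, 100_000
nudge_closer = 0
err_calibrated = err_nudged = 0.0

for _ in range(trials):
    outcome = 1.0 if random.random() < p else 0.0
    direction = eps if random.random() < 0.5 else -eps  # direction chosen blindly
    q = min(1.0, max(0.0, p + direction))
    if abs(q - outcome) < abs(p - outcome):
        nudge_closer += 1
    err_calibrated += (p - outcome) ** 2
    err_nudged += (q - outcome) ** 2

print(nudge_closer / trials)                          # the nudge "wins" only about half the time
print(err_calibrated / trials, err_nudged / trials)   # and its average error is higher
```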

Did you see me mentioning methods anywhere? If a method can get you closer to the truth, then that is its justification. But in the example given, the method was "it would make him happy if they won". This is not a reliable method for reaching the truth, and on average it will get you farther from it.

In this case, the rational belief is that United won, but the true belief is that City won.

Given the information that City indeed won, it is rational to believe that City won, and one who has that information could also say that (for all they know) it's true that City won. Given information that United won, it is rational to believe that United won, and one who has that information could also say that (for all they know) it's true that United won. All this with some level of certainty, which would be lower for reading bare stats one time in one newspaper, and much higher when evidence is received from multiple error-resistant sources.