As with all my blog posts, these are only very preliminary thoughts. I worry that their scope perhaps gives them a sense of self-seriousness I don’t intend. This piece is about floating ideas that might be worth discussing, rather than making strong claims to truth.

Cate Hall tweeted:

“In 2017 I was convinced AI timelines were <5 years so I cashed in my 401k and blew away the money and let me tell you this particular form of intellectual consistency is Not Recommended”

I didn’t believe that superintelligent AGI was less than five years away in 2017, but I can imagine many reasons a smart person could have thought that. I can also see the argument that, if you believe something, you should act on it. After all, not acting on your beliefs is a form of hypocrisy. Since at least Jesus, there has been a tradition of treating hypocrisy as among the very worst of sins. Hannah Arendt wrote:

As witnesses not of our intentions but of our conduct, we can be true or false, and the hypocrite's crime is that he bears false witness against himself. What makes it so plausible to assume that hypocrisy is the vice of vices is that integrity can indeed exist under the cover of all other vices except this one. Only crime and the criminal, it is true, confront us with the perplexity of radical evil; but only the hypocrite is really rotten to the core.

In the present day, there is a tradition of demanding that people who make unusual claims be willing to place bets on them. If the asserter will not take the bet, then presumably they don’t really believe what they’ve claimed, or so the rhetoric goes. Sometimes people will preemptively offer to make such bets.

But I think Cate Hall is right that the kind of intellectual consistency she engaged in here was a mistake. In showing why that’s the case, I propose to overturn several common and fundamental beliefs about how the human mind and intellectual inquiry work. I also plan to critique (in a broadly friendly way) the social and intellectual movement sometimes called rationalism or internet rationalism. I suppose I am on the margins of rationalism, but I have never felt comfortable identifying with it, at least in part because of the ideas I will develop in this piece.

Aside: I also have a general sense that rationalists tend to discount the importance of hard, slogging domain expertise, and that they are insufficiently attentive to the importance of objectively clashing interests in politics, but those points are discussions for another day.

A not-so-surprising claim

Let me start with a not-at-all-surprising claim. It is generally vastly more important that society gets a question right, or comes closer to the truth, than that you personally get a question right or come closer to the truth. This is especially true in relation to:

*Political questions

*Questions relating to existential risk

*Scientific and technological questions

*Ethical questions

A sequence of more surprising claims

Now, a sequence of more surprising claims.

1. There are multiple separable aspects of the mind that could be called “belief”, at least four of them. These can go in different directions on the same subject in a single person.

2. Collective rationality can be opposed to individual rationality. Behavior that will help you ‘get it right’ about a topic can be different from, and even opposed to, the behavior that will help society as a whole get it right.

3. There are different epistemological rules for different kinds of belief, and it is fitting for different aspects of belief to go in different directions.

4. You should be somewhat pigheaded, stubborn, irrational and prone to confirmation bias in what you advocate. Hedgehogs may be more valuable than foxes.

5. You shouldn’t always advocate for what is most likely to be true.

Four different kinds of ‘belief’

In the past, I’ve written about the idea that there are multiple different aspects of belief. I’ve suggested that there are at least the following facets of belief, and there may be more:

Verbal action-guiding beliefs. Your verbal action-guiding beliefs are what you can sincerely assert.

Nonverbal action-guiding beliefs. Your non-verbal action-guiding beliefs are the beliefs that guide your non-verbal actions.

Aliefs. Defined by the philosopher Tamar Gendler, aliefs are the tendency to feel as if a certain thing were the case, even when you don’t believe it. As someone with OCD, I’m very used to this, unfortunately. A classic example is walking on a glass platform over the Grand Canyon and feeling like you might fall, even though you are in no danger.

Commitment. You might be committed to acting as if a certain thing is a certain way. For example, you might “believe” in the goodness of humanity in the sense of intending to build a life around that idea.

It’s relatively obvious how most of these can diverge, but let me motivate the view that the first two can diverge.

[Example and idea owing to David Braddon-Mitchell] Young Catholic men believe that the sin of self-abuse (more colloquially called masturbation) at the very least risks putting their souls in a state of mortal sin. Further, they believe that if you are in a state of mortal sin and you die, you go to hell forever, a condition of torment by separation from God and by the flames of hellfire. Despite this, young Catholic men often masturbate. Why?

The simplest explanation is that they don’t really believe it. I find this unsatisfactory. They believe they believe it. They sincerely assert it. They make inferences from it as if it were true.

Another explanation would be that they do believe it in an uncomplicated way, but their behavior is massively irrational. For various reasons I find this unsatisfactory too: the scale and spread of the irrationality it would imply, the fact that the young men don’t always rush to the confessional afterward, and so on.

So I prefer this explanation. The system of beliefs that governs verbal behavior and the system of beliefs that governs non-verbal behavior are at least partially separable.

Some people have suggested to me that we shouldn’t draw too much from this example because the phenomenon is peculiar to religion. I don’t think this is true. For example, I think it shows up in people’s betting behavior: there are plenty of people who feel subjectively certain of something, have no objection to gambling in general, but would be reluctant to gamble on that particular proposition. In my limited experience, this seems especially likely when the belief is idiosyncratic, i.e. disagreed with by most people.

The vicissitudes of reason

An ‘optimized’ society should have different epistemological rules governing different modes of belief

Let’s say that I’m a doctor. I believe that a certain form of cancer treatment is superior to another. My view is not widely held in the medical profession.

If I wanted to be right, there’s a pretty good case I should reason as follows:

Sure, my judgment could be correct, but all those people who disagree with me are equally certain of their correctness. Taking the outside view, then, I’m probably in the wrong; ergo, I should heavily moderate my disagreement with the mainstream, unless I have very strong special evidence that they haven’t seen.

But if everyone reasoned like this, diversity of opinion in the medical profession would fall, and medical opinion would improve in accuracy more slowly, if at all. On the other hand, there’s a pretty good argument that you want your personal physician, in making medical decisions, to lean strongly towards the majority view.

So we have a contradiction between different well-motivated norms. What should we do?

The simplest solution is Double Booking [the name was invented by Mark Alfano, a fellow Sydney philosopher; to be clear, he doesn’t necessarily endorse the idea].

>You should keep one set of beliefs governing what you advocate, formed solely on the basis of your judgment.

>And you should keep another set of beliefs governing what you think is true on the basis of all the evidence, including the opinions of others and the recognition that there is nothing all that special about your reasoning compared to that of others (or at least others with expertise in the relevant area).

The first set of beliefs should guide your public advocacy of ideas, and the second set of beliefs should guide your actions, especially in high-stakes situations.
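For concreteness, here is a minimal sketch in Python of what keeping two “books” might look like. The class, field names, and probabilities are my own illustrative inventions, not anything proposed by Alfano; the only point is that the two books are kept separate and consulted for different purposes.

```python
# A toy sketch of Double Booking: two probability estimates per question,
# one formed solely from your own reasoning (inside view), one that also
# weighs the judgments of others (outside view). All numbers are invented.

from dataclasses import dataclass

@dataclass
class DoubleBookedBelief:
    proposition: str
    inside_view: float   # your credence based solely on your own judgment
    outside_view: float  # your credence after weighing expert/peer opinion

    def for_advocacy(self) -> float:
        """Consult this book when deciding what to argue for in public debate."""
        return self.inside_view

    def for_action(self) -> float:
        """Consult this book when making (especially high-stakes) decisions."""
        return self.outside_view

belief = DoubleBookedBelief(
    proposition="treatment A beats treatment B",
    inside_view=0.7,   # how it seems to me on the merits
    outside_view=0.3,  # after deferring substantially to the profession
)
print(belief.for_advocacy())  # argue as if 0.7
print(belief.for_action())    # act as if 0.3
```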

Now I happen to think that we already do this pretty naturally, in a way that corresponds to the distinction I drew between verbal and action beliefs. This is why, for example, people will passionately and sincerely argue for beliefs but be very reluctant to bet on them.

What we argue tends to be based more on how things seem to us, while our actions are relatively more informed by the outside view, including the judgment of others (or at least that’s what I think I’ve observed over a lifetime of careful people-watching; further experimentation would be needed to confirm this). I think this existing tendency is good, and we should accentuate it even further. Action beliefs should become even more based on the outside view and the judgments of others. Verbal beliefs should become even more based on our own good judgment. I propose as broad guidelines the following norms for the different kinds of beliefs.

Verbal beliefs: Weighted heavily on your own impressions. This will ensure diversity in debate and stop us all from settling into consensus too quickly.

Action beliefs (particularly around high-stakes actions): Weighted more on general opinion.

Aliefs: As bluntly pragmatic as you can make them.

Commitment beliefs: Based on moral, pragmatic and aesthetic considerations.

Of course, the line between these different kinds of beliefs, and how it interacts with this rationale, is not entirely clear. Consider, for example, a scientist with a heterodox view. Our model indicates that her verbal beliefs should be biased towards the inside view and her action beliefs should be biased towards the outside view, but running experiments is an action, and presumably we would want her choices of which experiments to run to be in line with her inside view. Many other counterexamples are possible, but as a rough rule of thumb, I would suggest the distinction works for many cases: actions should be guided in large degree by expertise-weighted consensus, and advocacy should be guided by personal assessment.

[Come up with some counterexamples in the comments. It would be good to start work on making the distinction precise.]

So long as we are all transparent about this, I do not think there is any hypocrisy or anything otherwise morally undesirable about these practices.

Help everyone get it right

You should try to contribute to public reason, not just freeride on the reason of others

A number of authors in the financial world have complained about the dangers of index fund investing. The argument goes as follows: these people are effectively freeriding on the information gathered by others. They are not making a contribution to economic rationality. Grossman and Stiglitz showed that this free-rider problem prevents the stock market from being as informationally efficient as it otherwise might be:

The Grossman-Stiglitz Paradox is a paradox introduced by Sanford J. Grossman and Joseph Stiglitz in a joint publication in American Economic Review in 1980 that argues perfectly informationally efficient markets are an impossibility since, if prices perfectly reflected available information, there is no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.

(Above from Wikipedia)

It’s not clear how much is at stake in the stock market getting prices exactly right, so it’s far from obvious that the Grossman-Stiglitz paradox causes significant harm. However, it is clear that, as a society, there are many issues where it’s very important we get it right. On those issues, freeriding should be discouraged.

That means contributing the information that you uniquely have. If you just wanted to get it right yourself, you’d spend most of your time reading and following up on the perspectives of others. This would increase your individual rationality, but you shouldn’t want to be a passive investor in the marketplace of ideas; you should be looking for the hidden gems others have overlooked.

In fairness, we’re not exactly at any risk of people being shy with their idiosyncratic opinions, but it’s worth conceptualizing this, nonetheless, to clarify the value and role of sharing those idiosyncratic opinions.

In praise of the hedgehog and the soldier

A hedgehog is someone who considers the world through a single, ideally carefully developed and elaborate theoretical lens. A fox is someone who thinks in terms of many different models and ideas, promiscuously applying and swapping between them to try and reach a balanced view.

Philip Tetlock, in Superforecasting: The Art and Science of Prediction, makes a clear argument for being a fox. The evidence he marshals suggests foxes are better at predicting the future than hedgehogs; ergo, one should aspire to be a fox if one wants to be right. Surely an open-and-shut case.

But to Tetlock I would ask: where do you think the foxes are getting their multiple ideas to synthesize and play off each other? In international affairs, for example, it’s all well and good to have a fellow who draws on a little liberalism, a little constructivism, a little realism and a little Marxism to make predictions. But step back: for that to happen, you first need someone to full-throatedly outline the liberal perspective, someone to outline the constructivist perspective, and so on. Foxes are secondary consumers, and a single fox needs to feed on many hedgehogs to survive and thrive.

I’m not certain about any of this, to be sure. However, I wouldn’t take it for granted that, just because the theoretically promiscuous are individually better at making predictions, they therefore make society as a whole better at making predictions.

Perhaps similar arguments could be made against Julia Galef’s distinction between scouts and soldiers, in defence of the honor of the soldier. However, I have not read Galef’s book, and maybe she addresses this point, so I will not make a claim about it either way.

Mark Alfano has told me that there are similar arguments in the social epistemology literature on a related topic: confirmation bias. Apparently, there are formal results showing that, in some models, a little stubbornness and confirmation bias can prevent convergence on an orthodoxy from happening too soon. I previously wrote about this in the precursor essay to this piece, “I’m not particularly worried about confirmation bias”.

If you want to increase the likelihood of society getting it right, maybe you shouldn’t pick what you advocate wholly based on the likelihood of it being true

Consider society’s aggregate assessment of the likelihood that B is true, and your own assessment of the same (SB, society’s assessment of B; PB, your personal assessment of B). Generally, people will advocate or not advocate for B on the basis of the value of PB.

But why not instead work on the basis of:

PB minus SB, or rather its absolute value; that is, advocating for B on the basis of the gap between how strongly you believe B and how strongly society as a whole believes it.

Of course, this will not be the only factor that decides it. We will also need to consider how important it is to you that society gets it right on the question of B. Let I equal importance, so we have roughly:

Advocate for beliefs for which |(PB - SB)| * I is highest: topics for which the correct answer is important, and where your opinion diverges from the mainstream.
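To make the arithmetic concrete, here is a minimal sketch of the ranking in Python. The topics, probabilities and importance weights below are invented purely for illustration; the only point is that a large gap on an important question outranks a small gap or a trivial one.

```python
# A toy ranking by |PB - SB| * I. All inputs are invented for illustration.

def advocacy_priority(pb: float, sb: float, importance: float) -> float:
    """Return |PB - SB| * I for a single topic."""
    return abs(pb - sb) * importance

# (topic, PB: your credence, SB: society's credence, I: importance weight)
topics = [
    ("heterodox cancer treatment works", 0.60, 0.20, 9.0),
    ("popular policy is mildly harmful", 0.55, 0.50, 3.0),
    ("obscure historical claim is true", 0.90, 0.10, 1.0),
]

ranked = sorted(topics, key=lambda t: advocacy_priority(t[1], t[2], t[3]), reverse=True)
for name, pb, sb, imp in ranked:
    print(f"{name}: priority = {advocacy_priority(pb, sb, imp):.2f}")
# The important, high-divergence topic comes out on top (3.60 > 0.80 > 0.15).
```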

This is only a very rough rule. There must be at least a dozen friendly amendments or questions that could be added or asked, for example:

*What if a question is important if it has a positive resolution but unimportant if it has a negative resolution?

*What if your personal beliefs diverge far from the majority view, but you don’t know a lot about the subject yet?

*How should you split your energy between different topics?

*What about a parameter representing the degree to which you can successfully affect public opinion on a given topic, relative to others?

And so on, but I think advocating around topics that maximize |(PB - SB)| * I is a good first pass for what we should be doing.

Let me suggest a great heuristic for finding areas where the value |(PB - SB)| * I is high. Look for topics where powerful people benefit from a certain directionality of belief and weaker people lose out. It doesn’t take a genius to see that such ideas will be under-resourced.

Towards a social rationalism

Various authors in the tradition of social epistemology have argued that social reason is different from individual reason. In this piece, I’ve given some reasons why. Some of those reasons are already in the literature (e.g., as I alluded to above, other authors have discussed how “pigheadedness” can be collectively optimal). Other ideas are, to the best of my limited knowledge, new.

I don’t know whether I think rationalism is, overall, a good thing; I worry about a fetishistic view of rationality and reason. Still, let me give some unsolicited advice.

In my view, if the rationalist social movement wants to deal with its blindspots seriously, and I believe many rationalists do, a very good start would be conceiving of rationality as a more social process: not social only in pathological cases, but in general. Relatedly, much of what it means to improve rationality should be conceived of in terms of collective processes.

It follows:

1. Rationalists should try to behave in a way that maximizes public rationality with respect to society as a whole.

2. Rationalists should try to behave in a way that maximizes public rationality with respect to the rationalist community.

3. In addition to comporting their personal behavior to maximize collective rationality, rationalists should encourage social and governmental reforms that support rationality.

Appendix: A partial list of the numerous exceptions to the idea that the majority opinion should direct your actions

1. Where the stakes are low.

2. Where actions are themselves an important part of exploring the space of competing ideas.

3. Where your different “beliefs” are not reducible to questions of fact, or moral inferences, but fundamental values.

4. Where you have a source of powerful evidence that is wholly unavailable to the majority.

5. This is not an exception per se, but important to keep in mind. Where we talk of majorities, this is shorthand for expertise-weighted majorities. If all the biologists believe in evolution, but 90% of people don’t, you go with the biologists.

Crossposted from my blog: https://philosophybear.substack.com/

Comments

>Advocate for beliefs for which |(PB - SB)| * I is highest

So if society believes something is 1% likely, you believe it is 2% likely, and it is very important, you want to loudly claim that it's true.

This approach leads to bad things. First, people will stop believing you. Second, if they do believe you, you'll create an information cascade punishing the truthful ones.

I added this to the blog post to explain why I don't think your objection goes through:

"[Edit: To respond to an objection that was made on another forum to this blog- advocate for in the context of this section does not necessarily mean the claim is true. If the public thinks the likelihood of X is 1%, and your own assessment, not factoring in the weight of others’ judgments, is 30%, you shouldn’t lie and say you think it’s true. Advocacy just means making a case for it, which doesn’t require lying about your own probability assessment.]"

How do you address the case when people genuinely wish to believe in lies, hear lies, etc.?

How do we measure intent? 

Unless you mean to say a person who actively and verbally attempts to shun the truth?

>How do we measure intent?

Through their actions.