Robin wonders (in conversation) why apparently fairly abstract topics don’t get more attention, given the general trend he notices toward more abstract things being higher status. In particular, many topics we and our friends are interested in seem fairly abstract, and yet we feel like they are neglected: the questions of effective altruism, futurism in the general style of FHI, the rationality and practical philosophy of LessWrong, and the fundamental patterns of human behavior which interest Robin. These are not as abstract as mathematics, but they are quite abstract for analyses of the topics they discuss. Robin wants to know why this abstraction doesn’t make them more popular.

I’m not convinced that more abstract things are more statusful in general, or that it would be surprising if such a trend were fairly imprecise. However, supposing they are, and that it would be, here is an explanation for why some especially abstract things seem silly. It might be interesting anyway.

Lemma 1: Rethinking common concepts and being more abstract tend to go together. For instance, if you want to question the concept ‘cheesecake’, you will tend to do this by developing some more formal analysis of cake characteristics, and showing that ‘cheesecake’ doesn’t line up with the distinctions that better cut nature at its joints. Then you will introduce another concept which is close to cheesecake, but more useful. This will be one of the more abstract analyses of cheesecakes that have occurred.

Lemma 2: Rethinking common concepts and questioning basic assumptions look pretty similar. If you say ‘I don’t think cheesecake is a useful concept – but this is a prime example of a squishcake’, it sounds a lot like ‘I don’t believe that cheesecakes exist, and I insist on believing in some kind of imaginary squishcake’.

Lemma 3: Questioning basic assumptions is also often done fairly abstractly. This is probably because the more conceptual machinery you use, the more arguments you can make. For example, many arguments you can make against the repugnant conclusion’s repugnance work better once you have established that aversion to such a scenario is one of a small number of mutually contradictory claims, and have some theory of moral intuitions as evidence. There are a few that just involve pointing out that the people are happy and so on, but where there are a lot of easy non-technical arguments to make against a thing, it’s not generally a basic assumption.

Explanation: Abstract rethinking of common concepts is easily mistaken for questioning basic assumptions. Abstract questioning of basic assumptions really is questioning basic assumptions. And questioning basic assumptions has a strong surface resemblance to not knowing about basic truths, or at least not having a strong gut feeling that they are true.

Not knowing about basic truths is not only a defining characteristic of silly people, but also one of the more hilarious of their many hilarious characteristics. Thus I suspect that when you say ‘I have been thinking about whether we should use three truth values: true, false, and both true and false’, it sounds a lot like ‘My research investigates whether false things are true’, which sounds like ‘I’m yet to discover that truth and falsity are mutually exclusive opposites’, which sounds a bit like ‘I’m just going to go online and check whether China is a real place’.

Some evidence to support this: when we discussed paraconsistent logic at school, it was pretty funny. If I recall, most of the humor took the form ‘Priest argues that bla bla bla is true of his system’ … ‘Yeah, but he doesn’t say whether it’s false, so I’m not sure if we should rely on it’. I feel like the premise was that Priest had some absurdly destructive misunderstanding of concepts, such that none of his statements could be trusted.
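For readers who haven’t run into it, here is a minimal sketch of the kind of system being joked about, assuming something like Priest’s ‘Logic of Paradox’, a standard paraconsistent logic whose three values are true, false, and both (the details of Priest’s own presentation may differ):

```python
# A minimal sketch of truth tables in a Priest-style three-valued logic,
# where the values are true, false, and both-true-and-false.
# Illustration only; not a faithful reproduction of Priest's own system.

T, B, F = "true", "both", "false"   # truth values, ordered F < B < T
ORDER = {F: 0, B: 1, T: 2}

def neg(a):
    """Negation swaps true and false; 'both' stays 'both'."""
    return {T: F, B: B, F: T}[a]

def conj(a, b):
    """Conjunction takes the minimum of its arguments under F < B < T."""
    return a if ORDER[a] <= ORDER[b] else b

def disj(a, b):
    """Disjunction takes the maximum of its arguments under F < B < T."""
    return a if ORDER[a] >= ORDER[b] else b

# A sentence valued 'both' makes a contradiction come out 'both' rather
# than plain false, without every other sentence becoming derivable.
p = B
print(conj(p, neg(p)))   # both
print(disj(p, neg(p)))   # both
```

The point, as I understand it, is that a sentence can be both true and false without everything else becoming derivable – which is precisely the sort of move that sounds like not knowing what ‘true’ and ‘false’ mean.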

Further evidence: I feel like some part of my brain interprets ‘my research focuses on determining whether probability theory is a good normative account of rational belief’ as something like ‘I’m unsure about the answers to questions like “what is 50%/(50% + 25%)?”’. And that part of my brain is quick to jump in and point out that this is a stupid thing to wonder about, and it totally knows the answers to questions like that.
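(For the record, the arithmetic in question does come out neatly:)

\[
\frac{50\%}{50\% + 25\%} = \frac{0.5}{0.75} = \frac{2}{3} \approx 67\%
\]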

Other things that I think may sound similar:

  • ‘my research focuses on whether not being born is as bad as dying’ <—> ‘I’m some kind of socially isolated sociopath, and don’t realize that death is really bad’
  • ‘We are trying to develop a model of rational behavior that accounts for the Allais paradox’ <—> ‘we can’t calculate expected utility’
  • ‘Probability and value are not useful concepts, and we should talk about decisions only’ <—> ‘My alien experience of the world does not prominently feature probabilities and values’
  • ‘I am concerned about akrasia’ <—> ‘I’m unaware that agents are supposed to do stuff they want to do’
  • ‘I think the human mind might be made of something like sub-agents’ <—> ‘I’m not familiar with the usual distinction of people from one another’
  • ‘I think we should give to the most cost-effective charities instead of the ones we feel most strongly for’ <—> ‘Feelings…what are they?’

I’m not especially confident in this. It just seems a bit interesting.

