[ Question ]

What are the open problems in Human Rationality?

by Raemon · 13th Jan 2019 · 43 comments

60


LessWrong has been around for 10+ years, CFAR's been at work for around 6, and I think there have been at least a few other groups or individuals working on what I think of as the "Human Rationality Project."

I'm interested in hearing, especially from people who have invested significant time in attempting to push the rationality project forward, what they consider the major open questions facing the field. (More details in this comment)

Rough gesturing at "What is the Rationality Project?"

I'd prefer to leave "Rationality Project" somewhat vague, but I'd roughly summarize it as "the study of how to have optimal beliefs and make optimal decisions while running on human wetware."

If you have your own sense of what this means or should mean, feel free to use that in your answer. But some bits of context for a few possible avenues you could interpret this through:

Early LessWrong focused a lot on cognitive biases and how to account for them, as well as Bayesian epistemology.

CFAR (to my knowledge, roughly) started from a similar vantage point and eventually moved in the direction of "how do you figure out what you actually want, and bring yourself into 'internal alignment' when you want multiple things, and/or different parts of you want different things and are working at cross purposes." It also looked a lot into Double Crux, as a tool to help people disagree more productively.

CFAR and Leverage both ended up exploring introspection as a tool.

Forecasting as a field has matured a bit. We have the Good Judgment Project.

Behavioral Economics has begun to develop as a field.

I recently read "How to Measure Anything", and was somewhat struck at how it tackled prediction, calibration and determining key uncertainties in a fairly rigorous, professionalized fashion. I could imagine an alternate history of LessWrong that had emphasized this more strongly.

With this vague constellation of organizations and research areas, gesturing at an overall field...

...what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?

10 Answers

Wei_Dai

Jan 14, 2019

64

I went through all my LW posts and gathered the ones that either presented or reminded me of some problem in human rationality.

1. As we become more rational, how do we translate/transfer our old values embodied in the less rational subsystems?

2. How to figure out one's comparative advantage?

3. Meta-ethics. It's hard to be rational if you don't know where your values are supposed to come from.

4. Normative ethics. How much weight to put on altruism? Population ethics. Hedonic vs preference utilitarianism. Moral circle. Etc. It's hard to be rational if you don't know what your values are.

5. Which mental subsystem has one's real values, or how to weigh them.

6. How to handle moral uncertainty? For example, should we discount total utilitarianism because we would have made a deal for total utilitarianism to give up control in this universe?

7. If we apply UDT to humans, what does it actually say in various real-life situations like voting or contributing to x-risk reduction?

8. Does Aumann Agreement apply to humans, and if so how?

9. Meta-philosophy. It's hard to be rational if one doesn't know how to solve philosophical problems related to rationality.

10. It's not clear how selfishness works in UDT, which might be a problem if that's the right decision theory for humans.

11. Bargaining, politics, building alliances, fair division, we still don't know how to apply game theory to a lot of messy real-world problems, especially those involving more than a few people.

12. Reality fluid vs. caring measure. Subjective anticipation. Anthropics in general.

13. What is the nature of rationality, and more generally normativity?

14. What is the right way to handle logical uncertainty, and how does that interact with decision theory, bargaining, and other problems?

Comparing the rate of problems opened vs problems closed, we have so far to go....


Thrasymachus

Jan 13, 2019

19

There seem to be some foundational questions for the 'Rationality project' which (reprising my role as querulous critic) are oddly neglected in the 5-10 year history of the rationalist community: conspicuously, I find the best insight into these questions comes from psychology academia.

Is rationality best thought of as a single construct?

It roughly makes sense to talk of 'intelligence' or 'physical fitness' because performance in sub-components positively correlate: although it is hard to say which of an elite ultramarathoner, Judoka, or shotputter is fittest, I can confidently say all of them are fitter than I, and I am fitter than someone who is bedbound.

Is the same true of rationality? If it were the case that performance on tests of (say) calibration, sunk cost fallacy, and anchoring were all independent, then this would suggest 'rationality' is a circle our natural language draws around a grab-bag of skills or practices. The term could therefore mislead us into thinking it is a unified skill which we can 'generally' improve, when our efforts would be better directed at a finer level of granularity.

I think this is plausibly the case (or at least closer to the truth). The main evidence I have in mind is Stanovich's CART, whereby tests of individual sub-components we'd mark as fairly 'pure' rationality (e.g. base-rate neglect, framing, overconfidence; other parts of the CART look very IQ-testy, like syllogistic reasoning, on which more later) have only weak correlations with one another (e.g. ~0.2).
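To make the 'single construct' question concrete, here is a minimal sketch (simulated, illustrative data; the subtest names, weights, and the 0.2 target are my assumptions, not Stanovich's numbers) of what weak inter-correlations imply about a common factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated test-takers

# Each subtest score = one weak shared component + mostly subtest-specific
# skill. The weight 0.5 gives an expected pairwise correlation of
# 0.5**2 / (0.5**2 + 1) = 0.2, matching the 'weak correlations' above.
shared = rng.normal(size=n)
subtests = ["base_rate_neglect", "framing", "overconfidence"]
scores = np.column_stack([0.5 * shared + rng.normal(size=n) for _ in subtests])

corr = np.corrcoef(scores, rowvar=False)
print("pairwise correlations:\n", np.round(corr, 2))

# Variance share of the first principal component: ~1/3 would mean three
# independent skills, 1.0 a single unified skill. With r = 0.2 it lands
# near (1 + 2 * 0.2) / 3 = 0.47 -- closer to a grab-bag than a g-like
# general factor.
top_eig = np.linalg.eigvalsh(corr)[-1]  # eigenvalues sorted ascending
print("first-component variance share:", round(top_eig / len(subtests), 2))
```

If the printed correlations were instead ~0.7 (roughly how IQ subtests correlate with one another, per the next section), the first component would dominate and 'rationality' would look like a genuinely unified construct.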

Is rationality a skill, or a trait?

Perhaps the key assumption is that rationality (in the general sense) is something you can get stronger at or 'level up' in. Yet there is a facially plausible story that rationality (especially so-called 'epistemic' rationality) is something more like IQ: essentially a trait, where training can at best enhance performance on sub-components yet not transfer back to the broader construct. Briefly:

  • Overall measures of rationality (principally Stanovich's CART) correlate about 0.7 with IQ - not much worse than IQ test subtests correlate with one another or g.
  • Infamous challenges in transfer. People whose job relies on a particular 'rationality skill' (e.g. gamblers and calibration) show greater performance in this area but not, as I recall, transfer improvements to other areas. This improved performance is often not only isolated but also context-dependent: people may learn to avoid a particular cognitive bias in their professional lives, yet remain generally susceptible to it otherwise.
  • The general dearth of well-evidenced successes from training. (cf. the old TAM panel on this topic, where most were autumnal).
  • For superforecasters, the GJP finds some boost from training, but (as I understand it) the majority of their performance is attributed to selection, grouping, and aggregation.

It wouldn't necessarily be 'game over' for the 'Rationality project' even if this turns out to be the true story. Even if 'drilling vocab' doesn't really improve my g, I might value a larger vocabulary for its own sake. In a similar way, even if there's no transfer, some rationality skills might prove generally useful (and improvable), such that drilling them is worthwhile on its own terms.

The superforecasting point can be argued the other way: training can still get modest increases in performance on a composite test of epistemic rationality from people already exhibiting elite performance. But it does seem crucial to get a general sense of how well (and how broadly) training can be expected to work: else embarking on a program to 'improve rationality' may end up as ill-starred as the 'brain-training' games/apps fad of a few years ago.


Elo

Jan 13, 2019

11

The problem of interfaces between cultures.

Humans live in different cultures. A simple version of this is in how cultures greet one another: the Italian double kiss, the ultra-Orthodox Jewish non-touch, the hippie hug, the handshakes of various cultures, the Japanese bow/nod, and many more. It's possible to gravely offend someone from a different culture with the way you do introductions.

Now consider the same potential for offence across all of conversational culture.

I have the open question of how to successfully interface with other cultures.


Wei_Dai

Jan 14, 2019

7

One more, because one of my posts presented two open problems, and I only listed one of them above:

15. Our current theoretical foundations for rationality all assume a fully specified utility function (or the equivalent), or at least a probability distribution on utility functions (to express moral/value uncertainty). But to the extent that humans can be considered to have a utility function at all, it may best be viewed as a partial function that returns "unknown" for most of the input domain. Our current decision theories can't handle this, because they would end up trying to add "unknown" to a numerical value during expected utility computation. Forcing humans to come up with a utility function, or even a probability distribution on utility functions, in order to use decision theory seems highly unsafe, so we need an alternative.
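A minimal sketch of the failure mode (my own illustration; the outcomes, probabilities, and utilities are made up): the standard expected-utility sum breaks the moment the utility function is partial, i.e. returns "unknown" on some outcome.

```python
from typing import Optional

def partial_utility(outcome: str) -> Optional[float]:
    # A human-like utility function: defined on familiar outcomes,
    # "unknown" (None) everywhere else.
    known = {"status_quo": 0.0, "small_gain": 1.0}
    return known.get(outcome)

# A lottery that puts some probability on an outcome we have no value for.
lottery = {"small_gain": 0.9, "weird_novel_outcome": 0.1}

def naive_expected_utility(lottery: dict[str, float]) -> float:
    # Standard EU: sum over outcomes of p(o) * u(o). This is exactly where
    # "adding 'unknown' to a numerical value" blows up.
    return sum(p * partial_utility(o) for o, p in lottery.items())

try:
    naive_expected_utility(lottery)
except TypeError as err:
    print("EU computation fails on a partial utility function:", err)
```

Any patch (skip unknown outcomes, assign them a default utility, refuse to rank the lottery) is itself a substantive normative choice, which is the point: decision theory as currently formulated gives no guidance here.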


G Gordon Worley III

Jan 14, 2019

6

To me the biggest open problem is how to make existing wisdom more palatable to people who are drawn to the rationalist community. What I have in mind as an expression of this problem is the tension between the post/metarationalists and the, I don't know, hard core of rationalists. I don't think the two are in conflict: the former are trying to bring in things from outside the traditional sources historically liked by rationalists; the latter see themselves as defending rationality from being polluted by antirationalist stuff; and both are trying to make rationality better (the former via adding, the latter via protecting and refining). The result is conflict even though I think the missions are not in conflict, so an open problem is figuring out how to address that conflict.


Chris_Leong

Jan 14, 2019

6

Group rationality is a big one. It wouldn't surprise me if rationalists are less good on average at coordinating than other groups, because rationalists tend to be more individualistic and have their own opinions about what needs to be done. As an example, how long did it take for us to produce a new LW forum, despite half of the people here being programmers? And rationality still doesn't have its own version of CEA.


norswap

Jan 22, 2019

3

For applied rationality, my 10% improvement problem: https://www.lesswrong.com/posts/Aq8QSD3wb2epxuzEC/the-10-improvement-problem

Basically: how do you notice small (10% or less) improvements in areas that are hard to quantify? This is important because, after reaping the low-hanging fruit, stacking those small improvements is how you get ahead.
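One way to make the difficulty concrete (my framing, not from the linked post): treat it as a statistical power problem. A back-of-envelope two-sample calculation shows how many observations you need before a 10% shift is distinguishable from noise:

```python
import math

def samples_needed(effect: float, noise_sd: float) -> int:
    # Per-group n for a two-sample z-test at alpha = 0.05 (two-sided)
    # and 80% power: n = 2 * ((z_alpha + z_beta) * sd / effect)^2.
    z_alpha, z_beta = 1.96, 0.84
    return math.ceil(2 * ((z_alpha + z_beta) * noise_sd / effect) ** 2)

baseline = 10.0                 # e.g. "productive hours per week" (illustrative)
improvement = 0.10 * baseline   # the 10% effect we hope to detect
for noise in (0.5 * baseline, 1.0 * baseline):
    n = samples_needed(improvement, noise)
    print(f"noise sd = {noise:4.1f}: ~{n} observations per condition")
```

With noise equal to half the baseline you already need ~392 observations per condition; with noise equal to the baseline, ~1568. Hence small improvements in noisy, hard-to-quantify areas mostly go unnoticed.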


ChristianKl

Jan 13, 2019

2

Is there a way to integrate probability-based forecasting into the daily life of the average person that's clearly beneficial for them?

I don't think we are yet at the point where I can clearly say we are there. I think we would need new software to do this well.
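As a gesture at what such software might minimally do (a hypothetical sketch, not an existing app): record everyday probability forecasts and score them once they resolve, e.g. with the Brier score.

```python
class ForecastLog:
    """Track resolved everyday forecasts and report calibration quality."""

    def __init__(self) -> None:
        self.resolved: list[tuple[float, bool]] = []  # (stated probability, outcome)

    def record(self, probability: float, outcome: bool) -> None:
        # One resolved forecast: the probability assigned beforehand,
        # and whether the event actually happened.
        self.resolved.append((probability, outcome))

    def brier_score(self) -> float:
        # Mean squared error between stated probability and the 0/1 outcome.
        # 0.0 is perfect; 0.25 is what always answering "50%" scores.
        return sum((p - float(o)) ** 2 for p, o in self.resolved) / len(self.resolved)

log = ForecastLog()
log.record(0.8, True)    # "I'll finish the report today": said 80%, it happened
log.record(0.6, False)   # "the bus will be on time": said 60%, it wasn't
print(f"Brier score: {log.brier_score():.3f}")  # 0.200 here
```

The open question is less the scoring math than the product design: making the record/resolve loop cheap enough that an average person benefits from it day to day.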


quanticle

Jan 13, 2019

2

How about: "What is rationality?" and "Will rationality actually help you if you're not trying to design an AI?"

Don't get me wrong. I really like LessWrong. I've been fairly involved in the Seattle rationality community. Yet, all the same, I can't help but think that actual rationality hasn't really helped me all that much in my everyday life. I can point to very few things where I've used a Rationality Technique to make a decision, and none of those decisions were especially high-impact.

In my life, rationality has been a hobby. If I weren't reading the Sequences, I'd be arguing about geopolitics, or playing board games. So, to me, the most open question in rationality is, "Why should one bother? What special claim does rationality have over my time and attention that, say, Starcraft does not?"


Elo

Jan 13, 2019

-10

One open problem:

The problem of communication across agents, and generally what I call "miscommunication".