Introduction

I claim that this series of posts will be a decent training regime for applied rationality. Accordingly, I think that one of the first steps is to tell you what I even think applied rationality is. However, since "applied rationality" is a little hard to describe succinctly, I will describe it through example and metaphor. To make the process of triangulation easier, I will provide three such descriptions.

Disclaimer: I adapted the first two takes from CFAR; the third is my own.

Take 1

In this post-enlightenment era, we have this thing called science. Loosely speaking, science is a set of norms that allow for the gradual development of true and verifiable bodies of knowledge. Some examples of these norms include: testing hypotheses, peer review, and discarding theories that have large amounts of evidence piled against them. A short description of these norms is that whenever someone has a question about how the world works, the proper answer is “why don’t we just check.”

In my view, however, science has (at least) two major flaws.

  1. Science is slow. The norms of science are sufficient to eject false knowledge from scientific canon eventually. They are most certainly not sufficient to eject false knowledge quickly. As the replication crisis demonstrates, scientific canon has contained, for many years, large swaths of false knowledge. Theories that are false are discarded in the limit, but they are not discarded as soon as a competing theory has sufficient evidence. Sometimes theories have the same consequences, so “just checking” can’t distinguish between them.

  2. Science is expensive. There are many resources that science must consume to generate knowledge. Foremost among these resources are time and money, but it also sometimes requires a certain amount of status. In general, science can only make broad prescriptions about how any given individual engages with the world. Of course, you can do science on yourself, but that also costs a fair amount of resources. Sometimes “just checking” is infeasible to do at such a small scale.

I claim that applied rationality does a pretty good job of filling both of these gaps. If you're practiced at the art of arriving at true beliefs, I claim that you can change your mind in as little as 5 minutes, upper-bounded by perhaps a single year. On the other hand, science routinely takes more than a single year to change its mind about things like "power poses".

Similarly, when making decisions on a day-to-day basis, it is difficult to do proper experiments to determine which decisions to make. Science is a powerful and expensive tool -- one that applied rationality can tell you when to use. When you find science insufficient for the task, applied rationality can help you make good decisions using information you already have.

Compressed into a single sentence, applied rationality fills the gaps of science in the pursuit of truth.

Take 2

Imagine a monarch who rules an entire kingdom. This monarch is tasked with making decisions that benefit the kingdom. This monarch is not stupid, so they surround themself with many expert advisors. For example, the monarch might have economic, political, environmental, interpersonal, and legal advisors. However, the advisors are not properly constrained, so the monarch finds that they do not quite know the bounds of their own advice.

Sometimes, it is clear which advisor the monarch should listen to: when deciding how much money to print, the monarch should probably listen primarily to the economic advisors. However, the printing of money is bound by law, so the legal advisors should probably be consulted. Printing money is also a political action, so the political advisors should also be consulted. Additionally, any movement in the economy has environmental ramifications, so the environmental advisors might also be consulted.

As we see, even in a seemingly clear-cut case, there are reasons for consulting all of the advisors. Now imagine a decision in a domain for which the monarch has no explicit advisors. In this scenario, it seems to me like the vast majority of the work in making a good decision is to decide which advisors to listen to.

Obviously, this is all a metaphor. To make it explicitly clear: you are the monarch. The kingdom is everything that you value. The advisors are all the sources of information available to you. These sources might include actual advisors, like your friends or people you hire, but they also include things like books, the internet, your explicit reasoning, your intuition, your emotional reactions, your reflective equilibrium, etc. Crucially, you have situationally bad advisors. When there is a tiger running at you at full speed, it is vital that you don’t consult your explicit reasoning advisor.

I claim that, as with the imaginary monarch, most of the work that goes into making good decisions is choosing which sources of information to listen to. This problem is complicated by the fact that some sources of information are easier to query than others, but it is surmountable.

Compressed into a single sentence, applied rationality is the skill of being able to select the proper sources of information during decision-making.
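
To make the "easier to query" point concrete, here is a toy sketch (my own illustration, not a CFAR method; the advisor names, costs, and reliability numbers are all invented) of consulting only the advisors whose expected benefit justifies their query cost within the time available:

```python
# A toy model of advisor selection; all names and numbers are invented.
# Each advisor has a query cost (time, attention) and a rough
# reliability in the domain of the current decision.
advisors = {
    "explicit_reasoning": {"cost": 30.0, "reliability": 0.9},
    "intuition":          {"cost": 0.1,  "reliability": 0.6},
    "friend":             {"cost": 5.0,  "reliability": 0.7},
    "internet":           {"cost": 10.0, "reliability": 0.5},
}

def worth_consulting(name, stakes, budget):
    """Consult an advisor only if it fits the time budget and its
    expected benefit (stakes scaled by reliability) exceeds its cost."""
    a = advisors[name]
    return a["cost"] <= budget and stakes * a["reliability"] > a["cost"]

# Tiger charging: high stakes, almost no budget -> only intuition qualifies.
print([n for n in advisors if worth_consulting(n, stakes=100.0, budget=0.5)])

# Choosing a bank: high stakes, generous budget -> consult everyone.
print([n for n in advisors if worth_consulting(n, stakes=100.0, budget=50.0)])
```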

Take 3

Imagine that you’re a human being with values. You are in some environment and have certain actions available to you that manipulate the environment to better satisfy your values. Clearly, you want to take the action that makes the world score higher according to your values. You’ve heard of this thing called “rationality” and decide that the most “rational” policy is to take the action that maximizes the expectation of value.

You’re faced with a decision as to what to eat for dinner. You look at all the possible options, figure out a distribution for how much value you’d get from each, and take the expectation of value under each distribution. By the time you have finished, the restaurant has closed and you’re still hungry. You decide that “rationality” is bunk and you should go with your intuition in the future.

This example might seem a bit contrived (and it is), but the general principle still holds. In this scenario (as in real life), you’re a human being with values. This means that you are unfortunately bounded along nearly all dimensions (for now). In particular, you have bounded memory, a bounded action space, and bounded compute. Any “rationality” that doesn’t have a way of dealing with boundedness doesn’t seem like a very good “rationality.” This type of “rationality” is not what I mean by applied rationality.
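
For concreteness, here is a minimal sketch of the contrast (my own toy illustration; the menu, values, threshold, and deadline are all invented). The full maximizer scores everything; the bounded chooser satisfices, which is one standard answer to boundedness, not necessarily the only one:

```python
import random
import time

# A toy menu: 50 dishes, each with 1000 sampled "value" draws (invented).
random.seed(0)
menu = {f"dish_{i}": [random.gauss(5, 2) for _ in range(1000)] for i in range(50)}

def full_maximizer(menu):
    """Score every option by its full expected value, then take the best.
    Fine here, but infeasible when the options or samples explode."""
    return max(menu, key=lambda d: sum(menu[d]) / len(menu[d]))

def bounded_chooser(menu, threshold=5.5, deadline=0.001):
    """Satisfice: take the first option whose cheap estimate clears a
    threshold, falling back to the best seen so far when time runs out."""
    start, best = time.monotonic(), None
    for dish, samples in menu.items():
        estimate = sum(samples[:10]) / 10   # cheap estimate, not the full expectation
        if best is None or estimate > best[1]:
            best = (dish, estimate)
        if estimate >= threshold or time.monotonic() - start > deadline:
            break
    return best[0]

print(full_maximizer(menu), bounded_chooser(menu))
```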

Julia Galef once defined eudaimonia as “happiness minus whatever philosophical objections that you have to happiness.” In a similar vein, I define applied rationality as “rationality minus whatever reasons you have for why rationality will never work.”

If you think that being “rational” doesn’t work because you have to consider emotional reactions to things, then what I mean by applied rationality is a rationality that takes emotional reactions into account.

If you think that being “rational” doesn’t work because humans need to take breaks sometimes and do what they want, then what I mean by applied rationality is a rationality that allows you to take breaks and do whatever you want.

If you think that being “rational” doesn’t work because sometimes you need to use intuition, then what I mean by applied rationality is a rationality that allows you to use intuition.

I think you get the point by now.

Compressed into a single sentence, applied rationality is a system of heuristics/techniques/tricks/tools that helps you better satisfy your values, with no particular restriction on what the heuristics/techniques/tricks/tools are allowed to be.

Exercise

I have described to you three perspectives on what applied rationality is. An exercise for the engaged reader is to find a friend and explain to them what applied rationality is to you. I encourage you to make use of metaphor and analogy, which are, in my experience, the strongest tools for quality explanation. I also want to hear what your explanation is, so I would enjoy it if you commented.

Comments

I think of applied rationality pretty narrowly, as the skill of applying reasoning norms that maximize returns (those norms happening to have the standard name "rationality"). Of course there's a lot to that, but I also think this framing is a poor one to train all the skills required to "win". To use a metaphor, as requested, it's like the skill of getting really good at reading a map to find optimal paths between points: your life will be better for it, but it also doesn't teach you everything, like how to figure out where you are on the map now or where you might want to go.

Applied rationality: Methods for fostering quick, efficient, and well-informed decision-making toward a goal.

Winter is nearly here and you need a door for your house to keep out the cold. In your workspace there is a large block of an unknown type of wood. Using only what you can ascertain about it from your senses and experience, you determine which tool to use for each circumstance you uncover as you reduce the block into the best door you can make given the time, tools, and knowledge available.

Edit: thanks for the post. It was very helpful.

I am not sure I understand exactly what you are aiming for with your take 3: is applied rationality a rationality that I don't have to follow when I want to use emotions/intuitions/breaks, or is it a rationality that considers these options when making decisions? The former seems too permissive, in that there is never a place where I have to use rationality, while the latter might fall into the same issues as the restaurant example, by pondering too much whether I should use intuition to choose my meal.

That being said, I like the different perspectives offered by the takes.

This comment consists solely of a different take* on the material of the OP, and contains no errors or corrections.

[*Difference not guaranteed, all footnotes are embedded, this comment is very long, 'future additions, warnings and alterations to attributes such as epistemic status may or may not occur', all...]


Contents:

Take 1

Take 2

Take 3

(The response to (parts of) each take is in three parts: a, b, and c. [This is the best part, so stop after there if you're bored.])

Exercise

Questions that may overlap with 'How to build an exo-brain?'

[I am not answering these questions. Don't get your hopes down, bury them in the Himalayas. (This is an idiom variant, literal burial of physical objects in the Himalayas may be illegal.)]


Take 1:

a.

Sometimes “just checking” is infeasible to do at such a small scale.

Or what is feasible at small scale isn't particularly usable, though large scale coordination could enable cheap experiments.

b.

When you find science insufficient for the task, applied rationality can help you make good decisions using information you already have.

I feel like this is re-defining what science is, to not include things that seem like they fall under it.

c.

Compressed into a single sentence, applied rationality fills the gaps of science in the pursuit of truth.

I might have called science [a] pursuit of truth, though distinguishing between different implementations/manifestations of it may be useful, like a group pursuing knowledge versus an individual. Though if they're using similar/compatible formats, then possibly:

  • the individual can apply the current knowledge from the group, and the group's experiments
  • a bunch of individuals performing experiments and publishing can be the same as a group, only missing aggregation
  • an individual can merge data/knowledge from a group with their own (similar to how, with the right licence, open source programs may be borrowed from and improved upon by companies internally, without the improvements returning to the 'open' pool)

Take 2:

a.

Crucially, you have situationally bad advisors. When there is a tiger running at you at full speed, it is vital that you don’t consult your explicit reasoning advisor.

Crucially, you have 'slow' advisors, who can't be consulted quickly. (And presumably fast advisors as well.)

  • While you may remember part of a book, or a skill you've gained, things/skills you don't remember can't be used with speed, even if you know where to find them given time
  • While it may be quick to determine whether a car is going to hit you while crossing a street, it may take longer to determine whether or not such a collision would kill you - longer than it would take the car to collide, or not collide, with you.

b.

I claim that, as with the imaginary monarch, most of the work that goes into making good decisions is choosing which sources of information to listen to. This problem is complicated by the fact that some sources of information are easier to query than others, but it is surmountable.

Most of the work that goes into making good decisions is choosing how long to make decisions, and:

  • when to revisit them*
  • which advisors to consult in that time

Managing the council. This can include:

  • Managing disagreements between council members
  • Changing the composition - firing councilors, hiring new ones (and seeing existing members grow, etc.)

*Including how long to take to make a decision. A problem which takes less time to resolve (to the desired degree) than expected is no issue, but a problem that takes longer may require revisiting how long should be spent on it (if it is important), or revisiting how important it is.

c.

Compressed into a single sentence, applied rationality is the skill of being able to select the proper sources of information during decision-making.

As phrased this addresses 2.b (above), though I'd stress both the short term and the long term.


Take 3:

You look at all the possible options

There are a lot of options. This is why 2.b focused on time. Unfortunately the phrase "Optimal stopping" already seems to be taken, and refers to a very different (apparent) framing of the hiring problem. Even if you have all information on all applicants, you have to decide who to hire, and hire them before someone else does! (Which is what motivates deciding immediately after getting an applicant, in the more common framing. A hybrid approach might be better - have a favorite food, look at a few options, or create a record so results aren't a waste.)
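
For reference, here is a minimal sketch of the classic "look then leap" stopping rule the comment gestures at (observe the first n/e applicants, then take the next one who beats them all); the applicant scores are invented:

```python
import math
import random

def secretary_choice(scores):
    """Classic secretary-problem rule: observe the first n/e applicants,
    then take the first later applicant who beats all of them."""
    cutoff = int(len(scores) / math.e)            # observation-only phase
    benchmark = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > benchmark:
            return score                          # leap at the first improvement
    return scores[-1]                             # forced to take the last one

random.seed(1)
applicants = [random.random() for _ in range(100)]
print(secretary_choice(applicants), max(applicants))
```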

You decide that “rationality” is bunk and you should go with your intuition in the future.
This example might seem a bit contrived (and it is), but the general principle still holds.

So someone samples (tries) a thing once to determine if a method is good, but in applying the method doesn't sample at all. Perhaps extracting general methods from existing advisors/across old and new potential advisors is the way to go.

If you think that being [X] doesn’t work because [Y], then [try Z : X taking Y into account].
Compressed into a single sentence, applied rationality is a system of heuristics/techniques/tricks/tools that helps you better satisfy your values, with no particular restriction on what the heuristics/techniques/tricks/tools are allowed to be.

That is very different from how I thought this was going to go. Try anything*, see what works, while keeping constraints in mind. This seems like good advice (though long term and short term might be important to 'balance'). The continuity assumption is interesting:

Don't consider points (the system as it is), but adapt it to your needs/etc**.


*The approach from the example/story seems to revolve around having a council and trying out adding one new councilor at a time.

**The amount of time till the restaurant closes may be less than the time till you'll be painfully hungry.


Exercise:

An exercise for the engaged reader is to find a friend and explain to them what applied rationality is to you.

I didn't see this coming. I do see writing as something to practice, and examining others' ideas "critically" is a start on that.

But I think what I've written above is a start for explaining what it means to me. Beyond that...


I might have a better explanation at the end of this "month", these 30 days or so.


This topic also relates to a number of things:

A) A blog/book that's being written about "meta-rationality"(/the practice/s of rationality/science (and studying it)): https://meaningness.com/eggplant

B) Questions that may overlap with 'How to build an exo-brain?'

  • How to store information (paper is one answer. But what works best?)
  • How to process information*
  • How to organize information (ontology)
  • How to use information (like finding new applications)

*a) You learn that not all organisms are mortal. You learn that sharks are mortal.

How do you ensure that facts like these that are related to each other, are tracked with/linked to each other?

b) You "know" that everything is/sharks are mortal. Someone says "sharks are immortal".

How do you ensure that contradictions are noticed, rather than both held, and how do you resolve them?

(Example based on one from the replacing guilt series/sequence, that illustrated a more general, and useful, point.)
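
One toy answer to both questions, sketched below (my own illustration; the representation and the FactStore API are invented): index facts by subject so related ones stay linked, and check new facts against stored ones so a contradiction is surfaced instead of silently held alongside the old belief.

```python
from collections import defaultdict

class FactStore:
    """A toy fact store: related facts are linked by subject, and
    contradictions raise instead of being held silently."""
    def __init__(self):
        self.facts = {}                      # (subject, predicate) -> value
        self.by_subject = defaultdict(set)   # subject -> known predicates

    def learn(self, subject, predicate, value):
        key = (subject, predicate)
        if key in self.facts and self.facts[key] != value:
            raise ValueError(f"conflict: {key} was {self.facts[key]}, now {value}")
        self.facts[key] = value
        self.by_subject[subject].add(predicate)

    def related(self, subject):
        """Everything linked to a subject, so updates are tracked together."""
        return {p: self.facts[(subject, p)] for p in self.by_subject[subject]}

kb = FactStore()
kb.learn("sharks", "mortal", True)
print(kb.related("sharks"))                  # {'mortal': True}
try:
    kb.learn("sharks", "mortal", False)      # someone says "sharks are immortal"
except ValueError as err:
    print(err)                               # the contradiction is noticed
```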


Thinking about the above, except with "information" replaced with other words like "questions" and "skills":

Q:

  • Storing questions may be similar to storing information.
  • But while information may keep, questions are clearly incomplete. (They're looking for answers.)
  • Overlaps with above.
  • Which questions are important, and how can one ensure that the answers survive?*

Skills:

  • Practice (and growth)
  • It's not clear that this is a thing, or if it is, how it works. (See posts on Unlocking the Emotional Brain.)
  • Seems like a question about neuroscience, or 'how can you 'store' a skill you have now, so it's easier to re-learn/get back to where you are now (on some part of it, or the whole)?'*
  • This seems more applicable to skills you don't have, and deciding which new ones to acquire/focus on.

*This question is also important for after one's lifetime. [Both in relation to other people "after (your) death", and possible future de-cryo scenarios.]

I find the act of the detective to be most suitable to my frame, in the sense of not clinging to any single fact, entertaining all the claims, and seeking the one rationale that best satisfies the observations - along the lines of the movie Knives Out (2019).

There are lots of possible goals. Some people are good at achieving some goals. Performance on most goals that are interesting to me is dependent on the decision making ability of the player (e.g. winning at poker vs being tall).

There is some common thread between being an excellent poker player, a supportive friend and a fantastic cook. Even if the inner decision-making workings seem very different in each one, I think that some people have a mindset that lets them find the appropriate decision-making machinery for each task.

To use a metaphor, whilst some people who can play the piano beautifully would not have become beautiful violin players if they had chosen the violin instead of the piano, most people that play the piano and violin beautifully are just good at practising. I think that most instrumentalists could have become good instrumentalists at most other instruments because they are good at practising (although of course, some people do find success in other ways).

Practising is to learning instruments as applied rationality is to achieving goals.

Okay, so my take on this:

Applied rationality is the conscious method of selecting the best way for reaching the desired goal, including the use of a different method in cases where other methods are superior.

E.g.

  • An AI controlling a spaceship will generally follow the best route it rationally calculates, but in a new, complex zone an otherwise inferior human pilot (or neural network) that is already well trained in that domain will do better, so the AI will rationally transfer control
  • It makes sense to calculate the trajectory of a ballistic missile before launching, but don't try to do the same when playing basketball (a toy version of that calculation is sketched below)
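
For scale, a toy version of the trajectory calculation that last bullet has in mind (idealized projectile range, drag ignored; the numbers are invented):

```python
import math

def ballistic_range(speed_m_s, angle_deg, g=9.81):
    """Idealized projectile range: R = v^2 * sin(2*theta) / g."""
    return speed_m_s ** 2 * math.sin(math.radians(2 * angle_deg)) / g

print(f"{ballistic_range(300.0, 45.0):.0f} m")  # about 9174 m for a 300 m/s launch
```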