Last night, I did not file a patent to cure all forms of cancer. Even though it’s probably possible to figure out such a cure from basic physics and maybe a download of easily available biology research papers.

Can we then conclude that I don’t want cancer to be cured – or, alternatively, that I am pathologically modest and shy, and thus don’t want the money and fame that would accrue?

No. The correct and obvious answer is that I am boundedly rational. And though an unboundedly rational agent – and maybe a superintelligence – could figure out a cure for cancer from first principles, poor limited me certainly can’t.

Modelling bounded rationality is tricky, and it is often accomplished by artificially limiting the action set/action space. Many economic models use revealed preferences, and feature agents that are assumed to be fully rational, but who are restricted to choosing between a tiny set of possible goods or lotteries. They don’t have the options of developing new technologies, rousing the population to rebellion, going online and fishing around for functional substitutes, founding new political movements, begging, befriending people who already have the desired goods, setting up GoFundMe pages, and so on.
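The action-set-restriction move can be made concrete with a toy sketch (the goods and utilities here are invented for illustration): the agent inside the model is perfectly rational, but only over whatever tiny menu the modeller wrote down.

```python
# Toy "revealed preference" model: a fully rational agent,
# restricted to a hand-picked action set.
def best_action(actions, utility):
    """Pick the utility-maximising action -- rational,
    but only over the small set the modeller included."""
    return max(actions, key=utility)

# The modeller decides what is "possible"; everything else
# (new technologies, GoFundMe pages, ...) simply doesn't exist.
utility = {"apples": 3, "bananas": 5, "cherries": 2}.get
restricted = ["apples", "bananas", "cherries"]
print(best_action(restricted, utility))  # bananas
```

The bounded rationality lives entirely in the choice of `restricted`, not in the optimisation itself.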

There’s nothing wrong with modelling bounded rationality via action set restriction, as long as we’re aware of what we’re doing. In particular, we can’t naively conclude that because such a model fits observation, humans actually are fully rational agents. Though economists are right that humans are more rational than we might naively suppose, thinking of us as rational, or “mostly rational”, is a colossally erroneous way of thinking. In terms of achieving our goals, compared with a rational agent, we are barely above agents acting randomly.

Another problem with using small action sets is that they may lead us to think that an AI might be similarly restricted. That is unlikely to be the case; an intelligent robot walking around would certainly have access to actions that no human would, and possibly ones we couldn’t easily imagine.

Finally, though action set reduction can work well in toy models, it is wrong about the world and about humans. So as we make more and more sophisticated models, there will come a time when we have to discard it, and tackle head-on the difficult issue of defining bounded rationality properly. And it’s mainly for this last point I’m writing this post; we’ll never see the necessity of better ways of defining bounded rationality, unless we realise that modelling it via action set restriction is a) common, b) useful, and c) wrong.

9 comments

I think that I'm more optimistic about action set restriction than you are. In particular, I view the available action set as a fact about what actions the human is considering and choosing between, rather than a statement of what things are physically possible for the human to do. In this sense, action set restriction seems to me to be a vital part of the story of human bounded rationality, although clearly not the entire story (since we need to know why the action set is restricted in the way that it is).

I agree it's part of the story, but only a part. And real humans don't act as if there were a fixed set of n actions, all of which they could consider with equal ease. Sometimes humans have much smaller action sets, sometimes they can produce completely unexpected actions, and most of the time we have a pretty small set of obvious actions and a much larger set of potential actions we might be able to think up at the cost of some effort.

I guess I like the hierarchical planning-type view that our 'available action sets' can vary in time, and that one of them can be 'try to think of more possible actions'. Of course, not only do you need to specify the hierarchical structure here, you also need to model the dynamics of action discovery, which is a pretty daunting task.
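The time-varying action set idea above can be sketched roughly as follows. Everything here is a made-up illustration: the "obvious" versus "latent" split, the fixed per-discovery effort cost, and the deliberation budget are all stand-ins for the much harder real dynamics of action discovery.

```python
def choose(obvious, latent, utility, effort_cost, budget):
    """Hypothetical sketch: the agent starts with its obvious
    actions; each unit of deliberation budget spent on 'try to
    think of more possible actions' surfaces one latent action,
    at a fixed effort cost."""
    available = list(obvious)
    spent = 0
    while budget > 0 and latent:
        available.append(latent.pop(0))  # discover a latent action
        budget -= 1
        spent += effort_cost
    best = max(available, key=utility)
    return best, utility(best) - spent

# Illustrative utilities (invented):
utility = {"walk": 1, "bus": 2, "invent_teleporter": 10}.get
# With no deliberation, the agent takes the bus; with one round
# of thinking, it discovers (and pays for) the better option.
print(choose(["walk", "bus"], ["invent_teleporter"], utility, 3, budget=1))
```

The hard open problem is exactly what the comment says: specifying the hierarchical structure and the discovery dynamics, rather than hand-picking them as above.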

What could be a better measure of bounded rationality? The Kolmogorov complexity of the solution? Or the number of computations made to reach the answer?

If we want to apply it to humans, we'd need something much more complicated than that: something that uses some measure of how complex actions seem to humans, and takes into account how and when we search for alternate solutions. There's a reason most models don't use bounded rationality; it ain't simple.
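The "number of computations" measure suggested above can at least be sketched: cap how many candidate actions the agent gets to evaluate. This is a deliberately crude illustration (the action names and ordering are invented), and it already shows the complication the reply points at, since the answer depends heavily on the order in which actions come to mind.

```python
def budgeted_argmax(actions, utility, budget):
    """Bounded rationality as a raw computation budget: the agent
    evaluates only the first `budget` actions it happens to
    consider, then picks the best of those."""
    considered = actions[:budget]
    return max(considered, key=utility)

# Illustrative utilities (invented):
utility = {"walk": 1, "bus": 2, "invent_teleporter": 10}.get
# A small budget never reaches the best action at the end
# of the consideration order.
print(budgeted_argmax(["walk", "bus", "invent_teleporter"], utility, 2))
```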

A good way, I would almost say the right way, to do bounded rationality is information-theoretic bounded rationality. There is a post about it in the works...
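For readers unfamiliar with the term: in one standard formulation of information-theoretic bounded rationality (in the style of Ortega and Braun), the agent softens the rational argmax into a softmax, trading utility against the informational cost of deviating from a prior policy. The utilities and prior below are invented for illustration.

```python
import math

def bounded_policy(utilities, prior, beta):
    """One standard information-theoretic formulation:
    p(a) proportional to prior(a) * exp(beta * U(a)).
    beta -> 0 recovers the prior (no deliberation at all);
    beta -> infinity recovers the fully rational argmax."""
    weights = [p * math.exp(beta * u) for p, u in zip(prior, utilities)]
    z = sum(weights)
    return [w / z for w in weights]

# Two actions, uniform prior: with beta = 0 the agent just acts
# from its prior; with large beta it concentrates on the best action.
print(bounded_policy([0.0, 1.0], [0.5, 0.5], 0.0))
print(bounded_policy([0.0, 1.0], [0.5, 0.5], 100.0))
```

The single parameter beta then serves as the "measure" of bounded rationality asked about earlier in the thread: how much optimisation pressure the agent can afford.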