There's a lot of arguing, of course, about whether humans are rational, but this often conflates two things: there's the "Von Neumann-Morgenstern utility function maximization" definition of "rational", and there's a hypothetical "rational" that a human could satisfy under constraints much more complicated than the classical approach allows, more in the direction of prospect theory or Predictive Coding.

I think I regard the second definition as so poorly understood and defined that it isn't yet worth using in most conversation. It seems challenging, to say the least, to ask whether humans are rational according to a definition we clearly don't even have yet, let alone expect others to agree with.

As such, I think the word "rational" should typically refer to the former. By that definition, humans not only aren't rational, but shouldn't be rational, since they face limitations that "rational" agents wouldn't have.

In this framing, "rational" really refers to a predominantly (I believe) 20th-century model of human and organizational behavior; it exists in the map, not the territory.

If one were to discuss how rational a person is, one would be discussing how well that person fits this specific model, not necessarily how optimally they are acting.

On the "Rationalist" community

Rationality could still be useful to study.

While I believe "rational" should refer to a model rather than to agent ideals, that doesn't mean studying the model isn't a useful way to understand what decisions we should be making. Rational agents are a simple model, but that simplicity brings many of the benefits of being a model: it's relatively easy to use as a fundamental building block for further purposes.

At the same time, LessWrong is arguably about more than rationality when defined in this sense.

Some of LessWrong's writing details problems and limitations of the classical rational models, so it arguably fits outside those models better than inside them. I see some of the posts as being about things that would be beneficial for a much better hypothetical "rationality++" model, even though they don't necessarily make sense within a classical rationality model.

Pattern replied:

> Or it could be an intuitive usage and mean "(more) optimal". "Why don't more people do [thing that will improve their health]?"

I like that question.

I think that if people tried to define "optimal" in a specific way, they would find that it requires a model of human behavior; the common one academics fall back on is Von Neumann-Morgenstern utility function maximization.
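To make that fallback concrete, here's a minimal sketch of the Von Neumann-Morgenstern picture: the agent assigns a utility to each outcome and picks the action (lottery over outcomes) with the highest expected utility. The lotteries, outcomes, and utility numbers below are made-up illustrations, not anything from the post.

```python
def expected_utility(lottery, utility):
    """Sum of probability-weighted utilities over a lottery's outcomes."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

def vnm_choice(lotteries, utility):
    """Pick the action whose lottery maximizes expected utility."""
    return max(lotteries, key=lambda a: expected_utility(lotteries[a], utility))

# Hypothetical health example: each action is a lottery over outcomes.
utility = {"healthy": 1.0, "sick": 0.0}
lotteries = {
    "exercise":  {"healthy": 0.9, "sick": 0.1},
    "sedentary": {"healthy": 0.6, "sick": 0.4},
}

print(vnm_choice(lotteries, utility))  # the expected-utility-maximizing action
```

Under this toy model, "is this person rational?" becomes "do their choices match `vnm_choice` for some coherent utility function?", which is exactly the kind of question that gets complicated once real physical and mental constraints enter.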

I think it's quite possible that when we have better models of human behavior, we'll better recognize that in cases where people seem to be doing silly things with their health, they're actually being somewhat optimal given a large set of physical and mental constraints.

*ozziegooen's Shortform*, by ozziegooen, 31st Aug 2019 (127 comments)