Can anyone tell me why it is that if I use my rationality exclusively to improve my conception of rationality, I fall into an infinite recursion? EY says this in The Twelve Virtues and in Something to Protect, but I don't know what his argument is. He goes so far as to say that you must subordinate rationality to a higher value.

I understand that by committing yourself to your rationality you lose out on the chance to notice if your conception of rationality is wrong. But what if I use the reliability of win that a given conception of rationality offers me as the only guide to how correct that conception is? I can test reliability of win by taking a bunch of different problems with known answers that I haven't seen, solving them using my current conception of rationality and solving them using the alternative conception of rationality I want to test, then checking the answers I arrived at with each conception against the right answers. I could also take a bunch of unsolved problems, attack them from both conceptions of rationality, and see which one gets me the most solutions. If the set of problems I solve with one isn't a subset of the set I solve with the other, then I'll see if I can somehow take the union of the two conceptions. And, though I'm still not sure enough about this method to use it, I suppose I could also figure out the relative reliability of two conceptions by making general arguments about the structures of those conceptions; if one conception is "do that which the great teacher says" and the other is "do that which has maximal expected utility", I would probably not have to solve problems using both conceptions to see which one more reliably leads to win.
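
To make that test concrete, here is a minimal sketch of the kind of comparison I have in mind, assuming each conception can be wrapped up as a function from a problem statement to an answer. The Problem type, the Conception alias, and the compare function are all hypothetical scaffolding I made up for illustration, not anything from Yudkowsky's posts.

```python
# Minimal sketch of the comparison described above. Everything here is
# hypothetical scaffolding: 'Conception' stands in for a decision procedure,
# and 'problems' is a benchmark of problems whose answers I haven't seen.

from typing import Callable, NamedTuple

class Problem(NamedTuple):
    statement: str
    known_answer: str  # checked only after I commit to my answers

Conception = Callable[[str], str]  # maps a problem statement to my answer

def solved_set(conception: Conception, problems: list[Problem]) -> set[int]:
    """Indices of the problems this conception gets right."""
    return {
        i for i, p in enumerate(problems)
        if conception(p.statement) == p.known_answer
    }

def compare(current: Conception, alternative: Conception,
            problems: list[Problem]) -> str:
    a, b = solved_set(current, problems), solved_set(alternative, problems)
    if b >= a and len(b) > len(a):
        return "switch"             # the alternative solves everything mine does, and more
    if a >= b:
        return "keep"               # mine does at least as well
    return "try to take the union"  # each solves problems the other misses
```

The "union" branch is the interesting one: it flags the case where each conception solves problems the other misses, which is exactly when I'd try to merge the two.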

And what if my goal is to become as epistemically rational as possible? Then I would just be looking for the conception of rationality that leads to truth most reliably, testing truth by predictive power.
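
To make "testing truth by predictive power" similarly concrete, here is a rough sketch that scores each conception's probabilistic predictions against what actually happened. The Brier score below is just one standard proper scoring rule I picked for illustration, and all the numbers are made up.

```python
# Rough sketch: compare two conceptions by how well their probabilistic
# predictions match what actually happened (lower Brier score = better).
# The prediction lists and outcomes are hypothetical inputs.

def brier_score(predictions: list[float], outcomes: list[bool]) -> float:
    """Mean squared error between stated probabilities and actual outcomes."""
    return sum((p - float(o)) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

# e.g. conception A says 0.9, 0.2, 0.7 on three predictions; B says 0.6, 0.4, 0.8
outcomes = [True, False, True]
score_a = brier_score([0.9, 0.2, 0.7], outcomes)  # about 0.047
score_b = brier_score([0.6, 0.4, 0.8], outcomes)  # 0.12
# The conception with the lower score has been the more reliable guide to truth so far.
```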

And if being rational for its own sake just doesn't seem valuable enough to motivate me to do all the hard work it requires, let's assume that I really, really care about picking the best conception of rationality I know of, much more than I care about my own life.

It seems to me that if this is how I do rationality for its own sake (always looking for the conception of goal-oriented rationality which leads to win most reliably, and the conception of epistemic rationality which leads to truth most reliably), then I'll always switch to any conception I find that is less mistaken than mine, and stick with mine when presented with a conception that is more mistaken, provided I am careful enough about my testing. And if that means I practice rationality for its own sake, so what? I practice music for its own sake too. I don't think that's the only or best reason to pursue rationality; certainly other good and common reasons are wanting to figure something out or to win. And when I do eventually find something I want to win or figure out that no one else has (no shortage of those), then if I can't, I'll know that my current conception isn't good enough. I'll be able to correct my conception by winning or figuring it out, and then thinking about what was missing from my view of rationality that wouldn't let me do it before. But that wouldn't mean that I care more about winning or figuring out some special fact than I do about being as rational as possible; it would just mean that I consider my ability to solve problems a judge of my rationality.

I don't understand what I lose out on if I pursue the Art for its own sake in the way described above. If you know of something I would lose out on, or if you know Yudkowsky's original argument showing the infinite recursion when you motivate yourself to be rational by your love of rationality, then please comment and help me out. Thanks ahead of time.

It's too easy to fool yourself with a comfortable mistake unless the penalties for failure are real.

It seems to me that the problem with being interested in rationality for its own sake isn't infinite recursion so much as the fact that it's less grounded in real-world problems and goals.

Without something outside of rationality, people seem less likely to delve into new fields and acquire new information. Without something to protect, you can advance as slowly as you want.

When I was interested in rationality qua rationality I was mostly interested in it because I felt like it. Since getting some actual goals, I've progressed much faster. Most people don't push their boundaries or strive to become stronger because they feel like it, because most people don't feel like it.

When there's something specific you're trying to accomplish, you're able to notice that your current best isn't good enough, or that your current efforts won't make it. And you're able to think clearly about what you need to do to get there, which gives you an actual track to follow.

It seems that becoming more rational is massively helpful for most goals, and the pull of a goal helps you work harder.

Well, sure, if there's something specific I'm trying to accomplish, my degree of success helps me rate my rationality, but if my goal is to be as rational as I can, then I can still use specific tasks to help me gauge how rational I am. If my goal is to be as rational as possible, without anything else to protect, there is still lots of time pressure on me, provided I don't think I am as rational as possible already. The more inefficiently I use my time, the poorer my final conception of rationality will be when I die; or, on the off chance that I become as rational as I ever will be, it'll take longer to get there than if I had used my time more efficiently.

I agree that solving specific tasks, besides becoming rational, is necessary for becoming more rational. I just don't see why being rational must be instrumental to solving the task successfully, rather than my success in solving the task being a guide to becoming more rational. Again, let's assume I care a lot about being as rational as possible, or about understanding rationality as best I can.

I can test reliability of win by taking a bunch of different problems with known answers that I haven't seen, solving them using my current conception of rationality and solving them using the alternative conception of rationality I want to test, then checking the answers I arrived at with each conception against the right answers.

You have just limited yourself to only solving problems others have solved before (and trusting their answers). Furthermore, by restricting yourself to only the category of problems with known answers, you will find that you develop heuristics that don't apply to general problems. In particular, the most effective way to solve a problem with a known answer is usually to track down that answer.

Edit: Also, you're restricting yourself to the class of problems with well-defined 'right answers', and that's a very restricted class of problems.

See the end of the second-to-last paragraph; I added that just before I read your comment, sorry about that.

But regardless: I can make the problems I win very diverse, even if they are all solved problems. I think that takes care of the non-general heuristic problem. And of course, there's a reason I specify that I'm not allowed to know the answer.

And of course, there's a reason I specify that I'm not allowed to know the answer.

What about finding someone who does and asking them for hints?

But regardless: I can make the problems I win very diverse, even if they are all solved problems.

The probability distribution over the types of problems you encounter will still be different. For example, if a problem has a known solution, it can be solved using a known method, so a good strategy is to try various known methods until one works. Whereas if a problem is unsolved, especially if it has been unsolved for a long time, that's a sign that known methods won't work on it, so you'll need to develop new methods.

A useful analogy, courtesy of the martial art of rationality, is the difference between a 'friendly fight' and a real fight, i.e., one where each opponent is trying to seriously injure or kill the other. Just as one must practice in friendly fights before one is ready for a real fight, one must practice on problems with known solutions before one is ready to attack unsolved problems. However, without real fights to keep people honest, you get schools proliferating without evidence.

Great use of LW metaphors.

Yeah, I should try to have as few hints as possible from the science that's already solved it. But even then, I won't find new methods by working on old problems nearly as often as by working on new problems. So you've shown that you'll improve your conception of rationality faster by solving both solved and unsolved problems, but I'm still floating. I'll edit the second paragraph to take your point into account, thanks.

There seems to be some confusion here between "you must have a goal" and "you must have a single over-arching goal". Rationality is a bunch of strategies for thinking effectively; in order to do that, you must have something to think about. That's almost tautological. If you only have a series of easy goals with time horizons less than a day, you can still apply rationality to achieving them. It's just that you'll exercise and develop your rationality more if you have goals that are challenging; and in particular, it helps to have goals that are broad enough that you can pause at any time, ask which goal the thing you're presently doing serves, and know that you've made a mistake if you can't answer.

I don't think he's wrong: your "higher value" in this instance is being more rational. You assign positive utility to being a more rational person, not because of its consequences (to which you also presumably assign positive utility), but because you want to be more rational. That's not wrong; it's just another "higher value".

Logic is a method of moving from premises to conclusions. It doesn't necessarily say what those premises should be, and obviously people with different premises will reach different conclusions.

By this logic, is there anyone who doesn't have a higher value?

I would argue that no, there isn't.

Except for maybe some people with brain injury in the "right" areas. Their behaviour will probably be influenced by extreme apathy.

Yeah, it seems like most people who want to pursue rationality are running into infinite recursion; it seems like they're doing it because they feel like it.

Upvote! Super helpful first paragraph. You rock, thank you. Being more rational is a higher goal for me than being rational. I subordinate my desire to be rational, as I currently conceive it, to my desire to become more rational.

I'm a bit confused about the second paragraph. I agree with it; I just don't know why you bring it up.

Edit: Why the downvote? I'm curious.