Hm... I'm not sure you're interpreting me all that charitably. You keep mentioning a dichotomy between satisficing and maximizing, for instance, as if you think I'm advocating maximizing as the better option, but really, that's not what I'm saying at all! I'm saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking. Good satisficing =/= stopping at the first solution you see. This failure mode is especially common in programming, I find, where you generally aren't under a time limit (or at least, not a "sensitive" time limit in the sense that fifteen extra minutes will be significant), and yet people are often willing to settle for the first "working" solution they see, even though a little extra effort could have bought them a moderate-to-large increase in efficiency. You can consciously decide "I want to satisfice here, not maximize," but if you have a policy of stopping at the first "acceptable" solution, you'll miss a lot. I'm not saying satisficing is bad, or even that satisficing isn't as good an option as maximizing; I'm saying that even when satisficing, you should still extend your search depth by a small amount to ensure you aren't missing anything. (And I'm speaking from real-life experience when I say that yes, this is a common failure mode.)
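To make the programming point concrete, here's a hypothetical sketch (my own example, not one from the discussion): both functions below "work", so a satisficer who stops at the first one never notices the second, even though a few extra minutes of search would have bought a large efficiency gain.

```python
# Hypothetical illustration of "first working solution" vs. the solution
# a little more search would find. Both correctly detect duplicates.

def has_duplicates_first_try(items):
    # The first solution many people stop at: a nested scan, O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_after_more_thought(items):
    # A slightly deeper search turns up an O(n) approach using a set,
    # which has average O(1) membership testing.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both are "acceptable" answers to the same problem; the point is only that stopping the search at the first one has a real cost, even when satisficing.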

In terms of the chess analogy (which incidentally I feel is getting somewhat stretched, but whatever), I note that you only mention options that are very extreme--things like losing rooks or queens, or getting checkmated. Often, chess is more complicated than that. Should you move your knight to an outpost in the center of the board, or develop your bishop to a more active square? Should you castle, moving your king to safety, or should you try to recoup a lost pawn first? These are situations in which the "right" move isn't at all obvious, and if you spot a single "good" move, you have no easy way of knowing whether a better move is lurking somewhere out there. Contrast that with the situation you presented involving winning a pawn versus checkmating your opponent; there, the correct move is easy to see. In short, I feel your chess examples are a bit contrived, almost cherry-picked to support your position. (I'm not saying you actually did cherry-pick them, by the way; I'm just saying that's how it sort of feels to me.)

So basically, to summarize my position: when you're stuck dealing with a complicated situation, in chess and in life, halting your search at the first "acceptable" option is not a good idea. That's my claim. Not "maximizing is better than satisficing".

I'm saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking.

Taken literally, this is obviously and trivially true: given more resources, your solution is likely to improve.

But in context, the benefit is not costless. Time (particularly in a chess game) is a precious resource -- to justify spending it you need a cost-benefit analysis.

Your position offers no criteria and no way to figure out when you've spent enough resources (time) and should stop -- and that is the real issue at hand.

Rationality Quotes Thread March 2015

by Vaniver, 2nd Mar 2015, 235 comments


Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.