It seems to me that the behavioural science research around 'the trade-off between maximising and satisficing' in terms of well-being relates to human beings being 'optimisers' only sometimes. It also seems to suggest that there can be alternative and better approaches to optimisation? Is this true? Or is 'satisficing' just a hidden multi-objective optimisation that factors in time and effort costs?

See what triggered my question: A decision-maker’s dilemma: search for the best option or settle for ‘good enough’?


My take:

Optimization is complicated. Much of AI behavior and safety research is about exactly this topic: how to optimize only when it's actually beneficial, which comes down to how you define 'optimal'.

Satisficing is just recognizing that there are points outside the range of your optimization plan. You have "enough" of whatever it is you're measuring, so marginal increases are worth less, and you should look at other resources/experiences/parameters to optimize instead of that one. In other words, re-calculate your optimization plan to figure out what's important now.

It's not that you stop optimizing - you _MUST BE_ optimizing something, if you make decisions. It's that you downweight one component that you've been focusing on, in order to upweight other things in your overall optimization. "An optimizer is exactly the same as a satisficer below its satisfaction level" is an instructive phrase.

Stuart Armstrong's work illustrates this point.
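That quoted phrase can be made concrete with a toy sketch (the options, utilities, and levels below are illustrative assumptions, not anything from the thread): model a satisficer as an optimizer whose utility is clipped at its satisfaction level.

```python
def optimizer_choice(options, utility):
    """Pick the option with the highest utility."""
    return max(options, key=utility)

def satisficer_choice(options, utility, level):
    """Pick as an optimizer would, but with utility capped at `level`:
    gains above the satisfaction level don't register."""
    return max(options, key=lambda o: min(utility(o), level))

# Below the satisfaction level the two agents rank options identically:
# with level=10 and identity utility, both pick 4 from [1, 2, 3, 4].
# Above it, the satisficer is indifferent among all "good enough" options.
```

With a low `level`, every option scoring at or above it looks the same to the satisficer, which is the sense in which it has "stopped optimizing" along that one axis while still optimizing overall.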

I think a great disservice is done if you talk about optimizing utility directly, rather than resources or states of the universe. A rational agent always optimizes utility, that's the very definition of utility. There are many cases where optimizing some set of resources (slack, money, other humans saved or subjugated, whatever) perfectly optimizes utility. Satisficing is the recognition that those utility curves (really, resource-utility relationships) have inflection points where a different set of resources needs to be optimized in order to maximize utility.
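The inflection-point idea can be sketched numerically (the resource names, the log-shaped curves, the 2x weight, and the budget are all assumptions for illustration): with concave resource-utility curves, a greedy allocator naturally switches which resource it pours effort into as marginal returns diminish, without ever changing its goal of maximizing total utility.

```python
import math

def utility(money_hours, leisure_hours):
    # Concave (diminishing-returns) utility in each resource;
    # the 2x weight on leisure is arbitrary.
    return math.log1p(money_hours) + 2 * math.log1p(leisure_hours)

def allocate(budget_hours):
    """Spend hours one at a time on whichever resource currently
    offers the larger marginal utility."""
    money = leisure = 0
    for _ in range(budget_hours):
        gain_m = utility(money + 1, leisure) - utility(money, leisure)
        gain_l = utility(money, leisure + 1) - utility(money, leisure)
        if gain_m >= gain_l:
            money += 1
        else:
            leisure += 1
    return money, leisure
```

Early hours all go to the higher-weighted resource; once its marginal utility falls below the other's, the allocator crosses the "inflection point" and starts topping up the neglected resource instead.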

Satisficing is optimization. I'd say even more generally, any kind of decision process is a kind of optimization because it's driving the world towards a particular state based on the decision method, even if that's something like "optimizing" for higher entropy. The only way not to optimize is to not make decisions which means not changing what happens based on feedback.

> Or is 'satisficing' just a hidden multi-objective optimisation that factors in time and effort costs?

I think this is basically right. I recently had a salesperson tell me "Your job is harder than mine, because you're trying to optimize something, while I'm just trying to maximize something." I disagreed that her job was in any way easy, but understood what she meant.

Like, there must be something I would ideally be maximizing, but it's something underspecified, insufficiently well understood at the gears level, or too many steps removed from the specific actions I'm taking, for me to usefully think about it that way without instantly and unavoidably Goodharting myself. I used to have a (very smart and insightful) boss at this job who liked to say "If you ever figure out what we're trying to do here, please tell me." This is sorta what he was trying to point towards.
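One standard way to cash out "satisficing as optimization that factors in time and effort costs" is an optimal-stopping rule (the uniform distribution and the cost figure below are assumptions for illustration): keep sampling options while the expected gain from one more draw exceeds the per-draw search cost.

```python
import math
import random

def reservation_value(cost):
    """For options drawn uniformly from [0, 1], the expected gain from
    one more draw, given current best v, is (1 - v)**2 / 2; stop once
    that drops below the per-draw cost, i.e. at v = 1 - sqrt(2 * cost)."""
    return 1 - math.sqrt(2 * cost)

def satisfice_search(cost, rng=random.random):
    """Draw options until one clears the reservation value."""
    threshold = reservation_value(cost)
    best, draws = 0.0, 0
    while best < threshold:
        best = max(best, rng())
        draws += 1
    return best, draws
```

From the outside, the resulting threshold looks like a fixed "good enough" aspiration level; but it falls straight out of maximizing value net of search costs. Cheap search yields a picky threshold, expensive search an easily satisfied one.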
