Previously, Taw published an article entitled "Post your utility function", after having tried (apparently unsuccessfully) to work out "what his utility function was". I suspect that there is something to be gained by trying to work out what your priorities are in life, but I am not sure that people on this site are helping themselves very much by assigning dollar values, probabilities and discount rates. If you haven't done so already, you can learn on Wikipedia why people like the utility function formalism. I will say one thing about the expected utility theorem, though. An assignment of expected utilities to outcomes is (modulo rescaling utilities by a positive affine transformation) equivalent to a preference ordering over probabilistic combinations of outcomes; utilities are NOT properties of the outcomes you are talking about, they are properties of your mind. Goodness, like confusion, is in the mind.
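To make that equivalence concrete, here is a rough sketch (the lotteries, outcomes and numbers are invented purely for illustration) of the fact that rescaling utilities by any positive affine transformation leaves every expected-utility comparison between lotteries unchanged:

```python
# Two lotteries over three outcomes, given as {outcome: probability}.
lottery_1 = {"stay_home": 0.7, "go_hiking": 0.3}
lottery_2 = {"stay_home": 0.2, "go_hiking": 0.5, "twist_ankle": 0.3}

# An arbitrary, purely illustrative utility assignment.
utility = {"stay_home": 0.0, "go_hiking": 10.0, "twist_ankle": -50.0}

def expected_utility(lottery, u):
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Any positive affine transformation u' = a*u + b with a > 0 ...
a, b = 3.7, 142.0
rescaled = {outcome: a * v + b for outcome, v in utility.items()}

# ... preserves the ordering of lotteries by expected utility.
prefer_1_before = expected_utility(lottery_1, utility) > expected_utility(lottery_2, utility)
prefer_1_after = expected_utility(lottery_1, rescaled) > expected_utility(lottery_2, rescaled)
assert prefer_1_before == prefer_1_after
print(prefer_1_before)  # True: lottery_1 is preferred under both scalings
```

Only the ordering of lotteries carries any meaning here; the particular numbers attached to outcomes have no significance beyond that.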

In this article, I will claim that trying to run your life based upon expected utility maximization is not a good idea, and that asking "what your utility function is" is therefore not a useful question to try to answer.

There are many problems with using expected utility maximization to run your life. Firstly, the size of the set of outcomes that one must consider in order to rigorously apply the theory is ridiculous: one must consider all probabilistic mixtures of possible histories of the universe from now until whatever your time horizon is. Even after identifying macroscopically identical histories, this set is huge. Humans naturally describe world-histories in terms of deontological rules, such as "if someone is nice to me, I want to be nice back to them", "if I fall in love, I want to treat my partner well (unless s/he betrays me)", "I want to achieve something meaningful with my life and be well-renowned", or "I want to help other people". In order to translate these deontological rules into utilities attached to world-histories, you would have to assign a dollar utility to every possible world-history, with all variants of who you fall in love with, where you settle, what career you have, what you do with your friends, and so on. Describing your utility function as a linear sum of independent terms will not work in general because different aspects of your life interact: for example, whether accounting is a good career for you will depend upon the kind of personal life you want to live. You can, of course, emulate deontological rules such as "I want to help other people" in a complex utility function - that is what the process of enumerating human-distinguishable world-histories amounts to - but it is nowhere near as efficient a representation as the usual deontological rules of thumb that people live by, particularly given that the human mind is well-adapted to representing deontological preferences (such as "I must be nice to people" - as was discussed before, there is a large amount of hidden complexity behind this simple English sentence) and very poor at representing and manipulating floating-point numbers.
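As a toy illustration of that interaction point (the careers, lifestyles and numbers below are all made up): if the value of a career depends on the kind of personal life that accompanies it, no utility function built as a sum of independent per-aspect terms can reproduce the preference table.

```python
# A made-up preference table over two interacting aspects of life:
# how good a career is depends on which personal life accompanies it.
value = {
    ("accountant", "settled family life"): 10,
    ("accountant", "constant travel"): 2,
    ("touring musician", "settled family life"): 3,
    ("touring musician", "constant travel"): 9,
}

# Any additive utility u(career, life) = f(career) + g(life) forces
# u(a, s) + u(m, t) == u(a, t) + u(m, s), no matter what f and g are.
lhs = value[("accountant", "settled family life")] + value[("touring musician", "constant travel")]
rhs = value[("accountant", "constant travel")] + value[("touring musician", "settled family life")]
print(lhs, rhs)  # 19 and 5: the equality fails, so no additive decomposition exists
assert lhs != rhs
```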

Toby Ord's BPhil thesis has some interesting critiques of naive consequentialism, and would probably provide an entry point to the literature:

‘An uncomplicated illustration is provided by the security which lovers or friends produce in one another by being guided, and being seen to be guided, by maxims of virtually unconditional fidelity. Adherence to such maxims is justified by this prized effect, since any retreat from it will undermine the effect, being inevitably detectable within a close relationship. This is so whether the retreat takes the form of intruding calculation or calculative monitoring. The point scarcely needs emphasis.’

There are many other pitfalls: One is thinking that you know what is of value in your life, and forgetting what the most important things are (such as youth, health, friendship, family, humour, a sense of personal dignity, a sense of moral pureness for yourself, acceptance by your peers, social status, etc) because they've always been there so you took them for granted. Another is that since we humans are under the influence of a considerable number of delusions about the nature of our own lives (in particular, that our actions are influenced exclusively by our long-term plans rather than by the situations we find ourselves in or our base animal desires), we often find that our actions have unintended consequences. Human life is naturally complicated enough that this would happen anyway, but attempting to optimize your life whilst under the influence of systematic delusions about the way it really works is likely to make it worse than if you just stick to default behaviour.


What, then, is the best decision procedure for deciding how to improve your life? Certainly I would steer clear of dollar values and expected utility calculations, because this formalism is a huge leap away from our intuitive decision procedure. It seems wiser to me to make small, incremental changes to your decision procedure for getting things done. For example, if you currently decide what to do based completely upon your whims, consider making a vague list of goals in your life (with no particular priorities attached) and updating your progress on them. If you already do this, consider brainstorming for other goals that you might have ignored, and then attach priorities based upon the assumption that you will certainly achieve or not achieve each of these goals, ignoring what probabilistic mixtures you would accept (because your mind probably won't be able to handle the probabilistic aspect in a numerical way anyway).

53 comments

I generally agree, but I challenge the claim that the (mostly social) failures of conscious consequentialist reasoning are just a matter of speed-of-calculation versus a cached rule. In most social situations, one or several such rules feel particularly salient to our decisionmaking at any moment, but the process by which these particular rules seem salient is the essence of our real (unconscious) calculations.

We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain (though it can lead to occasional insights that the heuristics of the framework miss). Compare it to our native 'physics engine' that allows us to track objects, move around and even learn to catch a ball, versus the much slower and mistake-prone calculations that can still give the better answer when faced with truly novel situations (our intuitive physics is wrong about what happens to a helium balloon in the car when you slam on the brakes, but a physics student can get the right answer with conscious thought).

I suggest that attempting to live one's entire life by either conscious expected-utility maximization or even by execution of consciously-chosen low-level heuristics is going to work out badly. What works better for human beings is to generally trust the unconscious in familiar social domains, but to observe and analyze ourselves periodically in order to identify (and hopefully patch) some deleterious biases. We should also try to rely more on conscious Bayesian reasoning in domains (like scientific controversies or national politics) that were unlikely to be optimized for in the ancestral environment.

This leaves aside, of course, the question of what to do when one's conscious priorities seem to oppose one's unconscious priorities (which they do, not in most things, but in some crucial matters).

The unconscious mind is not some dark corner, it's the vast majority of "you". It's what and who you are, all the time you're not pointing your "Cartesian camcorder" at yourself. It's the huge huge majority of your computing capacity, nearly all of your personality, nearly all of your motivation, emotion, and beneath that the coldly calculating part that handles signaling, pack rank, etc. That's the real problem. Cutting it out of the picture in favor of the conscious mind is like the pinky finger demanding that the body be cut off.

Upvoted because I deliberatively judge that this should scare me, and yet I immediately recognize it as obviously true, and yet it does not really scare me, because nearly all of my emotion, motivation, and so forth are below the level where my deliberative judgement desperately cries that she should be in control of things.

Roko

nearly all of my emotion, motivation, and so forth are below the level where my deliberative judgement desperately cries that she should be in control of things.

The conscious mind finds itself riding an uncontrollable wild horse of emotions, and generally the success of a person in the real world will depend on the conscious mind's ability to strategically place carrots in places such that the horse goes roughly the right way.

But, on the other hand, moral antirealism means that if the conscious mind did ever completely free itself from that wild horse, it would have only an extremely impoverished purpose in life, because rationality massively under-constrains behaviour in this morally relative existence.

The conscious mind finds itself riding an uncontrollable wild horse of emotions, and generally the success of a person in the real world will depend on the conscious mind's ability to strategically place carrots in places such that the horse goes roughly the right way.

This is a very common view about the human mind, and I think it is a mistaken one. In most domains of daily life, the unconscious knows what it's doing far better than the conscious mind; and since many of our conscious goals consist of signaling and ignore the many unconscious actions that keep them running, the conscious goals would probably be incoherent or awful for us if we genuinely pursued them in an expected-utility-maximizing fashion. Fortunately, it is impossible for us to do so by mere acts of will.

I instead hope to let my conscious thought model and understand the unconscious better, in order to point out some biases (which can be corrected for by habit or conscious effort or mind-hack) and to see if there are ways that both my conscious and unconscious minds can achieve their goals together rather than wasting energy in clashes. (So far I haven't seen an unconscious goal that my conscious mind can't stomach; it's often just subgoals that call out for compromise and change.)

Also, there's no hope of the conscious mind "freeing itself", because it is not enough of an independent object to exist on its own.

IAWYC, but I want to add that the conscious mind has some strengths— like the ability to carefully verify logical arguments and calculate probabilities— which the unconscious mind doesn't seem to do much of.

I'm not sure how to describe what actually happens in my mind at the times that I feel myself trying to follow my conscious priorities against some unconscious resistance, but the phenomenon seems to be as volitional as anything else I do, and so it seems reasonable to reflect on whether and when this "conscious override" is a good idea.

Roko

the question of what to do when one's conscious priorities seem to oppose one's unconscious priorities

Fully sorting out this problem probably requires physical capabilities and intelligence beyond our current level.

In key areas such as getting out of bed in the morning there are hacks, like eating a whole dessert spoon of honey to increase your blood sugar and make you feel active.

We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain

It's not about outperforming; it's about improving on what you have. There is no competition: incoherence is indisputably wrong wherever it appears. Only if the time spent reflecting on the coherence of decisions could be better spent elsewhere is there a tradeoff, but the other activity doesn't need to be identified with "instinctive decision-making"; it might as well be hunting or sleeping.

The context here is of an aspiring rationalist trying to consciously plan and follow a complete social strategy, and rejecting their more basic intuitions about how they should act in favor of their consequentialist calculus. This sort of conscious engineering often fails spectacularly, as I can attest. (The usual exceptions are heuristics that have been tested and passed on by others, and are more likely to succeed not because of their rational appeal relative to other suggestions but rather because of their optimization by selection.)

Then they are reaching out too much, using the tool incorrectly, confusing themselves instead of fixing the problems. Note that conscious planning is also mostly intuition, not expected utility maximization, and you've just focused on the incoherence of applying it where the consequence of such an act is failure, while the goal is success.

Roko

Excellent comment.

I agree that, if you want to experience the outcomes typical of other people in situations similar to yours, there's not much to be gained by thinking about your utility function. In general it's foolish to rely on inside-view thinking if you have more than enough outside-view information. But if you don't like your peer group's outcomes — maybe you got into an unfortunate situation, or you have some wacky preferences like wanting to build a friendly AI, or you just want to show off your rationalist chops — then using expected utility could still be valuable.

Our inability to model all possible histories of the universe does not make the concept of expected utility useless. We can apply it at whatever level of detail is tractable. We can't do real-time expected utility calculations, but we can use generalizations and implications of utility theory to discover when our rules of thumb are contradictory or self-defeating.

We don't need to throw away intuition either; we can consider it from an outside view. In situations where following intuition gives the best expected outcome, follow intuition.

Roko

But if you don't like your peer group's outcomes — maybe you got into an unfortunate situation, or you have some wacky preferences like wanting to build a friendly AI, or you just want to show off your rationalist chops — then using expected utility could still be valuable.

This sounds good; if the default way of living is not working for you, then try something different and risky.

Your argument boils down to "Calculating expected utilities is hard, therefore it's rarely worth trying." I agree with the premise, but the conclusion goes too far.

There are many situations in which I have done better by considering possible outcomes, the associated likelihoods, and payoffs. I have used this reasoning in my (short) career, in decisions about investing/insurance, in my relationships, and in considering what charities are worthwhile.

Yes, in many situations good habits and heuristics are more useful than thinking about probabilities, but you get mighty close to reifying our "intuitive decision procedure" aka our stone-age brain which was programmed to maximize inclusive fitness (which does not weigh heavily in my utility function) in an environment which was very different from the one in which we now find ourselves.

loqi

Your argument boils down to "Calculating expected utilities is hard, therefore it's rarely worth trying."

I think this fails to capture an important point Roko made. If living according to expected utility calculations was merely hard, but didn't carry significant risks beyond the time spent doing the calculations, the statement "trying to run your life based upon expected utility maximization is not a good idea" would not carry much weight. However:

There are many other pitfalls: One is thinking that you know what is of value in your life, and forgetting what the most important things are

This is the real problem, and it seems more about calibration than accuracy.

Roko

In decisions about investing/insurance

It is certainly the case that there are some situations where utility maximization works well, such as investment.

in my relationships

You used utility maximization to manage a relationship? Or to choose a partner? I'd like to hear more.

You are looking at the application of decision theory in this context from the wrong angle. You see a decision procedure as constructed bottom-up, a complete toy model that can't possibly match the challenge of the real thing. Instead, a decision procedure here is a restriction, a principle that allows you to catch inconsistencies in the messy human decision-making process.

If you believe that choosing Y requires X to be true, you don't believe X to be true, but you choose Y, something fishy is going on. You believe in the correctness of the rules, you observe that your stated opinions don't match the rules, and so you are forced to revise your opinions. The complexities of the physical world are not the issue, this is a device for the sanity of mind, and it can be applied at any level of granularity, with concepts however fuzzy and imprecise.

Roko

this is a device for the sanity of mind, and it can be applied at any level of granularity, with concepts however fuzzy and imprecise.

This would be nice if it were true! I presume you mean utility function maximization by "this"?

The thing that "is a device for the sanity of mind, and it can be applied at any level of granularity, with concepts however fuzzy and imprecise" is rationality in the broad sense. The example you give:

If you believe that choosing Y requires X to be true, you don't believe X to be true, but you choose Y,

is not an instance of maximizing a utility function.

I am not attacking rational thinking in general here. Only the specific practice of trying to run your life by maximizing a utility function.

No, of course it's not for "running your life", that would be the approach of constructing a complete model (the right stance for FAI, the wrong one for human rationality). It's for mending errors in your mind that runs your life.

The special place of expected utility maximization comes from the conjecture that any restriction for coherence of thought can be restated in terms of expected utility maximization. My example can obviously be translated as well, by assigning utility to outcome given possible states of binary X and Y, and probability to X. This form won't be the most convenient, the original one may be better, but it's still equivalent, the structure of what's required of coherent opinion is no stronger.

Roko

My example can obviously be translated as well, by assigning utility to outcome given possible states of binary X and Y, and probability to X.

It doesn't matter what utilities you assign to outcomes X and Y; what you have caught by saying

you believe that choosing Y requires X to be true, you don't believe X to be true, but you choose Y

is an error of logic. The person here believes ¬X, Y ⇒ X, and Y.

As I said, it's just a special case, with utility maximization not being the best form for thinking about it (as you noted, simple logic suffices here). The conjecture is that everything in decision-making is a special case of utility maximization.

Roko

The conjecture is that everything in decision-making is a special case of utility maximization.

Sure - just like every program can be written in Brainfuck. But you wouldn't actually use that in real life, because it is not efficient.

More broadly, any realizable decision system must take into account the trade-offs of estimation, the costs of computation or cognition, as well as the price of gathering information.

It is tempting to hand wave these away in the apparent simplicity of a utility function, but each of these needs to find their place either in the use or in the construction of that function. And assuming that we've solved them already trivializes the real problems. After all, these sorts of questions make utility functions hard to build correctly, even potentially uncomputable.

So some outcomes must be conflated with each other and the set of outcomes must be described in rough heuristic terms (e.g. lives where I in order to cut the set down in size.

Lives where you what?

Roko

Thanks, didn't catch that typo.

taw

I don't agree with your arguments. First, nobody is proposing an infinitely accurate utility function, just that a rough utility function is a good approximation of human behaviour, both descriptively and prescriptively.

As for your particular examples:

  • I don't see how being stupid about the reality of relationships adds much value. You should be aware of the chances of infidelity, etc. If, knowing these chances, you decide not to monitor your lover, that's a purely consequentialist decision.
  • Moderate deontology can be emulated in consequentialism by simply assigning values to following and breaking rules. I don't think anybody's values are truly absolute.
  • "Delusions" can be handled by taking an outside view and adding extra terms to the function, for example using all the research on what influences our happiness.

None of them is terribly convincing.

And arguing from consequences: the entire field of economics, in pretty much all its forms, is based on the assumption that utility maximization is a good approximation of human behaviour. If utility functions aren't even that much, then the whole of economics is worthless almost automatically. It doesn't seem to be entirely worthless, so utility functions seem to have some meaning.

Roko

assumption that utility maximization is a good approximation of human behaviour.

It is possible that modelling humans as utility functions works with a large aggregate of humans, but is not accurate on the individual level. The deviations from utility maximization probably cancel out in a large crowd.

taw

Aggregation wouldn't really work unless the utility function were a pretty decent approximation and its errors were reasonably random.

Roko

I think that when economists say that

assumption that utility maximization is a good approximation of human behaviour

they mean "well we looked at the behaviour of 1000 people, and lo and behold, it fits this utility function U!"

Each person individually could have preferences nothing like U. The average of {1,1,1,1,1,1,1,1,1,1000} is about 100, but no individual number is anywhere near 100.

Deviations from utility maximization (i.e. irrationality) will likely cancel out en masse.

Furthermore, economists will generally model some small part of human behaviour - e.g. purchasing tradeoffs in a particular domain - but I have never seen an economist model the entire human preference set. This is probably because it is too complicated for a team of expert economists to write down - never mind the poor individual concerned.

Good point, especially when it comes to markets. You can have a lot of people acting in predictably irrational ways, and a few people who see an inefficiency and make large sums of money off of it, and the net result is a quite rational market.

taw

The average of a large number of functions that look nothing like U has little reason to look much like U. The fact that something like U turns up repeatedly needs an explanation.

It's true that usually only a small portion of human behaviour is modeled at a time, but utility maximization is composable, so you can take every single domain where utility maximization works, and compose it into one big utility maximization model - mathematically it should work (with some standard assumptions about types of error we have in small domain models, assumptions which might be false).

Roko

utility maximization is composable, so you can take every single domain where utility maximization works, and compose it into one big utility maximization model - mathematically it should work (with some standard assumptions about types of error we have in small domain models, assumptions which might be false).

Sure! I don't doubt this at all. I'm not saying that you cannot in principle build some humongous utility function that is a justifiable fit to what I, a particular human, want. BUT the point is that it isn't feasible in practice - hence my statement in the original post:

I will claim that trying to run your life based upon expected utility maximization is not a good idea

taw

What I was trying to do was more to figure out a rough approximation of my utility function descriptively, to see if any of my actions are extremely irrational - like wasting too much time/money on something I care about very little, or not spending some time/money on something I care about a lot.

Roko

OK, but then the question is how do you approximate a mathematical function from a set X to R, especially when your biggest problem is not being able to enumerate the elements of X? If you miss out most of the elements of X, then no possible assignment of numbers to those that you do include will constitute a good approximation to the function.

I like this idea, though.

taw

The approximation is likely to be a list of "I value event X relative to the default state at Y utilons" entries, following the economic tradition of focusing on the margin. Skipping events from this list doesn't affect comparisons between the events on the list.

Roko

But what will you do with your incomplete list:

"I value event 'make $x' relative to the default state at log(x) utilons"

"I value event 'play a computer game' relative to the default state at 30 utilons"

"I value event 'marry the woman of my dreams' relative to the default state at 1600 utilons"

...

once you've compiled it? What do you do with the "utilon" numbers?

taw

Next I look at the utilon-to-cost ratios, and do more of the things which result in events with high ratios, and less of the things which result in events with low ratios.

By the way, as the function is marginal, the value of money will be approximately linear: an extra $100 is worth pretty much 100 times more than an extra $1. It only breaks down for very large amounts that significantly affect your net worth.
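A minimal sketch of that procedure (the events, utilon values and costs below are invented; this is not taw's actual list), plus the marginal-linearity remark about money under an assumed log utility of total wealth:

```python
import math

# Hypothetical (event, utilons gained, cost in hours) entries.
events = [
    ("play computer game", 30.0, 3.0),
    ("take an evening class", 90.0, 40.0),
    ("call an old friend", 25.0, 1.0),
    ("reorganize sock drawer", 1.0, 2.0),
]

# Do more of the high-ratio activities, less of the low-ratio ones.
for name, utilons, cost in sorted(events, key=lambda e: e[1] / e[2], reverse=True):
    print(f"{name}: {utilons / cost:.1f} utilons per hour")

# The "money is roughly linear at the margin" remark, assuming a log utility
# of total wealth W: log(W + x) - log(W) is approximately x / W for x << W.
W = 50_000.0
for x in (1.0, 100.0):
    print(f"extra ${x:.0f} is worth about {math.log(W + x) - math.log(W):.6f} utilons")
```

The two printed increments differ by a factor of roughly 100, which is the near-linearity being described; the only modelling choice doing any work is the assumed concave (log) utility of total wealth.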

I'm sympathetic to much of what you say here, and probably agree with it as a first approximation. But I dislike the implication (not sure whether intended or not) that a single way of making decisions (calculation vs. something closer to our intuitive decision procedure) is right in general. It seems far more likely to me that there are situations where one or the other, or some combination of procedures, is likely to be most useful; and the interesting questions are about what to use when, rather than what to use always and everywhere.

Roko

Yes, I agree, though a meta-decision procedure would be required to do the selection.

A lot of this probably comes down to:

Don’t assume – that you have a rich enough picture of yourself, a rich enough picture of the rest of reality, or that your ability to mentally trace through the consequences of actions comes anywhere near the richness of reality’s ability to do so.

Don’t assume – that you have a rich enough picture of yourself, a rich enough picture of the rest of reality, [...]

Enough for what? Or better/worse as opposed to what?

Rich enough that, if you're going to make these sorts of calculations, you'll get reasonable results (rather than misleading or wildly misleading ones).

The catch is of course that your reply is in itself a statement of the form that you declared useless (misleading/wildly misleading - how do you know that?).

I think there's some misunderstanding here. I said don't assume. If you have some reason to think what you're doing is reasonable or ok, then you're not assuming.

You could use that feedback from the results of prior actions. Like: http://www.aleph.se/Trans/Individual/Self/zahn.txt

Roko

True. However this caveat applies to any formalism for decision making - my claim is that expected utility maximization is hurt especially badly by these limitations.

I'm a utilitarian - and I don't "get" this post. Utilities are not dollar values. Being a utilitarian doesn't mean that you don't use heuristics.

Utilitarianism clarifies your goal to yourself. Everyone has a utility function - the main issue I see is whether they know what it is or not.

Why does everybody think that utilitarianism necessarily means consciously crunching real numbers?

Surreal numbers are a more natural fit, as argued by Conway; with plain real numbers it is less convenient to model preferences where an arbitrarily low probability of B is worth more than certainty of A, while A still has some value.
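One way to spell that out (my gloss, not the commenter's): with real-valued utilities, any finite value for B can be driven below the value of A by a small enough probability, whereas a non-Archimedean (e.g. surreal) utility for B cannot:

```latex
% Real-valued utilities: a small enough chance of B loses to certainty of A.
\[
u(A) > 0,\; u(B) < \infty \;\Longrightarrow\; \exists\, p > 0 :\; p\, u(B) < u(A).
\]
% Surreal utilities: any positive chance of B beats certainty of A,
% while A itself still has positive value.
\[
u(B) = \omega,\; u(A) = 1 \;\Longrightarrow\; \forall\, p \in \mathbb{R}_{>0} :\; p\, \omega > 1 = u(A).
\]
```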

Roko

I am criticising utility function maximization by assigning dollar values, which, by definition, involves real numbers. (well, ... ok, in reality you wouldn't bother to use irrational numbers)

What else did you have in mind? Remember, when you answer, that you had better keep the expected utility theorem in mind.

Using dollar values instead of utilons is indeed a bad idea. But regardless of that, we don't need the numbers; we only need to guess which choice gives the highest number, with whatever accuracy we can achieve given the resources at our disposal.

Surely the only point you're making in this long post is not that naïve consequentialism is a bad idea?

consider brainstorming for other goals that you might have ignored, and then attach priorities.

And how exactly does one attach priorities?

Roko

I am pretty bad in terms of writing long posts. I have shortened this one considerably given your comment. Let me know if you still think it is too long.