Delayed Gratification vs. a Time-Dependent Utility Function

by momothefiddler · 6th May 2012


Ideally, a utility function would be a rational, perfect, constant entity that accounted for all possible variables, but mine certainly isn't. In fact, I'd feel quite comfortable claiming that no human alive at the time of writing has one.

When confronted with the fact that my utility function is non-ideal or - since there's no universal ideal to compare it to - internally inconsistent, I do my best to figure out what to change and do so. The problem with a non-constant utility function, though, is that it makes it hard to maximise total utility. For instance, I am willing to undergo -50 units of utility today in return for +1 utility on each following day indefinitely. What if I accept the -50, but then my utility function changes tomorrow such that I now consider the change to be neutral, or worse, negative per day?
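As a toy illustration of the trade above (all numbers are mine, purely hypothetical): pay 50 units of utility today for +1 per day thereafter, with the risk that the function flips sign on some later day.

```python
# Toy model (hypothetical numbers): pay 50 utility on day 0 for +1/day after.
# If the utility function flips on day `change_day`, later days count as -1.

def total_utility(horizon_days, change_day=None):
    total = -50  # upfront cost on day 0
    for day in range(1, horizon_days + 1):
        if change_day is not None and day >= change_day:
            total -= 1  # after the change, each day is valued negatively
        else:
            total += 1
    return total

print(total_utility(100))                # stable function: -50 + 100 = 50
print(total_utility(100, change_day=2))  # early flip: -50 + 1 - 99 = -148
```

With a stable function the trade breaks even at day 50 and is positive ever after; a change of heart on day 2 turns the same choice into a steadily growing loss, which is exactly the worry.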

Just as plausible is the idea that I might be offered a trade that, while not of positive utility according to my current function, will be according to a future one. Just as I would think it a good investment to buy gold if I expected the price to go up but bad if I expected the price to go down, so I have to base my long-term utility trades on what I expect my future functions to be. (Not that dollars don't correlate with units of utility, just that they don't correlate strongly.)

How can I know what I will want to do, much less what I will want to have done? If I obtain the outcome I prefer now, but spend more time not preferring it, does that make it a negative choice? Is it a reasonable decision, in order to maximise utility, to purposefully change your definition of utility such that your expected future would maximise it?
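One way to make "base my trades on expected future functions" concrete, as a sketch with invented numbers rather than a solution to the underlying problem: weight how each candidate future function would value the trade by the probability of ending up with that function, and take the expectation.

```python
# Hypothetical sketch: score a trade under several possible future utility
# functions, each weighted by the probability that I come to hold it.

def expected_value(trade_outcomes):
    # trade_outcomes: list of (probability, utility under that future function)
    return sum(p * u for p, u in trade_outcomes)

# E.g. 70% chance my future self values the trade at +30,
# 20% chance at 0, 10% chance at -40 (all figures invented).
ev = expected_value([(0.7, 30), (0.2, 0), (0.1, -40)])
print(ev)  # 0.7*30 + 0.2*0 + 0.1*(-40) = 17.0
```

Of course this only relocates the problem: it says nothing about why the current function's weighting of those futures should be the privileged one, which is the question the post ends on.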


What brings this all to mind is a choice I have to make soon. Technically, I've already made it, but I'm now uncertain of that choice and it has to be made final soon. This fall I transfer from my community college to a university, where I will focus a significant amount of energy studying Something 1 in order to become trained (and certified) to do Something 2 for a long period of time. I had thought until today that it was reasonable for Something 1 to be math and Something 2 to be teaching math. I enjoy the beauty of mathematics. I love how things fit together; barely anything excites me as much as the definition of a derivative and its meaning, and I've shown myself to be rather good at it (though, to be fair, that's by comparison to those around me, so I don't know how I'd fare in a larger or more specialized pool). In addition, I've spent some time as a tutor; I seem to be good at explaining mathematics to other people, and I enjoy seeing their faces light up as they see how things fit together.

Today, though, I don't know if that's really a wise decision. I was rereading Eliezer's paper on AI in Global Risk and was struck by a line: "If we want people who can make progress on Friendly AI, then they have to start training themselves, full-time, years before they are urgently needed." It occurred to me that I think FAI is possible and that I expect some sort of AI within my lifetime (though I don't expect that to be short). Perhaps I'd be happier studying topology than I would cognitive science, and I'd definitely be happier studying topology than I would evolutionary psychology, but I'm not sure that even matters. Studying mathematics would provide positive utility to me personally and allow me to teach it. Teaching mathematics would be valued positively by me both because of my direct enjoyment and because I value a universe where a given person knows and appreciates math more than an otherwise-identical universe where that person doesn't. The appearance of an FAI would by far outclass the former and likely negate the significance of the latter. A uFAI has such a low utility that it would cancel out any positive utility from studying math. In fact, even if I focus purely on the increase of logical processes and mathematical understanding in Homo sapiens and neglect the negative effects of a uFAI, moving the creation of an FAI forward by even a matter of days could easily be of more end value than being a professor for twenty years.
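The comparison in that last sentence can be sketched numerically. Every figure below is invented purely for illustration; the point is only that multiplying a tiny per-person benefit across billions of people swamps the teaching term.

```python
# Invented illustrative numbers: twenty years of teaching vs. hastening FAI.
utility_per_student = 1      # value of one person appreciating math
students_per_year = 100
professor_years = 20
teaching_value = utility_per_student * students_per_year * professor_years

people_affected = 7_000_000_000   # rough world population
value_per_person_per_day = 0.001  # tiny assumed per-person daily benefit
days_hastened = 3
fai_value = people_affected * value_per_person_per_day * days_hastened

print(teaching_value, fai_value)  # 2000 vs. 21000000.0
```

Even with a per-person benefit a thousand times smaller than one student-year of teaching, three days of acceleration comes out four orders of magnitude ahead, which is what makes the conclusion so hard to dismiss.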


I don't want to give up my unrealistic, idealized dream of math professorship to study a subject that makes me less happy, but if I shut up and multiply, the numbers tell me that my happiness doesn't matter except as it affects my efficacy. In fact, shutting up and multiplying indicates that, if large amounts of labour were of significant use (and I doubt that would be any more use than large amounts of computing power), then it'd be plausible to at least consider subjugating the entire species and putting all effort toward creating an FAI. I'm nearly certain this result comes from having missed something, but I can't see what, and I'm scared that the near-certainty is merely an expression of my negative anticipation about giving up my pretty little plans.


Eliezer routinely puts forward, as negative, examples such as an AI that tiles the universe with molecular smiley faces. My basic dilemma is this: does the utility function at the time of the choice have some sort of preferred status in the calculation, or would it be highly positive to create an AI that rewrites brains to value, above all else, a universe tiled with molecular smiley faces, and then tiles the universe with molecular smiley faces?