An elementary question that has probably been discussed for 300 years, but I don't know the right keywords to Google it.

How, theoretically, do you deal (in decision theory/AI alignment) with a "noncompact" utility function? E.g. suppose your set of actions is parameterized by t in (0, 1], with U(t) = t for t < 1 and U(1) = 0. Which t should the agent choose?
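To spell out the difficulty:

$$\sup_{t \in (0,1]} U(t) = 1, \qquad \text{but for every } t < 1,\ U\!\left(\tfrac{1+t}{2}\right) > U(t), \qquad U(1) = 0,$$

so the supremum is not attained: every action is strictly dominated by another, and no action is optimal.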

E.g. consider: the agent gains utility f(t) from expending a resource at time t, where f is a (sufficiently fast-growing) increasing function. When should the agent expend the resource?
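The two examples are really the same problem. Assuming f is increasing and unbounded, map the expend-time T in [0, ∞) to t = T/(1+T) and "never expend" to t = 1, and normalize utility:

$$U(t) = \frac{f(T)}{1 + f(T)} \ \text{ for } t < 1, \qquad U(1) = 0.$$

Then U is increasing with supremum 1 as t approaches 1, and drops to 0 at t = 1: essentially the first example.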

I guess the obvious answer is "such a utility function cannot exist, because the agent always does something, and whatever it does reveals its true utility function", but it seems like it would be difficult to hard-code compactness into an agent's decision problem in a way that doesn't make the agent stupid.

3 Answers

Donald Hobson · Aug 11, 2020

What you do is pick a Turing machine that you think halts eventually but takes a really long time to do so, and then expend the resource when that TM stops.

After all, a computer with an N-bit program can't delay longer than BB(N) steps. In practice, if most of the bits aren't logically correlated with long-running Turing machines, it can't run nearly that long. This inevitably becomes a game of logical uncertainty about which Turing machines halt.
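A minimal sketch of this strategy in Python (illustrative only; the tiny busy-beaver machine and the `expend_resource` stub are stand-ins). The committed machine here is the 2-state busy beaver champion, which halts after exactly 6 steps; a real agent would commit to a machine whose runtime approaches BB(N) for its program size N.

```python
from collections import defaultdict

# Transition table for the 2-state busy beaver champion:
# (state, symbol) -> (write, move, next_state); 'H' is the halt state.
# On a blank tape this machine halts after exactly 6 steps.
BB2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run_until_halt(table, max_steps=None):
    """Simulate a TM on a blank tape; return the number of steps taken."""
    tape = defaultdict(int)          # blank tape of 0s, unbounded both ways
    state, head, steps = 'A', 0, 0
    while state != 'H':
        if max_steps is not None and steps >= max_steps:
            raise RuntimeError("no halt within budget")
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps

def expend_resource():
    # Hypothetical actuator stub standing in for the agent's real action.
    print("expending the resource now")

# The agent's policy: wait out the committed computation, then act.
steps = run_until_halt(BB2)
print(f"committed machine halted after {steps} steps")  # prints 6
expend_resource()
```

The point of committing to a TM rather than a distribution over actions is that the halting step count is a definite, possibly enormous, but finite number the agent can commit to without being able to compute it in advance.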

I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?

Donald Hobson · 4y
Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest: "the runtime of the longest-running Turing machine that ZFC + a large cardinal axiom can prove halts (with the proof being at most 3^^^3 symbols long)". This is an almost optimal answer, in that it is well defined if Con(ZFC + large cardinal axiom) holds, and it beats any answer that relies only on ZFC + the large cardinal axiom. An AI asked to output the largest number it can is playing a game of name-the-largest-computable-number.
Abhimanyu Pallavi Sudhir · 4y
Wait, but can't the AI also choose to adopt the strategy "build another computer that can output a larger computable number"?
Donald Hobson · 4y
If the computer has a finite amount of memory, say n bits, and can't build more, this puts a 2^n bound on how long it can wait: a machine with at most 2^n distinct configurations either halts within 2^n steps or repeats a configuration and loops forever. If it can build more, it will. The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually. This gets into details about exactly how the AI is handling logical uncertainty.

Andrew Kao · Aug 11, 2020

In economics, if utility is strictly increasing in t—the quantity of it consumed—then we would call it a type of "good", and utility functions are often unbounded. What makes the ultimate choice of t finite is that utility from t is typically modeled as concave, while costs are convex. I think you might be able to find some literature on convex utility functions, but my impression is that there isn't much to study here:

If utility is strictly increasing in t (even accounting for cost of its inputs/opportunity cost etc.), then you will consume all of it that you can access. So if the stock of available t is finite, then you have reduced the problem to a simpler subproblem, where you allocate your resources (minus what you spent on t) to everything else. If t is infinite, then you have attained infinite utility and need not make any other decisions, so the problem is again uninteresting.

Of course, the assumption that utility is strictly increasing in t, regardless of circumstance, is a strong one, but I think it is what you're asking.
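To make the concave-utility/convex-cost mechanism above concrete (an illustrative example, not from the original answer): take concave utility √t from the good and a linear cost ct with c > 0. Then

$$\max_{t \ge 0}\ \sqrt{t} - c\,t, \qquad \frac{1}{2\sqrt{t^*}} = c \ \Longrightarrow\ t^* = \frac{1}{4c^2},$$

and strict concavity of the objective makes t* the unique, attained maximizer. The OP's example deliberately removes exactly this structure.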

Infinite t does not necessarily deliver infinite utility.

Perhaps it would be simpler if I instead let t be in (0, 1], with U(t) = t for t < 1 and U(1) = 0.

It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.

(It's not a particularly weird utility function -- consider, e.g., an agent that needs to expend a resource such that the utility from expending it at time t is some fast-growing function f(t), but never expending it gives zero utility. In any case, an adversarial agent can always create this situation.)

Andrew Kao · 4y
I see what you mean now, thanks for clarifying. I'm not personally aware of any "best" or "correct" solutions, and I would be quite surprised if there were one (mathematically, at least, we know there's no maximizer). But concretely speaking, you can restrict the choice set of t to a compact set such as [ε, 1 − ε] and develop the appropriate bounds for the analysis you're interested in. Maybe not the most satisfying answer, but I guess that's analysis in a nutshell.
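Spelled out (my note, not the commenter's): on the restricted set the maximum is attained at the right endpoint and is within ε of the unrestricted supremum,

$$\max_{t \in [\varepsilon,\,1-\varepsilon]} U(t) = 1 - \varepsilon = \sup_{t \in (0,1]} U(t) - \varepsilon,$$

i.e. the restricted problem has an attained maximum that is ε-optimal for the original one.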
> In economics, we call something a "good" if utility is increasing in t—the quantity of it consumed—and utility functions are often unbounded.

I don't think that's true. Modern economists say utility is increasing in t _at the relevant margins_. There is no claim that it's unbounded or that it doesn't become negative utility at some future time or quantity.

Andrew Kao · 4y
Yes, I agree with you! I was classifying the t that OP was asking about rather than providing a definition, but the wording was somewhat unclear. I have edited the original comment.

Dagon · Aug 11, 2020

Basic answer: there are no infinities in the real world. Whatever resource t you're talking about is actually finite - find the limit and your problem goes away (that problem, at least; many others remain when you do math on utility, rather than doing the math on resources/universe-states and then simply transforming to utility).

It does not require infinities. E.g. you can just reparameterize the problem to the interval (0, 1]; see the edited question. You just require an infinite set.

Dagon · 4y
The answer remains the same - as far as we know, the universe is finite and quantized. At any t, there is a probability of reaching t+epsilon, making the standard expected utility calculation (probability × reward) useful.
Abhimanyu Pallavi Sudhir · 4y
Suppose the function U(t) is increasing fast enough, e.g. if the probability of reaching t is exp(-t), then let U(t) be exp(2t), or whatever. I don't think the question can be dismissed that easily.
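Concretely, with those numbers the expected-utility calculation inherits the same problem:

$$\mathbb{E}[U \mid \text{expend at } t] = e^{-t} \cdot e^{2t} = e^{t},$$

which is still strictly increasing in t, so "probability × reward" again has no optimal stopping time.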