Suppose, for a moment, that somebody has written the Utility Function.  It takes some Universe State as its input, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline and, more importantly, relative to any other Universe States we might care to compare it to.

Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written?  And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?

If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask: what, exactly, -is- it supposed to be evaluating?  We can reframe the question in terms of the series of keys I press as I write the program, if that is an easier problem than evaluating what my computer is going to do.
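To make the comparison concrete, here is a deliberately toy sketch; the dictionary encoding of a Universe State and every name in it are mine, purely for illustration, not a claim about how the real Utility Function would work:

def toy_utility_function(universe_state):
    # Toy rule: prefer universes in which my program has stopped wasting energy.
    return 1.0 if universe_state["my_program_halted"] else 0.0

state_if_it_halts = {"my_program_halted": True}
state_if_it_loops = {"my_program_halted": False}

print(toy_utility_function(state_if_it_halts) > toy_utility_function(state_if_it_loops))  # True

The catch, of course, is that knowing which of those two dictionaries describes the actual universe means knowing whether my program halts.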


I'm not entirely sure what your argument is yet, but here's a simple example utility function that might be interesting as a baseline:

def utility(universe):
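    # A constant utility: every universe state gets the same score.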
    return 42

This function halts for all inputs, and assigns each input a desirability value that can be compared with others. What sort of utility function are you imagining?

If the set of Universe States is finite, then yes, there will be a computable utility function for any VNM-rational preferences (the program can be just a lookup table).
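A minimal sketch of the lookup-table case, with a made-up three-state universe and made-up numbers:

def utility(universe_state):
    # Finitely many Universe States, so the whole preference ordering fits in a table.
    table = {"state_A": 3.0, "state_B": 1.5, "state_C": 0.0}
    return table[universe_state]

print(utility("state_A") > utility("state_B"))  # True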

If the set of possible Universe States is countably infinite, and you can meaningfully encode every universe state as a finite string, then no, not every utility function is computable. Counterexample: number the possible universes and assign a utility of 1 to every universe whose number describes a halting Turing machine, and 0 to every universe whose number describes a non-halting Turing machine.
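A sketch of the shape of that counterexample; the halts stub below stands in for an oracle that cannot exist, which is exactly why this utility function is well-defined but not computable:

def halts(universe_number):
    # No total implementation of this can exist (that's the halting problem).
    raise NotImplementedError("would require solving the halting problem")

def utility(universe_number):
    # Perfectly well-defined as a mathematical function, yet not computable.
    return 1 if halts(universe_number) else 0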

If the set of possible Universe States is uncountably infinite, or you cannot meaningfully encode every universe state as a finite string, then no, the utility functions might not be remotely computable.

What does the Morality Modeling Language do? If it is only allowed to describe computable functions, then of course every utility function it describes will be computable!

If the set of Universe States is finite, then yes, there will be a computable utility function for any VNM-rational preferences (the program can be just a lookup table).

Oops, not totally correct, because the probabilities in the lotteries could be uncomputable.

Isn't this just the AI reflection problem?

The problem is that an AI needs to model itself, which seems to be exactly the problem you're talking about.

The answer obviously has something to do with the fact that your utility program isn't perfect if it's running on real hardware, so it will have to approximate its own effect on the utility.

Or, if the question means something else, the answer could be the thing TDT does, and the program gives you the utility assuming that it doesn't run. This probably isn't what a utility function is for, though; we likely want it to calculate utility for states where it does exist, in which case the reflection problem needs to be solved.

A utility function is a function, not a program. You could talk about whether or not it's computable. Since you can find a utility function by randomly putting the agent into various universes and seeing what happens, it's computable.
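A sketch of the kind of procedure that seems to be in mind here, under strong assumptions: finitely many candidate universes, and an agent whose choice between any two of them we can observe. Every name and number below is made up, and the replies below dispute how far this generalizes:

UNIVERSES = ["u1", "u2", "u3"]

def agent_prefers(a, b):
    # Stand-in for "put the agent into a situation where it must choose
    # between a and b, and see what happens". Here it's just a fixed ordering.
    observed_order = {"u1": 2, "u2": 0, "u3": 1}
    return observed_order[a] > observed_order[b]

def empirical_utility(universe):
    # Score each universe by how many alternatives the agent picks it over.
    return sum(agent_prefers(universe, other) for other in UNIVERSES if other != universe)

print({u: empirical_utility(u) for u in UNIVERSES})  # {'u1': 2, 'u2': 0, 'u3': 1}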

  1. Some utility functions can be found by randomly putting agents into various universes and seeing what happens.
  2. The universe is computable.
  3. Therefore, all utility functions are computable.

3 does not follow from 1 and 2.

Some utility functions can be found by randomly putting agents into various universes and seeing what happens

Implicit utility functions can. Explicit utility functions need not be computable, in the sense that you can go around saying that you want to put rocks into piles of sizes corresponding to programs that never halt, but what you'll actually be doing is putting them into piles of sizes corresponding to programs that you think will never halt (either ones that provably don't, or possibly ones that pass some heuristic).
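A small sketch of that gap, with "programs" modeled as Python generators and a step budget standing in for "programs that you think will never halt"; everything here is illustrative:

def seems_to_never_halt(program, step_budget=10_000):
    # Heuristic: run the program (modeled as a generator) for at most
    # step_budget steps and guess "never halts" if it is still going.
    gen = program()
    for _ in range(step_budget):
        try:
            next(gen)
        except StopIteration:
            return False  # it halted within the budget
    return True  # out of patience: guess "never halts" (possibly wrongly)

def quick_program():
    yield from range(10)  # halts almost immediately

def looping_program():
    while True:  # genuinely never halts
        yield

def slow_program():
    yield from range(1_000_000)  # halts, but only after the budget runs out

print(seems_to_never_halt(quick_program))    # False
print(seems_to_never_halt(looping_program))  # True
print(seems_to_never_halt(slow_program))     # True -- and that guess is wrong

Sorting rocks (or anything else) by the output of seems_to_never_halt is computable; sorting them by whether the programs actually halt is not.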

OrphanWilde appears to be talking about morality, not decision theory. The moral Utility Function of utilitarianism is not necessarily the decision-theoretic utility function of any agent, unless you happen to have a morally perfect agent lying around, so your procedure would not work.

Since you can find a utility function by randomly putting the agent into various universes and seeing what happens, it's computable.

Empirically determinable and computable are not the same thing. For example, consider the hypothetical of the Halting problem encoded in the digits of the fine structure constant.
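A sketch of what that hypothetical would mean, with both functions left as deliberate stubs; nothing here is implementable, which is the point:

def nth_digit_of_fine_structure_constant(n):
    # In the hypothetical, a sufficiently precise physical measurement would
    # reveal this digit empirically; no algorithm is claimed to compute it.
    raise NotImplementedError("requires an experiment, not a computation")

def machine_n_halts(n):
    # Empirically determinable (via the measurement above) under the
    # hypothetical encoding, yet not computable by any program.
    return nth_digit_of_fine_structure_constant(n) == 1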