Response to: Universal agents and utility functions

Related approaches: Hibbard (2012), Hay (2005)

Background

 

Here is the function implemented by finite-lifetime AI\xi:

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)},

where m is the number of steps in the agent's lifetime, k is the current step being computed, X is the set of possible observations, Y is the set of possible actions, and r is a function that extracts a reward value from an observation. A dot over a variable indicates that its value is known to be the true value of the action or observation in question, and an underline indicates that the variable is an input to a probability distribution. \xi is a function that returns the probability of a sequence of observations, given a known history and a sequence of actions, starting from the Solomonoff prior. More formally,

{\displaystyle \xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)=\left(\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}2^{-\ell\left(q\right)}\right)},

where Q is the set of all programs, \ell is a function that returns the length of a program in bits, and a program applied to a sequence of actions returns the resulting sequence of observations. Notice that the denominator is a constant, depending only on the already known \dot{y}\dot{x}_{<k}, and multiplying by a positive constant does not change the argmax, so we can pretend that the denominator doesn't exist. If q is a valid program, then any longer program with q as a prefix is not a valid program, so {\displaystyle \sum_{q\in Q}2^{-\ell\left(q\right)}\leq1}.
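To make the structure above concrete, here is a minimal Python sketch for toy finite X and Y, with the set of all programs replaced by a small, explicitly enumerated program class (true Solomonoff induction is incomputable). The representation of a program as a Python callable paired with a bit length, and all of the names and signatures, are illustrative assumptions, not part of the original formulation.

```python
# Toy stand-in for xi: the ratio of prior weight on programs consistent
# with the full action/observation sequence to the weight on programs
# consistent with the already-known history. `programs` is a finite list
# of (callable, length_in_bits) pairs, where each callable maps an action
# sequence to an observation sequence.
def make_xi(programs):
    def xi(history, acts, obs):
        known_acts, known_obs = history
        full_acts, full_obs = known_acts + acts, known_obs + obs
        num = sum(2.0 ** -length for q, length in programs
                  if q(full_acts) == full_obs)
        den = sum(2.0 ** -length for q, length in programs
                  if q(known_acts) == known_obs)
        return num / den if den else 0.0
    return xi

# The finite-lifetime expectimax: alternate a max over the next action and
# a sum over the next observation, and weight the total reward of each
# complete continuation by xi at the leaves.
def optimal_action(history, k, m, X, Y, r, xi):
    def value(acts, obs, step):
        if step > m:
            return sum(r(x) for x in obs) * xi(history, acts, obs)
        return max(sum(value(acts + [y], obs + [x], step + 1) for x in X)
                   for y in Y)
    return max(Y, key=lambda y: sum(value([y], [x], k + 1) for x in X))
```

For a toy run, one would supply small X and Y, a reward function r, and a handful of hand-written (program, length) pairs, then call optimal_action(([], []), 1, m, X, Y, r, make_xi(programs)).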

 

Problem

 

A problem with this is that it can only optimize over the input it receives, not over aspects of the external world that it cannot observe. Given the chance, AI\xi would hack its input channel so that it would only observe good things, instead of trying to make good things happen (in other words, it would wirehead itself). Anja specified a variant of AI\xi in which she replaced the sum of rewards with a single utility value and made the domain of the utility function be the entire sequence of actions and observations instead of a single observation, like so:

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}U\left(\dot{y}\dot{x}_{<k}yx_{k:m}\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)}.

This doesn't really solve the problem, because the utility function still takes as input only what the agent can see, rather than what is actually going on outside the agent. The situation is a little better because the utility function also takes the agent's actions into account, so it could punish actions that look like the agent is trying to wirehead itself. But if there were a flaw in the instructions not to wirehead, the agent would exploit it, so the incentive not to wirehead would have to be perfect, and this formulation is not very enlightening about how to achieve that.

[Edit: Hibbard (2012) also presents a solution to this problem. I haven't read all of it yet, but it appears to be fairly different from what I suggest in the next section.]

 

Solution

 

Here's what I suggest instead: everything that happens is determined by the program that the world is running on and the agent's actions, so the domain of the utility function should be Q\times Y^{m}. The apparent problem with that is that the formula for AI\xi does not mention any elements of Q. If we just take the original formula and replace r\left(x_{k}\right)+...+r\left(x_{m}\right) with U\left(q,\dot{y}_{<k}y_{k:m}\right), it wouldn't make any sense, since q is not bound to anything there. However, if we expand out \xi in the original formula (excluding the unnecessary denominator), we can move the sum of rewards inside the sum over programs, like this:

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)2^{-\ell\left(q\right)}}.

Now it is easy to replace the sum of rewards with the desired utility function.

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}U\left(q,\dot{y}_{<k}y_{k:m}\right)2^{-\ell\left(q\right)}}.

With this formulation, there is no danger of the agent wireheading, and all U has to do is compute everything that happens when the agent performs a given sequence of actions in a given program, and decide how desirable it is. If the range of U is unbounded, then this might not converge. Let's assume throughout this post that the range of U is bounded.
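Continuing the toy sketch from the background section (same illustrative assumptions about a finite, enumerated program class), the change amounts to replacing the leaf value of the expectimax recursion: instead of weighting a reward sum by \xi, sum U(q, actions) directly over the programs consistent with the whole interaction sequence, with the denominator dropped as before.

```python
# Leaf value for the utility-over-programs variant: the sum of
# U(q, actions) * 2^{-l(q)} over programs consistent with the whole
# interaction sequence. `U` takes a program and a complete action
# sequence; the names are illustrative.
def program_weighted_utility(programs, U, known_acts, known_obs, acts, obs):
    full_acts, full_obs = known_acts + acts, known_obs + obs
    return sum(U(q, full_acts) * 2.0 ** -length
               for q, length in programs
               if q(full_acts) == full_obs)
```

Using this in place of the reward-times-\xi term at the leaves of the earlier optimal_action sketch reproduces the formula above.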

[Edit: Hay (2005) presents a similar formulation to this.]

 

Extension to infinite lifetimes

 

The previous discussion assumed that the agent would only have the opportunity to perform a finite number of actions. The situation gets a little tricky when the agent is allowed to perform an unbounded number of actions. Hutter uses a finite look-ahead approach for AI\xi, where on each step k, it pretends that it will only be performing m_{k} actions, where \forall k\, m_{k}\gg k.

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m_{k}}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right)}.

If we make the same modification to the utility-based variant, we get

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\sum_{q\in Q:q\left(y_{\leq m_{k}}\right)=x_{\leq m_{k}}}U\left(q,\dot{y}_{<k}y_{k:m_{k}}\right)2^{-\ell\left(q\right)}}.

This is unsatisfactory because the domain of U was supposed to consist of all the information necessary to determine everything that happens, but here, it is missing all the actions after step m_{k}. One obvious thing to try is to set m_{k}:=\infty. This will be easier to do using a compacted expression for AI\xi:

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\max_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}\left(r\left(x_{k}^{pq}\right)+...+r\left(x_{m_{k}}^{pq}\right)\right)2^{-\ell\left(q\right)}},

where P is the set of policies that map sequences of observations to sequences of actions and x_{i}^{pq} is shorthand for the last observation in the sequence returned by q\left(p\left(\dot{x}_{<k}x_{k:i-1}^{pq}\right)\right). If we take this compacted formulation, modify it to accommodate the new utility function, set m_{k}:=\infty, and replace the maximum with a supremum (since there's an infinite number of possible policies), we get

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}},

where y_{i}^{pq} is shorthand for the last action in the sequence returned by p\left(q\left(\dot{y}_{<k}y_{k}y_{k+1:i-1}^{pq}\right)\right).
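The shorthands x_{i}^{pq} and y_{i}^{pq} just name the interaction sequence you get by running a policy against an environment program. Here is a small sketch of that rollout under the same toy assumptions as before (p maps observation prefixes to action prefixes, q maps action prefixes to observation prefixes); the infinite interaction is truncated to a finite number of rounds purely for illustration.

```python
# Generate the interaction sequence of policy p and environment program q,
# starting from a known observation history. Each round extends the action
# prefix via p and then the observation prefix via q, matching the
# y_i^{pq} / x_i^{pq} shorthand in the text.
def rollout(p, q, known_obs, rounds):
    acts, obs = [], list(known_obs)
    for _ in range(rounds):
        acts = p(obs)   # action prefix, one element longer than obs
        obs = q(acts)   # observation prefix of the same length as acts
    return acts, obs
```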

 

But there is a problem with this, which I will illustrate with a toy example. Suppose Y:=\left\{ a,b\right\}, and U\left(q,y_{1:\infty}\right)=0 when \forall k\in\mathbb{N}\, y_{k}=a, and for any n\in\mathbb{N}, U\left(q,y_{1:\infty}\right)=1-\frac{1}{n} when y_{n}=b and \forall k<n\, y_{k}=a. (U does not depend on the program q in this example.) An agent following the above formula would output a on every step: at each step k, outputting b yields exactly 1-\frac{1}{k}, while the supremum over continuations after outputting a is 1, so the argmax always picks a. It therefore ends up with a utility of 0, when it could have gotten arbitrarily close to 1 by eventually outputting b.

 

To avoid problems like that, we could assume the reasonable-seeming condition that if y_{1:\infty} is an action sequence and \left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty} is a sequence of action sequences that converges to y_{1:\infty} (by which I mean \forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}), then {\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}.
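To see that this condition is exactly what failed in the toy example above: the action sequences y_{1:\infty}^{n} defined by y_{n}^{n}=b and y_{k}^{n}=a for k\neq n converge to the all-a sequence, but

{\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=\lim_{n\rightarrow\infty}\left(1-\frac{1}{n}\right)=1\neq0=U\left(q,aaa...\right)},

so the assumption rules that example out.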

 

Under that assumption, the supremum is in fact a maximum, and the formula gives you an action sequence that will reach that maximum (proof below).

 

If you don't like the condition I imposed on U, you might not be satisfied by this. But without it, there is not necessarily a best policy. One thing you can do is, on step 1, pick some extremely small \varepsilon>0, pick any element from

{\displaystyle \left\{ p^{*}\in P:\sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}>\left(\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\varepsilon\right\} }, and then follow that policy for the rest of eternity, which will guarantee that you do not miss out on more than \varepsilon of expected utility.

 

Proof of criterion for supremum-chasing working

 

definition: If y_{1:\infty} is an action sequence and \left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty} is an infinite sequence of action sequences, and \forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}, then we say \left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty} converges to y_{1:\infty}. If p is a policy and \left\{ p_{n}\right\} _{n=1}^{\infty} is a sequence of policies, and \forall k\in\mathbb{N}\,\forall x_{<k}\in X^{k}\,\exists N\in\mathbb{N}\,\forall n>N\, p\left(x_{<k}\right)=p_{n}\left(x_{<k}\right), then we say \left\{ p_{n}\right\} _{n=1}^{\infty} converges to p.

 

assumption (for lemma 2 and theorem): If \left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty} converges to y_{1:\infty}, then {\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}.

 

lemma 1: The agent described by

{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}} follows a policy that is the limit of a sequence of policies \left\{ p_{n}\right\} _{n=1}^{\infty} such that

{\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}.

 

proof of lemma 1: Any policy can be completely described by the last action it outputs for every finite observation sequence. Observations are returned by a program, so the set of possible finite observation sequences is countable. It is possible to fix the last action returned on any particular finite observation sequence to be the argmax, and still get arbitrarily close to the supremum with suitable choices for the last action returned on the other finite observation sequences. By induction, it is possible to get arbitrarily close to the supremum while fixing the last action returned to be the argmax for any finite set of finite observation sequences. Thus, there exists a sequence of policies approaching the policy that the agent implements whose expected utilities approach the supremum.

 

lemma 2: If p is a policy and \left\{ p_{n}\right\} _{n=1}^{\infty} is a sequence of policies converging to p, then

{\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}=\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}}.

 

proof of lemma 2: Let \varepsilon>0. On any given sequence of inputs x_{1:\infty}\in X^{\infty}, \left\{ p_{n}\left(x_{1:\infty}\right)\right\} _{n=1}^{\infty} converges to p\left(x_{1:\infty}\right), so, by assumption, \forall q\in Q\,\exists N\in\mathbb{N}\,\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}.

For each N\in\mathbb{N}, let Q_{N}:=\left\{ q\in Q:\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}\right\} . The previous statement implies that {\displaystyle \bigcup_{N\in\mathbb{N}}Q_{N}=Q}, and each element of \left\{ Q_{N}\right\} _{N\in\mathbb{N}} is a subset of the next, so

{\displaystyle \exists N\in\mathbb{N}\,\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2\left(\sup U-\inf U\right)}}. The range of U is bounded, so \sup U and \inf U are defined. This also implies that the difference in expected utility, given any information, of any two policies, is bounded. More formally: {\displaystyle \forall Q'\subset Q\,\forall p',p''\in P\,\left|\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p'q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)-\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p''q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)\right|\leq\sup U-\inf U},

so in particular,

{\displaystyle \left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left(\sup U-\inf U\right)\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2}}.

{\displaystyle \left|\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left|\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|+\left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon}.

 

theorem:

{\displaystyle \sum_{\dot{q}\in Q}U\left(\dot{q},\dot{y}_{1:\infty}\right)2^{-\ell\left(\dot{q}\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}},

where {\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}.

 

proof of theorem: Let's call the policy implemented by the agent p^{*}.

By lemma 1, there is a sequence of policies \left\{ p_{n}\right\} _{n=1}^{\infty} converging to p^{*} such that

{\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}.

By lemma 2,

{\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}.


Comments

Thanks for putting in all this work!

I recommend you post this to the MAGIC list and ask for feedback. You'll also see a discussion of Anja's post there.

Anja:

I like how you specify utility directly over programs; it describes very neatly how someone who sat down and wrote a utility function would do it: first determine how the observation could have been computed by the environment, and then evaluate that situation. This is a special case of the framework I wrote down in the cited article; you can always set

{\displaystyle U\left(yx_{1:m_{k}}\right)=\sum_{q:q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}}U\left(q,y_{1:m_{k}}\right)}.

This solves wireheading only if we can specify which environments contain wireheaded (non-dualistic) agents, delusion boxes, etc.

True, the U(program, action sequence) framework can be implemented within the U(action/observation sequence) framework, although you forgot to multiply by 2^-l(q) when describing how. I also don't really like the finite look-ahead (until m_k) method, since it is dynamically inconsistent.

This solves wireheading only if we can specify which environments contain wireheaded (non-dualistic) agents, delusion boxes, etc.

Not sure what you mean by that.

Anja:

you forgot to multiply by 2^-l(q)

I think then you would count that twice, wouldn't you? Because my original formula already contains the Solomonoff probability...

Oh right. But you still want the probability weighting to be inside the sum, so you would actually need

{\displaystyle U\left(yx_{1:m_{k}}\right)=\frac{1}{\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right)}\sum_{q:q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}}U\left(q,y_{1:m_{k}}\right)2^{-\ell\left(q\right)}}.

Anja:

True. :)

Anja:

Let's stick with delusion boxes for now, because assuming that we can read off from the environment whether the agent has wireheaded breaks dualism. So even if we specify utility directly over environments, we still need to master the task of specifying which action/environment combinations contain delusion boxes to evaluate them correctly. It is still the same problem, just phrased differently.

If I understand you correctly, that sounds like a fairly straightforward problem for AIXI to solve. Some programs q_1 will mimic some other program q_2's communication with the agent while doing something else in the background, but AIXI considers the possibilities of both q_1 and q_2.

V_V:

Formulas aren't being rendered correctly; they appear as LaTeX code.

Looks like it's back up now.

That appears to be because the formulas are images hosted by codecogs.com, which is down.

This is great!

I really like your use of m_{k}:=\infty. That seems to be an important step along the way to eliminating the horizon problem. I recently read in Orseau and Ring's "Space-Time Embedded Intelligence" that in another paper, "Provably Bounded-Optimal Agents" by Russell and Subramanian, they define utility as u\left(h\left(\pi,q\right)\right), where h generates the interaction history of the policy \pi in the environment q. (I have yet to read this paper.)

Your counterexample of supremum chasing is really great; it breaks my model of how utility maximization is supposed to go. I'm honestly not sure whether one should chase the path of U = 0 or not. This is clouded by the fact that the probabilistic nature of things will probably push you off that path eventually.

The dilemma reminds me a lot of exploring versus exploiting. Sometimes it seems to me that the rational thing for a utility maximizer to do, almost independent of the utility function, would be to just maximize the resources it controlled, until it found some "end" or limit, and then spend all its resources creating whatever it wanted in the first place. In the framework above we've specified that there is no "end" time, and AIXI is dualist, so there are no worries of it getting destroyed.

There's something else that is strange to me. If we are considering infinite interaction histories, then we're looking at the entire binary tree at once. But this tree has uncountably many infinite paths! Almost all of the (infinite) paths are incomputable sequences. This means that any computable AI couldn't even consider traversing them. And it also seems to have interesting things to say about the utility function. Does it only need to be defined over computable sequences? What if we have utility over incomputable sequences? These could be defined by second-order logic statements, but remain incomputable. It gives me lots of questions.

I'm honestly not sure whether one should chase the path of U = 0 or not. This is clouded by the fact that the probabilistic nature of things will probably push you off that path eventually.

Making the assumption that there is a small probability that you will deviate from your current plan on each future move, and that these probabilities add up to a near-guarantee that you eventually will deviate, has a more complicated effect on your planning than just justifying chasing the supremum.

For instance, consider this modification to the toy example I gave earlier. Y:={a,b,c}, and if the first b comes before the first c, then the resulting utility is 1 - 1/n, where n is the index of the first b (all previous elements being a), as before. But we'll change it so that the utility of outputting an infinite stream of a is 1. If there is a c in your action sequence and it comes before the first b, then the utility you get is -1000. In this situation, supremum-chasing works just fine if you completely trust your future self: you output a every time, and get a utility of 1, the best you could possibly do. But if you think that there is a small risk that you could end up outputting c at some point, then eventually it will be worthwhile to output b, since the gain you could get from continuing to output a gets arbitrarily small compared to the loss from accidentally outputting c.
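To make that concrete under one simple model of the risk (an assumption of mine, not something stated in the comment): suppose that on each step there is an independent probability of accidentally outputting c. Then the plan "output a until step n, then output b" has the expected utility computed in the sketch below, which is maximized at a finite n rather than at "never output b".

```python
# Expected utility of the plan "output a until step n, then output b",
# assuming an independent probability `slip` of accidentally outputting c
# on each step (a toy model of the risk discussed above). You get 1 - 1/n
# if no slip happens during the first n - 1 steps, and -1000 otherwise;
# planning to output a forever gives -1000 in the limit, since a slip
# eventually happens with probability 1.
def plan_value(n, slip=1e-6):
    no_slip = (1 - slip) ** (n - 1)
    return no_slip * (1 - 1 / n) + (1 - no_slip) * (-1000)

for n in (1, 10, 32, 100, 1000, 100000):
    print(n, plan_value(n))   # peaks around n ~ 30 for slip = 1e-6
```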

There's something else that is strange to me. If we are considering infinite interaction histories, then we're looking at the entire binary tree at once. But this tree has uncountably many infinite paths! Almost all of the (infinite) paths are incomputable sequences. This means that any computable AI couldn't even consider traversing them. And it also seems to have interesting things to say about the utility function. Does it only need to be defined over computable sequences? What if we have utility over incomputable sequences? These could be defined by second-order logic statements, but remain incomputable. It gives me lots of questions.

I don't really have answers to these questions. One thing you could do is replace the set of all policies (P) with the set of all computable policies, so that the agent would never output an uncomputable action sequence [Edit: oops, not true. You could consider only computable policies, but then end up at an uncomputable policy anyway by chasing the supremum].

Making the assumption that...

Yeah, I was intentionally vague with "the probabilistic nature of things". I am also thinking about how any AI will have logical uncertainty, uncertainty about the precision of its observations, et cetera, so that as it considers further points in the future, its distribution becomes flatter. And having a non-dualist framework would introduce uncertainty about the agent's self, its utility function, its memory, ...

Anja:

I think there is something off with the formulas that use policies: if you already choose the policy p\left(x_{<k}\right)=y_{<k}y_{k}, then you cannot choose a y_{k} in the argmax.

Also, for the Solomonoff prior you must sum over all programs q with q\left(y_{1:m_{k}}\right)=x_{1:m_{k}}.

Could you maybe expand on the proof of Lemma 1 a little bit? I am not sure I get what you mean yet.

If you already choose the policy ... then you cannot choose a y_k in the argmax.

The argmax comes before choosing a policy. In the formula for \dot{y}_{k}, there is already a value for y_{k} before you consider all the policies such that p\left(x_{<k}\right)=y_{<k}y_{k}.

Also for the Solomonoff prior you must sum over all programs

Didn't I do that?

Could you maybe expand on the proof of Lemma 1 a little bit?

Look at any finite observation sequence. There exists some action you could output in response to that sequence that would allow you to get arbitrarily close to the supremum expected utility with suitable responses to the other finite observation sequences (for instance, you could get within 1/2 of the supremum). Now look at another finite observation sequence. There exists some action you could output in response to that, without changing your response to the previous finite observation sequence, such that you can get arbitrarily close to the supremum (within 1/4). Look at a third finite observation sequence. There exists some action you could output in response to that, without changing your responses to the previous 2, that would allow you to get within 1/8 of the supremum. And keep going in some fashion that will eventually consider every finite observation sequence. At each step n, you will be able to specify a policy that gets you within 2^-n of the supremum, and these policies converge to the policy that the agent actually implements.

I hope that helps. If you still don't know what I mean, could you describe where you're stuck?

Anja:

I get that now, thanks.

I wouldn't use N element of N, but rather n element of N or something. Just a small nitpick.

EDIT: I was viewing this on my laptop and didn't see that the second N is a bit fatter.

\mathbb{N} means the set of natural numbers.

Yes, the fat N. It's clear in a higher resolution.