Thanks for the comment! Note that we use the state-action visitation distribution, so we consider trajectories that contain actions as well. This makes it possible to invert the map from policies to visitation distributions (as long as all states are visited). Using only state trajectories, it would indeed be impossible to recover the policy.
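As a minimal sketch of that inversion (hypothetical code, not from the paper: with every state visited, the policy is recovered by normalising the visitation distribution over actions, pi(a|s) = d(s,a) / sum_a' d(s,a')):

```python
# Recover a policy from a state-action visitation distribution d(s, a):
# pi(a | s) = d(s, a) / sum_a' d(s, a'), defined wherever state s is visited.
def recover_policy(d):
    """d: dict mapping (state, action) -> visitation probability."""
    state_mass = {}
    for (s, _a), p in d.items():
        state_mass[s] = state_mass.get(s, 0.0) + p
    # Fails exactly when some state has zero visitation mass -- the
    # "all states are visited" condition mentioned above.
    return {(s, a): p / state_mass[s] for (s, a), p in d.items()}

# Toy example: two states, two actions.
d = {("s0", "a0"): 0.1, ("s0", "a1"): 0.3,
     ("s1", "a0"): 0.2, ("s1", "a1"): 0.4}
pi = recover_policy(d)
assert abs(pi[("s0", "a0")] - 0.25) < 1e-12   # 0.1 / 0.4
assert abs(pi[("s1", "a1")] - 2 / 3) < 1e-12  # 0.4 / 0.6
```

With state-only trajectories the per-action terms d(s, a) are unavailable, which is exactly why the inversion breaks there.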


Yes, I agree that the politicisation is the central issue. But this is exactly why I wrote the first part - I feel that this section holds true despite it (I didn't claim that most people agree with the solution, only that the elites, experts, and the reader's social bubble do!).

So one question I'm trying to understand is: since politicisation happened to climate change, why do we think it won't happen to AI governance? I.e., the point is that pursuing goals by political means might just usually end up like that, because of the basic structure of political discourse (you get points for opposing the other side, etc.).


Hm, so one comment is that the proof in the post was not meant to convey the intuition for the existence of the concrete probability distribution - the measurability of the POWER inequality is a necessary first step, but not really technically related to the (potential) rest of the proof (although I had initially hoped that lifting some distribution on rewards by the Giry monad might produce something interesting).

As for why the additional structure might be helpful: the issue with there being no Lebesgue-like uniform measure is that in an infinite-dimensional space like ℝ^∞, no shift-invariant measure can assign a positive finite measure to every bounded subset. For example, in [0, 1], each of the two halves has to have equal measure, because the measure has to be shift-invariant. In [0, 1]^2, we can do this with each of the four squares like [0, 1/2]^2. Repeating this process, in the limit there is no positive measure we can assign to those small cubes, because a bounded set would then be divided into countably many non-negligible sets of equal measure (cf. the Wikipedia article on infinite-dimensional Lebesgue measure).
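The standard form of this obstruction can be written out as a short sketch (the separable-Banach-space setting is an assumption here, chosen because it is where the classical "no infinite-dimensional Lebesgue measure" theorem is usually stated):

```latex
% Sketch: no shift-invariant, locally finite, non-trivial Borel measure
% $\mu$ on a separable infinite-dimensional Banach space $X$.
\begin{enumerate}
  \item Suppose $\mu(B(0,r)) < \infty$ for some open ball $B(0,r)$.
  \item Since $\dim X = \infty$, Riesz's lemma yields a sequence $(x_n)$
        with $\|x_n\| \le r/2$ and $\|x_n - x_m\| > r/2$ for $n \ne m$,
        so the balls $B(x_n, r/4)$ are pairwise disjoint and all
        contained in $B(0,r)$.
  \item Shift-invariance gives $\mu\big(B(x_n, r/4)\big) = c$ for every $n$;
        countably many disjoint sets of measure $c$ fit inside a set of
        finite measure, hence $c = 0$.
  \item Separability covers $X$ by countably many balls of radius $r/4$,
        so $\mu(X) = 0$: the measure is trivial.
\end{enumerate}
```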

So the problem is that, first, the space is too big, and second, there is too much freedom in cutting the space into pieces and shifting them. The EPIC metric paper I linked to in the post (or some related research) might be helpful in solving both of these issues.

First, we can make the space smaller by quotienting it by some equivalence relation - reward shaping properties in MDPs provide such a relation (although EPIC considers the relation from the original paper by Ng et al., which is too weak - something stronger is needed). To give a concrete (although a bit silly) example: there is no uniform measure on the space of real-valued functions on [0, 1]. But suppose we have a (very strong) equivalence relation identifying functions that agree at a single point, say f ∼ g iff f(0) = g(0). Then the space collapses to just ℝ, which has a normal (Lebesgue) measure.
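A toy illustration of the quotient trick (hypothetical code; it assumes the evaluation-at-zero relation f ∼ g iff f(0) = g(0) as a concrete instance of a "very strong" relation):

```python
import random

# Quotient map: each real-valued function on [0, 1] is identified with
# its equivalence class under f ~ g iff f(0) == g(0).  The class is fully
# described by one real number, so the quotient space is just R.
def quotient(f):
    return f(0.0)

# Two very different functions land in the same class:
f = lambda x: 3.0 + x ** 2
g = lambda x: 3.0           # constant function
assert quotient(f) == quotient(g) == 3.0

# A uniform measure on the quotient now makes sense: e.g. sample an
# equivalence class uniformly from [0, 10) and return its canonical
# (constant) representative.
def sample_class(rng):
    c = rng.uniform(0.0, 10.0)   # a point of the quotient space R
    return lambda x, c=c: c      # canonical representative of the class

h = sample_class(random.Random(0))
assert quotient(h) == h(0.5)     # the class determines the representative
```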

The second problem is that we had too much freedom in shifting the subsets (or: the shift-invariance requirement was too strong). In our case, "shifting" is applied to sets of probability distributions over rewards. But individual rewards cannot always be shifted, since this operation doesn't preserve optimal policies. So maybe this puts some restrictions on the transformations we can apply to the space, and the measures don't blow up.
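To make the contrast concrete (a hypothetical sketch, not from the post: potential-based shaping in the sense of Ng et al. preserves the optimal policy, while an arbitrary shift of the reward function can flip it; the two-state MDP, where action a deterministically moves to state a, is made up for illustration):

```python
GAMMA = 0.9

def greedy_policy(r, n_iter=500):
    """Optimal policy of a 2-state MDP where action a moves to state a."""
    v = [0.0, 0.0]
    for _ in range(n_iter):  # value iteration
        v = [max(r[s][a] + GAMMA * v[a] for a in (0, 1)) for s in (0, 1)]
    return [max((0, 1), key=lambda a: r[s][a] + GAMMA * v[a]) for s in (0, 1)]

base = [[0.0, 1.0], [0.0, 2.0]]   # r[s][a]; moving to state 1 is optimal

# Potential-based shaping r'(s, a) = r(s, a) + GAMMA * phi(s') - phi(s),
# with next state s' = a here.  This never changes the optimal policy.
phi = [5.0, -3.0]
shaped = [[base[s][a] + GAMMA * phi[a] - phi[s] for a in (0, 1)]
          for s in (0, 1)]

# An arbitrary shift of the reward function, by contrast, can flip it:
# here we inflate the reward of action 0 only.
shifted = [[base[s][a] + (10.0 if a == 0 else 0.0) for a in (0, 1)]
           for s in (0, 1)]

assert greedy_policy(base) == greedy_policy(shaped) == [1, 1]
assert greedy_policy(shifted) == [0, 0]   # optimal policy changed
```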

So, briefly, I don't understand those behaviours very well yet, but my intuitive optimism comes from:

  • first, the fact that the space of rewards seems to be rich in structure, so if the space of distributions over rewards inherits some of its properties, the induced symmetries would limit the allowed transformations
  • second, there is another approach which I forgot to write about in the post, which is to consider non-shift-invariant uninformative priors - for example, the Jeffreys prior on the parameter of a Bernoulli distribution is not the uniform distribution. It seems that the problem we are dealing with here is quite common in math, and people have invented workarounds (like the abstract Wiener spaces mentioned in the Wikipedia article) - the issue is checking whether any of those workarounds applies here
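For the Jeffreys-prior point, a minimal numerical check (assuming the textbook Bernoulli case, where the Jeffreys prior is Beta(1/2, 1/2) rather than uniform):

```python
import math

# Jeffreys prior for a Bernoulli parameter p: proportional to
# sqrt(Fisher information) = sqrt(1 / (p * (1 - p))),
# i.e. the Beta(1/2, 1/2) density -- not the uniform distribution.
def jeffreys_density(p):
    # Beta(1/2, 1/2) density: 1 / (pi * sqrt(p * (1 - p)))
    return 1.0 / (math.pi * math.sqrt(p * (1.0 - p)))

# A uniform prior would be flat; Jeffreys piles mass near the endpoints.
assert jeffreys_density(0.5) < jeffreys_density(0.01)

# Sanity check: it still integrates to 1 (crude midpoint rule).
n = 100_000
total = sum(jeffreys_density((i + 0.5) / n) / n for i in range(n))
assert abs(total - 1.0) < 0.01
```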