
Skimmed it.

It would be helpful to define "stopping point" and "stopping distance".


Wrt local optima:

Deep neural nets were historically thought to suffer from bad local optima. Recently this view has been challenged; see, e.g., "The Loss Surfaces of Multilayer Networks" (http://arxiv.org/abs/1412.0233) and the references therein.

Although the issue remains unclear, I currently suspect that local optima are not a practical obstacle for an (omniscient) hill-climber in the real world.
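To make the worry concrete, here's a toy sketch (my own illustration, not from the post) of a greedy hill-climber stalling in a local optimum of a simple 1D objective; the loss-surface results cited above suggest this low-dimensional picture may be misleading for deep nets:

```python
import math

def objective(x):
    # A 1D objective with many local maxima: a quadratic trend plus ripples.
    return -x**2 + 2.0 * math.sin(5.0 * x)

def hill_climb(x, step=0.05, iters=1000):
    """Greedy hill-climbing: move to a neighbour only if it improves the objective."""
    for _ in range(iters):
        best = max((x - step, x + step), key=objective)
        if objective(best) <= objective(x):
            return x  # stuck: neither neighbouring move improves the objective
        x = best
    return x

# Starting at x = 3.0, the climber stalls at a nearby local maximum
# (around x ~ 2.7) instead of reaching the global one near x ~ 0.3.
print(hill_climb(3.0))
```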


I wasn't convinced by the overall claim about tiling (or not tiling). I think you should give more detailed arguments for why you do or don't expect these agents to tile, and explain the set-up a bit more. Are you imagining agents that take a single action, based on their current policy, to adopt a new policy, which is then not subject to further modification? If not, how can you ensure that agents don't modify their policy in such a way that policy_new encourages further modifications, which can then compound?
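Here's a toy sketch of the distinction I have in mind (my own illustration with hypothetical names like `propose`; policies are just integers here, not the post's formalism):

```python
def one_shot_modification(policy, propose):
    """Set-up 1: the agent takes a single action to adopt a new policy,
    which is then frozen and not subject to further modification."""
    return propose(policy)

def compounding_modification(policy, propose, max_steps=100):
    """Set-up 2: policy_new may itself endorse further modifications,
    so changes compound until a fixed point (if any) is reached."""
    for _ in range(max_steps):
        policy_new = propose(policy)
        if policy_new == policy:
            return policy  # fixed point: the policy endorses itself
        policy = policy_new
    return policy  # no fixed point found within the step budget

# Toy usage: each integer policy "endorses" the next one up to a ceiling,
# at which point it endorses itself (a fixed point).
propose = lambda p: min(p + 1, 5)
print(one_shot_modification(0, propose))     # 1: a single step, then frozen
print(compounding_modification(0, propose))  # 5: modifications compound
```

My second question is about ruling out the compounding dynamic: a safety argument for a single round of self-modification doesn't by itself establish tiling.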