flodorner


flodorner's Comments

On characterizing heavy-tailedness

Do you maybe have another example of action relevance? Nonfinite variance and finite support do not go well together.

In theory: does building the subagent have an "impact"?

So the general problem is that large changes in attainable utility (relative to the $\varnothing$ baseline) are not penalized?

Attainable utility has a subagent problem

"Not quite... " are you saying that the example is wrong, or that it is not general enough? I used a more specific example, as I found it easier to understand that way.

I am not sure I understand: In my mind, "commitments to balance out the original agent's attainable utility" essentially refers to the second agent being penalized by the first agent's penalty (although I agree that my statement is stronger). Regarding your text, my statement refers to "SA will just precommit to undermine or help A, depending on the circumstances, just sufficiently to keep the expected rewards the same."

My confusion is about why the second agent is only mildly constrained by this commitment. For example, weakening the first agent would come with a big penalty (or, more precisely, building another agent that is going to weaken it gives a large penalty to the original agent), unless it's reversible, right?

The bit about multiple subagents does not assume that more than one of them is actually built. Rather, it presents a scenario in which building intelligent subagents is automatically penalized. (Edit: under the assumption that building a lot of subagents is infeasible or takes a lot of time.)

Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1)

I found it a bit confusing that you first referred to selection and control as types of optimizers and then (seemingly?) replaced "selection" with "optimization" in the rest of the text.

When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

I was thinking about normalisation as linearly rescaling every reward to $[0,1]$ when I wrote the comment. Then, one can always look at the set of achievable (expected) reward vectors in $[0,1]^n$, which might make it easier to graphically think about how different beliefs lead to different policies. Different scales can then be translated to a certain reweighting of the beliefs (at least from the perspective of the optimal policy), as maximizing $\sum_i p_i c_i R_i$ is the same as maximizing $\sum_i \tilde{p}_i R_i$ with reweighted beliefs $\tilde{p}_i \propto p_i c_i$.
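
A minimal numeric sketch of that equivalence (the policies, reward values, beliefs $p_i$ and scales $c_i$ below are made up for illustration): folding the scales into the rewards, or folding them into renormalized beliefs, picks out the same optimal policy.

```python
import numpy as np

# Hypothetical setup: 3 candidate policies and 2 rewards normalized to [0, 1].
# Rows are policies, columns are the expected value of each reward.
R = np.array([[0.9, 0.2],
              [0.6, 0.6],
              [0.1, 0.95]])

p = np.array([0.5, 0.5])  # beliefs over which reward is the true one
c = np.array([1.0, 3.0])  # scales applied to the two rewards

# Option 1: keep the beliefs and rescale the rewards by c.
scores_scaled_rewards = R @ (p * c)

# Option 2: keep the rewards and fold the scales into reweighted beliefs.
p_tilde = p * c / np.sum(p * c)
scores_reweighted_beliefs = R @ p_tilde

# The two score vectors differ only by a positive constant factor,
# so the optimal policy (the argmax) is the same in both cases.
assert np.argmax(scores_scaled_rewards) == np.argmax(scores_reweighted_beliefs)
```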

When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

After looking at the update, my model is:

(Strictly) convex Pareto boundary: Extreme policies require strong beliefs. (Modulo some normalization of the rewards)

Concave (including linear) Pareto boundary: Extreme policies are favoured, even for moderate beliefs. (In this case, normalization only affects the "tipping point" in beliefs, where the opposite extreme policy is suddenly favoured).

In reality, we will often have concave and convex regions. The concave regions then cause more extreme policies for some beliefs, but the convex regions usually prevent the policy from completely focusing on a single objective.

From this lens, 1) maximum likelihood pushes us to one of the ends of the Pareto boundary, 2) an unlikely true reward pushes us close to the "bad" end, 3) difficult optimization messes with normalisation (I am still somewhat confused about the exact role of normalisation), and 4) not accounting for diminishing returns bends the Pareto boundary to become more concave.
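
A small numeric sketch of the convex/concave picture (the two boundary shapes and the belief grid below are made up for illustration): on an outward-bulging ("convex") boundary the optimal point slides gradually as the belief shifts and only becomes extreme for extreme beliefs, while on an inward-bulging ("concave") boundary the optimum sits at one extreme and jumps to the other at a tipping point.

```python
import numpy as np

# Hypothetical Pareto boundaries over two rewards R1, R2 normalized to [0, 1].
r1 = np.linspace(0.0, 1.0, 1001)
convex_r2 = np.sqrt(1.0 - r1 ** 2)   # boundary of a convex feasible set (bulges outwards)
concave_r2 = (1.0 - r1) ** 2         # inward-bulging boundary; the linear case behaves similarly

def best(p, r2):
    """Point on the boundary maximizing the belief-weighted expected reward."""
    i = np.argmax(p * r1 + (1.0 - p) * r2)
    return round(r1[i], 2), round(r2[i], 2)

for p in [0.3, 0.45, 0.55, 0.7]:  # belief that R1 is the true reward
    print(f"p={p:.2f}  convex -> {best(p, convex_r2)}  concave -> {best(p, concave_r2)}")

# Expected pattern: the convex boundary's optimum moves smoothly towards (1, 0)
# only as p approaches 1, while the concave boundary's optimum stays at (0, 1)
# for p < 0.5 and jumps to (1, 0) once p crosses the tipping point.
```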

When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

But no matter how I take the default outcome, your second example is always "more positive sum" than the first, because 0.5 + 0.7 + 2x < 1.5 - 0.1 + 2x.

Granted, you could construct examples where the inequality is reversed and the bad Goodhart case corresponds to "more negative sum", but this still seems to point to the sum-condition not being the central concept here. To me, it seems like "negative min" compared to the default outcome would be closer to the actual problem. This distinction matters, because negative min is a lot weaker than negative sum.

Or am I completely misunderstanding your examples or your point?
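
To make the distinction concrete, here is a sketch under my reading of the numbers above (this is my reconstruction, not from the post; the default value x is arbitrary and set to 0): the second outcome has the larger sum of gains but the smaller minimum gain, so a "negative min" condition flags it while a "negative sum" condition does not.

```python
# Sketch under my reading of the two examples (not from the post itself):
# outcome A gives the two rewards (0.5, 0.7), outcome B gives (1.5, -0.1),
# and the default outcome gives some value x to each reward.
x = 0.0
default = (x, x)
outcomes = {"A": (0.5, 0.7), "B": (1.5, -0.1)}

for name, outcome in outcomes.items():
    gains = [o - d for o, d in zip(outcome, default)]
    print(name, "sum of gains:", sum(gains), "min gain:", min(gains))

# B has the larger sum (1.4 vs 1.2) but the smaller minimum gain (-0.1 vs 0.5),
# so "negative min" flags B as problematic while "negative sum" does not.
```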


When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

To clear up some more confusion: The sum-condition is not what actually matters here, is it? In the first example of 5), the sum of utilities is lower than in the second one. The problem in the second example seems rather to be that the best states for one of the (Edit: expected) rewards are bad for the other?

That again seems like it would often follow from resource constraints.

When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

Right. I think my intuition about negative-sum interactions under resource constraints combined the zero-sum nature of resource spending with the (perceived) negative-sum nature of competition for resources. But for a unified agent there is no competition for resources, so the argument for resource constraints leading to negative-sum interactions is gone.

Thank you for alleviating my confusion.

When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors

My model goes something like this: If increasing values requires using some resource, gaining access to more of the resource can be positive sum, while spending it is negative sum due to opportunity costs. In this model, the economy can be positive sum because it helps with alleviating resource constraints.

But maybe it does not really matter if most interactions are positive-sum until some kind of resource limit is reached and negative-sum only after?
