Antoine de Scorraille

20

Oh. As I read those first lines, I thought, "Isn't it obvious?! How the hell did the author not notice that at, like, 5 years old?" I mean, it's such a common plot in fiction. And paradoxically, I thought I wasn't good at reading social dynamics. But maybe that's an exception: I have a really good track record at guessing werewolves in the Werewolf game. So maybe I'm just good at theory (very relevant to this game) but still bad at reading people.

The idea of applying it to wealth is interesting, though.

30

Excellent post! I think it's closely related to (but not reducible to) the general concept of Pressure, both as

- the physical definition (which is, indeed, a sum of many micro forces on a boundary surface, yet able to affect the whole object at the macroscale), and
- its metaphorical common use (social pressure, etc.): see the post.

10

It's not about bad intentions in most practical cases, but about biases. Hanlon's razor doesn't apply (or only very weakly) to systemic issues.

20

It's fixed ;)

20

Exercise 1:

The empty set is the only one. For any nonempty set X, you could pick as a counterexample:

Exercise 2:

The agent will choose an option which scores better than the threshold.

It's a generalization of satisficers: the latter are thresholders such that the set of chosen options is nonempty.
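A thresholder can be sketched as a map from a utility function to the set of options scoring above a threshold; the satisficer is then the special case where that set is assumed nonempty. A minimal Python sketch, with the names (`thresholder`, `t`) being my own choices, not the post's:

```python
def thresholder(options, u, t):
    """Return every option whose score reaches the threshold t.

    A thresholder makes no guarantee that the result is nonempty;
    a satisficer is exactly a thresholder whose result is assumed nonempty.
    """
    return {x for x in options if u(x) >= t}


scores = {"a": 1, "b": 4, "c": 7}
assert thresholder(scores, scores.get, 3) == {"b", "c"}
# With a threshold nothing reaches, the thresholder returns the empty set,
# which a satisficer (by assumption) never does:
assert thresholder(scores, scores.get, 10) == set()
```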

Exercise 3:

Exercise 4:

I have discovered a truly marvelous-but-infinite solution of this, which this finite comment is too narrow to contain.

Exercise 5:

The generalisable optimisers are the following:

i.e. argmin will choose the minimal points.

i.e. satisfice will choose an option which dominates some fixed anchor point. Note that since R is only equipped with a preorder, it might be a more selective optimiser (if the preorder is not total, it's harder to get an option better than the anchor). More interestingly, if there are options indifferent to the anchor (some x whose score is both ≤ and ≥ the anchor's), it could choose one of them rather than the anchor even though there is no gain in doing so. This degree of freedom might not be desirable.
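These two optimisers can be sketched in Python, representing the preorder on R as a relation `leq`. Divisibility is my own illustrative choice of a non-total order; the function names are also mine:

```python
def argmin(options, u, leq):
    """Minimal points of u under a preorder: x such that no option
    scores strictly below it (u(y) <= u(x) but not u(x) <= u(y))."""
    opts = list(options)
    return {x for x in opts
            if not any(leq(u(y), u(x)) and not leq(u(x), u(y)) for y in opts)}


def satisfice(options, u, anchor, leq):
    """Options whose score dominates a fixed anchor value. With a mere
    preorder, options indifferent to the anchor also pass."""
    return {x for x in options if leq(anchor, u(x))}


divides = lambda a, b: b % a == 0      # a non-total order on positive ints
assert argmin({2, 3, 4, 6}, lambda x: x, divides) == {2, 3}
# 3 is excluded: it is incomparable to the anchor 2, showing how a
# non-total preorder makes satisficing more selective.
assert satisfice({2, 3, 4, 6}, lambda x: x, 2, divides) == {2, 4, 6}
```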

Exercise 6:

Interesting problem.

First of all, is there a way to generalize the trick?

The first idea would be to try to find some equivalent of the detested element. For a context-dependent optimiser such as better-than-average, there isn't one.

A more general way to try to generalize the trick would be the following question:

For some fixed u and legal set, could we find some other u', agreeing with u on the legal set, such that the result of the optimisation is unchanged?

i.e. is there a general way to replace values outside of the legal set without modifying the result of the constrained optimisation?

Answer: no. *Counter-example: take some infinite set X with a finite nonempty legal set and a finite R.*

**So it seems there is no general trick here. But why bother?** We should refocus on what we mean by constrained optimisation in general, and it has nothing to do with looking for some u'. What we mean is that values outside the legal set are totally irrelevant.

How? For any of the current examples we have here, what we actually want is not to apply the optimiser on the whole of X, but only on the legal set.
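This "apply the optimiser on the legal set" move can be written as a combinator that restricts the domain before optimising, instead of tampering with scores. A sketch under my own naming (`restrict`, `better_than_average`):

```python
def better_than_average(options, u):
    """Context-dependent optimiser: pick the options scoring above the mean."""
    opts = list(options)
    avg = sum(u(x) for x in opts) / len(opts)
    return {x for x in opts if u(x) > avg}


def restrict(opt, legal):
    """Constrained optimisation as restriction: drop illegal options
    entirely rather than altering their scores."""
    return lambda options, u: opt(set(options) & legal, u)


u = {"a": 1, "b": 2, "c": 6}.get
assert better_than_average({"a", "b", "c"}, u) == {"c"}   # mean 3
# Restricted to {a, b}, the mean is 1.5 and the answer changes:
assert restrict(better_than_average, {"a", "b"})({"a", "b", "c"}, u) == {"b"}
```

Note that the restricted optimiser is a genuinely different function, which is exactly the typing problem raised below: the formalism has to let the "same" optimiser live on different domains.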

Problem: in the current formalism, an optimiser has a fixed type, so I don't see an obvious way to define the "same" optimiser on a different X. argmax and the others here are implicitly parametrized, so it's not that much of a problem, but we have to make this explicit. **This suggests looking for categories** (e.g. for argmax...).

20

Even with finite sets, it doesn't work, because looking for the u' "closest to u" is not what we're after.

Let {Alice, Bob, Charlie} be a class of students, scoring within some range, with scores given by u. Let the optimiser be the (uniform) better-than-average one, standing for the professor picking any student who scores better than the mean.

Let the legal set be {Alice, Bob} (the professor despises Charlie and ignores him totally).

If u(Alice) = 5 and u(Bob) = 4, their average is 4.5, so only Alice should be picked by the constrained optimisation.

However, with your proposal, you run into trouble with u' defined as u'(Alice) = u(Alice), u'(Bob) = u(Bob), and u'(Charlie) = 0.

The average value for this u' is (5 + 4 + 0)/3 = 3, and both Alice and Bob score better than 3. The size of the intersection with the legal set is then maximal, so your proposal suggests picking {Alice, Bob} as the result. But the actual result should be {Alice}, because Charlie's score is irrelevant to the constrained optimisation.
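For concreteness, the counterexample can be run directly; a sketch assuming a `better_than_average` helper of my own naming:

```python
def better_than_average(options, u):
    """Pick the options scoring strictly above the mean of the given options."""
    opts = list(options)
    avg = sum(u(x) for x in opts) / len(opts)
    return {x for x in opts if u(x) > avg}


# Correct constrained optimisation: restrict to the legal set, then optimise.
u = {"Alice": 5, "Bob": 4}.get
assert better_than_average({"Alice", "Bob"}, u) == {"Alice"}    # mean 4.5

# The u'-patch instead keeps Charlie around with a score of 0:
u_prime = {"Alice": 5, "Bob": 4, "Charlie": 0}.get
result = better_than_average({"Alice", "Bob", "Charlie"}, u_prime)
assert result == {"Alice", "Bob"}   # mean drops to 3 and Bob wrongly sneaks in
```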

50

Natural language is lossy because the communication channel is narrow, hence the need for lower-dimensional representations (see ML embeddings) of what we're trying to convey. Lossy representations are also what Abstractions are about.

But in practice, do you expect that Natural Abstractions (if discovered) cannot be expressed in natural language?

Thanks for these last three posts!

Just sharing some vibes I got from your.. framing!

- Minimalism ~ path ~ inside-focused ~ the signal/reward
- Maximalism ~ destination ~ outside-focused ~ the world

These two opposing aesthetics are a well-known confusing bit within agent-foundations-style research. The classical way to model an agent is to think of it as maximizing outside-world variables. Conversely, we can think about minimization ~ inside-focused (reward-hacking-type error) as a drug addict accomplishing "nothing".

Feels like there is also something to say about dopamine vs serotonin/homeostasis, even deontology vs consequentialism, and I guess these two clumsy clusters mirror each other in some way (they feel isomorphic via a sign reversal). I'll keep thinking about it.

As an aside: I'm French too, and was surprised that I'm supposed to yuck at the maximalist aesthetic, but indeed it's consistent with my reaction reading you on TypeScript, and also with my K-type brain.. Anecdotally, not with my love for spicy/rich foods ^^'