sen

Comments

What counts as defection?

Understood. I do think it's significant, though (and worth pointing out), that a much simpler definition yields all of the same interesting consequences. I wasn't disagreeing just for the sake of getting clearer terminology; I wanted to point out that there seems to be a simpler path to the same answers, and that the simpler path provides a new concept that seems quite useful.

What counts as defection?

This can turn into a very long discussion. I'm okay with that, but let me know if you're not, so I can probe only the points that are likely to resolve. I'll raise the contentious points regardless, but I don't want to draw focus to them if there's little motivation to discuss them in depth.

I agree that a split in terminology is warranted, and that "defect" and "cooperate" are poor choices. How about this:

  • Coalition members may form consensus on the coalition strategy. Members of a coalition may follow the consensus coalition strategy or violate the consensus coalition strategy.
  • Members of a coalition may benefit the coalition or hurt the coalition.
  • Benefiting the coalition means raising its payoff regardless of consensus. Hurting the coalition means reducing its payoff regardless of consensus. A coalition may form consensus on the coalition strategy regardless of the optimality of that strategy.
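
As a rough illustration of the two axes in the last two bullets (the names and numbers below are mine, purely illustrative): a member's action can follow or violate the consensus independently of whether it benefits or hurts the coalition.

```python
from dataclasses import dataclass

@dataclass
class ActionOutcome:
    follows_consensus: bool        # did the member follow the consensus coalition strategy?
    coalition_payoff_delta: float  # how the action changed the coalition's payoff

def classify(outcome: ActionOutcome) -> str:
    """Label an action along the two independent axes above."""
    compliance = "follows consensus" if outcome.follows_consensus else "violates consensus"
    if outcome.coalition_payoff_delta > 0:
        effect = "benefits coalition"
    elif outcome.coalition_payoff_delta < 0:
        effect = "hurts coalition"
    else:
        effect = "neutral for coalition"
    return f"{compliance}, {effect}"

# A member can violate consensus while still benefiting the coalition,
# e.g. a unilateral Pareto improvement the coalition never agreed to:
print(classify(ActionOutcome(follows_consensus=False, coalition_payoff_delta=2.0)))
# -> "violates consensus, benefits coalition"
```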

Contentious points:

  • I expect that treating utility this generally will lead to paradoxes, particularly when utility functions are defined in terms of other utility functions. That case seems extremely important, especially when strategies take trust into account, so I expect such a general notion of utility to produce paradoxes when it's used to reason about trust.
  • "Utility is not a resource." I think this is a useful distinction when trying to clarify goals, but not a useful distinction when trying to make decisions given a set of goals. In particular, once the payoff tables are defined for a game, the goals must already have been defined, and so utility can be treated as a resource in that game.

What counts as defection?

The "expected coalition strategy" is, let's say, "no one gets any". By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?

In my view, yes. If we agreed that no one should get any resources, then it's a violation for you to get resources or for you to deceive me into getting resources.

I think the difference is in how the two of us view a strategy. In my view, it's perfectly acceptable for the coalition strategy to include a clause like "it's okay to do X if it's a Pareto improvement for our coalition." If that's part of the coalition strategy we agree to, then Pareto improvements are never defections. If our coalition strategy does exclude unilateral actions that are Pareto improvements, then it is a defection to take such actions.

Another question: how does this idea differ from the core in cooperative game theory?

I'm not a mathematician or an economist, my knowledge on this hasn't been tested, and I just discovered the concept from your reply. Please read the following with a lot of skepticism because I don't know how correct it is.

Some type differences:

  • A core is a set of allocations. I'm going to call its elements core allocations so it's less confusing.
  • A defection is a change in strategy (per both of our definitions).

As far as the relationship between the two:

  • A core allocation satisfies a particular robustness property: it's stable under coalition refinements. A "coalition refinement" here is an operation in which a coalition is replaced by a partition of that coalition. Because core allocations are stable under coalition refinements, a coalition will not partition itself for rational reasons. So if you have coalitions {A, B} and {C}, then every core allocation is robust against {A, B} splitting up into {A}, {B}.
  • Defections (per my definition) don't deal strictly with coalition refinements. If one member leaves a coalition to join another, that's still a defection. In this scenario, {A, B}, {C} is replaced with {A}, {B, C}. Core allocations don't deal with this scenario since {A}, {B, C} is not a refinement of {A, B}, {C}. As a result, core allocations are not necessarily robust to defections.

I could be wrong about core allocations being concerned only with refinements. I think I'm safe in saying, though, that core allocations are robust against some (maybe all) defections.
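
For readers who, like me, just encountered the concept, here is a rough sketch of the standard core-membership check. The function name, the characteristic-function values, and the example numbers are all made up for illustration, and the sketch reflects my possibly-shaky understanding:

```python
from itertools import combinations

def in_core(allocation: dict, v: dict) -> bool:
    """Check whether an allocation is in the core of a cooperative game.

    allocation: payoff assigned to each player, e.g. {"A": 2, "B": 2, "C": 1}.
    v: characteristic function mapping each coalition (a frozenset of players)
       to the payoff that coalition can secure on its own.
    An allocation is in the core if it distributes exactly what the grand
    coalition can secure and no sub-coalition can do better by splitting off.
    """
    players = frozenset(allocation)
    if sum(allocation.values()) != v[players]:  # efficiency: distribute the grand coalition's value exactly
        return False
    for size in range(1, len(players)):
        for coalition in map(frozenset, combinations(players, size)):
            if sum(allocation[p] for p in coalition) < v.get(coalition, 0):
                return False  # this coalition could secure more by splitting off
    return True

# Hypothetical three-player game: any pair can secure 3 on its own; all three together secure 5.
v = {frozenset("ABC"): 5, frozenset("AB"): 3, frozenset("AC"): 3, frozenset("BC"): 3}
print(in_core({"A": 2, "B": 2, "C": 1}, v))  # True: every pair already receives at least 3
```

Note that the blocking condition only considers sub-coalitions splitting off, which is the "refinement" robustness described above; it says nothing about a member leaving to join a different coalition.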

What counts as defection?

I think your focus on payoffs is diluting your point. In all of your scenarios, the thing enabling a defection is the inability to view another player's strategy before committing to a strategy. Perhaps you can simplify your definition to the following:

  • "A defect is when someone (or some sub-coalition) benefits from violating their expected coalition strategy."

You can define a function that assigns a strategy to every possible coalition. Given an expected coalition strategy C, if any sub-coalition SC can obtain a greater payoff from its own strategy than it receives under C, then SC is incentivized to defect. (Whether that means SC joins a different coalition or forms its own is irrelevant.)
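
Here is a minimal sketch of that check; the payoff numbers and function name are invented for illustration. Given the payoff each sub-coalition receives under the agreed strategy and the best payoff it could secure by deviating, it flags the sub-coalitions with an incentive to defect. Restricting the candidate sub-coalitions to single players recovers the usual unilateral-deviation check behind Nash equilibria.

```python
def defection_incentives(payoff_under_consensus: dict, best_deviation_payoff: dict) -> list:
    """Return the sub-coalitions that gain by violating the expected coalition strategy C.

    payoff_under_consensus: payoff each sub-coalition receives when everyone
        follows the expected coalition strategy C.
    best_deviation_payoff: the best payoff each sub-coalition can secure by
        playing some other strategy (joining another coalition or going it alone).
    """
    return [
        sub
        for sub, consensus_payoff in payoff_under_consensus.items()
        if best_deviation_payoff.get(sub, consensus_payoff) > consensus_payoff
    ]

# Hypothetical numbers: the pair ("A", "B") gains by abandoning the agreed strategy,
# so it is incentivized to defect; neither singleton is.
consensus = {("A",): 1.0, ("B",): 1.0, ("A", "B"): 2.0}
deviation = {("A",): 0.5, ("B",): 1.0, ("A", "B"): 3.0}
print(defection_incentives(consensus, deviation))  # [('A', 'B')]
```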

This makes a few things clear that are hidden in your formalization. Specifically:

  • The main difference between this framing and the framing for Nash Equilibrium is the notion of an expected coalition strategy. Where there is an expected coalition strategy, one should aim to follow a "defection-proof" strategy. Where there is no expected coalition strategy, one should aim to follow a Nash Equilibrium strategy.
  • Your Proposition 3 is false. You would need a variant that takes coalitions into account.

I believe all of your other theorems and propositions follow from the definition as well.

This framing has other benefits:

  • It factors the payoff table into two tables that are easier to understand: coalition selection and coalition strategy selection.
  • It's better aligned with intuition. Defection in the colloquial sense is when someone deserts "their" group (i.e., joins a new coalition in violation of the expectation). Coalition selection encodes that notion cleanly, and the payoff tables for coalitions cleanly encode the more generalized notion of "rational action" in scenarios where such defection is possible.

Lurking More Before Joining Complex Conversations

"That's a good point, but I think you're behind on some of the context on what we were discussing. Can you try to get more of a feel for the conversation before joining it?"

  • It gives the person an understandable reason for their misstep. ("Sorry, I must have misunderstood what you were talking about.")
  • It gives the person a reason to stick around. ("I messed up. If I want to correct this, I need to listen and get more context.")
  • It adjusts the person's behavior in future interactions. ("I should get a feel for the conversation before joining it to avoid messing up in the future.")
  • It's no more aggressive than it needs to be to allow you to disregard what the person said and continue your conversation.
  • What little aggression is there is reasoned and grounded in something easily understood, so it doesn't come off as rude.

The above sounds better in text than it would out loud, but the same principles should apply in an actual conversation: "Name, hold on. I think you're missing some context. Can you listen for a few minutes to catch up?"

Epistemic Laws of Motion

I don't see how your comment contradicts the part you quoted. More pressure doesn't lead to more change (in strategy) if resistance increases as well. That's consistent with what /u/SquirrelInHell stated.

Epistemic Laws of Motion

That mass corresponds to "resistance to change" seems fairly natural, as does the correspondence between "pressure to change" and impulse. The strange part seems to be the correspondence between "strategy" and velocity. Distance would be something like strategy * time.
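
Spelling those correspondences out as equations (this is just the analogy above restated; the symbol dictionary is mine, not the original post's):

```latex
% Assumed dictionary: m = resistance to change, v = strategy, J = pressure to change
J = \Delta p = m\,\Delta v
\quad\Longrightarrow\quad
\text{pressure to change} = \text{resistance} \times \Delta(\text{strategy})

x = \int v\,\mathrm{d}t
\quad\Longrightarrow\quad
\text{distance} = \int \text{strategy}\,\mathrm{d}t \approx \text{strategy} \times \text{time}
```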

Does a symmetry in time correspond to a conservation of energy? Is energy supposed to correspond to resistance? Maybe, though that's hard to interpret, which makes it difficult to apply Lagrangian or Hamiltonian mechanics. The interpretation of energy is important: without it, the interpretation of time is incomplete and possibly incoherent.

Is there an inverse correspondence between optimal certainty in resistance * strategy (momentum) and optimal certainty in strategy * time (distance)? I guess so, in which case findings from quantum uncertainty principles and information geometry may apply.

Does strategy impact one's perception of "distances" (strategy * time) and timescales? Maybe, so maybe findings from special relativity would apply. A universally-observable distance isn't defined though, and that precludes a more coherent application of special/general relativity. Some universal observables should be stated. Other than the obvious objectivity benefits, this could help more clearly define relationships between variables of different dimensions. This one isn't that important, but it would enable much more interesting uses of the theory.

The Unreasonable Effectiveness of Certain Questions

The process you went through is known in other contexts as decategorification. You attempted to reduce the level of abstraction, noticed a potential problem in doing so, and concluded that the more abstract notion was not as well-conceived as you imagined.

If you try to enumerate questions related to a topic (Evil), you will quickly find that you (1) repeatedly tread the same ground, (2) are often unable to combine findings from multiple questions in useful ways, and (3) are often unable to identify the questions worth answering, let alone a hierarchy that suggests which questions are more worth answering than others.

What you are trying to identify are the properties and structure of evil. A property of Evil is a thing that must be preserved in order for Evil to be Evil. The structure of Evil is the relationship between Evil and other (Evil or non-Evil) entities.

You should start by trying to identify the shape of Evil by identifying its border, where things transition from Evil to non-Evil and vice versa. This will give you an indication of which properties are important. From there, you can start looking at how Evil relates to other things, especially with regard to its properties. This will give you some indication of its structure. Properties are important for identifying Evil clearly. Structure is important for identifying things that are equivalent to Evil in all the ways that matter. It is often the case that the two are not the same.

If you want to understand this better, I recommend looking into category theory. The general process of identifying ambiguities, characterizing problems in the right way, applying prior knowledge, and gluing together findings into a coherent whole is fairly well-worn. You don't have to start from scratch.

We need a better theory of happiness and suffering

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, ..."

I don't see how the existence of subagents complicates things in any substantial way. If the existence of competing subagents is a hindrance to optimality, then one should aim to align or eliminate subagents. (Isn't this one of the functions of meditation?) Obviously this isn't always easy, but the goal is at least clear in this case.

It is nonsensical to treat animal welfare as a special case of happiness and suffering. This is because animal happiness and suffering can only be understood through analogical reasoning, not through logical reasoning. A logical framework of welfare can only be derived from subjects capable of conveying results, since the results are subjective. The vast majority of animals, at least so far, cannot convey results, so we need to infer results for animals based on similarities between animal observables and human observables. Such inference is analogical and necessarily based entirely on human welfare.

If you want a theory of happiness and suffering in the intellectual sense (where physical pleasure and suffering are ignored), I suspect what you want is a theory of the ideals towards which people strive. For such an endeavor, I recommend looking into category theory, in which ideals are easily recognizable, and whose ideals seem to very closely (if not perfectly) align with intuitive notions.

Idea for LessWrong: Video Tutoring

I meant it as "This seems like a clear starting point." You're correct that I think it's easy to not get lost with those two starting points.

In my experience with other fields, it's easy to get frustrated and give up; getting lost is quite a bit rarer. You'll have to click through a hundred dense links to understand your first paper in machine learning, as with any other field. If you can trudge through that, you'll be fine. If you can't, you'll at least know what to ask.

Also, are you not curious about how much initiative people have regarding the topics they want to learn?
