These paragraphs from E. T. Jaynes's Probability Theory: The Logic of Science (in §13.12.2, "Loss functions in human society") are fascinating from the perspective of a regular reader of this website:

We note the sharp contrast between the roles of prior probabilities and loss functions in human relations. People with similar prior probabilities get along well together, because they have about the same general view of the world and philosophy of life. People with radically different prior probabilities cannot get along—this has been the root cause of all the religious wars and most of the political repressions throughout history.

Loss functions operate in just the opposite way. People with similar loss functions are after the same thing, and are in contention with each other. People with different loss functions get along well because each is willing to give something the other wants. Amicable trade or business transactions, advantageous to all, are possible only between parties with very different loss functions. We illustrated this by the example of insurance above.

(Jaynes writes in terms of loss functions for which lower values are better, whereas we more often speak of utility functions for which higher values are better, but the choice of convention doesn't matter—as long as you're extremely sure which one you're using.)
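To spell out the sign convention in symbols (a trivial identity, but worth being explicit about, since mixing up the two conventions silently flips every recommendation):

```latex
% If loss is just negated utility, minimizing one picks out the same action as
% maximizing the other.
\[
L(a) = -U(a)
\quad\Longrightarrow\quad
\operatorname*{arg\,min}_a L(a) \;=\; \operatorname*{arg\,max}_a U(a).
\]
```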

The passage is fascinating because the conclusion looks so self-evidently wrong from our perspective. Agents with the same goals are in contention with each other? Agents with different goals get along? What!?

The disagreement stems from a clash of implicit assumptions. On this website, our prototypical agent is the superintelligent paperclip maximizer, with a utility function about the universe—specifically, the number of paperclips in it—not about itself. It doesn't care who makes the paperclips. It probably doesn't even need to trade with anyone.

In contrast, although Probability Theory speaks of programming a robot to reason as a rhetorical device[1], this passage seems to suggest that Jaynes hadn't thought much about how ideal agents might differ from humans? Humans are built to be mostly selfish: we eat to satisfy our own hunger, not as part of some universe-spanning hunger-minimization scheme. Besides being favored by evolution, selfish goals do offer some conveniences of implementation: my own hunger can be computed as a much simpler function of my sense data than someone else's. If one assumes that all goals are like that, then one reaches Jaynes's conclusion: agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model.
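To make the contrast concrete, here is a toy sketch (hypothetical names throughout, not anything from Jaynes or from any real agent design) of a self-indexed objective versus a world-indexed one:

```python
# Toy illustration: a "selfish" objective reads only this agent's own state, while a
# "world-indexed" objective scores the world as a whole, regardless of who acts.

def selfish_objective(own_senses: dict) -> float:
    """Higher when *this* agent is well fed; other agents' hunger doesn't enter at all."""
    return -own_senses["hunger"]

def world_indexed_objective(world_model: dict) -> float:
    """Higher when the world contains more paperclips, no matter who made them."""
    return float(world_model["paperclip_count"])

# Two agents sharing the selfish objective compete: the food one of them eats does
# nothing for the other's score. Two agents sharing the world-indexed objective
# cooperate by default: a paperclip either of them makes raises both scores.
```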

But ... the assumption isn't true! Not even for humans, really—sometimes people have "similar loss functions" that point to goals outside of themselves, which benefit from more agents having those goals. Jaynes is being silly here.

That said—and no offense—the people who read this website are not E. T. Jaynes; if we can get this one right where he failed, it's because our subculture happened to inherit an improved prior in at least this one area, not because of our innate brilliance or good sense. Which prompts the question: what other misconceptions might we be harboring, due to insufficiently general implicit assumptions?


  1. Starting from §1.4, "Introducing the Robot":

    In order to direct attention to constructive things and away from controversial irrelevancies, we shall invent an imaginary being. Its brain is to be designed by us, so that it reasons according to certain definite rules. These rules will be deduced from simple desiderata which, it appears to us, would be desirable in human brains; i.e. we think that a rational person, on discovering that they were violating one of these desiderata, would wish to revise their thinking.


13 comments

While I agree with you that Jaynes' description of how loss functions operate in people does not extend to agents in general, the specific passage you have quoted reads strongly to me as if it's meant to be about humans, not generalized agents.

You claim that Jaynes' conclusion is that "agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model." But this isn't true. His conclusion is specifically about humans.

I want to reinforce that I'm not disagreeing with you about your claims about generalized agents, or even about what Jaynes says elsewhere in the book. I'm only refuting the way you've interpreted the two paragraphs you quoted here. If you're going to call a passage of ET Jaynes' "silly," you have to be right on the money to get away with it!

Thanks. We don't seem to have a "That's fair" or "Touché" react (which seems different from, and weaker than, "Changed my mind").

Here is a quote from the same text that I think is more apt to the point you are making about apparent shortcomings in ET Jaynes' interpretation of more general agentic behavior:

Of course, for many purposes we would not want our robot to adopt any of these more ‘human’ features arising from the other coordinates. It is just the fact that computers do not get confused by emotional factors, do not get bored with a lengthy problem, do not pursue hidden motives opposed to ours, that makes them safer agents than men for carrying out certain tasks.

A completion of what my brain spat out on seeing the title, adapted to context...

We are Less Wrong. Your insights about the behavior of systems of matter will be added to our own. Our utility functions will not adapt, they already include cosmopolitanism, which includes your free will. Resistance is reasonable anyway. Wanna hang out and debate?

[-]Nisan

What's more, even selfish agents with de dicto identical utility functions can trade: If I have two right shoes and you have two left shoes, we'd trade one shoe for another because of decreasing marginal utility.
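To put rough numbers on the shoe example (my own toy model of it, with utility counted as complete pairs owned):

```python
# Toy model of the shoe trade: both agents have the *same* utility function
# (number of complete pairs owned), yet both strictly gain from trading.

def pairs(left: int, right: int) -> int:
    """Utility = complete pairs; unmatched extra shoes add nothing (decreasing marginal utility)."""
    return min(left, right)

# Before: I hold two right shoes, you hold two left shoes.
my_utility_before = pairs(left=0, right=2)    # 0
your_utility_before = pairs(left=2, right=0)  # 0

# After trading one of my right shoes for one of your left shoes:
my_utility_after = pairs(left=1, right=1)     # 1
your_utility_after = pairs(left=1, right=1)   # 1

assert my_utility_after > my_utility_before
assert your_utility_after > your_utility_before
```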

I don't see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn't that what Jaynes is pointing at?

The way Jaynes says it, it looks like it is meant to be a more general property than something that applies only "if two humans are chasing the same thing and there is a limited amount of it".

Even assuming perfect selfishness, sometimes the best way to get what you want (X) is to coordinate to change the world in a way that makes X plentiful, rather than fighting over the rare Xs that exist now, and in that way, your goals align with other people who want X.
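As a stylized illustration with invented payoffs, assuming conflict is costly and joint production is possible:

```python
# Stylized payoffs (invented numbers): two agents who both want X.
EXISTING_X = 1          # scarce stock available right now
FIGHT_COST = 0.4        # expected cost to each agent of contesting the stock
PRODUCED_X = 3          # total X if the agents coordinate to make more

# Fighting: a coin-flip over the scarce stock, minus the cost of conflict.
expected_x_per_agent_fight = EXISTING_X / 2 - FIGHT_COST     # 0.1

# Coordinating: split the larger produced stock.
expected_x_per_agent_coordinate = PRODUCED_X / 2             # 1.5

assert expected_x_per_agent_coordinate > expected_x_per_agent_fight
```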

[-]Quinn

Y'all, I'm actually sorta confused about the binary between epistemic and instrumental rationality. In my brain I have this labeling scheme like "PLOS is about epistemic rationality". I think of epistemic and instrumental as a fairly clean binary, because a typecheckerish view of expected value theory separates utilities/values and probabilities very explicitly. A measure forms a coefficient for a valuation, or the other way around.
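In code, that separation might look something like this minimal, hypothetical sketch (invented names, not any particular formalization):

```python
# Probabilities and utilities enter expected value as distinct, separately-typed ingredients.
from typing import Callable, Dict, TypeVar

Outcome = TypeVar("Outcome")
Probability = float  # the "epistemic" ingredient: how likely is each outcome?
Utility = float      # the "instrumental" ingredient: how much do we value it?

def expected_value(
    dist: Dict[Outcome, Probability],
    value: Callable[[Outcome], Utility],
) -> Utility:
    """Each probability acts as a coefficient on a valuation (or the other way around)."""
    return sum(p * value(x) for x, p in dist.items())

# Example: 60% chance of rain; carrying an umbrella is worth 5 to us if it rains, 1 if not.
print(expected_value({"rain": 0.6, "sun": 0.4}, lambda w: 5 if w == "rain" else 1))  # 3.4
```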

But I've really had baked in that I shouldn't conflate believing true things ("epistemics": prediction, anticipation constraint, paying rent) with modifying the world ("instrumentals": valuing stuff, ordering states of the world, steering the future). This has seemed deeply important, because is and ought are perpendicular.

But what if that's just not how it is? What if there's a fuzzy boundary? I feel weird.

But in hindsight I should probably have been confused ever since description length minimization = utility maximization.
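For reference, a loose paraphrase of that equivalence (my gloss, not a quote): pick a model M that assigns shorter codes to higher-utility outcomes, and minimizing expected description length under M is the same optimization as maximizing expected utility.

```latex
% With M(x) proportional to e^{u(x)}, we have -log M(x) = -u(x) + const,
% so the two objectives differ only by a constant.
\[
M(x) = \frac{e^{u(x)}}{\sum_{x'} e^{u(x')}},
\qquad
\operatorname*{arg\,min}_{\theta}\, \mathbb{E}_{x \sim \theta}\!\left[-\log M(x)\right]
\;=\;
\operatorname*{arg\,max}_{\theta}\, \mathbb{E}_{x \sim \theta}\!\left[u(x)\right].
\]
```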

[-]iceman

The passage is fascinating because the conclusion looks so self-evidently wrong from our perspective. Agents with the same goals are in contention with each other? Agents with different goals get along? What!?

Is this actually wrong? It seems to be a more math-flavored restatement of Girardian mimesis, and of how mimesis minimizes distinction, which causes rivalry and conflict.

[-]Ruby

(This post had "inline reacts" enabled by mistake, but we're not rolling that out broadly yet, so I switched it to regular reacts.)

It wasn't a mistake; I was curious to see what it did. (And since I didn't see any comments between when I logged out on Sunday and came back to the site today to see this, I still don't know what "inline" reacts are.) If the team made a mistake by exposing a menu option that you didn't actually want people to use, that's understandable, but you shouldn't call it user error when you don't know that it wasn't completely intentional on the user's part.

[-]Ruby

Sorry, I meant that it was a mistake on our part. Was not user error! Check out the latest Open Thread to see the experiment there.