
Seems helpful for understanding how believing-ins get formed by groups, sometimes.

"Global evaluation" isn't exactly what I'm trying to posit; more like a "things bottom-out in X currency" thing.

Like, in the toy model about $ from Atlas Shrugged, an heir who spends money foolishly eventually goes broke, and can no longer get others to follow their directions.  This isn't because the whole economy gets together to evaluate their projects.  It's because they spend their currency locally on things again and again, and the things they bet on do not pay off, do not give them new currency.

I think the analog happens in me/others: I'll get excited about some topic, pursue it for a while, get back nothing, and decide the generator of that excitement was boring after all.

Hmm.  Under your model, are there ways that parts gain/lose (steam/mindshare/something)?

Does it feel to you as though your epistemic habits / self-trust / intellectual freedom and autonomy / self-honesty take a hit here?

Fair point; I was assuming you had the capacity to lie/omit/deceive, and you're right that we often don't, at least not fully.

I still prefer my policy to the OP's, but I accept your argument that mine isn't a simple Pareto improvement.

Still:

  • I really don't like letting social forces put "don't think about X" flinches into my or my friends' heads; and the OP's policy seems to me like an instance of that;
  • Much less importantly: as an intelligent/self-reflective adult, you may be better at hiding info if you know what you're hiding, compared to if you have guesses you're not letting yourself see, which your friends might still notice.  (The "don't look into dragons" path often still involves hiding info, since often your brain takes a guess anyhow, and that's part of how you know not to look into this one.  If you acknowledge the whole situation, you can manage your relationships consciously, including taking conscious steps to buy openness-offsets and to stay freely and transparently friends where you can scheme out how.)

I don't see the advantage of remaining agnostic, compared to:

1) Acquire all the private truth one can.

Plus:

2) Tell all the public truth one is willing to incur the costs of, prioritizing public truths about what one would and wouldn't share (e.g. taking care not to pose as more truth-telling than one is).

--

The reason I prefer this policy to the OP's "don't seek truth on low-import, highly politicized matters" is that I fear not-seeking-truth begets bad habits.  I also fear I may end up with a distorted sense of how important things are if I allow politics to influence which topics-that-interest-my-brain I do or don't pursue, compared to my current policy of keeping some attentional budget for "anything that interests me, whether or not it seems useful/virtuous."

Yes, this is a good point; it relates to why I claimed at the top that this is an oversimplified model.  I appreciate you reasoning from my stated premises; it helps keep things falsifiable.

It seems to me:

  • Somehow people who are in good physical health wake up each day with a certain amount of restored willpower.  (This is inconsistent with the toy model in the OP, but is still my real / more-complicated model.)
  • Noticing spontaneously-interesting things can be done without willpower; but carefully noticing superficially-boring details and taking notes in hopes of later payoff indeed requires willpower, on my model.  (Though, for me, less than e.g. going jogging requires.)
  • If you’ve just been defeated by a force you weren’t tracking, that force often becomes spontaneously-interesting.  Thus people who are burnt out can sometimes take a spontaneous interest in how willpower/burnout/visceral motivation works, and can enjoy “learning humbly” from these things. 
  • There’s a way burnout can help cut through ~dumb/dissociated/overconfident ideological frameworks (e.g. “only AI risk is interesting/relevant to anything”), and make space for other information to have attention again, and make it possible to learn things not in one's model.  Sort of like removing a monopoly business from a given sector, so that other thingies have a shot again.

I wish the above were more coherent/model-y.

Thanks for asking.  The toy model of “living money”, and the one about willpower/burnout, are meant to appeal to people who don’t necessarily find Rand credible; I’m trying to have the models speak for themselves, so you probably *are* in my target audience.  (I only mentioned Rand because it’s good to credit models’ originators when using their work.)

Re: what the payout is:

This model suggests what kind of thing an “ego with willpower” is — where it comes from, how it stays in existence:

  • By way of analogy: a squirrel is a being who turns acorns into poop, in such a way as to be able to do more and more acorn-harvesting (via using the first acorns’ energy to accumulate fat reserves and knowledge of where acorns are located).
  • An “ego with willpower”, on this model, is a ~being who turns “reputation with one’s visceral processes” into actions, in such a way as to be able to garner more and more “reputation with one’s visceral processes” over time.  (Via learning how to nourish viscera, and making many good predictions.)

I find this a useful model.

One way it’s useful:

IME, many people think they get willpower by magic (unrelated to their choices, surroundings, etc., though maybe related to sleep/food/physiology), and that they should spend their willpower on whatever some abstract system tells them is virtuous.

I think this is a bad model (makes inaccurate predictions in areas that matter; leads people to have low capacity unnecessarily).

The model in the OP, by contrast, suggests that it’s good to take an interest in which actions produce something you can viscerally perceive as meaningful/rewarding/good, if you want to be able to motivate yourself to take actions.

(IME this model works better than trying to think solely in terms of physiology, and is non-obvious to some of the people who come to me wondering what part of their machine is broken-or-something such that they are burnt out.)

(Though FWIW, IME physiology and other basic aspects of well-being also have important impacts, and food/sleep/exercise/sunlight/friends are also worth attending to.)

I mean, I see why a party would want their members to perceive the other party's candidate as having a blind spot.  But I don't see why they'd typically be able to manage this, given that the other party's candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded.  It isn't enough to know there's an incentive in one direction; there's gotta be something more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots than creating visible-lack-of-blindspots, or something.  So I'm somehow still not hearing a model that gives me this prediction.

You raise a good point that Susan’s relationship to Tusan and Vusan is part of what keeps her opinions stuck/stable.

But I’m hopeful that if Susan tries to “put primary focal attention on where the scissors comes from, and how it is working to trick Susan and Robert at once”, this’ll help with her stuckness re: Tusan and Vusan.  Like, it’ll still be hard, but it’ll be less hard than “what if Robert is right” would be.

Reasons I’m hopeful:

I’m partly working from a toy model in which (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all used to be members of a common moral community, before it got scissored.  And the norms and memories of that community haven’t faded all the way.

Also, in my model, Susan’s fear of Tusan’s and Vusan’s punishment isn’t mostly fear of e.g. losing her income or other material-world costs.  It is mostly fear of not having a moral community she can be part of.  Like, of there being nobody who upholds norms that make sense to her and sees her as a member-in-good-standing of that group of people-with-sensible-norms.

Contemplating the scissoring process… does risk her fellowship with Tusan and Vusan, and that is scary and costly for Susan.

But:

  • a) Tusan and Vusan are not *as* threatened by it as they would be if Susan had, e.g., been considering more directly whether Candidate X was good.  I think.
  • b) Susan is at least partially compensated for her partial-risk-of-losing-Tusan-and-Vusan by the hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of re-accessing here.
  • b2) Tusan and Vusan are maybe also a bit tempted by this, which on their simpler models (since they’re engaging with Susan’s thoughts only very loosely / from a distance, as they complain about Susan) renders as “maybe she can change some of the Candidate X supporters, since she’s discussing how they got tricked”.
  • c) There are maybe some remnant-norms within the larger (pre-scissored) community that can appreciate/welcome Susan and her efforts.

I’m not sure I’m thinking about this well, or explicating it well.  But I feel there should be some unscissoring process?
