"Global evaluation" isn't exactly what I'm trying to posit; more like a "things bottom-out in X currency" thing.
Like, in the toy model about $ from Atlas Shrugged, an heir who spends money foolishly eventually goes broke, and can no longer get others to follow their directions. This isn't because the whole economy gets together to evaluate their projects. It's because they spend their currency locally on things again and again, and the things they bet on do not pay off, do not give them new currency.
I think the analog happens in me/others: I'll get excited about some topic, pursue it for a while, get back nothing, and decide the generator of that excitement was boring after all.
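Here's a minimal sketch of the dynamic I mean, with made-up numbers and a made-up `pursue_topic` function, purely to illustrate the local-drain idea rather than any claim about real magnitudes:

```python
import random

def pursue_topic(payoff_prob, payoff_size, cost=1.0, starting_currency=10.0, max_rounds=100):
    """Toy simulation (illustrative parameters only): an agent repeatedly spends
    'currency' (money, excitement, willpower) on local bets about a topic.
    If the bets rarely pay off, the currency drains and pursuit stops,
    with no global evaluator ever weighing in."""
    currency = starting_currency
    for round_ in range(max_rounds):
        if currency < cost:
            return round_, currency  # went "broke" on this topic; excitement is gone
        currency -= cost  # spend locally on one more bet
        if random.random() < payoff_prob:
            currency += payoff_size  # the bet paid off and returned new currency
    return max_rounds, currency

# A generator of excitement that rarely pays back dies out on its own:
print(pursue_topic(payoff_prob=0.1, payoff_size=2.0))
# One that pays back often enough sustains (or grows) the currency:
print(pursue_topic(payoff_prob=0.5, payoff_size=3.0))
```

The point of the sketch is just that nothing global needs to get together and evaluate the topic; the currency drains (or doesn't) purely through repeated local bets.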
Hmm. Under your model, are there ways that parts gain/lose (steam/mindshare/something)?
Does it feel to you as though your epistemic habits / self-trust / intellectual freedom and autonomy / self-honesty take a hit here?
Fair point; I was assuming you had the capacity to lie/omit/deceive, and you're right that we often don't, at least not fully.
I still prefer my policy to the OP's, but I accept your argument that mine isn't a simple Pareto improvement.
Still:
I don't see an advantage to remaining agnostic, compared to:
1) Acquire all the private truth one can.
Plus:
2) Tell all the public truth one is willing to incur the costs of, with priority for telling public truths about what one would and wouldn't share (e.g. prioritizing not posing as more truth-telling than one is).
--
The reason I prefer this policy to the OP's "don't seek truth on low-import, highly politicized matters" is that I fear not-seeking-truth begets bad habits. I also fear I may misjudge how important things are if I allow politics to influence which topics-that-interest-my-brain I do or don't pursue, compared to my current policy of keeping some attentional budget for "anything that interests me, whether or not it seems useful/virtuous."
Yes, this is a good point, and it relates to why I claimed at the top that this is an oversimplified model. I appreciate you using logic from my stated premises; it helps keep things falsifiable.
It seems to me:
I wish the above were more coherent/model-y.
Thanks for asking. The toy model of “living money”, and the one about willpower/burnout, are meant to appeal to people who don’t necessarily grant Rand any credibility; I’m trying to have the models speak for themselves, so you probably *are* in my target audience. (I only mentioned Rand because it’s good to credit models’ originators when using their work.)
Re: what the payout is:
This model suggests what kind of thing an “ego with willpower” is, where it comes from, and how it keeps itself in existence:
I find this a useful model.
One way it’s useful:
IME, many people think they get willpower by magic (unrelated to their choices, surroundings, etc., although maybe related to sleep/food/physiology), and should use their willpower for whatever some abstract system tells them is virtuous.
I think this is a bad model (makes inaccurate predictions in areas that matter; leads people to have low capacity unnecessarily).
The model in the OP, by contrast, suggests that it’s good to take an interest in which actions produce something you can viscerally perceive as meaningful/rewarding/good, if you want to be able to motivate yourself to take actions.
(IME this model works better than trying to think solely in terms of physiology, and it is non-obvious to some of the people who come to me wondering what part of their machine is broken-or-something such that they are burnt out.)
(Though FWIW, IME physiology and other basic aspects of well-being also have important impacts, and food/sleep/exercise/sunlight/friends are also worth attending to.)
I mean, I see why a party would want their members to perceive the other party's candidate as having a blind spot. But I don't see why they'd typically be able to do this, given that the other party's candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded. It isn't enough to know there's an incentive in one direction; there's gotta be more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots vs. creating visible-lack-of-blindspots, or something. So, I'm somehow still not hearing a model that gives me this prediction.
You raise a good point that Susan’s relationship to Tusan and Vusan is part of what keeps her opinions stuck/stable.
But I’m hopeful that if Susan tries to “put primary focal attention on where the scissors comes from, and how it is working to trick Susan and Robert at once”, this’ll help with her stuckness re: Tusan and Vusan. Like, it’ll still be hard, but it’ll be less hard than “what if Robert is right” would be.
Reasons I’m hopeful:
I’m partly working from a toy model in which (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all used to be members of a common moral community, before it got scissored. And the norms and memories of that community haven’t faded all the way.
Also, in my model, Susan’s fear of Tusan’s and Vusan’s punishment isn’t mostly fear of e.g. losing her income or other material-world costs. It is mostly fear of not having a moral community she can be part of. Like, of there being nobody who upholds norms that make sense to her and sees her as a member-in-good-standing of that group of people-with-sensible-norms.
Contemplating the scissoring process… does risk her fellowship with Tusan and Vusan, and that is scary and costly for Susan.
But:
I’m not sure I’m thinking about this well, or explicating it well. But I feel there should be some unscissoring process?
Seems helpful for understanding how believing-ins get formed by groups, sometimes.