Can you explain the no-loss competition idea further?
Thanks, I read that, and while I wouldn't say I'm completely enlightened, I feel like I have a good basis for reading it a few more times until it sinks in.
I interpret you as saying in this post: there is no fundamental difference between base and noble motivations, they're just two different kinds of plans we can come up with and evaluate, and we resolve conflicts between them by trying to find frames in which one or the other seems better. Noble motivations seem to "require more willpower" only because we often spend more time working on coming up with positive frames for them, because this activity flatters our ego and so is inherently rewarding.
I'm still not sure I agree with this. My own base motivation here is that I posted a somewhat different model of willpower at https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower , which is similar to yours except that it does keep a role for the difference between "base" and "noble" urges. I'm trying to figure out if I still want to defend it against this one, but my thoughts are something like:
- It feels like on stimulants, I have more "willpower": it's easy to take the "noble" choice when it might otherwise be hard. Likewise, when I'm drunk I have less ability to override base motivations with noble ones, and (although I guess I can't prove it) this doesn't seem like a purely cognitive effect where it's harder for me to "remember" the important benefits of my noble motivations. The same is true of various low-energy states, eg tired, sick, stressed - I'm less likely to choose the noble motivation in all of them. This suggests to me that baser and nobler motivations are coming from different places, and stimulants strengthen (in your model) the connection between the noble-motivation-place and the striatum relative to the connection between the base-motivation-place and the striatum, and alcohol/stress/etc weaken it.
- I'm skeptical of your explanation for the "asymmetry" of noble vs. base thoughts. Are thoughts about why I should stay home really less rewarding than thoughts about why I should go to the gym? I'm imagining the opposite - I imagine staying home in my nice warm bed, and this is a very pleasant thought, and accords with what I currently really want (to not go to the gym). On the other hand, thoughts about why I should go to the gym, if I were to verbalize them, would sound like "Ugh, I guess I have to consider the fact that I'll be a fat slob if I don't go, even though I wish I could just never have to think about that".
- Base thoughts seem like literally animalistic desires - hunger seems basically built on top of the same kind of hunger a lizard or nematode feels. We know there are a bunch of brain areas in the hypothalamus etc that control hunger. So why shouldn't this be ontologically different from nobler motivations that are different from lizards'? It seems perfectly sensible that eg stimulants strengthen something about the neocortex relative to whatever part of the hypothalamus is involved in hunger. I guess I'm realizing now how little I understand about hunger - surely the plan to eat must originate in the cortex like every other plan, but it sure feels like it's tied into the hypothalamus in some really important way. I guess maybe hunger could have a plan-generator exactly like every other, which is modulated by hypothalamic connections? It still seems like "plans that need outside justification" vs. "plans that the hypothalamus will just keep active even if they're stupid" is a potentially important dichotomy.
- Base motivations also seem like things which have a more concrete connection to reinforcement learning. There's a really short reinforcement loop between "want to eat candy" and "wow, that was reinforcing", and a really long (sometimes nonexistent) loop between going to the gym and anything good happening. Again, this makes me suspicious that the base motivations are "encoded" in some way that's different from the nobler motivations and which explains why different substances can preferentially reinforce one relative to the other.
- The reasons for thinking of base motivations as more like priors, discussed in that post.
- Kind of a dumb objection, but this feels analogous to other problems where conscious/intellectual knowledge fails to percolate to emotional centers of the brain, for example someone who knows planes are very safe but is scared of flying anyway. I'm not sure how to use your theory here to account for this situation, whereas if I had a theory that explained the plane phobia problem I feel like it would have to involve a concept of lower-level vs. higher-level systems that would be easy to plug into this problem.
- Another dumb anecdotal objection, but this isn't how I consciously experience weakness of will. The example that comes to mind most easily is wanting to scratch an itch while meditating, even though I'm supposed to stay completely still. When I imagine my thought process while worrying about this, it doesn't feel like trying to think up new reframings of the plan. It feels like some sensory region of the brain saying "HEY! ITCH! YOU SHOULD SCRATCH IT!" and my conscious brain trying to exert some effort to overcome that. The effort doesn't feel like thinking of new framings, and the need for the effort persists long after every plausible new framing has been thought of. And it does seem relevant that "scratch itch" has no logical justification (it's just a basic animal urge that would persist even if someone told you there was no biological cause of the itch and no way that not scratching it could hurt you), whereas wanting to meditate well has a long chain of logical explanations.
Can you link to an explanation of why you're thinking of the brainstem as plan-evaluator? I always thought it was the basal ganglia.
Mental hospitals of the type I worked at when writing that post only keep patients for a few days, maybe a few weeks at most. This means there's no long-term constituency for fighting them, and the cost of errors is (comparatively) low.
The procedures for these hospitals would be hard to change. It's hard to have a law like "you need a judge to approve sending someone to a mental hospital", because maybe someone's trying to kill themselves right now and the soonest a judge has an opening is three days from now. So the standard rule is "use your own judgment and a judge will review it in a week or two", but most psychiatric cases resolve before then and never have to see a judge. In theory patients can sue doctors if they think they were being held improperly, but they almost never get around to doing this, and when they do they almost never win, for a combination of "they're usually wrong about the law and sometimes obviously insane" and "judges are biased towards doctors because they seem to know what they're talking about". Also, the law just got done instituting extremely severe and unpredictable punishments for any doctor who doesn't commit someone to a mental hospital and then that person does anything bad ever, and the law has kindly decided not to be extremely severe on both sides.
There are other mental hospitals that keep people for months or years, but these do have very strict requirements for getting someone into them and are much more careful.
I have some patients on disulfiram and it works very well when they take it. The problem is definitely that they can choose not to take it if they want alcohol (or sometimes just forget for normal reasons, then opportunistically drink after they realize they've forgotten).
The implants are a great idea. As far as I know, the reason they're not used is because someone would have to pay for lots and lots of studies and the economics don't work out. Also because there are vague concerns about safety (if something went catastrophically wrong and the entire implant got released at once and then the patient drank, it would be potentially fatal) and ethics (should a realistically-probably-heavily-pressured patient be allowed to make decisions that bind their future selves?). I think this is dumb and we should just do the implant, but I don't think it's mysterious why we don't, or why (in the absence of the implant) disulfiram doesn't solve everything.
I tried to bet on this on Polymarket a few months ago. Their native client for directing money into your account didn't work (I think it was because I was in the US and it wasn't legal under US law). I tried to send money from another crypto account, and it said Polymarket didn't have enough money to pay the Ethereum gas fees to receive my money. It originally asked me to try reloading the page close to an odd numbered GMT hour, when they were sending infusions of money to pay gas fees, but I tried a few times and never got quite close enough. I just checked again and they're asking me to send them more money for gas fees, which I should probably do but which is a tough sell when they just ate the last chunk of money I sent them.
I assume the person you're talking about who made $100K is Vitalik. Vitalik knows much more about making Ethereum contracts work than the average person, and details the very complicated series of steps he had to take to get everything worked out in his blog post. There probably aren't very many people who can do all that successfully, and the people who can are probably busy becoming rich some other way.
Agreed - see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4236403/ and my writeup at https://slatestarcodex.com/2018/10/22/cognitive-enhancers-mechanisms-and-tradeoffs/ .
Thanks, this is a great clarification.
Thanks for this.
I think the UFH might be more complicated than you're making it sound here - the philosophers debate whether any human really has a utility function.
When you talk about the CDC Director sometimes doing deliberately bad policy to signal to others that she is a buyable ally, I interpret this as "her utility function is focused on getting power". She may not think of this as a "utility function", in fact I'm sure she doesn't, it may be entirely a selected adaptation to execute, but we can model it as a utility function for the same reason we model anything else as a utility function.
I used the example of a Director who genuinely wants the best, but has power as a subgoal since she needs it in order to enact good policies. You're using the example of a Director who really wants power, but (occasionally) has doing good as a subgoal since it helps her protect her reputation and avoid backlash. I would be happy to believe either of those pictures, or something anywhere in between. They all seem to me to cash out as a CDC Director with some utility function balancing goodness and power-hunger (at different rates), and as outsiders observing a CDC who makes some good policy and some bad-but-power-gaining policy (where the bad policy either directly gains her power, or gains her power indirectly by signaling to potential allies that she isn't a stuck-up goody-goody. If the latter, I'm agnostic as to whether she realizes that she is doing this, or whether it's meaningful to posit some part of her brain which contains her "utility function", or metaphysical questions like that).
I'm not sure I agree with your (implied? or am I misreading you?) claim that destructive decisions don't correlate with political profit. The Director would never ban all antibiotics, demand everyone drink colloidal silver, or do a bunch of stupid things along those lines; my explanation of why not is something like "those are bad and politically unprofitable, so they satisfy neither term in her utility function". Likewise, she has done some good things, like grant emergency authorization for coronavirus vaccines - my explanation of why is that doing that was both good and obviously politically profitable. I agree there might be some cases where she does things with neither desideratum, but I think they're probably rare compared to the above.
Do we still disagree on any of this? I'm not sure I still remember why this was an important point to discuss.
I am too lazy to have opinions on all nine of your points in the second part. I appreciate them, I'm sure you appreciate the arguments for skepticism, and I don't think there's a great way to figure out which way the evidence actually leans from our armchairs. I would point to Dominic Cummings as an example of someone who tried the thing, had many advantages, and failed anyway, but maybe a less openly confrontational approach could have carried the day.
Bronze Age war (as per James Scott) was primarily war for captives, because the Bronze Age model was kings ruling agricultural dystopias amidst virgin land where people could easily escape and become hunter-gatherers. The laborers would gradually escape, the country would gradually become less populated, and the king would declare war on a neighboring region to steal their people to use as serfs or slaves.
Iron Age to Industrial Age war (as per Peter Turchin) was primarily war for land, because of Malthus. Until the Industrial Revolution, you needed a certain amount of land to support a unit of population. Population was constantly increasing, land wasn't, and so every so often population would outstrip land, everyone would be starving and unhappy, and something would restore the situation to equilibrium. Absent any other action, that would be some sort of awful civil war or protracted anarchy where people competed for limited resources - aided by wages being very low (so they could hire soldiers easily) and people being very angry (so becoming a pretender and raising an army against the current king was a popular move). Kings' best way to forestall this disaster was to preemptively declare war against a foreign enemy. If they won, they could steal the enemy's land, which resolved the land/population imbalance and fed the excess population. If they lost, then (to be cynical about it), they still eliminated their excess population and successfully resolved the imbalance.