This is a special post for quick takes by kave. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
kave

Sometimes running to stand still is the right thing to do

It's nice when good stuff piles up into even more good stuff, but sometimes it doesn't:

  • Sometimes people are worried that they will habituate to caffeine and lose any benefit from taking it.
  • Most efforts to lose weight are only temporarily successful (unless using medicine or surgery).
  • The hedonic treadmill model claims it’s hard to become durably happier.
  • Productivity hacks tend to stop working.

These things are like the Red Queen's race in Through the Looking-Glass: always running to stay in the same place. But I think there's a pretty big difference between running that leaves you exactly where you would have been if you hadn't bothered, running that moves you a little way and then stops, and running that keeps you from sliding in a direction you otherwise would.

I’m not sure what we should call such things, but one idea is hamster wheels for things that make no difference, bungee runs for things that let you move in a direction a bit but you have to keep running to stay there, and backwards escalators for things where you’re fighting to stay in the same place rather than moving in a direction (named for the grand international pastime of running down rising escalators).

I don't know which kind of thing is most common, but I like being able to ask which dynamic is at play. For example, I wonder if weight loss efforts are often more like backwards escalators than hamster wheels. People tend to get fatter as they get older. Maybe people who are trying (but failing) to lose weight are gaining weight more slowly than similar people who aren’t trying to do so?

Or, my guess is that most people will have more energy than baseline if they take caffeine every day, even though any given dose will have less of an effect than the same amount of caffeine taken while caffeine-naive. So they've done a bungee run a little way forward, and that's as far as they'll go.

I am currently considering whether productivity hacks, which I've sworn off, are worth doing even though they only last for a little while. The extra, but finite, productivity could be worth it. (I think this would count as another bungee run.)

I'd be interested to hear examples that fit within or break this taxonomy.

  • Most efforts to lose weight are only temporarily successful (unless using medicine or surgery).

Weight science is awful, so grain of salt here, but: losing weight and gaining it back is thought to be more harmful than maintaining a constant weight, especially if either the loss or the regain was fast. It's probably still good if you get to a new lower trajectory, even if that trajectory eventually takes you back to your old weight, but usually when I hear about these harms it's about dramatic regains over a fairly short period.

Why is it bad to lose/regain?

An incomplete and poorly vetted list:

  • calorie counting[1] or restrictive diets:
    • harder to get a full swath of micronutrients
      • osteoporosis
      • fatigue
      • worse brain function
    • muscle loss
    • durable reduction[2] in resting metabolic rate 
    • weakened immune system
    • generally lower energy
    • electrolyte imbalance. I believe you have to really screw up to get this, but it can give you a heart attack. 
  • stimulants
    • too many are definitely bad for your heart
  • excess exercise
    • injuries
    • joint problems, especially likely at a high weight
  • Ozempic
    • We don't know what its problems are yet, but I'll be surprised if there are literally zero
  • Problems you can get even if you do everything right
    • gallbladder problems (rapid weight loss is associated with gallstones)
    • screws with your metabolism in ways similar to eating excess calories or fat
      • increase in cholesterol
      • ChatGPT says it increases type 2 diabetes risk. That's surprising to me, and if it happens it's through complicated hormonal stuff.
  • Regain: everything bad about high weight, but worse. 
  1. People will probably bring up the claim that low calories extend lifespan. In the only primate study I'm aware of, low-cal diets indeed reduced deaths from old age, but increased deaths from disease and anesthesia.

  2. I think some of the reduction just comes from being lighter, which is inconvenient but not a problem. But it does seem like people who lose and regain weight have a lower BMR than people who stayed at the same weight.

Because when you lose weight you lose a mix of fat and muscle, but when you regain it you gain mostly fat if you don't exercise (and people usually don't, because they think it's optional), resulting in a higher body fat percentage (which is actually the relevant metric for health, not weight).

kave

I think when I'm tempted to use the following reacts, I shouldn't and should use a different one. Feel free to call me out on using these reacts, though ideally on things that are published later than this shortform (I'm not going to remove old reacts I made):

  1. Skeptical. I think I should probably just use a probability here.
  2. Missed the point. I think I mostly don't like this when I see it used; it feels like an illicitly specific-but-not-evidenced claim against the writing it’s applied to.
  3. Locally invalid. It's tempting to respond "locally invalid" to a poor argument, but I think most written arguments are somewhat locally invalid (in a strict sense of validity), and this is true whether they're good or bad arguments. So this basically boils down to "I don't like this argument", but it makes it sound like I'm saying something specific and epistemically helpful.
  4. With a heavy heart, "I checked it's true" and "I checked it's false". I like these a lot, but I feel like they don't sufficiently communicate whatever evidence I collected, and so I should just comment with that evidence if I want to say something helpful.
kave

Suppose you are a government that thinks some policy will be good for your populace in aggregate over the long-term (that is, it's a Kaldor-Hicks improvement). For example, perhaps some tax reform you're excited about.

But this reform (we assume) is quite unpopular with a few people who would suffer concentrated losses. You're tempted to include in the policy a big cash transfer that makes those people happy (that is, moving it closer to a Pareto improvement). But you're worried about levering up too much on your guess that this is a good policy.

Here's one thing you can do. You auction securities (that is, tradeable assets) that pay off as follows: if you raise $X through auctioning off the securities, you are committed to the policy (including the big cash transfer) and the security converts into one that tracks something about how well your populace is doing (like a share of an index fund or something). If you raise less than that, the owner of the security gets back the money they spent on the asset.
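
A minimal sketch of that payoff rule, with names of my own choosing (`threshold` stands in for the $X above, `index_value` for whatever the converted, welfare-tracking security ends up being worth):

```python
def security_payoff(total_raised: float, threshold: float,
                    purchase_price: float, index_value: float) -> float:
    """Payoff of one auctioned security under the conditional scheme.

    If the auction raises at least `threshold`, the policy (including the
    big cash transfer) is enacted and the security converts into the
    index-tracking asset; otherwise the buyer is refunded what they paid.
    """
    if total_raised >= threshold:
        return index_value    # converted security tracking the populace's welfare
    return purchase_price     # refund branch: buyer gets their money back
```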

Ignoring some annoying details, like the operational costs of this scheme or the interest foregone while waiting for the conditional to resolve, the value of that security (which should equal the price you paid for it, if you didn't capture any surplus) is just the value of the security it converts into.

(Solve pV + (1 − p)c = c for the price c, where p is the probability the auction hits its target and V is the value of the security it converts into; for any p > 0 this gives c = V.)
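
As a quick sanity check on that identity (my own illustration, not part of the proposal): for any funding probability p > 0, iterating c ← pV + (1 − p)c from any starting guess settles at c = V.

```python
import math

def fair_price(p: float, V: float, c0: float = 0.0, iters: int = 10_000) -> float:
    """Solve c = p*V + (1 - p)*c by fixed-point iteration.

    p  -- probability the auction raises enough to trigger the policy
    V  -- value of the index-tracking security it converts into
    c0 -- arbitrary starting guess; the fixed point doesn't depend on it
    """
    c = c0
    for _ in range(iters):
        c = p * V + (1 - p) * c
    return c

# Whatever the funding probability, the security should clear at V.
assert math.isclose(fair_price(p=0.05, V=120.0), 120.0, rel_tol=1e-9)
assert math.isclose(fair_price(p=0.60, V=120.0), 120.0, rel_tol=1e-9)
```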

So this scheme lets you raise the cash for your policy under exactly the conditions when the auction "thinks" the value of the security increases sufficiently. Which is kind of neat.