So I recently had a lot of fun reading HPMOR and the Sequences, and I feel like I learned a bunch of cool things and new ways of looking at the world! I've also been trying to get my friends interested in the rationality community, particularly through the works of Eliezer. An unexpected obstacle, however, appeared in my way. My friends saw a picture of Eliezer and immediately wondered, "If Eliezer is so rational (at least relative to other people) and also has living as long as possible high on his preference ordering, why is he fat?" It does seem like something that ought not to take that much effort but has an overwhelmingly positive impact on one's life, health-wise and maybe even performance-wise. I initially jokingly brushed the question off, saying his work is too important and doesn't leave him enough time to optimize his health, or that maybe he has some condition that prevents him from losing weight.
But the question stuck with me.


3 Answers

Raemon

Mar 22, 2023

216

Actual answer is that Eliezer has tried a bunch of different things to lose weight and it's just pretty hard. (He also did a quite high-effort thing in 2019 which did work. I don't know how well he kept the pounds off in the subsequent time)

You can watch a fun video where he discusses it after the 2019 Solstice here.

(I'm not really sure how I feel about this post. It seems like it's coming from an earnest place, and I kinda expect other people to have this question, but it's in a genre that feels pretty off to be picking on individual people about, and I definitely don't want a bunch more questions similar to this on the site. I settled for downvoting but answering the question.)

(He also did a quite high-effort thing in 2019 which did work. I don't know how well he kept the pounds off in the subsequent time)

I'm kinda confused why this is only mentioned in one answer, and in parentheses. Shouldn't this be the main answer -- like, hello, the premise is likely false? (Even if it's not epistemically likely, I feel like one should politely not assume that he has since gained weight unless one has evidence for this.)

Mar 22, 2023

-40

Because his rational mind is bolted to an evolved lower brain that betrays him with a slightly incorrect preference for extra calories when calories are plentiful.

And the conservation-of-willpower hypothesis says that if he fixes his fatness through willpower, this comes at the cost of other things.

Eliezer should probably go get a semaglutide or tirzepatide script like everyone else and lose the extra weight. Literally until a few years ago no clinically validated method of weight loss, save extremely dangerous gastric bypass surgery, existed. Diet and exercise do not work for most people.

Epistemic status: I am also slightly fat, though thinner than EY, and intend to cheat with these new drugs soon.

Derek M. Jones

Mar 22, 2023

-15-5

Eliezer's fatness signals his commitment to the belief that AI is a short-term risk to humanity, i.e., he does not expect to live long enough to experience the health problems.

1 comment

There's already a good answer to the question, but I'll add a note.

Different people value different things, and so are willing to expend different amounts of effort to achieve different ends. As a result, even rational agents may not all achieve the same ends because they care about different things.

Thus we can have two rational agents, A and B. A cares a lot about finding a mate and not much else. B cares a lot about making money and not much else. A will be willing to invest more effort into things like staying in shape to the extent that helps A find a mate. B will invest a lot less in staying in shape and more in other things to the extent that's the better tradeoff to make a lot of money.

Rationality doesn't prescribe the outcome, just some of the means. Yes, some outcomes are convergent for many concerns, so many agents end up having the same instrumental concerns even if they have different ultimate concerns (e.g. power seeking is a common instrumental goal), but without understanding what an agent cares about you can't judge how well they are succeeding since success must be measures against their goals.