Suggestion: could you also transcribe the Q&A? Four of the ten minutes of content are Q&A.

Answer by just_browsing, May 16, 2023

Here I cite reddit posts, not literature, because /r/fasting has a lot of good anecdotal data, and many weight loss studies are limited in scope. 

The answers to any of these questions will likely depend on your starting weight. 

On Question 2: In theory this is just a function of your BMR (basal metabolic rate) and TDEE (total daily energy expenditure). For example, if you are large enough to have a TDEE of 3000 kcal, then you will lose roughly 1 lb of body mass per day (how much of that is muscle vs. fat is unclear).

In practice this is a bit of an overestimate. For anecdotal success stories you could go to /r/fasting. Sorting by "Top: All Time", I see:

Searching for "14 day", I see (keep in mind, about 10+ lbs of this is water weight):

Common wisdom on this subreddit is that you get about 0.5 lbs/day of "real fat loss" during an extended fast.
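
To make the arithmetic above concrete, here is a minimal sketch (my own illustration, not something from the answer or the linked threads; the 3500 kcal per pound of fat figure is the usual rule of thumb, and the 3000 kcal TDEE, 14-day duration, and 10 lb water-weight allowance just echo the numbers mentioned above):

```python
# Rough sketch of the fasting weight-loss arithmetic. Illustrative assumptions:
# 3500 kcal per pound of fat (standard rule of thumb) and ~10 lb of water weight.

KCAL_PER_LB_FAT = 3500  # commonly quoted energy content of 1 lb of body fat


def theoretical_loss_lbs(tdee_kcal_per_day: float, days: int) -> float:
    """Upper-bound estimate: assume the entire daily deficit comes from fat."""
    return tdee_kcal_per_day * days / KCAL_PER_LB_FAT


def subreddit_heuristic_lbs(days: int, water_weight_lbs: float = 10.0) -> float:
    """/r/fasting folk estimate: ~0.5 lb/day of "real" fat loss, plus water
    weight that shows up on the scale but returns after refeeding."""
    return 0.5 * days + water_weight_lbs


days = 14
print(f"Theoretical ceiling at 3000 kcal TDEE: {theoretical_loss_lbs(3000, days):.1f} lb of fat")
print(f"Subreddit heuristic (scale weight):    {subreddit_heuristic_lbs(days):.1f} lb, "
      f"of which {0.5 * days:.1f} lb is 'real' fat loss")
```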

Retrospective: This comment was helpful

Write in order to organize your thoughts [...] then record yourself giving a short explanation of what you've learned about the topic [...] Watch the recording and process the emotions/discomforts with your speaking that come up

I haven't done the "record yourself" part, but I have since started deliberately practicing explaining particular concepts. Typically I will practice it 5 times in a row, and after each attempt think carefully about what went well or poorly. Multiple comments suggested practice, but I think this one resonated with me best (even though I'm not into the focusing stuff).

Retrospective: I found this particularly helpful

Watch podcast interviews. Pay attention to how the host asks questions.

Retrospective: I found this particularly helpful 

The best way to sound smart is to spend hours preparing something and present it as if you made it up on the spot. Really smart people will have a ton of prepared phrases, so many that they can talk on a wide variety of topics by saying something they already know how to say and just modifying it a little.

I think you can 80/20 all this stuff by being "moderately active" instead of "an athlete". 

Average BMI in the United States increased from 25.2 in 1975 to 28.9 in 2014, a 3.7-point increase. Compare an average 1975 person with an average 2014 person. It's far more likely that the increase is due to overeating than to other explanations like packing on muscle (3.7 whole points of muscle is a lot) or variation in bone mass (which is likely negligible). Overeating is the path of least resistance in wealthy Western countries. So yes, technically BMI is not the same thing as fatness, but they are highly correlated.

Also, as Rockenots points out, your height claim has the direction backwards. BMI is an underestimate of fatness for very tall people. For example, a healthy-weight 6'2" man's BMI might be 17 or 18, which according to the standard BMI scale is underweight. That's why measures like "better BMI" exist.
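
To put these BMI numbers in more familiar units, here is a small sketch (my own illustration, using only the definition BMI = weight in kg divided by height in metres squared; the ~1.75 m "average height" used below is an assumption, not a figure from the thread):

```python
# Illustrative BMI arithmetic. BMI is defined as kg / m^2; the 1.75 m example
# height is an assumption for illustration only.

KG_PER_LB = 0.45359237
M_PER_IN = 0.0254


def weight_for_bmi(bmi: float, height_m: float) -> float:
    """Body weight (kg) that produces a given BMI at a given height."""
    return bmi * height_m ** 2


# How much extra body weight does the 25.2 -> 28.9 BMI increase represent
# for someone of roughly average height (~1.75 m)?
delta_kg = weight_for_bmi(28.9, 1.75) - weight_for_bmi(25.2, 1.75)
print(f"3.7 BMI points at 1.75 m is about {delta_kg:.1f} kg ({delta_kg / KG_PER_LB:.0f} lb)")

# What does BMI 17-18 correspond to in pounds for a 6'2" (74 in) man?
height_m = 74 * M_PER_IN
for bmi in (17, 18):
    kg = weight_for_bmi(bmi, height_m)
    print(f"BMI {bmi} at 6'2\" is about {kg:.0f} kg ({kg / KG_PER_LB:.0f} lb)")
```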

AI capabilities are advancing rapidly. It's deeply concerning that individual actors can plan and execute experiments like "give an LLM access to a terminal and/or the internet". However, I need to remember that it's not worth spending my time worrying about this. When I worry, I'm not doing anything useful for AI Safety; I am just worrying. It is more constructive to set these thoughts aside and focus on completing projects I believe are impactful.

Wow, thanks for sharing. I might steal the NFC / walk-scheduling ideas -- those sound like they could be useful.

Long shot, but you haven't happened to figure out how to get Tasker to interface with "Focus Mode", have you? That's one thing I haven't managed to get Tasker to detect yet.

"Don't make us look bad" is a powerful coordination problem which can have negative effects on a movement. Examples:

  • Veganism has a bad reputation of being holier than thou. It's hard to be a vegan without getting lumped in with "those vegans". So, it's hard to be open about being a vegan, which makes making veganism more socially acceptable tricky.
  • Ideas perceived as crazy are connected to the EA movement. For example, EAs discuss the possibility that we are living in a simulation seriously. So do flat earthers. Similarly, outsiders could dismiss EA as being too crazy for many other superficial reasons. The NYT's article on Scott Alexander (https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html) sort of acts as an example -- juxtaposing "MIRI" and "NRx" implicitly undermines the credibility of AI Safety research. EAs trying to work in public policy for example might not want to publicly identify as "EA" to the same extent because "the other EAs are making them look bad". 
  • A person who is part of a movement does something controversial. It makes the movement look bad. For example, longevity has been getting negative press due to the Aubrey de Grey scandal. 
     
  • The coordination problems the US democratic party faces, described by David Shor in this Rationally Speaking podcast episode (http://rationallyspeakingpodcast.org/wp-content/uploads/2020/11/rs248transcript.pdf). 

And that’s -- coordination's a very hard thing to do. People have very strong incentives to defect. If you're an activist going out and saying a very controversial thing, putting it out there in the most controversial, least favorable light so that you get a lot of negative attention. That's mostly good for you. That's how you get attention. It helps your career. It's how you get foundation money. [...]

And we really noticed that all of these campaigns, other than, I guess, Joe Biden, were embracing these really unpopular things. Not just stuff around immigration, but something like half the candidates who ran for president endorsed reparations, which would have been unthinkable, it would have been like a subject of a joke four years ago. And so we were trying to figure out, why did that happen? [...]

But we went and we tested these things. It turns out these unpopular issues were also bad in the primary. The median primary voter is like 58 years old. Probably the modal primary voter is a 58-year-old black woman. And they're not super interested in a lot of these radical sweeping policies that are out there.

And so the question was, “Why was this happening?” I think the answer was that there was this pipeline of pushing out something that was controversial and getting a ton of attention on Twitter. The people who work at news stations -- because old people watch a lot of TV -- read Twitter, because the people who run MSNBC are all 28-year-olds. And then that leads to bookings.

And so that was the strategy that was going on. And it just shows that there are these incredible incentives to defect.

One takeaway: a moderate Democrat like Joe Biden suffers because crazier-looking Democrats like AOC are "making him look bad", even if his and AOC's goals are largely aligned. I can only assume that the Republican Party faces similar issues (though that isn't discussed in this podcast episode).

Are there more examples of "don't make us look bad" coordination problems like these? Any examples of overcoming this pressure and succeeding as a movement? 

How much do extreme people harm movements? What affects this?

  • For example, in politics there are a few high-stakes, all-or-nothing elections, where having extreme people quiet down could be beneficial to a particular party. On the other hand, no extreme voices could mean no progress.
  • In veganism/EA, maybe extreme voices have less of a negative effect because there aren't as many high-stakes, all-or-nothing opportunities. Instead, a bunch of decentralized actors do stuff. So far EAs seem to be doing fine interfacing with governments (e.g. CSET), so maybe the "don't make us look bad" factor matters less here.

This seems interesting and important. 
