just_browsing's Shortform

by just_browsing, 21st Dec 2020

I gave somebody I know (a 50-year-old libertarian-leaning conservative) Doing Good Better by William MacAskill. To avoid seeming like a crazy EA-evangelist, I told them I thought they might like the book because it has "interesting economic arguments". I thought their response to it was interesting, so I am sharing it here. 

They received the book mostly positively. Their main takeaway was the idea that thinking twice about whether a particular action is really sensible can have highly positive impacts.

Here were their criticisms / misconceptions (which I am describing in my own words):

(1) Counterfactual concerns. Society benefits from lots of diverse institutions existing. It would be worse off if everybody jumped ship to contribute to effective causes. This is particularly the case with people who would have gone on to do really valuable things. Example: what if the people who founded Netflix, Amazon, etc. had instead gone down provably effective but (in hindsight) less value-add paths? 

(2) When deciding which action to take, the error bars in expected value calculations are quite high. So, how can we possibly choose? In cases where the expected value of the "effective" option is higher but within error bars, to what extent am I a bad person for not choosing the effective option? 

(1+2) Example: should a schoolteacher quit their job and go do work for charity? 

My response to them was the following: 

On (1): I think it's OK for people to choose paths according to comparative advantage. For the Netflix example, early on it was high-risk-high-reward, but the high risk was not ludicrously high because the founders had technical expertise, a novel idea, really liked movies, whatever. Basically the idea here is the Indian TV show magician example that 80000 Hours talks about here. 

On (2): If the error bars around expected value overlap significantly, then I think they cease to be useful enough to be the main decision factor. So, switch to deciding by some other criterion. Maybe one option is more high-risk-high-reward than the other, so if a person is risk averse / seeking they will have different preferences. Maybe one option increases personal quality of life (this is a valid decision factor!!). 

On (1+2): This combines the previous two. If one option (teaching vs. charity work) has an expected value much larger than that of the other, the teacher should probably pick the higher-impact option. This doesn't have to be charity—maybe the teacher is a remarkable teacher and a terrible charity worker. 

As described in my response to (1), the expected value calculation takes into account high-risk-high-reward cases as long as the calculations are done reasonably. (If the teacher thinks their chances of doing extreme amounts of good with the charity are 1%, when in reality they are 0.0001%, this is a problem.)
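
To make the miscalibration point concrete, here is a minimal Python sketch of the expected value comparison. All of the numbers (the impact units, the payoff of a "big win", the probabilities) are hypothetical and chosen only to illustrate how the ranking flips when a 1% estimate is really a 0.0001% chance.

```python
# Toy expected-value comparison for the teacher-vs-charity example.
# All numbers (impact units, probabilities) are made up for illustration.

teaching_impact = 1_000          # steady, predictable impact as a teacher

charity_big_win = 10_000_000     # impact if the charity work goes extremely well
charity_baseline = 100           # impact if it doesn't

def expected_charity_impact(p_big_win: float) -> float:
    """Expected impact of the charity path, given the chance of a big win."""
    return p_big_win * charity_big_win + (1 - p_big_win) * charity_baseline

optimistic = expected_charity_impact(0.01)       # teacher's estimate: 1% chance
realistic = expected_charity_impact(0.000001)    # actual chance: 0.0001%

print(f"Teaching:            {teaching_impact:>10,.0f}")
print(f"Charity (1% chance): {optimistic:>10,.0f}")   # ~100,099 -> charity looks far better
print(f"Charity (0.0001%):   {realistic:>10,.0f}")    # ~110     -> now teaching wins
```

With the optimistic estimate the charity path dominates by two orders of magnitude; with the realistic one, teaching wins. The point is only that the conclusion is driven by the probability estimate, so doing that part reasonably is what matters.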

If the expected values are too close to compare, the teacher is "allowed" to use other decision criteria, as described in my response to (2). 

In the end, what we were able to agree on was:

(*) There exist real-world decisions where one option is (with high probability) much more effective than the other option. It makes sense to choose the effective option in these cases. Example: PlayPump versus an effective charity.

(*) High-risk-high-reward pursuits can be better choices than "provably effective" pursuits (e.g. research as opposed to earning to give). 

What I think we still disagree on is the extent to which one is morally obligated to choose the effective option when the decision is in more of a gray area. 

I take the "glass is half full" interpretation. In a world where most people do not consider the qualitative impact of their actions, choosing the effective option outside the gray areas is already a huge improvement.

I used to struggle to pay attention to audiobooks and podcasts. No matter how fascinating I found the topic, whenever I tried to tune in I would quickly zone out and lose the thread. However, I think I am figuring out how to get myself to focus on these audio-only information sources more consistently. 

I've tried listening to these audio information sources in three different environments: 

  1. Doing nothing else
  2. Going on a walk (route familiar or randomly chosen as I go)
  3. Doing menial tasks in minecraft (fishing, villager trading, farming, small amounts of inventory management)

My intuition would have been that my attention would be best with (1), then (2), then (3). In fact the opposite seems to be true. I focus best while playing minecraft, then while walking, then doing nothing else. 

I think the explanation for this is fairly self-evident if you turn it around the other way. The reason why I am not able to focus on podcasts while doing nothing else is usually because my mind goes off on tangents, tunes out the audio, and loses the thread. To a lesser extent, this happens on walks. It seems like menial tasks in minecraft take up just enough mental energy for me not to additionally think up tangents, but not so much mental energy that I can no longer follow the discussion. In summary: "Being focused" on a fast-paced one-way stream of information requires not going off on tangents, which my brain can only do if it is sufficiently idle.

Something I am aware of but haven't tested is that it could be that minecraft is too taxing and I am not absorbing as much as I would be if I were going on a walk. However, I would argue that it is better to consistently absorb 80% of a podcast than it is to absorb 100% of a podcast's content 80% of the time and be completely zoned out for the other 20% (as is perhaps the case when I am walking). Pausing and rewinding is inefficient and annoying. This is also an argument for listening to podcasts at a faster speed (perhaps at the cost of absorption rate). 

Moreover, I am listening to podcasts with the goal of gaining high-level understanding of the topics covered. So, "everything but slightly fuzzy on the details" is better than "the details of 80% of everything" for my purposes. Perhaps if I was listening with a different goal (for example, a podcast discussing a paper I wanted to deeply understand), more of my focus would be required and it would be better for me to walk (or even sit still) than play minecraft. 

Initially, I thought I was bad at focusing on podcasts since I lacked the brainpower to follow fast-paced audio. Having experienced decreased distractibility while listening to a podcast and playing minecraft, I have now updated my model of how I focus. I think focus might follow a sort of Laffer curve (upside-down U) shape, where the x axis is # external stimuli and the y axis is # content absorbed. 

More precisely (a picture really would do better here but I don't know how to put one in a shortform): call the maximum # content absorbed y0 and the corresponding # external stimuli x0. I used to think podcasts were more than x0 stimulus for me, meaning that I could never absorb a near-optimal amount of content. However, the minecraft+podcast experiment showed me that podcasts take less than x0 stimulus for me, and minecraft boosted the amount of stimuli just enough to get me to the optimal (x0, y0) focus point. 
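
Since a picture would do better, here is a rough sketch of that picture: a toy Python/matplotlib snippet drawing the inverted-U model. The quadratic shape, the stimulus values, and where each listening setup falls are all made up for illustration; the only constraint I've kept is that "podcast only" sits below x0 and "podcast + minecraft" sits near it.

```python
# Toy sketch of the inverted-U ("Laffer curve") focus model described above.
# The quadratic shape and all stimulus values are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

stimuli = np.linspace(0, 10, 200)            # x: amount of external stimuli
x0, y0 = 5, 1.0                              # hypothetical optimum
absorbed = y0 - 0.04 * (stimuli - x0) ** 2   # y: fraction of content absorbed

plt.plot(stimuli, absorbed)
plt.scatter([x0], [y0], zorder=3)
plt.annotate("(x0, y0)", (x0, y0), textcoords="offset points", xytext=(5, 5))

# Guesses at where each listening setup falls relative to the optimum:
for x, label in [(2, "podcast only"), (3.5, "podcast + walk"), (5, "podcast + minecraft")]:
    plt.axvline(x, linestyle="--", alpha=0.3)
    plt.text(x, 0.1, label, rotation=90, va="bottom")

plt.xlabel("# external stimuli")
plt.ylabel("# content absorbed")
plt.title("Focus as an inverted U in external stimuli (toy model)")
plt.show()
```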

Going forward I definitely want to experiment with different combinations of stimuli (media, physical activity, environment) and see how I can optimize my focus. Some thoughts which seem like other people have them / do them: 

  1. What can I focus on best while exercising? Previously I have been putting dumb tv shows on in the background—is this all I can focus on or can I use this time more productively? (If I can be more productive then I will probably exercise more—win win!)
  2. Within the realm of podcasts, can I come up with different "categories" and associate optimal actions to each? Three categories I have experience with are "technical" (AXRP, more hardcore episodes of 80k Hours podcast), "soft skills" (less hardcore episodes of 80k Hours podcast), "fun" (e.g. podcasts about a tv show). Then I could build habits based off this (e.g. pairing "soft skills" with minecraft or "technical" with sitting outside) without having to put as much effort into decision making.
  3. Podcasts + fixed stimuli make for good benchmarks which will help me measure whether my focus is higher or lower than usual. For example, maybe if I am unable to focus on a combo that I usually am able to focus on, that could be a sign there is something wrong with my physical health or that I am mentally exhausted. 
  4. Some people report being able to focus on difficult tasks (e.g. theoretical research) best when in a noisy place but otherwise undistracted (e.g. coffee shop). This seems like an instance of what I am talking about here.

Outcome: I will try to think about this more deliberately when planning which activities I do when, and in particular how I pair activities which can be done simultaneously. Who knows—maybe I will finally be able to get through some of those 3+ hour long episodes of the 80,000 Hours podcast! :) 

Fun brainteaser for students learning induction:

Here is a "proof" that  is rational. It uses the fact

as well as induction. It suffices to show that the right-hand side of  is rational. We do this by induction. For the base case, we have that  is rational. For the inductive step, assume  is rational. Then adding or subtracting the next term  (which is rational) will result in a rational number. 

The flaw is of course that we've only shown that the partial sums are rational. Induction doesn't say anything about their limit, which is not necessarily rational. 
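
In symbols, writing $S_n$ for the $n$-th partial sum of the series above: induction proves each statement "$S_n$ is rational", but the limit is not one of the $S_n$, so none of those statements says anything about it:

$$S_n = \sum_{k=0}^{n} \frac{(-1)^k}{2k+1} \in \mathbb{Q} \quad \text{for every } n, \qquad \text{yet} \qquad \lim_{n\to\infty} S_n = \frac{\pi}{4} \notin \mathbb{Q}.$$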

Here is a much shorter proof: 22/7 is rational :D 

I've thought through an explanation as to why there exist people who are not effective altruists. I think it's important to understand these viewpoints if EAs want to convert more people to their side. 

As an added bonus, I think this explanation generalizes to many cases where a person's actions contradict their knowledge—thinking through this helped me better understand why I think I take actions which contradict my knowledge. 

Summary: people's gut feel (which actually governs most decision-making) takes time, thought and effort to catch up to their systematic reasoning (which is capable of absorbing new information much quicker). This explains phenomena such as "why not everyone who has heard of EA is an EA" and "why not everyone who has heard of factory farming is a vegan". 

Outcome / why this was useful for me to think about: This framework of "systematic reasoning" vs "gut feel" is useful for me when thinking about what I know, how well I know it, and whether I act on this knowledge. This helps distinguish between two possible types of "this person is acting contrary to knowledge they have": 1) the person's actions disagree with their gut feel and systematic reasoning (= lack of control) or 2) the person's actions agree with their systematic reasoning but not gut feel (= still processing the knowledge). 

Full explanation: People's views on career choices, moral principles, and most generally the moral value of particular actions are quite rarely influenced by systematic reasoning. Instead, people automatically develop priors on these things by interacting with society and make most decisions according to gut feel.  

Making gut feel decisions instead of using systematic reasoning is generally a good move. At any moment, we are deciding not to do an insanely high number of technically feasible actions. Evaluating all of these is computationally intractable. (For arguments like these, see "Algorithms to Live By".)

When people are introduced to EA, they will usually not object to premises such as "we should make choices to do more good at the margin" and "some charities are 10-100x more effective than others". However just because they agree with this doesn't mean they're going to immediately become an EA. In other words, anybody can quickly understand EA concepts through their systematic reasoning, but that doesn't mean it has also reached their gut feel reasoning (= becoming an EA). 

A person's gut feel on EA topics is all of their priors on charitable giving, global problems, career advice, and doing good in general. Even the most well-worded argument isn't immediately going to sway a person's priors so much that they immediately become an EA. But over time, a person's priors can be updated via repeated exposure and internal reflection. So maybe you explain EA to someone and they're initially skeptical, but they continue carefully considering EA ideas and become more and more of an EA. 

This framework is actually quite general. Here's another example: consider a person who is aware that factory farming is cruel but regularly eats meat. This is because their gut feel on whether meat is OK hasn't caught up to systematic reasoning about factory farming being unethical. 

Just like the EA example explained above, there is often no perfect explanation which can instantly turn somebody into a gut feel vegan. Rather, they have to put in the work to reflect on pro-vegan evidence presented to them.

(n.b: the terms "systematic reasoning" and "gut feel" are not as thoughtfully chosen as they could be—I'd appreciate references to better or more standard terms!)

The standard terms: gut feel = 'System 1', systematic reasoning = 'System 2' :)

Ah, I googled those and the results mostly mentioned "Thinking Fast and Slow". The book has been on my list for a while but it sounds like I should give it higher priority. Thanks for the pointer!