I’ve argued in my unawareness sequence that when we properly account for our severe epistemic limitations, we are clueless about our impact from an impartial altruistic perspective.
However, this argument and my responses to counterarguments involve a lot of moving parts, and the term “clueless” gets used in several importantly different ways. It can therefore be easy to misunderstand which claims I am (not) making in the context of previous EA and academic writings on cluelessness.
So, as a “guide” to these arguments, I’ve written this list of questions, along with resources that answer them. Caveats:
Most of the resources are my own work — not because I necessarily think I’ve given the best answers, but because the precise claims and framings that other works use might be subtly yet importantly different from mine. I also include references to writings that I have not (co-)authored, for more context. But these authors don’t necessarily endorse my claims.
When I link to a reply to someone else’s comment, I don’t mean to claim that the person being replied to endorses the exact statement of the objection I’ve given in this post.
What are unawareness, indeterminacy, and cluelessness?: The basics
What’s the connection between unawareness and cluelessness? Are there arguments for cluelessness besides the argument from unawareness?
“Motivating example” in “Should you go with your best guess?: Against precise Bayesianism and related views”
What’s the difference between (A) accounting for unawareness, or having imprecise credences, and (B) just being really uncertain, or needing to think more before acting? You say we should use intervals of {probabilities} / {values of outcomes} / {expected values} instead of single numbers. What do these intervals mean?
“Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
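For readers who want something concrete, here is a minimal sketch, with toy numbers of my own (the outcome values and the [0.35, 0.55] range are purely illustrative, not from the linked posts), of one common way to read these intervals: an imprecise credence as a set of admissible precise probability assignments, which induces a range of expected values rather than a single number.

```python
# Illustrative sketch (toy numbers of my own): an imprecise credence as a set
# of admissible precise probabilities (a "credal set"), and the interval of
# expected values it induces.

# Two possible outcomes of some intervention: it helps (+10) or backfires (-8).
VALUES = {"helps": 10.0, "backfires": -8.0}

# Instead of one precise probability of "helps", suppose every value in
# [0.35, 0.55] seems defensible. Represent the credal set as a coarse grid.
credal_set = [p / 100 for p in range(35, 56)]

def expected_value(p_helps: float) -> float:
    """Expected value of the intervention under one precise credence."""
    return p_helps * VALUES["helps"] + (1 - p_helps) * VALUES["backfires"]

evs = [expected_value(p) for p in credal_set]
print(f"EV interval: [{min(evs):.2f}, {max(evs):.2f}]")  # roughly [-1.70, 1.90]

# Compared with a "do nothing" baseline of EV = 0, neither option is better
# under *every* credence in the set, so neither is determinately better:
# this is the sense in which the comparison is left indeterminate.
```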
You say we should have imprecise credences (etc.) because picking a precise credence is “arbitrary”. Are you saying we need to justify everything from precisely formalizable principles? That seems doomed.
“Why not just do what works?” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
Why aren’t precise credences and EV the appropriate response to these problems?
Sure, we don’t have an exact probability distribution over possible outcomes with exact values assigned to them. But aren’t we still ultimately aiming for the highest-EV action? And can’t we do that using best-guess proxies for the EV?[1]
“Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
“Okay, But Shouldn’t We Try to Approximate the Bayesian Ideal?” in Violet Hour (2023)
Further reading:
Comment by Clifton
Hedden (2015)[2]
Why not aggregate our interval of {probabilities} / {values of outcomes} / {expected values} using a meta-distribution? (E.g., just take the midpoint.) Don’t we leave out information otherwise?
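To illustrate what the meta-distribution move involves, here is a toy sketch (the numbers and weightings are my own, purely illustrative): averaging the candidate credences under any particular meta-distribution just yields one precise credence, and different meta-distributions yield different ones.

```python
# Toy sketch (my own numbers): aggregating a range of candidate credences via a
# meta-distribution amounts to committing to one precise credence, and the
# result depends on which meta-distribution was picked.

candidate_credences = [p / 100 for p in range(35, 56)]  # the same [0.35, 0.55] range

# Two meta-distributions over the candidates, each arguably as defensible as the other:
uniform_weights = [1.0] * len(candidate_credences)                   # flat
tilted_weights = [i + 1.0 for i in range(len(candidate_credences))]  # mildly favors higher p

def aggregate(credences, weights):
    """Weighted average of the candidate credences = one precise credence."""
    return sum(c * w for c, w in zip(credences, weights)) / sum(weights)

print(f"{aggregate(candidate_credences, uniform_weights):.3f}")  # 0.450 (the midpoint)
print(f"{aggregate(candidate_credences, tilted_weights):.3f}")   # ~0.483

# Each aggregation rule collapses the interval to a different single number, so
# the question of which rule (if any) is non-arbitrary reappears one level up.
```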
Can’t we always say which action is net-better as long as our intuitions are at least somewhat better than chance? Or, as long as there’s some similarity between promoting the impartial good and decision problems we’re much more familiar with?
You say that picking a precise credence/EV is arbitrary. Isn’t the cutoff between the numbers you include vs. exclude in imprecise credences/intervals of EVs also arbitrary?
“Indeterminate Bayesianism” in “Should you go with your best guess?: Against precise Bayesianism and related views”
If you have imprecise credences or incomplete preferences, can’t you get money-pumped or otherwise take a dominated strategy? (And if you apply some patch to avoid dominated strategies, aren’t you just acting like a precise EV maximizer?)
Sure, you don’t need to have precise probabilities and evaluate actions based on EV to avoid money pumps. Still, don’t coherence/representation theorems collectively suggest that precise EV maximization is normatively correct? (As Yudkowsky puts it, “We have multiple spotlights all shining on the same core mathematical structure [of expected utility]”.)[3]
“Unawareness vs. uncertainty” in “The challenge of unawareness for impartial altruist action guidance: Introduction”
“Avoiding dominated strategies” in “Winning isn’t enough”
Further reading:
Rethink Priorities (2023, Sec. 3.2)
Hájek (2008)
Aren’t we not clueless (in practice) because…?
We’re surely not entirely clueless in mundane contexts. And it would be arbitrary to posit a sharp discontinuity between those contexts and promoting the impartial good. The complexity of a decision problem lies on a continuum. Thus, aren’t we not entirely clueless about promoting the impartial good?
Sure, there’s some imprecision in our estimates, but aren’t at least some interventions good by a wide enough margin that the imprecision doesn’t matter?
Come on, do you really think [obviously good/bad thing] is no better/worse than staying at home watching cat videos? Isn’t this just radical skepticism?
Shouldn’t we treat the unknown unknowns as canceling out in expectation, since we can’t say anything about them either way? Or at least, can’t we extrapolate from what we do know? Even if we’re biased, it would be surprising for our biases to be highly anti-inductive in expectation.
“Symmetry” and “Extrapolation” in “Why existing approaches to cause prioritization are not robust to unawareness”
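A toy numerical sketch of what is at stake in “canceling out in expectation” (the magnitudes and the [0.45, 0.55] range are my own illustrative choices, not from the linked post):

```python
# Toy sketch (my own magnitudes): "the unknown unknowns cancel out in
# expectation" is itself a precise assumption about probabilities.

u = 1_000.0  # magnitude of the unforeseen consequences, for illustration

# "Cancel out" means assigning exactly 0.5 to the good and bad cases:
print(0.5 * u + 0.5 * (-u))  # 0.0

# But if, under severe unawareness, any probability of the good case in
# [0.45, 0.55] looks equally defensible, the unknowns' contribution to the
# expected value spans a wide interval instead of vanishing:
lo = 0.45 * u + 0.55 * (-u)
hi = 0.55 * u + 0.45 * (-u)
print(f"{lo:.1f}, {hi:.1f}")  # -100.0, 100.0

# If the foreseeable benefit of the action is smaller than this spread, the
# overall comparison is left indeterminate rather than "canceled out".
```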
Who, and which interventions, are these problems relevant to?
Isn’t cluelessness only a problem if you’re trying to directly shape the far future? But I’m not doing that, I’m trying to (e.g.) stop x-risks in the next few years.
… do more research, spread better values or decision-making practices, gain more influence on AI, or save money?
“Capacity-Building” in “Why existing approaches to cause prioritization are not robust to unawareness”
… follow strategies whose least conjunctive effects are positive?
“Simple Heuristics” in “Why existing approaches to cause prioritization are not robust to unawareness”
What implications do these problems (not) have for our decisions?
Your case for indeterminacy appeals a lot to “arbitrariness”. I’m fine with some arbitrariness in my beliefs and preferences. Isn’t that enough for me to not be clueless?
What about extremely small decisions, like helping an old lady cross the street? If we help the old lady, isn’t it reasonable to treat the expected value of the off-target effects as so negligible that the benefit to the old lady dominates?
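To make the comparison in this question concrete, here is a toy sketch with made-up numbers (the “narrow” and “wide” intervals are purely illustrative); the verdict turns entirely on how wide a range of expected values the off-target effects are allowed to take:

```python
# Toy sketch (made-up numbers) of the comparison this question is about.

direct_benefit = 1.0  # foreseeable benefit of helping, in arbitrary units

# The disagreement is over how wide a range of expected values it is
# reasonable to assign to the off-target / long-run effects:
narrow_offtarget = (-0.001, 0.001)  # the "negligible" reading assumed in the question
wide_offtarget = (-50.0, 50.0)      # a reading on which unawareness swamps the benefit

for label, (lo, hi) in [("narrow", narrow_offtarget), ("wide", wide_offtarget)]:
    total_lo, total_hi = direct_benefit + lo, direct_benefit + hi
    verdict = "determinately positive" if total_lo > 0 else "sign indeterminate"
    print(f"{label}: total EV in [{total_lo:.3f}, {total_hi:.3f}] -> {verdict}")

# The substantive question is which treatment of the off-target interval is
# justified, not whether the arithmetic comes out positive once "negligible"
# is granted.
```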
Footnotes
[1] Note: I’m not sure the references included here fully respond to this question. But it’s not yet clear to me what people mean by this question, so I encourage anyone who finds the included references inadequate to say in the comments what they have in mind.
[2] This work argues against the view that diachronic (i.e., sequential) money pump / dominated strategy arguments, such as the arguments against incompleteness, are normatively relevant in the first place.
[3] Note: Again, I’m not entirely sure what the argument for this objection is supposed to be, so it’s hard to say whether these references adequately address it.