Strong Evidence is Common

Linking my comment from the Forum:

I think in the real world there are many situations where, if we were to put explicit Bayesian probabilities on our beliefs (which we almost never do), outcomes with ex ante ~0 credence would quickly receive extraordinary updates. My favorite example is sense perception. If I woke up after sleeping on a bus and were to put an explicit Bayesian probability on what I will see the next time I open my eyes, the credence I'd assign to the true outcome (ignoring practical constraints like computation, and my near-inability to form visual imagery) is ~0. Yet it's easy to get strong Bayesian updates: I just open my eyes. In most cases, this is a large enough update, and I go on my merry way.

But suppose I open my eyes and instead see approximate lookalikes of dead US presidents sitting around the bus. Then at that point (even though the ex ante probability of this outcome isn't much different from that of any other specific scene I might have seen), I will correctly be surprised, and have some reason to doubt my sense perception.

Likewise, if instead of saying your name is Mark Xu, you had said it was "Lee Kuan Yew", I at least would be pretty suspicious that your actual name is Lee Kuan Yew.

I think a lot of this confusion in intuitions can be resolved by looking at what MacAskill calls the difference between unlikelihood and fishiness:

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form.  But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards. 

Being fishy is different than just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
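As a sanity check on the arithmetic in the quoted passage, the 1-in-52! figure can be computed directly (a quick sketch in Python):

```python
from math import factorial, log10

# Number of distinct orderings of a standard 52-card deck
orderings = factorial(52)

# The chance of dealing any one particular sequence from a
# well-shuffled deck is 1/52! -- on the order of 1 in 10^68.
print(log10(orderings))  # ≈ 67.9, i.e. 52! ≈ 8 × 10^67
```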

Put another way, we can dissolve this by looking explicitly at Bayes' theorem. Letting H be the mundane background hypothesis (e.g. "the deck was shuffled") and E the observed evidence:

P(H|E) = P(E|H)P(H) / P(E)

and in turn,

P(E) = P(E|H)P(H) + P(E|¬H)P(¬H)

The prior P(H) is high in both the "fishy" and "non-fishy" regimes. However, the alternative-hypothesis term P(E|¬H)P(¬H) is much higher for fishy evidence than for non-fishy evidence, even if the surface-level evidence looks similar!
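To put illustrative numbers on the deck example (the prior and the stacked-deck likelihood below are made up for illustration, not taken from the original discussion):

```python
from math import factorial

# H = "the deck was well shuffled"; ~H = "someone stacked the deck".
# Assumed illustrative prior: we start out very confident in H.
p_shuffled = 0.999
p_stacked = 1 - p_shuffled

# Likelihood of dealing a perfectly ordered deck under each hypothesis.
p_order_given_shuffled = 1 / factorial(52)  # ~1 in 10^68
p_order_given_stacked = 0.5                 # assumed: stacked decks are often ordered

# Bayes' theorem: P(~H | perfect order)
posterior_stacked = p_order_given_stacked * p_stacked / (
    p_order_given_stacked * p_stacked
    + p_order_given_shuffled * p_shuffled
)
print(posterior_stacked)  # ≈ 1: the "fishy" deal flips our belief
```

A random-seeming sequence is exactly as unlikely under H, but there is no alternative hypothesis under which it becomes likely, so no comparable update occurs.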

Strong Evidence is Common

Also, many people with East Asian birth names informally go by an Anglicized name, enough that I'm fairly sure well below 95% of randomly selected "Mark Xu"s in the US would have a driver's license that says "Mark Xu".

Linch's Shortform

Crossposted from an EA Forum comment.

There are a number of practical issues with most attempts at epistemic modesty/deference, that theoretical approaches do not adequately account for. 

1) Misunderstanding what experts actually mean. It is often easier to defer to a stereotype in your head than to fully understand an expert's actual views, or even a simple approximation thereof.

Dan Luu gives the example of SV investors who "defer" to economists on the issue of discrimination in competitive markets without actually understanding (or perhaps reading) the relevant papers. 

In some of those cases, it's plausible that you'd do better trusting the evidence of your own eyes/intuition over your attempts to understand experts.

2) Misidentifying the right experts. In the US, it seems like the educated public roughly believes that "anybody with a medical doctorate" is approximately the relevant expert class on questions as diverse as nutrition, the fluid dynamics of indoor air flow (if the airflow happens to carry viruses), and the optimal allocation of limited (medical) resources.

More generally, people often default to the closest high-status group/expert to them, without accounting for whether that group/expert is epistemically superior to other experts slightly further away in space or time. 

2a) Immodest modesty.* As a specific case/extension of this, when someone identifies an apparent expert or community of experts to defer to, they risk (incorrectly) believing that they have deference (on this particular topic) "figured out," and thus choose not to update on either object- or meta-level evidence that they did not correctly identify the relevant experts. The issue may be exacerbated beyond "normal" cases of immodesty if there's a sufficiently high conviction that you are being epistemically modest!

3) Information lag. Obviously, any information you receive is to some degree from the past, and risks being outdated. Of course, this lag happens for all evidence you have; at the most trivial level, even sensory experience isn't really in real time. But I think it's reasonable to assume that attempts to read expert claims/consensus are disproportionately likely to have a significant lag problem, compared to your own present evaluation of the object-level arguments.

4) Computational complexity in understanding the consensus. Trying to understand the academic consensus (or lack thereof) from the outside might be very difficult, to the point where establishing your own understanding from a different vantage point might be less time-consuming. Unlike 1), this presupposes that you are able to correctly understand/infer what the experts mean; the issue is just that it might not be worth the time to do so.

5) Community issues with groupthink/difficulty in separating out beliefs from action. In an ideal world, we make our independent assessments of a situation and report them to the community, in what Kant calls the "public (scholarly) use of reason," and then defer to an all-things-considered, epistemically modest view when we act on our beliefs in our private role as citizens.

However, in practice I think it's plausibly difficult to separate out what you personally believe from what you feel compelled to act on. One potential issue is that a community that's overly epistemically deferential will plausibly have less variation, and a lower affordance for making mistakes.


*As a special case of this, people may be unusually bad at identifying the right experts when said experts happen to agree with their initial biases, whether on the object level or for meta-level reasons uncorrelated with truth (e.g. the experts use similar diction, have similar cultural backgrounds, etc.).

Pain is not the unit of Effort

Fun fact, Hillary Clinton's autobiography quoted Clarkson quoting Nietzsche. 

Covid-19 IFR from a Bayesian point of view

Apologies if you've already thought of this, but some quick points:

  1. I think it's probably wrong to assume that the COVID-19 IFR is a static quantity.
  2. It seems very plausible to me that (especially in the US) the empirical COVID-19 IFR dropped a lot over time, through a combination of better treatment and self-selection in who gets infected.
  3. In addition, IFR varies a lot from location to location due to demographic differences.
  4. Finally, one issue with using antibody testing as ground truth for "once infected" is that it's plausible that people lose antibodies over time.
Linch's Shortform

There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.

I feel like my writing style (designed for the EA Forum) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to make it substantially less useful for the average audience member here.

For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed, if I were to guess at the purview of LW, being closer to the mission of this site than to that of the EA Forum).

The Treacherous Path to Rationality

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.

Based on anecdotal reports, I'm not convinced that early rationalist social media was substantially better than educated Chinese social media. I'm also not convinced that I would rather have had rationalists in charge of the South Korean or Taiwanese responses than the actual people on the ground.

It's probable that this group did better than many Western authorities, but the bar at Kalia Beach, Palestine (the "lowest bar in the world"), is not very high.

I think it is true that in important ways the rationalist community did substantially better than plausible "peer" social groups, but nonetheless, ~2 million people still died, and the world is probably worse off for it. 

And yet, we basically got everything about COVID-19 right

This specifically is quite surprising to me. I have a list of >30 mistakes I've made about covid*, and my impression is that I'm somewhat above average at getting things right. Certainly my impression is that some individuals seem to be noticeably more accurate than me (Divia Eden, Rob Wiblin, Lukas Gloor, and several others come to mind), but I would guess that a reasonably high fraction of people in this community are off by at least as much as I am, were they to venture concrete predictions. 

(I have not read most of the post so I apologize if my points have already been covered elsewhere).

* I have not updated the list much since late May. If I were to do so, I suspect the list would at least double in size.

DanielFilan's Shortform Feed

Brazil is another interesting place. In addition to its large population and GDP, Brazilians seem (anecdotally, based on online courses I've taken, philosophy meme groups, etc.) more interested in Anglo-American academic ethics than people from China or India, despite the presumably larger language barrier.

Top Time Travel Interventions?

Couldn't that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?

You aren't bringing democracy or other significantly improved governmental forms to the world. In the end it's just another empire. It might last a few thousand years if you're really lucky.

Hmm, I don't share this intuition. I think a possible crux is the answer to the following question:

Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?

I claim that unless you have good outside-view or inside-view reasons to believe otherwise, you should basically assume our current trajectory is at the ~50th percentile of possible worlds. (One possible reason to think we're better than average is anthropic survivorship bias, but I don't find it compelling, since I'm not aware of any extinction-level near-misses.)

With the 50th percentile baseline in mind, I think that a culture that is broadly 

  • consequentialist
  • longtermist
  • one-world government (so lower potential for race dynamics)
  • permissive of privacy violations for the greater good
  • prone to long reflection and careful tradeoffs
  • has ancient texts explicitly warning of the dangers of apocalypse, and a strong ingrained belief that the end of the world is, in fact, bad
  • has specific scenarios (from those ancient texts) warning of specific anticipated anthropogenic risks (dangers of intelligent golems, widespread disease, etc.)

seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.

Does this seem right to you? If not, at approximately what percentile would you place our current trajectory?


In that case I think what you've done is essentially risk 2 thousand years of time for humans to live life on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.

This seems like a bad bargain to me.

Moral uncertainty aside, sacrificing 2000 years of near subsistence-level existence for billions of humans seems like a fair price to pay for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or of avoiding S-risks, etc.). And right now I think that (conditional upon success large enough to change the technological curve) this plan would increase the odds of an existential win by multiple percentage points.
