
Crossposted from an EA Forum comment.

There are a number of practical issues with most attempts at epistemic modesty/deference that theoretical approaches do not adequately account for.

1) Misunderstanding of what experts actually mean. It is often easier to defer to a stereotype in your head than to fully understand an expert's actual views, or even a simple approximation thereof.

Dan Luu gives the example of SV investors who "defer" to economists on the issue of discrimination in competitive markets without actually understanding (or perhaps reading) the relevant papers. 

In some of those cases, it's plausible that you'd do better trusting the evidence of your own eyes/intuition over your attempts to understand experts.

2) Misidentifying the right experts. In the US, the educated public seems to roughly believe that "anybody with a medical doctorate" is approximately the relevant expert class on questions as diverse as nutrition, the fluid dynamics of indoor airflow (when that airflow happens to carry viruses), and the optimal allocation of limited (medical) resources.

More generally, people often default to the closest high-status group/expert to them, without accounting for whether that group/expert is epistemically superior to other experts slightly further away in space or time. 

2a) Immodest modesty.* As a specific case/extension of this, when someone identifies an apparent expert or community of experts to defer to, they risk (incorrectly) believing that they have deference "figured out" on that particular topic, and thus fail to update on either object- or meta-level evidence that they did not correctly identify the relevant experts. The issue may be exacerbated beyond "normal" cases of immodesty if you have a sufficiently high conviction that you are being epistemically modest!

3) Information lag. Obviously any information you receive is, to one degree or another, from the past and risks being outdated. Of course, this lag applies to all the evidence you have; at the most trivial level, even sensory experience isn't really in real time. But I think it is reasonable to assume that attempts to read expert claims/consensus are disproportionately likely to have a significant lag problem, compared to your own present evaluation of the object-level arguments.

4) Computational complexity in understanding the consensus. Trying to understand the academic consensus (or lack thereof) from the outside might be very difficult, to the point where establishing your own understanding from a different vantage point might be less time-consuming. Unlike 1), this presupposes that you are able to correctly understand/infer what the experts mean; the issue is that doing so might not be worth the time.

5) Community issues with groupthink/difficulty in separating beliefs from action. In an ideal world, we make our independent assessments of a situation and report them to the community, in what Kant calls the "public (scholarly) use of reason," and then defer to an all-things-considered, epistemically modest view when we act on our beliefs in our private role as citizens.

However, in practice I think it's plausibly difficult to separate what you personally believe from what you feel compelled to act on. One potential consequence is that a community that's overly epistemically deferential will plausibly have less variation and lower affordance for making mistakes.
 

--

*As a special case of that, people may be unusually bad at identifying the right experts when said experts happen to agree with their initial biases, either on the object level or for meta-level reasons uncorrelated with truth (e.g., using similar diction, having similar cultural backgrounds, etc.).

[Job ad]

Rethink Priorities is hiring longtermism researchers (AI governance and strategy), longtermism researchers (generalist), a senior research manager, and a fellow (AI governance and strategy).

I believe we are a fairly good option for many potential candidates, as we have a clear path to impact as well as good norms and research culture. We are also remote-first, which may be appealing to many.

I'd personally be excited for more people from the LessWrong community to apply, especially for the AI roles, as I think this community is unusually good, relative to other nearby communities, at paying attention to the more transformative aspects of artificial intelligence, in addition to having useful cognitive traits and empirical knowledge.

See more discussion on the EA Forum

There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.

I feel like my writing style (designed for the EA Forum) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to make it substantially less useful to the average audience member here.

For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed, if I were to guess at the purview of LW, appearing closer to the mission of this site than to that of the EA Forum).

What are the limitations of using Bayesian agents as an idealized formal model of superhuman predictors?

I'm aware of 2 major flaws:


1. Bayesian agents don't have logical uncertainty, whereas anything implemented on bounded computation necessarily does.

2. Bayesian agents don't have a concept of causality. 
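To make flaw 2 concrete, here is a minimal sketch (my own toy example with made-up numbers, not something from any particular paper) of what a conditioning-only agent misses: observing that the lawn is wet raises the probability of rain, but intervening to wet the lawn yourself should not, and plain Bayesian conditioning has no native way to represent that difference.

```python
# Toy illustration (assumed numbers): Rain -> Wet, where Wet can also be
# caused by something else (e.g. a sprinkler). Conditioning on an observed
# wet lawn is not the same as intervening to make the lawn wet.

P_RAIN = 0.3            # prior probability of rain (made up for illustration)
P_WET_GIVEN_RAIN = 0.9  # lawn is usually wet if it rained
P_WET_GIVEN_DRY = 0.1   # lawn is occasionally wet anyway

def p_rain_given_observed_wet() -> float:
    """P(Rain | Wet = 1): what a purely Bayesian agent computes by conditioning."""
    p_wet = P_RAIN * P_WET_GIVEN_RAIN + (1 - P_RAIN) * P_WET_GIVEN_DRY
    return P_RAIN * P_WET_GIVEN_RAIN / p_wet

def p_rain_given_do_wet() -> float:
    """P(Rain | do(Wet = 1)): intervening on Wet cuts the edge from Rain to Wet,
    so the intervention carries no information about whether it rained."""
    return P_RAIN

if __name__ == "__main__":
    print(f"Observe a wet lawn: P(Rain) moves to {p_rain_given_observed_wet():.3f}")
    print(f"Wet the lawn yourself: P(Rain) stays at {p_rain_given_do_wet():.3f}")
```

Roughly speaking, Pearl-style causal models bolt the do-operator onto the probabilistic machinery precisely to capture this distinction; a bare Bayesian agent only gets the first function.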

Curious what other flaws are out there.