As far as I can tell:

  1. It's not possible for me (or most/all other people, for that matter) to individually evaluate claims on every topic I hear about, as it takes too much time and effort to learn the foundations needed to evaluate things accurately. Therefore I (and everyone else?) need to rely on other people to evaluate topics I am unfamiliar with.
  2. On a lot of topics, mainstream experts are incompetent; for instance, most social science research is just bad, mainly because the fields as a whole have no clue what they're doing. Therefore it seems unwise to rely on those fields. It's unclear why they are so bad, but in my experience rationalist experts are much better: perhaps because they share my biases, perhaps because rationalism teaches useful tricks, but more likely, I think, because rationalism just tends to attract incredibly intelligent and curious people.
  3. However, one cannot just blindly rely on the opinions of rationalists, as rationalists are not automatically good at everything; when rationalists are unfamiliar with a topic, they often say lots of totally clueless stuff.[1]

So in my view there's a lot of value in finding the rationalists most specialized in each topic, to make it easy to learn from them. Has anyone worked on this in the past? Is this something someone (Lightcone Infrastructure, perhaps?) would be interested in setting up?

And I guess if you know of any underrated rationalists who specialize in some topic, feel encouraged to share them in the comments on this post?

  1. ^

    This also seems to apply to non-rationalists, but that's not as important for this purpose.


1 Answer

ChristianKl

Sep 30, 2022

20

Learning from experts is very useful. On the other hand, the time of experts is often scarce. If I have a random basic question about quantum physics, I would expect that rationalists who are experts in quantum physics and don't know me would have little interest in getting a cold email from me asking them to answer a basic question about the field.

My Habryka (and thus Lightcone Infrastructure) model would worry about how to do the gatekeeping needed to protect the valuable time of experts. That's likely a more important problem to solve than figuring out how to validate whether the expertise people claim for themselves is genuine.

Expertise is by its nature also complex. A surgeon and a massage therapist might both be experts in anatomy but understand different parts of it well. Expertise is acquired by studying a given paradigm for a topic, and when there are multiple paradigms gathering knowledge about a topic, it can be hard to know which of them will give the best answer to a given question.

My Habryka (and thus Lightcone Infrastructure) model would worry about how to do the gatekeeping needed to protect the valuable time of experts. That's likely a more important problem to solve than figuring out how to validate whether the expertise people claim for themselves is genuine.

Hmm, good point. Maybe money can solve it? Set prices based on the value of your time, and let the incentives sort out the rest.

6 comments

I imagine something like Stack Exchange, except that people could get certified for rationality and for domain knowledge, and then would have corresponding symbols next to their user names.

Well, the rationality certification would be a problem. An ideal test could provide results like "rational in general, except when the question is related to a certain political topic", because it would be difficult to find enough perfectly rational people, so the second-best choice would be to know their weaknesses.

I'm not sure we'd need anything that elaborate. The rationalist community isn't that big. I was thinking more that rationalists could self-nominate their expertise, or that a couple of people could come together and nominate someone if they notice that that person has gone in depth on the topic.

I've previously played with the idea of more elaborate schemes, including tests, track records and in-depth arguments. But of course the more elaborate the scheme, the more overhead there is, and I'm not sure that much overhead is affordable or worthwhile if one just wants to figure stuff out.

I agree. We could afford more overhead if we had thousands of rationalists active on the Q&A site. Realistically, we will be lucky if we get twenty.

But some kind of verification would be nice, to prevent the failure mode of "anyone who creates an account is automatically considered a rationalist". Similarly, if people simply declare their own expertise, it gives more exposure to overconfident people.

How to achieve this as simply as possible?

One idea is to have a network of trust. Some people (e.g. all employees of MIRI and CFAR) would automatically be considered "rationalists"; other people become "rationalists" only if three existing rationalists vouch for them. (The vouch can be revoked or added at any moment. It is evaluated recursively, so if you lose the flag, the people you vouched for might lose their flags too, unless they already have three other people vouching for them.) There is a list of skills, but you can only upvote or downvote whether other people have a skill; if you get three votes, the skill is displayed next to your name (a tooltip shows the people who upvoted it, so if you say something stupid, they can be called out).

This would be the entire mechanism. The meta debate could happen in special LW threads, or perhaps in shortform: you could post there e.g. "I am an expert on X, could someone please confirm this? You can interview me by Zoom", or you could call out other people's misleading answers, etc.
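To make the vouching logic concrete, here is a minimal sketch in Python. Everything here is made up for illustration (the names `rationalist_flags` and `VOUCH_THRESHOLD`, the exact data shapes); it is just one way to implement the recursive flag-and-revocation rule described above, not an existing LW feature.

```python
# Illustrative sketch only: a made-up implementation of the vouching scheme
# described above, not an existing LessWrong feature.

VOUCH_THRESHOLD = 3  # assumed: a person needs vouches from three current rationalists


def rationalist_flags(seed: set, vouches: dict) -> set:
    """Return the set of people who currently hold the "rationalist" flag.

    seed    -- people flagged automatically (e.g. MIRI/CFAR employees)
    vouches -- maps each person to the set of people they vouch for

    Revocation is handled by recomputing from the current vouch graph:
    anyone whose support drops below the threshold loses the flag, and the
    loss propagates recursively to the people they vouched for.
    """
    flagged = set(seed)
    while True:
        # count only vouches that come from currently flagged people
        support = {}
        for voucher, targets in vouches.items():
            if voucher in flagged:
                for person in targets:
                    support[person] = support.get(person, 0) + 1
        new_flagged = set(seed) | {
            p for p, n in support.items() if n >= VOUCH_THRESHOLD
        }
        if new_flagged == flagged:  # fixpoint reached
            return flagged
        flagged = new_flagged


# Example: three seeded members vouch for "carol", so she gets the flag too.
seed = {"alice", "bob", "dana"}
vouches = {"alice": {"carol"}, "bob": {"carol"}, "dana": {"carol"}}
print(rationalist_flags(seed, vouches))  # {'alice', 'bob', 'carol', 'dana'}
```

Recomputing from scratch each time keeps the logic simple; if anyone's vouches are withdrawn, the rerun automatically cascades the flag loss downstream.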

P.

It's not quite what you want, but there's this: https://forum.effectivealtruism.org/community#individuals and this: https://eahub.org/

Somewhat related: Rationalists should have mandatory secret identities (or rather sufficiently impressive identities).

One (non-Goodhart-resistant) way could be to use some kind of PageRank method based on the tags that users comment and post on. E.g., people rank high on AI posts if their comments and AI posts were upvoted by other people with the same expertise.
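A rough sketch of how such a tag-scoped ranking could look, assuming the only input is a per-tag upvote graph (voter to the authors they upvoted under that tag). The function name, damping factor, and graph construction are assumptions for illustration, not an existing LessWrong algorithm.

```python
# Illustrative sketch: expertise-weighted PageRank over one tag's upvote graph.

def expertise_rank(upvotes: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """upvotes maps each voter to the set of authors they upvoted within one tag.

    Upvotes from people who themselves score high on the tag count for more,
    mirroring PageRank's recursive definition.
    """
    users = set(upvotes) | {a for targets in upvotes.values() for a in targets}
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        new_rank = {u: (1.0 - damping) / n for u in users}
        # users who upvoted nobody spread their rank uniformly (dangling nodes)
        dangling = sum(rank[u] for u in users if not upvotes.get(u))
        for u in users:
            new_rank[u] += damping * dangling / n
        for voter, authors in upvotes.items():
            if authors:
                share = damping * rank[voter] / len(authors)
                for author in authors:
                    new_rank[author] += share
        rank = new_rank
    return rank


# Example: two users upvote "carol" under the AI tag, so she ranks highest.
ai_upvotes = {"alice": {"carol"}, "bob": {"carol"}, "carol": set()}
print(sorted(expertise_rank(ai_upvotes).items(), key=lambda kv: -kv[1]))
```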