In Superforecasting, Tetlock describes one superforecaster who built a collection of automated scripts to pull useful information from a variety of sources and perspectives. This seems very useful and I'd like to emulate it.

I'm looking for a generalizable approach where, given a specific topic (ex. AI), I can curate sources (ex. Twitter accounts) that maximize diversity of viewpoints (ex. different perspectives on AI alignment risk). I'd like to avoid the bias of only using sources that are well known and popular, which seems particularly likely to happen when I'm not familiar with the field.

I have a few different tactics in mind, but I haven't yet settled on a cohesive strategy, and am interested in suggestions, especially from anyone who has done this before.
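For concreteness, here is a minimal sketch of the kind of aggregation script I have in mind, assuming Python with the feedparser library; the feed URLs and perspective labels are placeholders rather than a curated list:

```python
# Sketch only: aggregate headlines from a hand-curated, perspective-tagged
# set of RSS feeds. The feedparser dependency (pip install feedparser) and
# the example feed URLs are placeholder assumptions, not recommendations.
import feedparser

# Map each feed URL to a rough perspective label so coverage gaps are visible.
FEEDS = {
    "https://www.lesswrong.com/feed.xml": "rationalist forum",
    "https://www.alignmentforum.org/feed.xml": "alignment research",
    # ...add mainstream-press, skeptic, and industry feeds to balance coverage
}

def pull_headlines(feeds, per_feed=5):
    """Return (perspective, title, link) tuples across all feeds."""
    items = []
    for url, perspective in feeds.items():
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            items.append((perspective, entry.get("title", ""), entry.get("link", "")))
    return items

if __name__ == "__main__":
    for perspective, title, link in pull_headlines(FEEDS):
        print(f"[{perspective}] {title}\n    {link}")
```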


1 Answer

I remember there being a bunch of websites dedicated to doing this, though I don't use any of them myself. AllSides is, I think, the example I've been referred to most often.

Speaking of things that I do often:

  • On Wikipedia articles I often read the talk pages to get a sense of what viewpoints I might be missing by reading the article alone (a sketch for automating this is after this list)
  • In terms of general diversity of perspective, I think the following rough ordering of media sources holds (starting with the most diverse):
    • Online Forums/Reddit/LessWrong (mostly because healthy comment sections tend to cover a lot of ideological ground)
    • Facebook discussions among my friends
    • Textbooks (there is a particular genre of handbook-like/overview-based textbooks that tend to do a good job of summarizing many viewpoints in a field)
    • Private online blogs
    • Educational YouTube channels
    • News articles
  • For politics and culture-war-related stuff, there is r/TheMotte, which I have heard has an exceptional amount of viewpoint diversity compared to most other forums
  • I know that some people successfully use Twitter for this, but I don't use Twitter, so no idea
  • In AI Alignment we luckily have the AI Alignment Newsletter, which seems to cover a lot of the things happening around AI alignment
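As mentioned in the first bullet, the talk-page habit is also easy to fold into an automated script. A minimal sketch, assuming Python with requests and the public MediaWiki parse API; the article title is just an example:

```python
# Sketch only: fetch the raw wikitext of an article's talk page via the
# public MediaWiki API, to skim for disputed viewpoints. The article title
# below is an arbitrary example; requests is pip-installable.
import requests

API = "https://en.wikipedia.org/w/api.php"

def talk_page_wikitext(article):
    """Fetch the wikitext of Talk:<article>."""
    params = {
        "action": "parse",
        "page": f"Talk:{article}",
        "prop": "wikitext",
        "format": "json",
        "formatversion": 2,
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["parse"]["wikitext"]

if __name__ == "__main__":
    # Example: skim the discussion behind a contested article.
    print(talk_page_wikitext("Artificial intelligence")[:1500])
```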

The newsletter's author replied, quoting an earlier wording of that last point:

"In AI Alignment we luckily have the AI Alignment Newsletter, which seems to cover basically everything happening in the field"

Depends on what you call "the field": there are a fair number of judgment calls on my part, and the summaries are definitely biased towards things I can understand quickly. (For example, many short LW posts about AI alignment don't make it into the newsletter.)

habryka:
Yeah, agree. Edited to clarify a bit.