This post is about a distinction I've found clarifying: there are three different ways of providing evidence.

The first is generating data. Any study or experiment is in this category. AI alignment examples include testing approaches in toy models and conducting human experiments.

The second is pointing out data. This is when everyone already knows about the empirical evidence, but many haven't realized what it implies in a particular context. An AI alignment example is the observation that humans have an easier time learning norms than values: everyone "knows" this, but not everyone has realized that it's an argument for building norm-following AI.

(I would count reporting on intuition as a variant of this: it's also about interpreting existing data, but in this case the person cannot make their own reasoning explicit.)

The third is arguments that aren't about data at all. Anything purely mathematical or philosophical is in this category. An AI alignment example is Functional Decision Theory.

My impression is that the first category and the mathematical subset of the third are generally considered the most respectable in the scientific literature, probably because they are in some sense the most objective. Arguments in the second category may be the hardest to judge, but they can draw on several orders of magnitude more data than arguments in the first, and I think it's fair to say that a substantial part of the content on LessWrong is primarily about pointing out data. It may be that the blog post + karma system has an advantage over the peer-review process precisely because it is better equipped to assess the quality of such arguments.
