All of sawyer's Comments + Replies

I used to work in manufacturing. The vast majority of lead time in most manufacturing processes is parts/jobs waiting to move on to the next step (for a variety of reasons). So all you have to do to rush things is to move the rushed job to the top of the queue on step 1, then when it's done move it to the top of the queue on step 2, etc. It's somewhat common practice for manufacturers to employ people whose job is to expedite certain orders, basically by shepherding them through this process. 

In other words, most manufacturing lead time isn't stuff th...
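The point above can be sketched with a toy model. This is a minimal illustration, not real data: the step count, touch time, and queue wait below are made-up numbers chosen only to show that when queue waiting dominates, expediting (skipping the queues) collapses lead time to nearly the touch time alone.

```python
STEPS = 4             # number of manufacturing steps (assumed)
PROCESS_TIME = 1.0    # hours of actual work per step (assumed)
QUEUE_WAIT = 20.0     # hours a normal job waits in each step's queue (assumed)

def lead_time(expedited: bool) -> float:
    """Total time from job release to completion across all steps."""
    total = 0.0
    for _ in range(STEPS):
        if not expedited:
            total += QUEUE_WAIT   # job sits in queue behind other work
        total += PROCESS_TIME     # actual touch time at this step
    return total

normal = lead_time(expedited=False)
rushed = lead_time(expedited=True)
print(normal, rushed)  # 84.0 vs 4.0: waiting, not work, dominates lead time
```

With these numbers the expedited job finishes in under 5% of the normal lead time, even though the amount of actual work is identical.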

This reminds me of attempts to rate the accuracy of political pundits. Maybe this was in Superforecasting? Pundits are a sort of public intellectual. I wonder if one place to start with this intellectual-sabermetrics project would be looking for predictions in the writings of other intellectuals, and evaluating them for accuracy.

Expert Political Judgment discussed this. From what I remember they used a variety of experts, many of whom I believe were academics; these crossed over significantly with what we would think of as intellectuals. I like the name "intellectual-sabermetrics". Luke Muehlhauser has been calling out people online for making bad predictions. One common issue, though, is that many intellectuals are trained specifically not to communicate falsifiable predictions. They often word things in ways that sound confident but are easy to defend after the fact.

I think there's probably a fundamental limit to how good the ranking could be. For one thing, the people coming up with the rating system would probably be considered "intellectuals". So who rates the raters?

But it seems very possible to get better than we are now. Currently the ranking system is mostly gatekeeping and social signaling.

Agreed there's a limit. It's hard. But, to be fair, so are challenges like rating students, government officials, engineers, doctors, lawyers, smartphones, movies, and books. On "who rates the raters," the thought is:
1. First, the raters should rate themselves.
2. There should be a decentralized pool of raters, each of whom rates the others.
There are also methods raters could use to provide additional verification, but that's for another post.
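One way the "decentralized pool of raters who rate each other" idea could work is the simplest possible aggregation: a rater's credibility is the average of the ratings they receive from peers, with self-ratings excluded. This is a hypothetical sketch with made-up names and scores, not a description of any proposed system:

```python
# ratings[a][b] = rater a's score for rater b, on a 0-1 scale (invented data)
ratings = {
    "alice": {"bob": 0.8, "carol": 0.6},
    "bob":   {"alice": 0.9, "carol": 0.7},
    "carol": {"alice": 0.7, "bob": 0.9},
}

def credibility(name: str) -> float:
    """Mean score a rater receives from all peers (self-ratings excluded)."""
    received = [given[name] for rater, given in ratings.items()
                if rater != name and name in given]
    return sum(received) / len(received)

print({n: round(credibility(n), 2) for n in ratings})
# {'alice': 0.8, 'bob': 0.85, 'carol': 0.65}
```

A plain mean is the crudest choice; a real system would presumably want to weight ratings by the credibility of the rater giving them, which turns this into a fixed-point (PageRank-style) computation rather than a single pass.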