As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that?
The solution is probably not a book. Many books on escaping the rat race have been written, any of which could be downloaded for free in the next 5 minutes, yet people don't download them, and if some do in reaction to this comment they probably won't get very far.
Problems this big and this resistant to being solved are not waiting for some lone genius to find the 100,000-word combination that will drive a stake right through the middle. What this problem needs most is lots of smart but unexceptional people hacking away at the edges. It needs wikis. It needs offline workshops. It needs case studies from people like you so it feels like a real option to people like you.
Then there's the social and financial infrastructure part of the problem. Things such as:
I've been following your whole series on moral mazes. I felt the rest of them were important because they explained why "working for the man" was bad in explicit terms, but this one was a pleasant surprise. Until about halfway through this post, I was under the impression you were articulating the dangers of moral mazes in the abstract while carefully ignoring any implications they would have for your own career on Wall Street. The moment I realised you'd actually quit was jaw-dropping, given that I already knew you weren't staying in that situation because you had a good use for the money.
My only complaint about this post is that its intellectually detached style and lack of object-level game plans will prevent it from feeling like a real option to a lot of readers. Most people know that something is wrong with these systems, but when the rubber meets the road, they default to the familiar script the same way you did. Intellectual understanding of a problem is necessary for a certain kind of person to take action, but it isn't sufficient, and in some cases it can leave people dangerously unprepared for reality, much the way learning karate can leave people unprepared for a street fight.
Often what needs reviewing is less like "author made an unsubstantiated claim or logical error" and more like "is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable?"
I agree with this, but given that these posts were popular because lots of people thought they were true and important, deeming the entire worldview of the author flawed would imply the worldview of the community was flawed as well. It's certainly possible that the community's entire worldview is flawed, but even if you believe that to be true, it would be very difficult to explain in a way that people would find believable.
Those numbers look pretty good in percentage terms. I hadn't thought about it from that angle and I'm surprised they're that high.
FWIW, my original perception that there was a shortage was based on the ratio between the quantity of reviews and the quantity of new posts written since the start of the review period. In theory, the latter takes a lot more effort than the former, so it's surprising that more people do the higher-effort thing unprompted while fewer people do the lower-effort thing despite explicit calls to action and $2000 in prize money.
I'm not surprised to learn that is the case.
This is my understanding of how karma maps to social prestige:
The shortage of reviews is both puzzling and concerning, but one explanation is that the expected financial return of writing reviews for the prize money is not high enough to motivate the average LessWrong user, and the expected social prestige for commenting on old things is lower per unit of effort than that for writing new things. (It's certainly true for me: I find commenting far easier than posting, but I've never got any social recognition from it, whereas my single LW post introduced me to about 50 people.)
Another potential reason is that it's pretty hard to "review" the submissions. Like most essays on LessWrong, they state one or two big ideas and then spend the vast majority of their words explaining those ideas and connecting them to other things we know. This insight density is what makes them interesting, but it also makes it very hard to evaluate the theories within them. If you can't examine the evidence behind a theory, you have to either assume it or challenge the theory as a whole, which is what usually happens in the comments section after a post is first published. If true, this means you're not really asking for reviews, but for lengthy comments that say something that wasn't already said last year.
I find this theory intuitively plausible, and I expect it will be very important if it's true. Having said that, you didn't provide any evidence for this theory, and I can't think of a good way to validate it using what I currently know.
Do you have any evidence that people could use to check this independently?
One possibility is that
1. The DMV is especially bad, because people don't have to tolerate using it on a weekly basis.
2. The USPS isn't especially good, but it's hard to notice because American delivery companies aren't much better by comparison.
I've already given this an upvote, but I'm also leaving a comment because I think LessWrong has a shortage of this kind of content. I think broad personal overviews are particularly important because a lot of useful information you can get from "comparing notes" is hard to turn into standalone essays.
Yesterday I noticed that some of what I'd attributed to cultural differences in communication strength between myself and the LessWrong audience was actually due to differences in when I would choose to verbalise something. I originally thought this was me opting to state my positions clearly instead of couching them in false uncertainty so they would sound less abrasive, but yesterday I left some comments where I found myself wanting to use vocabulary that was significantly more "nuanced" than it used to be (example), and yet I didn't feel like I was being insincere.
I don't think this is a case of learning from my youthful hubris or assimilating into rationalist culture, as I still endorse both the opinion and the tone it was expressed in. The real difference seems to be the *stage* at which I voiced my opinion. In the old comment, I was discussing a topic I had spent a lot of time thinking about and researching, and came to the conclusion that the community was making insane decisions because they were the default option. Whereas in yesterday's set of comments, I had a few strong points, but I hadn't reached a strong conclusion overall before I entered the discussion.
I think this raises an important problem with our discussion norms. If you've figured out that the community has made a big mistake, having "read ahead of the class" puts you at a disadvantage, because effective persuasion requires you to emulate ignorance of any information more than a few inferential steps ahead of your audience.