I recently heard about SIAI's Rationality Minicamp and thought it sounded cool, but for logistical/expense reasons I won't be going to one. 

There are probably lots of people who are interested in improving their instrumental rationality, and who know about and like LessWrong, but who haven't read the vast majority of its content because there is just so much material and the practical payoff is uncertain.

It would be cool if it were much easier for people to find the highest-ROI material on LessWrong.

My rough idea for how this new instrumental rationality tool might work:


  • It starts off as a simple wiki focused on instrumental rationality. People only add things to the wiki (often just links to existing LessWrong articles) if they have tried them and found them very useful for achieving their goals.
  • People are encouraged to add "exercises" that help you develop the skill represented by the article, of the type that are presumably done at the Rationality Minicamps.
  • Only people who have tried the specific thing in question should add comments about their experiences with it.
  • Long-term goal: every LessWrong user can define their own private stack rank of the concepts/techniques/habits they consider most important for instrumental rationality. Some LessWrong software merges these private rankings into a single global stack rank of the highest-ROI ideas/behaviors/techniques, as judged by the community at any given time. People looking to improve their instrumental rationality can then visit the global stack rank, pick the highest item they haven't tried yet to experiment with, and work backwards through any prerequisites. (A sketch of one possible merging scheme follows this list.)
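To make the merging step concrete, here is a minimal sketch of one way the private rankings could be combined, using Borda count, a simple rank-aggregation rule. The function name, data shapes, and example technique ids are all invented for illustration; this is not an actual LessWrong feature.

```python
# Hypothetical sketch: merge private per-user stack ranks into one
# global stack rank via Borda count. Names and data are illustrative.
from collections import defaultdict

def merge_stack_ranks(user_rankings):
    """user_rankings: list of lists of technique ids, most important first.

    Items missing from a user's list simply earn no points from that user.
    """
    scores = defaultdict(float)
    for ranking in user_rankings:
        n = len(ranking)
        for position, technique in enumerate(ranking):
            # Borda count: first place earns n points, last place earns 1.
            scores[technique] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Example: three users with partially overlapping private stack ranks.
global_rank = merge_stack_ranks([
    ["pretests", "spaced-repetition", "pomodoro"],
    ["spaced-repetition", "pretests"],
    ["pomodoro", "spaced-repetition", "beat-procrastination"],
])
print(global_rank)
# ['spaced-repetition', 'pretests', 'pomodoro', 'beat-procrastination']
```

More robust aggregation rules exist (e.g. pairwise/Condorcet methods), but any of them would slot into the same place in the design.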


Do you think others would find this useful? Does anyone have suggestions for improvements?


Pretests. Pretests are critical.

Whoa! Interesting! Here's the pdf for the curious.

Nifty. I think I'll add those 2 links to my Spaced repetition page.

EDIT: OK, maybe not. The PDF turns out not to support the blog post's claims of causal efficacy, since it's doing something different.

This is something I've been thinking about for quite some time:

I had an idea for a web-based app for evaluating instrumental rationality techniques, something like Digg or a UserVoice-based forum where techniques get upvoted, downvoted, merged, separated and discussed. However, I don't currently have a solution for the problem of 'impulse upvoting' ("hey, this technique sounds cool, let's upvote it!") -- I don't know how to make the upvotes reflect long-term usefulness of the techniques.

My current best idea regarding the impulse upvote problem is "making techniques pay rent", literally:

Each app user has a fixed small number of homepage slots for techniques. If a technique doesn't work for a user, they can kick it out of the slot and replace it with another promising technique. Also, users can purchase more homepage slots for real-world money. Or the app can go even further: it can be free to download, but each slot will cost the user $0.99 per month.

This way we can rank the techniques based on their "rent time", i.e. the time they spend in their slot, which is only counted for active users of the app.
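A rough sketch of how that rent-time ranking might be computed, assuming the app logs when a technique enters and leaves a slot. The event format and all names here (SlotEvent, rent_time_ranking) are made-up assumptions, not a real app's API.

```python
# Hypothetical sketch of the "rent time" metric: rank techniques by the
# total time they occupied a homepage slot, counted only for active users.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SlotEvent:
    user: str
    technique: str
    placed_at: float    # timestamp the technique entered the slot
    removed_at: float   # timestamp it was kicked out (or "now" if still held)

def rent_time_ranking(events, active_users):
    """Total slot-occupancy time per technique, active users only."""
    rent = defaultdict(float)
    for e in events:
        if e.user in active_users:
            rent[e.technique] += e.removed_at - e.placed_at
    return sorted(rent.items(), key=lambda kv: kv[1], reverse=True)

events = [
    SlotEvent("alice", "pomodoro", 0, 30),
    SlotEvent("alice", "beat-procrastination", 30, 90),
    SlotEvent("bob", "pomodoro", 0, 90),
    SlotEvent("carol", "pomodoro", 0, 90),  # carol is inactive: not counted
]
print(rent_time_ranking(events, active_users={"alice", "bob"}))
# [('pomodoro', 120.0), ('beat-procrastination', 60.0)]
```

Restricting the count to active users is what keeps abandoned installs from inflating a technique's rent time.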

Many of the techniques I've found on Less Wrong have increased my available time, money, energy, mood... if the only way I could have learned and used a technique was to pay money for it, I would gladly have paid. If there was a way to pay back, say, 10% of my actual gains from How to Beat Procrastination to Luke to do with as he wishes, I would press that button. Issues include: not correctly estimating the counterfactual (without technique X, how well would I really have done? Surely not a complete crash-and-burn... and what were the actual consequences of not doing well? Surely not as bad as my overestimating-losses brain estimates...); overcounting "extra time" that in part gets filled with things I will later remove to make more extra time; sending the rewards to the proximate cause of my learning the technique rather than to someone further up the origin tree; people gaming the system as soon as it involves money they can steal; and probably others too marginal to come to mind.

If there was a way to pay back, say, 10% of my actual gains from How to Beat Procrastination to Luke to do with as he wishes, I would press that button.

I think it's the grey button midway down http://singularity.org/donate/

Sure, maybe, for Luke. What I want is not for SI to receive more money because Luke shared content that improved my life.

Instead, I want to remove the inconveniences which make it hard to set in place the incentive "whosoever shares content that significantly improves lives will receive part of that improvement in compensation", regardless of whether the recipient might turn around and send that compensation to SI. Then I want to do my part in setting that incentive in place by pressing, and letting it be known that I press, all of those buttons, while also magically not incurring the cost of lots of low-quality fishing posts.

That said, parts of me were not happy with my giving as much as I have to SI until I pointed out to them that I clearly paid more per benefit accrued for a college education... those parts are a little silly anyway, so they didn't notice that I overpaid for the benefits I gained from the college education, and they're happy now. ;)

The usefulness of some techniques may depend on the domain where you want to use them. For example, the technique "if you don't know something, google it and follow the 3 highest links" depends on whether your problems are described on the internet and how trustworthy the answers are. For "how do I join two strings in Python?" this technique works great; for financial questions, not so great, because website owners have a huge incentive to promote wrong answers.
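(For what it's worth, the googled answer to that example question does check out:)

```python
# The standard answers to "how do I join two strings in Python?":
a, b = "less", "wrong"
print(a + b)            # 'lesswrong' -- plain concatenation
print("".join([a, b]))  # 'lesswrong' -- str.join scales better to many strings
```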

Also, the same technique may have different results for different kinds of people, because of their environment, previous knowledge, personality, gender, social class, financial situation, or whatever. If you omit those details, you only get average results over the general population, which is not bad either, but does not lead to an optimal choice.

Measuring the impact of a technique is difficult. How sure are you that it was this technique that helped, and not something else? Maybe it was a placebo effect or just a coincidence. If we had hundreds of data points, the coincidences would average out, but we probably won't have that much data.
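To illustrate the averaging-out point, here is a toy simulation in which a small real effect is buried in much larger individual noise; the standard error of the estimate shrinks roughly as 1/sqrt(n). The effect size and noise level are invented numbers, not data about any real technique.

```python
# Toy simulation: a small real effect (+0.2) hidden in individual noise
# (sd 1.0). With few data points the estimate is dominated by coincidence;
# with many, it converges on the true effect. All numbers are invented.
import random
import statistics

random.seed(0)
effect, noise_sd = 0.2, 1.0

for n in (5, 50, 500):
    changes = [random.gauss(effect, noise_sd) for _ in range(n)]
    mean = statistics.mean(changes)
    # Standard error shrinks as 1/sqrt(n).
    se = statistics.stdev(changes) / n ** 0.5
    print(f"n={n:4d}  mean={mean:+.2f}  +/-{1.96 * se:.2f}")
```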

It always amazes me how a self-proclaimed rationalist community that wouldn't accept a medical intervention based on anecdotal evidence has no problem accepting techniques for improving rationality based on anecdotal evidence.

It ... amazes me how a self-proclaimed rationalist community ... has no problem accepting techniques ... based on anecdotal evidence.

What evidence do you base this observation on? :-) The post is at +4 after a few days, which is barely better than going negative, and there isn't much on-topic discussion in the comments. Whatever positive reception it gets is probably due to hypothetical variations on the suggestion that would indeed have some merit, unlike the specific thing proposed.

I would like basic short-answer/multiple-choice tests for LessWrong content. I have not been through many of the Sequences, but I have been around people who have, taken part in discussions about them, and read a lot of important posts from sequences I haven't read entirely. I would like to know whether I actually know the material. Such tests could also be a good teaching tool in themselves (see the comment on pretests above), and could assess knowledge of LessWrongian terms and phrases.

Even if they don't really test one's skill as a rationalist, such tests can still be incredibly useful.
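A toy sketch of what such a multiple-choice test could look like; the question, choices, and scoring loop are invented for illustration, not an existing LessWrong tool.

```python
# Hypothetical sketch of a multiple-choice pretest on LessWrong content.
QUESTIONS = [
    {
        "prompt": "'Making beliefs pay rent' means a belief should...",
        "choices": ["be popular", "constrain anticipated experience",
                    "be comforting", "be unfalsifiable"],
        "answer": 1,
    },
]

def run_pretest(questions):
    score = 0
    for q in questions:
        print(q["prompt"])
        for i, choice in enumerate(q["choices"]):
            print(f"  {i}) {choice}")
        picked = int(input("Your answer: "))
        score += picked == q["answer"]  # bool counts as 0 or 1
    print(f"Score: {score}/{len(questions)}")

if __name__ == "__main__":
    run_pretest(QUESTIONS)
```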

This is a nicely written proposal for a practical, actionable idea. Even if you're not in tech, you should consider doing this. It starts with a practical idea and has room to branch out from the long-term goal into other areas, such as psychometrics.

I'm biased due to my open source project, but I think this is the kind of idea that fits well with cryptographically secure peer-to-peer systems that then aggregate individual rankings into group-level ones, since individual opinions are highly variable (correctly so, as different brains need different training).