What should a not-very-smart person do?  Suppose you know a not-very-smart person (around or below average intelligence).  S/he read about rationality, has utilitarian inclinations, and wants to make the world better.  However, s/he isn't smart enough to discover new knowledge in most fields, or contribute very much to a conversation of more knowledgeable experts on a given topic.  Let's assume s/he has no exceptional talents in any area.

How do you think a person like that could best use his/her time and energy?  What would you tell the person to do?  This person may be, compared to the average LW readership, less capable of noticing the irrationality in his/her own actions even if s/he wants to be rid of it, and less able to notice the flaws in a bad argument.  S/he may never be able to deeply understand why certain arguments are correct or why certain scientific facts have to be the way they are, and telling him/her to be sure or unsure about anything seems dangerous if s/he doesn't really understand why.  

My practical advice might be:

1) If you want to give to charity, follow GiveWell recommendations.  

2) Learn about the basic biases, and commit to resisting them in your own life. 

3) Follow advice that has been tested and that correctly predicts a positive outcome.  If a hypothesis is untestable (there's an undetectable dragon in your garage), doesn't predict anything (fire comes from phlogiston in combustible substances), or is tested and demonstrably false (God will smite you if you say he doesn't exist), don't waste time and energy on it.  If you want to improve, look for tested methods that have significant positive results in the area of interest.  Similarly, if a person regularly gives you advice that does not lead to good outcomes, stop following it, and if someone gives advice that does lead to good outcomes, start paying attention even if you like that person less.  


At a more general level, my thoughts are tentative, but might include basic LW tenets such as:

1) Don't be afraid of the truth, because you're already enduring it.

2) If all the experts in a field agree on something, they might be wrong, but you are extremely unlikely to be better at uncovering the truth, so follow their advice, which might appear to conflict with...

3) Don't trust deep wisdom.  Use Occam's razor; think about simple, basic reasons something might be true (this seems good for religion and moral issues, bad for scientific ideas and understanding).

4) If you find yourself flinching away from an idea, notice that, and give it extra attention.  

Note:  I mean this as a serious, and hopefully non-insulting question.  Most people are intellectually near-average or below-average, and I have not seen extensive discussion on how to help them lead happier lives that make the world a better place. 

94 comments

Here's my advice: always check Snopes before forwarding anything.

I wish there was a checkbox in email sites and clients "check incoming messages against known urban myths". Probably no harder to implement than the current automatic scam and spam filtering.
Do people actually still get those things? I have literally never received one of those chain letters or story-forwardings.
Then there would be a next level in the arms race. Just like spammers used to add "this is not a spam" disclaimers, people who create hoax mails would add something like: "When you send this e-mail to your friends, ask them later whether they received it, because it is being removed from the internet." Or the hoaxes would be sent as attached images.
There is hardly any problem with spam anymore. Gmail detects virtually 100% of it these days. Maybe a few spam messages a year make it through to my Inbox.

More actionable rules might be better such as:

Wear a seat belt when driving.
Save 10% of your income through your pension plan's index fund option.
Don't smoke.
Practice safe sex.
Sign up for cryonics.

Wear a seat belt when driving.

Don't smoke.

Practice safe sex.

Safe bets: these significantly increase your life and health expectancy at almost no cost.

Save 10% of your income through your pension plan's index fund option.

This heavily depends on your age, country, social level (which affects future discounting) and what not and is thus in its generality questionable.

Sign up for cryonics.

I don't know what kind of advice this is. Sure you are convinced that it may be right for you, but it is as far away from item 2) above as it can be.

From the standpoint of normal people cryonics is not very different from other afterlife memes, and thus adding it to the list risks discrediting (to normal people) the whole list.

It seems to me that the advice about Givewell has a lot of evidence behind it, but the rest of the advice doesn't have much evidence that it gives any benefit at all, for people of average intelligence or otherwise. It would be good to have a Givewell-like project that evaluated the costs and benefits of following various rationality advice.

CFAR is kind of working along these lines.
Heck, just having some kind of metric to see whether people were following rationality advice would be a big step forward. We can get a visceral impression that someone is more or less formidable, and we can spot patterns of repeated mistakes, but we don't really have a good way of seeing the extent to which someone is applying rationality advice in their daily lives. (Of course this is just a restatement of the good old "Rationality Dojo" problem, one of the very first posts in the Sequences.) Paper tests don't really capture the ability to apply the lessons to real-world problems that people actually care about.

I have a fairly wide variety of friends. Here's some advice I find myself giving often, because it seems to cover a lot of what I think are the most common problems. The wording here isn't how I'd say it to them.

Health and lifestyle

  • Don't engage the services of any non-evidence based medical practitioner.
  • If you have a health problem, you're receiving treatment/advice, and not obviously improving, get a second opinion. And probably also a third. (I am not in the US)
  • Don't smoke cigarettes. If you already smoke cigarettes, buy an e-cigarette (like, right now), even if you're sure you'll never use it. Once you've bought it, do try to use it, even if you don't think you'll like it.
  • Always use barriers for vaginal and anal penetration outside a monogamous relationship
  • Use either barriers or hormonal birth control for p-in-v intercourse
  • If you or your partner is having non-monogamous sex, get STI tests twice a year
  • If you frequently feel depressed or anxious, see a psychologist who practices cognitive behavioural therapy. Do not see one who practices psychoanalysis (see "evidence based health care"). Expect to trial several psychologists before finding one who is a good fit f
... (read more)
Make that “never gamble large sums of money” -- spending €7 for a poker tournament with your friends isn't obviously worse than spending €7 on a movie ticket IMO. I agree about pretty much all of the list -- and most of it is also good advice for pretty much all people, not just normal ones.
"Don't hold a credit balance on a credit card" might be valid general advice. There are however many cases where the miles or cashback you can get through credit cards provide a valuable benefit. A card also builds a credit rating that might be valuable for getting a mortgage, and given the tax-deductible status of mortgages for buying a home, they aren't completely bad.
I don't think miles and cashback are the primary benefit of a credit card, although they're handy. A credit rating on the other hand is very important: at least in the US, finding housing (including apartment housing) is seriously complicated by having bad or no credit, and the same goes for buying vehicles or anything else customarily paid for on an installment plan. Making major purchases on credit is a good deal if you think hanging onto the money is worth more to you yearly than the APR, which isn't unlikely if you're investing; basically it's leverage. If you find credit cards morally objectionable, or are absent-minded enough not to always pay them off on time, you can probably drop the card once you've been approved for an automotive loan or something comparably serious. I use mine to handle gas and certain other predictable expenses.
That's an interesting list. Without going into individual items, what goal structure does it support? In other words, what is it that you want to do (or, maybe, be) that following the advice on this list enables you?

Just wanted to point out an implicit and not necessarily correct assumption, leading to poor-quality advice:

Suppose you know a not-very-smart person (around or below average intelligence)

It seems that you assume that intelligence is one-dimensional. In my experience, while there is a correlation, most people are smarter in some areas than in others. For example, a mathematical genius may be incapable of introspection and have no interest in rational thinking outside math. Let's take your example:

S/he read about rationality, has utilitarian inclinations, and wants to make the world better. However, s/he isn't smart enough to discover new knowledge in most fields, or contribute very much to a conversation of more knowledgeable experts on a given topic. Let's assume s/he has no exceptional talents in any area.

First, an "average person" does not read about rationality and has no "utilitarian inclinations". They often do want to make the world better if socially conditioned to do so by their church or by the TV commercials showing a sick child in the 3rd world whom you can save for a dollar a day or something. So, the person you describe is not "averag... (read more)

IIRC the optimization power has to be cross-domain according to his definition, otherwise Deep Blue would count as intelligent.
That doesn't seem to count as a problem with the above definition. Taboo "intelligent." Is Deep Blue an optimizing process that successfully optimizes a small part of the universe? Yes. Is it an optimizing process that should count as sentient for the purposes of having legal rights? Should we be worried about it taking over the world? No.
Deep Blue is a narrow AI...
I agree that the word intelligence is too vague, but I'm specifically not including a mathematical genius (who would have an exceptional talent in the area of mathematics). I strongly disagree that average people can't or don't have utilitarian inclinations. I think utilitarianism is one of the easiest philosophies to grasp, and I know a lot of average-IQ people who express the desire to "do as much good as possible" or "help as many people as possible." Even the advertisements for charities that you mention tend to stress how much good can be achieved with how little money. It's certainly good to customize advice, but I think there is a class of advice I would offer to smart, skeptical people that I would hesitate to give to others. For example, I would tell my brightest students to question expert advice, because then they can more deeply understand why experts think what they do, or potentially uncover a true fault in expert reasoning. With my less-bright pupils, I find that this pushes towards conspiracy theories and pseudoscience, and thus more frequently advise them to trust experts and distrust people on the fringe. When smart people question mainstream scientific thinking, they may go astray. In my experience, when average-or-below intelligence people question mainstream scientific thinking, they almost always go astray, and when they don't it's usually coincidence. I'm trying to figure out how to help them understand things more deeply and question things in a more productive manner, and definitely borrowing lots of ideas from LW, but I still think there is a lot more room for improvement.
I'm sure they express the desire, but do they actually desire it and do they actually do it?

Read Yvain's Epistemic Learned Helplessness. You can be convinced of anything by good arguing, but forewarned is forearmed.

"Study rationality anyway. Work harder to make up for your lack of intelligence." I don't think most of LessWrong's material is out of reach of an average-intelligence person.

"Think about exactly what people mean by words when they use them; there are all kinds of tricks to watch out for involving subtle variations of a word's meaning." Read Yvain's "The Worst Argument in the World".

"Don't fall for the sunk cost fallacy, which is what you're doing when you say 'This movie I'm watching sucks [absolutely, not just relativ... (read more)

Wasn't the average IQ here from the survey something like 130+?

These statements don't necessarily contradict each other. Even if average-intelligence people don't read Less Wrong, perhaps they could. Personally, I suspect it's more because of a lack of interest (and perhaps a constellation of social factors).

I bet the average LessWrong person has a great sense of humour and feels things more than other people, too.

Seriously, every informal IQ survey amongst a group/forum I have seen reports very high IQ. My (vague) memories of the LessWrong one include people who seemed to be off the scale (I don't mean very bright; I mean IQs that either have never been given out in official testing, as opposed to online tests, or possibly just can't be got on those tests, and people were lying).

There's always a massive bias in self-reporting: this will only be emphasised on an intellectual website that starts the survey post by saying that LessWrongers are, on average, in the top 0.11% for SATs, and gives pre-packaged excuses for not reporting inconvenient results: "Many people would prefer not to have people knowing their scores. That's great, but please please please do post it anonymously. Especially if it's a low one, but not if it's low because you rushed the test" (my emphasis).

If there's a reason to be interested in average IQ beyond mutual ego-massage, I guess the best way would be to have an IQ test where you logged on as 'Less Wrong member X' and then it reported all the results, not just the ones that people chose to share. And where it revealed how many people pulled out halfway through (to avoid people bailing if they weren't doing well).

Selection bias - which groups and forums actually asked about IQ?

Your average knitting/auto maintenance/comic book forum probably has a lower average IQ but doesn't think to ask. And of course we're already selecting a little just by taking the figures off of web forums, which are a little on the cerebral side.

True. I don't think I can define the precise level of inaccuracy or anything. My point is not that I've detected the true signal: it's that there's too much noise for there to be a useful signal. Do I think the average LessWronger has a higher IQ? Sure. But that's nothing remotely to do with this survey. It's just too flawed to give me any particularly useful information. I would probably update my view of LW intelligence more based on its existence than its results. In fact, reading the thread lowers my opinion of LW intelligence, simply because this forum is usually massively more rational and self-questioning than every other forum I've been on, which I would guess is associated with high IQ, and people taking the survey seriously is one of the clearest exceptions. BTW, I'm not sure your assessments of knitting/auto maintenance/comic books/web forums are necessarily accurate. I'm not sure I have enough information on any of them to reasonably guess their intelligence. Forums are particularly exceptional in terms of showing amazing intelligence and incredible stupidity side by side.
Would still suffer from selection effects. People that thought they might not do so well would be disinclined to do it, and people who knew they were hot shit would be extra inclined to do it. The phrase "anonymous survey" doesn't really penetrate into our status-aware hindbrains.
Yep! But it's the best way I can imagine that someone could plausibly create on the forum.
Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the selection effect caused by people not participating. You can also improve participation by giving those users an easily accessible icon on Less Wrong itself which takes them directly to the test, and maybe a popup reminder once a day or so when they log on to the site if they've been selected but haven't done it yet. Requires moderate coding.
I would find such a feature to be extraordinarily obnoxious, to the point that I'd be inclined to refuse such a test purely out of anger (and my scores are not at all embarrassing). I can't think of any other examples of a website threatening to publicly shame you for non-compliance.
btw, in Markdown use double asterisks at each end for bold, like this: **bold text** (two at the start and two at the end).
The average self-reported IQ. If we really wanted to measure LWs collective IQ, I'd suggest using the education data as a proxy; we have fairly good information about average IQs by degree and major, and people with less educational history will likely be much less reticent to answer than those with a low IQ test result since there are so many celebrated geniuses who didn't complete their schooling.
The average tested IQ on the survey was about 125, which is close to my estimate of the true average IQ around here; I don't entirely trust the testing site that Yvain used, but I think it's skewing low, and that ought to counteract some of the reporting bias that I'd still expect to see. 125 is pretty much in line with what you'd expect if you assume that everyone here is, or is going to be, a four-year college graduate in math, philosophy, or a math-heavy science or engineering field (source). That's untrue as stated, of course, but we do skew that way pretty hard, and I'm prepared to assume that the average contributor has that kind of intellectual chops.
I think that's a fair assessment, although it might be because my guess was around 120 to start with. I never meant to say we're not smart around here, far from it, but I don't think we're all borderline geniuses either. It's important to keep perspective and very easy to overestimate yourself.
To comprehend a text, a person must:
  1. Become aware of it
  2. Have some reason for reading it
  3. Find those reasons sufficient to spend time reading it
  4. Read it
  5. Put forth the cognitive effort to understand it (reading something and putting forth cognitive resources to understand it are not the same thing)
  6. Succeed in understanding it
Intelligence is just one component of knowledge acquisition, and probably less important than affective issues. Often, intelligence acts indirectly by affecting affect, but in such cases, those effects can be counteracted. The mistaking of performance of cognitive tasks for intelligence is, I believe, often an aspect of the fundamental attribution error.
Not anymore, though only just. The 2012 survey reports a mean of 138 and change with a SD of 12.7. It was 140 or higher on the 2011 and 2009 surveys, though. All the usual self-reporting caveats apply, of course.
I'm interested in operationalising your advice. By study rationality, I assume you mean read rationality blogs and try to practice persuasive prescriptions. At the moment I only read LessWrong regularly, but I try to give the blogs mentioned in the sidebar a go once in a while. Robin Hanson opened my mind in this Youtube interview, but I find it hard to understand his blog posts on Overcoming Bias. I thought maybe that's because the blog posts are very scholastic and I'm not up to date. I don't find this is the case on SSC, but it is occasionally the case here on LessWrong. If you could describe the intended readership of each rationality blog in a way for potential audience members to decide which to commit to reading, how would you do it? Could you come up with a scale of academic rigour vs accessibility, or similar?
The thing that struck me most about the sequences was how accessible they were. Minimal domain-specific jargon and a comprehensive (excessive at times, in my opinion) explanation of each concept in turn. I do believe LessWrong is not over-the-top inaccessible, but as the existence of this post implies, it seems that's not always agreed upon.

I think this underestimates the difficulty average humans have with just reading upwards of 2500 words about abstract ideas. It's not a question even of getting the explanation, it's a question of simply being able to pay attention to it.

I keep repeating this: The average human is extremely average. Check your privilege, as the social-justice types might say. You're assuming a level of comfort with, and interest in, abstraction that just is not the case for most of our species.

Upvoted. Every time I'm tempted to write a long post aimed at the general public, I've found it worthwhile to look at a math or biochemistry paper that is far outside of my knowledge domains -- the sort of stuff that requires you to go searching for a glossary to find the name of a symbol. Sufficiently abstract writing feels like that to a significant fraction of the populace, and worse, even Up-Goer 5-level writing looks like it will feel like that to a lot of people who've been trained into thinking they're not good enough at this sort of material. ((I still tend to make a lot of long posts, because apparently I am a terrible person.))
Datum: I know at least one person who refuses to read LW links, explicitly because they are walls of text about abstract ideas that she thinks (incorrectly, IMO) are over her head. This occurs even for topics she's normally interested in. So things like that do limit LW's reach. Whether they do so in a significant fraction of cases, I don't know. But the impact is non-zero.
This doesn't seem to me to be about fundamental intelligence, but about upbringing/training/priorities. You say in another response that IQ correlates heavily with conscientiousness (though others dispute it). But even if that's true, different cultures/jobs/education systems make different sorts of demands, and I don't think we can assume that most people who aren't currently inclined to read long, abstract posts can't do so. I know from personal experience that it can take quite a long while to get used to a new way of taking in information: lectures rather than lessons, reading rather than lectures, reading different sorts of things (from science, to arguments relying on formal or near-formal logic, to broader humanities). And even people who are very competent at focusing on a particular way of gaining information can get out of the habit and find it hard to readjust after a break. In terms of checking privilege, there is a real risk that those with slightly better training/jargon, or simply those who think/talk more like ourselves, are mistaken for being fundamentally more intelligent/rational.
Well, then I have to ask what you think "fundamental intelligence" consists of, if not ability with (and consequently patience for and interest in) abstractions. Can we taboo 'intelligence', perhaps? We are discussing what someone ought to do who is average in something, which I think we are implicitly assuming to be bell-curved-ish distributed. How changeable is that something, and how important is its presence to understanding the Sequences?
I reject the assumption behind 'ability with (and consequently patience for and interest in)'. You could equally say 'patience for and interest in (and consequently ability in)', and it's entirely plausible that said patience/interest/ability could all be trained. Lots of people I know went to schools where languages were not prioritised in teaching. These people seem to be less inherently good at languages, to have less patience with languages, and to have less interest in them. If someone said 'how can they help the Great Work of Translation without languages', I could suggest back office roles, acting as domestic servants for the linguists, whatever. But my first port of call would be 'try to see if you can actually get good at languages'. So my answer to your question is basically that by the time someone is the sort of person who says 'I am not that intelligent but I am a utilitarian rationalist seeking advice on how to live a more worthwhile life', they are either already higher on the bell curve than simple 'intelligence' would suggest, or at least highly likely to be able to advance.
Oh no, I don't expect very many people to read it all. I expect a select few articles to go viral every now and then, though. This wouldn't be possible if the writing wasn't clear and accessible.
Sure, but I suggest that "viral on the Internet" for a long text article does not in fact mean that humans of average intelligence are reading it. The Internet skews up in intelligence to start with, but the stuff that goes viral enough to be noticed by mainstream media - which at least in principle reach down to the average human - is cat videos and cute kids, not long articles. Sequence posts may certainly go viral among a Hacker-News-ish, technical, college-educated, Populares-ish sort of crowd, but that's already well outside the original "average intelligence" demographic.
I think you're vastly underestimating internet usage here. One of the best things Facebook has done (in my opinion) is massively proliferate the practice of internet arguing. The enforced principle of not getting socked by someone in a fit of rage just makes the internet so irresistible for speaking your mind, you know? Additionally, every so often I see my siblings scrolling through Facebook or some "funny image collection" linked from Facebook, seeing for the first time images I saw years ago. If the internet has a higher-than-average intelligence, then the internet usage resulting from Facebook is a powerful intelligence boost to the general population. I suppose I should write my analysis here into a proper post some time, as I do consider it a significant modern event.
I agree that internet usage has led to a massive proliferation of certain types of knowledge and certain types of intelligent thought. At the same time, it's important to note that image memes, Twitter, and Tumblr have increasingly replaced Livejournal and other long-form writing at the same time that popular discussion has expanded, and style guides have increasingly encouraged three-sentence paragraphs over five-sentence paragraphs for internet publishing. There are a few exceptions -- fanfiction has been tending to longer and longer-form, often exceeding the length of what previous generations would traditionally consider a doorstopper by orders of magnitude* -- but much social media focuses on short and often very short form writing. * There are at least a dozen Harry Potter fanfictions with a higher wordcount than the entire Harry Potter series, spinoff media included. Several My Little Pony authors have put out similar million-word-plus texts in just a few years, including a couple of the twenty most-read fictions. This may increase tolerance for nonfiction long reads, although I'm uncertain the effects will hit the general populace.
I agree that the Internet is a boost to human intelligence, relative to the TV that it is replacing and to whatever-it-was that TV replaced - drinking at the pub, probably. I don't think the effect is large compared to the selection bias of hanging out in LW-ish parts of the Internet.
I'd agree if I thought LessWrong performed better than average.
What metric would you propose to measure LW performance?
My current heuristic is to take special note of the times a well-performing LessWrong post identifies one of the hundreds of point-biases I've formalized in my own independent analysis of every person and disagreement I've ever seen or imagined. I'm sure there are better methods to measure this that LessWrong can figure out for itself, but mine works pretty well for me.
Not quite sure what you mean here; could you give an example? But this aside, it seems that you are in some sense discussing the performance of LessWrong, the website, in identifying and talking about biases; while I was discussing the performance of LessWrongers, the people, in applying rationality to their real lives.
A good example would be any of the articles about identity. It comes down to a question of what frequency of powerful realizations individual rationalists are having that make their way back to LessWrong. I'm estimating it's high, but I can easily re-assess my data under the assumption that I'm only seeing a small fraction of the realizations individual rationalists are having.
I think the sequences are accessible for how abstract they are and how unfamiliar the ideas are (usually, abstraction and unfamiliarity decrease accessibility). I work as a tutor in a program for young people, and one of the interesting parts of the program is that all of the students are given a variety of tests, including IQ tests, which the tutors have access to as part of an effort to customize teaching approach to best suit students' interests and abilities. I have all kinds of doubts about how ethical and useful the program is, but it has taught me a lot about how incredibly widely people vary. I don't believe most of my students would get much out of the sequences, but perhaps I'm too pessimistic. I think even if they understood the basic argument, they would not internalize it or realize its implications. I'd guess that their understanding would be crappily correlated with IQ. I have been spending a lot of time trying to figure out how to communicate those ideas without simplifying away the point.

Play to your strengths; do what you're best at. You don't have to be best in the world at it for it to be valuable.

Good things about this advice are (a) it has a fairly-sound theory behind it (Comparative advantage), and (b) it applies whether or not you're smart, normal or dumb, so you don't get in to socially-destructive comparisons of intelligence.

When in doubt, ask. The StackExchange network is great for getting answers to questions.

Skeptics StackExchange, for example, is great for general questions.
If you encounter a significant claim on the internet, it's often a useful site to check. Recently I came across the claim that batteries follow something like Moore's law. I headed over to Skeptics StackExchange and posted a question.

Another useful habit is Anki. Especially if you don't trust your brain to remember information on its own, let Anki help you.

look for tested methods that have significant positive results relevant to the area of interest

This part of the advice needs to be more specific: which "positive results" should be trusted and which not? Everyone who wants to sell you something will tell you about "positive results".

My 3 pieces of advice, for someone already convinced that something fairly accurate is going on, would be:

1) Try to sign up for cryonics before dying.

2) When donating to charity, use the recommendations of an organization like GiveWell ("the practical approach"), or donate to a charity working on existential risk ("the 'taking ideas seriously' approach").

3) (My one best piece of rationality advice) Other people have good reasons for their actions, according to themselves. That doesn't mean you'll think they're good once you find them out - but it does mean you should try to find them out.

Do they? (Or are you referring to the fact that people, when asked explicitly why they did something, make up some reason and convince themselves that that was it? Context suggests to me, though, that "according to themselves" refers to what they think, not what they say (and maybe then think) upon asking.)
I believe that people can only do what makes sense to them, for some very expansive meaning of "makes sense". The gain from believing this is to give up the delusion that what feels right to me should automatically transfer to other people.
True, but I know I do a lot of things without first thinking about whether they make sense. I don't generally have the time to check that for every single action I take (e.g. performing a speech act in a conversation).
That's kind of where I was pointing with "very expansive meaning of "makes sense" "-- system 1 has its own background premises, even if they aren't verbal or filtered through the conscious mind. I'll see if I can come up with a better phrasing.
It seems to me that the situation is this: everybody does everything for a reason (surprise, surprise), but they may not know it, you may not know it, it may not be what they say it is even if they try to be honest, and it may not be a good reason. That, unfortunately, is not a neatly summarisable point, and the question of what moral to draw from it is not trivial.
I think the latter.

I'm not sure how much raw intelligence matters. If a person of average intelligence stays with a problem that doesn't get much attention for 10 years, I see no reason why they shouldn't be able to contribute something to it.

Being intellectual means staying with intellectual problems over years instead of dropping them because a new television series seems more important.

Since IQ correlates with practically everything, including conscientiousness and the ability to concentrate, I'm not convinced this advice is helpful. The average human may be plain unable to meaningfully stick with a problem for ten years. (That is, to actually productively work on the problem daily, not just have it on the to-do list and load up the data or whatever every so often.) I fear the LW bubble gives most people here a rather exaggerated estimate of the "average"; your median acquaintance is likely one or two standard deviations above the real population average, and that already makes a big difference.

I don't think working on the problem every day is necessary. For a lot of problems, visiting them monthly does a lot. If you want to formalize the approach, it's something like: I have learned something new, X; how does X relate to problems Y_1 to Y_n? If you inform yourself widely, I think you have the potential to contribute. Most people aren't intellectual because they don't invest any effort in being intellectual. Given that papers get published with titles like "Why is Conscientiousness negatively correlated with intelligence?", I don't think that's the case.
Could you give examples of problems like this?
I'll give three examples of problems I stayed with over a longer time: spaced repetition learning, polyphasic sleep, and Quantified Self.

Quantified Self is the example where I have the most to show publicly. I did community work in QS. My name is in a dozen mainstream media pieces in a total of three languages; by "piece" I mean newspaper, radio, or TV, and I did all of them multiple times.

Spaced repetition learning is a problem which is extremely important but has very few people working on it. The Mnemosyne data has lain around for years without anyone analysing it. Going through that data and doing a bit of modeling with it should be easy for anyone who's looking for a bachelor thesis in computer science or otherwise seeks a project. Another question would be: how do you calculate a good brain performance score for a given day from Anki review data? (Anki stores all the review data internally in a SQL database.) You don't need to be a genius to contribute to either of those two issues. Both problems are pretty straightforward if you can program and have an interest in modelling.

Polyphasic sleep is a problem where I would say that I contribute to the discussion. I tried it probably 8 or 9 years ago and stayed with the problem intellectually. Last year a friend of mine was trying Uberman for a month, and in researching the topic he came across something I wrote. When talking with him about it, he quoted one of my online opinions on the topic back to me, which at first surprised me because I hadn't made that point in his physical presence. My highest-rated answer on Skeptics Stack Exchange is also about the Uberman schedule: http://skeptics.stackexchange.com/questions/999/does-polyphasic-sleep-work-does-it-have-long-term-or-short-term-side-effects/1007#1007 It's not like I contributed a breakthrough in thinking about polyphasic sleep, but I did contribute a bit to the knowledge on the topic.
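The daily-score question can be made concrete. Anki keeps one row per review in a `revlog` table inside its SQLite collection file; the `id` column is the review timestamp in milliseconds, and `ease` is 1 when the card was failed ("again"). A toy sketch of a daily score, here simply the fraction of passed reviews per calendar day (an illustrative starting point, not a validated metric), might look like:

```python
import sqlite3
from collections import defaultdict
from datetime import date

def daily_scores(collection_path):
    """Fraction of passed reviews per calendar day, from Anki's revlog table.

    revlog.id is the review time in milliseconds since the epoch;
    ease == 1 means the card was failed ("again").
    """
    con = sqlite3.connect(collection_path)
    passed = defaultdict(int)
    total = defaultdict(int)
    for ms, ease in con.execute("SELECT id, ease FROM revlog"):
        day = date.fromtimestamp(ms / 1000)
        total[day] += 1
        if ease > 1:  # anything above "again" counts as a pass
            passed[day] += 1
    con.close()
    return {day: passed[day] / total[day] for day in total}
```

A real metric would want to control for card difficulty and how many reviews were due that day, but even this crude ratio could be plotted against sleep or diet data.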
It's a real pain to, though, because it's so big. A month after I started, I'm still only halfway through the logs->SQL step.
That sounds like you're doing one insert per transaction, which is the default way SQLite operates. It's possible to batch multiple inserts together into one transaction. If I remember right, the data was something in the size of 10GB. I think a computer should be able to do the logs->SQL step in less than a day, provided one doesn't do one insert per transaction.
I believe so, yeah. You can see an old copy of the script at http://github.com/bartosh/pomni/blob/master/mnemosyne/science_server/parse_logs.py (or download the Mnemosyne repo with bzr). My version is slightly different in that I made it a little more efficient by shifting the self.con.commit() call up into the exception handler, which is about as far as my current Python & SQL knowledge goes. I don't see anything in http://docs.python.org/2/library/sqlite3.html mentioning 'union', so I don't know how to improve the script. The .bz2 logs are ~4GB; the half-done SQL database is ~18GB so I infer the final database will be ~36GB. EDIT: my ultimate solution was to just spend $540 on an SSD, which finished the import process in a day; the final uploaded dataset was 2.8GB compressed and 18GB uncompressed (I'm not sure why it was half the size I expected).
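The batching idea described above can be sketched like this; the table layout and row format here are placeholders for illustration, not Mnemosyne's actual log schema:

```python
import sqlite3

def bulk_insert(db_path, rows, batch_size=10000):
    """Insert rows in batches, committing once per batch instead of once
    per row.  The table name and columns are illustrative placeholders."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS log (user_id TEXT, ts INTEGER, event TEXT)"
    )
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            con.executemany("INSERT INTO log VALUES (?, ?, ?)", batch)
            con.commit()  # one commit per batch, not per insert
            batch = []
    if batch:  # flush whatever is left over
        con.executemany("INSERT INTO log VALUES (?, ?, ?)", batch)
        con.commit()
    con.close()
```

Since each commit forces a sync to disk, cutting the commit count from millions to hundreds is usually where the speedup comes from; `executemany` on top of that saves per-statement overhead.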
Thanks for the roundup! I thought that by "problems" you meant things like the Millennium Problems and Friendly AI, and couldn't picture how average people could make any progress on them (well, maybe some with dedication), but these make more sense. How easy is it to get funding for these kinds of projects? I'm just wondering because these are still a bit fringe issues, but of course very important.
Quantified Self is in its nature about dealing with epistemology. It's not certain that you will learn something about how an AGI works by doing Quantified Self, but the potential is there. A mathematical model of how human memory works, which could be produced by looking at the Mnemosyne data, could also potentially matter for FAI. FAI is a hard problem, and therefore it's difficult to predict where you will find solutions to it.

It very much depends on the project. I don't know how hard it is to get grants for the spaced repetition problems I mentioned. I do think, however, that if someone seeks a topic for a bachelor or master thesis, these are good topics if you want an academic career. A daily Anki score would allow other academics to do experiments on how factor X affects memory. If you provide the metric they use in their papers, they will cite you.

I don't understand why anyone would want to work on the Riemann Hypothesis. It doesn't seem to be a problem that matters. It's one of those examples that suggests people are really bad at prioritising. Mathematicians work on it because other mathematicians think it's hard, and solving it would impress them. It has a bit of Terry Pratchett's Unseen University, which was created to prevent powerful wizards from endangering the world by keeping them busy with academic problems. The only difference is that math might advance in a way that makes an AGI possible and is therefore not completely harmless.
Could the fact that it doesn't seem to have many practical applications be what attracts certain people towards it? It doesn't have practical applications -> it's "purer" math. You're not trying to solve the problem for some external reason or using the math as a tool; you're trying to solve it for its own sake. I remember reading studies showing that mathematicians are on average more religious than scientists in general, and I've also gotten the impression that some mathematicians relate to math a bit like a religion. There is also this concept: http://en.wikipedia.org/wiki/Mathematical_beauty It could be that some are just trying to impress others, but I don't think it's always that simple. And to my knowledge, there is some application for almost all the math that's been developed. Of course, if you optimized purely for applications, you might get better results.
Yes, you are right it's more complicated.

I find your third piece of practical advice to be significantly uncharitable to someone of average intelligence. There are people who miss obvious patterns like "this person gives bad advice," but I think people of average intellect are already well equipped to notice simple patterns like that.

I don't believe this is a coherent set of general advice that can be given here. What specific details and methods of rationality any given "average" person is missing, and what specific cognitive biases they suffer from most severely will va...


One of the most important steps to becoming more rational for an average person would be to disentangle themselves from the default goals / values imposed by society or their peers. This would free up a lot of time for figuring out their own goals and developing relevant skills.

An average person could go far with instrumental rationality techniques like those taught at CFAR. Exercises like goal factoring and habit training don't require a high capacity for abstraction, only willingness to be explicit about one's motivations. For accumulating factual knowledge, spaced repetition software could be very useful.

Right now I'm reading the book The Art of Thinking Clearly: Better Thinking, Better Decisions, so that I can familiarize myself with the biases I fall into unconsciously.

I would say asking for advice seems like a pretty useful heuristic, then. Approach smart people with the same goals and ask them where to donate, or even what to believe. The fact that a smart person (who has given a lot of thought to something) believes something is good evidence that it is true. So basically: find a mentor.

However, s/he isn't smart enough to discover new knowledge in most fields, or contribute very much to a conversation of more knowledgeable experts on a given topic. Let's assume s/he has no exceptional talents in any area.

Most people are intellectually near-average or below-average, and I have not seen extensive discussion on how to help them lead happier lives that make the world a better place.

Upvoted for caring about other people. Most of your suggestions I agree with and there are some other good ones in the comments. I want to point out that th...


Specific advice: [ETA: If you decide that it is worth your time and effort to work directly on improving your general thinking skills, then one difficult but effective way to do that is to] learn to program and/or to learn math. Use google to find resources. Don't be embarrassed by books/articles with "for beginners" or "introductory" or "elementary" in their titles, and especially don't be embarrassed if even those are too hard (in fact, beware of the good old "elementary" math text, meaning "elementary... for ...

This is supposed to be for a person of average intelligence? ...
Yes it is. There was a big additional assumption I was making in my head, I've edited to clarify. Now does it make sense?
I don't think so. Now it basically reduces to the general claim "learning math and programming improves general thinking skills" - which, by the way, I'm not convinced of in full generality -, but has nothing to do with the average person. The problem is that learning programming and math takes so much time and effort, if it is at all possible, for the average person (and with no easily identifiable returns at that) that the antecedent of your conditional is unlikely to ever be true, thus rendering your advice largely irrelevant.
You're significantly overestimating how a) easy and b) enjoyable the average person finds math-related subjects. Most people don't get past algebra. I think a better option would be to gain a gut-level understanding of comparative advantage, and how much to value your time, so that you can get paid for what you do best, and outsource what you're bad at or find boring to other, more competent and enthusiastic people.
In my head, I was assuming motivation; I've edited to clarify. Yeah, I know; that's why I commented. Even basic facility in proof-based math is an extremely powerful mental technology, as I tried to say. I would not recommend calculus. I am talking about combinatorics or graph theory, or discrete math in general, where you can see the basic building blocks of proofs and proof strategies. This is worth years of effort.
Maybe proficiency in proof-based math is not a cause of mental superiority, but an indicator?