Cognitive psych is obviously important to people here, so I want to point out a CogSci book thread over at reddit/r/cogsci.

I would be interested in an extension of that thread here, since LW has a somewhat more computational theory-of-mind slant.


That's a pretty good list they have going, but in my opinion the Gigerenzer et al. volume should be replaced by one published three years earlier by the same research group: Simple Heuristics That Make Us Smart. It's the same basic thing, but a bit more comprehensive and more directly relevant to cognitive psych (no chapters on animal rationality, etc.).

Also, while the 1982 H&B volume is obviously very good and certainly belongs on the list, the picture is pretty incomplete without the updated 2002 H&B volume as well as Choices, Values, and Frames (1999).

I'll have to echo the recommendation for Pinker's How the Mind Works. It's 600+ pages, but not a word is wasted (until you get to the chapter on art... meh). I was planning to write a review for LW.

The best summary is that it competes with Drescher's Good and Real for the title of "LW in book form", except that it doesn't talk about quantum mechanics (except for a few paragraphs). But it does tie together all of the discussions you see here of the various biases at work, game theory, impact of evolutionary history on the mind, and reductionist accounts of cognitive function (with a particular emphasis on vision and its problem of extracting 3D data from a 2D image by using cues from light gradient, line orientation, etc.).

Having seen the thread, I'll have to check out Fodor's reply book (The Mind Doesn't Work That Way).

After having read a bit about Fodor's new book What Darwin Got Wrong (Wikipedia; Amazon), I think of him as (pardon the snipe -- maybe I'll write in more detail about this sometime) someone whose mind has gotten messed up by doing the word games that pass for philosophy in some circles.

I'll have to check out Fodor's reply book (The Mind Doesn't Work That Way).


Thanks. I followed a link within that to Fodor's review of How the Mind Works, and, like arundelo suggests, it just comes off as a long ramble. I can't really tell what specific point he's claiming Pinker erroneously argues for, just generalities that don't contradict anything I found in Pinker. I felt the same way about that link.

Hm, maybe I won't read Fodor.

Can anyone here recommend reading Fodor? He seems to be very important in the field, but every time I read a short review, he seems to entirely discredit himself, betraying fundamental misunderstandings of the subjects that he criticizes. He's now attacking evolutionary theory in the same way.


The present worry is that the explication of natural selection by appeal to selective breeding is seriously misleading, and that it thoroughly misled Darwin. Because breeders have minds, there’s a fact of the matter about what traits they breed for; if you want to know, just ask them. Natural selection, by contrast, is mindless; it acts without malice aforethought. That strains the analogy between natural selection and breeding, perhaps to the breaking point. What, then, is the intended interpretation when one speaks of natural selection? The question is wide open as of this writing.

I feel obligated to give him a chance, given his importance, but he comes off as absolutely ignorant of the things he attempts to criticize. Does anyone here disagree and encourage further reading? Or is he truly as obvious a waste of time as he seems?

I wouldn't recommend agreeing with him about a lot of things, but he's definitely worth paying attention to.

The gist of "The Mind Doesn't Work That Way," from what I can tell so far:

Partly sparked by his own work, modularity became an important idea in cognitive science: not all parts of your mind do the same jobs or have access to the same information. For example, knowing that the Müller-Lyer illusion is an illusion doesn't ruin the effect.

Some cognitive scientists of an evolutionary bent saw functional modularity, with the functions defined by the adaptive problems they were designed to solve, as the key to predicting and understanding the mind's entire functional architecture. If the modules are informationally encapsulated, then massive modularity also offers a solution to the frame problem. A computational version of this is the picture that Pinker presents in How the Mind Works.

Fodor's position seems to be something like this: there are modules; computation is a good way of thinking about modules; but they seem to be restricted to input (e.g. perception) and output (e.g. maintaining balance) processes, both in the sense of having clear functional success criteria and in the sense of being informationally encapsulated. The things cognitive scientists are most interested in, and have had the least success in studying, seem to be nonmodular; when you "believe a belief" or "think a thought", you seem to have at least potential access to most of the information you've ever had access to before. If belief and thought and other things he calls "global processes" are nonmodular, then computation may not be the right way to think about them, despite being the best hypothesis we've had so far.

Fodor's arguments for a "language of thought" make sense (see his book of the same name). In a nutshell, thought seems to be productive (out of given concepts we can always construct new ones, e.g. arbitrary nestings of "the mother of the mother of ..."), systematic (knowing certain concepts automatically brings the ability to construct others: knowing the concept "child" and the concept "wild", I can also represent "wild child"), and compositional (the meaning of "wild child" is a function of the meanings of "wild" and "child").
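Those three properties can be made concrete with a toy sketch (my own illustration, not anything from Fodor's book; the constructor names are hypothetical). Concepts are modeled as strings, and complex concepts are built as functions of their parts:

```python
# Toy sketch of productivity, systematicity, and compositionality.
# Concepts are modeled as strings; constructors build complex
# concepts as functions of their parts.

def mother_of(concept):
    # Productivity: this constructor can be nested arbitrarily deep.
    return f"the mother of {concept}"

def modify(adjective, noun):
    # Compositionality: the meaning of the result is a function of
    # the meanings of its parts. Systematicity: anyone who has this
    # constructor plus "wild" and "child" can also form "wild child".
    return f"{adjective} {noun}"

print(mother_of(mother_of("Alice")))  # the mother of the mother of Alice
print(modify("wild", "child"))        # wild child
```

This obviously doesn't settle anything about how concepts are represented in the brain; it only shows what a combinatorial symbol system with those three properties looks like.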

Isn't the obvious answer to his pondering just "Natural selection selects for gene frequency"? And hasn't that been pretty well known for a while? If so, that's pretty bad.

And it doesn't help that everything I've read by him so far comes across as disconnected, unmotivated rambling. :-/ I'm gonna have to agree with you.

"Gene frequency" is true but not terribly informative; or at least, I'm more interested in what sort of organisms you end up with.

Selective breeding is for easily identified traits that people can understand. Natural selection produces something more complex and less obvious.
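For what it's worth, the "selects for gene frequency" reading can be sketched with a standard one-locus haploid selection model (a textbook illustration of my own, not something from the thread or from Fodor): an allele's frequency changes in proportion to its relative fitness, with no breeder's intentions anywhere in the model.

```python
# One-locus haploid selection: allele A with fitness w_a competes
# against allele B with fitness w_b. Returns A's frequency in the
# next generation, given its current frequency p.

def select(p, w_a, w_b):
    mean_fitness = p * w_a + (1 - p) * w_b
    return p * w_a / mean_fitness

# A rare allele with a 5% fitness advantage sweeps toward fixation
# over 200 generations, without any mind deciding what is "bred for".
p = 0.01
for _ in range(200):
    p = select(p, 1.05, 1.0)
print(round(p, 3))  # close to 1.0
```

The "intended interpretation" Fodor asks about is just this: whatever heritable variants leave more descendants rise in frequency, whether or not anyone had them in mind.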


How the Mind Works is also available as an audiobook. I found that very helpful.

Does the audio version pretty much clip out the chapter on vision, which requires extensive use of diagrams?

My mistake, I was thinking of The Blank Slate, which is unabridged (and great). How the Mind Works is not available.

It probably won't make me terribly popular around here, but I'd say books have lost their informational function, and these days are used primarily for signaling sophistication.

On nearly any subject, you'll get vastly more useful knowledge from googling than from reading any book.

This argument doesn't apply to fiction or to entertainment-value non-fiction, only to sources of serious learning. A linear, non-interactive format is simply horrible at conveying knowledge, and brains are great at filtering it out.

In my experience, Googling tends to give you shallow, surface knowledge while books are more likely to give you deep knowledge. (Needless to say, though, there are also plenty of books that only provide surface knowledge.)

In my experience, Googling tends to give far deeper knowledge, as everything has explorable context: you can zoom in on anything, and out for a wider view, until it integrates well with your mental model of the world. This kind of integrated contextual knowledge, together with some practice, can go very deep.

Books just try to mindlessly ram information through, but that just doesn't work for me. At all. I only ever use them for entertainment.

It's possible that people work differently, but I don't terribly like such hypotheses. It's also possible that many people are simply not terribly good at using the Internet, or that many disciplines don't yet have information available on the Internet. In the long term, the normal case will be far more information than you ever need available online, but this might not always be the case yet.

It's also possible that many people are simply not terribly good at using the Internet, or that many disciplines don't yet have information available on the Internet. In the long term, the normal case will be far more information than you ever need available online, but this might not always be the case yet.

It's not the first possibility, it's the second. I'm quite comfortable in saying that I am very capable at finding specific online content if it's out there to be found. The problem is that most of the disciplines I'm interested in reading about don't have the good, hard content available for free on the web. (Scientific journals can be accessed online, but these are just books.) It is not the "normal case" that far more information than one could want is available online for most domains, it is absolutely the abnormal case. To be frank, the idea that one could ever get a thorough grounding in any serious, empirical scientific discipline by scouring the Internet is, at least at this time, laughable.

To be frank, the idea that one could ever get a thorough grounding in any serious, empirical scientific discipline by scouring the Internet is, at least at this time, laughable.

Is this because of the lack of lab work as well as the lack of textbook-level information?

Incidentally, Google is clearly aware of this, and willing to step into some hot (and unprofitable in the short term) waters to get at the book-stored knowledge. They also revived a decent open-source OCR engine (Tesseract), probably for this purpose.

Are there any well-regarded textbooks you have tried before turning to the Internet? If, say, SICP or The Art of Computer Programming comes off as "mindlessly ram[ming] information through", you may want to promote "people work differently" back to prominence as a hypothesis.

I will say, though, that in my experience reading a textbook is very different from reading on the Internet, rather like abruptly switching from cross-country skiing to rifle shooting, metaphorically speaking.

If you want to learn the fundamental concepts of a field, I find most of the time that textbooks with exercises are still the best option. The more introductory chapters of PhD theses are also helpful in this situation.

I think the current situation is that students still need textbooks to get up to speed on a new field, since a textbook takes the time to pedagogically explain the initial assumptions of the field, which the practitioners don't take the time to spell out in the articles they write. People already familiar with the field can come up with questions that have concise answers they can understand, so they can just google the relevant articles and don't need to rely on books as much.

Possibly relevant: My brain is great at jumping onto any attention hijack it comes within reach of. That tends to interfere with trying to study a specific subject online.


I'd like to see some examples on this. Anyone care to nominate best online resources for various subjects vs. best books?

A quick Google search did turn up a couple of serious-looking math websites for learning math from arithmetic up through what looked like intermediate subjects, and I'm sure there are sites for more advanced math. On the other hand, except for learning more quickly whether you've gotten correct answers to the problems (not a small thing), I'm not sure that using them is very different from using a textbook.

Especially for history, there's a tremendous amount that isn't online. Even if the best material (whatever that means) is online, if you really want to dig into a subject, you'll need books and other written material.

Anyone care to nominate best online resources for various subjects vs. best books?

Amazon. :-)

More seriously, Wikipedia and Mathworld are both good for looking up particular mathematical details, but they're better used as entry points than as final answers. In fact, that's going to be true of any general source for any subject. The quest for knowledge begins with 100 web pages open on the desktop.

Are we comparing best to best, or likely-to-be-employing to likely-to-be-employing? Because I think the former is a bit of an impossible task.

I don't find this with mathematics at all. If I want to learn some new area, I'll begin by looking on the Internet, but the main question motivating that search will be, "what books do I need to read?" Most of the knowledge simply isn't (yet) online in the form it needs to be for real learning.

Google and Wikipedia are entrance doors. Just reading a few Wiki pages or the top Google hits is like never entering further than the foyer of a library.

In my experience, Internet searches are almost completely useless for locating proper educational material in engineering. I have to add the caveat that I am not good at searching the Internet, however, or at least not as good as some people I know.

What, specifically, are you seriously studying using Internet sources?

My reference case is computer science, which went online earlier than many other domains, but I honestly cannot think of anything I've ever learned by reading a textbook - all the learning I get is by doing, or by environmental absorption of contextually relevant small bits of information into my knowledge network.

My reference case is computer science, which went online earlier than many other domains,

You sound partly aware of this, but I think your reference case is likely to be injecting a heavy bias. I'd guess that computer science is better represented on Google than any other research discipline.

Even with respect to computer science though, I think the point is dead wrong. For example, take an awesome introductory textbook like Sipser's Introduction to the Theory of Computation. You can work through that book doing all the exercises in a few weekends, and it is such a pleasure to read that it's hard to put it down. There's no way you could learn as much as efficiently by randomly jumping from topic to topic on Wikipedia and reading online resources.

A good textbook is still pretty much the greatest bargain in the known universe.

I assume this is only a matter of time, not anything fundamental. It happened earlier to computer science than to other disciplines, but it is probably already true more often than not, and will be nearly universally true in a matter of years.

I agree that the information of almost all research disciplines is likely to eventually show up on Google, but as of 2010 - well, you'd have to take my nonfiction books from my cold dead hands!

Did you go to college?

Yes, I have an MSc in Computer Science.

I mostly got through my classes by reading the textbooks... did you take Physics or Chemistry as an undergraduate?

I had physics and chemistry in high school, plus quite a bit of bioinformatics at university. No textbooks in either case.

From my blog last year

The Internet is the world's largest library. It's just that all the books are on the floor. -- John Allen Paulos


Most pages are very shallow; many others are too narrow for learning, though they serve as decent papers for researchers already knowledgeable in the field. There is too much emphasis on new results, yet most new results are wrong, and many of the rest are incomplete.

I also followed up, and discussed Robin's comments, including "Until you've read and understood textbooks, why bother with anything else?", in

"Cognitive Science: An Introduction" is a very broad but disjointed textbook, and more than a bit dated now, since it was published in 1987. But it's still a decent technical introduction.

Physiology of Behavior by Neil R. Carlson