I was actually quite amazed to find how far Gary Drescher had gotten when someone referred me to him as a similar writer. I went so far as to finish my free-will material before reading his book (I'm actually still reading it), because after reading the introduction I decided it was important for the two of us to write independently and then combine our independent components. We still ended up with quite a lot of overlap!
But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do. Drescher is an AI guy. And the comments on the back of his book seem to indicate that he was writing in a mode that philosophical readers found startling and new.
Drescher is not alternative mainstream philosophy. Drescher is alternative Yudkowsky.
I've referred to Drescher and the SEP a few times. The main reason I don't refer more to conventional philosophy is that it doesn't seem very good, as a field, at distinguishing good ideas from bad ones. If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?
But I've taken into account that Greene was able to rescue Roko where I could not, and I've promoted him on my list of things to read.
"But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do."
I agree, and I didn't do so (I used Dennett-type compatibilism in my list of representative views that you conveyed). Even when you do something exceptionally good independently, it can help to defuse affective death spirals to make clear that it's not quite unique.
"If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?"
That is the author's point of view. Readers need heuristics, for particular authors and topics, to confirm that this is what is going on and not something less desirable. If they can randomly check some of your claims against the leading rival views and see that the latter are weak, that's useful.
I don't think I agree with your conclusion. It seems to assume that ideas are somehow representation-independent -- and in practical programming as well as practical psychology, that idea is a non-starter.
Or to put it another way, someone who can state a point more eloquently than its originator knows something that its originator does not. Sure, the communicator shouldn't get all the credit... but more than a little is due.
How much non-Eliezer material is there on the practical "how to" of rationality, e.g. techniques for improving one's accuracy in the manner of Something to Protect, Leave a Line of Retreat, The Bottom Line, and taking care not to rehearse the evidence?
EDIT: Sorry to add to the comment after Carl's response. I had the above list in there already, but omitted an "http://", which somehow caused the second half of my sentence not to show up.
I read decision theory, game theory, economics, evolutionary biology, epistemology, and psychology (including the heuristics and biases program), then tried to apply them to everyday life.
I'm not aware of any general rationality textbooks or how-to guides, although there are sometimes sections discussing elements in guides for other things. There are pop science books on rationality research, like Dan Ariely's Predictably Irrational, but they're rarely 'how-to' focused to the same extent as OB/LW.
The article on theistic modal realism is ingenious. (One-sentence summary: God's options when creating should be thought of as ensembles of worlds, and most likely he'd create every world that's worth creating, so the mere fact that ours is far from optimal isn't strong evidence that it didn't arise by divine creation.)
I don't find the TMR hypothesis terribly plausible in itself -- my own intuitions about what a supremely good and powerful being would do don't match Kraay's -- but of course a proponent of TMR could always just reject my intuitions as I'd reject theirs.
However, I think the TMR hypothesis should be strongly rejected on empirical grounds.
It is notable -- and this is one element of a typical instance of the Argument from Evil -- that our world appears to be governed by a set of very strict laws, which it obeys with great precision, in ways that make substantial divine intervention all but impossible. There seem to be many, many more possible worlds in which this property fails than in which it holds, simply because the more scope there is for intervention, the more ways there are for things to happen. Therefore, unless the sort of lawlikeness we observe is...
Excellent post. Having just read The Adapted Mind (and, earlier, The Moral Animal), I can see where Eliezer got a lot of his evolutionary psychology material.
However, all authors must walk a fine line between appeasing the Carl Shulmans of the world, who have read everything, and introducing some background for naive readers rather than simply telling them to catch up on their own. I think he generally does a good job of erring on the side of more complexity, which is what I appreciate, so of course I forgive him. :)
A niche that a good author might consider filling is actually including the numbers from the experiments they reference, i.e., the experimental scores, their standard errors, etc. It might turn off the innumerate, but I think raw numbers and effect sizes are grossly underreported by science writers.
"However, all authors must walk a thin rope between appeasing the Carl Shulman's of the world who have read everything and introducing some background for naive readers beyond telling them to simply catch up on their own."
I don't need to be appeased, and I strongly endorse the project of providing that introduction. My post was about ways for readers and authors to manage some of the drawbacks of success.
I agree that sample and effect sizes are grossly under-reported, often concealing that an experiment with a sexy conclusion had almost no statistical power, or that an effect was statistically significant but too small to matter. It seems possible for this to become a general science journalism norm, but only if a word-efficient and consistent format for in-text description can be devised, like conflict-of-interest acknowledgments.
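To make the under-reporting concrete, here is a minimal sketch (all numbers hypothetical, chosen for illustration only) of the two quantities I'd want reporters to include: a standardized effect size, and the statistical power of the design to detect it.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical study numbers, for illustration only
mean_treated, mean_control = 104.0, 100.0  # group means
sd_pooled = 15.0                           # pooled standard deviation
n_per_group = 20                           # subjects per group

# Cohen's d: a standardized effect size reporters could quote directly
d = (mean_treated - mean_control) / sd_pooled  # ~0.27, a "small" effect

# Power of a two-sample t-test to detect an effect of this size
power = TTestIndPower().power(effect_size=d, nobs1=n_per_group,
                              alpha=0.05, ratio=1.0)
print(f"Cohen's d = {d:.2f}, power = {power:.2f}")  # roughly 0.13
```

With a "small" effect and 20 subjects per group, the power comes out around 0.13: even a real effect would usually fail to reach significance, which is exactly the kind of fact that routine effect-size reporting would surface.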
EDIT: I agree with your conclusion, but...
(Checks Don Loeb reference.)
While, unsurprisingly, we end up adding to the same normality, I would not say that these folks have the same metaethics I do. Certainly Greene's paper title "The Terrible, Horrible, No Good, Very Bad Truth About Morality" was enough to tell me that he probably didn't have exactly the same metaethics and interpretation I did. I would not feel at all comfortable describing myself as a "moral irrealist" on the basis of what I've seen so far.
Drescher one-boxes on Newcomb's Problem, but doesn't seem to have invented quite the same decision theory I have.
I don't think Nick ever claimed to have invented the Simulation Argument - he would probably be quite willing to credit Moravec.
On many other things, I have tried to use standard terminology where I actually agree with standard theories, and provide a reference or two. Where I am knowingly being just a messenger, I do usually try to convey that. But you may be reading too much into certain similarities that also have important points of difference or further development.
EDIT2: I occasionally notice the problem you point to, and write a blog post telling people to read more textbooks. Probably this is not enough. I'll try to reach a higher standard in any canonicalized versions.
I think the biggest issue here is your tendency to not cite sources other than yourself, which is an immediate turn-off to academics. To an academic, it suggests the following questions (amongst others): If your ideas are so good, why hasn't anyone else thought of them? Doesn't anyone else have an opinion on this - do you have a response to their arguments? Are you actually doing work in your field without having read enough to cite those who agree or disagree with you?
(I know this isn't a new issue, but it seems it bears repeating.)
Other questions that are implicitly asked:
Is the idea here to counsel us against some sort of halo effect? Eliezer Yudkowsky has told me a lot of interesting things about heuristics and biases, and about how intelligence works, but I shouldn't let this affect my judgement too much if he recommends a movie?
Or is it more than that - just that I should be careful when reading anything by Eliezer, and take into account the fact that I'm probably slightly too inclined to trust it, because I've liked what came before? Because then of course, we have the issue that I should be more likely to trust an au...
For much of what EY is setting out, trust isn't an appropriate relationship to have with it. You trust that he's not misrepresenting the research or his knowledge of it, and you have a certain confidence that it will be interesting, so if an article doesn't seem rewarding at first you're more likely to put work in to squeeze the goodness out. But most of it is about making an argument for something, so the caution is not to trust it at all but to properly evaluate its merits. To trust it would be to fail to understand it.
I like EY's writings, but I don't hold them up as gospel. For instance, I think this guy's summary of Bayes' Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer essay (http://yudkowsky.net/rational/bayes).
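For readers who want the punchline without either essay, here is a minimal sketch of the classic mammography example (the figures are the ones I recall from Eliezer's essay: 1% prevalence, 80% true-positive rate, 9.6% false-positive rate):

```python
# Bayes' theorem on the mammography example: what is the probability
# of cancer given a positive test?
p_cancer = 0.01              # prior: 1% of women screened have cancer
p_pos_given_cancer = 0.80    # test sensitivity
p_pos_given_healthy = 0.096  # false-positive rate

# Total probability of a positive result
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Posterior via Bayes' theorem
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # ~0.078
```

The punchline both essays build toward: even after a positive test, the chance of cancer is only about 7.8%, because true positives are swamped by false positives from the much larger healthy population.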
The reason I love Eliezer is how many people he must have attracted to this art of rationality; without him and this site I wouldn't even know where to begin or where to look, and he is one of surprisingly few people able to convey the information in such tasty little bits. He may not be the smartest in his field, and may 'just' be passing on things he learned from others, but his work is super valuable, for he does what the others don't. Also, Methods of Rationality happens to be on my top-3 list of Greatest Pieces of Writing IMO, so that adds a...
I recently read Greene's essay, and I thought it nicely buttressed ideas I was originally exposed to in 2001 while reading "Beyond anthropomorphism". The challenge with Eliezer's earlier writing is that it is too injected with future shock to be comfortable for most non-transhumanists to read. The challenge with his more recent writing is that it is too long for a blog format and much better suited to a book, which forces people to focus on the one thing.
The title of Greene's thesis is tongue-in-cheek. Based on my understanding of Eliezer's conception of morality, I would definitely call him an irrealist.
The work of Jon Haidt is very enlightening.
This evening I had the pleasure of reading his Edge article on the benefits of religion, where he takes on some prominent new atheists - Myers, Sam Harris, etc. I quote:
...When Hurricane Katrina struck, religious groups across the country organized quickly to send volunteers and supplies. Like fraternities, religions may generate many positive externalities, including charity, social capital (based on shared trust), and even team spirit (patriotism). If all religious people lost their faith overnight and abandoned...
No, Enlightenment 2.0 requires rationalist task forces as tightly-knit, dedicated, and fast-responding as religious task forces, better coordinated and better targeted, maybe even more strongly motivated, to do every good thing that religion ever did and more.
IMHO.
It's really helpful to have good info borne to me, though, in a readable and engaging fashion. For some reason I never wound up reading the Stanford Encyclopedia of Philosophy, but I did read Eliezer's philosophical zombie movie script.
That pointer to Gary Drescher is much appreciated. Eliezer's explanations about determinism and QM make me feel "aha, now it's obvious, how could it be any other way", but I hate single-sourcing knowledge.
Just a brief mention, since we're supposed to avoid AI for a while, but it is too relevant to this post to ignore entirely: I just finished J. Storrs Hall's "Beyond AI". The overlap and differences with Eliezer's FAI are very interesting, and it is a very readable book.
EDIT: You may notice I did write "overlap and differences"; I noticed the differences, but I do think they are interesting, not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.
The Don Loeb and theistic modal realism links are broken. Also, the Stanford Encyclopedia of Philosophy link seems to "point" to a passage from another LW post rather than a URL.
I have never really regarded EY as anything other than the guy who wrote a bunch of good ideas in one place. The ideas are good on their own merits, and after being made aware that Quine(?) originated that "Philosophy = Psychology" idea, I have kept a healthy attitude of "he's right, but probably not original." And really, who cares? He is right; but the messenger shouldn't matter either way - ad hominem is still ad hominem even if it is positive, and empty agreements are as bad as empty dismissals.
Isn't this intuitively obvious? Or am I just very, very rational?
Terry Pratchett is another good person who seems to want to go out on his own terms.
There's a large overlap between the ideas on LW and OB, and a book by J. Storrs Hall called "Beyond AI". That book is a popularization. But so is OB/LW, most of the time - at least the posts, if not the comments.
Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness
One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.
Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report having been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. The danger is particularly great given his writing style, and given that it often goes unnoted when a point is already extant in the literature and is being relayed or reinvented. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's Problem to quantum mechanics, in his book Good and Real back in May 2006, while Eliezer's irrealist metaethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th-century philosopher David Hume.
If you're feeling a tendency toward cultish hero-worship, reading such independent prior analyses is a noncultish way to defuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading Moravec's book, or from its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.
Even if you resist revering the messenger, a well-written piece that purports to summarize a field can leave you ignorant of your own ignorance. If you read only the National Review or The Nation, you will pick up a lot of political knowledge, including knowledge about the other party/ideology - at least enough to score well on political science surveys. However, that very knowledge makes missing pieces favoring the other side easier to ignore: someone who didn't believe the other side was made up of Evil Mutants with no reasons at all might be tempted to investigate, but ideological media can provide reasons that are plausible, yet not so plausible as to tempt their audience. If you are a truth-seeker, beware of a speaker's explanations of that speaker's opponents.
This sort of intentional slanting and misplaced trust is less common in more academic sources, but it does occur. For instance, top philosophers of science have been caught failing to beware of Stephen Jay Gould, copying his citations and misrepresentations of work by Arthur Jensen without having read either the work in question or the more scrupulous treatments in the writings of Jensen's leading scientific opponents, the excellent James Flynn and Richard Nisbett. More often, space constraints mean that a work will spend more words and detail on the view being advanced (Near) than on those rejected (Far), and limited knowledge of the rejected views will lead to omissions. Unless you read the major alternatives to the views of whoever introduced you to a field - in their proponents' own words or, even better, in neutral textbooks - you will underrate opposing views.
What do LW contributors recommend as the best articulations of alternative views to OB/LW majorities or received wisdom, or neutral sources to put them in context? I'll offer David Chalmers' The Conscious Mind for reductionism, this article on theistic modal realism for the theistic (not Biblical) Problem of Evil, and David Cutler's Your Money or Your Life for the average (not marginal) value of medical spending. Across the board, the Stanford Encyclopedia of Philosophy is a great neutral resource for philosophical posts.
Offline Reference:
Ross, L. D., Amabile, T. M. & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perceptual processes. Journal of Personality and Social Psychology, 35, 485-494.