Today's post, Purpose and Pragmatism, was originally published on 26 November 2007. A summary (taken from the LW wiki):

It is easier to get trapped in a mistake of cognition if you have no practical purpose for your thoughts. Although pragmatic usefulness is not the same thing as truth, there is a deep connection between the two.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Lost Purposes, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

5 comments

Although pragmatic usefulness is not the same thing as truth, there is a deep connection between the two.

According to the pragmatists, the two are the same, and I agree with them: truth is a leading which is useful. Usefulness can come in a great many ways besides greater predictive accuracy.

Usefulness can come in a great many ways besides greater predictive accuracy.

I think that's something people will generally agree on. As a trivial example, a car is in many ways more useful than a horse-drawn carriage, but it is not more useful because of greater predictive accuracy. The more common objection to a pragmatist theory of truth appeals to an essentially moral value placed on truth. If believing something that is false will give you a net benefit (perhaps believing in religion makes you happier?), should you believe it, even though it is false?

Did you really mean "in some way," rather than "net"?

I didn't. Fixed.

But do you agree that beliefs can have uses other than predictive accuracy?

If believing something that is false will give you a net benefit, should you believe it, even though it is false?

Without equivocating, net benefit should mean net benefit, so the answer is "of course". Saying no makes predictive accuracy override the net benefit to all your values, turning it into a fetish, a fixed idea.