Query: by what objective criteria do we determine whether a political decision is rational?

I propose that the key elements -- necessary but not sufficient -- are (where "you" refers collectively to everyone involved in the decision-making process; a rough code sketch of this checklist follows the list):

  • you must use only documented reasoning processes:
    • use the best known process(es) for a given class of problem
    • state clearly which particular process(es) you use
    • document any new processes you use
  • you must make every reasonable effort to verify that:
    • your inputs are reasonably accurate, and
    • there are no other reasoning processes which might be better suited to this class of problem, and
    • there are no significant flaws in your application of the reasoning processes you are using, and
    • there are no significant inputs you are ignoring
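
For concreteness, here is the checklist expressed as a toy validation function (my own illustration; the requirement labels and the `is_provisionally_rational` helper are invented, not part of the proposal itself):

```python
def is_provisionally_rational(checks: dict) -> bool:
    """Pass/fail the checklist above: failing any one requirement disqualifies.

    `checks` maps each requirement to a boolean verdict reached by the
    participants; the keys are invented labels for the bullets above.
    """
    required = [
        "uses_only_documented_processes",         # documented reasoning processes
        "uses_best_known_processes",              # best known for this class of problem
        "processes_stated_clearly",               # states which processes are used
        "new_processes_documented",               # documents any new processes
        "inputs_reasonably_accurate",             # inputs verified as accurate
        "no_better_suited_process_found",         # no better-suited process exists
        "application_free_of_significant_flaws",  # application checked for flaws
        "no_significant_inputs_ignored",          # no significant inputs ignored
    ]
    return all(checks.get(name, False) for name in required)
```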

If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.

This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
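
The analogy can be sketched in a few lines of illustrative code (the `Problem` type, the `judge` callback, and the ambiguity threshold are all hypothetical stand-ins):

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    ambiguity: float                    # 0.0 means fully unambiguous
    subproblems: list = field(default_factory=list)

def resolve(problem, judge, threshold=0.05):
    """Recurse until a sub-problem's ambiguity is negligible, then judge it directly."""
    if problem.ambiguity <= threshold:
        return judge(problem)                      # base case: negligible ambiguity
    return all(resolve(p, judge, threshold)        # combine the sub-judgments
               for p in problem.subproblems)
```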

This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.

So... can we agree on this?


This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. (It got voted down to negative 6. Twice.)


Your title and section titles seem like they were optimized to be clever out-of-context, and then tacked on without anything to specifically inform them. For example, I'd like to have some mention of a brontosaurus in a section whose title contains a brontosaurus.

I'd like to have some mention of a brontosaurus in a section whose title contains a brontosaurus.

Or at least a link to this

They were references -- The Hitchhiker's Guide to the Galaxy and Monty Python, respectively. I didn't expect everyone to get them, and perhaps I should have taken them out, but the alternative seemed too damn serious and I thought it worth entertaining some people at the cost of leaving others (hopefully not many, in this crowd of geeks) scratching their heads.

I hope that clarifies. In general, if it seems surrealistic and out of place, it's probably a reference.

Even references need to be motivated by textual concerns. For example, if you had a post titled "Mostly Harmless" because it talked about the people of Earth but it did not say anything related to harmlessness or lack thereof, it would not be a good title.

Yes, that is quite true. However, as you can see, I was indeed discussing how to spot irrationality, potentially from quite a long way away.

Suggestion: Supply links explaining references. You can't achieve common knowledge unless you have common priors.

Counter-suggestion: Only use references that you either expect everyone to pick up on, or ones which are mostly invisible to people who don't recognize them. It's tasteless to add incongruous references and then expect people to follow your link which describes the clever aside you just made.

Nobody likes me, everybody hates me, I'm gonna go eat worms...

I suppose it would be asking too much to suggest that, if a sentence or phrase seems out of place or perhaps even surreal, readers could just assume it's a reference they don't get and skip it?

If the resulting argument doesn't make sense, then there's a legit criticism to be made.

But I like you!!! I like humans!!!

It's just that I regard your expositions as disinformative.

Exposition... disinformative?... contradiction... illogical, illogical... Norman, coordinate!

For what it's worth, here are the references. I'll add a link here from the main post.

  • "Spot the Loonie!" was a Monty Python satire of a game show. I'm using it here to refer to the idea of being able to tell when someone's argument doesn't make sense.
  • "How to Identify the Essential Elements of Rationality from Quite a Long Way Away" refers to the title of a Monty Python episode whose title was, I think, "How to Identify Different Types of Trees from Quite a Long Way Away".
  • "Seven and a Half Million Years Later" refers to the length of time it took the computer Deep Thought, The Second-Greatest Computer in All Time and Space, to calculate The Answer to The Ultimate Question of Life, The Universe, And Everything (in The Hitch-Hiker's Guide to the Galaxy, aka H2G2).
  • "I really have no idea if you're going to like it" refers to Deep Thought's reluctance, seven and a half million years later, to divulge The Answer: "You're really not going to like it." "is... is..." refers to this same dialogue, where Deep Thought holds off actually giving The Answer as long as possible.
  • "The Question to the Ultimate Answer" refers to the fact that, having divulged The Answer, it pretty quickly became clear that it was necessary to know what the Ultimate Question of Life, The Universe, And Everything actually was, in order for the answer to make any sense.
  • "So, have we worked out any kind of coherent picture... Well no..." refers to a scene in H2G2 where usage of The Infinite Improbability Drive gives rise (quite improbably) to the existence of "a bowl of petunias and a rather surprised-looking sperm whale", the latter of which immediately begins trying to make cognitive sense of his surroundings. After assigning names (amazingly, they are the correct English words) to several collections of perceptions in his immediate environment, he pauses to ask "So, have we built up any coherent picture of things? Well... no, not really"... or something like that.
  • "you know what I am saying, darleengs?" is a catchphrase used by Billy Crystal's SNL parody of Fernando Lamas. (Note: comedian Billy Crystal should not be confused with evil neoconservative pundit Bill Krystol.)
  • "A Theory About the Brontosaurus" refers to a Monty Python sketch in which a talk show interviewee has a theory (about the brontosaurus) which she introduces many times ("This is my theory. (cough cough) It goes like this. (cough) Here is the theory that I have (cough cough cough) My Theory About the Brontosaurus, and what it is too. Here it goes.") before finally revealing her utterly trivial and non-enlightening conclusion.
  • "ahem ahem" refers to the interviewee's repeated coughing-delays in the above sketch.

I can certainly attempt that. I considered doing so originally, but thought it would be too much like "explaining the joke" (a process notorious for efficient removal of humor). I also had this idea that the references were so ubiquitous by now that they were borderline cliche. I'm glad to discover that this is not the case... I think.

Two years ago, I wouldn't have gotten the brontosaurus reference. I got it today only because last year someone happened to include "Anne Elk" in their reference and that provided enough context for a successful Google. There are no ubiquitous references.

That said, cata has a point too, as do you with the thing about explaining jokes. Like everything else in successful communication, it comes down to a balancing act.

Yes, I agree, it's a balancing act.

My take on references I don't get is either to ignore them, to ask someone ("hey, is this a reference to something? I don't get why they said that."), or possibly to Google it if it looks Googleable.

I don't think it should be a cause for penalty unless the references are so heavy that they interrupt the flow of the argument. It's possible that I did that, but I don't think I did.

The problem is that the references have such a strained connection to what you're talking about that they are basically non sequiturs whether you understand them or not.

From the title I thought you were going to present a story of some sort and make us figure out which character was the irrational one. That might have been fun.

Query: by what objective criteria do we determine whether a political decision is rational?

What does "rational" mean in this context? Could you rephrase it without using words "rational", "objective", and if possible "political"?

Based on comparison with the original version of the post, I'm pretty sure that by "political" woozle just means any complicated decision, involving many independent pieces of evidence and chains of reasoning.

I quit trying to read the post halfway through. Don't spend at least half of a long post explaining how you came to think about the topic in question!

Umm... why not?

Good question.

First attempt at good answer: Because Gonzo journalism, if done well, can certainly be entertaining, but it generally sucks at being informative. And because, in this community, we generally prefer informative.

Except when we don't.

Second attempt at good answer: Because Gonzo journalism is so rarely done well.

The subjective part probably could have been shortened, but I thought it was at least partly necessary in order to give proper context, as in "why are you trying to define rationality when this whole web site is supposed to be about that?" or similar.

The question is, was it informative? If not, then how did it fail in that goal?

Maybe I should have started with the conclusions and then explained how I got there.

I felt like I didn't get the informativeness I bargained for, somehow. Your list of requirements for a rational conversation and your definition of a moral rational decision seem reasonable, but straightforward; even after reading your long exposition, I didn't really find out why these are interesting definitions to arrive at.

EDIT: One caveat is that it's not totally clear to me where the line between "ethical" goals and other goals lies, if there is such a line. Consequently, I don't know how to distinguish between a moral rational decision and just a plain old rational decision. Are ethical goals ones that have a larger influence on other people?

(In particular, I didn't understand the point of contention in the comment thread you linked to, that prompted this post. It seems pretty obvious to me that rationality in a moral context is the same as rationality in any other context; making decisions that are best suited to fulfilling your goals. You never really did address his final question of "how can a terminal value be rational" (my answer would be that it's nonsense to call a value rational or irrational.))

I'm not sure it's important that my conclusions be "interesting". The point was that we needed a guideline (or set thereof), and as far as I know this need has not been previously met.

Once we agree on a set of guidelines, then I can go on to show examples of rational moral decisions -- or possibly not, in which case I update my understanding of reality.

Re ethical vs. other kinds: I'm inclined to agree. I was answering an argument that there is no such thing as a rational moral decision. Jack drew this distinction, not me. Yes, I took way too long coming around to the conclusion that there is no distinction, and I left too much of the detritus of my thinking process lying around in the final essay...

...but on the other hand, it seemed perhaps a little necessary to show a bit of my work, since I was basically coming around to saying "no, you're wrong".

If what you're saying is that there should have been no point of contention, then I agree with that too.

"How can a terminal value be rational?": As far as this argument goes, I assert no such thing. I'm not clear on how that question is important for supporting the point I was trying to make in that argument, much less this one.

I have another argument for the idea that it's not rational to argue on the basis of a terminal value which is not at least partly shared by your audience -- and that if your audience is potentially "all humanity", then your terminal value should probably be something approaching "the common good of all humanity". But that's not a part of this argument.

I could write a post on that too, but I think I need to establish the validity of this point (i.e. how to spot the loonie) first, because that point (rationality of terminal values) builds on this one.

I gave you an upvote because the topics you consider are important ones, things I have been thinking about myself recently. But I have to agree with the other commenters that you might have made the posting a bit shorter and the reasoning a bit tighter. But that is enough about you and your ideas. Let's talk about me and my ideas. :)

The remainder of this comment deals with my take on a couple of issues you raise.

The first issue is whether moral-value opinions, judgments, and reasonings can be evaluated as "rational" vs "irrational". I think they can be. Compare to epistemic opinions, judgments, and reasonings. We define a collection of probability assignments to be rational if they are consistent and are Bayesian updates, based on evidence, from a fairly arbitrary set of priors. We may suspect, with Jaynes, that there is some rational objective methodology for choosing priors, but since we don't yet know of any perfect such methodology, we don't insist upon it.
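
To illustrate (a numeric aside of my own, with made-up probabilities): two agents starting from quite different priors both count as rational here, because each updates by Bayes' rule on the same evidence:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

for prior in (0.2, 0.7):     # two "fairly arbitrary" starting priors
    p = prior
    for _ in range(5):       # the same evidence, observed five times
        p = bayes_update(p, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
    print(f"prior {prior:.1f} -> posterior {p:.3f}")   # both converge upward
```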

Similarly, in the field of values (even moral values) we can define moral rationality as a kind of consistency of moral judgments, even if we do not yet know of a valid and objective methodology for choosing "moral priors" or "fundamental moral preferences". That is, we may not yet be able to recognize moral rationality, but, like Potter Stewart regarding pornography, we certainly know moral irrationality when we see it.

Your second major theme seems to be whether we can criticize conversations as rational or irrational. My opinion is that if we want to extend "rational" from agents and their methods to conversations, then maybe we need to view a conversation as a method of some agent. That is, we need to see the conversation as part of the decision-making methodology of some collective entity. And then we need to ask whether the conversation does, in fact, lead to the consequence that the collective entity in question makes good decisions.

Although this approach forces us into a long and difficult research program regarding the properties of collectives and their decision making (Hmmm. Didn't they give Ken Arrow a Nobel prize for doing something related to this?), I think that it is the right direction to go on this question, rather than just putting together lists of practices that might improve public policy debate in this country. As much as I agree that public policy debate sorely needs improvement.

I don't think I have anything to add to your non-length-related points. Maybe that's just because you seem to be agreeing with me. You've spun my points out a little further, though, and I find myself in agreement with where you ended up, so that's a good sign that my argument is at least coherent enough to be understandable and possibly in accordance with reality. Yay. Now I have to go read the rest of the comments and find out why at least seven people thought it sucked...

Yes, it could have been shorter, and that would probably have been clearer.

It also could have been a lot longer; I was somewhat torn by the apparent inconsistency of demanding documentation of thought-processes while not documenting my own -- but I did manage to convince myself that if anyone actually questioned the conclusions, I could go into more detail. I cut out large chunks of it after deciding that this was a better strategy than trying to Explain All The Things.

It could probably have been shorter still, though -- I ended up arriving at some fairly simple conclusions after a very roundabout process, and perhaps I didn't need to leave as much of the scaffolding and detritus in place as I did. I was already on the 4th major revision, though, having used up several days of available-focus-time on it, and after a couple of peer-reviews I figured it was time to publish, imperfections or no... especially when a major piece of my argument is about the process of error-correction through rational dialogue.

Will comment on your content-related points separately.

If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right?

All decisions are in a sense "moral decisions". You should distinguish the process of decision-making from the question of figuring out your values. You can't define values "on a rational basis", but you can use a rational process to figure out what your values actually are, and to construct a plan towards achieving given values (based, in particular, on an epistemically rational understanding of the world).

I think a lot of confusion here comes from people lumping together ultimate and intermediate goals in their definitions of morality. Ultimate goals are parts of your utility function: what you really want. As you said, you can't derive these rationally; they're just there. Intermediate goals, on the other hand, are mental shortcuts, things that you want as a proxy for some deeper desire. An example would be the goal that violent criminals get thrown in jail or otherwise separated from society; the ultimate goal that this serves is our desire to avoid things like being savagely beaten by random ne'er-do-wells when we go to the 7/11 to buy delicious melon bread. But if there were a more effective, or cheaper, or more humane way to prevent violent crime, rationality can help you figure out that you should prefer it.

Rationality can and should define your intermediate goals, but can't define your ultimate goals. But when most people talk about morality, they make no distinction between these. As soon as you do distinguish between these, the question tends to dissolve. Just look at all the flak that Sam Harris is getting for saying that science can answer moral questions. What he's really saying is that science can help us determine our utility functions and figure out how to optimize them. The criticism he gets would probably evaporate if he would taboo "morality" for a little while, but he gets way more media attention by talking this way.
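
To make the distinction concrete, here is a toy sketch (all policy names and numbers invented): the ultimate goal stays fixed, while the intermediate goals are merely candidate proxies to be swapped out whenever a better one turns up:

```python
# Candidate intermediate goals (proxies), scored on how well they serve the
# ultimate goal of less violent crime. All names and numbers are invented.
policies = {
    "incarceration":      {"crime_reduction": 0.40, "cost": 0.30, "humaneness": 0.2},
    "rehabilitation":     {"crime_reduction": 0.45, "cost": 0.20, "humaneness": 0.8},
    "early_intervention": {"crime_reduction": 0.50, "cost": 0.15, "humaneness": 0.9},
}

def utility(p):
    """How well a proxy serves the ultimate goal, net of cost."""
    return p["crime_reduction"] - p["cost"] + 0.1 * p["humaneness"]

# The intermediate goal is swapped freely; the ultimate goal never changes.
best = max(policies, key=lambda name: utility(policies[name]))
print(best)   # -> early_intervention
```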

I'm not sure I follow. Are you using "values" in the sense of "terminal values"? Or "instrumental values"? Or perhaps something else?

This post was long and winding, and didn't seem to deliver much. This might just be because I was tired. Either way it certainly didn't deliver on either of its titles.

My main conclusions are, oddly enough, in the final section:

[paste]

I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):

  • 1) you must use only documented reasoning processes:
    • 1.1) using the best known process(es) for a given class of problem
    • 1.2) stating clearly which particular process(es) you use
    • 1.3) documenting any new processes you use
  • 2) making every reasonable effort to verify that:
    • 2.1) your inputs are reasonably accurate, and
    • 2.2) there are no other reasoning processes which might be better suited to this class of problem, and
    • 2.3) there are no significant flaws in your application of the reasoning processes you are using, and
    • 2.4) there are no significant inputs you are ignoring

So... can we agree on this? [/paste]

P.S. The list refuses to format nicely in comment mode; I did what I could.

I much prefer the new version.


Voted up since this doesn't deserve its low score, but I'll echo what some other commenters are saying -- don't spend so much time discussing your own meandering thought processes. It is not an effective writing style. You're not writing a whodunnit; you don't need to worry about spoiling the ending -- give me the main points immediately so I can think about them while reading the rest.

With regards to the content, what examples can you give of a documented (or UNdocumented) reasoning process? If you're suggesting this as a normative procedure to follow, how do you plan on overcoming the enormous transaction costs this places on conversations?

(For what it's worth, I fell into the same trap of writing a post, revising it heavily, getting tired of it, declaring "it's done", and receiving a lukewarm response when I submitted it. My advice would be to simply put it aside for a week or so, then return to it with a more rested/critical eye.)

In particular, when you find yourself including a Monty Python reference suggesting, "I know this sucks, but I am sophisticated enough to laugh at myself about it," then you should realize that it is time to start over.

(Post-cutdown) Maybe things would have been clearer if you had just created a new post and linked to it from the old one, saying "whoops, this is too long and goes all over the place, see this instead". Now it's not easy to navigate through the comments to find which refer to the new and which refer to the old :P

Are you trying to characterize good decisions, or cases in which good decision making took place, or good decision making processes, or good things to do if you are participating in decision making? From your bullet items, it appears that you mean the last of these.

But what reason do you have for thinking that having everyone adhere to these practices would lead to good decisions? Are you really interested in good decisions, or rather do you merely wish everyone involved to have a good time and feel good about themselves later?

You mention an evolutionary process to improve your initial list of good practices. How would you know whether this process is going in the right direction? What exactly is the objective here?

Do you really think that the problem in collective decision making is to get the participants to reason well? Isn't it also necessary that they listen to learn each other's interests as well as each other's arguments? That they somehow first come to a consensus as to what balance of interests should be sought before they try to determine rationally how best to achieve those ends?

Downvoted. I thought you did better the first time when I couldn't quite see what your point was.

I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):

  • you must use only documented reasoning processes:
    • using the best known process(es) for a given class of problem
    • stating clearly which particular process(es) you use
    • documenting any new processes you use
  • making every reasonable effort to verify that:
    • your inputs are reasonably accurate, and
    • there are no other reasoning processes which might be better suited to this class of problem, and
    • there are no significant flaws in your application of the reasoning processes you are using, and
    • there are no significant inputs you are ignoring

This definition seems to imply that something can only be rational if an immense amount of time and research is dedicated to it. But I can say something off the cuff, with no more of a reasoning process than "this was the output of my black-box intuition", and be rational. All that's required is that my intuition was accurate in that particular instance, and I reasonably expected it to be accurate with high enough probability relative to the importance of the remark. See How Much Thought.
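
One way to read this quantitatively (my framing, with invented numbers): trusting intuition is rational exactly when its expected loss beats the cost of deliberating:

```python
def intuition_is_enough(p_intuition_right, cost_of_error, cost_of_thought,
                        p_deliberation_right=0.99):
    """True when the expected loss from trusting intuition is no worse than
    the expected loss from deliberating (residual error plus thinking cost)."""
    loss_intuition = (1 - p_intuition_right) * cost_of_error
    loss_deliberation = (1 - p_deliberation_right) * cost_of_error + cost_of_thought
    return loss_intuition <= loss_deliberation

print(intuition_is_enough(0.95, cost_of_error=1.0, cost_of_thought=0.5))    # True: low stakes
print(intuition_is_enough(0.95, cost_of_error=100.0, cost_of_thought=0.5))  # False: high stakes
```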

"Immense" wouldn't be "reasonable" unless the problem was of such magnitude as to call for an immense amount of research. That's why I qualify pretty much every requirement with that word.