I've been studying a lot of finance lately, and it strikes me that it's a field that requires a very high degree of rationality and ability to cut through the noise to get to correct arguments.

What's especially nice about investing, though, is that it has a very similar utility curve for all players. People have slightly different goals in finance and investing, but generally speaking, they're measuring utility in terms of financial return. There are some differences in time preference and risk tolerance, but over time we can sort the winning strategies from the losing ones. There's a fairly clear and objective standard for what worked and what didn't, which makes it a very helpful field for the aspiring rationalist to study and learn from.

I originally wrote this post, "Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing" for my blog, so the tone is more colloquial than you'd normally see on LessWrong, and the audience is slightly different. A friend of mine suggested I post it up here too as it might be interesting to the LW crowd, so here we go -


"Convincing Arguments Aren’t Necessarily Correct – They’re Merely Convincing"

Things have been going well lately, and I now have a surplus of cash for the first time in a while. Err, rather, I have both a surplus of cash and some highly consistent, predictable future income. That’s nice! I envy all you salaried people when I think about predictable future income. Having a decent chunk of cash but no predictable future income means you don’t really have a surplus of cash.

Anyways, I was thinking about what to invest a small bit of money in, and reading some papers and analyses. I’m reading a mix of finance, investment, politics, diplomacy, and history lately, and the subjects complement each other nicely. It’s also gotten me interested in the topic.

Today, I read a really fantastically convincing argument, enough so that I was immediately ready to go buy a small amount of what the author was advocating.

Then I stopped myself! Wait, the author isn’t necessarily correct – he’s merely convincing.

I went back through the piece I was reading, which was quite a long piece. I started counting the number of premises the author had, and it went something like this:

Premise A
Premise B
Premise C
Premise D
Premise E
Premise F
If A, B, C, D, E, F, then G.
Premise H
Premise I
Premise J
Premise K
If G, H, I, J, K, then L.
Therefore, invest in L.
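To see why that structure should worry you, here's a quick back-of-the-envelope sketch. The 90% confidence per premise is an assumption I've picked purely for illustration:

```python
# Rough sketch: even generously assuming each premise is 90% likely
# to be true, a conclusion that depends on all of them jointly is
# far shakier than any single premise feels.
def chain_confidence(p_per_premise: float, n_premises: int) -> float:
    """Probability that every premise in an independent chain holds."""
    return p_per_premise ** n_premises

# The argument above rests on ten explicit premises (A-F and H-K);
# counting the two inference steps as premises only makes it worse.
print(round(chain_confidence(0.9, 10), 2))  # 0.35
```

Ten premises at 90% each leaves the conclusion with barely a one-in-three chance of surviving intact, and that's before questioning the inference steps themselves.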

It was super convincing. But then it dawned on me: convincing doesn’t mean correct. For instance, though the author doesn’t explicitly state it, two of his premises are that the American and Chinese economies, political leaderships, objectives, and currencies will be doing roughly the same things over the next 20 years as they’re doing now, maybe with a little bit of change but nothing drastic.

Perhaps that’s not true! I don’t know much about China’s current governmental leadership, who the projected or potential next leaders are, or what their thinking and objectives are. American economic, political, and monetary policy can also change fast. Would it really surprise anyone if 2012 or 2016 brought in someone with significantly different monetary views from recent administrations?

So, two of the author’s biggest unstated premises are that China and America will behave roughly how we expect China and America to behave over the next 20 years. And if he’s wrong, the entire analysis might fall apart.

It was a super convincing argument, but there’s a problem with any argument that has a lot of premises chained together – if even one premise is wrong, then the whole conclusion might be faulty.

Convincing doesn’t mean correct – it just means convincing. Stay skeptical. Keep researching.


The tone is more colloquial than it would be if I'd written strictly for LW, but I think this distinction is an important one that most people don't pay attention to: convincing arguments aren't necessarily correct. If you're looking into an argument that has real-world consequences, you need to thoroughly examine all of its premises to make sure you're working with reality. And even a single flawed premise can result in you totally blowing up your utility.


My internal name for this is "conjunction penalty". At some point in the last year I learned to penalize conjunctions and favor disjunctions in real life. For example, to stop being afraid of an imagined worst case scenario, I count the number of independent things that need to go wrong, roughly estimate the probability of each, multiply them, and end up with a comfortably small number. Also it feels very nice to carry out plans that depend on disjunctions of favorable events, because such plans work much more often than I intuitively expect.

This is really interesting. Could you give an example?

(Hopefully you don't mind an example from someone else; I seem to do similar, but can't speak for the original poster :))

As a random recent example, the probability that anything goes wrong with my equipment while SCUBA diving is fairly low to begin with, since I get the gear from a trusted source and know they do good maintenance on it. Even if something goes wrong, I know techniques for handling everything but a catastrophic failure by myself. Even if my gear fails catastrophically, I still have a buddy, who I know personally has been trained to handle that situation. Even if the gear both of us has fails catastrophically, or my buddy panics, or isn't nearby, there are emergency techniques that risk injury but will avoid death.
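A minimal sketch of that layered-safeguard arithmetic; every number here is invented for illustration, not a real dive-safety estimate:

```python
# Sketch of why layered, independent safeguards make the worst case
# rare: disaster requires every layer to fail at once, so the overall
# probability is the product of the individual failure probabilities.
# All numbers below are made up for illustration.
layers = {
    "well-maintained gear fails catastrophically": 0.01,
    "self-rescue techniques don't apply": 0.5,
    "buddy can't help either": 0.2,
    "emergency techniques also fail": 0.1,
}

p_worst_case = 1.0
for layer, p in layers.items():
    p_worst_case *= p

print(p_worst_case)  # about 1 in 10,000
```

Each safeguard only has to work when all the earlier ones have already failed, which is why stacking even modestly reliable layers drives the worst case down so fast.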

I actually used similar logic to get myself to learn in the first place - "right now, this seems dangerous, because I do not know how likely the risks are, nor how to mitigate them, nor how to handle emergencies. Between taking classes and doing some research, I can alleviate both of those. The cost of these actions is acceptable even if I decide part-way through that this is too risky for me. Therefore, I will go ahead, sign up for a class, and make a serious commitment to drop the class if I ever feel that SCUBA diving, or the way the class is being taught, places me at serious risk."

Your point seems to be roughly that "highly conjunctive arguments are disproportionately convincing". I hate to pick on what may just be a minor language issue, but I really grind to a halt trying to unify this with the phrase "convincing arguments aren't necessarily correct". I don't see much difference between it and "beliefs aren't necessarily correct". The latter is true, but I'm still going to act as if my beliefs are correct. The former is true, but I'm still going to be convinced by the arguments I find most convincing.

Using the word "convincing" as a 1-place predicate distracts from the actual problem, which is simply that you found a weak argument convincing.

Yes. And the problem is the well-known cognitive bias that plausibility goes up as probability goes down.

Plausible is not probable! Convincing is not correct! Useful little soundbites.


I think "what feels plausible is not probable" is slightly better, since it shows the map-territory distinction--"plausible" and "probable" both sound like words that describe the territory.

Sure; in practice these soundbites are a reaction to someone saying "plausible" (and now "convincing"), and the person usually responds with some form of "what?". That lets me do the fifteen-second explanation of the representativeness heuristic: "I'm gonna flip a coin a bunch of times. Which is more plausible: heads heads tails heads tails tails heads, or tails tails tails tails tails? One option makes you four times as much money as the other." If they show interest, I do my best to limit myself to a few minutes.
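The coin-flip quiz works out cleanly if you just count flips. Every specific sequence of fair-coin flips has probability (1/2) to the power of its length, so the shorter, "less random-looking" all-tails run is the more likely one:

```python
# Each specific sequence of fair-coin flips has probability
# 0.5 ** (number of flips), regardless of how "random" it looks.
def sequence_prob(sequence: str) -> float:
    flips = sequence.split()
    return 0.5 ** len(flips)

mixed = sequence_prob("H H T H T T H")  # 7 flips: representative-looking
tails = sequence_prob("T T T T T")      # 5 flips: implausible-looking

print(tails / mixed)  # 4.0 -- the 4x payout ratio in the quiz
```

The mixed sequence *feels* more plausible because it's more representative of a typical coin run, but the all-tails sequence is four times as probable simply because it's two flips shorter.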

"Plausible is the opposite of probable" is the soundbite I use.

This is particularly difficult for me, as my brain is highly tuned to plausibility, more so than most people's. Hence the usefulness of a handy soundbite hooked somewhat into the plausibility detector.

I want trip-wires and landmines in my brain so that whenever I semi-consciously judge something as plausible, reasonable, 'a soldier on my side', 'a soldier on the enemy's side', etc., alarms and explosions go off and stop me from making such judgments. The closest I can find to such tools is soundbites like these, trained up as automatic responses to such judgments.

Plausible is the opposite of probable

Reversed stupidity is not intelligence.

This idea has been looked at before on Less Wrong.

Convincing is a proxy for correct, but not a good one (anymore). And politics is a great example of how optimisation by proxy can produce almost entirely false positives.

In case you were wondering and didn't follow MinibearRex's link: the heuristic we use for determining convincingness is how representative of reality the given scenario is. In your case, a lot of the premises seem to be variants on "business as usual", which ought to strike you as particularly representative of reality.

Burton's On Being Certain looks at the neural correlates of certainty - the feeling of knowing something.

Not, to be frank, a very memorable book, but one which you might find useful given your interests. It does offer a useful reduction of the term "intuition", breaking it down into "unconscious thoughts plus feeling of certainty".