Pessimists sound smart. Optimists make money.
–Nat Friedman (quoted by Patrick)

I’ve realized a new reason why pessimism sounds smart: optimism often requires believing in unknown, unspecified future breakthroughs—which seems fanciful and naive. If you very soberly, wisely, prudently stick to the known and the proven, you will necessarily be pessimistic.

No proven resources or technologies can sustain economic growth. The status quo will plateau. To expect growth is to believe in future technologies. To expect very long-term growth is to believe in science fiction.

No known solutions can solve our hardest problems—that’s why they’re the hardest ones. And by the nature of problem-solving, we are aware of many problems before we are aware of their solutions. So there will always be a frontier of problems we don’t yet know how to solve.

Fears of Peak Oil and other resource shortages follow this pattern. Predictions of shortages are typically based on “proven reserves.” We are saved from shortage by the unproven and even the unknown reserves, and the new technologies that make them profitable to extract. Or, when certain resources really do run out, we are saved economically by new technologies that use different resources: Haber-Bosch saved us from the guano shortage; kerosene saved the sperm whales from extinction; plastic saved the elephants by replacing ivory.

In just the same way, it can seem that we’re running out of ideas—that all our technologies and industries are plateauing. Technologies do run a natural S-curve, just like oil fields. But when some breakthrough insight creates an entirely new field, it opens up a whole new orchard of low-hanging fruit to pick. Focusing only on established sectors and proven fields thus naturally leads to pessimism. To be an optimist, you have to believe that at least some current wild-eyed speculation will come true.

Why is this style of pessimism repeatedly wrong? How can this optimism be justified? Not on the basis of specific future technologies—which, again, are unproven—but on the basis of philosophical premises about the nature of humans and of progress. The possibility of sustained progress is a consequence of the view of humans as “universal explainers” (cf. David Deutsch), and of progress as driven fundamentally by human choice and effort—that is, by human agency.

The opposite view is that progress is a matter of luck. If the progress of the last few centuries was a random windfall, then pessimism is logical: our luck is bound to run out. How could we get that lucky again? If the next century is an average one, it will see little progress.

But if progress is primarily a matter of agency, then whether it continues is up to us.

15 comments

Optimism is what agency looks like to people without it.

Pessimism is what non-agency looks like to people who have it.

Why is this style of pessimism repeatedly wrong?

I liked this post, and upvoted, but this sounds like it might be somewhat cherry-picked. If you say that the pessimism is "repeatedly wrong", then you are ignoring the areas where it has been repeatedly right. E.g. perpetual motion machines and faster-than-light travel continue to remain impossible, as does curing aging. Even though we might now be starting to be in a position to actually cure aging, people have tried various means of becoming immortal for probably thousands of years, and so far the pessimism has always been right.

Pessimism about resources running out is repeatedly wrong, as far as I can tell. I asked for examples on Twitter and got no major examples from the last couple of centuries, and only a handful of minor ones.

I don't mean to argue that pessimism as such is always wrong. There are contexts where it is realistic. See my comments on descriptive vs. prescriptive optimism.

In the limit, the pessimists will eventually be correct. There are only so many technologies and improvements that can be made under the laws of physics (e.g. Landauer's limit), and once we hit those real limits, that will be as good as it gets.

Of course, the catch that a lot of people might not realize is that a) we're at least centuries away from hitting said limits, even with aligned superintelligence, and b) those limits are far, far better than what we have and live with today.

It's not as pithy, but it seems more accurate to say that some optimists make money, no?

Of course, this doesn't have any direct bearing on this interesting post.

100% this. Some optimists make money, some get scammed.

Why is this style of pessimism repeatedly wrong? How can this optimism be justified?

Pessimism is very often right, and optimism wrong (especially about expected timeframes).  The missing piece is the payoff - pessimism can be right most of the time, and optimism wrong most of the time, but most of the progress comes from the somewhat rare cases that deviate from median projections.  And the optimists, while mostly wrong, drive toward those payoffs more effectively than pessimists.

I am reminded of the Python (Monty) parable of the flying sheep: https://www.ibras.dk/montypython/episode02.htm. There is no hope of success, but they don't get rid of Harold (the ringleader) because of the "enormous commercial possibilities should he succeed".

Yes, I was referring to pessimism about things like resource shortages. Specific ventures and experiments often fail.

No known solutions can solve our hardest problems—that’s why they’re the hardest ones.

I like the energy, but I have to register a note of dissent here.

Quite a few of our hardest problems do have known solutions - it's just that those known solutions are, or appear, too hard to implement.

  • Brute force algorithms exist for almost everything we care about, up to and including AGI.
  • Overweight individuals know that if they eat less, they will eventually lose weight; it's just often frustratingly beyond them, for one reason or another. Same with alcoholics and drink.
  • Collective action problems have been studied in economics for decades, and by this point a lot of them have clever approaches we could use to at least partway ameliorate them.

'Appear' is important here. Ken Thompson's advice, "When in doubt, use brute force," is very good advice because so much resistance can and will crumble under sustained effort. But this requires a somewhat different kind of optimism from the one I think you're describing, of inventing a fundamentally new solution: it's the optimism to get back up after you fall down and run straight at the wall again until it gives way.

It's not really a solution if it can't be implemented: if it doesn't work, or is unaffordable, or otherwise isn't practical.

Building on this, there's also the evolutionary element. Humans continued evolving after the rise of civilization, even if the timeframes were too short for much mutation to occur. During early civilizations, and probably in most civilizations before the industrial revolution, pessimistic outlooks were extremely common in the real world. As a result, both instinctive and learned pessimism would give an individual more surviving offspring within any given civilization.

[-] jmh

Just bouncing something of a reaction to the idea that pessimism sounds smart(er than optimism).

I always hear that humans are, generally, risk-averse. That can be a "smart" strategy, perhaps stemming from evolutionary imperatives. So we're naturally inclined to over-weight possible harms while under-weighting possible benefits.

But in that setup, pessimism sounds smart because we want to believe we have more to fear on the downside; perhaps that's a form of confirmation bias at work.

[-] TLW

Why is this style of pessimism repeatedly wrong?

Beware selection bias. If it wasn't repeatedly wrong, there's a good chance we wouldn't be here to ask the question!

The opposite view is that progress is a matter of luck.

Hm. I tend to not view the pessimistic side as luck so much as 'there's a finite number of useful techs, which we are rapidly burning through'.

[-] TAG

Extreme, apocalyptic pessimism has an equal and opposite problem: you need to persuade people that they are going to be killed by a novel threat, or by something known that has never killed everybody before.