Area Under The Curve

People often try to maximize.

They’ll track their productivity, and try to get the number as high as possible. They’ll look at the amount of weight they can lift, and try to push the envelope. They’ll seek out more and more and more of whatever particular trait or thing they’re currently prioritizing—money, connection, excitement, knowledge.

You could think of life as a graph, where the X axis is time and the Y axis is the trait in question. In this maximizing mindset, the goal is to get the line as high as possible—or sometimes, for people who set process goals rather than outcome goals, to make the slope of the line as steep as possible.

CFAR claims that this mindset is a mistake. Naive maximization often ignores other costs and constraints, like trying to get eight extra hours per day by not sleeping. It doesn’t work—or at least, not for very long.

The key insight is that the property we really care about is the area under the curve.

You could think of the total amount of awesomeness in a given week as being equal to the average awesomeness-per-hour times the number of hours. As it turns out, this quantity is exactly the area between the line and the X axis on our graph.
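
To make the arithmetic concrete, here's a minimal sketch in Python, with all numbers invented for illustration. It approximates the area under a sampled awesomeness-per-hour curve using the trapezoidal rule, and shows how a lower, steadier line can beat a higher line that crashes:

```python
# A toy illustration: the week's total awesomeness is the area under the
# awesomeness-per-hour curve, approximated here with the trapezoidal rule.
# All numbers are made up for illustration.

def area_under_curve(hours, values):
    """Trapezoidal-rule approximation of the area under the curve."""
    return sum(
        (hours[i + 1] - hours[i]) * (values[i] + values[i + 1]) / 2
        for i in range(len(hours) - 1)
    )

hours = [0, 24, 48, 72, 96, 120, 144, 168]  # one week, sampled daily

sprint = [9, 9, 8, 3, 1, 1, 1, 1]  # very high output early, then a crash
paced = [6, 6, 6, 6, 6, 6, 6, 6]   # a lower line, sustained all week

print(area_under_curve(hours, sprint))  # 672.0
print(area_under_curve(hours, paced))   # 1008.0
```

The "sprint" line is higher for the first few days, but the steady line wins on total area.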
 
Attempts to just drive the line higher often result in a crash. Attempts to maximize the area under the curve over time tend to keep things like sustainability front and center, reminding people to pace themselves and take breaks and so forth. This is usually not a stunning revelation, or anything, but it’s a phenomenon that’s easy to forget. It’s easy, when surrounded by other people who are ambitious or driven, to forget that you’re running a marathon, and start thinking that you ought to be sprinting.
 
So CFAR’s recommendation is to notice and track how your area is looking, rather than just how high your line is, or how steep its slope. In particular, we recommend looking to your past experiences to figure out what’s likely to be sustainable, and what isn’t. If you’ve tried hardcore cold-turkey dieting ten times in the past and it’s never worked, that’s valuable data about what your next diet plan should look like.
 

Optimizing with noise

There’s one other factor that people often leave out of their calculations, and that’s noise.

If you’ve ever taken an economics class, you may be familiar with graphs like this one:

[Figure: a smooth, single-peaked curve: results rise with effort, then fall off]

The idea behind this graph is that, as you put in more and more of some input (let’s say “effort”), you get better and better results, until at some point you actually start to get worse results (because of e.g. burnout). Similarly, if you charge more and more for a product you’re selling, revenue climbs at first, but past some point the higher price drives away customers faster than it adds revenue per sale, and total revenue falls.

Graphs like this are useful, because they tell you which way to go. If you can plot out predictions about effort or price, you can get a sense of whether you need more or less to maximize the thing you want.
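
As a minimal sketch of that idea, assuming a made-up pricing model (linear demand, so revenue is a smooth inverted U), you can estimate which way to move by probing the curve a little above and a little below where you are:

```python
# An invented model: demand falls linearly as price rises, so revenue
# (price * demand) traces a smooth inverted U with a single peak.

def revenue(price):
    demand = max(0.0, 100 - 2 * price)  # made-up linear demand curve
    return price * demand

def which_way(price, step=0.5):
    """Probe a little above and a little below the current price."""
    up, down = revenue(price + step), revenue(price - step)
    if up > down:
        return "raise the price"
    if down > up:
        return "lower the price"
    return "hold steady; you're at (or very near) the peak"

for price in (10, 25, 40):
    print(price, "->", which_way(price))
# 10 -> raise the price (left of the peak at 25)
# 25 -> hold steady; you're at (or very near) the peak
# 40 -> lower the price (right of the peak)
```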

However, in the real world, performance graphs usually don’t actually look like that. Instead, they often look more like this:

[Figure: the same overall arc, but noisy and jagged, with many small local peaks and valleys]

This is in part because there are usually many relevant factors, most of which don't vary smoothly or predictably with things like price or effort. In a situation like this one, it’s much less clear which direction to go at any given point. Sure, the overall trend is the same curve as in the first picture, but there are tons of local maxima and local minima making things more complicated.

The lesson here is that it’s not always clear how to get more of what you want. Sometimes, adding more effort helps, and sometimes it hurts, and sometimes adding X effort might hurt, but adding 2X effort might help, and so on.

CFAR’s recommendation, given this uncertainty, is something like “hold your hypotheses lightly, and be willing to try lots of things.” That means that, as you’re trying to get more area under the curve, you should be sort of humble even about your own predictions about things like the value of more rest, or the value of more self-discipline. We often just don’t know, so it pays to be a little conservative in your predictions, and a little more willing to experiment with your actions.
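
Here's a sketch of why that humility pays, using an invented performance curve (a smooth hill plus deterministic bumps that create local maxima). A pure "always take the step that looks best" strategy gets stuck on a small bump, while sampling widely first tends to find a much better spot:

```python
import math
import random

random.seed(0)

# An invented performance curve: a smooth hill peaking near effort = 5,
# plus deterministic "bumps" that create lots of local maxima and minima.
def performance(effort):
    smooth = 25 - (effort - 5) ** 2
    bumps = 2 * math.sin(7 * effort)
    return smooth + bumps

def greedy_hill_climb(effort, step=0.05, iters=200):
    """Always take whichever tiny step looks best right now."""
    for _ in range(iters):
        effort = max((effort - step, effort, effort + step), key=performance)
    return effort

def explore_then_refine(trials=30, low=0.0, high=10.0):
    """Try lots of random effort levels, keep the best, then fine-tune."""
    best = max((random.uniform(low, high) for _ in range(trials)),
               key=performance)
    return greedy_hill_climb(best)

stuck = greedy_hill_climb(1.0)   # gets trapped on a small bump near 1.2
found = explore_then_refine()    # reliably ends up near the global peak
print(f"greedy only:   effort {stuck:.2f}, performance {performance(stuck):.1f}")
print(f"explore first: effort {found:.2f}, performance {performance(found):.1f}")
```

The point isn't the specific numbers; it's that on a bumpy curve, purely local evidence about whether more effort helps can mislead, which is why cheap experiments are worth so much.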


Eat Dirt

There is a condition called pica in which people who lack a certain nutrient experience strong cravings for things that might not actually contain that nutrient. For instance, people who are iron deficient may find themselves chewing on ice cubes.
 
The theory is that the body’s ability to identify things containing iron is fairly limited, and so it’s fallen back on some other imperfect heuristic, such as “things that are hard.” People with pica occasionally eat dirt, too—perhaps because it’s rust-colored, perhaps because it has a similar taste profile to something iron-rich, perhaps for some other reason that we haven’t figured out yet. In general, though, the takeaway is “nutrient deficiencies make us do weird things for not-entirely-understood reasons.”

This is actually an excellent metaphor for many of the things we do in life. We find ourselves watching sitcoms because we want to feel like we’re surrounded by friends, or relentlessly playing mobile games because we want to feel a sense of progress and accomplishment, or buying new clothes because we want to change something deep about who we’ve become.

In many cases, the answer to pica-like behavior is factoring—with a little introspection, we can figure out the thing we actually needed, drop the weird stand-in behavior, and leap straight to the solution. People who actually suffer from pica can take iron supplements, for instance, and then the craving for ice cubes goes away.

However, it’s not always that simple. Sometimes, the “nutrient” isn’t that easy to get (for example, intrinsic self-worth, or a community of deeply caring, connected peers). And often, we’re not even able to pin down what the missing nutrient is.
 
In these cases, what should you do?

CFAR recommends you eat dirt.

You see, while neither dirt nor ice is actually the thing, at least dirt might contain tiny trace amounts of iron, rather than being completely and fundamentally hopeless, like ice. It's not necessarily a step in the right direction, but it's at least not a known step in the wrong direction. It's a deliberate choice to abandon a coping strategy that is never actually going to solve the core problem.

Metaphorically, eating dirt looks like a particular instantiation of the “try things” advice. If you’ve ever had the experience of getting a sip of water, and only then realizing that you’ve been super thirsty (because it tastes like the most delicious, refreshing thing ever), then you’ve got a sense of the sort of thing we’re pointing at—using exploration and empiricism (rather than reasoning) to figure out what’s missing.

If you find yourself engaging in a pica, try paying close attention to your internal experience. Notice how the thing you’re doing isn’t quite what you want or need—how it feels hollow or empty or pointless. And then try something else—not something perfect, not something fully understood and planned out and optimized, but just anything that might contain more of the “nutrient” you’re looking for. If you find something that’s less hollow, do that for a while, and then try searching again. With enough tiny steps, you can get to the right place even if you never fully figure out what it is you’re looking for.
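
If it helps to see that search process abstractly, here's a toy sketch, where the "hidden need" and the activity list are entirely invented. The agent never learns what the missing nutrient is; it only compares how hollow each option feels, and that comparison alone is enough to walk it to the right place:

```python
import random

random.seed(1)

# A toy model of "eating dirt." The agent can't see HIDDEN_NEED; it can
# only feel how hollow each activity is, via satisfaction(). Everything
# here (the need, the activities, the numbers) is invented for illustration.

HIDDEN_NEED = {"connection": 0.9, "progress": 0.1}   # unknown to the agent

ACTIVITIES = {
    "sitcoms":          {"connection": 0.2, "progress": 0.0},
    "mobile games":     {"connection": 0.0, "progress": 0.3},
    "group hike":       {"connection": 0.6, "progress": 0.2},
    "calling a friend": {"connection": 0.9, "progress": 0.1},
}

def satisfaction(activity):
    """The felt signal: how well an activity matches the hidden need."""
    profile = ACTIVITIES[activity]
    return sum(HIDDEN_NEED[k] * profile[k] for k in HIDDEN_NEED)

current = "sitcoms"
for _ in range(25):
    candidate = random.choice(list(ACTIVITIES))
    # Keep whichever feels less hollow; never reason about *why*.
    if satisfaction(candidate) > satisfaction(current):
        current = candidate

print(current)  # lands on "calling a friend" without ever naming the need
```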


Broccoli Errors

There is a common pattern that crops up when you suggest that people try things.  It goes like this:

"But I don't want to try broccoli again, because if I find out that my tastes have changed and I like broccoli now, I'll end up eating a lot of broccoli, and I don't like broccoli.

This example is silly and amusing, but it's surprising how often people actually raise a broccoli-error-type objection.  The Comfort Zone Exploration class is specifically about targeting this mistake—putting people into a mindset where they're willing to gather data at all, rather than living entirely in their preconceptions.

The broccoli error (overweighting present approximations of your values in a way that prevents you from updating those approximations) is one end of a spectrum.  At the other end is the "Gandhi murder pill" thought experiment, which runs thus:

Should Gandhi, who is opposed to all things violent, take a pill that will make him indifferent to the idea of committing murder?

Clearly, the correct conclusion is "no."  Gandhi should not take that pill, because then he indeed might commit murder, and he does not want to do so.  The fact that his future self wouldn't mind is not a compelling argument.

What distinguishes broccoli errors from correct decisions not to update in the wrong direction?  Essentially, it's whether the thing under consideration is relevant to one's identity, or core values.

Recommendation: When you notice yourself feeling resistant to a potential update, pause for a moment and ask whether your higher self cares about the domain under consideration.  If it's the sort of thing that is an important and enduring part of your identity, then feel free to not tinker with it.

But if, in fact, it has no relevance at all according to your deeply-held values, then consider the possibility that this is a place where it might be worth trying something new.


Copernicus & Chaos

Imagine that you know nothing about a thing except that it exists.

How long will this war go on? How good is this movie? Will this book still be popular a decade from now? What are the chances you’ll be good at this skill?

Absent any additional information, your best guess is that you’re right in the middle of a very normal thing. This makes sense if you imagine encountering a hundred similar things. If you're a time traveler with a faulty time machine and you happen upon a hundred different wars, you’ll randomly encounter some in their first half and others in their latter half, with a more or less even distribution between “started this morning” and “ending tonight.” Given such a distribution, your best guess is the midpoint: if the war has gone on for ten years so far, expect it to go on for ten years more.

This is true for a range of phenomena, from trivial to interesting. Look at a digital clock that only shows minutes—it’s safer to guess that it’s 6:02:30 than to guess that it’s 6:02:01 or 6:02:59. Look at a popular franchise, like Harry Potter—people have been talking about it and paying attention to it for close to twenty years, so a reasonable guess is that it will fall out of the spotlight by 2035. If you’ve never played shuffleboard before, you’d be well within reason to expect to do better than about half of the people who are trying it for the first time.
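
If you want to check that logic, here's a quick simulation sketch. The distribution of total lengths is arbitrary; the uniformly random arrival time is what does the work. For a thing observed at a random moment in its lifetime, the median ratio of time remaining to time elapsed comes out to about 1, which is why "as much again as it's already lasted" is the natural guess:

```python
import random

random.seed(2)

# Drop in at a uniformly random moment during each of many "wars" and
# compare time elapsed to time remaining. The war lengths here are
# arbitrary; only the uniform arrival matters for the result.

ratios = []
for _ in range(100_000):
    total = random.uniform(1, 40)       # total length, in years (invented)
    arrival = random.uniform(0, total)  # the moment you happen upon it
    if arrival == 0.0:                  # vanishingly rare; avoids dividing by zero
        continue
    ratios.append((total - arrival) / arrival)  # remaining / elapsed

ratios.sort()
print(ratios[len(ratios) // 2])  # the median ratio: very close to 1.0
```

Note that it's the median that lands at 1; the mean is skewed far higher by the rare cases where you arrive right at the start, which is part of why this is a rule of thumb rather than a precise forecast.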

The Chaos Heuristic

The Copernican principle is a rule of thumb at best, and it particularly applies during “normal” times. By that, we mean times that are between large paradigm shifts—there were thousands of years between human mastery of fire and stone and the invention of agriculture, and thousands more between agriculture and industrialization.

But in many ways, the most interesting times are the least normal ones—the eras in which things leap ahead, and the present looks drastically different from the past. Largely, these times come about because someone figures out some new Big Idea (like the internal combustion engine, or the microchip).

As a forward-looking human, you’d probably prefer to live in one of those times of steep upward development, rather than the long slow “exploit” period between innovations. The problem is, those steep times come from exploration—and exploration is expensive.

Would you rather have a steady job, or found a start-up? Would you rather live in your home country, or drop everything and move to Indonesia? Do you think you have a better chance at success if you stand on the shoulders of giants, or forge your own entirely new trail? 

As an individual, it’s almost always safer to play the exploit game rather than the explore one.

But! As a society, we’re better off with more people playing explore. Most people are climbing toward local maxima; if more people are willing to absorb the risk of jumping off and finding nothing better, the group will find the real mountains much more quickly and reliably.
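
Here's a toy sketch of that group-level claim, with the landscape, payoffs, and numbers all invented. Exploiters collect a known, decent payoff; explorers gamble on random spots. The average explorer does worse, but groups with more explorers are far more likely to find the best region at all:

```python
import math
import random

random.seed(4)

# Invented landscape: mediocre almost everywhere, with one excellent
# region that nobody knows about yet.
def payoff(x):
    if 70 <= x <= 80:
        return 10.0                  # the undiscovered mountain
    return 2 * math.sin(x) + 2       # rolling foothills, between 0 and 4

KNOWN_SPOT = math.pi / 2             # a solid local maximum: payoff 4.0

def run_group(n_explorers, n_total=40):
    payoffs = []
    for i in range(n_total):
        if i < n_explorers:
            payoffs.append(payoff(random.uniform(0, 100)))  # explore
        else:
            payoffs.append(payoff(KNOWN_SPOT))              # exploit
    return payoffs

for n_explorers in (0, 10, 30):
    p = run_group(n_explorers)
    print(f"{n_explorers:2d} explorers -> group best: {max(p):4.1f}, "
          f"average payoff: {sum(p) / len(p):.2f}")
```

With zero explorers, the group's best result stays at 4.0 forever; with many, someone usually stumbles onto the 10.0 region, even though the average explorer earns less than a reliable 4.0.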

The Chaos Heuristic, in a nutshell, says this: odds are, you’re nowhere near your best options, and you don’t know what you don’t know. So go exploring!