I start each of my weekly reviews by re-reading one of my favorite essays of life advice—a different one each week. It’s useful for a few different reasons:

  • It helps me get into the right reflective frame of mind.

  • The best essays are dense enough with useful advice that I find new interesting bits every time I read them.

  • Much good advice is easy to understand, but hard to implement. So to get the most benefit from it, you should find whatever version of it most resonates with you and then re-read it frequently to keep yourself on track.

I’ve collected my favorite essays for re-reading below. I’ll keep this updated as I find more great essays, and I’d welcome other contributions—please suggest your own favorites in the comments!

There are a lot of essays here! If you'd like, I can email you one essay every weekend, so you can read it before your weekly review: (sign up on site)


Paul Graham, Life is Short. Inspire yourself never to waste time on bullshit again:

Having kids showed me how to convert a continuous quantity, time, into discrete quantities. You only get 52 weekends with your 2 year old. If Christmas-as-magic lasts from say ages 3 to 10, you only get to watch your child experience it 8 times. And while it’s impossible to say what is a lot or a little of a continuous quantity like time, 8 is not a lot of something. If you had a handful of 8 peanuts, or a shelf of 8 books to choose from, the quantity would definitely seem limited, no matter what your lifespan was.

Ok, so life actually is short. Does it make any difference to know that?

It has for me. It means arguments of the form “Life is too short for x” have great force. It’s not just a figure of speech to say that life is too short for something. It’s not just a synonym for annoying. If you find yourself thinking that life is too short for something, you should try to eliminate it if you can.

When I ask myself what I’ve found life is too short for, the word that pops into my head is “bullshit.” I realize that answer is somewhat tautological. It’s almost the definition of bullshit that it’s the stuff that life is too short for. And yet bullshit does have a distinctive character. There’s something fake about it. It’s the junk food of experience. [1]

If you ask yourself what you spend your time on that’s bullshit, you probably already know the answer. Unnecessary meetings, pointless disputes, bureaucracy, posturing, dealing with other people’s mistakes, traffic jams, addictive but unrewarding pastimes.

I’ve found that unless I’m vigilant, the amount of bullshit in my life only ever increases. Rereading Life is Short every so often gives me a kick in the pants to figure out what really matters and how to get the bullshit levels back down.


Derek Sivers, There is no speed limit, in which he learns a semester’s worth of music theory in a single morning:

Within a minute, he started quizzing me. “If the 5-chord with the flat-7 has that tri-tone, then so does another flat-7 chord. Which one?”

“Uh… the flat-2 chord?”

“Right! So that’s a substitute chord. Any flat-7 chord can be substituted with the other flat-7 that shares the same tri-tone. So reharmonize all the chords you can in this chart. Go.”

The pace was intense, and I loved it. Finally, someone was challenging me — keeping me in over my head — encouraging and expecting me to pull myself up quickly. I was learning so fast, it felt like the adrenaline rush you get while playing a video game. He tossed every fact at me and made me prove that I got it.

In our three-hour lesson that morning, he taught me a full semester of Berklee’s harmony courses.

This was one of the major inspirations for Be impatient. Every time I reread it, I think of at least one thing where I’m setting myself a speed limit for no reason!
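
As an aside, the substitution rule in that exchange is mechanical enough to check in a few lines of code. Here's a minimal Python sketch (my own illustration, not part of Sivers's essay; the note spellings and helper names are mine):

    NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

    def tritone_of_dom7(root):
        # The tritone in a dominant-7 ("flat-7") chord lies between its 3rd and b7th.
        return frozenset({(root + 4) % 12, (root + 10) % 12})

    # In C, the 5-chord is G7 (root 7) and the flat-2 chord is Db7 (root 1):
    assert tritone_of_dom7(7) == tritone_of_dom7(1)  # both contain the B/F tritone

    # Every dominant-7 chord shares its tritone with exactly one other chord: the one a tritone away.
    for root in range(12):
        partners = [r for r in range(12) if r != root and tritone_of_dom7(r) == tritone_of_dom7(root)]
        assert partners == [(root + 6) % 12]
        print(f"{NOTES[root]}7 can substitute for {NOTES[(root + 6) % 12]}7")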


Sam Altman, How To Be Successful. Sam might have observed more successful people more closely than anyone else on the planet, and the advice is as good as you’d expect.

Focus is a force multiplier on work.

Almost everyone I’ve ever met would be well-served by spending more time thinking about what to focus on. It is much more important to work on the right thing than it is to work many hours. Most people waste most of their time on stuff that doesn’t matter.

Once you have figured out what to do, be unstoppable about getting your small handful of priorities accomplished quickly. I have yet to meet a slow-moving person who is very successful.

 

Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.

… To be willful, you have to be optimistic—hopefully this is a personality trait that can be improved with practice. I have never met a very successful pessimistic person.

There are lots of different points here, so this one especially bears rereading!


R. W. Hamming, You and your research. Hamming observed almost as many great scientists as Sam Altman did founders. He had some interesting conclusions:

At first I asked what were the important problems in chemistry, then what important problems they were working on, or problems that might lead to important results. One day I asked, “if what they were working on was not important, and was not likely to lead to important things, then why were they working on them?” After that I had to eat with the engineers!

About four months later, my friend stopped me in the hall and remarked that my question had bothered him. He had spent the summer thinking about the important problems in his area, and while he had not changed his research he thought it was well worth the effort. I thanked him and kept walking. A few weeks later I noticed that he was made head of the department. Many years later he became a member of the National Academy of Engineering. The one person who could hear the question went on to do important things and all the others—so far as I know—did not do anything worth public attention.

… Some people work with their doors open in clear view of those who pass by, while others carefully protect themselves from interruptions. Those with the door open get less work done each day, but those with their door closed tend not to know what to work on, nor are they apt to hear the clues to the missing piece to one of their “list” problems. I cannot prove that the open door produces the open mind, or the other way around. I only can observe the correlation. I suspect that each reinforces the other, that an open door will more likely lead you to important problems than will a closed door.

 

There is another trait that took me many years to notice, and that is the ability to tolerate ambiguity. Most people want to believe what they learn is the truth: there are a few people who doubt everything. If you believe too much then you are not likely to find the essentially new view that transforms a field, and if you doubt too much you will not be able to do much at all. It is a fine balance between believing what you learn and at the same time doubting things. Great steps forward usually involve a change of viewpoint to outside the standard ones in the field.

While you are learning things you need to think about them and examine them from many sides. By connecting them in many ways with what you already know…. you can later retrieve them in unusual situations. It took me a long time to realize that each time I learned something I should put “hooks” on it. This is another face of the extra effort, the studying more deeply, the going the extra mile, that seems to be characteristic of great scientists.

Hamming is an unusual combination: (a) a great scientist himself, (b) curious and thoughtful about what makes others great, and (c) honest and open about his observations (it seems).


Anonymous, Becoming a Magician—on how to become a person that your current self would perceive as magical:

The description was about five or six handwritten pages long, and at the time, it was a manifestation of desperate longing to be somewhere other than where I was, someone who felt free and cared for. At the time I saw that description as basically an impossibility; my life could never be so amazing in reality.

Fast forward about seven or ten years and I rediscovered the description when I was moving old notebooks and journals from one dusty storage spot to another. As I read through it, I discovered that 90% of the statements I had made in that description were true (or true in spirit). … It was incredible to me, despite all the changes that had happened in my life since when I wrote the passage, that I had basically become the person whose life I had dreamed of living as a teenager.

That’s pretty fucking cool.

 

And then came Sanatan Dinda. An Indian visual artist from Kolkata, he didn’t even make the finals the first year he competed, and the next year he placed second with a style that broke half a dozen of the implicit rules of ‘good artwork’ at the competition. … [T]he third year he came he won the entire competition by something like ten percent of the total awarded points over the next artist in second place.

… The thing that confused me though was this – I could not work out how he did it. Like, I had zero mental model of how he created that piece in the same timeframe we all had; how he came up with it, designed it, practiced it. Even though he placed first and I placed fifth and logically we both existed on a scale of ‘competence at bodypainting’ it seemed like the skills required were completely different.

The exercise they suggest is a really useful activity for weekly (or monthly or yearly) reviews. Highly recommended!


Dan Luu, 95th percentile isn’t that good. Great for cultivating a self-improvement mindset by reminding you how easy (in some sense) it is to make huge improvements at something:

Reaching 95%-ile isn’t very impressive because it’s not that hard to do. I think this is one of my most ridiculable ideas. It doesn’t help that, when stated nakedly, that sounds elitist. But I think it’s just the opposite: most people can become (relatively) good at most things.

Note that when I say 95%-ile, I mean 95%-ile among people who participate, not all people (for many activities, just doing it at all makes you 99%-ile or above across all people). I’m also not referring to 95%-ile among people who practice regularly. The “one weird trick” is that, for a lot of activities, being something like 10%-ile among people who practice can make you something like 90%-ile or 99%-ile among people who participate.

It’s not weekly review material, but I also appreciate the bonus section on Dan’s other most ridiculable ideas.


Suggest your own favorite life advice essays in the comments!

17 comments

Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.

 

I've seen this quote (and similar ones) before. I believe that this approach is extremely flawed, to the point of being anti-rationalist. In no particular order, my objections are:

  • It is necessarily restricted to the people Altman knows. As a member of the social, technological, and financial elite, Altman associates with people who have an extremely high base rate for being successful relative to the general population (even relative to the general American population).
  • The "and mean it" opens to the door to a No True Scotsman fallacy. The person didn't succeed even though they said they wouldn't give up? They must have not really meant it.
  • It gives zero weight to the expected value of the work. There are lots of people whose implicit strategy is "No matter my financial challenges, I am never going to give up playing the lottery every week until I get rich. If I run out of money I am going to figure out how to overcome that challenge so I can continue to buy lottery tickets." More seriously, there are lots of important unsolved problems that humanity has been working on for multiple lifetimes without success. I am literally willing to bet against the success of anyone who believes in Altman's quote and works on deciding if P=NP, finding a polynomial time algorithm for integer factorization, or similar problems.
  • It gives zero weight to opportunity cost. If the person wasn't banging their head against whatever they were working on, they could probably switch to a better problem. Recognizing this, Silicon Valley simultaneously glorifies "Not Giving Up", and "The Pivot". One explanation for this apparent contradiction is that the true work that SV wants people to not give up on is "generating returns for investors."
  • In general, it is suspicious that Altman's advice aligns so perfectly with the behavior you would want if you were an angel or VC. That is, you would want the team to work as hard as possible to generate a return without giving up, ignoring opportunity costs, while the investor maintains the option to continue to invest or not. Note that no investor would say, "I will invest as much money as necessary into this startup until it works, and no matter what the challenges are we will figure out how to raise more money for them."
  • A rationalist approach would evaluate the likelihood of overcoming known challenges, the likelihood that an unknown challenge would cause a failure, the expected value of the venture, and the opportunity costs, and then periodically re-evaluate to decide whether to give up or not. Altman's advice to explicitly not do this is self-deceptive, magical thinking.

I don't think founder/investor class conflict makes that much sense as an explanation for that. It's easy to imagine a world in which investors wanted their money returned when the team updates downwards on their likelihood of success. (In fact, that sometimes happens! I don't know whether Sam would do that but my guess is only if the founders want to give up.)

I also don't think Sam, at least, glorifies pivots or ignores opportunity cost. For instance, from the first lecture of his startup course:

And pivots are supposed to be great, the more pivots the better. So this isn't totally wrong, things do evolve in ways you can't totally predict.... But the pendulum has swung way out of whack. A bad idea is still bad and the pivot-happy world we're in today feels suboptimal.... There are exceptions, of course, but most great companies start with a great idea, not a pivot.... [I]f you look at the track record of pivots, they don't become big companies. I myself used to believe ideas didn't matter that much, but I'm very sure that's wrong now.

---

More generally, I agree that this claim clashes strongly with some rationalists' worldviews, and it's plausible that it just increases the variance of outcomes and not the mean. But given that outcomes are power-law distributed (mean is proportional to variance!), the number of people endorsing it from on top of a giant pile of utility, and the perhaps surprisingly low number of highly successful rationalists, I'd recommend rationalists treat it with curiosity instead of dismissiveness.

I do agree that it increases the variance of outcomes. I think it decreases the mean, but I'm less sure about that. Here's one way I think it could work, if it does work: If some people are generally pessimistic about their chances of success, and this causes them to update their beliefs closer to reality, then Altman's advice would help. That is, if some people give up too easily, it will help them, while the outside world (investors, the market, etc) will put a check on those who are overly optimistic. However, I think it's still important to note that "not giving up" can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork). 

Thanks for the "Young Rationalists" link, I hadn't read that before. I think there are a fair number of successful rationalists, but they mostly focus on doing their work rather than engaging with the rationalist community. One example of this is Cliff Asness - here's a essay by him that takes a strongly rationalist view.

I think it's still important to note that "not giving up" can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork). 

If you're going to interpret the original "don't give up" advice so literally and blindly that "no matter what the challenges are I'm going to figure them out" includes committing massive fraud, then yes, it will be bad advice for you. That's a really remarkably uncharitable interpretation.

Not sure if this is your typo or a LW bug, but "essay" appears not to actually be hyperlinked?

I think I mis-pasted the link. I have edited it, but it's supposed to go to https://www.aqr.com/Insights/Perspectives/A-Gut-Punch

Sarah Constantin's Errors vs. Bugs and the End of Stupidity remains one of my favorite essays.

I wasn't an exceptional pianist, and when I'd play my nocturne for [my teacher], there would be a few clinkers.  I apologized -- I was embarrassed to be wasting his time.  But he never seemed to judge me for my mistakes.  Instead, he'd try to fix them with me: repeating a three-note phrase, differently each time, trying to get me to unlearn a hand position or habitual movement pattern that was systematically sending my fingers to wrong notes.

I had never thought about wrong notes that way.  I had thought that wrong notes came from being "bad at piano" or "not practicing hard enough," and if you practiced harder the clinkers would go away.  But that's a myth.

In fact, wrong notes always have a cause. An immediate physical cause.   Just before you play a wrong note, your fingers were in a position that made that wrong note inevitable. Fixing wrong notes isn't about "practicing harder" but about trying to unkink those systematically error-causing fingerings and hand motions.  That's where the "telekinesis" comes in: pretending you can move your fingers with your mind is a kind of mindfulness meditation that can make it easier to unlearn the calcified patterns of movement that cause mistakes.

Remembering that experience, I realized that we really tend to think about mistakes wrong, in the context of music performance but also in the context of academic performance.

A common mental model for performance is what I'll call the "error model."  In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error.  You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something.  Better performers have a lower error rate c.  Improvement is a matter of lowering your error rate.  This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct.  Your performance is defined by a single continuous parameter, your accuracy.

But we could also consider the "bug model" of errors.  A person taking a test or playing a piece of music is executing a program, a deterministic procedure.  If your program has a bug, then you'll get a whole class of problems wrong, consistently.  Bugs, unlike error rates, can't be quantified along a single axis as less or more severe.  A bug gets everything that it affects wrong.  And fixing bugs doesn't improve your performance in a continuous fashion; you can fix a "little" bug and immediately go from getting everything wrong to everything right.  You can't really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right.  You can only define its behavior by isolating what the bug does.

Often, I think mistakes are more like bugs than errors.  My clinkers weren't random; they were in specific places, because I had sub-optimal fingerings in those places.  A kid who gets arithmetic questions wrong usually isn't getting them wrong at random; there's something missing in their understanding, like not getting the difference between multiplication and addition.  Working generically "harder" doesn't fix bugs (though fixing bugs does require work). 

Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.

You stop thinking of people as "stupid."

Tags like "stupid," "bad at ____", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why."  Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity.  Anyone who never understood limits will flunk calculus.  It's not you, it's the bug.

This also applies to "lazy."  Lazy just means "you're not meeting your obligations and I don't know why."  If it turns out that you've been missing appointments because you don't keep a calendar, then you're not intrinsically "lazy," you were just executing the wrong procedure.  And suddenly you stop wanting to call the person "lazy" when it makes more sense to say they need organizational tools.

"Lazy" and "stupid" and "bad at ____" are terms about the map, not the territory.  Once you understand what causes mistakes, those terms are far less informative than actually describing what's happening. [...]

As a matter of self-improvement, I think it can make sense not to think in terms of "getting better" ("better at piano", "better at math," "better at organizing my time").  How are you going to get better until you figure out what's wrong with what you're already doing?  It's really more an exploratory process -- where is the bug, and what can be done to dislodge it? 
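
To make the error-model vs. bug-model contrast concrete, here's a tiny Python sketch (my own illustration, not from the essay; the quiz and the two "students" are hypothetical):

    import random

    QUESTIONS = [(a, b) for a in range(1, 10) for b in range(1, 10)]  # single-digit multiplication quiz

    def error_model_student(a, b, slip_rate=0.05):
        # Error model: knows the material, but makes independent random slips.
        return a * b + 1 if random.random() < slip_rate else a * b

    def bug_model_student(a, b):
        # Bug model: one systematic misconception (confuses multiplication with addition).
        return a + b

    def accuracy(student):
        return sum(student(a, b) == a * b for a, b in QUESTIONS) / len(QUESTIONS)

    print(f"error model: {accuracy(error_model_student):.0%}")  # ~95%; misses scattered at random
    print(f"bug model:   {accuracy(bug_model_student):.0%}")    # ~1%; wrong almost everywhere

Grinding down the slip rate only ever buys a marginal improvement; finding and removing the bug changes the whole curve at once.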

Another essay that I like is Will Wilkinson's Public Policy After Utopia; it's technically more "politics" than "life advice", but to the extent that people want to devote their lives to pushing society in a better direction, it seems important:

Many political philosophers, and most adherents of radical political ideologies, tend to think that an ideal vision of the best social, economic, and political system serves a useful and necessary orienting function. The idea is that reformers need to know what to aim at if they are to make steady incremental progress toward the maximally good and just society. If you don’t know where you’re headed—if you don’t know what utopia looks like—how are you supposed to know which steps to take next?

The idea that a vision of an ideal society can serve as a moral and strategic star to steer by is both intuitive and appealing. But it turns out to be wrong.  [...]

The fact that all our evidence about how social systems actually work comes from formerly or presently existing systems is a huge problem for anyone committed to a radically revisionary ideal of the morally best society. The further a possible system is from a historical system, and thus from our base of evidence about how social systems function, the more likely we are to be mistaken about how it would work if it were realized. And the more likely we are to be mistaken about how it would actually work, the more likely we are to be mistaken that it is more free, or more equal, or more socially just than other systems, possible or actual.  

Indeed, there’s basically no way to rationally justify the belief that, say, “anarcho-capitalism” ranks better in terms of libertarian freedom than “Canada 2017,” or the belief that “economic democracy” ranks better in terms of socialist equality than “Canada 2017.” [...]

You may think you can imagine how anarcho-capitalism or economic democracy would work, but you can’t.  You’re really just guessing—extrapolating way beyond your evidence. You can’t just stipulate that it works the way you want it to work. Rationally speaking, you probably shouldn’t even suspect that your favorite system comes out better than an actual system. Rationally speaking, your favorite probably shouldn’t be your favorite. Utopia is a guess. [...]

... expert predictions about the likely effects of changing a single policy tend to be pretty bad. I’ll use myself as an example. I’ve followed the academic literature about the minimum wage for almost twenty years, and I’m an experienced, professional policy analyst, so I’ve got a weak claim to expertise in the subject. What do I have to show for that? Not much, really. I’ve got strong intuitions about the likely effects of raising minimum wages in various contexts. But all I really know is that the context matters a great deal, that a lot of interrelated factors affect the dynamics of low-wage labor markets, and that I can’t say in advance which margin will adjust when the wage floor is raised. Indeed, whether we should expect increases in the minimum wage to hurt or help low-wage workers is a question Nobel Prize-winning economists disagree about. Labor markets are complicated! Well, the comprehensive political economies of nation-states are vastly more complicated. And that means that our predictions about the outcome of radically changing the entire system are unlikely to be better than random. [...]

The death of ideal theory implies a non-ideological, empirical, comparative approach to political analysis. That doesn’t mean giving up on, say, the value of freedom. I think I’m more libertarian—more committed to the value of liberty—than I’ve ever been. But that doesn’t mean being committed to an eschatology of liberty, a picture of an ideally free society, or a libertarian utopia. We’re not in a position to know what that looks like. The best we can do is to go ahead and try to rank social systems in terms of the values we care about, and then see what we can learn. The Cato Institute’s Human Freedom Index is one such useful measurement attempt. What do we see? [...]

Every highlighted country is some version of the liberal-democratic capitalist welfare state. Evidently, this general regime type is good for freedom. Indeed, it is likely the best we have ever done in terms of freedom.

Moreover, Denmark (#5), Finland (#9), and the Netherlands (#10) are among the world’s “biggest” governments, in terms of government spending as a percentage of GDP. The “economic freedom” side of the index, which embodies a distinctly libertarian conception of economic liberty, hurts their ratings pretty significantly. Still, according to a libertarian Human Freedom Index, some of the freest places on Earth have some of the “biggest” governments. That’s unexpected. [...]

Though libertarianism is of personal interest to me, I want to emphasize again that my larger point has nothing to do with libertarianism. The same lesson applies to alt-right ethno-nationalists dazzled by a fanciful picture of a homogenous, solidaristic ethno-state. The same lesson applies to progressives and socialists in the grip of utopian pictures of egalitarian social justice. Of course, nobody knows what an ideally equal society would look like. If we stick to the data we do have, and inspect the top ranks of the Social Progress Index, which is based on progressive assumptions about basic needs, the conditions for individual health, well-being, and opportunity, you’ll mostly find the same countries that populate the Freedom Index’s leaderboard. [...]

The overlap is striking. And this highlights some of the pathologies of ideal theory: irrational polarization and the narcissism of small differences. [...]

For me, the death of ideal theory has meant adopting a non-speculative, non-utopian perspective on freedom-enhancing institutions. If you know that you can’t know in advance what the freest social system will look like, you’re unlikely to see evidence that suggests that policy A (social insurance, e.g.) is freedom-enhancing, or that policy B (heroin legalization, e.g.) isn’t, as threats to your identity as a freedom lover. Uncertainty about the details of the freest feasible social scheme opens you up to looking at evidence in a genuinely curious, non-biased way. And it frees you from the anxiety that genuine experts, people with merited epistemic authority, will say things you don’t want to hear. This in turn frees you from the urge to wage quixotic campaigns against the authority of legitimate experts. You can start acting like a rational person! You can simply defer to the consensus of experts on empirical questions, or accept that you bear an extraordinary burden of proof when you disagree.

What we need are folks who are passionate about freedom, or social justice (or what have you) who actively seek solutions to domination and injustice, but who also don’t think they already know exactly what ideal liberation or social justice look like, and are therefore motivated to identify our real alternatives and to evaluate them objectively. The space of possibility is infinite, and it takes energy and enthusiasm to want to explore it.

I like the sentiment but I rarely go back to read things I already read. Instead I seek out new things that say similar things in different ways.

A great example of this in my life comes from Zen books. Most of them say the same thing (there's a half joke that there are only three dharma talks a teacher can give), but in different ways. Sometimes the way it's said and where I am connect, so it's proven for me a good strategy to keep hearing similar teaching in new ways.

I would love to see "life advice" that anyone here found valuable coming from people who are far from lw/startup/science/math/programming fields or from any "out" enough outgroup.

Seconded.

More quotes from the Sam Altman essay.

It’s useful to focus on adding another zero to whatever you define as your success metric—money, status, impact on the world, or whatever. I am willing to take as much time as needed between projects to find my next thing. But I always want it to be a project that, if successful, will make the rest of my career look like a footnote.

Most people get bogged down in linear opportunities. Be willing to let small opportunities go to focus on potential step changes.

(I removed the HTML forms, since they were breaking on LW, but happy to add anything back in that would be equivalent)

Oops, thanks! Added a link to the signup form on my site. (And fixed my RSS rendering to not do forms like that in the future.)

I wonder to what extent the closed vs. open door dichotomy is true. In Deep Work, Cal Newport develops the point that rather than being 100% open-door (interruptible) or 100% closed-door (uninterruptible), we should mix the two. Knowledge workers obviously need to get feedback on their work and contribute feedback to the work of others. You can also clearly keep your door open when doing work that doesn't require that much focus while also allowing yourself to get deep into something by shutting yourself off from the world at times. And you might miss out on really important focused time if you always keep your door open.

A book that goes very much in that direction with small but impactful chapters is:

“Chop Wood Carry Water”

https://www.amazon.com/Chop-Wood-Carry-Water-Becoming-dp-153698440X/dp/153698440X

I really enjoyed it (and now that I'm reminded of it, I'll have another look), and I guess you might like it too.

(I found it via https://fs.blog/reading-2019/ and the very good review got me interested in it.)

I love this one from the introduction of "Algorithms to Live By":

Even where perfect algorithms haven't been found, however, the battle between generations of computer scientists and the most intractable real-world problems has yielded a series of insights. These hard-won precepts are at odds with our intuitions about rationality, and they don't sound anything like the narrow prescriptions of a mathematician trying to force the world into clean, formal lines. 

They say: Don't always consider all your options. Don't necessarily go for the outcome that seems best every time. Make a mess on occasion. Travel light. Let things wait. Trust your instincts and don't think too long. Relax. Toss a coin. Forgive, but don't forget. To thine own self be true.