A question that used to puzzle me is “Why can people be so much better at doing a thing for fun, or to help their friends and family, than they are at doing the exact same thing as a job?”

I’ve seen it in myself and I’ve seen it in others. People can be hugely more productive, creative, intelligent, and efficient on just-for-fun stuff than they are at work.

Maybe it’s something around coercion? But it happens to people even when they choose their work and have no direct supervisor, as when a prolific hobbyist writer suddenly gets writer’s block as soon as he goes pro.

I think it has a very mundane explanation: it’s always more expensive to have to meet a specific commitment than merely to do something valuable.

If I feel like writing sometimes and not other times, then if writing is my hobby I’ll write when I feel like it, and my output per hour of writing will be fairly high. Even within “writing”, if my interests vary, and I write about whatever I feel like, I can take full advantage of every writing hour.  By contrast, if I’ve committed to write a specific piece by a specific deadline, I have to do it whether or not I’m in the mood for it, and that means I’ll probably be less efficient, spend more time dithering, and I’ll demand more external compensation in exchange for my inconvenience.

The stuff I write for fun may be valuable! And if you simply divide the value I produce by my hours of labor or the amount I need to be paid, I’m hugely more efficient in my free time than in my paid time! But I can’t just trivially “up my efficiency” in my paid time; reliability itself has a cost.

The costs of reliability are often invisible, but they can be very important. The cost (in time, office supplies, and software tools) of tracking and documenting your work so that you can deliver it on time. The cost (in labor and equipment) of quality assurance testing. The opportunity cost of creating simpler and less ambitious things so that you can deliver them on time and free of defects.

Reliability becomes more important with scale. Large organizations have more rules and procedures than small ones, and this is rational. Accordingly, they pay more costs in reliability.

One reason is that the attack surface for errors grows with the number of individuals involved. For instance, large organizations often have rules against downloading software onto company computers without permission.  The chance that any one person downloads malicious software that seriously harms the company is small, but the chance that at least one person does rises with the number of employees.
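For a rough sense of the numbers (made up for illustration, not from any real incident data): if each of n employees independently has a small probability p of downloading something harmful, the chance that at least one does is 1 − (1 − p)^n, which climbs quickly with headcount.

```python
# Illustrative only: assumed per-employee risk p, assumed company sizes n.
p = 0.001  # assumed chance per employee per year of a harmful download

for n in (10, 100, 1_000, 10_000):
    at_least_one = 1 - (1 - p) ** n  # P(at least one incident among n employees)
    print(f"{n:>6} employees: {at_least_one:.1%} chance of at least one incident")
```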

Another reason is that coordination becomes more important with more people. If a project depends on many people cooperating, then you as an individual aren’t simply trying to do the best thing, but rather the best thing that is also understandable and predictable and capable of achieving buy-in from others.

Finally, large institutions are more tempting to attackers than small ones, since they have more value to capture. For instance, large companies are more likely than private individuals to be targeted by lawsuits or public outcry, so it’s strategically correct for them to spend more on defensive measures like legal compliance procedures or professional PR.

All of these types of defensive or preventative activity reduce efficiency — you can do less in a given timeframe with a given budget.  Large institutions, even when doing everything right, acquire inefficiencies they didn’t have when small, because they have higher reliability requirements.

Of course, there are also economies of scale that increase efficiency. There are fixed expenses that only large institutions can afford, that make marginal production cheaper. There are ways to aggregate many risky components so that the whole is more robust than any one part, e.g. in distributed computation, compressed sensing, or simply averaging.  Optimal firm size is a balance.
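As a toy illustration of the aggregation point above (the numbers are assumed for the example): take components that each work 90% of the time and accept the majority answer from five of them, and the ensemble comes out more reliable than any single part.

```python
# Toy aggregation example with assumed numbers: majority vote over n
# independent components, each working with probability p.
from math import comb

p, n = 0.9, 5
majority_ok = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k)  # probability exactly k of n work
    for k in range(n // 2 + 1, n + 1)       # k = 3, 4, 5: a working majority
)
print(f"single component: {p:.1%}; majority of {n}: {majority_ok:.2%}")
```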

This framework tells us when we ought to find it possible to get better-than-default efficiency easily, i.e. without any clever tricks, just by accepting different tradeoffs than others do. For example:

1.) People given an open-ended mandate to do what they like can be far more efficient than people working to spec…at the cost of unpredictable output with no guarantees of getting what you need when you need it. (See: academic research.)

2.) Things that come with fewer guarantees of reliable performance can be cheaper in the average use case…at the cost of completely letting you down when they occasionally fail. (See: prototype or beta-version technology.)

3.) Activities within highly cooperative social environments can be more efficient…at the cost of not scaling to more adversarial environments where you have to spend more resources on defending against attacks. (See: Eternal September.)

4.) Having an “opportunistic” policy of taking whatever opportunities come along (for instance, hanging out in a public place and chatting with whoever comes along and seems interesting, vs. scheduling appointments) allows you to make use of time that others have to spend doing “nothing” … at the cost of never being able to commit to activities that need time blocked out in advance.

5.) Sharing high-fixed-cost items (like cars) can be more efficient than owning…at the cost of not having a guarantee that they’ll always be available when you need them.

In general, you can get greater efficiency for things you don’t absolutely need than for things you do; if something is merely nice-to-have, you can handle it if it occasionally fails, and your average cost-benefit ratio can be very good indeed. But this doesn’t mean you can easily copy the efficiency of luxuries in the production of necessities.

(This suggests that “toys” are a good place to look for innovation. Frivolous, optional goods are where we should expect it to be most affordable to experiment, all else being equal; and we should expect technologies that first succeed in “toy” domains to expand to “important, life-and-death” domains later.)

 


Complementary thought: the ability to accept tradeoffs against reliability increases with slack. This is one possible way to "monetize" slack, i.e. turn it into value. Conversely, if slack is held to absorb shocks from unreliability trade-offs, then to ask "how valuable is this slack?" we must ask "how much excess value do we get from this trade-off?"

Continuing along those lines: (unreliability + slack) is a strategy with increasing returns to scale. Suppose 1 unreliability tradeoff requires keeping 1 unit of slack in reserve, in case of failure. Then N independent tradeoffs, which each require a similar form of slack in reserve, will require less than N units of reserve slack, because it's highly unlikely for everything to fail at once. (It should require around sqrt(N) units of slack, assuming independent failures each with similar slack requirements.)
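A quick sanity-check simulation (mine, with made-up numbers: a per-trade-off failure probability and a 99% coverage target). Strictly, the sqrt(N) scaling describes the buffer above the expected number of simultaneous failures, but the headline point holds: the total reserve needed grows far more slowly than N.

```python
# Sanity check with assumed numbers: how much slack covers 99% of periods
# when n independent trade-offs each fail with probability p?
import numpy as np

rng = np.random.default_rng(0)
p = 0.05  # assumed per-period failure chance of each trade-off

for n in (10, 100, 1_000, 10_000):
    failures = rng.binomial(n, p, size=100_000)  # simultaneous failures per period
    reserve = np.quantile(failures, 0.99)        # slack that suffices 99% of the time
    buffer = reserve - n * p                     # reserve above the expected failures
    print(f"n={n:>6}: reserve={reserve:6.0f} (far below n), "
          f"buffer/sqrt(n)={buffer / np.sqrt(n):.2f}")
```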

That suggests that individual people should either:

  • specialize in having lots of slack and using lots of unreliable opportunities (so they can accept N unreliability trade-offs with only sqrt(N) units of slack), or
  • specialize in having little slack and making everything in their life highly reliable (because a relatively large unit of slack would need to be set aside for just one unreliability trade-off).

I've heard it said (and am inclined to believe) that contemporary firms maintain less slack than their analogs did in the past (more just-in-time purchasing, etc.). Under this model, I guess that means they should need to maintain greater reliability, and to pay greater "costs of reliability"?

My model here is that new technology has allowed firms to maintain mostly-similar levels of reliability with less slack. Inventory seems like a central example: faster, more decentralized logistics enable low- or even zero-inventory models, while still ensuring that the product is there when the buyer wants it. (Concretely: within seconds of a customer buying X in a store, someone in a warehouse can be alerted to pack one more X in the next truck headed for that store, which makes restocking dramatically faster than it used to be.) There are still reliability costs sometimes, e.g. if the logistics chain falls apart entirely due to a disaster, but the main point is that the whole slack/reliability trade-off curve has shifted.

My thinking on this is probably biased by my work helping people with akrasia, procrastination, etc., but in my experience the difference between work and play seems to come from how our brains process costs vs. rewards. When you are doing something as a hobby, the reward is immediate and the cost or threat is low, because you want to do it and nothing bad happens if it doesn't work out.

In contrast, when you have to do a specific thing at a specific time (e.g. because it's for work), then there is a definite cost to insufficient performance, but a sufficient performance is merely the status quo... meaning there's no perceived reward.

In addition, exceptional performance, if repeated often enough, will raise the status quo, making your situation objectively worse!

Under such conditions, engaging with the work feels risky or costly in a way that a hobby project does not.

I personally suffered from issues with this for a very long time, which led to me blogging and then becoming a coach helping people with similar issues... and then getting stuck in akrasia for many years because I'd turned my exciting new hobby into an unmotivating profession, and didn't notice the meta issue. :-)

Solving this type of problem is simple in principle: eliminate the threat-perception, and reset your perception of the "status quo". But the devil is in the details because your brain wants to add any increased rewards into a new status quo, and anything that threatens to lower that standard tends to get viewed as a threat. (Including, paradoxically, any discussion of intentionally lowering one's standards or demands for reward!)

There are techniques that address these things, but some ongoing management is required; otherwise within a few months I tend to drift back to being creatively blocked or unmotivated in relation to paid work or work that's aimed at getting paid.

This is a very important point to have intuitively integrated into one's model, and I charge a huge premium to activities that require this kind of reliability. I hope it makes the cut.

I also note that someone needs to write The Costs of Unreliability and I authorize reminding me in 3 months that I need to do this.

Did you ever write this? I can't find it here or on your blog. If not, I think it would be a good post to write.

I don't know whether it was this post, or maybe just a bunch of things I learned while trying to build LessWrong, but this feels like it has become a pretty important part of my model of how organizations work, and also what kind of things I pay attention to in my personal development. 

Some additional consequences of things that I believe that feel like they extend on this post: 

  • Automating is often valuable because it frequently replaces tasks whose high cost came from having to execute them reliably
  • I am very hesitant to start projects in my free time that come with expectations of reliability. This also has had large effects on trying to get people to organize regular meetups, because those often have a large reliability cost in addition to just the cost of organizing them. 

I do think this post is missing a bunch of important things. In particular, there is a dimension of reliability that often reduces cost. A concrete example might be a regular study group, or a set of habits you have for practicing something. They feel cognitively close to me, but in those cases my guess is that by forcing me to perform a task reliably, I am increasing my efficiency, not decreasing it. I have some initial pointers to why that is the case, but it feels to me like this post currently has a hole in it without at least trying to point to this phenomenon. 

This feels like a crisp articulation of how to think about productivity and reliability, which are pretty central things.

My understanding of the value of the practices of large organizations progressed a lot in 2019. This post is one of the causes.

One thing this reminds me of is that management practices vary widely across the globe, and there are 'good' management practices that make firms a great deal more productive. None of the described management practices talk about reliability per se, but they do talk about incentivizing workers to meet targets and removing poor performers.

I'm currently musing about and would be very interested in seeing a dive into the interplay between this dynamic and the dynamic of "necessity is the mother of invention".

The scale of reliability also goes beyond daily use. Most days you don't need your fire insurance, but the day you do, you really need it. There was an interesting TED talk about how, from a Western perspective, medical insurance for the price of a daily coffee seemed like an easy sell, but people in a very different environment seldom bought it. In an environment where insurance companies look for excuses not to pay out, cover only part of the cost, or restrict how healthcare can be accessed, it was far from certain that you would be fine if you had the insurance and got sick. If you choose the daily coffee, at least you won't be cheated out of it. I guess this is more about the value of reliability than the cost.

I think you did a good job of answering your question when you hit on the word 'coercion,' and I'll add another one to go with -- 'compulsion.' Are you being coerced by someone, or do you yourself feel compelled? This, in my opinion, is going to define how you apply yourself to the work and may have a bearing on the quality.