Follow-up to: Inadequacy and Modesty

I am now going to introduce some concepts that lack established names in the economics literature—though I don’t believe that any of the basic ideas are new to economics.

First, I want to distinguish between the standard economic concept of efficiency (as in efficient pricing) and the related but distinct concepts of inexploitability and adequacy, which are what usually matter in real life.



Depending on the strength of your filter bubble, you may have met people who become angry when they hear the phrase “efficient markets,” taking the expression to mean that hedge fund managers are particularly wise, or that markets are particularly just.1

Part of where this interpretation appears to be coming from is a misconception that market prices reflect a judgment on anyone’s part about what price would be “best”—fairest, say, or kindest.

In a pre-market economy, when you offer somebody fifty carrots for a roasted antelope leg, your offer says something about how impressed you are with their work hunting down the antelope and how much reward you think that deserves from you. If they’ve dealt generously with you in the past, perhaps you ought to offer them more. This is the only instinctive notion people start with for what a price could mean: a personal interaction between Alice and Bob reflecting past friendships and a balance of social judgments.

In contrast, the economic notion of a market price is that for every loaf of bread bought, there is a loaf of bread sold; and therefore actual demand and actual supply are always equal. The market price is the input that makes the decreasing curve for demand as a function of price meet the increasing curve for supply as a function of price. This price is an “is” statement rather than an “ought” statement, an observation and not a wish.

In particular, an efficient market, from an economist’s perspective, is just one whose average price movement can’t be predicted by you.

If that way of putting it sounds odd, consider an analogy. Suppose you asked a well-designed superintelligent AI system to estimate how many hydrogen atoms are in the Sun. You don’t expect the superintelligence to produce an answer that is exactly right down to the last atom, because this would require measuring the mass of the Sun more finely than any measuring instrument you expect it to possess. At the same time, it would be very odd for you to say, “Well, I think the superintelligence will underestimate the number of atoms in the Sun by 10%, because hydrogen atoms are very light and the AI system might not take that into account.” Yes, hydrogen atoms are light, but the AI system knows that too. Any reason you can devise for how a superintelligence could underestimate the amount of hydrogen in the Sun is a possibility that the superintelligence can also see and take into account. So while you don’t expect the system to get the answer exactly right, you don’t expect that you yourself will be able to predict the average value of the error—to predict that the system will underestimate the amount by 10%, for example.

This is the property that an economist thinks an “efficient” price has. An efficient price can update sharply: the company can do worse or better than expected, and the stock can move sharply up or down on the news. In some cases, you can rationally expect volatility; you can predict that good news might arrive tomorrow and make the stock go up, balanced by a counter-possibility that the news will fail to arrive and the stock will go down. You could think the stock is 30% likely to rise by $10 and 20% likely to drop by $15 and 50% likely to stay the same. But you can’t predict in advance the average value by which the price will change, which is what it would take to make an expected profit by buying the stock or short-selling it.2
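The arithmetic behind that last claim is worth making concrete. A minimal sketch, using the hypothetical probabilities above (exact fractions avoid floating-point dust): the expected price change nets out to zero even though the expected volatility is substantial.

```python
from fractions import Fraction

# Hypothetical beliefs about tomorrow's price move for one stock
# (the probabilities from the passage above): 30% chance it rises $10,
# 20% chance it drops $15, 50% chance it stays flat.
outcomes = [(Fraction(30, 100), 10), (Fraction(20, 100), -15), (Fraction(50, 100), 0)]

expected_change = sum(p * change for p, change in outcomes)
variance = sum(p * (change - expected_change) ** 2 for p, change in outcomes)

print(expected_change)         # 0 -- no expected profit from buying or short-selling
print(float(variance) ** 0.5)  # ~8.66 -- but plenty of expected volatility
```

A price can therefore be "efficient" while still being expected to move a lot; what it can't do is move in a direction you can predict on average.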

When an economist says that a market price is efficient over a two-year time horizon, they mean: “The current price that balances the supply and demand of this financial instrument well reflects all public information affecting a boundedly rational estimate of the future supply-demand balancing point of this financial instrument in two years.” They’re relating the present intersection of these two curves to an idealized cognitive estimate of the curves’ future intersection.

But this is a long sentence in the language of a hunter-gatherer. If somebody doesn’t have all the terms of that sentence precompiled in their head, then they’re likely to interpret the sentence in the idiom of ordinary human life and ordinary human relationships.

People have an innate understanding of “true” in the sense of a map that reflects the territory, and they can imagine processes that produce good maps; but probability and microeconomics are less intuitive.3 What people hear when you talk about “efficient prices” is that a cold-blooded machine has determined that some people ought to be paid $9/hour. And they hear the economist saying nice things about the machine, praising it as “efficient,” implying that the machine is right about this $9/hour price being good for society, that this price well reflects what someone’s efforts are justly worth. They hear you agreeing with this pitiless machine’s judgment about how the intuitive web of obligations and incentives and reputation ought properly to cash out for a human interaction.

And in the domain of stocks, when stock prices are observed to swing widely, this intuitive view says that the market can’t be that smart after all. For if it were smart, would it keep turning out to be “wrong” and need to change its mind?

I once read a rather clueless magazine article that made fun of a political prediction market on the basis that when a new poll came out, the price of the prediction market moved. “It just tracks the polls!” the author proclaimed. But the point of the prediction market is not that it knows some fixed, objective chance with high accuracy. The point of a prediction market is that it summarizes all the information available to the market participants. If the poll moved prices, then the poll was new information that the market thought was important, and the market updated its belief, and this is just the way things should be.

In a liquid market, “price moves whose average direction you can predict in advance” correspond to both “places you can make a profit” and “places where you know better than the market.” A market that knows everything you know is a market where prices are “efficient” in the conventional economic sense—one where you can’t predict the net direction in which the price will change.

This means that the efficiency of a market is assessed relative to your own intelligence, which is fine. Indeed, it’s possible that the concept should be called “relative efficiency.” Yes, a superintelligence might be able to predict price trends that no modern human hedge fund manager could; but economists don’t think that today’s markets are efficient relative to a superintelligence.

Today’s markets may not be efficient relative to the smartest hedge fund managers, or efficient relative to corporate insiders with secret knowledge that hasn’t yet leaked. But the stock markets are efficient relative to you, and to me, and to your Uncle Albert who thinks he tripled his money through his incredible stock-picking acumen.



Not everything that involves a financial price is efficient. There was recently a startup called Color Labs whose putative purpose was to let people share photos with their friends and see other photos that had been taken nearby. They closed $41 million in funding, including $20 million from the prestigious Sequoia Capital.

When the news of their funding broke, practically everyone on the online Hacker News forum was rolling their eyes and predicting failure. It seemed like a nitwit me-too idea to me too. And then, yes, Color Labs failed and the 20-person team sold themselves to Apple for $7 million and the venture capitalists didn’t make back their money. And yes, it sounds to me like the prestigious Sequoia Capital bought into the wrong startup.

If that’s all true, it’s not a coincidence that neither I nor any of the other onlookers could make money on our advance prediction. The startup equity market was inefficient (a price underwent a predictable decline), but it wasn’t exploitable.4 There was no way to make a profit just by predicting that Sequoia had overpaid for the stock it bought. Because, at least as of 2017, the market lacks a certain type and direction of liquidity: you can’t short-sell startup equity.5

What about houses? Millions of residential houses change hands every year, and they cost more than stock shares. If we expect the stock market to be well-priced, shouldn’t we expect the same of houses?

The answer is “no,” because you can’t short-sell a house. Sure, there are some ways to bet against aggregate housing markets, like shorting real estate investment trusts or home manufacturers. But in the end, hedge fund managers can’t make a synthetic financial instrument that behaves just like the house on 6702 West St. and sell it into the same housing market frequented by consumers like you. Which is why you might do very well to think for yourself about whether the price seems sensible to you before buying a house: because you might know better than the market price, even as a non-specialist relying only on publicly available information.

Let’s imagine there are 100,000 houses in Boomville, of which 10,000 have been for sale in the last year or so. Suppose there are 20,000 fools who think that housing prices in Boomville can only go up, and 10,000 rational hedge fund managers who think that the shale-oil business may collapse and lead to a predictable decline in Boomville house prices. There’s no way for the hedge fund managers to short Boomville house prices—not in a way that satisfies the optimistic demand of 20,000 fools for Boomville houses, not in a way that causes house prices to actually decline. The 20,000 fools just bid on the 10,000 available houses until the skyrocketing price of the houses makes 10,000 of the fools give up.

Some smarter agents might decline to buy, and so somewhat reduce demand. But the smarter agents can’t actually visit Boomville and make hundreds of thousands of dollars off of the overpriced houses. The price is too high and will predictably decline, relative to public information, but there’s no way you can make a profit on knowing that. An individual who owns an existing house can exploit the inefficiency by selling that house, but rational market actors can’t crowd around the inefficiency and exploit it until it’s all gone.

Whereas a predictably underpriced house, put on the market for predictably much less than its future price, would be an asset that any of a hundred thousand rational investors could come in and snap up.

So a frothy housing market may see many overpriced houses, but few underpriced ones.

Thus it will be easy to lose money in this market by buying stupidly, and much harder to make money by buying cleverly. The market prices will be inefficient—in a certain sense stupid—but they will not be exploitable.

In contrast, in a thickly traded market where it is easy to short an overpriced asset, prices will be efficient in both directions, and any day is as good a day to buy as any other. You may end up exposed to excess volatility (an asset with a 50% chance of doubling and a 50% chance of going bankrupt, for example), but you won’t actually have bought anything overpriced—if it were predictably overpriced, it would have been short-sold.6

We can see the notion of an inexploitable market as generalizing the notion of an efficient market as follows: in both cases, there’s no free energy inside the system. In both markets, there’s a horde of hungry organisms moving around trying to eat up all the free energy. In the efficient market, every predictable price change corresponds to free energy (easy money) and so the equilibrium where hungry organisms have eaten all the free energy corresponds to an equilibrium of no predictable price changes. In a merely inexploitable market, there are predictable price changes that don’t correspond to free energy, like an overpriced house that will decline later, and so the no-free-energy equilibrium can still involve predictable price changes.7

Our ability to say, within the context of the general theory of “efficient markets,” that houses in Boomville may still be overpriced—and, additionally, to say that they are much less likely to be underpriced—is what makes this style of reasoning powerful. It doesn’t just say, “Prices are usually right when lots of money is flowing.” It gives us detailed conditions for when we should and shouldn’t expect efficiency. There’s an underlying logic about powerfully smart organisms, any single one of which can consume free energy if it is available in worthwhile quantities, in a way that produces a global equilibrium of no free energy; and if one of the premises is invalidated, we get a different prediction.



At one point during the 2016 presidential election, the PredictIt prediction market—the only one legally open to US citizens (and only US citizens)—had Hillary Clinton at a 60% probability of winning the general election. The bigger, international prediction market BetFair had Clinton at 80% at that time.

So I looked into buying Clinton shares on PredictIt—but discovered, alas, that PredictIt charged a 10% fee on profits, a 5% fee on withdrawals, had an $850 limit per contract bet… and on top of all that, I’d also have to pay 28% federal and 9.3% state income taxes on any gains. Which, in sum, meant I wouldn’t be getting much more than $30 in expected return for the time and hassle of buying the contracts.

Oh, if only PredictIt didn’t charge that 10% fee on profits, that 5% fee on withdrawals! If only they didn’t have the $850 limit! If only the US didn’t have such high income taxes, and didn’t limit participation in overseas prediction markets! I could have bought Clinton shares at 60 cents on PredictIt and Trump shares at 20 cents on BetFair, winning a dollar either way and getting a near-guaranteed 25% return until the prices were in line! Curse those silly rules, preventing me from picking up that free money!
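Setting those frictions aside for a moment, the arbitrage arithmetic itself is easy to check. A sketch of the frictionless version (hypothetical in that it treats the election as a two-way race, as the passage does, and ignores all fees, limits, and taxes):

```python
# Cross-market arbitrage sketch, ignoring fees, betting limits, and taxes.
clinton_price_predictit = 0.60  # "Clinton wins" contract: pays $1 if she wins
trump_price_betfair = 0.20      # "Trump wins" contract: pays $1 if he wins

cost = clinton_price_predictit + trump_price_betfair  # $0.80 staked per pair
payout = 1.00                   # exactly one of the two contracts pays out
profit = payout - cost          # $0.20 locked in either way
roi = profit / cost

print(f"{roi:.0%}")  # 25% -- the near-guaranteed return, before frictions
```

The frictions are the whole story, of course: the 10% and 5% fees, the $850 cap, and the taxes eat essentially all of that 25%.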

Does that complaint sound reasonable to you?

If so, then you haven’t yet fully internalized the notion of an inefficient-but-inexploitable market.

If the taxes, fees, and betting limits hadn’t been there, the PredictIt and BetFair prices would have been the same.



Suppose that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.

Would you expect that, supposing this were true, there would already exist a journal report somewhere on it?

Would you expect that, supposing this were true, it would already be widely discussed (or at least rumored) on the Internet?

Would you expect that, supposing this were true, doctors would already know about it and it would be on standard medical pages about Seasonal Affective Disorder?

And would you, failing to observe anything on the subject after a couple of hours of Googling, conclude that your civilization must have some unknown good reason why not everyone was doing this already?

To answer a question like this, we need an analysis not of the world’s efficiency or inexploitability but rather of its adequacy—whether all the low-hanging fruit have been plucked.

A duly modest skepticism, translated into the terms we’ve been using so far, might say something like this: “Around 7% of the population has severe Seasonal Affective Disorder, and another 20% or so has weak Seasonal Affective Disorder. Around 50% of tested cases respond to standard lightboxes. So if the intervention of stringing up a hundred LED bulbs actually worked, it could provide a major improvement to the lives of 3% of the US population, costing on the order of $1000 each (without economies of scale). Many of those 9 million US citizens would be rich enough to afford that as a treatment for major winter depression. If you could prove that your system worked, you could create a company to sell SAD-grade lighting systems and have a large market. So by postulating that you can cure SAD this way, you’re postulating a world in which there’s a huge quantity of metaphorical free energy—a big energy gradient that society hasn’t traversed. Therefore, I’m skeptical of this medical theory for more or less the same reason that I’m skeptical you can make money on the stock market: it postulates a $20 bill lying around that nobody has already picked up.”
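The market-sizing arithmetic in that skeptical argument is easy to reproduce. A sketch using the rough figures quoted above (all of them order-of-magnitude estimates, not data):

```python
us_population = 300_000_000           # rough US population
affected = us_population * 3 // 100   # ~3% with severe, lightbox-resistant SAD
cost_per_person = 1_000               # ~$1000 of LED bulbs each, no economies of scale

potential_market = affected * cost_per_person

print(affected)          # 9000000 -- the "9 million US citizens" above
print(potential_market)  # 9000000000 -- a ~$9 billion potential market
```

A nine-billion-dollar energy gradient that nobody has traversed is exactly the kind of "free energy" the modest view says shouldn't exist.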

So the distinction is:

  • Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.”

  • Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.”

  • Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”

Let’s say that within some slice through society, the obvious low-hanging fruit that save more than ten thousand lives for less than a hundred thousand dollars total have, in fact, been picked up. Then I propose the following terminology: let us say that that part of society is adequate at saving 10,000 lives for $100,000.

And if there’s a convincing case that this property does not hold, we’ll say this subsector is inadequate (at saving 10,000 lives for $100,000).

To see how an inadequate equilibrium might arise, let’s start by focusing on one tiny subfactor of the human system, namely academic research.

We’ll even further oversimplify our model of academia and pretend that research is a two-factor system containing academics and grantmakers, and that a project can only happen if there’s both a participating academic and a participating grantmaker.

We next suppose that in some academic field, there exists a population of researchers who are individually eager and collectively opportunistic for publications—papers accepted to journals, especially high-impact journal publications that constitute strong progress toward tenure. For any clearly visible opportunity to get a sufficiently large number of citations with a small enough amount of work, there are collectively enough academics in this field that somebody will snap up the opportunity. We could say, to make the example more precise, that the field is collectively opportunistic in 2 citations per workday—if there’s any clearly visible opportunity to do 40 days of work and get 80 citations, somebody in the field will go for it.

This level of opportunism might be much more than the average paper gets in citations per day of work. Maybe the average is more like 10 citations per year of work, and lots of researchers work for a year on a paper that ends up garnering only 3 citations. We’re not trying to ask about the average price of a citation; we’re trying to ask how cheap a citation has to be before somebody somewhere is virtually guaranteed to try for it.

But academic paper-writers are only half the equation; the other half is a population of grantmakers.

In this model, can we suppose for argument’s sake that grantmakers are motivated by the pure love of all sentient life, and yet we still end up with an academic system that is inadequate?

I might naively reply: “Sure. Let’s say that those selfish academics are collectively opportunistic at two citations per workday, and the blameless and benevolent grantmakers are collectively opportunistic at one quality-adjusted life-year (QALY) per $100.8 Then everything which produces one QALY per $100 and two citations per workday gets funded. Which means there could be an obvious, clearly visible project that would produce a thousand QALYs per dollar, and so long as it doesn’t produce enough citations, nobody will work on it. That’s what the model says, right?”

Ah, but this model has a fragile equilibrium of inadequacy. It only takes one researcher who is opportunistic in QALYs and willing to take a hit in citations to snatch up the biggest, lowest-hanging altruistic fruit if there’s a population of grantmakers eager to fund projects like that.

Assume the most altruistically neglected project produces 1,000 QALYs per dollar. If we add a single rational and altruistic researcher to this model, then they will work on that project, whereupon the equilibrium will be adequate at 1,000 QALYs per dollar. If there are two rational and altruistic researchers, the second one to arrive will start work on the next-most-neglected project—say, a project that has 500 QALYs/$ but wouldn’t garner enough citations for whatever reason—and then the field will be adequate at 500 QALYs/$. As this free energy gets eaten up (it’s tasty energy from the perspective of an altruist eager for QALYs), the whole field becomes less inadequate in the relevant respect.

But this assumes the grantmakers are eager to fund highly efficient QALY-increasing projects.

Suppose instead that the grantmakers are not cause-neutral scope-sensitive effective altruists assessing QALYs/$. Suppose that most grantmakers pursue, say, prestige per dollar. (Robin Hanson offers an elementary argument that most grantmaking to academia is about prestige.9 In any case, we can provisionally assume the prestige model for purposes of this toy example.)

From the perspective of most grantmakers, the ideal grant is one that gets their individual name, or their boss’s name, or their organization’s name, in newspapers around the world in close vicinity to phrases like “Stephen Hawking” or “Harvard professor.” Let’s say for the purpose of this thought experiment that the population of grantmakers is collectively opportunistic in 20 microHawkings per dollar, such that at least one of them will definitely jump on any clearly visible opportunity to affiliate themselves with Stephen Hawking for $50,000. Then at equilibrium, everything that provides at least 2 citations per workday and 20 microHawkings per dollar will get done.
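A minimal sketch of this two-threshold toy model (the thresholds are the hypothetical ones from the text; the example projects and all their numbers are invented for illustration):

```python
# A project happens only if it clears BOTH thresholds: the researchers'
# (citations per workday) and the grantmakers' (microHawkings per dollar).
CITATION_THRESHOLD = 2.0   # citations per workday
PRESTIGE_THRESHOLD = 20.0  # microHawkings per dollar

# (name, citations/workday, microHawkings/$, QALYs/$) -- invented projects
projects = [
    ("flashy-but-useless",  5.0, 80.0,    0.01),
    ("solid-mainstream",    2.5, 25.0,    1.0),
    ("huge-altruistic-win", 0.5,  1.0, 1000.0),  # great QALYs, few citations, little prestige
]

funded = [name for name, cites, prestige, qalys in projects
          if cites >= CITATION_THRESHOLD and prestige >= PRESTIGE_THRESHOLD]

print(funded)  # ['flashy-but-useless', 'solid-mainstream'] -- the 1000 QALYs/$ project goes undone
```

Note that QALYs per dollar never enters the filter at all: in this toy equilibrium, altruistic value is simply not one of the resources anyone is competing over.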

This doesn’t quite follow logically, because the stock market is far more efficient at matching bids between buyers and sellers than academia is at matching researchers to grantmakers. (It’s not like anyone in our civilization has put as much effort into rationalizing the academic matching process as, say, OkCupid has put into their software for hooking up dates. It’s not like anyone who did produce this public good would get paid more than they could have made as a Google programmer.)

But even if the argument is still missing some pieces, you can see the general shape of this style of analysis. If a piece of research will clearly visibly yield lots of citations with a reasonable amount of labor, and make the grantmakers on the committee look good for not too much money committed, then a researcher eager to do it can probably find a grantmaker eager to fund it.

But what if there’s some intervention which could save 100 QALYs/$, yet produces neither great citations nor great prestige? Then if we add a few altruistic researchers to the model, they probably won’t be able to find a grantmaker to fund it; and if we add a few altruistic grantmakers to the model, they probably won’t be able to find a qualified researcher to work on it.

One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.

Usually when we find trillion-dollar bills lying on the ground in real life, it’s a symptom of (1) a central-command bottleneck that nobody else is allowed to fix, as with the European Central Bank wrecking Europe, or (2) a system with enough moving parts that at least two parts are simultaneously broken, meaning that single actors cannot defy the system. To modify an old aphorism: usually, when things suck, it’s because they suck in a way that’s a Nash equilibrium.

In the same way that inefficient markets tend systematically to be inexploitable, grossly inadequate systems tend systematically to be unfixable by individual non-billionaires.

But then you can sometimes still insert a wedge for yourself, even if you can’t save the whole system. Something that’s systemically hard to fix for the whole planet is sometimes possible to fix in your own two-bedroom apartment. So inadequacy is even more important than exploitability on a day-to-day basis, because it’s inadequacy-generating situations that lead to low-hanging fruit large enough to be worthwhile at the individual level.



A critical analogy between an inadequate system and an efficient market is this: even systems that are horribly inadequate from our own perspective are still in a competitive equilibrium. There’s still an equilibrium of incentives, an equilibrium of supply and demand, an equilibrium where (in the central example above) all the researchers are vigorously competing for prestigious publications and using up all available grant money in the course of doing so. There’s no free energy anywhere in the system.

I’ve seen a number of novice rationalists committing what I shall term the Free Energy Fallacy, which is something along the lines of, “This system’s purpose is supposed to be to cook omelettes, and yet it produces terrible omelettes. So why don’t I use my amazing skills to cook some better omelettes and take over?”

And generally the answer is that maybe the system from your perspective is broken, but everyone within the system is intensely competing along other dimensions and you can’t keep up with that competition. They’re all chasing whatever things people in that system actually pursue—instead of the lost purposes they wistfully remember, but don’t have a chance to pursue because it would be career suicide. You won’t become competitive along those dimensions just by cooking better omelettes.

No researcher has any spare attention to give your improved omelette-cooking idea because they are already using all of their labor to try to get publications into high-impact journals; they have no free work hours.

The journals won’t take your omelette-cooking paper because they get lots of attempted submissions that they screen, for example, by looking for whether the researcher is from a high-prestige institution or whether the paper is written in a style that makes it look technically difficult. Being good at cooking omelettes doesn’t make you the best competitor at writing papers to appeal to prestigious journals—any publication slot would have to be given to you rather than someone else who is intensely trying to get it. Your good omelette technique might be a bonus, but only if you were already doing everything else right (which you’re not).

The grantmakers have no free money to give you to run your omelette-cooking experiment, because there are thousands of researchers competing for their money, and you are not competitive at convincing grantmaking committees that you’re a safe, reputable, prestigious option. Maybe they feel wistfully fond of the ideal of better omelettes, but it would be career suicide for them to give money to the wrong person because of that.

What inadequate systems and efficient markets have in common is the lack of any free energy in the equilibrium. We can see the equilibrium in both cases as defined by an absence of free energy. In an efficient market, any predictable price change corresponds to free energy, so thousands of hungry organisms trying to eat the free energy produce a lack of predictable price changes. In a system like academia, the competition for free energy may not correspond to anything good from your own standpoint, and as a result you may label the outcome “inadequate”; but there is still no free energy. Trying to feed within the system, or do anything within the system that uses a resource the other competing organisms want—money, publication space, prestige, attention—will generally be as hard for you as it is for any other organism.

Indeed, if the system gave priority to rewarding better performance along the most useful or socially beneficial dimensions over all competing ways of feeding, the system wouldn’t be inadequate in the first place. It’s like wishing PredictIt didn’t have fees and betting limits so that you could snap up those mispriced contracts.

In a way, it’s this very lack of free energy, this intense competition without space to draw a breath, that keeps the inadequacy around and makes it non-fragile. In the case of US science, there was a brief period after World War II where there was new funding coming in faster than universities could create new grad students, and scientists had a chance to pursue ideas that they liked. Today Malthus has reasserted himself, and it’s no longer generally feasible for people to achieve career success while going off and just pursuing the research they most enjoy, or just going off and pursuing the research with the largest altruistic benefits. For any actor to do the best thing from an altruistic standpoint, they’d need to ignore all of the system’s internal incentives pointing somewhere else, and there’s no free energy in the system to feed someone who does that.10



Since the idea of civilizational adequacy seems fairly useful and general, I initially wondered whether it might be a known idea (under some other name) in economics textbooks. But my friend Robin Hanson, a professional economist at an academic institution well-known for its economists, has written a lot of material that I see (from this theoretical perspective) as doing backwards reasoning from inadequacy to incentives.11 If there were a widespread economic notion of adequacy that he were invoking, or standard models of academic incentives and academic inadequacy, I would expect him to cite them.

Now look at the above paragraph. Can you spot the two implicit arguments from adequacy?

The first sentence says, “To the extent that this way of generalizing the notion of an efficient market is conceptually useful, we should expect the field of economics to have been adequate to have already explored it in papers, and adequate at the task of disseminating the resulting knowledge to the point where my economist friends would be familiar with it.”

The second and third sentences say, “If something like inadequacy analysis were already a well-known idea in economics, then I would expect my smart economist friend Robin Hanson to cite it. Even if Robin started out not knowing, I expect his other economist friends would tell him, or that one of the many economists reading his blog would comment on it. I expect the population of economists reading Robin’s blog and papers to be adequate to the task of telling Robin about an existing field here, if one already existed.”

Adequacy arguments are ubiquitous, and they’re much more common in everyday reasoning than arguments about efficiency or exploitability.



Returning to that business of stringing up 130 light bulbs around the house to treat my wife’s Seasonal Affective Disorder:

Before I started, I tried to Google whether anyone had given “put up a ton of high-quality lights” a shot as a treatment for resistant SAD, and didn’t find anything. Whereupon I shrugged, and started putting up LED bulbs.

Observing these choices of mine, we can infer that my inadequacy analysis was something like this: First, I did spend a fair amount of time Googling, and tried harder after the first search terms failed. This implies I started out thinking my civilization might have been adequate to think of the more light treatment and test it.

Then when I didn’t find anything on Google, I went ahead and tested the idea myself, at considerable expense. I didn’t assign such a high probability to “if this is a good idea, people will have tested it and propagated it to the point where I could find it” that in the absence of Google results, I could infer that the idea was bad.

I initially tried ordering the cheapest LED lights from Hong Kong that I could find on eBay. I didn’t feel like I could rely on the US lighting market to equalize prices with Hong Kong, and so I wasn’t confident that the premium price for US LED bulbs represented a quality difference. But when the cheap lights finally arrived from Hong Kong, they were dim, inefficient, and of visibly low color quality. So I decided to buy the more expensive US light bulbs for my next design iteration.

That is: I tried to save money based on a possible local inefficiency, but it turned out not to be inefficient, or at least not inefficient enough to be easily exploited by me. So I updated on that observation, discarded my previous belief, and changed my behavior.

Sometime after putting up the first 100 light bulbs or so, I was working on an earlier draft of this chapter and therefore reflecting more intensively on my process than I usually do. It occurred to me that sometimes the best academic content isn’t online and that it might not be expensive to test that. So I ordered a used $6 edited volume on Seasonal Affective Disorder, in case my Google-fu had failed me, hoping that a standard collection of papers would mention a light-intensity response curve that went past “standard lightbox.”

Well, I’ve flipped through that volume, and so far it doesn’t seem to contain any account of anyone having ever tried to cure resistant SAD using more light, either substantially higher-intensity or substantially higher-duration. I didn’t find any table of response curves to light levels above 10,000 lux, or any experiments with all-day artificial light levels comparable to my apartment’s roughly 2,000-lux illuminance.

I say this to emphasize that I didn’t lock myself into my attempted reasoning about adequacy when I realized it would cost $6 to perform a further observational check. And to be clear, ordering one book still isn’t a strong check. It wouldn’t surprise me in the least to learn that at least one researcher somewhere on Earth had tested the obvious thought of more light and published the response curve. But I’d also hesitate to bet at odds very far from 1:1 in either direction.

And the higher-intensity light therapy does seem to have mostly cured Brienne’s SAD. It wasn’t cheap, but it was cheaper than sending her to Chile for 4 months.

If more light really is a simple and effective treatment for a large percentage of otherwise resistant patients, is it truly plausible that no academic researcher out there has ever tried the first investigation that crossed my own mind? “Well, since the Sun itself clearly does work, let’s try more light throughout the whole house—never mind these dinky lightboxes or 30-minute exposure times—and then just keep adding more light until it frickin’ works.” Is that really so non-obvious? With so many people around the world suffering from severe or subclinical SAD that resists lightboxes, with whole countries in the far North or South where the syndrome is common, could that experiment really have never been tried in a formal research setting?

On my model of the world? Sure.

Am I running out and trying to get a SAD researcher interested in my anecdotal data? No, because when something like this doesn’t get done, there’s usually a deeper reason than “nobody thought of it.”

Even if nobody did think of it, that says something about a lack of incentives to be creative. If academics expected working solutions to SAD to be rewarded, there would already be a much larger body of literature on weird things researchers had tried, not just lightbox variant after lightbox variant. Inadequate systems tend to be systemically unfixable; I don’t know the exact details in this case, but there’s probably something somewhere.

So I don’t expect to get rich or famous, because I don’t expect the system to be that exploitable in dollars or esteem, even though it is exploitable in personalized SAD treatments. Empirically, lots of people want money and acclaim, and base their short- and long-term career decisions around their pursuit; so achieving them in unusually large quantities shouldn’t be as simple as having one bright idea. But there aren’t large groups of competent people visibly organizing their day-to-day lives around producing outside-the-box new lightbox alternatives with the same intensity we can observe people organizing their lives around paying the bills, winning prestige or the acclaim of peers, etc.

People presumably care about curing SAD—if they could effortlessly push a button to instantly cure SAD, they would do so—but there’s a big difference between “caring” and “caring enough to prioritize this over nearly everything else I care about,” and it’s the latter that would be needed for researchers to be willing to personally trade away non-small amounts of expected money or esteem for new treatment ideas.12

In the case of Japan’s monetary policy, it wasn’t a coincidence that I couldn’t get rich by understanding macroeconomics better than the Bank of Japan. Japanese asset markets shot up as soon as it became known that the Bank of Japan would create more money, without any need to wait and see—so it turns out that the markets also understood macroeconomics better than the Bank of Japan. Part of our civilization was being, in a certain sense, stupid: there were trillion-dollar bills lying around for the taking. But they weren’t trillion-dollar bills that just anyone could walk over and pick up.

From the standpoint of a single agent like myself, that ecology didn’t contain the particular kind of free energy that lots of other agents were competing to eat. I could be unusually right about macroeconomics compared to the PhD-bearing professionals at the Bank of Japan, but that weirdly low-hanging epistemic fruit wasn’t a low-hanging financial fruit; I couldn’t use the excess knowledge to easily get excess money deliverable the next day.

Where reward doesn’t follow success, or where not everyone can individually pick up the reward, institutions and countries and whole civilizations can fail at what is usually imagined to be their tasks. And then it is very much easier to do better in some dimensions than to profit in others.

To state all of this more precisely: Suppose there is some space of strategies that you’re competent enough to think up and execute on. Inexploitability has a single unit attached, like “$” or “effective SAD treatments,” and says that you can’t find a strategy in this space that knowably gets you much more of the resource in question than other agents. The kind of inexploitability I’m interested in typically arises when a large ecosystem of competing agents is genuinely trying to get the resource in question, and has access to strategies at least as good (for acquiring that resource) as the best options in your strategy space.

Inadequacy with respect to a strategy space has two units attached, like “effective SAD treatments / research hours” or “QALYs / $,” and says that there is some set of strategies a large ecosystem of agents could pursue that would convert the denominator unit into the numerator unit at some desired rate, but the agents are pursuing strategies that in fact result in a lower conversion rate. The kind of inadequacy I’m most interested in arises when many of the agents in the ecosystem would prefer that the conversion occur at the rate in question, but there’s some systemic blockage preventing this from happening.
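To make the two-unit notion concrete, here is a minimal sketch with invented numbers—the conversion rates and budget below are hypothetical illustrations, not figures from any real study:

```python
# Inadequacy as a conversion-rate gap, using invented numbers:
# suppose the best available strategy could convert dollars into QALYs
# at 1 QALY per $5,000, but the strategies the ecosystem actually
# pursues convert at only 1 QALY per $50,000.
achievable_rate = 1 / 5_000   # QALYs per dollar under the better strategy
actual_rate = 1 / 50_000      # QALYs per dollar under the equilibrium strategy
budget = 1_000_000            # dollars the ecosystem spends either way

forgone = budget * (achievable_rate - actual_rate)
print(round(forgone))  # 180 -- QALYs lost to the systemic blockage
```

The single-unit claim of inexploitability would be about the numerator alone (you can't knowably capture more QALYs, or dollars, than the other agents); the two-unit claim of inadequacy is about the gap between the achievable and actual conversion rates.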

Systems tend to be inexploitable with respect to the resources that large ecosystems of competent agents are trying their hardest to pursue, like fame and money, regardless of how adequate or inadequate they are. And where the agents aren’t adequate at converting fame, money, etc. into some other resource at a widely desired rate, it will often be due to some systemic blockage. Insofar as agents have overlapping goals, it will therefore often be harder than it looks to find real instances of exploitability, and harder than it looks to outperform an inadequate equilibrium. But more local goals tend to overlap less: there isn’t a large community of specialists specifically trying to improve my wife’s well-being.

The academic and medical system probably isn’t that easy to exploit in dollars or esteem, but so far it does look like maybe the system is exploitable in SAD innovations, due to being inadequate to the task of converting dollars, esteem, researcher hours, etc. into new SAD cures at a reasonable rate—inadequate, for example, at investigating some SAD cures that Randall Munroe would have considered obvious,13 or at doing the basic investigative experiments that I would have considered obvious. And when the world is like that, it’s possible to cure someone’s crippling SAD by thinking carefully about the problem yourself, even if your civilization doesn’t have a mainstream answer.



There’s a whole lot more to be said about how to think about inadequate systems: common conceptual tools include Nash equilibria, commons problems, asymmetrical information, principal-agent problems, and more. There’s also a whole lot more to be said about how not to think about inadequate systems.

In particular, if you relax your self-skepticism even slightly, it’s trivial to come up with an a priori inadequacy argument for just about anything. Talk about “efficient markets” in any less than stellar forum, and you’ll soon get half a dozen comments from people deriding the stupidity of hedge fund managers. And, yes, the financial system is broken in a lot of ways, but you still can’t double your money trading S&P 500 stocks. “Find one thing to deride, conclude inadequacy” is not a good rule.

At the same time, lots of real-world social systems do have inadequate equilibria and it is important to be able to understand that, especially when we have clear observational evidence that this is the case. A blanket distrust of inadequacy arguments won’t get us very far either.

This is one of those ideas where other cognitive skills are required to use it correctly, and you can shoot off your own foot by thinking wrongly. So if you’ve read this far, it’s probably a good idea to keep reading.



Next: Moloch's Toolbox part 1.

The full book will be available November 16th. You can pre-order the book, or sign up for notifications about new chapters and other developments.



  1. If the person gets angry and starts talking about lack of liquidity, rather than about the pitfalls of capitalism, then that is an entirely separate class of dispute. 

  2. You can often predict the likely direction of a move in such a market, even though on average your best guess for the change in price will always be 0. This is because the median market move will usually not equal the mean market move. For similar reasons, a rational agent can usually predict the direction of a future Bayesian update, even though the average value by which their probability changes should be 0. A high probability of a small update in the expected direction can be offset by a low probability of a larger update in the opposite direction. 
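Footnote 2’s claim—that the average value of a Bayesian update is zero even when the update’s direction is predictable—can be checked directly. This is a sketch with made-up probabilities:

```python
# Conservation of expected evidence, with made-up numbers.
# Prior belief in hypothesis H, and an observable E that favors H.
prior = 0.9
p_e_given_h = 0.8      # P(E | H)
p_e_given_not_h = 0.2  # P(E | not-H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h   # P(E) = 0.74
posterior_if_e = prior * p_e_given_h / p_e                  # ~0.973: small move up
posterior_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # ~0.692: larger move down

# The direction of the update is predictable (74% chance of moving up),
# but the probability-weighted average posterior equals the prior:
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(round(expected_posterior, 6))  # 0.9
```

The high-probability small upward move (74% chance of going from 0.9 to about 0.973) is exactly offset by the low-probability larger downward move (26% chance of falling to about 0.692), as the footnote describes.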

  3. Anyone who tries to spread probability literacy quickly runs into the problem that a weather forecast giving an 80% chance of clear skies is deemed “wrong” on the 1-in-5 occasions when it in fact rains, prompting people to wonder what mistake the weather forecaster made this time around. 

  4. More precisely, I would say that the market was inexploitable in money, but inefficiently priced. 

  5. To short-sell is to borrow the asset, sell it, and then buy it back later after the price declines; or sometimes to create a synthetic copy of an asset, so you can sell that. Shorting an asset allows you to make money if the price goes down in the future, and has the effect of lowering the asset’s price by increasing supply. 
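The mechanics in footnote 5 can be sketched with invented prices (the share counts and prices below are illustrative only, and borrowing fees are ignored):

```python
# Short sale mechanics with invented prices: borrow 10 shares at $50,
# sell them now, buy them back later to return to the lender.
shares = 10
sell_price = 50.0                 # price when the borrowed shares are sold
proceeds = shares * sell_price    # $500 received up front

buyback_price = 40.0              # the price later falls, as hoped
profit = proceeds - shares * buyback_price  # gain before borrowing fees
print(profit)  # 100.0

# If the price instead rises to $65, the position loses money--and
# losses grow without bound the further the price rises:
loss = shares * 65.0 - proceeds
print(loss)  # 150.0
```

The unbounded downside is one reason shorting is riskier than simply declining to buy, which in turn is part of why overpricing can persist in hard-to-short markets.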

  6. Though beware that even in a stock market, some stocks are harder to short than others—like stocks that have just IPOed. Drechsler and Drechsler found that, in recent years, a broad market fund holding only assets that are easy to short would have produced 5% higher returns (!) than index funds that don’t kick out hard-to-short assets. Unfortunately, I don’t know of any index fund that actually tracks this strategy; otherwise it’s what I’d own as my main financial asset. 

  7. Robert Shiller cites Edward Miller as having observed in 1977 that efficiency requires short sales, and either Shiller or Miller observes that houses can’t be shorted. But I don’t know of any standard economic term for markets that are inefficient but “inexploitable” (as I termed it). It’s not a new idea, but I don’t know if it has an old name.

    I mention parenthetically that a regulator that genuinely and deeply cared about protecting retail financial customers would just concentrate on making everything in that market easy to short-sell. This is the obvious and only way to ensure the asset is not overpriced. If the Very Serious People behind the JOBS Act to enable crowdfunded startups had honestly wanted to protect normal people and understood this phenomenon, they would have mandated that all equity sales go through an exchange where it was easy to bet against the equity of dumb startups, and then declared their work done and gone on permanent vacation in Aruba. 

  8. “Quality-adjusted life year” is a measure used to compare the effectiveness of medical interventions. QALYs are a popular way of relating the costs of death and disease, though they’re generally defined in ways that exclude non-health contributors to quality of life. 

  9. Hanson, “Academia’s Function.” 

  10. This is also why, for example, you can’t get your project funded by appealing to Bill Gates. Every minute of Bill Gates’s time that Bill Gates makes available to philanthropists is a highly prized and fought-over resource. Every dollar of Gates’s that he makes available to philanthropy is already highly fought over. You won’t even get a chance to talk to him. Bill Gates is surrounded by a cloud of money, but you’re very naive if you think that corresponds to him being surrounded by a cloud of free energy. 

  11. Robin often says things like, for example: “X doesn’t use a prediction market, so X must not really care about accurate estimates.” That is to say: “If system X were driven mainly by incentive Y, then it would have a Y-adequate equilibrium that would pick low-hanging fruit Z. But system X doesn’t do Z, so X must not be driven mainly by incentive Y.” 

  12. Even the attention and awareness needed to explicitly consider the option of making such a tradeoff, in an environment where such tradeoffs aren’t already normally made or discussed, is a limited resource. Researchers will not be motivated to take the time to think about pursuing more socially beneficial research strategies if they’re currently pouring all their attention and strategic thinking into finding ways to achieve more of the other things they want in life.

    Conventional cynical economics doesn’t require us to posit Machiavellian researchers who explicitly considered pursuing better strategies for treating SAD and decided against them for selfish reasons; they can just be too busy and distracted pursuing more obvious and immediate rewards, and never have a perceptible near-term incentive to even think very much about some other considerations. 

  13. See: What If? Laser Pointer


For my own benefit I thought I'd write down examples of markets that I can see are inadequate yet inexploitable. Not all of these I'm sure are actually true, some just fit the pattern.

  • I notice that most charities aren’t cost effective, but if I decide to do better by making a super cost-effective charity I shouldn’t expect to be more successful than the other charities.
  • I notice that most people at university aren’t trying to learn but to get good signals for their career; I can’t easily do better in the job market by giving up on signaling and just learning better.
  • I notice most parenting technique books aren't helpful (because genetics), but I probably can’t make money by selling a shorter book that tells you the only parenting techniques that do matter.
  • If I notice that politicians aren’t trying to improve the country very much, I can’t get elected over them by just optimising for improving the country more (because they're optimising for being elected).
  • If most classical musicians spend a lot of money on high-status instruments and spend time with high-status teachers that don’t correlate with quality, you can’t be more successful by just picking high quality instruments and teachers.
  • If most rocket companies are optimising for getting the most money out of government, you probably can’t win government contracts by just making a better rocket company. (?)
  • If I notice that nobody seems to be doing research on the survival of the human species, I probably can’t make it as an academic by making that my focus.
  • If I notice that most music recommendation sites are highly reviewing popular music (so that they get advance copies) I can’t have a more successful review site/magazine by just being honest about the music.

Correspondingly, if these models are true, here are groups/individuals who it would be a mistake to infer strong information about if they're not doing well in these markets:

  • Just because a charity has a funding gap doesn't mean it's not very cost-effective
  • Just because someone has bad grades at university doesn't mean they are bad at learning their field
  • Just because a parenting book isn't selling well doesn't mean it isn't more useful than others
  • Just because a politician didn't get elected doesn't mean they wouldn't have made better decisions
  • Just because a rocket company doesn't get a government contract doesn't mean it isn't better at building safe and cheap rockets than other companies
  • Just because an academic is low status / outside academia doesn't mean their views aren't true
  • Just because a band isn't highly reviewed in major publications doesn't mean it isn't innovative/great

Some of these seem stronger to me than others. I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are adequate at figuring out which bands are good.

I notice that most charities aren’t cost effective, but if I decide to do better by making a super cost-effective charity I shouldn’t expect to be more successful than the other charities.

This seems wrong to me. I think you should expect to be more cost-effective, but you should also expect to get much less funding than the average charity (all else equal), which might still make the total impact you have larger.

Or to phrase it in Eliezer's jargon: The market of charities is exploitable with respect to cost-effectiveness, but inexploitable with respect to funding. And ultimately you care about cost-effectiveness * funding.

Yup - I was under-specific about the definition of 'successful'. Thanks!

I tend to think that academic fields are more adequate at finding truth and useful knowledge than music critics are adequate at figuring out which bands are good.

Why is that?

Presumably there's a lot of money on the line to identify music that people will like. (Though I guess that's the role of studio execs and producers, and not critics.)

To echo Meister's comment, I think that rocket companies are different because of the high activation energy. It is not easy money.

Re: Rocket Companies

It turns out it is possible to compete by playing a different game than the incumbents, but you need to have a huge amount of capital and tolerate the risk of losing all of it. If you read the early history of SpaceX, they came within a hair's width of bankruptcy.

This seems to be a general feature of startup founder stories. Founding a startup has been so romanticized that some people would still try even if there's only a 1% chance of success. So even though it requires individuals to go against the energy-gradient, we nevertheless see people try to innovate. And a few even succeed. And this is good for society.

This might give us a general tactic for breaking out of inadequacy-traps: if the successful risk-takers are awarded enough status, then others will try to follow their example even though the expected utility of doing so is disastrous for the individual.

I've heard it said that the fastest way to win a Nobel prize is to do something that everyone else in your field believed was impossible. Maybe academia recognizes that it's held back by a bandwagon-research inadequacy-trap, and giving its highest prize to researchers that don't follow the same incentives as everyone else is a way that the system self-corrects? Not by compensating them for their risk, but by encouraging other researchers to act against their own best interests and perform risky research in the vain hope of obtaining a ludicrously high status prize.

I don't know how you use this to fix politics, though. My first thought is to declare some genuine lifelong public servants to be national heroes or something, but I worry that any such honor will become a popularity contest that the more common kind of politician is more optimized for.

I'd argue the complexity of information gathering and the crappy UI of voter punishment or reward are more relevant to politics. A good model of where to start might be an efficient market of many educated actors being able to fix the political power of politicians the same way current markets fix the price of stocks today. There's already a relatively open field for actors willing to become journalists or podcasters, so the media moving piece in the current system is less systematically broken. It's also a component in enough other systems that are less broken than politics that we should expect better efficiency than today to be attainable even while keeping the current media.

Not sure how to implement the specifics, however...

Super helpful explanations of how to model important parts of the world with microeconomics. I really appreciate laying out both how a system can be inefficient yet unexploitable, and also the simple model of how a big institution filled with intelligent people (like academia) can be inadequate yet still in a competitive equilibrium (such that it’s hard to take advantage of noticing the institution’s inadequacy). I might go away and think about all the institutions in my life and try to do a basic analysis of how they are and aren’t adequate. For these reasons I’m promoting this to Featured.

(Also, the first two paragraphs of section 6 were awesome.)

A fun example of inadequacy in action: "It's 2017, where's my jetpack?"

From a technical standpoint, jetpacks do not seem like they'd be that difficult to build. On the other hand, they do seem like a liability nightmare, which would explain why they're not readily available for sale at a reasonable price. However, put those two things together, and you might hypothesize that jetpack blueprints can easily be found online if you go looking.

Thanks for this post. I've seen the term inadequacy before (mostly on your Facebook page, I think) but never had such a clear definition in mind.

There was one small thing that bothered me in this post without detracting from the main argument. In section IV, we provisionally accept the premise "grantmakers are driven by prestige, not expected value of research" for the sake of a toy example. I was happy to accept this for the sake of the example. However, in section V (the omelette example and related commentary about research after the second world war), the text begins to read as though this premise is definitely true in real life. This felt inconsistent and potentially misleading.

(It’s not like anyone in our civilization has put as much effort into rationalizing the academic matching process as, say, OkCupid has put into their software for hooking up dates. It’s not like anyone who did produce this public good would get paid more than they could have made as a Google programmer.)

I appreciated this throwaway example of inadequacy. It gave me a little lightbulb and propelled me forward to read the rest of the post with more interest.

Human dating is notoriously inefficient (not necessarily in the way OP uses this term). Most individuals who want a suitable sex partner and do not already have one accessible will not be able to find one when they want it, even though the odds are very high that someone suitable and willing lives within 5 minutes of them. Yet there is clearly very little, if any, free energy in the system, given the number of dating apps and sites trying to extract it. What kind of inefficiencies/inexploitabilities/inadequacies are there in this system?

1) Though there is probably someone suitable and willing living within 5 minutes of Jesse, many more of the people within 5 minutes of em are not. It's hard to filter these people, and risky to get it wrong. At best, the other person is unwilling, rude or annoying. Worse, they could be unhealthy, violent, or untrustworthy.

2) Dating sites don't optimize for efficiently starting romantic relationships. If they were really successful at this, people would spend less time on the sites, getting the sites less attention and thus ad / member revenue.

3) A selection effect (?) Many people are already in satisfying romantic situations. People searching right now are more likely to have some sort of character flaw or bad strategy which keeps them in this situation.

How would we test these? Maybe there's research already out there that (dis)confirms them?

Most women who want to have sex with a stranger don't simply want sex. They want an interaction that makes them feel a specific way before they have sex. That emotional experience isn't just produced by deciding to hook up together.

Just to clarify, some of the free-energy extractors that are not commonly utilized, and for good reasons, are BDSM events, polyamory, swinging and similar fringe activities.

I posted the idea of installing very bright lights on LW five years ago and Eliezer commented there so I give myself credit for at least making that spontaneous idea more likely. And it happens to be the case I've been thinking about the failings of light boxes for SAD in the meantime.

What happened is that a few people experimented with light therapy, got success with 2,500 lux for two hours, decided two hours per day was infeasible outside the lab, found that they could get the same result by dividing the time but multiplying the light intensity, and then... just... stopped. They did studies with 10,000 lux boxes, and that's a relatively expensive study, so you'd better cooperate with a producer of such boxes. So you get some type of kickback, and suddenly nobody's interested in studying whether stronger, cheaper lights are even better. Light boxes became a medical device and magically became just as expensive as medical insurers would tolerate. That they don't work for everyone was expected, because no depression treatment works for everyone (maybe except electroconvulsive therapy, and ketamine, but that came later). And LEDs only became cheap enough recently (five years ago they still weren't clearly the cheapest option), so going much beyond 10,000 lux presented enough of a technical challenge to make further trials pretty expensive, until recently.

Right now, you could probably do a study with SAD sufferers who have tried light boxes and found them insufficient. Give them a setup that produces like 40,000 lux and fits in a normal ceiling fixture, so they can have it running while they do things rather than have to make time to sit in front of it. For a double-blind control design, maybe give one group twice the brightness of the other? Have your participants log every day how much time they spent in the room with the lamp running, and how much time they spent outside. Don't give them money, but let them keep the lamp if they continue mailing their filled-out questionnaires. Should be doable at a hundred dollars per participant, and without ever physically meeting them. You still need six figures to run that study at a large enough size, and no light box maker is going to fund you.

Why isn't a light box maker willing to pay $100,000 as a marketing expense?

I liked the text, but I don't like that I liked it, as I started to suspect that the rationality movement may fall into the trap discussed in the text.

That is, the main art of a rationalist is to write rationality blog posts, and the whole system is optimised to write better, more beautiful and more likable posts.

As a result, it is a great writing school, but it was not intended to be a school of text writing. It is an inadequate equilibrium for the rationality community. Shorter and less beautiful texts with condensed TL;DRs may help counteract this.

I believe that equilibrium has already arrived... and at no real surprise, since no preventive measures were ever put into place.

The reason this equilibrium occurs is that there is a social norm that says "upvote if this post is both easy to understand and contains at least one new insight." If a post contains lots of deep and valuable insights, this increases the likelihood that it is complex, dense, and hard to understand. Hard-to-understand posts often get mistaken for poor writing (or worse, will be put in a separate class and compared against academic writing) and will face higher scrutiny. Only rarely and with much effort will someone be able to successfully write things that are easy to understand and contain deep insights. (As an example, consider MIRI's research papers, which are more likely to contain much more valuable progress towards a specific problem, but also receive little attention, and are often compared against other academic works where they face an uphill battle to gain wider acceptance.)

The way around this, if you choose to optimize for social approval and prestige, is to write beautifully written posts that explain a relatively simple concept. Generally, it is much easier to be a brilliant writer than someone who uncovers truly original ideas. It's much easier to use this strategy with our current reward system.

Therefore, what results is basically a lot of amazingly-written articles that very clearly explain a concept you probably could have learned somewhere else.

But we're in for a real treat with this sequence, since it openly acknowledges that it's hard to know if you've found a genuine insight. It's going to get really meta...

(This is one of the primary reasons why a post being in featured is not decided by the number of upvotes or downvotes, but by moderator decision. We have a bunch of ability to push back against this incentive gradient.)

Counterargument, depending on what you view the goal of the rationality movement is: If we want to raise the sanity waterline and get the benefits from having a lot of people armed with new insights, we need to be able to explain those insights to people. Take literacy- there's a real benefit to getting a majority of a population fluent in that technique, something distinct from what you get from having a few people able to read. Imagine if the three rationality techniques most important to you were so widespread that it would be genuinely surprising if a random adult on the street wasn't capable with them. What would the last year have looked like if every adult knew that arguments are not soldiers, or that beliefs should pay rent? (My social media would be a much more pleasant place if everyone on it knew that nobody is perfect but everything is commensurable.)

We teach math by starting with counting, then addition, then subtraction, then multiplication, and so on until differential equations or multivariable calculus or wherever one's math education stops. One can argue that we teach math badly (and I would be pretty sympathetic to that argument), but I don't think "too many easy-to-understand lessons that teach only one new insight" is the problem. I might go so far as to say we need multiple well-written articles on the most important insights, written in a variety of styles to appeal to a wide variety of readers.


The analogy to focusing really hard on the basics (like math) to create a situation where your actual anticipations of the world is a really good framing that I hadn't explicitly considered before. Thanks for pointing it out.

The analogy to focusing really hard on the basics (like math) to create a situation where your actual anticipations of the world is a really good framing that I hadn't explicitly considered before.

Was there a word missing after "anticipations of the world"? I'm having trouble parsing as is.


This is an important failure mode to consider, to be sure, but why do you think we've fallen into it? And more relevantly, is this something you're saying you've observed about LessWrong 2.0 in particular, or the rationalist movement in general?

As someone just really getting up to speed, I find myself enjoying the very long posts, but FWIW, few of my friends would ever sit down to read them.

I expect this is a trite and repetitive comment in this community, though.

I think this article / concept is incredibly useful, and singlehandedly justifies the existence of LW2. Thank you!

I want to go reread "You and Your Research" and see how the free energy concept could apply there -- if anyone else does, I'd love to hear thoughts.

I agree that the article does some incredibly useful conceptual work. I will say that I think Eliezer had written it and was planning to publish independently of our LW 2.0 plans, and that I think the test of LW 2.0's success will be (in this case) the discussion following it, and more generally the communication between writers and thinkers from the whole community (things like this valuable critique of Eliezer's fire alarm post).

Huh, I like the suggestion for applying this to Hamming's talk, I might do that myself and do a brief write-up. Thanks!

This is not exactly central to your main argument, but I think it's worth pointing out, since this is something I see even economists I really respect like Scott Sumner being imprecise about: even if markets are efficient (and I agree they pretty much are!), prices can still be predictable.

This is the standard view in academic asset pricing theory. The trick is that under the EMH, risk-adjusted returns must follow a random walk, not that returns themselves must follow a random walk. I have an essay explaining this in more detail for the curious.

Thank you for writing such a clear article on the issue. Cleared up my confusion around EMH, and especially how it differs from the random walk hypothesis. I'll definitely reference this article when people bring up EMH.

Which, in sum, meant I wouldn’t be getting much more than 20% real returns.

Is this a typo? This is stated as if it's a disappointment, and not worth the bother, but 20% real returns sounds great!

I think the actual problem there is the contract cap--going to all this work results in something like $120 [Edit: on further calculation, I think it's actually much lower, like $30, assuming you don't have other gambling winnings to offset] in reward, which is likely not worth the hassle. If you could put in 10x the money for 10x the return without a corresponding increase in the hassle, then it might be worth it.

We can work out the bounds on the error created by the arbitrage in the absence of contract caps. A wrinkle is that under US tax law you can combine all gambling winnings and losses over the course of the year, so if you bought $2k of Trump on BetFair and $6k of Clinton on PredictIt, the $10k you get back would have $8k subtracted from it, and you only need to pay taxes on your $2k of winnings. But from BetFair's point of view, your profit is $8k, and you need to pay the profit fee on $8k and the withdrawal fee on $10k. (BetFair's fee structure is monstrously complicated, so I'm going to pretend they have PredictIt's fee structure.) Since the profit fees and withdrawal fees apply before the income tax, and the income tax only multiplicatively reduces the profit, we'll just determine the difference in probabilities that leads to the exchange giving you back more money than you put in: positive profit will still be positive profit after taxes, when it might not be after fees. (I'm also assuming no deposit fees, but you can just roll those into the price of the contracts.)

So suppose you have two mutually exclusive and exhaustive options, whose prices on the two exchanges are (going with the example) c and t. As this is an opportunity for arbitrage, let's assume c+t<1-y, and c>t. (We'll eventually solve for y, which tells us how much the probabilities need to be off for there to be an arbitrage opportunity.)

Suppose the contract priced at x wins. We lose (1-x)*.1 to the profit fee, leaving .9+.1x to be hit by the withdrawal fee of 5%, leaving .855+.095x as the payout. We need .855+.095x > c+t for this to be profitable.

If we're highly risk averse, we pick x to be the cheaper of the two (t), requiring a profit even when the lower-payout side wins. That gives us .855 + .095t > c + t, and since the break-even point is c + t = 1 - y, we can immediately observe .855 + .095t = 1 - y, which tells us that y = .145 - .095t.

Which is basically what we would have gotten if we multiplied the fees together; the only interesting thing is that the lowest probability assignment (in this example, BetFair's 20% on Trump) affects the bounds, such that disparities need to be higher when the more extreme probability is more extreme (because the profit fee cuts deeper).
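This fee arithmetic is easy to check numerically. Here's a minimal Python sketch of it, assuming (as the calculation above does) PredictIt-style fees: a 10% fee on profits and a 5% fee on withdrawals.

```python
# Fee model assumed above: 10% fee on profits, 5% fee on withdrawals.
PROFIT_FEE = 0.10
WITHDRAWAL_FEE = 0.05

def payout_per_contract(price):
    """Net payout of a winning $1 contract bought at `price`:
    the profit fee comes off first, then the withdrawal fee."""
    after_profit_fee = 1 - PROFIT_FEE * (1 - price)   # 0.9 + 0.1 * price
    return after_profit_fee * (1 - WITHDRAWAL_FEE)    # 0.855 + 0.095 * price

def min_mispricing(t):
    """The gap y such that buying both sides is profitable only when
    c + t < 1 - y, assuming the cheaper side (price t) wins."""
    return 1 - payout_per_contract(t)                 # 0.145 - 0.095 * t

# With the cheaper side at 20 cents (BetFair's 20% on Trump):
print(round(min_mispricing(0.20), 3))  # -> 0.126
```

So with the cheaper contract at 20 cents, the two prices would have to sum to less than about 87.4 cents before the round trip beats the fees.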

Actually, someone had tested the "more light" hypothesis by the time you were looking for results, and had written about it!

"Part of our civilization was being, in a certain sense, stupid: there were trillion-dollar bills lying around for the taking. But they weren’t trillion-dollar bills that just anyone could walk over and pick up."

This is a great explanation for the appeal of politics: it's about trying to take over those parts of our civilization that are displaying this kind of stupidity in order to get them to not do that so much.

Or to get them to do that in the politician's (and his in-group's) favor.

Here is another example of inadequacy / inefficiency in the pharmaceutical market.

Cancer X is very aggressive: even when it is diagnosed at a very early stage and surgically removed, the recurrence rate is around 70% within 5 years. When the cancer returns, or when a patient presents at an advanced stage, the mean survival time is only 6 months.

Pharmaceutical company Y has recently discovered a new anticancer drug. According to state-of-the-art preclinical experiments, the drug inhibits the spread of cancer and kills cancer cells very effectively. Top scientists at company Y expect that, when applied in an adjuvant setting for 4 months after the surgical operation, the drug would reduce the cancer recurrence rate to 30%. Even when the drug is given to patients with advanced-stage cancer, it is expected to double their survival time.

Driven by the desire to bring the drug to market as early as possible, executives at company Y initiate the fastest clinical trial. A study in the adjuvant setting (starting 4 weeks after the operation) would require several years to complete in order to show that the drug has an advantage over the standard of care. A study in advanced-stage cancer requires much less time, so there is some benefit in getting to market for advanced cancer and extending survival from 6 to 12 months.

However, once the drug is on the market for advanced disease, a clinical trial and eventual approval of the drug as an adjuvant would undermine total sales, because of the two-fold reduction in the number of relapsed patients who would need to take the drug every single day for the rest of their lives.

This is an efficient (the drug company gets the most profit per dollar invested) but inadequate (the drug company could save far more patients by pursuing adjuvant therapy) market situation. I would say that the root of the inadequacy is a conflict over what the goal of a pharmaceutical company is. I would expect a pharmaceutical company to sell drugs and not mortgage derivatives, but as a company, its main objective is the maximization of profits for investors. So probably some composite measure of QALYs and profits should be used to evaluate the adequacy and efficiency of the market.

BTW, I stopped having depression after I started megadosing vitamin D a couple of years ago. I'm not very sure about the causal connection, but it is the main big change in my regimen. I take around 10,000 units of D once a week. I also spent one grand on LEDs.

This is a nitpick, but often when we say a market is efficient we mean Pareto efficient, which does carry with it connotations of fairness and at the very least utility maximisation. This is the sense used when not talking about asset markets (it also applies to asset markets, which have the further property of being informationally efficient).

I wonder what you think of Taleb's notion of antifragility and where it would fit. The central point is that "quantity matters," and one should divide the investment across as many motivated research teams, startups, or ideas as possible, to hit the "black swan" rare but extremely beneficial intervention. For example, M-Pesa was a once-in-a-decade project: DFID gave a 1m pound grant to Vodafone, and they matched it. That scaled a simple idea that created mobile banking based on SMS, the only universal mobile app besides calling. Now a third of Kenyan GDP flows through M-PESA. Taleb actually writes about much more extreme and obvious trillion-dollar ideas that lay on the ground for 1,000 years: the steam engine, for one, available in Ancient Greece as a toy. Or wheels, used by the Aztecs only on toys, not for transportation.

So just to get to SAD: I had never heard of that condition, so I suppose it might be rare or recent, and thus only a few researchers might be working on it. These inventions usually don't come from science labs, but from makers. So how many startups are tackling this issue? If only a dozen, we need to increase their number. But in Scandinavia or Belarus you find heavily lit streets and even blocks of flats for exactly this reason: to fight depression due to short days.

While the Greeks did have some steam engines, it's not clear that they had the requirements for using steam engines in a commercially viable way. The commercially viable steam engines we have seen needed high-quality brass that wasn't available before 1700.


> At one point during the 2016 presidential election, the PredictIt prediction market—the only one legally open to US citizens (and only US citizens)—had Hillary Clinton at a 60% probability of winning the general election. The bigger, international prediction market BetFair had Clinton at 80% at that time.

Is it just that there is percentage-income-taxed error in prediction markets' guesses, and this 20% discrepancy falls under this, or are markets unable to Aumann agree if they can only watch each other and can't arbitrage?

percentage-income-taxed error

I'm pretty sure income-tax doesn't affect arbitrage bounds in the absence of other obstacles. (It does multiply the effective height of those obstacles.) That is, if PredictIt and BetFair had no caps, no user restrictions, and charged no fees, but you had to pay income tax on aggregate winnings, you would expect the prices to be the same. But if a profit opportunity had to be worth $100 to justify putting in the effort, then an income tax of 50% means $100 opportunities will be passed up and only $200 opportunities will be taken.
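The multiplicative effect described above can be sketched in a couple of lines of Python (the $100 effort bar and the 50% tax rate are the comment's illustrative numbers, not real figures):

```python
def min_gross_opportunity(effort_threshold, tax_rate):
    """Smallest pre-tax profit worth taking, given that the after-tax
    payoff must still clear a fixed effort threshold."""
    return effort_threshold / (1 - tax_rate)

# A $100 effort bar and a 50% tax: only $200+ opportunities get taken.
print(min_gross_opportunity(100, 0.5))   # -> 200.0
# With no tax, the bar stays at $100.
print(min_gross_opportunity(100, 0.0))   # -> 100.0
```

The tax doesn't shift which direction prices err in; it just raises the size an opportunity must reach before anyone bothers to correct it.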

are markets unable to Aumann agree if they can only watch each other and can't arbitrage?

Markets with friction will have their error bounded by the friction. Even if you could legally buy contracts on both PredictIt and BetFair, a withdrawal fee of 5% will mean that seeing a price of 45 on one site and 50 on the other site wouldn't justify buying the cheaper options to try to drive the prices together, because when you pulled your guaranteed dollar out you would only get 95 cents, which is what that guaranteed dollar cost you. (The withdrawal fee's effects are muted if there are lots of markets that close serially, as you can collect lots of arbitrage and then only pay the 5% once, but for rare high-volume events like presidential elections this isn't that relevant.)

You would expect those prices to slowly drift together over time because of new entrants (who would buy the cheaper option if comparing them), but it wouldn't be the immediate price-correction that you see in efficient markets.
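The 45-vs-50 example above can be checked numerically. A small sketch, working in integer cents to keep the arithmetic exact, with the 5% withdrawal fee as the only friction modeled:

```python
WITHDRAWAL_FEE_PCT = 5  # percent, charged when you pull money out

def arbitrage_profit_cents(price_a, price_b):
    """Net profit, in cents, from buying one contract on each side of a
    binary market (prices in cents) and then withdrawing the
    guaranteed 100-cent payout minus the withdrawal fee."""
    payout = 100 - WITHDRAWAL_FEE_PCT   # the guaranteed dollar, after the fee
    return payout - (price_a + price_b)

print(arbitrage_profit_cents(45, 50))  # -> 0: exactly break-even
print(arbitrage_profit_cents(40, 50))  # -> 5: a wider gap clears the fee
```

So a 5-point price gap is exactly swallowed by the fee, and only a wider gap gives anyone a reason to trade the prices together.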

To see if I understand this right, I'll try to look at (in)adequacy in terms of utility maximization.

The systems that we looked at can be seen as having utility functions. Sometimes the utility function is explicitly declared by the creators of the system, but more often it's implicit in its design or just assumed by an observer. For markets it will be some combination of ease of trade, adjacency of prices in sell and buy offers, etc.; for academia, the amount of useful scientific progress per dollar; for medicine, the number of saved and improved lives (appropriately weighted) per dollar; and so forth.

We might have different (or nonexistent) precise definitions of the utility functions, but we all agree that "saving ten thousand lives for just ten dollars each" increases medicine's utility value and completing modestly priced research that has great benefit for humanity increases academia's utility value.

Then the adequacy of the system is its ability to maximize its utility value, and the lack of such ability is inadequacy. It might also make sense to talk in comparatives: system A is more adequate than system B at maximizing utility function F; system A is more adequate at maximizing F1 than it is at maximizing F2.

I see the following possible reasons for inadequacy:

  1. The system in its current state maximizes a different function than what its creators intended (in other words, it's not aligned with the intention of its creators).
  2. The observer has a different idea about the implied or intended utility function or misunderstands how things work.
  3. The system maximizes what we want but does it poorly.
  4. The system is not rational and doesn't consistently maximize any utility function (and perhaps the creators, if any, didn't have anything precise in mind in the first place).

In real world systems we usually see a combination of all of these reasons. Technically, (4) would rule out the other options, but insofar that (4) might be indistinguishable from maximizing utility imperfectly, they could still apply.

I know this sounds quite imprecise compared to typical utility maximization talk but I think it would be a non-trivial amount of work to define things more precisely. Does this seem to go in the right direction though?

>Suppose it were the case that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.

This is hindsight bias. Eliezer gives this example because it's an example which happened to work.

But the relevant question is not "would immodesty, in this cherry-picked case, produce the right result", but "would immodesty, when applied to many cases whose truth value you don't know about in advance, produce the right result". The procedure that has the greatest chance of working overall might fail in this particular case.

There are all sorts of things which can help you in a cherry-picked case subject to hindsight bias and availability bias, which are bad overall. There are automobile accidents where people were saved by not having seatbelts, but it would be dumb to point to one of those and use it as justification for a policy of not wearing a seatbelt.

"Hindsight bias" seems like the wrong term, unless you're claiming that Eliezer was much less confident beforehand that this experiment would work than he sounds; but the thing you're saying in the rest of your comment is just that the example is cherry-picked and might be unrepresentative, regardless of how confident Eliezer happened to be that "more light" would work. When Eliezer introduced the Bank of Japan example as well as the SAD example, he explicitly said that both were "cherry-picked," so I think it's good that you're pointing this out in case readers forget.

There are different ways in which the example might be unrepresentative, and if you do think that's the case, I think it would be helpful to explicitly state how you'd expect it to be unrepresentative. A few examples:

  • "SAD research is generally on the ball, and this is a weird exception where researchers happened to have a blind spot."
  • "Most medical research is dramatically better than research on depressive disorders, so Eliezer got lucky by having a problem that fell in the depression category."
  • "Medical research is particularly dysfunctional in ways that make it easy to outperform in this way, but this is an anomaly and isn't something you can expect to do in areas outside of medical research."
  • "Higher-intensity, higher-duration artificial illumination is probably useless for treating nearly all cases of SAD, and this is probably well-known to SAD researchers, but Brienne happened to be suffering from a very atypical case of SAD that does respond well to extra artificial light."

I'm not saying you need to have a settled view on what the right answer is; I'm just curious very roughly what kinds of explanations you think are relatively likely, versus relatively unlikely.

"'Hindsight bias' seems like the wrong term"

Quoting the Less Wrong wiki: "Hindsight bias is a tendency to overestimate the foreseeability of events that have actually happened. I.e., subjects given information about X, and asked to assign a probability that X will happen, assign much lower probabilities than subjects who are given the same information about X, are told that X actually happened, and asked to estimate the foreseeable probability of X."

Eliezer claims that he knows better than the experts. The event being foreseen is "my claim to know better than the experts pans out". He's pointing to a single instance of that, where it did indeed pan out, and using it to suggest that the event is relatively likely to happen in general. That's a form of hindsight bias.

"I think it would be helpful to explicitly state how you'd expect it to be unrepresentative."

We know that there are areas where Eliezer claims to know better than the experts. We also know that the most prominent ones of those are not medical at all. There are tons of experts who deny LW-style AI danger, or say that cryonics is pointless, or that you don't have to believe many worlds theory to be a competent physicist. So the answer is "those things are so far from SAD that I'd be surprised if there was any way they could be representative."

I think both of the examples that EY gives are cases where he was public about his position before the empirical evidence came in.

EY wrote on Facebook about his project to build the mega lamp to help Brienne, and was confident enough in it to convince her not to spend the winter outside of the US.

The example with Japanese monetary policy is also one where EY was public about his views before the empirical evidence was public.

That doesn't help because there's no baseline. How many times did he have public positions that didn't pan out?

But the point is that "Eliezer knew better than the experts with respect to lamps" doesn't imply "Eliezer knows better than the experts on typical LW topics about which Eliezer claims to know better than the experts".

The key problem is that it worked, but the knowledge doesn't spread in an effective way.

There are many things that people discover that work, but in our society the knowledge doesn't spread in a scalable way.