The Robots, AI, and Unemployment Anti-FAQ

Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
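The parable's arithmetic can be checked mechanically.  Here is a toy sketch in Python (purely illustrative, using the same numbers as above):

```python
# Toy model of the hot-dog-and-bun parable: a fixed pool of labor is
# fully reallocated after automation, rather than left idle.
def matched_output(labor_pool, labor_per_hot_dog, labor_per_bun):
    """How many hot-dogs-in-buns a labor pool can make at these labor costs."""
    labor_per_matched_pair = labor_per_hot_dog + labor_per_bun
    return labor_pool // labor_per_matched_pair

before = matched_output(30, 2, 1)  # 30 // (2 + 1) = 10 hot dogs in 10 buns
after = matched_output(30, 1, 1)   # 30 // (1 + 1) = 15 hot dogs in 15 buns
print(before, after)               # prints: 10 15
```

The point of the toy model is that the labor freed up by cheaper hot dogs is reallocated to buns, so total consumption rises instead of employment falling.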

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory, and in reality, unemployment is rising.

A.  Sure.  Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The naive picture - automation removes a job, therefore the economy has one fewer job - has not been the way the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries in exactly the way the standard lovely economic model said it should.  The idea that there's a limited amount of work, which automation destroys, is known in economics as the "lump of labour fallacy".

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

[Image: Baxter robot]

Q.  Maybe we've finally reached the point where there's no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.

A.  You talked about jobs going away in the Great Recession and then not coming back.  Well, the Great Recession wasn't produced by a sudden increase in productivity, it was produced by... I don't want to use fancy terms like "aggregate demand shock" so let's just call it problems in the financial system.  The point is, in previous recessions the jobs came back strongly once NGDP rose again.  (Nominal Gross Domestic Product - roughly the total amount of money being spent in face-value dollars.)  Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US).  If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions?  Something has gone wrong with the engine of reemployment.

Q.  And you don't think that what's gone wrong with the engine of reemployment is that it's easier to automate the lost jobs than to hire someone new?

A.  No.  That's something you could say just as easily about the 'lost' jobs from hand-weaving when mechanical looms came along.  Some new obstacle is preventing jobs lost in the 2008 recession from coming back, which may indeed mean that jobs eliminated by automation are also not coming back - and that new high school and college graduates entering the labor market, likewise usually a good thing for an economy, will just end up sad and unemployed.  But this must mean something new and awful is happening to the processes of employment - not that the kind of automation happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too.  It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade, which have likewise been going on for centuries and have likewise previously been a hugely positive economic force.  If something is generally wrong with reemployment, then increased trade with China might result in permanently lost jobs within the US, in direct contrast to the way trade has worked over all previous economic history.  But just as with new college graduates ending up unemployed, something else must be going very wrong - something that wasn't going wrong in 1960 - for anything so unusual to happen!

Q.  What if what's changed is that we're out of new jobs to create?  What if we've already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?

A.  This does not square with our being unable to recover the jobs that existed before the Great Recession.  Or with lots of the world still living in poverty.  And we are nowhere near the extreme: there was a time when professionals usually had personal cooks and maids - as Agatha Christie said, "When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car."

Many people would hire personal cooks or maids if we could afford them, and that's the sort of new service that ought to come into existence if other jobs were eliminated - maids became less common because they were offered better jobs, not because demand for that form of human labor stopped existing.  Or to be less extreme, there are lots of businesses that would take nearly-free employees in various occupations, if those employees could be hired at literally minimum wage and legal liability weren't an issue.  Right now we haven't run out of want or use for human labor, so how could "The End of Demand" be producing unemployment right now?  The fundamental fact that's driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing to have nothing better to do.  We do not literally have nothing better for unemployed workers to do.  Our civilization is not that advanced.  So we must be doing something wrong (which we weren't doing wrong in 1950).

Q.  So what is wrong with "reemployment", then?

A.  I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation.  Among developed countries that seem to be doing okay on reemployment, Australia hasn't had any drops in employment, and its monetary policy has kept nominal GDP growth on a much steadier keel - using the central bank to regularize the number of face-value Australian dollars being spent - which an increasing number of influential econbloggers think the US, and even more so the EU, have been getting catastrophically wrong.  Though that's a long story.[1]  Germany saw unemployment drop from 11% to 5% between 2006 and 2012 after implementing a series of labor market reforms, though there were other things going on during that time.  (Germany has twice as many robots per capita as the US, which probably isn't significant to its larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.)  Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could've changed between now and the 1950s that might make reemployment harder.  And though I'm not a leading econblogger, some other obvious-seeming thoughts that occur to me are:

* Many industries that would otherwise be accessible to relatively less skilled labor have much higher barriers to entry now than in 1950.  Taxi medallions, governments saving us from the terror of unlicensed haircuts, fees and regulatory burdens associated with new businesses - all things that have plausibly changed relative to the previous four centuries.  This doesn't apply only to unskilled labor, either; in 1900 it was a lot easier, legally speaking, to set up shop as a doctor.  (Yes, the average doctor was substantially worse back then.  But ask yourself whether some simple, repetitive medical surgery should really, truly require 11 years of medical school and residency, rather than a 2-year vocational training program for someone with high dexterity and good focus.)  These sorts of barriers to entry allow people who are currently employed in a field to extract value from people trying to get jobs in that field (and from the general population too, of course).  In any one sector this wouldn't hurt the whole economy too much, but if it happens everywhere at once, that could be the problem.

* True effective marginal tax rates on low-income families have gone up today compared to the 1960s, after all phasing-out benefits are taken into account, counting federal and state taxes, city sales taxes, and so on.  I've seen figures tossed around like 70% and worse, and this seems like the sort of thing that could easily trash reemployment.[2]
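As a purely hypothetical illustration (made-up rates, not actual tax law), here is how benefit phase-outs stack with ordinary taxes into a high effective marginal rate:

```python
# Hypothetical illustration of a true effective marginal tax rate: the
# fraction of an extra earned dollar lost to taxes plus withdrawn benefits.
def effective_marginal_rate(extra_earnings, tax_rate, benefit_phase_out_rate):
    taxes_paid = extra_earnings * tax_rate
    benefits_withdrawn = extra_earnings * benefit_phase_out_rate
    return (taxes_paid + benefits_withdrawn) / extra_earnings

# 25% combined taxes, plus benefits phasing out at 50 cents per extra dollar:
print(effective_marginal_rate(100, 0.25, 0.50))  # prints: 0.75
```

At a 75% effective rate, a worker keeps 25 cents of each additional dollar earned, which is the sort of incentive that could plausibly trash reemployment.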

* Perhaps companies are, for some reason, less willing to hire previously unskilled people and train them on the job.  Empirically, this seems more true today than in the 1950s.  If I were to guess at why, I would point to employees moving more from job to job, with fewer life-long jobs, which makes it less rewarding for employers to invest in training an employee; and also to college being more universal now than then.  Which means that employers might try to rely on colleges to train employees - a function colleges can't actually handle, because:

* The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons.  (Plus, we are really stunningly stupid about matching educational supply to labor demand.  How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so?  Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores?  The more so considering this is a central allocation question for the entire economy?  But I have no particular reason to believe this part has gotten worse since 1960.)

* The financial system is staring much more at the inside of its eyelids now than in the 1980s.  This could be making it harder for expanding businesses to get loans at terms they would find acceptable, or making it harder for expanding businesses to access capital markets at acceptable terms, or interfering with central banks' attempts to regularize nominal demand, or acting as a brake on the system in some other fashion.

* Hiring a new employee now exposes an employer to more downside risk of being sued, or of being unable to fire the new employee if hiring them turns out to be a bad decision.  Human beings, including employers, are very averse to downside risk, so this could plausibly be a major obstacle to reemployment.  Such risks plausibly make the decision to hire someone hedonically unpleasant for the person who has to make it, and that could easily have changed between now and 1950.  (If your sympathies are with employees rather than employers, please consider that, nonetheless, if you pass any protective measure that makes the decision to hire somebody less pleasant for the hirer, fewer people will be hired, and this is not good for people seeking employment.  Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.)

* Tyler Cowen's Zero Marginal Product Workers hypothesis:  Anyone long-term-unemployed has now been swept into a group of people who have less than zero average marginal productivity, due to some of the people in this pool being negative-marginal-product workers who will destroy value, and employers not being able to tell the difference.  We need some new factor to explain why this wasn't true in 1950, and obvious candidates would be (1) legal liability making past-employer references unreliable and (2) expanded use of college credentialing sweeping up more of the positive-product workers so that the average product of the uncredentialed workers drops.

* There's a thesis (whose most notable proponent I know of is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying.  If you can build a feature-app and flip it to Google for $20M in an acqui-hire, why bother trying to invent the next Model T?  Maybe working on hard technology problems using math and science, until you can build a liquid fluoride thorium reactor, has been made to seem less attractive to brilliant young kids than flipping a $20M company to Google or becoming a hedge-fund trader (and more so today than in 1950).[3]

* Closely related to the above:  Maybe change in atoms, as opposed to bits, has been regulated out of existence.  The expected biotech revolution never happened because the FDA is just too much of a roadblock (it adds a great deal of expense, significant risk, and most of all, delays the returns beyond venture capital time horizons).  It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!  If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.  Patents are also an increasing drag on innovation in its most fragile stages, and may shortly bring an end to the remaining life in software startups as well.  (But note that this thesis, like the one above, seems hard-pressed to account for jobs not coming back after the Great Recession.  Conventional macroeconomics does not say that re-employment after a recession requires sectoral shifts or new kinds of technology jobs.  The above is more of a Great Stagnation thesis of "What happened to productivity growth?" than a Great Recession thesis of "Why aren't the jobs coming back?"[4])

Q.  Some of those ideas sounded more plausible than others, I have to say.

A.  Well, they can't all be large effects simultaneously.  There's only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose the others were important; and perhaps what's Really Going On is something else entirely.  Furthermore, the 'real cause' isn't always the factor you want to fix.  If the European Union's unemployment problems were 'originally caused' by labor market regulation, there's no rule saying that those problems couldn't be mostly fixed by instituting an NGDP level targeting regime.  This might or might not work, but the point is that there's no law saying that to fix a problem you have to fix its original historical cause.

Q.  Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs - a marginal job automated away by advances in AI algorithms won't come back.

A.  Then it's odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than AI has.  The buyer ordering books over the Internet, the spreadsheet replacing the accountant - these processes do not strongly rely on the sort of algorithms we would usually call 'AI' or 'machine learning' or 'robotics'.  The main role I can think of for actual AI algorithms is computer vision enabling more automation - and many manufacturing jobs were already automated by robotic arms even before robotic vision came along.  Most computer programming is not AI programming, and most automation is not AI-driven.  And on near-term scales, like changes over the last five years, trade shifts and financial shocks and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming.  (Automation is a weak economic force in any given year, but cumulative and directional over decades.  Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade.  Thus, even generalized automation via computer programming is an unlikely culprit for a sudden drop in employment like the one that occurred in the Great Recession.)

Q.  Okay, you've persuaded me that it's ridiculous to point to AI while talking about modern-day unemployment.  What about future unemployment?

A.  Like after the next ten years?  We might or might not see robot-driven cars, which would be genuinely based in improved AI algorithms, and would automate away another bite of human labor.  Even then, the total number of people driving cars for money would just be a small part of the total global economy; most humans are not paid to drive cars most of the time.  Also again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren't doing wrong in 1950.

Q.  How about timescales longer than ten years?  There was one class of laborers permanently unemployed by the automobile revolution, namely horses.  There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can't do better; horses' marginal labor productivity dropped below their cost of living.  Could that happen to humans too, if AI advanced far enough that it could do all the labor?

A.  If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way... then I defer to Robin Hanson's analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, "Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do."
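Hanson's complement-then-substitute pattern can be caricatured with a toy task model (my own sketch with invented numbers, not the model in Hanson's paper):

```python
# Toy sketch: output requires every one of `tasks_total` tasks (Leontief),
# and machines do the automated tasks essentially for free.  While any task
# is human-only, automation concentrates workers on fewer tasks and raises
# output per worker (complementarity).  Once machines can do everything,
# human labor must price itself against machine cost (substitution).
def human_wage(tasks_total, tasks_automated, workers, machine_cost):
    if tasks_automated < tasks_total:
        tasks_for_humans = tasks_total - tasks_automated
        return workers / tasks_for_humans  # output (and wage) per remaining task
    return machine_cost                    # wage pinned to falling machine cost

wages = [human_wage(10, a, workers=100, machine_cost=0.5) for a in range(11)]
# wages rise roughly 10.0, 11.1, 12.5, ..., 100.0 - then collapse to 0.5
```

The cartoon reproduces the abstract's qualitative claim: wages rise while complementary effects dominate, then fall to the machine's price once substitution takes over.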

Q.  Could we already be in this substitution regime -

A.  No, no, a dozen times no, for the dozen reasons already mentioned.  That sentence in Hanson's paper has nothing to do with what is going on right now.  The future cannot be a cause of the past.  Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.

Q.  But AI will inevitably become a problem later?

A.  Not necessarily.  We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply.  That scenario isn't the only possibility.

Q.  What other possibilities are there?

A.  Lots, since what Hanson is talking about is an unprecedented phenomenon extrapolated over future circumstances which have never been seen before, and all kinds of things could potentially go differently within that.  Hanson's paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that's hardly a sure bet.  Accurate prediction is hard, especially about the future, and I'm pretty sure Hanson would agree with that.

Q.  I see.  Yeah, when you put it that way, there are other possibilities.  Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn't get mass unemployment.

A.  The future would be more uncertain than that, even granting Kurzweil's hypotheses - it's not as simple as picking one futurist and assuming that their favorite assumptions lead to their favorite outcome.  You might get mass unemployment anyway, if humans with brain-computer interfaces are more expensive or less effective than pure automated systems.  With today's technology we could probably design robotic rigs to amplify a horse's muscle power (we're still working on that tech for humans), but it took around an extra century after the Model T to get to that point, and a plain old car is still much cheaper.

Q.  Bah, anyone can nod wisely and say "Uncertain, the future is."  Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong.  Do you think we will eventually get to the point where AI produces mass unemployment?

A.  My own guess is a moderately strong 'No', but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we've been discussing so far.  In particular I refer you to "Intelligence Explosion Microeconomics: Returns on cognitive reinvestment", a paper recently referenced on Scott Sumner's blog as relevant to this issue.

Q.  Hold on, let me read the abstract and... what the heck is this?

A.  It's an argument that you don't get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution and try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, and infer the corresponding curve of returns, then ask about a reinvestment scenario -

Q.  English.

A.  Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level.  This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'.  The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we're not talking about Moore's Law here).  Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there's time for moderately-advanced AIs to crowd humans out of the global economy.  (See also section 3.10 of the aforementioned paper.)  Widespread economic adoption of a technology comes with a delay factor that wouldn't slow down an AI rewriting its own source code.  This means we don't see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ threshold.  An explosion of AI self-improvement utterly derails that scenario, and sends us onto a completely different track which confronts us with wholly dissimilar questions.
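The difference between the trendline picture and the explosion picture comes down to the shape of the returns on cognitive reinvestment.  Here is a cartoon of that dependence (invented numbers and functional form, purely to illustrate the distinction, not a claim from the paper):

```python
# Cartoon of returns on cognitive reinvestment: each step, capability gains
# are proportional to capability ** exponent.  Sublinear returns give slow,
# steady growth; superlinear returns compound so the growth rate itself
# accelerates - the "FOOM" regime.
def reinvest(capability, exponent, steps):
    history = [capability]
    for _ in range(steps):
        capability += 0.1 * capability ** exponent
        history.append(capability)
    return history

steady = reinvest(1.0, 0.5, 20)   # diminishing returns: gentle growth
runaway = reinvest(1.0, 1.5, 20)  # compounding returns: runaway growth
```

Everything of interest is in which exponent reality hands you; the paper's argument is that the hominid-evolution data points toward the compounding regime.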

Q.  Okay.  What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now?  Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world's poor even poorer?  How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?

A.  I think you're asking an overly narrow question there.

Q.  How so?

A.  You might be thinking about 'intelligence' in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee.  Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bows and arrows.  And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture.  Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, both of which took place at a constant level of intelligence; human nature didn't change.  As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn't get from better medicine or interplanetary travel.

Q.  But what does happen to people who were already economically disadvantaged, who don't have investments in the stock market and who aren't sharing in the profits of the corporations that own these superintelligences?

A.  Um... we appear to be using substantially different background assumptions.  The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters.  The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill.  A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money.  It just moves atoms around into whatever molecular structures or large-scale structures it wants.

Q.  How would it get the energy to move those atoms, if not by buying electricity from existing power plants?  Solar power?

A.  Indeed, one popular speculation is that optimal use of a star system's resources is to disassemble local gas giants (Jupiter in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star's energy output.  This does not involve buying solar panels from human manufacturers, rather it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -

Q.  Yeah, I think I'm starting to get a picture of your background assumptions.  So let me expand the question.  If we grant that scenario rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?

A.  That depends on the exact initial design of the first AI which undergoes an intelligence explosion.  Imagine a vast space containing all possible mind designs.  Now imagine that humans - who all have a brain with a cerebellum, thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on - are one tiny little dot within this space of all possible minds.  Different kinds of AIs can be vastly more different from each other than you are from a chimpanzee.  What happens after AI depends on what kind of AI you build - the exact selected point in mind design space.  If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live happily ever afterward.  If you build the AI incorrectly... well, the AI is unlikely to end up with a specific hate for humans.  But such an AI won't attach a positive value to us either.  "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else."  The human species would end up disassembled for spare atoms, after which human unemployment would be zero.  In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can't get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs.  And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.

Q.  Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.

A.  I recommend Intelligence Explosion: Evidence and Import for an overview of the general issues and literature, Artificial Intelligence as a Positive and Negative Factor in Global Risk for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned Intelligence Explosion Microeconomics for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition.  The last in particular is an important open problem in economics, if you're a smart young economist reading this - although since the fate of the entire human species could well depend on the answer, you would be foolish to expect there to be as many papers published about it as about squirrel migration patterns.  Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics which may not actually exist, depending on the answer to the first question.  Oh, and Nick Bostrom at the Oxford Future of Humanity Institute is supposed to have a forthcoming book on the intelligence explosion; that book isn't out yet, so I can't link to it, but Bostrom personally and FHI generally have published some excellent academic papers already.

Q.  But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.

A.  Right.  From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment.  From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI.  Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce