Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
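The arithmetic of that parable can be written out directly - a toy calculation only, assuming labor stays fully employed and hot dogs are only consumed in buns:

```python
# Toy arithmetic for the hot-dog parable: labor is always fully employed,
# and output is limited to matched hot-dog-in-bun pairs.

def matched_pairs(total_labor, labor_per_hot_dog, labor_per_bun):
    # Producing one pair costs (labor_per_hot_dog + labor_per_bun) units,
    # so full employment yields total_labor / pair_cost matched pairs.
    return total_labor / (labor_per_hot_dog + labor_per_bun)

before = matched_pairs(30, labor_per_hot_dog=2, labor_per_bun=1)
after = matched_pairs(30, labor_per_hot_dog=1, labor_per_bun=1)

print(before)  # 10.0 hot dogs in buns
print(after)   # 15.0 hot dogs in buns, after hot-dog automation
```

The same 30 units of labor, reallocated, buy a higher standard of living - which is exactly the "lovely theory" in question.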

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory and in reality, unemployment is rising.

A.  Sure.  Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The naive picture - automation removes a job, so the economy has one job fewer - has not been the way the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should.  The idea that there's a fixed amount of work, which automation then destroys, is known in economics as the "lump of labour fallacy".

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

[Image: Baxter robot]

Q.  Maybe we've finally reached the point where there's no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.

A.  You talked about jobs going away in the Great Recession and then not coming back.  Well, the Great Recession wasn't produced by a sudden increase in productivity, it was produced by... I don't want to use fancy terms like "aggregate demand shock" so let's just call it problems in the financial system.  The point is, in previous recessions the jobs came back strongly once NGDP rose again.  (Nominal Gross Domestic Product - roughly the total amount of money being spent in face-value dollars.)  Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US).  If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions?  Something has gone wrong with the engine of reemployment.

Q.  And you don't think that what's gone wrong with the engine of reemployment is that it's easier to automate the lost jobs than to hire someone new?

A.  No.  That's something you could say just as easily about the 'lost' jobs from hand-weaving when mechanical looms came along.  Some new obstacle is preventing jobs lost in the 2008 recession from coming back.  Which may indeed mean that jobs eliminated by automation are also not coming back.  And new high school and college graduates entering the labor market - likewise usually a good thing for an economy - will just end up sad and unemployed.  But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too.  It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade - which have also been going on for centuries and have also previously been a hugely positive economic force.  But if something is generally wrong with reemployment, then it might be possible for increased trade with China to result in permanently lost jobs within the US, in direct contrast to the way it's worked over all previous economic history.  Just as with new college graduates ending up unemployed, something else must be going very wrong - something that wasn't going wrong in 1960 - for anything so unusual to happen!

Q.  What if what's changed is that we're out of new jobs to create?  What if we've already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?

A.  This does not square with our being unable to recover the jobs that existed before the Great Recession.  Or with lots of the world living in poverty.  If we imagine the situation being much more extreme than it actually is, there was a time when professionals usually had personal cooks and maids - as Agatha Christie said, "When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car." 

  Many people would hire personal cooks or maids if we could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated - the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing.  Or to be less extreme, there are lots of businesses that would take nearly-free employees at various occupations, if those employees could be hired literally at minimum wage and legal liability weren't an issue.  Right now we haven't run out of want or use for human labor, so how could "The End of Demand" be producing unemployment right now?  The fundamental fact that's driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing to have nothing better to do.  We do not literally have nothing better for unemployed workers to do.  Our civilization is not that advanced.  So we must be doing something wrong (which we weren't doing wrong in 1950).

Q.  So what is wrong with "reemployment", then?

A.  I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation.  In terms of developed countries that seem to be doing okay on reemployment, Australia hasn't had any drops in employment, and its monetary policy has kept nominal GDP growth on a much steadier keel - using its central bank to regularize the number of face-value Australian dollars being spent - a policy which an increasing number of influential econbloggers think the US, and even more so the EU, have been getting catastrophically wrong.  Though that's a long story.[1]  Germany saw unemployment drop from 11% to 5% between 2006 and 2012 after implementing a series of labor market reforms, though there were other things going on during that time.  (Germany has twice the number of robots per capita as the US, which probably isn't significant to their larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.)  Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could've changed between now and the 1950s that might make reemployment harder.  And though I'm not a leading econblogger, some other obvious-seeming thoughts that occur to me are:

* Many industries that would otherwise be accessible to relatively less skilled labor, have much higher barriers to entry now than in 1950.  Taxi medallions, governments saving us from the terror of unlicensed haircuts, fees and regulatory burdens associated with new businesses - all things that could've plausibly changed between now and the previous four centuries.  This doesn't apply only to unskilled labor, either; in 1900 it was a lot easier, legally speaking, to set up shop as a doctor.  (Yes, the average doctor was substantially worse back then.  But ask yourself whether some simple, repetitive medical surgery should really, truly require 11 years of medical school and residency, rather than a 2-year vocational training program for someone with high dexterity and good focus.)  These sorts of barriers to entry allow people who are currently employed in that field to extract value from people trying to get jobs in that field (and from the general population too, of course).  In any one sector this wouldn't hurt the whole economy too much, but if it happens everywhere at once, that could be the problem.

* True effective marginal tax rates on low-income families have gone up today compared to the 1960s, after all phasing-out benefits are taken into account, counting federal and state taxes, city sales taxes, and so on.  I've seen figures tossed around like 70% and worse, and this seems like the sort of thing that could easily trash reemployment.[2]

* Perhaps companies are, for some reason, less willing to hire previously unskilled people and train them on the job.  Empirically, this seems to be more true today than in the 1950s.  If I were to guess at why, I would say that employees moving more from job to job, with fewer life-long jobs, makes it less rewarding for employers to invest in training an employee; and also college is more universal now than then.  Which means that employers might try to rely on colleges to train employees, a function colleges can't actually handle because:

* The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons.  (Plus, we are really stunningly stupid about matching educational supply to labor demand.  How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so?  Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores?  The more so considering this is a central allocation question for the entire economy?  But I have no particular reason to believe this part has gotten worse since 1960.)

* The financial system is staring much more at the inside of its eyelids now than in the 1980s.  This could be making it harder for expanding businesses to get loans at terms they would find acceptable, or making it harder for expanding businesses to access capital markets at acceptable terms, or interfering with central banks' attempts to regularize nominal demand, or acting as a brake on the system in some other fashion.

* Hiring a new employee now exposes an employer to more downside risk of being sued, or risk of being unable to fire the new employee if it turns out to be a bad decision.  Human beings, including employers, are very averse to downside risk, so this could plausibly be a major obstacle to reemployment.  Such risks are a plausible major factor in making the decision to hire someone hedonically unpleasant for the person who has to make that decision, which could've changed between now and 1950.  (If your sympathies are with employees rather than employers, please consider that, nonetheless, if you pass any protective measure that makes the decision to hire somebody less pleasant for the hirer, fewer people will be hired and this is not good for people seeking employment.  Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.)

* Tyler Cowen's Zero Marginal Product Workers hypothesis:  Anyone long-term-unemployed has now been swept into a group of people who have less than zero average marginal productivity, due to some of the people in this pool being negative-marginal-product workers who will destroy value, and employers not being able to tell the difference.  We need some new factor to explain why this wasn't true in 1950, and obvious candidates would be (1) legal liability making past-employer references unreliable and (2) expanded use of college credentialing sweeping up more of the positive-product workers so that the average product of the uncredentialed workers drops.

* There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying.  If you can build a feature-app and flip it to Google for $20M in an acqui-hire, why bother trying to invent the next Model T?  Maybe working on hard technology problems using math and science until you can build a liquid fluoride thorium reactor, has been made to seem less attractive to brilliant young kids than flipping a $20M company to Google or becoming a hedge-fund trader (and this is truer today relative to 1950).[3]

* Closely related to the above:  Maybe change in atoms instead of bits has been regulated out of existence.  The expected biotech revolution never happened because the FDA is just too much of a roadblock (it adds a great deal of expense, significant risk, and most of all, delays the returns beyond venture capital time horizons).  It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!  If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.  Patents are also an increasing drag on innovation in its most fragile stages, and may shortly bring an end to the remaining life in software startups as well.  (But note that this thesis, like the one above, seems hard-pressed to account for jobs not coming back after the Great Recession.  Conventional macroeconomics does not say that re-employment after a recession requires sector shifts or new kinds of technology jobs.  The above is more of a Great Stagnation thesis of "What happened to productivity growth?" than a Great Recession thesis of "Why aren't the jobs coming back?"[4])
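To make the effective-marginal-tax-rate bullet above concrete, here is a sketch with invented numbers - real tax and benefit schedules are far messier, and these figures are purely illustrative:

```python
# Invented numbers only: how taxes plus benefit phase-outs stack into a
# high *effective* marginal rate on an extra dollar of low-income earnings.

def effective_marginal_rate(tax_rate, phaseout_rates):
    # Each phase-out claws back some cents of benefits per extra dollar
    # earned; the effective rate is the tax plus the sum of all claw-backs.
    return tax_rate + sum(phaseout_rates)

# Say 25% combined federal/state/city taxes, plus one benefit phasing out
# at 30 cents on the dollar and another at 20 cents on the dollar:
rate = effective_marginal_rate(tax_rate=0.25, phaseout_rates=[0.30, 0.20])
print(rate)  # 0.75 - the worker keeps 25 cents of each extra dollar earned
```

The point is that each individual tax or phase-out looks modest, but because they apply simultaneously to the same marginal dollar, they sum - which is how figures like 70% arise.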

Q.  Some of those ideas sounded more plausible than others, I have to say.

A.  Well, it's not like they could all be major factors simultaneously.  There's only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose that all the other factors were important; and perhaps what's Really Going On is something else entirely.  Furthermore, the 'real cause' isn't always the factor you want to fix.  If the European Union's unemployment problems were 'originally caused' by labor market regulation, there's no rule saying that those problems couldn't be mostly fixed by instituting an NGDP level targeting regime.  This might or might not work, but the point is that there's no law saying that to fix a problem you have to fix its original historical cause.

Q.  Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs - a marginal job automated away by advances in AI algorithms won't come back.

A.  Then it's odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than that.  The buyer ordering books over the Internet, the spreadsheet replacing the accountant - these processes do not rely strongly on the sort of algorithms that we would usually call 'AI' or 'machine learning' or 'robotics'.  The main role I can think of for actual AI algorithms is in computer vision enabling more automation.  And many manufacturing jobs were already automated by robotic arms even before robotic vision came along.  Most computer programming is not AI programming, and most automation is not AI-driven.  And then on near-term scales, like changes over the last five years, trade shifts and financial shocks and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming.  (Automation is a weak economic force in any given year, but cumulative and directional over decades.  Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade.  Thus, even generalized automation via computer programming is still an unlikely culprit for any sudden drop in employment as occurred in the Great Recession.)

Q.  Okay, you've persuaded me that it's ridiculous to point to AI while talking about modern-day unemployment.  What about future unemployment?

A.  Like after the next ten years?  We might or might not see robot-driven cars, which would be genuinely based in improved AI algorithms, and would automate away another bite of human labor.  Even then, the total number of people driving cars for money would just be a small part of the total global economy; most humans are not paid to drive cars most of the time.  Also again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren't doing wrong in 1950.

Q.  How about timescales longer than ten years?  There was one class of laborers permanently unemployed by the automobile revolution, namely horses.  There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can't do better; horses' marginal labor productivity dropped below their cost of living.  Could that happen to humans too, if AI advanced far enough that it could do all the labor?

A.  If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way... then I defer to Robin Hanson's analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, "Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do."

Q.  Could we already be in this substitution regime -

A.  No, no, a dozen times no, for the dozen reasons already mentioned.  That sentence in Hanson's paper has nothing to do with what is going on right now.  The future cannot be a cause of the past.  Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.

Q.  But AI will inevitably become a problem later?

A.  Not necessarily.  We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply.  That scenario isn't the only possibility.

Q.  What other possibilities are there?

A.  Lots, since what Hanson is talking about is an unprecedented phenomenon extrapolated over future circumstances which have never been seen before, and there are all kinds of things that could go differently within that.  Hanson's paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that's hardly a sure bet.  Accurate prediction is hard, especially about the future, and I'm pretty sure Hanson would agree with that.

Q.  I see.  Yeah, when you put it that way, there are other possibilities.  Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn't get mass unemployment.

A.  The future would be more uncertain than that, even granting Kurzweil's hypotheses - it's not as simple as picking one futurist and assuming that their favorite assumptions correspond to their favorite outcome.  You might get mass unemployment anyway if humans with brain-computer interfaces are more expensive or less effective than pure automated systems.  With today's technology we could design robotic rigs to amplify a horse's muscle power - maybe, we're still working on that tech for humans - but it took around an extra century after the Model T to get to that point, and a plain old car is much cheaper.

Q.  Bah, anyone can nod wisely and say "Uncertain, the future is."  Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong.  Do you think we will eventually get to the point where AI produces mass unemployment?

A.  My own guess is a moderately strong 'No', but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we've been discussing so far.  In particular I refer you to "Intelligence Explosion Microeconomics: Returns on cognitive reinvestment", a paper recently referenced on Scott Sumner's blog as relevant to this issue.

Q.  Hold on, let me read the abstract and... what the heck is this?

A.  It's an argument that you don't get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution and try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, and infer the corresponding curve of returns, then ask about a reinvestment scenario -

Q.  English.

A.  Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level.  This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'.  The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we're not talking about Moore's Law here).  Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there's time for moderately-advanced AIs to crowd humans out of the global economy.  (See also section 3.10 of the aforementioned paper.)  Widespread economic adoption of a technology comes with a delay factor that wouldn't slow down an AI rewriting its own source code.  This means we don't see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ threshold.  An explosion of AI self-improvement utterly derails that scenario, and sends us onto a completely different track which confronts us with wholly dissimilar questions.
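A cartoon version of that reinvestment question - purely illustrative, my own construction and not anything from the paper: let r be the new capability produced per unit of existing capability reinvested at each step.  Whether the compounding runs above or below replacement is the whole question.

```python
# Cartoon only: self-improvement as compounding returns on cognitive
# reinvestment. Nothing here is from the actual paper; r is a made-up knob.

def self_improvement(start, r, steps):
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + r))  # capability compounds each step
    return levels

fizzle = self_improvement(1.0, r=-0.5, steps=5)  # returns below replacement
foom = self_improvement(1.0, r=1.0, steps=5)     # returns above replacement

print(fizzle[-1])  # 0.03125 - improvements peter out
print(foom[-1])    # 32.0 - capability doubles every step
```

The interesting arguments are about which regime the historical evidence from hominid evolution points toward, not about the trivial compounding arithmetic itself.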

Q.  Okay.  What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now?  Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world's poor even poorer?  How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?

A.  I think you're asking an overly narrow question there.

Q.  How so?

A.  You might be thinking about 'intelligence' in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee.  Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bows and arrows.  And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture.  Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, which both took place at a constant level of intelligence; human nature didn't change.  As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn't get from better medicine or interplanetary travel.

Q.  But what does happen to people who were already economically disadvantaged, who don't have investments in the stock market and who aren't sharing in the profits of the corporations that own these superintelligences?

A.  Um... we appear to be using substantially different background assumptions.  The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters.  The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill.  A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money.  It just moves atoms around into whatever molecular structures or large-scale structures it wants.

Q.  How would it get the energy to move those atoms, if not by buying electricity from existing power plants?  Solar power?

A.  Indeed, one popular speculation is that optimal use of a star system's resources is to disassemble local gas giants (Jupiter in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star's energy output.  This does not involve buying solar panels from human manufacturers; rather, it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -

Q.  Yeah, I think I'm starting to get a picture of your background assumptions.  So let me expand the question.  If we grant that scenario rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?

A.  That depends on the exact initial design of the first AI which undergoes an intelligence explosion.  Imagine a vast space containing all possible mind designs.  Now imagine that humans, who all have a brain with a cerebellum, thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on, are one tiny little dot within this space of all possible minds.  Different kinds of AIs can be vastly more different from each other than you are different from a chimpanzee.  What happens after AI, depends on what kind of AI you build - the exact selected point in mind design space.  If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live happily ever afterward.  If you build the AI incorrectly... well, the AI is unlikely to end up with a specific hate for humans.  But such an AI won't attach a positive value to us either.  "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else."  The human species would end up disassembled for spare atoms, after which human unemployment would be zero.  In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can't get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs.  And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.

Q.  Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.

A.  I recommend Intelligence Explosion: Evidence and Import for an overview of the general issues and literature, Artificial Intelligence as a Positive and Negative Factor in Global Risk for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned Intelligence Explosion Microeconomics for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition.  The last in particular is an important open problem in economics if you're a smart young economist reading this, although since the fate of the entire human species could well depend on the answer, you would be foolish to expect there'd be as many papers published about that as squirrel migration patterns.  Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics which may not actually exist depending on the answer to the first question.  Oh, and Nick Bostrom at the Oxford Future of Humanity Institute is supposed to have a forthcoming book on the intelligence explosion; that book isn't out yet so I can't link to it, but Bostrom personally and FHI generally have published some excellent academic papers already.

Q.  But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.

A.  Right.  From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment.  From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI.  Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce a stream of employable labor to the benefit of all, might perversely operate to increase unemployment if the broken reemployment engine is not fixed.

Q.  And with respect to future AI... what is it you think, exactly?

A.  I think that with respect to moderately more advanced AI, we probably won't see intrinsic unavoidable mass unemployment in the economic world as we know it.  If re-employment stays broken and new college graduates continue to have trouble finding jobs, then there are plausible stories where future AI advances far enough (but not too far) to be a significant part of what's freeing up new employable labor which bizarrely cannot be employed.  I wouldn't consider this my main-line, average-case guess; I wouldn't expect to see it in the next 15 years or as the result of just robotic cars; and if it did happen, I wouldn't call AI the 'problem' while central banks still hadn't adopted NGDP level targeting.  And then with respect to very advanced AI, the sort that might be produced by AI self-improving and going FOOM, asking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth.  There would indeed be effects, but you'd be missing the point.

Q.  Thanks for clearing that up.

A.  No problem.

ADDED 8/30/13:  Tyler Cowen's reply to this was one I hadn't listed:

Think of the machines of the industrial revolution as getting underway sometime in the 1770s or 1780s.  The big wage gains for British workers don’t really come until the 1840s.  Depending on your exact starting point, that is over fifty years of labor market problems from automation.

See here for the rest of Tyler's reply.

Taken at face value this might suggest that if we wait 50 years everything will be all right.  Kevin Drum replies that in 50 years there might be no human jobs left, which is possible but wouldn't be an effect we've seen already, rather a prediction of novel things yet to come.

Though Tyler also says, "A second point is that now we have a much more extensive network of government benefits and also regulations which increase the fixed cost of hiring labor" and this of course was already on my list of things that could be trashing modern reemployment unlike-in-the-1840s.

'Brett' in MR's comments section also counter-claims:

The spread of steam-powered machinery and industrialization from textiles/mining/steel to all manner of British industries didn’t really get going until the 1830s and 1840s. Before that, it was mostly piece-meal, with some areas picking up the technology faster than others, while the overall economy didn’t change that drastically (hence the minimal changes in overall wages).

[1]  The core idea in market monetarism is very roughly something like this:  A central bank can control the total amount of money and thereby control any single economic variable measured in money, i.e., control one nominal variable.  A central bank can't directly control how many people are employed, because that's a real variable.  You could, however, try to control Nominal Gross Domestic Income (NGDI) or the total amount that people have available to spend (as measured in your currency).  If the central bank commits to an NGDI level target then any shortfalls are made up the next year - if your NGDI growth target is 5% and you only get 4% in one year then you try for 6% the year after that.  NGDI level targeting would mean that all the companies would know that, collectively, all the customers in the country would have 5% more money (measured in dollars) to spend in the next year than the previous year.  This is usually called "NGDP level targeting" for historical reasons (NGDP is the other side of the equation, what the earned dollars are being spent on) but the most advanced modern form of the idea is probably "Level-targeting a market forecast of per-capita NGDI".  Why this is the best nominal variable for central banks to control is a longer story and for that you'll have to read up on market monetarism.  I will note that if you were worried about hyperinflation back when the Federal Reserve started dropping US interest rates to almost zero and buying government bonds by printing money... well, you really should note that (a) most economists said this wouldn't happen, (b) the market spreads on inflation-protected Treasuries said that the market was anticipating very low inflation, and that (c) we then actually got inflation below the Fed's 2% target.  You can argue with economists.  You can even argue with the market forecast, though in this case you ought to bet money on your beliefs.  
But when your fears of hyperinflation are disagreed with by economists, the market forecast and observed reality, it's time to give up on the theory that generated the false prediction.  In this case, market monetarists would have told you not to expect hyperinflation because NGDP/NGDI was collapsing and this constituted (overly) tight money regardless of what interest rates or the monetary base looked like.
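The catch-up arithmetic that distinguishes level targeting from growth-rate targeting (which forgives shortfalls rather than making them up) can be sketched in a few lines. The 5%-target, 4%-realized numbers are the illustrative figures from above; the function name and interface are mine, not anything from the market monetarist literature:

```python
def level_target_growth(target_growth, actual_levels):
    """Growth rate needed next year to return to the original
    target *path* (level targeting), rather than merely resuming
    target-rate growth from wherever the level ended up."""
    base = actual_levels[0]
    years_elapsed = len(actual_levels) - 1
    # Where the level should be next year on the original path:
    target_next = base * (1 + target_growth) ** (years_elapsed + 1)
    return target_next / actual_levels[-1] - 1

# Illustration: 5% NGDI target, but only 4% realized growth in year one.
needed = level_target_growth(0.05, [100.0, 104.0])
print(f"{needed:.2%}")  # ≈ 6.01% - the shortfall is made up, not forgiven
```

(The catch-up rate is slightly above 6% because of compounding; the "try for 6%" in the footnote is the round-number version of the same idea.)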

[2]  Call me a wacky utopian idealist, but I wonder if it might be genuinely politically feasible to reduce marginal taxes on the bottom 20%, if economists on both sides of the usual political divide got together behind the idea that income taxes (including payroll taxes) on the bottom 20% are (a) immoral and (b) do economic harm far out of proportion to government revenue generated.  This would also require some amount of decreased taxes on the next quintile in order to avoid high marginal tax rates, i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year then that was a 200% tax rate on that particular extra $1000 earned.  The lost tax revenue must be made up somewhere else.  In the current political environment this probably requires higher income taxes on higher wealth brackets rather than anything more creative.  But if we allow ourselves to discuss economic dreamworlds, then income taxes, corporate income taxes, and capital-gains taxes are all very inefficient compared to consumption taxes, land taxes, and basically anything but income and corporate taxes.  This is true even from the perspective of equality; a rich person who earns lots of money, but invests it all instead of spending it, is benefiting the economy rather than themselves and should not be taxed until they try to spend the money on a yacht, at which point you charge a consumption tax or luxury tax (even if that yacht is listed as a business expense, which should make no difference; consumption is not more moral when done by businesses instead of individuals).  If I were given unlimited powers to try to fix the unemployment thing, I'd be reforming the entire tax code from scratch to present the minimum possible obstacles to exchanging one's labor for money, and as a second priority minimize obstacles to compound reinvestment of wealth.  
But trying to change anything on this scale is probably not politically feasible relative to a simpler, more understandable crusade to "Stop taxing the bottom 20%, it harms our economy because they're customers of all those other companies and it's immoral because they get a raw enough deal already."
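The 200% figure above is just the jump in tax divided by the extra income earned. A minimal sketch, using the footnote's illustrative $20,000 cliff rather than any actual tax schedule:

```python
def marginal_rate(income_before, income_after, tax):
    """Effective marginal rate on the extra income earned,
    where tax(income) gives total tax owed at that income."""
    extra_income = income_after - income_before
    extra_tax = tax(income_after) - tax(income_before)
    return extra_tax / extra_income

# Hypothetical cliff: $0 tax below $20,000, $2,000 at or above it.
cliff_tax = lambda income: 2000 if income >= 20000 else 0
print(f"{marginal_rate(19000, 20000, cliff_tax):.0%}")  # 200%
```

The same function applied to a smooth phase-in instead of a cliff would give a rate below 100%, which is the whole point of avoiding discrete jumps.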

[3]  Two possible forces for significant technological change in the 21st century would be robotic cars and electric cars.  Imagine a city with an all-robotic all-electric car fleet, dispatching light cars with only the battery sizes needed for the journey, traveling at much higher speeds with no crash risk and much lower fuel costs... and lowering rents by greatly extending the effective area of a city, i.e., extending the physical distance you can live from the center of the action while still getting to work on time because your average speed is 75mph.  What comes to mind when you think of robotic cars?  Google's prototype robotic cars.  What comes to mind when you think of electric cars?  Tesla.  In both cases we're talking about ascended, post-exit Silicon Valley moguls trying to create industrial progress out of the goodness of their hearts, using money they earned from Internet startups.  Can you sustain a whole economy based on what Elon Musk and Larry Page decide are cool?

[4]   Currently the conversation among economists is more like "Why has total factor productivity growth slowed down in developed countries?" than "Is productivity growing so fast due to automation that we'll run out of jobs?"  Ask them the latter question and they will, with justice, give you very strange looks.  Productivity isn't growing at high rates, and if it were that ought to cause employment rather than unemployment.  This is why the Great Stagnation in productivity is one possible explanatory factor in unemployment, albeit (as mentioned) not a very good explanation for why we can't get back the jobs lost in the Great Recession.  The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.


The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

Clearly computers are exactly the same, and ought to be expected to have the same effects, as steam engines. Just look at horses, they're doing fine.

Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US). If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions? Something has gone wrong with the engine of reemployment... But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too. ... even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation.

And ...

(Upvoted.) I've been reading Tyler and I read McAfee. So far, your comment here is the most impressive argument for this position I've seen anywhere, and so I don't feel bad about not addressing it earlier. I'm not sure you really address the central point either; why can't the disemployed people find new jobs like in the last four centuries, and why did unemployment drop in Germany once they fixed their labor market, and why hasn't employment dropped in Australia, etcetera? (And note that anything along the lines of 'regional boom' contradicts ZMP and completely outcompeted humans and other explanations which postulate unemployability, not 'unemployable unless regional boom'.) Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage? Maybe eventually AI will disemploy that kid but right now humans are still doing laundry! Again, the economy of 1920 seemed to do quite well handling disemployment pressures like this with reemployment, so what changed?

Quick question: To what extent are you playing Devil's Advocate above and to what extent do you actually think that the robotic disemployment thesis is correct, a primary cause of current unemployment, not solvable with NGDP level targeting, and unfixable due to some humans being too-much-outcompeted, rather than due to other environmental changes like the regulatory environment etcetera?


I've been reading Tyler and I read McAfee.

Cowen says some interesting things but I don't think he makes the best case for technological unemployment; not sure what you mean by McAfee - Brynjolfsson is the lead author on Race Against the Machine, not McAfee.

I'm not sure you really address the central point either; why can't the disemployed people find new jobs like in the last four centuries,

As my initial comment implies, I think the last century is qualitatively different automation than before: before, the machines began handling brute force things, replacing things which offered only brute force & not intelligence like horses or watermills. But now they are slowly absorbing intelligence, and this seems to be the final province of humans. In Hanson's terms, I think machines switched from being complements to being substitutes in some sectors a while ago.

and why did unemployment drop in Germany once they fixed their labor market, and why hasn't employment dropped in Australia, etcetera?

I don't know nearly enough about Germany to say. They seem to be in a weird position in Europe, which might explain it. I'd guess that Australia seems to owe its success to avoiding a...

As my initial comment implies, I think the last century is qualitatively different automation than before: before, the machines began handling brute force things, replacing things which offered only brute force & not intelligence like horses or watermills. But now they are slowly absorbing intelligence, and this seems to be the final province of humans. In Hanson's terms, I think machines switched from being complements to being substitutes in some sectors a while ago.

The key Hansonian concept is that replacing humans at tasks is still complementation because different tasks are complementary to each other, a la hot dogs and buns; I should perhaps edit OP to make this clearer. It is not obvious to me that craftspeople disemployed by looms would have considered their work to be unskilled, but as that particular industry was automated, people moved to other jobs in other industries and complementarity continued to dominate. Again the question is, what's different now? Is it that no human on the planet does any labor any more which could be called unskilled, that nobody cooks or launders or drives? Obviously not. But there are many plausible changes in regulation, taxes, p...

I'd pay $5/hour for someone to drive me almost anywhere if availability was coordinated by Uber, but not taxi prices... This looks to me like a barrier-to-entry, regulatory-and-tax scenario, not "Darn it we're too rich and running out of things for labor to do!"

Federal minimum wage has been falling relative to productivity for decades. Also, Australia has a much higher minimum wage than the US but a lower unemployment rate. They also don't have at-will employment, implying that the risks of hiring are larger. So I'm not sure the regulations are actually the problem here (that said, I oppose many of them anyway on various grounds).

Eliezer Yudkowsky:
Sure, there can be more than one solution to a problem; Australia and Germany took different paths, one regularizing NGDP, one deregulating labor markets, but neither is suffering from unemployment despite robotics. Basic Income might also solve it. Getting rid of huge marginal tax rates on the poor might solve it. Or making it easier for someone to sign up with an online service that lets them offer me a ride somewhere for $5 might solve it. Since I don't think unemployment problems are due to literal lack of labor that anyone can be paid to do, there are potentially all sorts of things that might solve it.
Am I misreading this part? In the UK, tax rates are applied per-bracket: you pay nothing on the first £15k, then 20% on £15-30k (I forget the exact brackets), then 30% on £30-45k, and 40% on everything above that. So if you were earning £19k a year, for example, you would pay nothing on the first £15k, then 20% of the £4k that sits in the higher bracket. You don't suddenly pay loads of tax, since each rate only affects the income that sits in its bracket; if you earned £35k you would pay (0*15)+(0.2*15)+(0.3*5) = £4.5k in tax, and so you avoid any massive discrete leaps. I thought that's how all progressive taxation systems worked, as otherwise people could be better off refusing to take raises, and I'm almost certain that isn't the case anywhere in the world.
He's talking about effective marginal tax rates - the USA has a lot of welfare programs with hard cutoffs, which effectively mean more gross income can lower your net income until around $20k or so.
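The per-bracket arithmetic described above can be written out explicitly. This is a minimal sketch assuming the commenter's rounded £15k/£30k/£45k thresholds, not the actual UK schedule, and it ignores the welfare-cliff effects just mentioned:

```python
def bracket_tax(income, brackets):
    """Tax under marginal brackets: each rate applies only to the
    slice of income that falls inside its band."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Commenter's rounded UK-style brackets (upper bound, rate), in pounds.
uk = [(15000, 0.0), (30000, 0.20), (45000, 0.30), (float("inf"), 0.40)]
print(bracket_tax(35000, uk))  # 4500.0, i.e. (0*15)+(0.2*15)+(0.3*5) = £4.5k
print(bracket_tax(19000, uk))  # 800.0, i.e. 20% of the £4k above £15k
```

Because each rate applies only to its own slice, total tax is continuous in income - earning one more pound never costs more than a pound, which is exactly what a hard benefit cutoff breaks.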

Somewhat irrelevant, but:

$150 can't pay someone to trim your trees, at least not well

I think you need to find an enterprising teenager? I currently pay a local kid $100 a month to do the overwhelming majority of my (very elderly) parents' yardwork. He mows the lawn, does the edging, weeds the flower bed and trims back the bushes. He butchered things a few times at the start, but he has gotten quite competent, and I fear the day he realizes he is worth more than ~$10 an hour + a Christmas bonus + free lunch served by my mother when he is working.

Of course if you have trees > 20-30 feet tall you'll probably need a more expensive professional service.

How do you know this kid? Do you know the parents, and are you implicitly relying on that trust network?
He has been doing the work for about 3 years now, and was the third kid I tried to hire. The first two didn't work out. My parents know him decently well now, because my Mom usually insists he come in and have lunch with them during days he is working. None of us knew him when he started.
In 1920 if that kid was caught doing something like masturbating with the laundry, and he got fired, he might starve to death. Also, even barring that, the fact that upper class people could do almost anything they want to lower class people could lead to serious sanctions (for instance, all his family could be fired as well, or he could be beaten up) that serves to deter such behavior.
You'd think that more severe punishment would have a correspondingly greater deterrent effect, but that doesn't seem to be the case. What matters much more than the severity of the punishment is its likelihood. Sure, you might starve in the streets if you get caught jacking off in some high-born lady's nether-garments -- if you get caught. And, let's be honest: you're probably not going to get caught, and if you get caught, you're probably not going to be reported to your employer. In any case, all that talk of starvation is far-off, way in the future; the laundry is right here, and offers immediate gratification. IQ is pretty strongly correlated with the ability to delay gratification, and (though I don't have a citation for this) people seem to care about the future a lot less when they're horny.
Not treating starvation as important will lead to the 1920's person repeatedly doing such things until he gets unlucky, at which point he'll starve and he'll have selected himself out of existence. You can't just say that people will ignore deferred gratification under circumstances where ignoring deferred gratification will lead to not surviving--natural selection will ensure that the only ones remaining are the ones who don't ignore it. Furthermore, starvation isn't such a remote threat for people who are on the edge of starvation anyway.
What evidence would get you to revise your thought that natural selection would work in such short time frames? (OK, now what about updating your evidence about starvation levels in the 1920s? Until 1929, almost no one would have been starving; full employment was normal.)
I didn't use the word "evolution". If servants who do stupid things starve, the only surviving servants will be the ones who don't do stupid things. This does not involve evolution; the servants are not passing the information down to another generation. It does however involve natural selection. And there's no point in "updating evidence", unless you have some evidence that deals specifically with the case of lower class people who work as servants and routinely piss off their employers. Whether people in general starved is irrelevant.
OK, so I'm trying to understand what evidence you need to update your belief that the economy seeks equilibrium at a point where employment is high. I'll try to make a structural/theoretical argument against the economic theory in the meantime.

One microeconomic assumption is that the marginal value of their work is positive, which you claim is true. I'll point out that coordination costs are significant, and the dynamic of creating and maintaining trust systems for small tasks is very significant - structuring monitoring so that your cost is still negligible is hard. (In the 1920s, the social enforcement mechanisms for preventing defection in contracts were stronger - local work, local families, etc.)

As direct evidence, I'll also point out that your time to invest in employing others to do low-value tasks is limited, and I'm going to guess that despite having significant excess income compared to the US average, you employ very few people (even indirectly) in these ways, and your friends also do not do so. (Is that useful evidence?) Instead, there are tasks you simply choose to leave undone, or avoid needing. For instance, most well-to-do people I know buy non-iron shirts (for a large premium, $40-$50 extra) instead of having the laundromat, or other cheap labor, iron their shirts (99c/shirt to clean and iron them, where I am). The coordination issues around dropping off, picking up, and remembering the dry cleaning make it annoying, so we avoid it.

Another example: do you have a human assistant in India or China that you farm routine computer-based tasks out to? (Emails, editing, managing your schedule, researching random things you saw last month, etc.) Your time is limited, so why not? I'd assume it's trust, training time to get them up to speed on what you need, and ongoing costs of coordination driving down value (and whatever else you are thinking of).

(Post-post edit: I realize that you are looking at computer replacement of human jobs, but I think that st
I don't know, I'm not sure I would call those "unskilled", exactly. Indeed, these days most people achieve those for themselves, so the level of skill required to offer it as a premium, as it were, has only increased. I suspect there may be better examples out there, though.
Do you mean zero income tax, or zero all taxes, or something inbetween?

I mean that when somebody in the bottom quintile gives me a car ride to Berkeley for $5, nothing else happens to them. They don't pay Social Security on the $5. They don't have their health benefits phased out. They don't have to fill out a form. They just have an additional $5.

I know this is a completely radical concept.

Roughly half of Americans don't owe anything to the IRS each year. Pre-recession I believe this figure was about 40%. They of course pay other taxes, such as payroll (social security, medicare, which most people consider taxes), state sales tax, property taxes, etc. It'd be nice if they at least didn't have to file tax returns. http://www.cbpp.org/cms/?fa=view&id=3505

The problem isn't just all those other taxes but phasing-out of benefits - this is what leads to the calculations and observations by which somebody making $25,000/year isn't much better off than someone getting $8,000/year.

ADDED: Also, any paperwork can easily be an extreme barrier to that IQ 70 kid that Gwern was talking about.

It's an extreme barrier (in the sense of an ugh-field) even for smart would-be employers.

I'm kind of worried that 20 people upvoted the claim that any paperwork is an extreme barrier to smart employers - presumably people like themselves? What kind of opportunities have you all been passing up for want of filling out a form? And what opportunities are there to eliminate or streamline such paperwork (e.g., TurboTax)?
I'm not very well informed on this topic, but isn't something like that always going to be the case in a society with a safety net? e.g., if we make sure everyone has at least $25k to live on, anyone making $8k a year isn't going to be any worse off than someone making $25k. Of course I'm not sure how well America's arcane maze of benefits, tax deductions and whatnot fit into this simple abstraction.

Safety net should be a slope, not a cliff. Earning your first dollar shouldn't mean you get $1 less in benefits - there's actually a good argument for subsidizing the first $X of income - which is what the EITC is. Basically negative income tax.

You mean about half (actually 46%) of all American households did not pay any income tax (which is different from "not owing anything to the IRS") in 2011. 20% of all Americans don't pay income tax by virtue of being too young to work.
I thought they wouldn't need to file taxes, but I just completed a "tax assistant" wizard at the IRS website for a single, non-retirement-benefit-receiving individual with $20k in gross income ... and I was told they'd have to file a return.

Regarding the drop of unemployment in Germany, I've heard it claimed that it is mainly due to changing the way the unemployment statistics are done, e.g. people who are in temporary, 1€/h jobs and still receiving benefits are counted as employed. If this point is still important, I can look for more details and translate.

EDIT: Some details are here:

It is possible to earn income from a job and receive Arbeitslosengeld II benefits at the same time. [...] There are criticisms that this defies competition and leads to a downward spiral in wages and the loss of full-time jobs. [...]

The Hartz IV reforms continue to attract criticism in Germany, despite a considerable reduction in short and long term unemployment. This reduction has led to some claims of success for the Hartz reforms. Others say the actual unemployment figures are not comparable because many people work part-time or are not included in the statistics for other reasons, such as the number of children that live in Hartz IV households, which has risen to record numbers.

To what extent are you playing Devil's Advocate above and to what extent do you actually think that the robotic disemployment thesis is correct, a primary cause of current unemployment, not solvable with NGDP level targeting, and unfixable due to some humans being too-much-outcompeted, rather than due to other environmental changes like the regulatory environment etcetera?

Gwern on neoluddism: http://www.gwern.net/Mistakes#neo-luddism

Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage?

In addition to gwern's objections, what if his RCA price-point turns out to be, say, 50c an hour? The utility curve is not smooth. Past a point, a starvation wage is still a starvation wage. Even in a hypothetical world where there were zero welfare and no opportunities for crime, he'd be better off spending the time looking for low-probability alternatives than settling on spending 40 hours a week working for sure starvation.

Eliezer Yudkowsky:
An awful lot of people on this Earth would be very glad of 50c/hour.

Yes, but location isn't fungible, and not all jobs are telecommutable. A 50c/hour wage in the Bay Area is a death sentence without some supplemental source, even if someone in the Congo might live like a king on it.

This reminds me of place premium, an interesting concept: someone doing the same job in one country can earn more than in another. Though we are talking about a kid who can't even get a job in the first place, the concept still applies.

For example, suppose a homogeneous region - a country, city, or even suburb - has automated to such a degree that menial jobs are few, has attracted the best people, and the best people to serve the best people. Such a region has 'place premium': the top creative jobs - programming, finance, design work, etc. - pay extremely well to entice the best. These people demand, via their wealth, the best service, and so entice those who are skilled, good-looking, or have whatever attributes the service requires - continuously filtering people.

I'll also argue that the US is a special case in that US dollar holders get a subsidy to living via the petrodollar/global reserve currency, paid for by any foreigners wanting to buy [relative to them] foreign products. This only increases the place premium of living in the US, and thus of earning a wage in USD.

For the IQ 70 kids, perhaps there ARE no jobs for them in the region they live in. They have been filtered out by better (in ...

"The solution is to move somewhere else, go opposite the flow of people moving to higher 'place premium' locations; the one they are in has been saturated by above average people." The problem, besides foreign countries wanting to get rid of their own low IQ pop., is in admitting to oneself that they should be in the out-going group rather than, say, assume one just needs more education.

Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage?

The left tail of the distribution of inventive, creative, bright people seems highly likely to be fatter than the right tail. You need to be genetically gifted enough, have had the right encouragement, and have lived in the right intellectual environment to go on to create the neat inventions and research that automation supposedly frees people up for. If the left tail is fatter, then rather than freeing people up for better jobs, automation frees them up to compete for a finite number of worse jobs.

Or, in other words, it seems to me like there's a non-trivial possibility that the people who were doing admin tasks are being displaced into doing laundry tasks instead. That what would have been being done by the 70 IQ kid is now being done by a 100 IQ adult.

The trucking industry alone employs ~3% of the entire American population. That's not trivial by any means.

I just thought I'd mention that driverless cars can be expected to have a lot of ripple effects. Parking lot attendants; traffic court clerks; insurance claim adjusters; auto body repairmen; the guy whose job it is to calibrate breathalyzers; meter maids; etc. All of these people could face a good deal of unemployment if driverless cars come in.

As far as your larger point goes, I think you make a good point. By looking at AI in a narrow way, Eliezer is giving short shrift to a lot of technological improvements which have the potential to cause unemployment. For example, if a business starts scanning documents and keeping them electronically, it will probably need fewer file clerks and mailroom guys. Does this count as AI? Perhaps and perhaps not, but when people assert that unemployment is due to advances in computers, they certainly are referring to these types of changes.

As far as unemployment itself goes, I also agree with you that even if the theoretical model is correct, there is still surely a lag in reemployment which has the potential to cause disruption. How quickly did the need for blacksmiths drop down to nearly zero? Probably pretty slowly and gently compared to what might be happening now. Perhaps a 50 year old blacksmith would have urged his son to find a different line of work but would have had enough business to see him through.

Gwern on neoluddism: http://www.gwern.net/Mistakes#neo-luddism

If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.

That's not a new thing, that sort of regulation actually happened!

Do you have a source for the claim that this act was due to industry lobbying as opposed to risk aversion?
I can see why you think I was making that implicit claim, though that wasn't quite the point I was trying to make. I don't know to what extent the regulation mentioned in the Wikipedia article I linked to was influenced by industry lobbying versus concern about other sorts of risks to infrastructure or public safety. I'm not sure whether the precise cause of the passage of such regulation is that relevant to the regulation's durability in the face of potential benefits from adoption of new technology. Maybe it is, but the precise example of "limit[ing cars] to the same speed as horses" in the original post seems to imply that was something that didn't happen, not just something that did happen for different reasons.

Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.

One example: raising the minimum wage makes lower-productivity workers permanently unemployable, because their work is not worth the price, so no one can afford to hire them any more.

When the government raises minimum wage, it effectively funds the development of automation, as businesses seek replacements for low-end labor. (Like Amazon buying that robotics company to build warehouse management robots.)

Heck, you could almost say that AI doesn't cause unemployment; the need for unemployment causes AI. When labor cost increases without a productivity gain, there has to be a productivity gain to make up for it, and the pain of the increase motivates businesses to actually look for alternatives to their current ways of doing something.

So every time the minimum wage goes up, companies will replace more and more of their former minimum wage workers with automation. Somehow, the politicians never catch on to this, or they know and don't care. It makes me want to scream every time I get a promotional email from some organization talking about how evil low wages are and how the minimum wage needs to be raised. Don't they know they are going to make jobs go away, basically forever?

This bit from Making it in America seems relevant: It only says that the unskilled worker may become unemployed if robots become cheaper and thus more economical, but of course, if the cost for employing the unskilled workers would go up, that would also make the robots a better investment.
Eliezer Yudkowsky
Note the extraordinariness of this statement, whose truth I don't much doubt.
I have strong doubts about its generality. I am sure there are industries where management is reluctant to make long-term investments. I will hazard a guess that in these industries there is a lot of uncertainty about the future -- future products, future demand, future technology, etc. However I am also sure that there are industries where long-term investment is simply how the industry works: computer chip fabs come to mind, or, say, aerospace factories. Beware the fallacy of Chesterton's Fence.

I have strong doubts about its generality.

It matches up with my experience, with the caveat that it is much more true for publicly held firms than privately held firms. I remember a project I was working on for a warehouse management software company; the advising professor commented something along the lines of "well, if they can show they make the money back in five years, then it's a win to invest," and we responded with "actually, the decision horizon for most of their clients is about a year or two." He was visibly shocked by the implied difference in time horizons.

The argument for this mostly comes down to the implicit discounting of promises. If the salesman claims it has an effect size that large, then very possibly it will actually pay off once you account for the total cost of installation and ownership. The cynical observation is that it has more to do with the quarterly cycle of businesses: investments need to pay for themselves rather quickly, or it may be your successor who reaps the benefits of your investments. Privately held firms have noticeably longer time horizons and make more of these long-term investments, and that appears to be a major reason they often perform better in the long run than publicly held businesses.

This sounds way too much like a cached thought to me. I'd like to see empirical data for that. In general, however, we're talking about optimal planning horizons for businesses. As soon as you formulate the problem this way it becomes obvious that the answer is "it depends". I don't think a useful generic answer is possible -- businesses, from an iPhone-case producer to, say, Intel, are too diverse for that. A related question (much studied, I think) is the impact of the agency problem on business management. It surely exists but I don't know whether it translates so straightforwardly into preferences for the short term and unjustified discounting of the long term.
Note that privately held companies includes both companies held by a family (which tend to be less well managed because of the frictional costs associated with replacing upper management) and companies held by private equity firms (which tend to be well-managed). The NBER paper gwern linked through Hanson is available here, and if that link doesn't work for you I can email you a pdf. A quote from it:
Thanks for the link, it works. The paper's interesting but will take a bit of time to dig through and I can already see some iffy things in there (e.g. using sales growth as the measure of investment opportunities available). But I'll hold off expressing opinions until I read through it...
Here's two links: http://www.overcomingbias.com/2012/02/econ-advice-confirmed.html and http://www.overcomingbias.com/2012/02/info-market-failure.html
So let's take a look at these links. From the first one: I don't think this supports the claim made. The second link points to a NBER article behind a paywall (for me). Looking at the abstract, however, it doesn't say anything about preferences for long-term vs short-term. The most relevant data point that I see is that private firms invest more (as % of assets) than public firms, but I'd want to see the details of the study before coming to a conclusion about what that means.
It certainly is consistent with the claim made, even if it is not as on-point as the second link I had been searching for, and so I included it.
A thought being cached is not evidence against it.
OK, I'll be more explicit :-) It sounds like memetic junk which on the surface looks plausible and has been circulating via the popular media for a long time though its empirical foundations are doubtful and it's usually formulated in too generic a fashion to be of any use. As I might have mentioned before, my prior with respect to popular economic wisdom is that it's false.
Do you mean because it seems short-sighted? I suspect there is a business reason for having such a heuristic, but I might be wrong.
This is understandable, and is a fairly common dynamic. It's a two-level principal/agent problem. The goal of investors is to maximize their long term investment. They do this by using the limited tools available, such as paying management bonuses based on profitability, so managers don't simply exploit their position to be lazy, or worse, make small amounts of money at the expense of the company. The personal goal of management isn't to maximize long term value; it is to maximize short term profits, as measured by stock prices. Long term investments are costly, and because it's impossible over a two-year time frame to tell the difference between prudent long term investment and simply increased expenses, stock prices drop when expenses go up for any reason.
Thanks! This is a good illustration of my point, i.e., that businesses generally operate at local optima that can be disrupted by minimum wage increases, causing them to seek automation that they didn't use before the local optimum was disturbed.
As gwern pointed out above, the theoretical effects of the minimum wage on unemployment have been quite hard to demonstrate statistically. People have studied the effects of minimum wage increases on unemployment rates, and the differences in employment between neighboring states with different minimum wage levels, and haven't been able to find statistically significant correlations between increases in the minimum wage and employment. This is something economists are quite split on now; in a recent survey, something like 40% of economists thought minimum wage increases raise unemployment, 40% thought they have no impact on unemployment, and 20% weren't sure. I will say that if the fairly low levels of minimum wage we have in the US have any effect on unemployment at all, it is most likely a pretty small effect, not nearly large enough to explain the larger trends in the economy.
That's not really a bad thing, so long as poor people are taken care of. There are people who are poor and spend X hours a week doing menial labor, there are people who are poor and don't have to work-or-starve, and there are people who are poor and live terrible lives because they cannot find work to earn the money they need. Automation is essentially a good thing so long as it moves people from the first category to the second, but not the third.
Employment is a function of being "worth the price", as you put it. But "worth the price" is not a fixed point; it is a function of demand. If only a handful of people want to buy your product, adding another person at $5/hr may not be worth the price. If everyone in the world were willing and able to buy your product, then you'd hire even at $50/hr if you needed to. Demand is a function of employment and wages. If wages go up, then demand goes up... which increases employment. Increasing the minimum wage has never been shown to destroy jobs.
But that's precisely what doesn't happen, particularly in a commodity business. When the minimum wage is raised, McDonalds has to either raise their price (which reduces demand), or find a way to do more with fewer workers. While it's true that for some products and services, demand can increase when the price is raised, it is not true of most products and services, and it is definitely not the case for products and services produced by minimum-wage workers. If you're talking minimum-wage workers, you're probably talking about basic goods and services, where demand is price driven.

This "wages go up, then demand goes up, which increases employment" concept only works if you leave out any specification of whose wages, whose demand, for which products, increasing whose employment. Not everybody is a minimum wage worker, so unless those workers buy more stuff in exact proportion to how many of them are employed at each company, it doesn't actually translate into revenues for the right companies not to go belly-up if they don't lay people off and find a way to make up the difference. In particular, the timing and proportionality of that increased demand is pretty important.

Note that if the people with raised minimum wages go out and buy stuff from companies that aren't using minimum wage workers, or if the companies using minimum wage workers raise prices to stay afloat, there is a net zero change in demand, except that now non-minimum-wage workers are worse off than they were before, in two ways. First, their positional value as premium workers has eroded, which affects demand for their services, and also their job satisfaction. (That is, the closer a worker's pay is to minimum wage, the less satisfied they are with their pay, even if the reason for the change is that the minimum wage was raised.) Second, they are worse off because of the prices that were raised, as long as they use at least some goods produced by minimum wage workers. In short,
Actually neither, according to standard economic theory. According to standard supply/demand ideas, price is picked in order to maximize profit. This profit-maximizing point is the same regardless of labor costs (or taxes or whatever). Similarly, a well run business has enough workers to meet demand. The first order effect of a minimum wage increase is a decrease in profit. Adjusting the price up will just result in even less profit. The mechanism that drives up unemployment is that marginal businesses that WERE making a profit cannot make a profit under the new minimum wage law. This drives them out of business and reduces employment. So prices don't go up, but marginal businesses fail. The idea that price is related to cost is one of the first notions disabused by standard economic theory. Optimal price is market driven.
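A toy sketch of this mechanism (all numbers invented, and labor treated as a fixed cost per period, which is the simplest reading of the claim): the profit-maximizing price stays put when the labor cost rises, but the best attainable profit can turn negative, at which point the marginal firm exits.

```python
# Toy model, invented numbers: labor enters as a fixed cost per
# period, demand is a made-up linear curve. Under these assumptions
# the profit-maximizing price does not depend on the labor cost,
# but the achievable profit does.

def profit(price, fixed_labor_cost):
    demand = max(0, 100 - price)          # toy linear demand curve
    return price * demand - fixed_labor_cost

def best_price(fixed_labor_cost):
    return max(range(101), key=lambda p: profit(p, fixed_labor_cost))

# Same optimal price whether labor costs 1000 or 2000 per period:
assert best_price(1000) == best_price(2000) == 50

print(profit(50, 1000))   # 1500: the firm stays in business
print(profit(50, 2600))   # -100: a marginal firm exits instead of repricing
```

If labor instead entered as a per-unit cost, the optimal price would shift, so the strength of the claim depends on how labor costs scale with output.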
Isn't this oversimplifying things? If McDonalds was previously making a 5% profit on its minimum wage workers, and the minimum wage increases by more than 5%, obviously they have to raise their price so they at least make a profit rather than a loss on each unit (of whatever they're selling). But, raising the price reduces demand so they'll probably have to reduce their production rate (number of units) by firing a few workers as well. Which I guess is in a way equivalent to marginal businesses failing.
That is one mechanism, sure. But you've done nothing to address the part where some businesses optimize productivity to remove positions, specifically in response to budget concerns brought about by wage increases.

Yes, and spherical cows are frictionless in a vacuum, or something like that. The real world is non-spherical, with air resistance and friction. In this context, the friction is that economists' continuous functional curves are averaging out huge amounts of discrete behavior and ignoring time lag. They also tend to ignore things like real world prices not being infinitely adjustable. Consumer goods are priced according to numerology, or less flippantly, according to consumer psychology. If you were to graph sales against price on a continuous curve, you would find all sorts of weird spikes, mostly where certain ending digits work better than others, but also where certain middle digits and starting digits are more popular as well.

This means that "adjusting the price up will just result in less profit" is not always true, because not every business has optimized its prices in the first place. It is a truism among consultants that the default recommendation to a small business in need of more income is to simply raise prices by 10-15% across the board, before looking for other ways to improve the bottom line. (Such as raising the prices of a few select products even further, e.g. by 100-400%.)

Don't the laws of economics say this condition shouldn't exist? Why were these prices not optimized? Because the business owner is not a spherical cow any more than the consumer is. Market conditions changed but they didn't notice or update their beliefs, for example. If business owners were economic automatons, they would of course have already adjusted their prices to the optimum, and continuously improved their productivity so as to have already gotten rid of any people they could have gotten rid of. In practice, however, humans only pay attention to a few
Sure, but those are secondary effects, and the ability to do this depends on the specifics of the market. As soon as you move away from econ 101, specifics matter and we can't really say what will happen with any certainty. Regardless, for your specific example of McDonalds there is an obvious third option (beyond making do with fewer workers or raising prices) that you were neglecting: making less profit. McDonalds, in particular, is fairly well run and has decently evidence-based price points and labor guidelines. Would you really expect McDonalds has failed to optimize labor and productivity to the extent that a minimum wage hike won't primarily come out of franchise and corporate profits?

What happens in reality with a minimum wage increase will depend on the mix of businesses in that area and how well they are run. If a small town's local employers are a Walmart logistics center and an Amazon warehouse, increasing the minimum wage is unlikely to result in much reduced employment; both companies are very well run and focused on both productivity and pricing. A state-wide minimum wage increase may even result in MORE employment from increased Walmart/Amazon demand. If a local area's primary employment is poorly run local restaurants, companies may well make do with fewer workers, and employment will drop. And when a consultant works with big employers (say, Walmart), you don't recommend what they are already doing (relentlessly optimizing prices). Many (most in many areas) minimum wage employees work for large chains (which are reasonably well managed), so it's not obvious which effect will be larger, and it probably varies area to area.

I hope we can agree that raising the minimum wage will have some mix of employment effects and (essentially) wealth transfer to low income workers. I hope we can agree that in the real world, this mix is going to be very much dependent on specific economic factors.
Reality disagrees. The higher the minimum wage in a given country, the higher the labor force participation; i.e., the percentage of the total population working goes up as the sums offered for their labor increase. The effect you describe may be real, but if so it is extremely consistently swamped by the basic law of supply and demand. Offering more money for labor causes more people to work.
I would like to see some data, along with arguments for causation and the direction of the causality arrow.
Forcing an employer to pay more money per worker doesn't magically mean the employer can employ more workers, it simply means he has to pay more for however many workers he would have hired anyway. Furthermore, as I mentioned here, having to pay more for each worker is going to encourage the employer to look for ways to reduce his need for labor.
Which supply, and which demand, are we talking about? There is the demand of workers for work, the demand of employers to have work done, and the demand of consumers for stuff. There is the supply of work from employers, of man-hours from workers, and of stuff being produced. Most people occupy at least two of these roles and sometimes all three.
I could just as easily reverse that argument, increasing the price of labor causes fewer people to get hired.
Abstract argument here is useless. Data is needed. I have been looking for actual academic work on this, but most of what I can find is either behind paywalls or such trash that it can only be explained as work-for-hire for right-wing ideologues: comparing minimum wage increases with teenage employment over time and such, which has to be active deception / malice, since increasing educational attainment over time is the actual cause of variation in that variable, and there is no way any economist could fail to see that. So I am going to have to do my own work here.

Commitment: going to go dig through OECD databases to find median, minimum and average pay, and graph against labor force participation. At least two data sets: one international comparison, and at least one time series for a single nation.

Prediction: all correlations will be obviously significant, and positive. I expect median to have the largest effect. Secondary prediction: if I am wrong and this is a weak effect, the international comparison ought to be a shotgun plot. If so, I will include multiple time series. This should not take more than a day, so check back Monday.
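The proposed check can be sketched in a few lines. The (minimum wage, labor force participation) pairs below are invented placeholders standing in for real OECD figures, and a positive correlation here would still face the causal caveats raised elsewhere in the thread.

```python
# Sketch of the proposed test: correlate minimum wage against labor
# force participation across countries. All figures are made up.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical countries: (minimum wage USD/hr, participation %)
countries = [(7.25, 63.0), (9.50, 65.5), (11.00, 64.0), (12.50, 66.5)]
wages = [w for w, _ in countries]
participation = [p for _, p in countries]

print(round(pearson_r(wages, participation), 2))
```

A shotgun plot would show up here as a correlation near zero; a real analysis would also want significance tests, not just the point estimate.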
Well, as pointed out here, your abstract argument doesn't hold up to looking at the details. I assume you mean that the reason for the drop in teenage employment is that they're spending more time in school and/or studying. This is not at all obvious; after all, most teenagers have been attending high school for >50 years. Traditionally they'd take part time jobs in addition to attending school.

The claim was specifically about the minimum wage. This is qualitatively different from median and average pay: the minimum wage is determined by law, whereas median and average pay are determined intrinsically by the economy. Also, how are you planning to compare wages in different countries?

I also expect median wage to be positively correlated with employment, and suspect you have the causation reversed there. The better the general economy is doing, the more demand for labor there will be, and the higher the median wage will get bid up.

Eliezer, what was your motivation for thinking about this topic and writing this post? Is there a strategic relevance to MIRI or the typical LW reader? Are journalists or other people frequently asking you to comment on AI and unemployment (in which case, why is it titled "Anti-FAQ")? Is it just intellectual curiosity from your interests in AI and economics?

Partially dispel the view of MIRI wherein we're allegedly supposed to pontificate on something called 'AI risk' and look gravely concerned about it. Lure young economists into looking at the intelligence explosion microeconomics problem.

Wei Dai
Ok, thanks. I wanted to make sure that I wasn't missing something important by not having paid attention to this topic previously, and it sounds like I can still safely ignore it. :)
What is the "intelligence explosion microeconomics problem"?
Eliezer Yudkowsky
This thingy: http://intelligence.org/files/IEM.pdf
Ah. Goes off to read

Both Q and A seem to be treating unemployment as intrinsically bad, which is a case of lost purposes, a confusion between terminal and instrumental goals.


It's not a confusion by the technological unemployment people, at least: most of them seem to come to conclusions like 'this is irreversible and reversing it is undesirable anyway, so what we need to do is de-link employment from being able to survive, using something like a Basic Income'.

Involuntary unemployment is bad. Not having to work is good.
I'd argue that unemployment fundamentally is a good thing. In past times, children - even quite young children - had to work, as did the elderly. It is one of the achievements of modern technology that these people don't have to work anymore and can instead grow up happily and go to school or retire and live the rest of their lives without having to work, respectively.
These aren't mutually exclusive: one hedonic function that work performs is giving a person a sense of place, purpose, and contribution. Granted, most currently available forms of work don't offer much of this, but at least they provide something one can use as a prop in rationalizing that one is doing something meaningful in providing for one's family or whatever. (Also granted, "work" does not necessarily require "job", and IIUC it's only been in the 20th century that most people's work is as an employee of someone else.)
Unemployment is bad if there are things that should be done, but are not done. Does this description fit our society? I think it does, because I can imagine a few things that should be done... for example cleaning the streets. When we have enough cheap IQ-100 machines able to do all the work of IQ-100 people, I will not consider unemployment of people under IQ 100 a bad thing. It will still be bad if an IQ-200 person remains unemployed while the cure for cancer is still not found and people are dying. Again, when all IQ-200 work is done, I will be okay with that person being unemployed, too.
Provided that doesn't happen before some measure to allow such people to still make a living is implemented.
You mean "to still live", since presumably the making of it will be the machines' job.
Because of the complexity of human psychology, unemployment probably is intrinsically bad, in the sense that it's a terminal goal to be 'employed'. I can imagine being 'self-employed' were I provided with a minimum (and satisfactory) income without needing to be officially employed, but large numbers of people already doing this don't seem to be satisfied or happy doing so.

People who think that automation is currently increasing unemployment don't generally just talk about jobs lost during the Great Recession. They see an overall trend of reduction in employment and wages since at least 2000.

You're absolutely right that the recession was caused by a financial shock. The thing is, a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operation more efficiently with fewer workers, and that happens in every recession. The difference might be that this time, it is easier than ever for employers to figure out how to do more with fewer workers (because of the internet, and automation, and computers, etc.), and so even when demand starts to come back up as GDP grows again, they apparently still don't need to hire many workers.

The economists making the automation argument aren't saying that automation caused the great recession or the loss of jobs that happened then; they tend to think that it's a long ongoing trend that's been going on for quite a while, that it was partly hidden for a few years by the housing bubble, but that the great recession has accelerat...

They see an overall trend of reduction in employment and wages since at least 2000.

And also wage stagnation in contrast to continuing productivity gains since the 1970s.

Yes, that's very true. The "productivity gains vs. wages" graph seems fairly convincing to me that something different is happening now.
Eliezer Yudkowsky
How could that different thing be automation?

The explanation that the people like Erik Brynjolfsson make for why the gap between productivity and wages is growing larger is that, as it becomes easier to automate more and more parts of production, the relative importance of capital (money to invest in automation) grows, while the relative importance of labor declines. So as automation advances, more of the profit goes to those with the capital to invest in automation while less of it goes to the worker.

Paul Krugman wrote an article about 6 months ago discussing in economic terms how it can be possible for certain types of technological advance to benefit capital at the expense of workers, it was pretty interesting, let me find it.


Eliezer Yudkowsky
Again, why does this happen now and not during the last 300% of all jobs which were automated away?

A possible explanation: You can't have an IQ-70 person doing work that needs IQ 130, but you can have it the other way round.

So maybe in the past many people were too smart for their jobs (because most things that needed to be done were stupid), and when those jobs were automated away, the smart people moved to do smarter things. This continued for some time... until all the smart people left the stupid jobs. Now when yet more stupid jobs are automated away, the remaining stupid people have nowhere to go.

In a story format:

There was a farmer with three sons -- one was smart, one was average, one was stupid. At the beginning all three sons were needed to work on the farm, otherwise there would not be enough food for them to survive.

Then the farmer bought a machine, so only two sons were needed at the farm. The smartest son left the farm and became a scientist.

Then the farmer bought another machine, so only one son was needed at the farm. The average son left the farm and became a clerk.

Then the farmer bought a third machine, so no sons were needed at the farm. The stupid son left the farm and became... unemployed, because he was too stupid to do anything other than farm work.

(Why did this happen now, and not during the previous years when the former sons were leaving?)

Also, maybe eventually the farmer finds another machine that does even better work, except he himself isn't smart enough to run it. But his neighbor is, and eventually earns enough to buy the farmer's land. Soon, instead of hundreds of farmers with their own machines, you have a dozen.
Taking an analogy too literally, I think. :-P
I'm not sure if the point I was making, such as it was, reduces down to the same "This leads to less people needed per machine, and the replaced people can't easily go elsewhere," or if there is a fundamental difference with the farmer not being able to pass on his land & occupation and no longer having a stake in the process.
For a short while, but within a few hours at most I guess the IQ-130 person would get bored like hell and burn out.
So? Imagine yourself 500 years ago, born in a peasant family. You work on the farm; you are tired and the work is boring. What will you do about that? You don't have many choices. You can find yourself a cheap hobby, drink a lot of alcohol, try a career in crime, or join an army and probably die. Does the fact that you are bored like hell change the economy? No. Even if a way out exists, people are not automatically strategic, so many will not find it. They may also stay in their place because of their social world, or because they are somehow convinced it is their duty, etc.

I don't think it's the number of jobs being automated away that matters, but the rate; unemployment becomes a problem when automation outpaces reemployment. Better or worse economic policies can move the rate of reemployment up or down, but as the rate of automation increases, the quality of governance required to make par rises with it.

Yeah, that's a very good question. It could depend on the type of automation we're talking about; that is, there may be a difference between technologies that make workers more efficient and technologies that actually make workers less necessary.

An industrial-revolution era textile factory was a far more efficient way to produce cloth and then clothing than people spinning and sewing by hand, but it was still very labor intensive. In the cities where the factories sprang up, it actually created a huge demand for labor (especially since the few countries at the time that had them could then export to the rest of the world), and that huge demand for labor then drove up wages in those specific cities; this was the period when there was such great demand for jobs in cities that vast numbers of people moved to them. This is also what's going on in places that are adopting industrial-revolution era technologies of labor-intensive mass production now, like China and India: a massive urbanization movement away from the countryside and to the better-paying factory jobs.

The kind of automation we're doing in the first world now, though, doesn't seem to create that same local demand for significant amounts of labor that pulled urban wages up in the industrial revolution. The car companies are producing more cars than ever, but they just don't need nearly as many workers to do so as they used to. At the extreme, you get "lights-out factories" that can produce things without needing workers at all; there is a robot-producing factory in Japan now that is almost totally automated and can literally run for weeks producing industrial robots without human intervention. That kind of automation doesn't produce local demand for labor anywhere; it just lowers demand for workers across the board, while increasing the pay-off from capital investments into automation.
Whoever remains after you fire all those made redundant has been made more efficient.
Perhaps. Not necessarily, though. If you originally have job 1, job 2, and job 3 in your factory, and then you replace job 2 and job 3 with robots but keep people around to do job 1, the people doing job 1 haven't really gotten any more efficient at it. The factory owner can continue to pay the people who do job 1 the same, and just pocket the difference between the capital investment of the robots and the wages he used to pay for job 2 and job 3. (Now, technically, the way productivity is calculated by economists you would be correct, since it's just based on "total production divided by number of workers". That doesn't actually mean that person A is more efficient at doing job 1, though, not in the sense that we usually mean.)
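The accounting point is easy to make concrete (all numbers invented): automating jobs 2 and 3 leaves worker 1's efficiency unchanged, yet measured output-per-worker triples.

```python
# Invented numbers: output is unchanged at 300 units, but two of the
# three jobs are automated away. Measured productivity
# (output / workers) triples even though the remaining worker's job
# is exactly the same as before.

output = 300
workers_before = 3
workers_after = 1        # jobs 2 and 3 now done by robots

productivity_before = output / workers_before
productivity_after = output / workers_after

print(productivity_before)   # 100.0 units per worker
print(productivity_after)    # 300.0 units per worker
```

So "productivity rose" in the statistics is compatible with no individual worker having become more efficient at anything.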
He's referring to humanly possible labor, not human labor done today.
"a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operation more efficiently with less workers, that happens in every recession" This is not true. In fact, the normal effect is the opposite- a productivity decrease. See the data for the US after 1948 here. If you are looking for a story as to why, in some business cycle theories (such as Real Business Cycle Theory) the recession is caused by a negative shock to productivity.

The financial system is staring much more at the inside of its eyelids now than in the 1980s.

What exactly does this mean?


Most money in the financial system is invested in bets on what happens in the financial system.

I, uh, wow... link? (I know basically nothing about the financial system.)
This gets confusing quickly, and my comment is an oversimplification, but: http://www.slate.com/articles/news_and_politics/explainer/2008/10/596_trillion.html
So essentially, any movement in the money markets is amplified by a factor of about 5? I'm not an economist, but this basically seems like a good idea, if we assume that the average direction of the market is up.
Jaibot may be referring to instruments like Credit Default Swaps. I'm not implying that is the only type of instrument Jaibot means, but it definitely seems like an example of money in the financial system being invested in bets on what happens in the financial system. Edit: Fixed link again
As a factual matter, I don't think this is true. Source?
5Eliezer Yudkowsky
Most estimates I've seen of notional value for OTC derivatives are in the range of $1 quadrillion.

Notional amount is not a good measure for the OTC market. It has two main problems: it double- (or more) counts multiple-step transactions, and it doesn't net offsetting transactions.

For the first problem, the power grid is a good analogy. Imagine you wanted to assess the total amount of power in the US power grid, so you add the amount of all power leaving plants, plus the amount of everything that passes through high voltage lines, plus the amount of everything going through substations, plus the amount of everything on transformers, plus the amount of everything going through local grids, plus the amount of all power used by homes or companies.

Since power goes through all of those steps, if you count each step separately and sum, your total will be massively overstated.

Netting: if a friend and I get lunch twice, and I buy the first lunch and my friend buys the second, we call it even and that's that. If two large corporations do the same thing, they leave both contracts in place. This is because wiring money back and forth is cheap, while canceling or amending contracts is cumbersome and incurs legal costs (which are expensive). So even though the economic exposure is zero, the notional exposure is twice the cost of lunch.

For long-dated contracts that are around for years, repeated offsetting trades that are never netted can build up large, meaningless notionals that bloat the figures.

Both these issues with notionals are well known, so you should probably update slightly toward wariness of whatever source was quoting notionals without the requisite disclaimer.

Once you account for this and other forms of derivatives-are-hard-to-count problems, it's down to "only" a few trillion. I'm unable to find a good estimate of what fraction of the "financial system" (scare quotes indicating the term is not well defined here) that is.
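The lunch example above can be put in code; a toy sketch (all figures invented) of how summed notionals overstate economic exposure:

```python
from collections import defaultdict

# Two offsetting contracts from the lunch example: A owes B $20 for
# the first lunch, and B owes A $20 for the second.
contracts = [("A", "B", 20.0), ("B", "A", 20.0)]

# Notional: simply add up the face value of every contract.
notional = sum(amount for _, _, amount in contracts)

# Net exposure: sign the flows within each counterparty pair and
# keep only the residual.
net = defaultdict(float)
for payer, payee, amount in contracts:
    key = tuple(sorted((payer, payee)))
    net[key] += amount if payer < payee else -amount
net_exposure = sum(abs(v) for v in net.values())

assert notional == 40.0     # "the cost of lunch times two"
assert net_exposure == 0.0  # but economic exposure is zero
```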
WQ's response was great. I have worked in "the financial system" for nearly 20 years and just in terms of real assets under management, most of the handful of large global firms are each managing assets in the low trillions, and those are "real" assets in the sense that they aren't really anything you could say was being counted more than once due to arcane accounting rules or contractual obligations or whatever. Back around 2007/2008 there were a few numbers in the mid-tens-of-trillions being tossed around in the news that were guesstimates of the total global net worth, and although I forget the numbers now, I would be very surprised if the sum of assets under management was not approximately equal to those numbers. Via trusts and so on, such firms even manage physical assets (paintings, yachts, diamond rings, office buildings) to varying extents that most people don't think about as being managed by "the financial system." The only place where I'd change WQ's response is to blame regulation more than legal costs. Not that legal is cheap for these firms, but regulation and compliance is the biggest cost-factor by such a large margin that everything else is nearly irrelevant (in fact, legal costs are just a side-effect, for the most part).

Another hypothesis for the mix, conveyed to me by a business major:

The biggest recent change has been the abrupt entry into the information age, with internet companies in the innovation spotlight. Software companies and other information-centered businesses are far more scalable than most, which means that when a product gets popular, a big profit can result. The idea here is that this provides an exceptionally high opportunity for inequality: information industries create a small population that gets the high payoff, while a large number of people pay for the new products.

Some simplistic macroeconomic simulations have suggested that there are two equilibria into which an economy can fall: one where people have roughly the same amount of money, and another where money concentrates in a small number of hands. This makes the tech-inequality idea scary. Surely reality is more complex than the simple simulation; but innovations with high inequality risk could push us into a different equilibrium...

The traditional story is that when innovators provide new products for everyone to buy, everyone benefits; the innovators may get rich, but the others who buy the product are also better off. Looking at graphs, the standard of living goes way up for the rich, but also rises more slowly for the poor... until the 90s. Then the poor actually get worse off again. (I checked this some time ago, and don't have a convenient link, sorry! In general there are a lot of things in this comment that could use fact-checking.)

Yes, scalability of software is one thing. Another relevant thing is the so-called flattening of the world: it is becoming more connected and less diverse. The barriers to flows of people, goods, information, and money are being flattened into nonexistence. This leads to winner-takes-all situations: if your product is better than everything else in, say, Germany, it's likely that it is also better than everything else in the rest of the world.

Color me sceptical. I think the key word is "simplistic" -- unless you show relevance to the real world, it's just not useful in any way. It might even be harmful on general "mind contamination" principles.

That's debatable.

Off the top of my head (I stand ready to be corrected on facts), the median salary of the lowest quintile stagnated or even fell a little, but the median income including benefits continued to rise. Also, the decline is pretty much limited to men without college education. If you're a woman, or have a college degree, your income even without benefits continued to rise.
Doesn't seem scary to me. I'm more likely to have value overlap with tech people. As such I prefer optimization power be concentrated in their hands.
Albert Speer, Werner von Braun, Robert McNamara, John von Neumann and many others like them would likely qualify as "tech people". I'm terrified of people like them forming a stable and entrenched ruling caste, despite any "value overlap" they might display. Based on prior performance... I'd say it could potentially be just as bad as e.g. a Stalinist dictatorship. "Mein Fuhrer! I can walk!"
For their part, Stalinists have tended to be fond of technical elites as well. However, I suspect that grisly examples may arise simply from the depth of the sample size; the innumerable cruelties of the premodern world, after all, were chiefly overseen by humanistic elites. It may be that today humanistic values are substantially more weak and "feminine" (from the perspective of their predecessors), but this may also be part of why existing power structures are less fond of employing them. (All this, of course, assumes this is a useful dichotomy; the primary avenues for elite recruitment under modern liberalism are business and the legal profession, which straddle the line in some ways.)
They have one in the lab next to mine. I love my job.
Video presentation on TED.

Your explanations in general involve some systemic changes, which doesn't jibe with the abrupt and dramatic shock seen in the 2007-2009 data. Any explanation of what is currently happening that doesn't tie into the obvious business cycle seems to lack the necessary explanatory power.

I don't doubt that some or all of those systemic issues are driving long-term trends (for instance, I know dozens of PhDs who WANT to be working on next-gen power generation but are instead in banking or finance because no one would hire them to do anything else. This obviously has an effect on the mix of employment sectors, but it shouldn't necessarily mean lower employment), but there is an abrupt and sudden shock in the data. The fact that it's happening in multiple countries at once makes it harder to blame regulatory environments.

5Eliezer Yudkowsky
The notion would be that the aggregate demand shock / overly tight money allowing NGDP collapse due to the shadow banking collapse produced the Great Recession and the sharp employment drop. And then these other long-term trends meant that re-employment was broken afterward as NGDP rose again, a tendency already noted in the 'jobless recovery' after the 2001 recession.
I think it's worth noting here that NGDP never had "catch-up" growth; it's still far below the previous trend, and the output gap is closing very slowly. So the simple explanations that tie NGDP growth to job growth don't have to break to establish the type of "jobless" recovery we are still seeing. Okun's law has been holding pretty well throughout the Great Recession.
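Okun's law, mentioned above, is the empirical rule of thumb linking output growth to unemployment changes. A minimal sketch (the trend-growth and coefficient values below are conventional ballpark figures, not precise estimates):

```python
# Okun's law, rule-of-thumb form: each percentage point of real GDP
# growth above trend lowers the unemployment rate by roughly half a
# point. Trend growth and the 0.5 coefficient are rough conventions.

def okun_unemployment_change(gdp_growth, trend_growth=2.5, coeff=0.5):
    """Approximate change in the unemployment rate, in percentage points."""
    return -coeff * (gdp_growth - trend_growth)

# Growth exactly at trend leaves unemployment flat...
assert okun_unemployment_change(2.5) == 0.0
# ...while growth two points above trend shaves off about one point.
assert okun_unemployment_change(4.5) == -1.0
```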
And the reason that you don't include increasing automation in with these other trends is that you don't see the automation situation as materially different from a few decades ago, unlike the other factors. Yes?

Maybe we've finally reached the point where there's no work left to be done

If so, this is superb! This is the end goal. A world in which there is no work left to be done, so we can all enjoy our lives, free from the requirement to work.

The thought that work is desirable has been hammered into our heads so hard that it now seems like a really, really dubious proposition that a world where nobody has to work is actually the ultimate goal, rather than one in which everyone works. That world sucks. That's the world in which 85% of us live today.

That, and that people are so used to the case in which the only way to earn money is to work for it, that they don't think of the thousand alternatives. (The one that comes most readily to mind is living like a king off the labor of a small family business administering thousands of AI-serfs).

Vernor Vinge once said something to the effect of, “When a robot can autonomously clean a bachelor’s bathroom, then we will be very, very close to a singularity.” So he's in agreement with you on this scenario:

My God, what have we wrought! There’s going to be a massive [janitorial] unemployment cri—FOOM

Edit, sourced quote: http://mindstalk.net/vinge/firstMoversTalk.html

Vernor: My classic example of that is that I figured that a robot that could clean a bachelor's unprepped bathroom would be something that would be very close to satisfying the singularity.

7Eliezer Yudkowsky
I'll go on record as disagreeing with Vinge(?) here; a robot cleaning a bachelor's bathroom can very plausibly be done with lizard-level general intelligence and is not necessarily a sign of FOOM. On the other hand, most humans are not paid to clean bachelor's bathrooms most of the time, so I also don't think it would necessarily lead to a mass unemployment crisis.

Well, if it has the ability to clean a bathroom, similar systems could cook, clean, drive, construct, do pretty much any routine task—that sounds like a lot of jobs to me. Now, could a lizard-level intelligence clean a randomly chosen bathroom? Said robot would have to have a lot of common-sense notions of how to treat objects, very good visual perception, proprioception, and object classification, even the ability to use tools. That sounds more like higher-mammal intelligence to me. As I haven’t spent my life studying AI, I’m perfectly willing to replace my opinion on this with your own, but I’m having trouble seeing how cleaning a randomly chosen bathroom is a lizard-level task.

Hm. My previous sentence is on reflection incorrect; considering the number of jobs that could potentially be replaced by 'clean a bachelor pad' level intelligence, we would be looking at a potential disemployment shock that would be considered large in the US. Not a complete disemployment shock, but it would probably qualify as 'mass unemployment' if reemployment failed.

Now, could a lizard-level intelligence clean a randomly chosen bathroom?

If a generally lizard-level intelligence were hooked to a petabyte database of special cases scraped by slightly smarter algorithms from security footage of previous bathroom cleanings, it could do it. This isn't how an AI theorist would attempt the problem, but it is more or less how Google Translate works, and quite possibly how the first bachelor-bathroom-cleaning robot will work. Such an AI would be nowhere near capable of self-improvement.

I don't see why a lizard-level intelligence would necessarily not be able to self-improve.
In this case... because all its domain expertise is in getting the dirt off the tiles, and it would not recognize code or hardware if accompanied by explanatory placards.
4Eliezer Yudkowsky
Also: A priori and in advance of learning the true outcome, I'm betting most would have thought that highway and city driving was a more difficult application for AI than cleaning a bachelor pad.

I realize this doesn’t exactly contradict you, but even if true (and it probably is/was) I think those “most” would not in fact think of difficulty but rather of how well you need to solve the problem. That is, a bathroom-cleaning robot that misplaces the shampoo five percent of the time might be considered “solved problem”, but a self-driving car that “misplaces” the car even one percent of the time would sound very scary. I think it’s the difference in “acceptance criteria” that makes people misrank tasks rather than relative difficulty.

Really? I think of roads and highways as simple prepared environments, on which even the unexpected can be handled with relatively few actions - swerve, stop. A bathroom can be messy in a ridiculous variety of ways.

I think that's because driving has to be done perfectly or there are dire consequences, which might mask the fact that it isn't as complex, compared with cleaning a bathroom, which has many tasks that could or could not be done based on the standard imposed.
Driving is considerably more complex than cleaning a bathroom, primarily because you need to interact with a large number of humans whose mental state ranges from fairly rational to OMGWTF.
Yes, but in context there are still a fairly limited number of things that they can do--stop, reverse, speed up, slow down, change direction, etc.--even if it is hard to predict which and when they will do so.
Nope. I'm talking about humans, not drivers. That involves pedestrians, people on bicycles and skateboards, kids playing ball near the street, panhandlers who want to wash your windshield, etc. etc.
I'd wager that Lumifer comes from a place where drivers are much crazier than where you come from. There are huge differences in stuff like that from city to city.
Yes, but are there differences beyond "change in acceleration"? (given acceleration as a vector).
Just because you can measure something with three real numbers doesn't mean that their prior probability distribution isn't all over the place.
This really depends on how you interpret "a robot can autonomously clean a bachelor's bathroom". If you interpret it the same way that "Roomba can autonomously sweep a floor", then lizard-level intelligence seems enough (Roomba is barely insect-level).

Roomba can sweep a floor, provided you moved all the toys and cables and papers out of his way first, and put the chairs on the table, and closed the doors of the rooms you don't want him to visit, and are okay with it taking ten times the time a human would take, missed corners, and occasionally: unswept spots, Roomba locking himself in a room, Roomba finding a roll of toilet paper on the floor and scattering the shreds all over your home.

So if the future Bathroomba can clean your bathroom (wipe the sinks and mirrors, clean the floor with water, pick up stray stuff, etc.) but with a similar list of caveats (it will take him half a day, you need to prepare the bathroom a bit, there are some situations he just can't handle), then lizard-level intelligence (meaning roughly better than today's robots, but still far from Foom) seems enough.

... and not necessarily an unemployment crisis, because most of your examples - driving, cooking, building - are in domains where mistakes can be very costly, much worse than "insufficiently sweeping the floor" or "knocking an open bottle of shampoo over". You may be able to hack together a commercially successful robot in areas where mistakes are of little consequence by just finding shortcuts that avoid the really difficult problems. (Roomba is really good at that: people had been working on Simultaneous Localization and Mapping algorithms for at least 15 years before Roomba was released with a really straightforward algorithm that was basically "screw this, just bump around randomly" with a bit of fine-tuning, plus a clever trick of estimating the size of the place you're in by tracking how long it takes you to bump into something.) But even then, if we get to Bathroomba-level there
I'd be very curious as to when Vinge said that.
I've checked my memory banks, and done some google-fu. I believe (75%) it was during this interview with Adam Ford: http://www.youtube.com/watch?v=tngUabHOea0 or one of his singularity summit talks. Edit: Found a transcript here: http://mindstalk.net/vinge/firstMoversTalk.html
I remember him saying this to a group of people after a Singularity Summit (I don't remember which).

The main question is why automation is associated with unemployment today when it wasn't in the past. To answer, you have to consider the kinds of jobs created by and lost to automation, and the determinants of workers' incomes in those jobs.

Most of the industrial revolution is associated with an increasing number of workers in manufacturing and fewer in farming. The industrial work force grew primarily at the expense of the peasants or farmers. Today, automation is causing manufacturing jobs to be replaced by service jobs. Farming jobs were the first to go because... (read more)

It was, or at least has been at some points. Our word "Luddite" originally referred to members of an anti-automation movement active in the early 19th century, which believed that powered looms and similar devices would lead to unemployment among the artisan classes. In actual fact the Industrial Revolution ended up creating more jobs than it destroyed, thanks to lower prices for manufactured goods expanding the customer base, but the jobs that it created did demand less skill and were lower-paying than their predecessors, at least until the labor movement caught up. The analogy to the service sector's expansion at the expense of the manufacturing sector isn't perfect, but I think it's closer than you're giving it credit for.
In comparing the skills of just the manufacturing jobs created and lost, you ignore the seismic and dominating change in the urban/rural ratio. The process can be seen at an accelerated rate today in China: peasants transformed into workers and getting paid higher income as the result, thus expanding the economy. Peasants to workers is a much weightier trend than skilled workers to unskilled workers.
Ah. I think we may be working from different senses of "associate". I took it to indicate perceptions, not real economic changes. You are of course right that the Industrial Revolution led to a larger economy and that the urban/rural shift had a lot to do with that.
"Groups of workers with higher status get paid better." True. But what is the main direction of causation here? According to basic economics, workers will get paid their marginal product (how much you add to production). This is a pretty good first approximation. Of course, you can get paid in many ways- money, flexible hours, even status. The higher the status of a job the less it needs to pay to attract workers; this is called a compensating differential. High-level politicians are very high-status but don't make that much. Conversely, very low-status jobs (like janitor or garbageman) have to pay a bit more in money wages to get people to work.

Why are we talking about jobs rather than man-hours worked? Automation reduced man-hours worked. We went from much longer work weeks to 40 hour work weeks as well as raising standards of living.

AI will reduce work time further. If someone can use AI to produce as much in 30 hours as they did in 40, they could choose to work anywhere from 30 to 40 hours and be better off. Many people would choose to work less as they compare the marginal value of free time against extra pay.

Why are we seeing long term unemployment instead of shorter work weeks now? Is this inevitable or is there some structural or institutional problem causing it?

Shorter work weeks didn't just happen. It took a huge amount of effort from unions, which were a lot more powerful then than they are now. Most jobs don't let you freely trade off how long you work for how much money you get. There are fixed per-employee costs, so businesses would rather have one person working 40 hours per week rather than two people working 20 hours per week. Especially when 40 is the norm and wanting to work less is "lazy".
In the US certain employers are required to provide health insurance for employees who work 40 hours per week or more, but not for employees who work 20 hours per week, so that is at least one incentive that would encourage hiring part-time employees vs full-time employees.
And we've noticed that many of the newly created jobs coming out of the recession are part time; the ones that were lost were full time. This is a reduction in employment-hours, even if it's not a reduction in number of employed people.
"Shorter work weeks didn't just happen. It took a huge amount of effort from unions, which were a lot more powerful then than they are now." I've never understood why people find this story compelling, precisely because of your final clause. If unions were the main force determining hours, why have hours continued to go down now that unions have been drastically weakened?
In the time since the peak power of labor unions, the number of benefits accruing to full-time but not part-time workers has increased, making it more economical to employ part-time workers for a lot of jobs. (Peak labor union membership in the U.S. was in 1955. This was also the year the AFL and CIO merged, thus removing effective competition in the market for union organizations.)
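The fixed per-employee cost argument above can be put in rough numbers (all figures are invented for illustration):

```python
# Why employers prefer one 40-hour worker to two 20-hour workers:
# each employee carries a fixed annual overhead (benefits, insurance,
# administration) on top of hourly wages. All numbers are made up.

WAGE = 15      # dollars per hour
FIXED = 5000   # annual per-employee overhead
WEEKS = 50     # working weeks per year

def annual_cost(workers, hours_per_week):
    return workers * (WAGE * hours_per_week * WEEKS + FIXED)

one_full_timer = annual_cost(1, 40)   # 30000 wages + 5000 overhead
two_part_timers = annual_cost(2, 20)  # same total hours, double overhead

# Splitting the same labor across two people costs one extra
# fixed-overhead payment.
assert two_part_timers - one_full_timer == FIXED
```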

This is a good FAQ, but one thing that's bugging me. This bit from footnote #2:

This would also require some amount of decreased taxes on the next quintile in order to avoid high marginal tax rates, i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year then that was a 200% tax rate on that particular extra $1000 earned.

Is warning about an error that almost no one makes, and thus ends up sounding kinda clueless in turn. Current tax codes are already written in terms of marginal rates, so ther... (read more)
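For concreteness, here is the footnote's cliff versus a marginal-rate schedule, using the footnote's own numbers (the 10% marginal rate is a made-up illustration, not any real tax code):

```python
# A tax "cliff" versus a marginal-rate schedule. Under the cliff,
# earning one more dollar past the threshold triggers the whole tax;
# under a marginal schedule, only income above the threshold is taxed.

def cliff_tax(income):
    # $2000 owed as soon as income reaches $20,000 (footnote's example).
    return 2000 if income >= 20000 else 0

def marginal_tax(income, threshold=20000, rate=0.10):
    # Hypothetical 10% rate applied only to income above the threshold.
    return max(0, income - threshold) * rate

# Under the cliff, a $1000 raise costs $2000 in tax: a 200% rate
# on that particular extra $1000.
assert cliff_tax(20000) - cliff_tax(19000) == 2000
# Under the marginal schedule, the same raise costs nothing extra here.
assert marginal_tax(20000) - marginal_tax(19000) == 0
```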

"There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying."

Tyler Cowen is again relevant here with his http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS , though I think he considers it less cultural than Thiel does.

"We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there... (read more)

300 IQ is 10 standard deviations above the mean. So picture a trillion planets each with a trillion humans on them and take the smartest person out of all of this and transport him to our reality and make it very easy for him to quickly clone himself. Do you really think it would take this guy five full years to dominate scientific output?
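For what it's worth, "10 standard deviations" assumes a scale with SD 20; on the usual SD-15 scale, IQ 300 would be about 13.3 SD out. The 10-SD tail probability can be checked with the standard library:

```python
# Gaussian tail probability beyond 10 standard deviations, via the
# complementary error function: P(Z > z) = erfc(z / sqrt(2)) / 2.
import math

p = 0.5 * math.erfc(10 / math.sqrt(2))

# p is about 7.6e-24, i.e. roughly one person in 10^23 -- on the order
# of the comment's "trillion planets each with a trillion humans"
# (10^24 people), so the intuition pump's arithmetic roughly checks out.
assert 7e-24 < p < 8e-24
```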


So picture a trillion planets each with a trillion humans on them

There is almost no way this hypothetical provokes accurate intuitions about a 300 IQ. It's hard to ask someone to picture something they are literally incapable of picturing and I suspect people hearing this will just default to "someone a little smarter than the smartest person I know of".

I know I'm doing that and I can't stop doing it. "A trillion planets each with a trillion humans on them" is something important, but I can't visualize it at all.
I'm picturing someone with the optimization power of the entire human civilization, which seems a little more tractable. It's also based on nothing whatsoever, but it's at least in the right direction? I hope.
7Eliezer Yudkowsky
Plenty of low-wage jobs have been automated away by machines over the last four centuries. You don't end up permanently irrevocably unemployed until all the work you can do has been automated away.
The big thing here is that as we had the massive productivity increases of the industrial revolution and the second industrial revolution, we had a corresponding dramatic increase in consumption to match the productivity increase, which kept employment steady; even if productivity increases by a factor of 20, it doesn't cause unemployment if consumption also increases by a factor of 20, which is basically what happened over the course of the 19th and 20th centuries. I'm not sure that's going to happen again, though. It might, of course, but if productivity increases and (for whatever reason) consumption doesn't continue to increase by the same factor, it would tend to cause unemployment.
0Eliezer Yudkowsky
This would ordinarily be diagnosed as an aggregate demand deficit and solved with additional money - it falls under the category of things that NGDP level targeting ought to solve unless there is something not further specified going on.
A genuine (possibly very stupid) question, since I have practically no knowledge about macroeconomics: when I think of my own preferences, I feel like I pretty much already have all the things that can be bought with money and that I might want. Yes, I would like to have somewhat more money, but mostly so that I could increase my savings to give me more of a safety cushion in case I ever need it, and of course to donate to altruistic aims. If I did start earning a lot more, there are very few things that I imagine would change WRT my own quality of life: maybe I'd eat out a little more, and possibly visit friends in distant countries more often, but for the most part I just don't have any desires that I'm currently unable to fulfill because of not having the money for it. Now my question is, if it happened that I was actually the typical case, who basically had no unfilled preferences of the kind that could be filled with extra money - and I freely admit that I might be very atypical in this respect, but supposing that I wasn't... then how would extra money solve the resulting lack of growth in demand, if everyone was basically already content with what they had?

If everyone already has everything they want, your economy is solved.

Ah, right. The first response that came to mind was "well, I might already have everything that I want, but what about those poor or unemployed folks we're worried about" - but of course, if there are such people with unsatisfied desires, then obviously that means that there's still an unmet demand that the increased production can help meet, and the extra money is so that the poor people can actually buy the fruits of that additional production? Thanks, that makes sense.

Don't forget about status goods. It's pretty much hardwired into humans to be competitive and one of the ways to compete is by having a bigger/shinier/better thing. Note: it's comparative, not absolute, so you can (and do) get into status fights which have no natural stopping point. Your desire is to be bigger than the other guy, and he has the same desire, so you both just escalate. For a real-life example look at superyachts owned by billionaires :-)
Do there exist any studies on how much money people actually spend on pure status pursuits? People keep mentioning that phenomenon, so I must assume that it exists, but I practically never seem to run into it in my own life, so I'm curious to what extent that's just me living in a bubble or not recognizing such purchases. (People putting money into stuff like collectible games does come close, but even that feels more like spending money on a hobby rather than on pure status, especially given that I used to enjoy collecting some CCGs even when I didn't know of anyone else who played them.)
I don't know, but I also think that such studies would have major problems with data. In a way, whether you're buying utility or status is about intent. Let's say I like fishing and want a new boat. I can buy a 12' boat or an 18' boat. The 18' one is more powerful and convenient, but also more expensive. It's also bigger than my neighbor's 16' boat. I pick the 18' boat -- how do you determine what role my desire to trump my neighbor played in my decision to get the bigger one? In practical terms, status competitions seem to take off when people have nothing useful to do with their money (aka once they personally pass into the post-scarcity era). Or, of course, if they really want status. Look at what Russian oligarchs are buying. Look at what the Chinese are building (see e.g. http://edition.cnn.com/2013/07/24/world/asia/china-government-building-ban/?hpt=ias_c2). Do you think Dubai built its tallest building just because they wanted so much office space on so little land?
Well, I can tell you that as a general rule, if you give more money to the rich, they do not spend much more money than they would anyway. There has been economics research on this; basically, if you're trying to stimulate the economy during a recession, the government can spend more money, or it can give tax breaks to the poor, the middle class, or the rich. Out of all of those options, giving tax breaks to the rich has the smallest stimulus impact, because if someone is already rich then increasing their income doesn't affect their spending very much. It has some impact, but it's small.
May I have a few links? I'd like to examine the research on this in more detail.
Sure. If you want to look up economic research about this, the main thing you would want to look for is what they call the MPC, the "marginal propensity to consume"; that is, if you add 1 more dollar to someone's income, how much will their consumption increase. It's generally somewhere between 1 and 0, 1 being "you give someone another dollar in income and they spend all of it" and 0 being "they spend none of it". Generally speaking, MPC tends to decline the more income someone has. Here is one study, done in Italy in 2012, on the subject: http://www.stanford.edu/~pista/MPC.pdf

It's worth mentioning that while the "marginal propensity to consume declines with income" idea is something that was assumed by Keynes and is part of Keynesian economics, it is something that others have contested. There is a lot of debate, for example, on the difference between "windfall income" and "permanent income" and how each affects MPC. But in general, if you look in most economics textbooks, the model you usually see is a sloping curve, where as income goes up MPC drops; it never goes quite to zero, there is usually some increase in consumption as you increase income, but it falls quite close to zero as income rises. The consumption function usually looks something like this: http://en.wikipedia.org/wiki/File:MPC.png
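To make the declining-MPC idea concrete, here is a toy consumption function in the spirit of the textbook curve; the functional form and all numbers are invented for illustration and are not taken from the cited study:

```python
# A toy consumption function whose marginal propensity to consume
# (MPC) falls smoothly toward zero as income rises. The exponential
# form and the dollar figures are made-up illustrations.
import math

def consumption(income):
    # Autonomous consumption plus income-dependent spending that
    # saturates at high incomes.
    return 10000 + 40000 * (1 - math.exp(-income / 50000))

def mpc(income, d=1.0):
    # MPC: extra consumption per extra dollar of income.
    return (consumption(income + d) - consumption(income)) / d

# High MPC at low incomes, near-zero MPC at high incomes:
assert mpc(10_000) > 0.5    # a poor household spends most of a raise
assert mpc(500_000) < 0.01  # a rich household spends almost none of it
```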
Perhaps. In practice, though, it seems like there does seem to come a point where adding more money doesn't necessarily increase demand significantly; if you already have a million dollars a year income, and that increases by a factor of 20, you probably aren't going to use 20 times as many consumer goods or services. Maybe advances in technology are going to create enough new types of goods and services so that increasing demand can keep up with increases in production, but I'm not sure that that's guaranteed to happen. If it doesn't, it seems like there is some upper limit where enough stuff is produced for everyone without needing more than a fraction of the population to produce it.
And in a world where all work CAN be automated, human service can still exist side-by-side. A robot might be able to cut my hair, but I'd pay a premium to have a person do it because I enjoy the experience (I sometimes pay for barber shaves before job interviews rather than do it myself). Similarly, I'd probably pay a premium for an actual bartender over a barmonkey-type robot in many settings. I pay a premium over Amazon at the nearby bookstore because I enjoy the old medieval-history PhD who runs the shop and his conversations/recommendations, etc. I can imagine a world full of robots where face-to-face service becomes the luxury item.

That is very plausibly a world in which unemployment is massively higher than today, if sentiment is the only remaining reason to employ humans at anything; and a world in which a few capital-holders are the only ones who can afford to employ all these premium human hairdressers etcetera. If this is how things end up, then I would call my thesis falsified, and admit that the view I criticized was correct.

If this happens, then some of the robots will start to look and behave exactly like humans. Robot prostitutes would look like human supermodels. This'll cause more unemployment.
Don't underestimate humans' desire for authenticity. As an example, note that even nowadays, some people do pay extra for handcrafted knickknacks and such like. You can say it's a silly desire, but it's what they want. If you said to them "Hey, want to buy this factory-made knickknack? It looks just like a handmade one." they would, for the most part, just turn you down. For better or worse, the desire for authenticity seems to be a deep part of humanity's utility function. Or look at the well-known thought experiment of the transporter device. You step in, it scans your body, disintegrates your body, sends the message of what your body was like to the destination transporter, which then reconstructs you, exactly like you were before. Most humans express serious misgivings about going through one of those. They feel it wouldn't be "the real them" anymore. Is that silly? Yes. But it reflects our human desire for "authenticity".
Or until the supply of low-skill workers depresses the remaining low-skill wage beneath minimum wage/outsourcing levels. I think that we are eliminating a larger proportion of low-skill jobs per year than we ever have before, but I agree that the retraining and regulation issues you pointed out are significant.
Well, there's an obvious solution for that.

Yes, inflation.

I don't think he can hear you across the inferential chasm.

Could you point me in the direction of a bridge?
I would estimate even longer - a lot of science's rate-limiting steps involve simple routine work that is going to be hard to speed up. Think about the extreme cutting edge: how much could an IQ-300 AI speed up the process of physically building something like the LHC?
Could you give three examples? (I’m not trying to be a wise-ass, I actually thought about it and couldn’t find any solid ones.)

Have you spent much time working in labs? It's been my experience that most of the work is data collection, where the process you are collecting data on is the limiting factor. I honestly can't think of any lab I've been a part of where data collection was not the rate-limiting step.

Here are the first examples that popped into my head:

  1. Consider Lenski's work on E. coli. It took from 1988 to 2010 to get to 50k generations (and it is still going). The experimental design phase and data analysis here are minimal in length compared to the time it takes E. coli to grow and breed.

  2. It took 3 years to go from the first potential top quark events on record (1992) to actual discovery (1995). This time was just waiting for enough events to build up. (I'm ignoring the 20 years between prediction and first events, because maybe a superintelligence could have somehow narrowed down the mass range to explore; I'm also ignoring the time required to actually build an accelerator. That's 3 years of just letting the machine run.)

  3. Depending on what you are looking for, timescales in NMR collection are weeks to months. If your signal is small, you might need dozens of these runs.

Also, anyone who has ever worked with a low-temperature system can tell you that keeping the damn thing working is a huge time sink. So you could add 'necessary machine maintenance' to these sorts of tasks. It's not obvious to me that leak-checking your cryogenic setup to troubleshoot can be sped up much by higher IQ.

No, I did not, and it shows :-) Thank you for the examples; I see your point. I can imagine ways 300-IQ AIs would accelerate some of these that sound plausible to me, but since I don't really have direct experience that might not mean much. That said, I notice that bluej's post mentioned the AI dominating scientific output, not necessarily increasing its rate by much. Of course, a single AI instance would not dominate science (as evidenced by the fact that the few ~200-IQ humans that existed didn't claim a big share of it), but an AI architecture that can be easily replicated might. After all, at least as far as IQ is concerned, anyone who hires an IQ 140-160 scientist now would just use an IQ 300 AI instead. Of course, science is not just IQ, and even if IBM's Watson had IQ 300 right now, I doubt enough instances of it would be built in five years to replace all scientists, simply due to hardware costs (not to mention licensing and patent wars). But then again I don't have a very good feel for the relative cost of humans and hardware for things the size of Google, so I don't have very high confidence either way. But certainly 20 to 30 years would change the landscape hugely.
Yeah, exactly. Especially if you take Cowen's view that science requires increasing marginal effort.

And, we all pay upwards of 98% of all of our wealth to the hidden tax of inflation

This is nonsense.

Is the userbase comfortable labelling the seymour_results account a troll? If people still take the account seriously then there is obviously false information being flung about that requires correction. But from my perspective this reply triggered my 'do not feed' instincts so I suspect it may be time to revert to "downvote and ignore" as a harm minimisation tactic.

He explicitly said that he aims to get below -100 karma in a day so yeah.
They started out reasonable, but they seem to be getting more and more determined to say whatever they think people won't like. (It's costing me five karma points to say that, which IIRC means it's costing me the opportunity to downvote twenty more of their posts, so I wouldn't expect replies from many other people.)

One limit of the theory is that while it does state the new equilibrium will be "15 hot dogs in 15 buns" (i.e., enhanced production and no more unemployment), and that has been verified in the past, it doesn't state at what rate the new equilibrium will be reached, nor what will happen in the transition period.

One possible hypothesis is that if the rate of change is too fast, no equilibrium can be reached: the economy can't adjust fast enough to new technology.

I don't think that's the case right now - technology isn't advancing significantly faster than it did for most of the 20th century. But I think it's worth an entry in the FAQ.
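For reference, the equilibrium levels in the FAQ's parable are a one-line labor-budget calculation; what the comment above rightly notes is that the theory is silent on the path between them. A minimal sketch using the FAQ's own numbers:

```python
# The hot-dog-and-bun equilibrium from the FAQ, as arithmetic: with a fixed
# labor budget and matched quantities of hot dogs and buns, output per good
# is labor / (cost_of_hot_dog + cost_of_bun).
def matched_output(labor_units: float, hot_dog_cost: float, bun_cost: float) -> float:
    """Quantity of hot-dogs-in-buns producible from a fixed labor budget."""
    return labor_units / (hot_dog_cost + bun_cost)

before = matched_output(30, hot_dog_cost=2, bun_cost=1)  # pre-automation
after = matched_output(30, hot_dog_cost=1, bun_cost=1)   # hot dogs automated
print(before, after)  # → 10.0 15.0
```

The calculation pins down both endpoints but, as noted, says nothing about how fast labor actually reallocates from hot dogs to buns.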

Steve Keen's Debunking Economics blames debt, not automation.

Essentially, many people currently feel that they are deep in debt, and work to get out of debt. Keen has an ODE model of the macroeconomy that exhibits various behaviors, including debt-driven crashes.

Felix Martin's Money goes further and argues that strong anti-inflation stances by central bank regulators strengthen the hold of creditors over debtors, which has made these recent crashes bigger and more painful.

Having read Debunking Economics, I second this. The ODE model is pretty interesting, actually. Among other things, the thesis is that the second derivative of debt has a strong impact on aggregate demand. You can enhance aggregate demand (intrinsically, without a central authority, since people like having stuff) with an accelerating rate of debt accumulation, which eventually causes problems on its own, both in terms of instability and the distribution of money within the population, and which pulls aggregate demand backwards from the future when the debt must be repaid. Such accelerating debt also lets you maintain exponential growth for longer if you are faced with external limits on the rate of expansion of the real economy, because more and more of the money is in the form of financial instruments (which can effectively be wished out of existence) rather than tied to physical capital. This keeps an illusory boom going long after its physical basis is gone.
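For readers curious what such an ODE model looks like, here is a heavily simplified Keen-style sketch. The linear Phillips curve, linear investment function, and all parameter values below are illustrative assumptions of mine, chosen only to show the structure (debt servicing feeding back into the profit share, and debt accumulating whenever investment exceeds profit); they are not Keen's calibration:

```python
# A toy Keen-style ODE macro model with three state variables: wage share
# (omega), employment rate (lam), and private debt-to-output ratio (d).
# All functional forms and parameters are illustrative assumptions.
import math

ALPHA, BETA = 0.02, 0.01        # productivity growth, labor-force growth
NU, DELTA, R = 3.0, 0.05, 0.04  # capital-output ratio, depreciation, interest rate

def phillips(lam):               # wage growth responds to employment (assumed linear)
    return 0.20 * lam - 0.16

def investment(profit_share):    # investment share of output (assumed linear)
    return 0.05 + 1.2 * profit_share

def simulate(omega=0.75, lam=0.90, d=2.0, years=50.0, dt=0.05):
    """Euler-integrate the three-variable system; returns the trajectory."""
    traj = []
    for _ in range(int(years / dt)):
        profit = 1.0 - omega - R * d          # profit share net of interest on debt
        g = investment(profit) / NU - DELTA   # output growth rate
        d_omega = omega * (phillips(lam) - ALPHA)
        d_lam = lam * (g - ALPHA - BETA)
        d_d = investment(profit) - profit - d * g  # debt grows when investment > profit
        omega += d_omega * dt
        lam += d_lam * dt
        d += d_d * dt
        traj.append((omega, lam, d))
    return traj

path = simulate()
print("final (omega, lam, d):", tuple(round(x, 3) for x in path[-1]))
```

Even this toy version shows the mechanism the comment describes: the interest burden on accumulated debt eats into the profit share, which feeds back into investment and growth.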

Regulations and the minimum wage mean that hiring the worst workers in America is more expensive than getting the same production from workers in third-world countries, but the most marginal workers are not being produced at any lower a rate in America than they were in the past.

Not sure I followed your second clause, was 'most marginal' a euphemism for 'worst'?
yeah, like the phrase "on the margins". Sorry if that's not in common parlance!
Gotcha, thanks!

The early parts of this seem to fall apart when you switch from first-order qualitative reasoning to thinking about derivatives. Our basic observation is that the rate at which new technologies are automating away jobs now exceeds the rate at which new jobs are being created. Yes, this indicates a deficiency in the engine of reemployment, but putting all the focus on one side of the inequality seems disingenuous; every factor which changes the values on either side matters, cumulatively. Yeah, reemployment isn't working; but we're also pushing harder on it... (read more)

[This comment is no longer endorsed by its author]
Eliezer Yudkowsky
A testable consequence of your assertion is that labor market turnover should not have been higher in previous decades than now. Do you believe this would appear in the data? Would you bet on it?

I predict that labor market turnover is higher now than it was in past decades, for as many decades as we have reliable data.

Goes and checks.

BLS data on total separations as a percentage of total employment. It only goes back to Dec 2000, but that is enough to surprise me: the separation side of the turnover fell from 4.0 to 3.2. So my hypothesis, that the rate of automation has increased by enough to significantly impact the labor market, is falsified.

Edit: Actually, after a bit more research I'm not so sure - in particular, I found this which claims that there are 2.7M temporary workers (+50% over the last four years). Converting temporary-worker count into turnover rate is tricky, but this is a symptom you'd expect if turnover has increased, and I don't think it's included in the BLS data.

Eliezer Yudkowsky
+1 for empiricism. Although on due reflection I think the number we want is not so much turnover in people, but the number of job positions that are eliminated without someone being rehired for them. There might be economists tracking this. Turnover probably correlates with this to some degree, but not perfectly.

The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons. (Plus, we are really stunningly stupid about matching educational supply to labor demand. How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices

... (read more)
This is the idea of exposing students to actual workplaces. That would provide much better information than full-time teachers could.

It seems to me that a good model of the great recession should include as its predictions that male employment would be particularly hard-hit even among recessions (see https://docs.google.com/spreadsheet/ccc?key=0AofUzoVzQEE5dFo3dlo4Ui1zbU5kZ2ZENGo4UGRKbFE#gid=0). I think this probably favors ZMP (see http://marginalrevolution.com/marginalrevolution/2013/06/survey-evidence-for-zmp-workers.html). Edit: after normalizing the data with historical context, I'm not so sure.

secularly increasing long-term unemployment

What does "secularly" mean here...? I don't think I'm familiar with this usage.

Said Achmiz
Interesting! Thanks. I was totally not aware of that alternate definition.
Rob Bensinger
Latin sæculum ("era, generation, century") came to be associated with the ever-changing human, political, worldly order, as contrasted with the divine one. Secular time moves in circles; divine time spins in place.
Compare temporal, which is also used to mean "worldly" (as opposed to divine).

I'm not sure I completely follow your reply to the hot dog and bun example. As the questioner pointed out, we may simply be reaching a saturation of the amount of hot dogs and buns we need. Maybe I'm being unfair but I feel you hand-waved that concern away. You say that:

We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced.

Which is true, but doesn't address the question, because you don't have to have robots replace 100% of humans for some people to find themselves without a job.

It's plausible we

... (read more)

I think the view that automation is now destroying jobs, the view that the economy always re-allocates the workforce appropriately and the views defended in this anti-FAQ all rest on a faulty generalisation. The industrial revolution and the early phases of computerisation produced jobs for specific reasons. Factories required workers and computers required data entry. It wasn't a consequence of a general law of economics, it was a fortuitous consequence of the technology. We are now seeing the end of those specific reasons, but not because of a general tr... (read more)

The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.

I wouldn't be surprised if this was the case, and I'd be very surprised if the end of cheap (at least, much cheaper) petroleum has nothing to do with that.

I would think it had to do with that and ALSO the diminishing returns of pulling more of the world's population into market systems (which is getting a hell of a lot closer to saturation), population growth (which is slowing), rolling out already existing technology and infrastructure to areas that didn't have it, finding interesting and useful ways to arrange matter that require large amounts of money to be spent, and general diminishing capacity for exponential growth in a world that is becoming much closer to 'full' in terms of rates of what we can suck from ecosystems and the ground. Maintenance cost for all of our established physical and social capital we have built up also has to be considered.

It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!

I believe I have already pointed out that automatic trains already exist. Putting a human superintendent onto a train with nothing to do except watch it drive itself would be quite ineffective, because the job is so boring they are unlikely to concentrate. I believe existing driverless trains are monitored by CCTV, which is more effective since the monitors actually have something to do in flicking between channels, and could be applied to driverless cars.

Unless we can harness the power of Desert Bus for Hope. Sadly, I don't think it applies here.

I admit, I stopped reading the linked paper when I saw the page count, but I don't see why you're rejecting decades of 60<IQ<100 AIs as implausible (uninteresting is another matter, but some people are interested). An IQ70 AI is little more able to self-improve than an IQ70 human is able to improve an AI. Even an IQ120 human would have trouble with that. The task of bringing AIs from IQ60 to IQ140, where they can start meaningfully contributing to AI research, falls to IQ180 humans, and will probably take a long time.

Not that talking about the IQ of ... (read more)

An IQ70 AI is little more able to self-improve than an IQ70 human is able to improve an AI

This is not obviously true. We're a lot less well optimized for improving code than some conceivable AIs can be: a seed AI with relatively modest general intelligence but very good self-modification heuristics might still end up knocking our socks off.

That said, there's a much larger design space where this isn't the case.

Rob Bensinger
Human programmers at IQ 70 also can't be run anywhere near as fast as AI programmers at IQ 70.

My [unverified] intuition on AI properties is that the delta between current status and 'IQ60AI' is multiple orders of magnitude larger than the delta between 'IQ60AI' and 'IQ180AI'. In essence, there is not that much "mental horsepower" difference between the stereotypical Einstein and a below-average person; it doesn't require a much larger brain or completely different neuronal wiring or a million years of evolutionary tuning.

We don't know how to get to IQ60AI; but getting from IQ60AI to IQ180AI could (IMHO) be done with currently known methods in many labs around the world, by the current (non-IQ180) researchers, rapidly (ballpark of 6 months, maybe). We know from history that a zero-IQ process can optimize from monkey-level intelligence to an Einstein by brute force; so in essence, if you've got IQ70 minds that can be rapidly run and simulated, then just apply more hardware (for more time-compression) and optimization, as that gap seems to require exactly zero significant breakthroughs to get to IQ180.
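The "zero-IQ process" claim above can be made concrete with the simplest possible blind optimizer: random mutation plus keep-the-better-one selection. This toy sketch (the fitness function, dimensions, and parameters are all arbitrary choices of mine) shows such a process improving a solution with no model of the problem at all:

```python
# A deliberately "zero-IQ" optimizer: random mutation plus selection.
# It never models the fitness landscape, yet it still climbs it, which is
# the point: selection alone optimizes with no intelligence in the loop.
import random

def fitness(genome):
    """Higher is better; the peak is at all-zeros. Purely illustrative."""
    return -sum(x * x for x in genome)

def evolve(dims=10, steps=5_000, step_size=0.1, seed=0):
    """Blind mutation hill-climb; returns (initial_fitness, final_fitness)."""
    rng = random.Random(seed)
    genome = [rng.uniform(-5, 5) for _ in range(dims)]
    start_fitness = fitness(genome)
    for _ in range(steps):
        mutant = [x + rng.gauss(0, step_size) for x in genome]
        if fitness(mutant) > fitness(genome):  # selection: keep improvements only
            genome = mutant
    return start_fitness, fitness(genome)

before, after = evolve()
print(f"fitness before: {before:.1f}, after: {after:.1f}")
```

Of course, biological evolution took geological time precisely because this kind of search is so inefficient; the comment's argument is that hardware and time-compression change that arithmetic.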

While I agree with almost all of the antifaq (the general point is apparently more plausible to economists than non-economists), this is pretty misleading:

"The future cannot be a cause of the past."

True, but human expectations about the future can be very important in the present. If you expect that 5 years from now FAI will take over, you won't bother to make many long-term investments like building factories and training new workers.

Agree with your agreement about the FAQ, and your disagreement with causal decision theory. It's to be noted that EY acknowledges the limitations of that apparently misleading statement elsewhere with very sophisticated arguments. Though, they were probably judged too tangential to elaborate upon here. In fact, he pioneered alternative perspectives (see: updateless decision theory and timeless decision theory)

So I decided to see how much negative karma I could amass before a "singularity type event."

Y'know, that was a fun game when sites like Slashdot first started to implement a karma system -- it was all new and shiny, and of course people wanted to see how to break one of those.

Hint: that was a long time ago. By now, negative karmawhoring is strictly the domain of a certain class of creatures not known for their smarts or good hair styling.

Only marginally related to the topic -- not sure if this belongs here or to the Open Thread:

What do people think the effect of raising or lowering the retirement age would be on unemployment? Intuitively, I'd guess that lowering the retirement age means that more old people will retire, and more young people will be needed to take up their jobs, lowering the unemployment rate (and effectively transferring wealth from old to young generations). But I can remember very few people (almost exclusively in meatspace) ever suggesting lowering the retirement age t... (read more)

Lowering the retirement age also increases the number of people receiving pensions and other retirement benefits; many of those benefits are underfunded (depending on the country) and quite expensive to pay out. New, young workers also tend to come in at lower pay scales than those of the older workers they replace. Those two effects can plausibly increase the cost of retirement to the government, so governments don't want people to retire early. That might also function as a wealth transfer from elderly people to corporations (or shareholders).
Seems to me that Europe is reducing unemployment by letting more people study at universities. It's like a pissing contest over which country will have a higher % of people with university degrees, regardless of their quality. -- Sometimes it seems like we will soon have a university in every village, everyone will be a student until their thirties, most students will leave the university illiterate, and then they will either have to work till age 80 to retire, or spend their whole lives unemployed or working for the government. Actually, this can make sense from a utilitarian viewpoint. Young people supposedly can enjoy their free time better than old people, so we are actually trading retirement for longer childhood. It just feels horrible to people with priorities like mine. I would rather learn efficiently, work efficiently, and retire soon, knowing that I have already paid my debts to society. Probably because I am having more fun now than when I was younger -- I have more freedom, more money, more professional and social skills, and my health is still okay -- so the only complaint is that I feel like I don't have free time for anything, and the end of the rat race is nowhere near. (But that's also an issue with my horrible time management and not being strategic enough.)
True that. I hadn't thought of it as a contest among countries but only as one among people within each country, but now that I think about it...

I am very interested to see how Eliezer's perspective has changed now in 2024.

The effect of automation is to lessen the need for human engagement in producing something. But automation isn't free. There is sometimes a decision between investing in a new machine or in another employee. Machines are a form of capital. As you own more capital in the form of machines, you have an edge over someone who doesn't. In the capitalist model, advantages must be exploited as much as possible to remain competitive.

In our society, the difficulty seems to be how to address the problem that less human effort is required. According to supply and demand, this me... (read more)