This week Thomas Philippon posted a paper (PDF) claiming that TFP growth is linear, not exponential. What does this mean, and what can we conclude from this?

A few people asked for my opinion. I’m not an economist, and I’m only modestly familiar with the growth theory literature, but here are some thoughts.

Background for non-economists

Briefly: what is TFP (total factor productivity)? It’s basically a technology multiplier on economic growth that makes capital and labor more productive. It isn’t measured directly, but calculated as a residual by taking capital and labor increases out of GDP growth. What remains is TFP.
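To make the residual concrete, here's a minimal sketch of the standard growth-accounting calculation, assuming a Cobb-Douglas production function with a capital share of about 0.3 (the function and the numbers are illustrative, not Philippon's):

```python
# Minimal sketch of the Solow residual under standard growth accounting,
# assuming Y = A * K**alpha * L**(1 - alpha) with capital share alpha ~ 0.3.
# (Illustrative only; actual TFP series are constructed more carefully.)
def tfp_growth(gdp_growth, capital_growth, labor_growth, alpha=0.3):
    """TFP growth = GDP growth minus the share-weighted
    contributions of capital and labor growth."""
    return gdp_growth - alpha * capital_growth - (1 - alpha) * labor_growth

# Example: 3% GDP growth, 4% capital growth, 1% labor growth
print(tfp_growth(0.03, 0.04, 0.01))  # ~0.011, i.e. ~1.1% TFP growth
```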

In neoclassical growth theory, TFP matters because it enables economic growth. Without increases in TFP, growth would plateau at some per-capita income level limited by technology. We can increase per-capita output if and only if we continue to improve the productivity multiplier from technology.

What does the paper say?

Philippon’s core claim is that a linear model of TFP growth fits the data better than an exponential model. Over long enough time periods, this is actually a piecewise linear model, with breaks.
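Concretely (my paraphrase of the distinction, not the paper's exact notation): the linear model says TFP gains a constant increment each year, while the exponential model says it grows by a constant percentage:

$$\text{linear: } A_{t+1} = A_t + b \qquad \text{vs.} \qquad \text{exponential: } A_{t+1} = (1+g)\,A_t$$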

To demonstrate this, Philippon subjects the two models to various statistical tests on multiple data sets, mostly 20th-century, from the US and about two dozen other countries. In a later section, he tests the models on European data from 1600–1914. The linear model outperforms on pretty much every test:

[Figure from the paper: Model D is linear; Model G is exponential]

One theoretical implication of linear TFP growth is that GDP per capita can continue to grow without bound, but that growth will slow over time. Depending on the assumptions you make, growth will converge either to zero or to some positive constant rate.
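One way to see the slowdown: if TFP follows $A_t = A_0 + bt$, then its proportional growth rate is

$$g_t = \frac{b}{A_0 + bt} \longrightarrow 0,$$

so the same absolute gain is a shrinking percentage of an ever-larger base. (Whether GDP per capita then converges to zero growth or to a positive constant rate depends on the rest of the model, e.g. the assumptions about capital accumulation.)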

What to think?

First, I think the evidence Philippon presents, at least for the 20th century, is compelling. It really does seem that TFP has been growing linearly, at least over the last several decades. This is a mathematical model of the Great Stagnation.

Over longer periods of time, however, a pure linear model doesn’t work. TFP, and GDP, clearly grow faster than linearly:

[Chart: Our World in Data]

In fact, our best estimates of very long-run GDP show growth that is faster than exponential:

[Chart: Our World in Data]

(Note that both of these charts are on a log scale.)

Philippon deals with this by making the model piecewise linear: at certain points, the slope of the line jumps discretely to a new value. He puts the 20th-century break at about 1933; pre-20th-century breaks occurred in 1650 and 1830.

The breaks are determined via statistical tests, but they are presumed to represent general-purpose technologies (GPTs). The 1930s were a turning point in electrification, especially of factories; 1830 marked the beginning of the railroad era. The 1650 break is less clear; it might reflect rising labor input that is not accounted for in the calculations, or the rise of cottage industry.

This makes sense, but it leaves open what is to me the most interesting question: how often do these breaks occur, and how big are the jumps?

Philippon briefly suggests a model in which the breaks are random, the result of a (biased) coin flip each year, with probability ~0.5% to 1%. However, I find this unsatisfying. Again, over the long term, we know that growth is super-linear and probably even super-exponential. If the piecewise-linear model is correct, then over time the breaks should be larger and spaced closer together. But the Poisson process implied by Philippon’s model doesn’t fit this historical pattern. And there isn’t even a suggestion of how to model the size of the change in growth rate at each break. So, this model is incomplete.
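To illustrate the point about spacing, here's a quick simulation sketch of the coin-flip model (the probability is my own pick from within the paper's ~0.5%–1% range):

```python
import random

# Sketch: Philippon-style random breaks as an annual biased coin flip.
# Under this model the expected gap between breaks is constant (1/p),
# so breaks do not get closer together over time.
random.seed(42)
p = 0.0075  # ~0.75%/year, within the paper's suggested range
breaks = [year for year in range(1000, 2000) if random.random() < p]
gaps = [b - a for a, b in zip(breaks, breaks[1:])]
print("break years:", breaks)
if gaps:
    print("mean gap: %.0f years (theoretical: %.0f)"
          % (sum(gaps) / len(gaps), 1 / p))
```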

The interesting idea this paper points to, IMO, is not about long-run growth but about short-run growth. It suggests to me that economic progress might be a “punctuated equilibrium” rather than smooth and continuous. I don’t think this changes our view of progress over the long term. But it could change our view of the importance of GPTs. And it could help to explain the recent growth slowdown (“stagnation”).

If this model is right, then “why is GDP growth slowing?” is answered: it is normal and expected. But there might be a different stagnation question: is the next GPT “late”? When should we even expect it? Relatedly, why don’t computers show up as a GPT causing a distinct break in the linear TFP growth path? Or is that still to come (the way the break from electrification didn’t show up until the 1930s)?

Stepping back a bit, it seems to me that the big puzzle of growth theory is that we see super-exponential growth over the very long term, but sub-exponential growth over the last several decades. I don’t think we yet have a unified model that explains both periods.

Anyway, I found this paper interesting and I hope to see other economists responding to it and building on it.

See also Tyler Cowen’s comments.

Comments

Here is my take on this:

Looking at FRED's data, TFP growth in the US has averaged 0.7%/year with an annual standard deviation of around 1.1%/year since 1955. The data looks consistent with both a linear trend and an exponential trend, essentially because the time window is too short, so I think based on this data alone it's impossible to get a large likelihood factor for one model vs another in either direction. My estimate of the value added of increasingly elaborate statistical methods on that evidence is low.

The "fast growing economies" data the paper looks at is highly misleading because it's a well-known fact that growth in these countries slows down as they become less poor compared to the frontier economies. Therefore we'd expect all growth in these countries to slow down as they become richer, not just TFP growth. It's not unexpected that a linear model can get a better fit in this particular case. The really interesting question is whether this is also true of frontier economies, i.e. ones that can't "catch up" to countries that are richer and more productive than they are.

The paper also looks at data from before 1955, but here, as you say, it needs to start introducing new pieces into the piecewise linear model, which undercuts the persuasiveness of its argument considerably. The main problem is that a piecewise linear function with only two pieces already has four free parameters, compared to an exponential fit which has only two. It's not at all surprising that a model with four free parameters fits the data much better. Adding more breaks makes this problem even worse, since as far as I can see the paper never puts the exponential growth model on parameter parity with the piecewise linear growth model for a fair comparison.

My guess is that if we look at the post-1800 (or post-1600) period, the exponential growth model will outperform any linear growth model at parameter parity. That means a constant exponential growth rate model beats a constant linear growth rate model. And if you complicate the linear model by adding time-varying growth rates with punctuated equilibria whose arrival times are sampled from a Poisson process or whatever, there will be a similar way to adjust the exponential growth model with extra parameters (via autocorrelations etc.) such that it still outperforms the linear model at parameter parity. Of course, in the "too many parameters" regime the linear model will be able to approximate the exponential one well enough that in-sample performance could end up comparable, but the exponential model will not underperform. I'm willing to bet real money on this.

There's also a broader point here that TFP itself is a dubious measure. It's a residual estimate of a log-linear regression of GDP on labor and capital stocks in what is certainly a misspecified model (Cobb-Douglas, with only labor and capital inputs!) There's also a well known phenomenon that if in your sample labor and capital shares of national income have remained relatively stable, you'll end up estimating that a Cobb-Douglas function is a good fit for it simply because a Cobb-Douglas is defined by the property that the factor shares of income are constant. I think taking it too seriously is a mistake and the fact that such an artificial measure may have sub-exponential growth definitely doesn't imply that "growth is linear and not exponential".

~~Overall I think the paper is not informative or interesting, and seems to be motivated more out of a desire to be contrarian than anything else.~~

Update: I've dug into the data a little bit more and it seems like the main advantage of a linear model on near-term data is that it explains away the "great moderation" puzzle. The paper does point this out, but because it doesn't do any log likelihood comparisons it's not easy to appreciate how crucial this is. An exponential growth model with additive noise is only somewhat worse on the data since 1955, perhaps a likelihood factor of ~ 2 favoring the linear model. However, an exponential growth model with multiplicative noise is overwhelmingly worse, with likelihood factors of 50 or more in favor of the linear model.
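For concreteness, the distinction (in my notation, not the commenter's) is between

$$A_{t+1} = (1+g)\,A_t + \varepsilon_t \quad \text{(additive noise)} \qquad \text{and} \qquad A_{t+1} = (1+g)\,A_t\,(1+\varepsilon_t) \quad \text{(multiplicative noise)}.$$

Under multiplicative noise, absolute fluctuations scale with the level of TFP, so the model predicts ever-larger swings, which is presumably why it fares so much worse against data showing a moderation in volatility.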

This is a subtlety that I think the paper doesn't emphasize enough. It does note that the linear model has no heteroskedasticity puzzle, but in fact separating the noise structure from the trend structure takes away most of the edge the linear model has over the exponential model on postwar data. Likewise, adding noise decay to both the linear and the exponential model gets their likelihood ratios in the same ballpark, with a likelihood ratio of < 2 favoring the linear model.

For the moment I've updated towards recent data providing somewhat stronger evidence in favor of a linear model than I'd thought, but I think extrapolating the low-noise predictions of the linear model into the future is a dangerous practice if you want to generate forecasts, and probably leads to overconfidence. I think if the paper had actually mentioned this point about log likelihoods I would have been more sympathetic. As a result, I've crossed out the last sentence of my original comment.

Following up this comment, anyone can run this Python script to confirm the finding: if we have TFP whose logarithm follows a Brownian motion with drift, with ~0.7%/year mean growth and ~1.1%/year annual volatility, for 133 years, then even though the correct model is exponential growth, a piecewise linear model with two pieces consistently gets lower log-L2 loss when we fit it to the data.
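The script itself is linked rather than reproduced; here is a hedged reconstruction of the kind of experiment described (the loss definition and fitting details are my assumptions):

```python
import numpy as np

# Reconstruction sketch: simulate log-TFP as a Brownian motion with drift
# (~0.7%/yr mean growth, ~1.1%/yr volatility, 133 years), then compare an
# exponential fit against the best two-piece linear fit. "Log-L2 loss" is
# read here as squared error on the log scale (an assumption).
rng = np.random.default_rng(0)
T, mu, sigma, trials = 133, 0.007, 0.011, 200
t = np.arange(T)
wins = 0
for _ in range(trials):
    log_tfp = np.cumsum(mu + sigma * rng.standard_normal(T))
    tfp = np.exp(log_tfp)

    # Exponential model: a single straight line in log space.
    slope, intercept = np.polyfit(t, log_tfp, 1)
    loss_exp = np.sum((log_tfp - (slope * t + intercept)) ** 2)

    # Piecewise linear model in levels: best single breakpoint.
    loss_lin = np.inf
    for k in range(10, T - 10):
        left = np.polyval(np.polyfit(t[:k], tfp[:k], 1), t[:k])
        right = np.polyval(np.polyfit(t[k:], tfp[k:], 1), t[k:])
        fit = np.concatenate([left, right])
        if np.all(fit > 0):  # guard: log of the fit must be defined
            loss_lin = min(loss_lin, np.sum((log_tfp - np.log(fit)) ** 2))
    wins += loss_lin < loss_exp

print(f"two-piece linear beat exponential in {wins}/{trials} simulations")
```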

[TFP] isn’t measured directly, but calculated as a residual by taking capital and labor increases out of GDP growth.


What does it mean to take out capital increases?

I assume taking out labor increases just means adjusting for a growing population. But isn't the reason we get per-capita economic growth at all that we build new stuff (including intangible stuff like processes and inventions) that enables us to build more stuff more efficiently? And can't all that new stuff be thought of as capital?

Or is what's considered "capital" only a subset of that new stuff that fits into particular categories — maybe tangible things like factories and intangible things only when someone puts an explicit price on them and they show up on a firm's balance sheet?

If that's the case, it seems like TFP is kind of a God-of-the-gaps quantity that is mostly a consequence of what's categorized as capital or not. And capital + TFP is the more "real" and natural quantity.

(But that might be totally wrong, because I don't know what I'm talking about.)

When you zoom in on an exponential curve close enough, it looks linear.

Yes, but it doesn't stop looking exponential. The claim here is that a linear model fits better than an exponential one, at least over time periods that are several decades long.

Stepping back a bit, it seems to me that the big puzzle of growth theory is that we see super-exponential growth over the very long term, but sub-exponential growth over the last several decades. I don’t think we yet have a unified model that explains both periods.

I thought that was due to population growth stopping?

Unclear, because even though population growth has slowed, the number of researchers and the amount of research investment continue to grow, I think. So I'm not sure we would expect a growth slowdown yet. (Although it's concerning for the long-term future, since of course the share of the population devoted to research can't exceed 100%.)