One of the biases that is extremely prevalent in science, yet rarely talked about anywhere, is the bias toward models that are mathematically simple and easy to operate on. Nature doesn't care all that much for mathematical simplicity. In particular, I'd say that as a good first approximation, if you think something fits an exponential function of either growth or decay, you're wrong. We got so used to exponential functions, and to how convenient they are to work with, that we completely forgot that nature doesn't work that way.

But what about nuclear decay, you might be asking now... That's as close to true exponential decay as you can get... and it's still nowhere close enough. Well, here's a log-log graph of the Chernobyl release versus a theoretical exponential function.

Well, that doesn't look all that exponential... The thing is that even if you have perfect exponential decay processes, as with the decay of a single nuclide, the exponential character is lost once you start mixing a heterogeneous group of such processes. Early in time, the faster-decaying cases dominate; then, gradually, those that decay more slowly take over. Somewhere along the way you might have to deal with the products of decay (pure depleted uranium gets **more** radioactive with time at first, not less, as it decays into short-half-life nuclides), and perhaps even with some processes you didn't have to consider (like the creation of fresh radioactive nuclides via cosmic radiation).
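This mixing effect is easy to demonstrate. Here's a minimal sketch; the half-lives and initial activities below are made up for illustration, not taken from any real inventory:

```python
# Made-up half-lives (in days) and initial activities. Each component is a
# perfect exponential on its own, but the sum of them is not.
half_lives = [8, 30, 365, 10_000]
initial_activity = [1000, 100, 10, 1]   # faster decayers dominate early on

def total_activity(t_days):
    """Total activity of the mixture at time t (arbitrary units)."""
    return sum(a * 0.5 ** (t_days / hl)
               for a, hl in zip(initial_activity, half_lives))

# A true exponential has a constant ratio activity(t + d) / activity(t)
# for every t. For the mixture the ratio drifts badly: fast decay early,
# slow decay later.
r1 = total_activity(10) / total_activity(0)       # early 10-day ratio
r2 = total_activity(1000) / total_activity(990)   # late 10-day ratio
print(r1, r2)
```

On a semilog plot this mixture bends instead of following a straight line, which is exactly the non-exponential behavior the graph above shows.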

And that's the ideal case of counting how much radiation a sample produces, where the underlying process is exponential by the basic laws of physics; it still gets us orders of magnitude wrong. When you're measuring something much vaguer, with much more complicated underlying mechanisms, like changes in population, economy, or processing power, you should expect the naive model to fail even worse.

According to the IMF, the world economy in 2008 was worth 69 trillion dollars PPP. Assuming 2% annual growth and a naive growth model, the entire world economy produced 12 cents PPP worth of value over the entire first century. And assuming a fairly stable population, an average person in 3150 will produce more than the entire world does now. And with enough time, the dollar value of one hydrogen atom will be higher than the current dollar value of everything on Earth. And of course, with proper time discounting of utility, the life of one person now is worth more than half of humanity a millennium into the future; exponential growth and exponential decay are both equally wrong.
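For the curious, the arithmetic behind those first two claims can be checked directly. This is a sketch assuming the naive constant-2%-growth model and a fixed population of roughly 6.7 billion; the exact crossover year depends on those assumptions:

```python
GROWTH = 1.02        # naive constant 2% annual growth
GWP_2008 = 69e12     # IMF 2008 figure: $69 trillion PPP
POP = 6.7e9          # assumed fixed population

def gwp(year):
    """World output in a given year under the naive growth model."""
    return GWP_2008 * GROWTH ** (year - 2008)

# Extrapolating backwards: total world output over years 1-100 AD.
first_century_total = sum(gwp(y) for y in range(1, 101))
print(f"first-century world output: ${first_century_total:.2f}")  # about 12 cents

# Extrapolating forwards: the first year in which one average person
# "produces" more than the entire 2008 world economy.
crossover = 2008
while gwp(crossover) / POP <= GWP_2008:
    crossover += 1
print(crossover)  # around 3150
```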

To me these all look like clear artifacts of our growth models, but there are people who are so used to them that they take predictions like that seriously.

In case you're wondering, here are some estimates of past world GDP.

I have this vague idea that sometime in our past, people thought that knowledge was like an almanac: a repository of zillions of tiny true facts that summed up to being able to predict stuff about stuff, but without a general understanding of how things work. There was no general understanding, because any heuristic that would begin to explain how things work would immediately be discounted by a single tiny fact, easily found, that contradicted it. Detail and concern with minutiae and complexity are actually *anti-science* for this reason. It's not that deta…

I recall a discussion I had with a fellow econ student on the effects of higher taxes. He said something to the effect of, "Higher taxes are inefficient, and all you need to do to prove that is to draw the graph." (Unfortunately the topic changed before I could point out the problems with this statement.)

This (rather common) view reflects two major problems with modeling (particularly in economics): an amoral value (economic efficiency) becomes a normative value because it's relatively easy to understand and (in theory) measure, and, more relevant as an example for this post, the model is seen as demonstrating reality, and not …

I think that this is a heuristic rather than a bias, because favoring simple models over complex ones is generally a good thing. In particular, the complexity prior is claimed by some to be a fundamental principle of intelligence.

To me, the problem is not "Mathematical Simplicity Bias," but rather failing to check the model against empirical data. It seems totally reasonable to start with a simple model and add complexity as necessary to explain the phenomenon. (Of course, it is best to test the model on new data.)

Also, if you're going to claim that Mathematical Simplicity Bias is one of the biases that are "extremely prevalent in science," it would help to provide real examples of scientists failing because of it.

In general, rules of thumb have two dimensions: applicability (the size of the domain where the rule applies) and efficacy (the amount or degree of guidance the rule provides).

Simplicity, a.k.a. Occam's Razor, is mentioned frequently as a guide in these (philosophy of science/atheist/AI aficionado) circles. However, it is notable more for its broad applicability than for its efficacy compared with other, less broadly applicable guidelines.

Try formulating a rule for listing natural numbers (positive integers) without repeats that does not generally tre…

A discharging capacitor is a pretty good fit for exponential decay. (At least, until it's very very close to being completely discharged.)
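For what it's worth, the RC discharge case really does pass the constant-ratio test that the mixed-decay case fails. A quick sketch with illustrative component values:

```python
import math

# RC discharge: V(t) = V0 * exp(-t / (R*C)). Illustrative values:
# 10 kOhm, 100 uF, 5 V, so the time constant tau = R*C = 1 second.
R, C, V0 = 10e3, 100e-6, 5.0
tau = R * C

def v(t):
    """Capacitor voltage t seconds after the discharge starts."""
    return V0 * math.exp(-t / tau)

# The exponential's signature: the ratio over a fixed interval is the
# same no matter when you measure it (unlike a mixture of decays).
print(round(v(1.0) / v(0.0), 6), round(v(4.0) / v(3.0), 6))  # equal
```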

I don't see how they can even try to apply *any* curve to something that has feedbacks over time, like population or GDP. Technology is an obvious confounding variable there, with natural events coming into play as well.

"All models are wrong, some are useful" - George Box

Comparing a made-up exponential to a process that no scientist who knew anything about radioactivity would expect to model with anything but a sum of coupled exponentials is a bit of a straw man. There's a bias toward simplicity, certainly, but there's not *that* much bias!

Obligatory topical XKCD link. Though it's linear, not exponential.

I do not think your claim is what you think it is.

I think your claim is that some people mistake the model for the reality, the map for the territory. Of course models are simpler than reality! That's why they're called "models."

Physics seems to have gotten wiser about this. The Newtonians, and later the Copenhagenites, did fall quite hard for this trap (though the Newtonians can be forgiven to some degree!). More recently, however, the undisputed champion physical model, whose predictions hold to 987 digits of accuracy (not really), has the …

Any differentiable function is approximately linear over a small enough scale. ;)

My computer is biased toward not running at 100 petahertz and having 70 petabytes of RAM. My brain is biased toward not using so many complicated models that it needs 1 trillion neurons each with 1 million connections and firing up to 10,000 times per second.

And now for something perhaps more useful than sarcasm: it seems to me that people tend simply to come up with the consistent model that is either the easiest to compute or the simplest to describe. Are heuristics against this, such as "exponential growth/decay rarely occurs in nature," quickly spread and often used? How about better approximations such as logistic growth?
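To sketch that last point (with parameters chosen arbitrarily for illustration): a logistic curve is nearly indistinguishable from an exponential early on, which is exactly why the exponential looks so tempting, but it saturates instead of exploding.

```python
import math

# Illustrative parameters: carrying capacity K, growth rate r, start x0.
K, r, x0 = 1000.0, 0.5, 1.0

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    # Standard logistic solution with x(0) = x0.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (1, 5, 20, 40):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two curves agree closely; later the exponential explodes
# while the logistic levels off near K.
```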


Generally (and therefore somewhat inaccurately) speaking, one way that our brains seem to handle the sheer complexity of computing in the real world is a tendency to simplify the information we gather.

In many cases these sorts of extremely simple models didn't start that way. They may have started with more parameters and complexity. But as a model is repeated, explained, and applied, it becomes, in effect, simpler. The example begins to represent the entire model, rather than serving to show only a piece of it.

Technically the exponential radioactive…