Non-causal models

Non-causal models are quite common in many fields, and can be quite accurate. Here, predictions are made based on (a particular selection of) past trends, and it is assumed that these trends will continue in the future. No causal explanation is offered for the trends under consideration: it's just assumed they will go on as before. Non-causal models are thus particularly useful when the underlying causality is uncertain or contentious. To illustrate the idea, here are three non-causal models in computer development:

  1. Moore's laws about the regular doubling of processing speed, hard disk size, and other computer-related parameters.
  2. Robin Hanson's model where the development of human brains, hunting, agriculture and the industrial revolution are seen as related stages of accelerations of the underlying economic rate of growth, leading to the conclusion that there will be another surge during the next century (likely caused by whole brain emulations or AI).
  3. Ray Kurzweil's law of time and chaos, leading to his law of accelerating returns. Here the inputs are the accelerating evolution of life on earth, the accelerating 'evolution' of technology, followed by the accelerating growth in the power of computing across many different substrates. This leads to a consequent 'singularity', an explosion of growth, at some point over the coming century.
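To make the flavour of such models concrete, here is a minimal sketch of what a non-causal trend model does: fit an exponential to past observations and extrapolate, with no claim about what drives the trend. The numbers are made up for illustration (a clean two-year doubling), not Moore's actual data.

```python
# A minimal sketch of a non-causal trend model in the style of Moore's law:
# fit an exponential to past observations and read off the doubling time,
# with no claim about what causes the trend. Data are hypothetical round
# numbers, not actual transistor counts.

import math

def doubling_time(years, log2_values):
    """Least-squares slope of log2(value) vs year; doubling time = 1/slope."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(log2_values) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(years, log2_values)) \
            / sum((x - mean_x) ** 2 for x in years)
    return 1.0 / slope

# Hypothetical trend: the value doubles every 2 years.
years = [1970, 1972, 1974, 1976, 1978]
values = [1e3, 2e3, 4e3, 8e3, 16e3]
log2_values = [math.log2(v) for v in values]

print(doubling_time(years, log2_values))  # ≈ 2.0 years
```

The model's entire content is the fitted doubling time; everything causal is left outside it, which is exactly what makes the question of its resilience interesting.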

Before anything else, I should thank Moore, Hanson and Kurzweil for having the courage to publish their models and put them out there where they can be critiqued, mocked or praised. This is a brave step, and puts them a cut above most of us.

That said, though I find the first argument quite convincing, I have to say I find the other two dubious. Now, I'm not going to claim they're misusing the outside view: if you accuse them of shoving together unrelated processes into a single model, they can equally well accuse you of ignoring the commonalities they have highlighted between these processes. Can we do better than that? There has to be a better guide to the truth than just our own private impressions.

Counterfactual resilience

One thing I'd like to do is test the resilience of these models - how robust they are to change. If model M makes prediction P from trends T, and the real outcome will be O, we can test resiliency in two ways. First, we can change the world to change T (and hence P) without changing O; or we can change the world to change O without changing T (and hence P). If we can do either or both, this is a strong indication that the model doesn't work.

This all sounds highly dubious - how can we "change the world" in that way? I'm talking about considering counterfactuals: alternate worlds whose history embodies the best of our knowledge as to how the real world works. To pick an extremely trivial example, imagine someone who maintains that the West's global domination was inevitable four centuries after Luther's 95 theses in 1517, no matter what else happened outside Europe. Then we can imagine counterfactually diverting huge asteroids to land in the Channel, or importing hyper-virulent forms of bird flu from Asiatic Russia. According to everything we know about asteroid impacts, epidemiology and economics, this would not have led to a dominant West for many centuries afterwards.

That was an example of keeping T and P, and changing the outcome O. It is legitimate: we have preserved everything that went into the initial model, and made the prediction wrong. We could take the reverse approach: changing T and P while preserving the outcome O. To do so, we could imagine moving Luther (or some Luther-like character) to 1217, without changing the rest of European history much. To move Luther back in time, we could perfectly well imagine that the Catholic church had started selling and abusing indulgences much earlier than they did - corrupt clerics were hardly an impossible idea in the middle ages. It requires a few religious and social changes to have the 95 theses make sense in the thirteenth century, but not all that many. Then we could imagine that Luther-like character being ignored or burnt, and the rest of Western history happening as usual, without western world dominance happening four centuries after that non-event (which is what M would have predicted). Notice that in both these cases, considering counterfactuals allows us to bring our knowledge or theories about other facts of the world to bear on assessing the model - we are no longer limited to simply debating the assumptions of the model itself.

"Objection!" shouts my original strawman, at both my resiliency tests. "Of course I didn't specify 'unless a meteor impacts'; that was implicit and obvious! When you say 'let's meet tomorrow', you don't generally add 'unless there's a nuclear war'! Also, I object to your moving Luther three centuries before and saying my model would predict the same thing in 1217. I was referring to Luther nailing up his theses, in the context of an educated literate population, with printing presses and a political system that was willing to stand up to the Catholic church. Also, I don't believe you when you say there would need to not be 'all that much' religious and social changes for early Luther to exist. You'd have to change so much, that there's no way you could put history back on the 'normal' track afterwards."

Notice that the conversation has moved on from 'outside view' arguments, to making explicit implicit assumptions, extending the model, and arguing about our understanding of causality. Thus if these counterfactual resiliency tests don't break a model, they're likely to improve it, our understanding, and the debate.


The resilience of these models

So let's apply this to Robin Hanson's and Ray Kurzweil's models. I'll start with Robin's, as it's much more detailed. The key inputs of Robin's model are the time differences between the different revolutions (brains, hunting, agriculture, industry), and the growth rates after these revolutions. The prediction is that there is another revolution coming about three centuries after the industrial revolution, and that after this the economy will double every 1-2 weeks. He then makes the point that the only plausible way for this to happen is through the creation of brain emulations or AIs - copyable human capital. I'll also grant the implicit "no disaster" assumption: no meteor strikes, no world governments bent on banning AI research. How does this fare in counterfactuals?
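To get a feel for the scale of that prediction, here's a back-of-the-envelope computation (my own illustration, not part of Robin's model) of what a 1-2 week doubling time means as an annual growth factor:

```python
# Back-of-the-envelope: what does "the economy doubles every 1-2 weeks"
# mean as an annual growth factor? Pure arithmetic, for illustration only.

WEEKS_PER_YEAR = 52

def annual_growth_factor(doubling_weeks):
    return 2 ** (WEEKS_PER_YEAR / doubling_weeks)

print(f"{annual_growth_factor(2):.3g}")  # 2-week doubling: ~6.7e7-fold per year
print(f"{annual_growth_factor(1):.3g}")  # 1-week doubling: ~4.5e15-fold per year
```

For comparison, the present world economy doubles roughly every fifteen years, so the predicted regime is faster by many orders of magnitude - which is why Robin argues only something like copyable human capital could sustain it.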

It seems rather easy to mess with the inputs T. Weather conditions or continental drifts could confine pre-agricultural humans to hunting essentially indefinitely, followed by a slow evolution to agriculture when the climate improved or more lands became available. Conversely, we could imagine incredibly nutritious crops that were easy to cultivate, and hundreds of domesticable species, rather than the 30-40 we actually had. Combine this with a mass die-off of game and some strong evolutionary pressure, and we could end up with agriculture starting much more rapidly.

This sounds unfair - are these not huge transformations to the human world and the natural world that I'm positing here? Indeed they are, but Robin's model is that these differential growth rates have predictive ability, not that these differential growth rates combined with a detailed historical analysis of many contingent factors have predictive ability. If the model were to claim that the vagaries of plate tectonics and the number of easily domesticated species in early human development have relevance to how long after the industrial revolution brain emulations would be developed, then something has gone wrong with it.

Continuing on this vein, we can certainly move the industrial revolution back in time. The ancient Greek world, with its steam engines, philosophers and mathematicians, seems an ideal location for a counterfactual. Any philosophical, social or initial technological development that we could label as essential to industrialisation, could at least plausibly have arisen in a Greek city or colony - possibly over a longer period of time.

We can also tweak the speed of economic growth. The yield on hunting can be changed through the availability or absence of convenient prey animals. During the agricultural era, we could posit high-yield crops and an enlightened despot who put in place some understandable-to-ancient-people elements of the green revolution - or conversely, poor yield crops suffering from frequent blight. Easy or difficult access to coal would affect growth during the industrial era, or we could jump ahead by having the internal combustion engine, not the steam engine, as the initial prime driver of industrialisation. The computer era could be brought forwards by having Babbage complete his machines for the British government, or pushed backwards by removing Turing from the equation and assuming the Second World War didn't happen.

You may disagree with some of these ideas, but it seems to me that there are just too many contingent factors that can mess up the input to the model, leading some putative parallel-universe Robin Hanson to give completely different times to brain emulations. This suggests the model is not very resilient.

Or we can look at the reverse: making whole brain emulations much easier, or much harder, than they are now, without touching the inputs to the model at all (and hence its predictions). For instance, if humans were descended from a hibernating species, it's perfectly conceivable that we could have brains that would be easy to fixate and slice up for building emulations. Other changes to our brain design could also make this easier. It might be that our brains had a different architecture, one where it was much simpler to isolate a small "consciousness module" or "decision making module". Under these assumptions, we could conceivably have had adequate emulations back in the 60s or 70s! Again, these assumptions are false - life didn't happen like that, it may be impossible for life to happen like that - but knowing that these assumptions are false requires knowledge that is neither explicitly nor implicitly in the model. And of course we have converses: brain architectures too gnarly and delicate to fixate and slice. Early or late neuroscience breakthroughs (and greater or lesser technological or medical returns on these breakthroughs). Greater or lesser popular interest in brain architecture.

For these reasons, it seems to me that Robin Hanson's model fails the counterfactual resiliency test. Ray Kurzweil's model suffers similarly - since Kurzweil's model includes the whole of evolutionary history (including disasters), we can play around with climate, asteroid collisions and tectonics to make evolution happen at very different rates (one easy change is to kill off all humans in the Toba catastrophe). Shifting around the dates of the technological breakthroughs and of the first computer still messes up the model, and backdating important insights allows us to imagine much earlier AIs.

And then there's Moore's law, starting with Moore's 1965 paper... The difference is immediately obvious, as we start trying to apply the same tricks to Moore's law. Where even to start? Maybe certain transistor designs are not available? Maybe silicon is hard to get hold of rather than being ubiquitous? Maybe Intel went bust at an early stage? Maybe no-one discovered photolithography? Maybe some specific use of computers wasn't thought of, so demand was reduced? Maybe some special new chip design was imagined ahead of time?

None of these seem to clearly lead to situations where Moore's law would fail. We don't really know what causes Moore's law, but it has been robust across moves to very different technologies, and has spanned cultural transformations and changes in the purpose and uses of computers. It seems to lie at the interaction between market demand, technological development, and implementation. Some trivial change could conceivably throw it off its rails - but we just don't know what, which means we can't bring our knowledge about other facts in the world to bear.


In conclusion: more work needed

It was the comparative ease with which we could change the components of the other two models that revealed their lack of resilience; it is the difficulty of doing so with Moore's law that shows it is resilient.

I've never seen this approach used before; most resilience tests only involve changing numerical parameters from inside the model. Certainly the approach needs to be improved: it feels very informal and subjective for the moment. Nevertheless, I feel that it has afforded me some genuine insights, and I'm hoping to improve and formalise it in future - with any feedback I get here, of course.

78 comments

The ancient Greek world, with its steam engines, philosophers and mathematicians, seems an ideal location for a counterfactual. Any philosophical, social or initial technological development that we could label as essential to industrialisation, could at least plausibly have arisen in a Greek city or colony - possibly over a longer period of time.

But... steam engines, philosophers, and mathematicians aren't the critical elements for the Industrial Revolution! If they were, it would have happened in Greece.

Here's a short story of why the Industrial Revolution happened in Britain/Holland/Northern Germany:

  1. Six centuries of (mostly) peace, atomic households, and downward social mobility (i.e. the upper middle class having more children than the lower class, and their secondary and tertiary children becoming the lower middle class) led to a significant change in British demographics. It's little surprise that the industrial revolution would occur in a nation of shopkeepers, and it took about 30 generations of evolution to make them shopkeepery enough to have the industrial revolution.

  2. Abundant coal made energy cheap, even after wood and peat reserves were depleted. Steam engines be

...

It seems to me that the differences you measure between different theories here are entirely subsumed under N, the number of trials we have. For the Martin Luther theory, N=1 - Martin nailed up the theses twice. N=4 for Robin Hanson. I'm not sure how to measure N for Ray Kurzweil, but N for Moore's Law is close to 25. Your argument for Moore's Law seems mainly reliant on that: "robust for moves to very different technologies, and has spanned cultural transformations and changes in the purpose and uses of computers" - if any of the others had 25 examples, they would seem robust to all those things!

Both Robin's model and Ray's include Moore's law as a part of their input data, so they have at least as many trials as it does. You can argue they don't have as much info in the early eras, but simply counting the number of data points doesn't put Moore's law on top.
Robin's model takes as a given that periods of exponential growth occur and argues for a pattern in the length and relative rate of periods of exponential growth. Thus, trials are either entire periods of exponential growth or the transitions between them.

It seems rather easy to mess with the inputs T. Weather conditions or continental drifts could confine pre-agricultural humans to hunting essentially indefinitely

This is sort of amazing, but after a couple million years of hunting and gathering humans developed agriculture independently within a few thousand years in multiple locations (the count is at least 7, possibly more).

This really doesn't have a good explanation, it's too ridiculous to be a coincidence, and there's nothing remotely like a plausible common cause.

There's a very plausible common cause. Humans likely developed the traits that allowed them to easily invent agriculture during the last glacial period. The glacial period ended 10 000 years ago, so that's when the climate became amenable to agriculture.

Yes, I had always assumed that was the cause. (Which might have some kind of bearing on Great Filter-like reasonings.)
Agriculture developed very far from regions most affected by glaciation, and in very diverse climates, so any climatic common cause is pretty dubious.
Odd. Last I checked there were a dozen or two prominent theories on this, and at least twice as many hypotheses in general as for why we would observe this. Most of these I find plausible, and rather adequate considering the amount of information we have. One of my favorites is that long before this happened, some individuals learned how to do it, but could not transfer a sufficient portion of this knowledge to others, until selection effects made these individuals more frequent and improvements in communication crossed a threshold where it suddenly wasn't so prohibitively expensive anymore to teach others how to plant seeds and make sure they grew into harvestable plants. Once evolution had done its job and most ancestors were now capable of transmitting enough knowledge between each other to learn basic agriculture, it seems almost inevitable that over several dozen generations, for any select tribe, there will be at least one individual that stumbles upon this knowledge and eventually teaches it to the rest of the tribe. Naturally, testing these hypotheses isn't exactly easy, so one could reasonably claim that there is no "good" explanation here. However, I wouldn't go cry "Amazing Anthropomorphic Coincidence That Trumps Great Filter!" at all either, as you say, and I'm not quite sure where you were going with this other than "oooh, shiny unanswered question!", if anywhere.
All theories of emergence of agriculture I'm aware of pretend it happened just once, which is totally wrong. Is there any even vaguely plausible theory explaining how different populations, in very different climates, with pretty much no contact with each other, didn't develop anything like agriculture for a very long time, and then it happened multiple times nearly simultaneously? Any explanation involving "selection effects" is wrong, since these populations were not in any kind of significant genetic contact with each other for a very long time before that happened (and such explanations for culture are pretty much always wrong as a rule - it's a second coming of "scientific racism").
Hmm. The more you know. Should I take this to imply that what I learned in high school and wikipedia is wrong, or very poorly understood? From what I know, throughout the paleolithic populations started developing the knowledge and techniques for sedentary lifestyles, food preservation, and growing plants, while at the same time spreading out across the globe. Then came the end of the ice age, and these populations started slowly applying this knowledge at various points in time, with a difference in the 10^4 order of magnitude between the earliest and slowest populations. That looks very much like the human species had "already been selected" before it was completely split into separate populations, though admittedly that alone as described in my previous comment isn't enough to explain how close they came to one another on the timeline (I would have expected a variance of ~50-80k years or so, if that were the only factor, rather than 10-11k). Edit: I only realized after posting both comments that I have a very derogatory / adversarial / accusational tone. This is not (consciously) intentional, and I'm really grateful you brought up this point. I'm learning a lot from these comments.
Backing up a step from this, actually... how confident are we of the "no contact with each other" condition? Speaking from near-complete ignorance, I can easily imagine how a level of contact sufficiently robust to support "hey, those guys over there are doing this nifty thing where they plant their own food, maybe we could try that!" once or twice a decade would be insufficient to otherwise leave a record (e.g., no commerce, no regular communication, no shared language, etc.), but there might exist plausible arguments for eliminating that possibility.
Well, we know pretty well that even when societies were in very close contact, they rarely adopted each other's technology if it wasn't already similar to what they've been doing. See this for example: If in this close contact scenario agriculture didn't spread, it's a huge stretch to expect very low level contact to make it happen.
(nods) Yup, if that theory is true, then the observed multiple distinct onset points of agriculture becomes more mysterious.
Interesting! That certainly reinforces Robin's model. Do you have source for that?
How does that reinforce Robin's model? It goes against it if anything. Imagine if humans, dolphins, bats, bears, and penguins nearly simultaneously developed language on separate continents. It would be a major unexplained WTF. You can start here, but Wikipedia has pretty bad coverage of that.
Robin's model makes more sense if we think of it as "some process we don't understand is behind all these repeated patterns". If agriculture indeed arises at a specific point for reasons we don't understand, it makes Robin's model more likely - and it makes it harder for us to counterfactually mess with the data.
I think a good steel-man version of Robin's model would start with a discussion of universality. I think I recall Robin having some thoughts along such lines, but I don't know Kurweil well enough to know whether he invokes such things.
That wouldn't make Robin's model more counterfactually resilient - it would just provide more evidence for the model, thus pitting our understanding of universality directly against our understanding of history, evolution, economics, etc...
I'm not quite sure what you're saying, but I think I agree with this. I guess I'm trying to flesh out the idea of "some process we don't understand", since Robin's model seems to depend on it (as do things like Moore's law, which is more strongly supported by the data). If we do assume universality, counterfactual resiliency is still a useful method of analysis, and we can even further clarify the reasons why by pointing out that models of universal behaviour usually involve the aggregation of many small, mostly independent effects. However, some evidence against counterfactual resilience is weakened. For example, we could take the counterfactual that the Greeks had an industrial revolution, but we might not actually know how plausible that is. Models like Robin's that conjecture universality would predict that there were reasons that that couldn't have happened, so we need to be more familiar with the data before a claimed instance of nonresilience can really be considered good evidence against the model. Thus, the idea of universality allows us to more accurately evaluate the strength of arguments of this sort.
I think I agree with you - what I'm saying is that if we had evidence for universality or for Robin's model, and we also had evidence that the Greek (or Chinese) industrial revolutions could have happened early, then we can now pit these two sources of evidence directly against each other...
Hmm, apparently 'behavioural modernity', 'most recent common ancestor' and 'out of Africa' are all around 50 000 years ago. Until about 10 000 years ago a great deal of the world was under thick ice sheets, and probably a lot of the rest was cold, so there probably weren't that many humans alive. If you give each living person a tiny chance of 'inventing agriculture', then "multiple recent inventions thousands of years apart" sounds about right to me. I realize that that's a completely implausible model, but I'm not sure why a more realistic one would make it 'too ridiculous to be a coincidence', and if you require plant evolution as part of the scheme, that will push the expected dates later.
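The toy model in the parent comment - each population having a small independent chance per unit time of inventing agriculture - can be checked with a quick simulation (my own sketch, with made-up parameters). Under a constant hazard, the expected spread between the first and last inventor is of the same order as the mean waiting time itself:

```python
# Toy model: n isolated populations, each with a small independent chance
# of inventing agriculture in any given century (constant hazard, so
# waiting times are geometric). We estimate the average spread between
# the first and last invention. Parameters are made up for illustration.

import random

random.seed(0)

def mean_spread_centuries(n_regions, p_per_century, trials=2000):
    """Monte Carlo estimate of E[max - min] of invention times, in centuries."""
    total = 0
    for _ in range(trials):
        times = []
        for _ in range(n_regions):
            t = 1
            while random.random() >= p_per_century:
                t += 1
            times.append(t)
        total += max(times) - min(times)
    return total / trials

# 7 regions, a 1-in-100 chance per century (mean wait: 100 centuries).
print(mean_spread_centuries(7, 0.01))  # on the order of 250 centuries
```

With these particular numbers the expected spread is tens of millennia; whether the observed clustering of real inventions is surprising then depends on comparing it to this expected spread under one's preferred parameters.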
Some counterpoints:

  * "Behavioural modernity" is a hypothesis which is very far from being universally accepted. Many features supposedly of behavioural modernity have some reasonable evidence of existence far earlier.
  * Any hypothesis linking behavioral modernity with language (the only plausible common cause) is on extremely shaky grounds since as far as we know Neanderthals had language just as well, and that pushes language to nearly 1mya.
  * Behavioural modernity without a common cause like language, and without any definite characteristics that weren't present earlier in some form, is far less plausible, and pretty much falls apart.
  * Migration out of Africa is dated at anywhere between 125kya and 60kya, not 50kya.
  * Even starting the count at 60kya, agriculture being invented 10kya multiple times independently is still extremely surprising.
  * Even disregarding admixtures with Neanderthals, Denisovans etc., the most recent common ancestor is more like 140kya-200kya by mitochondrial and Y chromosome dating. Dating anything here is very dubious, so you can find a number that fits your hypothesis whatever your hypothesis might be.
  * At each point of history the vast majority of humans lived in places very far from those covered by ice, or particularly cold. Agriculture was invented only in places far from ice. There are still climatic effects like rainfall that depend on glaciations, but these are much more tenuous links.
  * Modern attempts at domesticating plants and animals show it takes a few decades, not tens of thousands of years. Now these are done with the benefit of modern science and technology, but it still doesn't imply tens of thousands of years.
  * Agriculture developed in some places very soon after human settlement, like maize and potato agriculture, so that's another argument against requiring thousands of years of plant evolution.
  * If it took plants and animals tens of thousands of years on average, then surely there would be a huge spread in time of

How do you judge the plausibility of a counterfactual?

You say "we can imagine" some of these scenarios more easily than others. But our imaginations aren't magic. There are plenty of things I can imagine that on closer examination are virtually impossible. And plenty of real things that I couldn't imagine until I knew about them.

If we had a good causal model, we could apply it. But we're usually interested in non-causal models precisely when causal models are intractable.

If the counterfactuals' plausibilities boil down to "I said so", then so does the entire argument.

Schematically:

  1. Model M claims that X happened necessarily the way it did, for reasons we don't understand.
  2. A critic presents a counterfactual C where X doesn't happen that way, while C is still consistent with the model. To argue that C changes X, he uses causal reasoning.
  3. The defenders of the model must now either abandon the model, show that C is not actually consistent with M, or refute the claim that C changes X.
  4. The conversation has now progressed beyond direct claims of the likelihood or not of M.
It depends what kind of argument is being made via presenting the counterfactual. If it is challenging the generalisability of a strategy (like a moral system or a decision theory) then an implausible counterfactual is just what is needed. Plausibility would be a distraction.

Well, Moore's law is used by the semiconductor industry as a road map, so no wonder it's that accurate. Stuff I beemind tends to have an approximately linear time variation, but that's just because if I'm above the line I slack off and if I'm below the line I concentrate on it.

It's still a wonder that they manage to hit their road map goals so consistently. If Moore's Law holds because the industry works to uphold the goal, why not set the goal to be a doubling each month, while you're at it? Clearly there must be some limits to what is possible in theory and to what makes sense economically, and it's strange that Moore's law continues to meet both constraints year after year.
Clock frequency seems to have already leveled out; Moore's law continues to hold for transistor density.
“Limits to what is possible in theory” might be waaay looser than what actually happens, and given that they are an oligopoly it might be the case that pretty much anything reasonable would “make sense economically” if they decide so.
Which still raises the question of why this is so, when it isn't true in other industries. (I don't think the medical industry would have a cure for aging by now even if they'd decided in 1950 to make a roadmap goal of having it by 2000.)

I think this approach shows some promise, but if I am understanding it correctly it seems like it has a significant weakness. From what I understand, non-causal models do assume that there is something causing the trend; they just don't address what it is. It seems like when a model fails this resiliency test it could be because it's a bad model, but it is also possible that the hidden cause makes the counterfactual less likely than it seems. More generally, the models make retroactive predictions about the real world and treat the entire real world as a bl...

The approach can show a tension between the non-causal model and our causal understanding of events. If the model says agriculture must have happened this way, and our knowledge of biology says the opposite, then we need to abandon one of our theories.

The model I have of human progress is this. Intelligence is not the limiting factor. Things are invented quite soon after they become possible and worthwhile.

So, let's take the steam engine. Although the principle of the steam turbine was known to the Greeks, actual steam engines are only commercially viable from the time of Newcomen's atmospheric engine. Why not earlier?

Well, there is an existing technology to displace, first of all, which is a couple of unfortunate animals walking in a circle driving an axle. This is far more fuel efficient than the steam ...

[...] Out of curiosity, what was it that made better metallurgy possible?
Gunpowder, which required Iron Working and Invention, and so on.
The industrial revolution has some very tightly coupled advances. The key advance was making iron with coal rather than using charcoal. This reduced the price, and a large increase in quantity of manufacture followed. One of the immediate triggers was that England was getting rather short of wood, and the use of coal as a substitute started for iron-making and heating. The breakthrough in steelmaking was initially luck - some very low-sulphur coal was found and used in steelmaking. But luck often arises out of greater quantities of usage, and perhaps that was the key here. It certainly wasn't science in the modern sense, as the chemistry of what was going on wasn't really understood - certainly not by the practitioners of the time. Trial and error was therefore the key, and greater quantity of manufacture leads to more trials.

It's a good article, but I think you are being more lenient in your alternative scenarios to Moore's law, and only really trying to mess up the other two. An asteroid wiping out half the earth's population in 1985 or a worldwide conversion to Amish principles would definitely screw up Moore's law, but... those types of things only happen to naughty predictions. On top of that, no matter how wacky the divergence from reality, you have been able to insert a plausible revision of the prediction based on that scenario, so for every blow you deal, you are also providing the bandage along with it; how imaginative you choose to be is what determines the resilience.

Do you want to try and counterfactually break Moore's Law? I didn't think I was being over-lenient, but it's certainly possible!
^That doesn't count?
That sort of stuff messes up any model. I might predict that Obama will be elected next US president; or I might predict that I will be elected US president. Both of these predictions will be wrong if an asteroid wipes out the human population, but that doesn't mean that both are equally good. Basically, what is there that could have messed up Moore's law that isn't the equivalent of "rocks fall, everybody dies"?
Aliens show up, give us their awesome technology. But I take your point.
Unless rocks fell, most people died, then aliens showed up with gifts of hyper-advanced technology at the exact moment that Moore's Law would have predicted that level. That's sort of how the idea that "working to keep up" with Moore's Law strikes me.
What? No! My point was that any game-changer, positive or negative, disrupts Moore's law. Alien hypertech is one such game changer. EDIT: edited for clarity.
I know, I was just playfully nullifying the game-changer. (Although, aliens restoring Moore's law with hypertech is only slightly less likely than aliens with hypertech at any other point in time.)
Ah, I see. Wait, what? Aliens restoring Moore's law is vastly less likely than the (admittedly minuscule) probability of them showing up at some other time.
So it's a...vast minisculity?
Vastly minuscule, yes.
That's exactly my point, most of the counter-arguments you gave for the other two predictive models were along those lines, but only when it came to countering Moore's law did you put those weapons away and play gently.
Simple version: if your model includes phenomenon X in its timeline, yet does not make any claims about X having causal influence, then I can play with X to attempt to break your model. Hence I used the weapons that were implicit in the model.

Kurzweil's model includes the history of evolution on earth, a history littered with disasters, meteor impacts and the like. Since the model is claimed to be correct despite these disasters, I decided that adding a few more shouldn't break the model if the model actually worked.

I was gentler on Robin's model, because it focused more narrowly on human evolution. Still, it covers a period in history where there were pandemics, genetic bottlenecks, dramatic expansions of new technology, and so on. If the model was correct across these phenomena, then I could play with them without blowing up the model.

What would be equivalent for Moore's much narrower law? Well, there are no "unless economic growth splutters" caveats, so the most credible way to break Moore's law is to add a few economic disasters and imagine their consequences. But actual real economic disasters seem not to have affected Moore's law at all. What else? Political change? This might work: there's some evidence that communist states didn't have a Moore's law for their own computer industry. So a global communist takeover could break Moore's law, or at least caveat it to "in a market economy, computer speeds will..."
Thanks, I see your point now. If that's the case, though, it really just boils down to "(models that we have extremely accurate data and current physical and recorded evidence for) tend to produce more accurate predictions than (models that are less so)".
Not exactly. Robin's and Kurzweil's models have more data! (as they include Moore's law as a subcomponent).
Don't you agree that Moore's law is the only trustworthy part of their models, though?
Simply pointing out that it's not just the quantity of the data that matters, but other factors too.
I think Robin's info about GDP growth throughout history is decent, too.
GDP estimates of premodern societies are highly speculative, and anyway the GDP of a premodern society was probably not a good measure of the magnitude of its economic activity.

If there were dramatically more domesticable species available, that would imply differences in evolutionary history as well, and the abundant resources available to civilization would accelerate subsequent technological development. Many of your proposed counterfactuals seem to have the same flaw: they imply exponential growth with slightly different coefficients rather than a completely broken model.

Making the zebra more like the horse, or making hippos like Indian elephants, does not seem to require massive surgery in evolution. That's a valid point, and could be analysed further. Different coefficients and starting times are enough to break Robin's model. And for Kurzweil's model, I can wipe out the whole human species at critical moments and still be inside it.
Making the zebra more like the horse wouldn't make it a more attractive species to domesticate unless it provided what early humans were looking for better than horses did, in which case we'd be talking about making the horse more like the zebra right now. More domesticable species would have to mean species which fill different niches, for a wider range of possible specializations.

Nitpicking: the sentence « four centuries after Luther's thesis in Luther's 95 theses thesis in 1957 » seems broken to me. Both the date and the repetition.

Thanks, corrected!

This pattern-matches in my mind to the Stability theory, only you try to use large changes instead of tiny ones, which might be too much of a jump.

It might be worth considering small changes in the initial conditions (is that what you call T?). In the Reformation example, would having 94 theses make a difference? What if Luther's proclamation was not translated to German? Etc.

Suppose you establish that a model is stable to small perturbations (how? this seems to need math); you can then try to see where the tipping points are. If there is no such stability, the model is probably useless.
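To make "stable to small perturbations" concrete, here's a minimal sketch. The doubling-trend form, the parameter values, and the function names are all my own illustrative assumptions, not anything taken from the models under discussion; the point is only that which quantity you perturb matters:

```python
def doubling_model(x0, doubling_years, t):
    """Simple Moore-style trend: the quantity doubles every `doubling_years`."""
    return x0 * 2 ** (t / doubling_years)

def divergence_in_level(x0, doubling_years, eps, horizon):
    """Relative prediction error at `horizon` from a small relative
    perturbation eps in the *initial value*."""
    base = doubling_model(x0, doubling_years, horizon)
    perturbed = doubling_model(x0 * (1 + eps), doubling_years, horizon)
    return abs(perturbed - base) / base

def divergence_in_rate(x0, doubling_years, eps, horizon):
    """Relative prediction error at `horizon` from a small relative
    perturbation eps in the *doubling time*."""
    base = doubling_model(x0, doubling_years, horizon)
    perturbed = doubling_model(x0, doubling_years * (1 + eps), horizon)
    return abs(perturbed - base) / base

# A 1% error in the starting level stays a 1% error forever...
print(divergence_in_level(1.0, 2.0, 0.01, 40))  # ~0.01
# ...but a 1% error in the doubling time compounds over 40 years:
print(divergence_in_rate(1.0, 2.0, 0.01, 40))   # roughly 0.13
```

So an exponential trend model is stable against perturbations of its level but sensitive to perturbations of its growth rate, which suggests that's where the tipping-point search should concentrate.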

Hum... That is one suggested way of going about it. But it does seem to ignore the fact that these non-causal models are claimed to be correct without needing to know much about the underlying processes. Maybe "small" should be calibrated by the claims of the model?
At least for the three examples you cited, I seem to remember them being called approximations, not "correct".
What's the difference between a singularity, and an approximate singularity? :-)
In the former case, it progresses asymptotically, while in the latter, it progresses exponentially or super-exponentially but not asymptotically.
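To put that distinction in standard notation (this formalisation is my own gloss, not from the comment): exponential growth remains finite at every finite time, while hyperbolic growth has a genuine finite-time singularity:

```latex
% "Approximate singularity": exponential growth, finite for all finite t
x(t) = x_0 \, e^{kt}

% True singularity: hyperbolic growth, diverging at the finite time t_s
x(t) = \frac{C}{t_s - t}, \qquad x(t) \to \infty \text{ as } t \to t_s^{-}
```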

Robin Hanson has posted on a demographic model that predicts political instability circa 2020. It seems like good further grist for counterfactual resilience.


Whatever past trends were, the rate of progress must slow as we approach physical limits. For example, there must be some minimum size for a reliable resistor. So even if we accept the inevitability of certain past trends, extrapolation is risky.

Once we've used most of the oil (or phosphate, for which there's no substitute), past trends driven by culture, technology, or economics won't continue. In agriculture, best-farmer yields haven't increased much since 1980, although averages go up as they buy their neighbors' land. (My recent book on Darwinia...

Past "physical limits" once considered immutable have often been broken. It was not long ago that 9600 bps was considered the limit for phone-line data rates. Replacing cattle with vat meat grown in factories powered by solar energy and methane digesters could likely alleviate many potential food shortages and environmental issues. There is no guarantee that there will not be true limiting factors on the rate of progress, but it is extremely bold (and misguided) to proclaim that you know in advance what they will be.
I agree that some "limits" have proved illusory. But do you have an example where a limit based on conservation of matter or energy was surpassed? I assume solar technology will continue to improve, but it would take several orders of magnitude of improvement for food-from-solar cells to be cost-competitive with cattle grazing low-value land. What does an acre of solar cells cost?
I mentioned some far-fetched stuff before, not that we need that much energy yet. There is plenty of energy around, just waiting to be harvested: solar, geothermal, fusion...
We're not anywhere near that limit yet. The point is that we've approached various things that looked like limits, and they weren't. When we get near being limited by, say, the amount of matter/energy in the universe, then we'll find out whether it's really a limit.
Presumably as phosphate mines get depleted it'll become profitable to stop pissing away all our phosphorus. Trade in urine was common in medieval Europe; now that the yucky bits can be automated and hidden, I see no reason it couldn't start again.
On the other hand, if we reach a point where stockpiling human urine to supply phosphorus for agriculture (as opposed to merely conserving it locally) is economically viable, that implies some pretty scary things about the general availability of food and knock-on effects for general social stability. I'm not sure how much of it we're (literally) pissing into the sewers and whatnot, but I'd be surprised if agricultural runoff weren't a much greater percentage of the total.
Yes, we should start with the low-hanging fruit. For example, nutrients in human waste are a small fraction of what's in animal waste, and the latter should be easier to capture. Even so, much of the manure still gets applied at pollution-causing rates near barns and feedlots, rather than paying the cost of transport to where it is most needed. But your point about food availability and social stability is more important. Recycling urine seems like a good idea. But a society that needs to recycle urine will be a society where many people are spending most of their income on food and others are going hungry, as was the case for the societies mentioned above.
Well, something like this has already been fake-newsed.
Not just medieval Europe - plenty of urban environments had such trades, like in China, and night soil trades were (and may still be) pretty much universal. The Ancient Roman urine trade gave us the still-current phrase 'money doesn't stink'.

I'm not sure that these counterfactual arguments are appropriate.

However, it seems to me that Moore is obviously in a different category than Hanson and Kurzweil:

Moore's law was formulated as a description of an empirically observable trend. As far as I know, Moore didn't use it to make far-future predictions (the Wikipedia page quotes a prediction at 10 years). Moreover, Moore's law refers to well-defined variables (transistor density at minimum cost per transistor, in the original formulation) for which accurate and complete estimates are available.

Hanso...