Information theory and FOOM

by PhilGoetz, 14th Oct 2009


Information is power.  But how much power?  This question is vital when considering the speed and the limits of post-singularity development.  To address this question, consider 2 other domains in which information accumulates, and is translated into an ability to solve problems:  Evolution, and science.

DNA Evolution

Genes code for proteins.  Proteins are composed of modules called "domains"; a protein contains from 1 to dozens of domains.  We classify genes into gene "families", which can be loosely defined as sets of genes that on average share >25% of their amino acid sequence and have a good alignment for >75% of their length.  The number of genes and gene families known doubles every 28 months; but most "new" genes code for proteins that recombine previously-known domains in different orders.
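To make the family criterion above concrete, here is a toy sketch in Python. The thresholds are the ones quoted above; the naive ungapped comparison is only a stand-in for a real sequence-alignment algorithm.

```python
# Toy illustration of the gene-family criterion quoted above: two genes
# are in the same family if their alignment covers >75% of their length
# and >25% of the aligned amino acids are identical.  A real tool would
# use a proper alignment algorithm; this naive ungapped comparison is
# only a stand-in.

def same_family(seq_a: str, seq_b: str) -> bool:
    aligned_len = min(len(seq_a), len(seq_b))
    coverage = aligned_len / max(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    identity = matches / aligned_len
    return coverage > 0.75 and identity > 0.25

print(same_family("MKTAYIAKQR", "MKSAYLAKQR"))  # 80% identity, full coverage -> True
print(same_family("MKTAYIAKQR", "GG"))          # only 20% coverage -> False
```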

Almost all of the information content of a genome resides in the amino-acid sequence of its domains; the rest mostly specifies the order of domains in individual genes, and how genes regulate other genes.  About 64% of domains (and 84% of those found in eukaryotes) evolved before eukaryotes split from prokaryotes about 2 billion years ago.  (Michael Levitt, "Nature of the protein universe", PNAS, July 7 2009; S. Yooseph et al., "The Sorcerer II global ocean sampling expedition", PLoS Biology 5:e16.)  (Prokaryotes are single-celled organisms lacking a nucleus, mitochondria, or introns in their genes.  All multicellular organisms are eukaryotes.)

It's therefore accurate to say that most of the information generated by evolution was produced in the first one or two billion years; the development of more-complex organisms seems to have nearly stopped the evolution of new protein domains.  (Multicellular organisms are much larger and live much longer; therefore there are many orders of magnitude fewer opportunities for selection in a given time period.)  Similarly, most evolution within eukaryotes seems to have occurred during a period of about 50 million years leading up to the Cambrian explosion, half a billion years ago.

My first observation is that evolution has been slowing down in information-theoretic terms, while speeding up in terms of the intelligence produced.  This means that adding information to the gene pool increases the effective intelligence that can be produced using that information by a more-than-linear amount.

In the first of several irresponsible assumptions I'm going to make, let's assume that the information evolved in time t is proportional to i = log(t), while the intelligence evolved is proportional to e^t = e^(e^i).  I haven't done the math to support those particular functions; but I'm confident that they fit the data better than linear functions would.  (This assumption is key, and the data should be studied more closely before taking my analysis too seriously.)
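Spelling the assumption out numerically (these are the assumed functions from above, not values fitted to any data): doubling the accumulated information i multiplies the log of the intelligence by a large factor, i.e. returns on information are far more than linear.

```python
import math

# The assumed relations: information i = log(t); intelligence ~ e^t = e^(e^i).
# We track log(intelligence) = t = e^i, since e^t itself overflows quickly.
for t in (10, 100, 1000):
    i = math.log(t)
    log_intelligence = t
    print(f"t={t:5d}  i={i:5.2f}  log(intelligence)={log_intelligence}")
# Doubling i (going from t=10 to t=100) multiplies log(intelligence) by 10.
```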

My second observation is that evolution occurs in spurts.  There's a lot of data to support this, including data from simulated evolution; see in particular the theory of punctuated equilibrium, and the data from various simulations of evolution in Artificial Life and Artificial Life II.  But I want to single out the eukaryote-to-Cambrian-explosion spurt.  The evolution of the first eukaryotic cell suddenly made a large subset of organism-space more accessible; and the speed of evolution, which normally decreases over time, instead increased for tens of millions of years.


Science

The following discussion relies largely on de Solla Price's Little Science, Big Science (1963), Nicholas Rescher's Scientific Progress: A Philosophical Essay on the Economics of Research in Natural Science (1978), and the data I presented in my 2004 TransVision talk, "The myth of accelerating change".

The growth of "raw" scientific knowledge is exponential by most measures: Number of scientists, number of degrees granted, number of journals, number of journal articles, number of dollars spent.  Most of these measures have a doubling time of 10-15 years.  (GDP has a doubling time closer to 20 years, suggesting that the ultimate limits on knowledge may be economic.)

The growth of "important" scientific knowledge, measured by journal citations, discoveries considered worth mentioning in histories of science, and perceived social change, is much slower; if it is exponential, it appears IMHO to have had a doubling time of 50-100 years between 1600 and 1940.  (It can be argued that this growth began slowing down at the onset of World War II, and more dramatically around 1970).  Nicholas Rescher argues that important knowledge = log(raw information).
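A back-of-the-envelope check on those doubling times (12.5 and 75 years are my illustrative midpoints of the 10-15 and 50-100 year ranges, not figures from the sources):

```python
# From 1600 to 1940, raw knowledge doubling every ~12.5 years yields ~27
# doublings, while important knowledge doubling every ~75 years yields
# only ~4.5 -- consistent with Rescher's important = log(raw), under which
# important knowledge grows linearly in time while raw grows exponentially.
years = 1940 - 1600
raw_doublings = years / 12.5
important_doublings = years / 75
print(raw_doublings, important_doublings)
```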

A simple argument supporting this is that "important" knowledge is the number of distinctions you can make in the world; and the number of distinctions you can draw based on a set of examples is of course proportional to the log of the size of your data set, assuming that the different distinctions are independent and equiprobable, and your data set is random.  However, an opposing argument is that log(i) is simply the amount of non-redundant information present in a database with uncompressed information i.  (This appears to be approximately the case for genetic sequences.  IMHO it is unlikely that scientific knowledge is that redundant; but that's just a guess.)  Therefore, important knowledge is somewhere between O(log(information)) and O(information), depending on whether information is closer to O(raw information) or O(log(raw information)).
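The first argument can be checked in a few lines: n independent, equiprobable binary distinctions partition examples into 2^n classes, so a random data set of size N supports at most about log2(N) such distinctions.

```python
import math

# Independent, equiprobable binary distinctions split the examples into
# 2^n classes; to witness all the classes you need at least 2^n examples,
# so N examples support at most ~log2(N) distinctions.
for n_examples in (8, 1024, 10**6):
    print(n_examples, "examples ->", math.log2(n_examples), "distinctions")
```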


We see two completely-opposite pictures:  In evolution, the efficaciousness of information increases more-than-exponentially with the amount of information.  In science, it increases somewhere between logarithmically and linearly.

My final irresponsible assumption will be that the production of ideas, concepts, theories, and inventions ("important knowledge") from raw information is analogous to the production of intelligence from gene-pool information.  Therefore, evolution's efficacy at using the information present in the gene pool can give us a lower bound on the amount of useful knowledge that could be extracted from our raw scientific knowledge.

I argued above that the amount of intelligence produced from a given gene-information-pool i is approximately e^e^i, while the amount of useful knowledge we extract from raw information i is somewhere between O(i) and O(log(i)).  The implication is that the fraction of discoveries that we have made, out of those that could be made from the information we already have, has an upper bound between O(1/e^e^i) and O(1/e^e^e^i).
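To get a feel for how crushing that bound is, here is an order-of-magnitude sketch (working in log10 space, since e^e^i overflows ordinary floats for even modest i; the i values are arbitrary small stand-ins):

```python
import math

# Upper-bound sketch: if e^(e^i) discoveries are latent in information i
# but only O(i) have been extracted, the fraction made is about i / e^(e^i).
# log10 of e^(e^i) is e^i / ln(10), so we can stay in log10 space.
for i in (3, 5, 10):
    log10_latent = math.exp(i) / math.log(10)
    log10_fraction = math.log10(i) - log10_latent
    print(f"i={i:2d}: fraction of possible discoveries made ~ 10^{log10_fraction:.0f}")
```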

One key question in asking what the shape of AI takeoff will be is therefore: Will an AI's efficiency at drawing inferences from information be closer to that of humans, or to that of evolution?

If the latter, then the number of important discoveries that an AI could make, using only the information we already have, may be between e^e^i and e^e^e^i times the number of important discoveries that we have made from it.  i is a large number representing the total information available to humanity.  e^e^i is a goddamn large number.  e^e^e^i is an awful goddamn large number.  Where before, we predicted FOOM, we would then predict FOOM^FOOM^FOOM^FOOM.

Furthermore, the development of the first AI will be, I think, analogous to the evolution of the first eukaryote, in terms of suddenly making available a large space of possible organisms.  I therefore expect the pace of information generation by evolution to suddenly switch from falling to increasing, even before taking into account recursive self-improvement.  This means that the rate of information increase will be much greater than can be extrapolated from present trends.  Supposing that the rate of acquisition of important knowledge will change from log(i) = t (taking raw information i = e^t) to e^t gives us FOOM^FOOM^FOOM^FOOM^FOOM: a power tower of five FOOMs.

This doesn't necessarily mean a hard takeoff.  "Hard takeoff" means, IMHO, FOOM in less than 6 months.  Reaching the e^e^e^i level of efficiency would require vast computational resources, even given the right algorithms; an analysis might find that the universe doesn't have enough computronium to even represent, let alone reason over, that space.  (In fact, this brings up the interesting possibility that the ultimate limits of knowledge will be storage capacity:  Our AI descendants will eventually reach the point where they need to delete knowledge from their collective memory in order to have the space to learn something new.)

However, I think this does mean FOOM.  It's just a question of when.

ADDED:  Most commenters are losing sight of the overall argument.  This is the argument:

  1. Humans have diminishing returns on raw information when trying to produce knowledge.  It takes more dollars, more data, and more scientists to produce a publication or discovery today than in 1900.
  2. Evolution has increasing returns on information when producing intelligence.  With 51% of the information in a human's DNA, you could build at best a bacterium.  With 95-99%, you could build a chimpanzee.
  3. Producing knowledge from information is like producing intelligence from information. (Weak point.)
  4. Therefore, the knowledge that could be inferred from the knowledge that we have is much, much larger than the knowledge that we have.
  5. An artificial intelligence may be much more able than us to infer what is implied by what it knows.
  6. Therefore, the Singularity may not go FOOM, but FOOM^FOOM.