This is a crosspost from my personal website. Inspired by: Naval, If Sapiens Were a Blogpost, and Brett Hall’s podcast.

Many people have recommended the book The Beginning of Infinity: Explanations that Transform the World by David Deutsch to me. I don’t know how, because I can’t imagine any of them actually finished it. Previously on my blog I’ve reviewed books and been critical of aspects of them. But this post is more of a summary of The Beginning of Infinity. I decided to write it this way because this book is very complicated, reasonably long and frequently misunderstood. Deutsch is a physicist at Oxford and a pioneer of quantum computing, but his interests are wide-ranging.

All progress comes from good explanations

In this book I argue that all progress, both theoretical and practical, has resulted from a single human activity: the quest for what I call good explanations.

One of the key pieces of terminology in this book is the idea of a good explanation. In Deutsch’s formulation, a good explanation is one that accounts for observations while being hard to vary. If a theory can explain anything, it can explain nothing. Some people think that what makes a good explanation is testability. But this isn’t enough: some theories are perfectly testable but do not constitute good explanations. For example, consider the hypothesis “If you eat 1kg of grass, it will cure the common cold.” The problem with this statement isn’t that it’s not testable, it’s that no one should bother testing it. And the reason why no one should bother testing it is that it’s easy to vary: why 1kg, and not 2kg? What is the explanatory account of how eating grass could cure a cold? Bad explanations have more moving parts than they need, and each of those parts could have been different.

This book has many different threads to it, but one of the most important is a kind of philosophical treatise about how good explanations come to be. One classical idea, which Deutsch rejects, is that we arrive at them by induction, a doctrine known as inductivism. This is based on the idea that ‘the unseen resembles the seen’ or ‘the future resembles the past.’ We observe the sun rising day after day, and inductively reason that the sun will rise tomorrow. There are a few problems with this. One of them is that we do not, in fact, use induction to reason about most observations in the world. Consider someone who was born in the 20th century and saw the digits 19 at the start of the year number hundreds of times. On December 31st, 1999, she would not extrapolate the rule and predict that tomorrow, the year will begin with a 19. You might object that what she was actually extrapolating was the rule “The year will start with the digits 19 until the day after December 31st, 1999, when it will start to begin with a 20”, and that this rule was repeatedly confirmed by observation. But this is question-begging. Why this rule and not some other rule? In philosophy, this is known as the problem of induction.

Induction also struggles with answering a question like “What is the probability that the sun will rise tomorrow?” If something has never failed to happen, and it happens once more, how do you update your probability judgement? This is sometimes known as the problem of zero-failure data:

On each occasion when that prediction comes true, and provided that it never fails, the probability that it will always come true is supposed to increase. Thus one supposedly obtains ever more reliable knowledge of the future from the past, and of the general from the particular. That alleged process was called ‘inductive inference’ or ‘induction’, and the doctrine that scientific theories are obtained in that way is called inductivism . . . First, inductivism purports to explain how science obtains predictions about experiences. But most of our theoretical knowledge simply does not take that form. Scientific explanations are about reality, most of which does not consist of anyone’s experiences.

This is a subtle point. Are scientific theories about reality, or are they about how experiments move the dials on measuring instruments? The latter view is called instrumentalism, which Deutsch roundly rejects: “prediction is not, and cannot be, the purpose of science.” Moreover, he views instrumentalism as a philosophical absurdity: “Instrumentalism, even aside from the philosophical enormity of reducing science to a collection of statements about human experiences, does not make sense in its own terms. For there is no such thing as a purely predictive, explanationless theory.” Deutsch’s view is that knowledge is not only not justified by induction, as the inductivists believed, but that it is not justified at all:

The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails. To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism. The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

An argument in favour of instrumentalism is that, while predictions get successively more accurate as science progresses, the underlying conceptual model oscillates wildly. For example, general relativity was only slightly more accurate than Newton’s laws in most situations, but the explanations it gave for observations were completely different. Because each theory’s explanation swept away the previous one, the previous explanation must have been false, and so we can’t regard these successive explanations as growth in knowledge at all. The reason why this is wrong is that it takes too narrow a view of what constitutes scientific progress. The fact that relativity explained things rather differently to Newton is beside the point; what matters is that our explanatory power grew.

This belief that what constitutes scientific progress is growth in explanatory power is why Deutsch rejects Bayesian philosophy of science. This is the view that we have ‘credences’ (probabilities) attached to our level of belief in theories, and that science progresses by moving our credences in the correct theories closer and closer to one. But there is no consistent movement of theories in the direction of having a higher probability. For instance, there might be a 0.00001% chance that Greek mythology is true, but we know there is a zero chance that our current theories of physics are true, because general relativity and quantum mechanics are incompatible with one another. What our current theories of physics are is plausible: they have been refined many times by criticism. This is also why our explanations can progress in philosophy and art, despite the fact that you can’t ever “prove” a proposition wrong. That doesn’t stop the overwhelming majority of explanations in those fields from being bad.

The real way that we generate explanations about the world, according to Deutsch, is that we conjecture. Our minds are constantly generating conjectures about the world, and we use observation to either refute them or to criticise them. A person, in this formulation, is an entity that produces explanatory knowledge. Arguments should proceed as follows: person A conjectures something, and this conjecture has problems. Person B offers a rival conjecture that fixes those problems. And so on, indefinitely. In science, we never want to propose something and say “This is the ultimate truth.” That is the sin of justificationism.

We do not derive knowledge from the senses

Empiricism is the philosophical idea that we derive knowledge from our senses. There are a number of problems with this. One is that sense-data by themselves are meaningless. If you had no pre-existing ideas or expectations, you wouldn’t know how to interpret your senses. We do not read from the book of nature. The other major problem with empiricism is how to deal with false perceptions, like optical illusions. Deutsch writes:

The deceptiveness of the senses was always a problem for empiricism – and thereby, it seemed, for science. The empiricists’ best defence was that the senses cannot be deceptive in themselves. What misleads us are only the false interpretations that we place on appearances.

As Karl Popper put it, “All observation is theory-laden”, and hence fallible, like all our theories. In other words: you have to know what you’re looking for. We bring expectations, and explanations, to the act of measuring and observing itself. There is no such thing as The Facts, in a vacuum. There are only people, pursuing explanations that are better or worse at responding to criticism. Another of Deutsch’s enduring frustrations with empiricism is the idea that interpretation and prediction are two separate processes. There is only one process: explanation.

One legacy of empiricism that continues to cause confusion, and has opened the door to a great deal of bad philosophy, is the idea that it is possible to split a scientific theory into its predictive rules of thumb on the one hand and its assertions about reality (sometimes known as its ‘interpretation’) on the other.

A common argument goes like this: you can have all the facts in the world, but this does not allow you to make the logical jump to making normative statements about what ought to be. Maybe you can’t get moral judgements from factual claims, but you can’t get scientific theories from factual claims either! Deutsch is essentially saying that the epistemic jump that empiricism is ignoring (from observations to theories) is the dual of the much-discussed epistemic jump in moral philosophy (from facts to values). So, there may be a metaphysical sense in which you can’t get an ought from an is. But the project never was to get an ought from an is:

In the case of moral philosophy, the empiricist and justificationist misconceptions are often expressed in the maxim that ‘you can’t derive an ought from an is’ (a paraphrase of a remark by the Enlightenment philosopher David Hume). It means that moral theories cannot be deduced from factual knowledge. This has become conventional wisdom, and has resulted in a kind of dogmatic despair about morality: ‘you can’t derive an ought from an is, therefore morality cannot be justified by reason’. That leaves only two options: either to embrace unreason or to try living without ever making a moral judgement. Both are liable to lead to morally wrong choices, just as embracing unreason or never attempting to explain the physical world leads to factually false theories (and not just ignorance). Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations . . . Moral philosophy is basically about the problem of what to do next – and, more generally, what sort of life to lead, and what sort of world to want . . . There are objective truths in ethics. One of them is this: Thou shalt not close off the paths for error-correction.

Progress is unbounded

Deutsch argues that there are two possibilities: either something is forbidden by the laws of physics, or it is possible, given the right knowledge. Therefore, all evils are due to insufficient knowledge. Deutsch calls this ‘The Principle of Optimism’. The following is one of the most important paragraphs in the book:

Every putative physical transformation, to be performed in a given time with given resources or under any other conditions, is either
– impossible because it is forbidden by the laws of nature; or
– achievable, given the right knowledge.
That momentous dichotomy exists because if there were transformations that technology could never achieve regardless of what knowledge was brought to bear, then this fact would itself be a testable regularity in nature. But all regularities in nature have explanations, so the explanation of that regularity would itself be a law of nature, or a consequence of one. And so, again, everything that is not forbidden by laws of nature is achievable, given the right knowledge.

This implies that, contrary to popular belief, humans are highly cosmically significant. Consider the champagne bottle stored in the fridge at the offices of the Search for Extraterrestrial Intelligence (SETI). The cork will come off that champagne bottle if and only if humans succeed in detecting an alien civilisation. To explain why the cork came off the bottle, you would need to explain facts about which extraterrestrial civilisations are transmitting signals, and how those signals could have been intelligible to humans. In other words: to explain humans, you have to explain the universe first.

Similar champagne bottles are stored in other laboratories. The popping of each such cork signals a discovery about something significant in the cosmic scheme of things. Thus the study of the behaviour of champagne corks and other proxies for what people do is logically equivalent to the study of everything significant. It follows that humans, people and knowledge are not only objectively significant: they are by far the most significant phenomena in nature – the only ones whose behaviour cannot be understood without understanding everything of fundamental importance . . . Some people become depressed at the scale of the universe, because it makes them feel insignificant. Other people are relieved to feel insignificant, which is even worse. But, in any case, those are mistakes. Feeling insignificant because the universe is large has exactly the same logic as feeling inadequate for not being a cow. Or a herd of cows. The universe is not there to overwhelm us; it is our home, and our resource. The bigger the better.

You probably know that the effects of gravity drop off as the square of the distance. The same is true of the intensity of light. Indeed, there is only one known phenomenon whose effects do not necessarily drop off with distance: knowledge. A piece of knowledge could travel without any consequence for a thousand light-years, then completely transform the civilisation that it reached. This is another reason for the cosmic significance of humans, and one interpretation of the book’s title. Animals or pre-Enlightenment humans may have had a big impact, but that would necessarily diminish with time and distance. Only knowledge-creation can transform the world limitlessly.

The dichotomy I just discussed seems like a tautology, but Deutsch is making a stronger claim: that no knowledge is off limits to humans. Think of it this way: a chimp will never understand trigonometry. A central claim of this book – perhaps the most central – is that there is nothing that stands to humans as trigonometry stands to a chimp. Humans are universal constructors. A bird is an egg’s way of making more eggs. An elephant is elephant sperm’s way of making more elephants. But humans are nature’s way of making anything into anything.

Deutsch introduces this notion of universality by talking about number systems. The ancient Greek number system wasn’t universal, in the sense that there was a bound after which you couldn’t represent larger numbers. Simple tallies, and the Roman numeral system, could express indefinitely large numbers, but as the numbers grew in size, so too did the difficulty in representing them. Hindu-Arabic numerals (the type we use) are so significant because they are not just universal (they can represent any number) but digital.

Technically speaking, digitality is the attribute of a system that it ‘corrects to the norm’ from the particularities of the physical substrate in which it is embodied. For instance, if my friend who has a thick accent tells me something, I can subsequently convey the same message without making any of the same noises. I wouldn’t even have to use any of the same words. In this sense, human language is digital. This is relevant because error-correction is necessary for something to be universal. If you couldn’t correct your mistakes, even slight errors would add up until you couldn’t generate useful explanations at all. Hence, digitality is a pre-condition for the jump to universality. This is the reason, by the way, that all spoken languages build words out of a finite set of elementary sounds. There are no languages that limitlessly generate new sounds to represent new concepts: with errors in transmission and differences in accents, this would quickly become unintelligible.

The reason we use the same word for this property as we do for fingers and numbers is that a digital signal can be encoded in digits. Your computer can record an analogue noise, but only because it can make a digital representation of it. Now, of course, ‘digital’ is simply used to mean ‘associated with computers’.
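To make the contrast between number systems concrete, here is a small sketch (my own illustration, not from the book) of how representation length grows in a tally system versus positional numerals:

```python
# A tally system is universal -- it can represent any natural number --
# but the length of the representation grows linearly with the number.
def tally(n: int) -> str:
    return "|" * n

# Positional (Hindu-Arabic) numerals are also universal, but the length
# grows only logarithmically: doubling a number adds at most one digit.
def positional(n: int) -> str:
    return str(n)

print(len(tally(1000)))       # 1000 tally marks
print(len(positional(1000)))  # 4 digits
```

The tally system never hits a hard bound the way the Greek system did, but in practice it becomes unusable for large numbers, which is the distinction the paragraph above is drawing.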

To return to an earlier point: are we really so sure that chimps could never understand trigonometry? Given indefinite time, could a chimp ever figure out mathematics? Or a collection of chimps, able to argue and debate with each other? Nobody knows the answer to this, but there is suggestive evidence that the answer is no:

Such activities [like creating and using tools] may seem to depend on explanation – on understanding how and why each action within the complex behaviour has to fit in with the other actions in order to achieve the overall purpose. But recent discoveries have revealed how apes are able to imitate such behaviours without ever creating any explanatory knowledge. In a remarkable series of observational and theoretical studies, the evolutionary psychologist and animal-behaviour researcher Richard Byrne has shown how they achieve this by a process that he calls behaviour parsing (which is analogous to the grammatical analysis or ‘parsing’ of human speech or computer programs).

We might make future discoveries that show that chimpanzees really do create explanatory knowledge. But, if this line of research is correct, animals have no explanations. This is the fundamental justification for why Deutsch thinks that knowledge – and therefore progress – is unbounded. There are certain things that a cat can never understand. So why aren’t there other facts that are simply too complicated for humans to understand? Because humans, unlike cats, create explanations, and explaining things is a general procedure. The point is not that any particular human will ever understand a specific concept. We can understand things better; we can never understand things fully.

One corollary of universality, Deutsch says, is that worries about artificial intelligence are misplaced. Deutsch has a chapter on AI, but it is significantly outdated, so I decided to cut my commentary on it:

This [computers getting more efficient] can indeed be expected to continue. For instance, there will be ever-more-efficient human–computer interfaces, no doubt culminating in add-ons for the brain. But tasks like internet searching will never be carried out by super-fast AIs scanning billions of documents creatively for meaning, because they will not want to perform such tasks any more than humans do. Nor will artificial scientists, mathematicians and philosophers ever wield concepts or arguments that humans are inherently incapable of understanding. Universality implies that, in every important sense, humans and AIs will never be other than equal.

Another consequence of universality is that there is only one form of intelligence: the ability to create explanatory knowledge. (I don’t think he ever actually says this in the book; I think I got it from Steven Pinker.) People are enamoured with the idea of multiple intelligences, and frequently claim that intelligence can’t be measured or that IQ isn’t very meaningful. But, perversely, this is about the only psychological trait for which that is not true. Sure, our approximations of this objective intelligence will always be flawed, and we may speak about multiple intelligences for the sake of convenience. But, if Deutsch is right, all intelligence is unified.

Problems are soluble and problems are inevitable

Get two stone tablets. On one of them inscribe: problems are soluble. On the other one inscribe: problems are inevitable. Deutsch views this discovery as the key idea of the Enlightenment, and therefore the source of our civilisational progress:

That progress is both possible and desirable is perhaps the quintessential idea of the Enlightenment. It motivates all traditions of criticism, as well as the principle of seeking good explanations . . . Perhaps a more practical way of stressing the same truth would be to frame the growth of knowledge (all knowledge, not only scientific) as a continual transition from problems to better problems, rather than from problems to solutions or from theories to better theories.

Deutsch says that the Continental Enlightenment recognised that problems are soluble but not that problems are inevitable, whereas the British Enlightenment recognised both. These geographical boundaries are approximate, and there were Continental figures (e.g. Condorcet) who were quite British in their thinking, and vice versa. The most important consequence of the Enlightenment is that it created a tradition of criticism – one in which ideas could be tried out and rejected. A lack of a tradition of criticism is the reason why the year 1AD looked much the same as 1000AD. And a tradition of criticism is the reason why 2000AD looked completely different to 1000AD.

The inevitability of problems has two meanings. One is that everything in society is a trade-off, and there is no such thing as a free lunch. And the other is that we cannot ever be perfectly secure in our foundations of knowledge. Even if we appeared to be reaching the limits of fundamental physical laws, the concept of a ‘law’ is not set in stone; it has changed many times in the past and may change again. And in mathematics, we can never be sure that the axioms we have chosen are the correct ones. There is a famous debate over whether mathematics is created or discovered. But the Deutschian philosophy of science puts a spin on this by saying that mathematics is discovered by being created, along with everything else. Deutsch believes in moral and aesthetic truths, but he doesn’t believe in foundational truths. Everything is conjecture.

You may know about Gödel’s incompleteness theorem, which says that some mathematical problems are ‘undecidable’. Deutsch doesn’t think that this contradicts his dictum that ‘problems are soluble’ because we can always imagine devising an algorithm that would solve a given undecidable problem if there were no physical constraints (for example, if we could get a person to represent each natural number and have them move infinitely fast). All facts about unprovable statements are therefore actually facts about physics, and fit quite nicely into his dichotomy.   

This book contains many critiques of academic philosophy. Deutsch thinks philosophy took a bad turn in the 20th century, with the rise of ideas like positivism and inductivism. But one of philosophy’s worst attributes is that much of it ignores progress:

Bad philosophy is philosophy that denies the possibility, desirability or existence of progress. And progress is the only effective way of opposing bad philosophy. If progress cannot continue indefinitely, bad philosophy will inevitably come again into the ascendancy – for it will be true.

I worry sometimes about how people deny progress so much. Yes, we have just replaced the problems of simple agricultural lives with the problems of advanced civilisation, but those are better problems to have. The problem of obesity is the problem of there being too much delicious food! The problem of teenagers being addicted to their phones is the problem of there being too much compelling entertainment! It’s better to be unequal with some rich people than have nobody be rich at all, as was the case for the vast majority of human history. I’m not downplaying these problems: I want people to solve them! But after we solve them, we’ll be left with more problems; such is the nature of progress.

People have predicted many times before that progress was about to end, or that some ecological catastrophe was imminent. Predictions like this have a spectacularly poor track record. Deutsch divides forecasts into two categories: prophecies are forecasts that do not take into account the growth of knowledge, while predictions do take into account the growth of knowledge – and thus have some chance of actually being correct. One of the most infamous examples of a prophecy was The Population Bomb, a 1968 book by Paul Ehrlich which predicted that mass famines would occur within a decade. Another is biogeographical accounts of human history, like the one given by Jared Diamond in Guns, Germs and Steel. The motivation of Diamond’s book was to come up with an account of why Europe and America became so dominant without resorting to racist stereotypes, but Deutsch still finds the approach distasteful:

Presumably Diamond can look at ancient Athens, the Renaissance, the Enlightenment – all of them the quintessence of causation through the power of abstract ideas – and see no way of attributing those events to ideas and to people; he just takes it for granted that the only alternative to one reductionist, dehumanizing reinterpretation of events is another.

Here, we see that two strands of Deutsch’s thesis are actually one and the same – his optimism and his belief in the causal power of abstraction. The parochial answer to why the dinosaurs went extinct is that they were hit by an asteroid. But, at a deeper level, the real answer is that dinosaurs didn’t have a space program.

Abstractions are real

Why is there a particular atom of copper in one specific spot in Parliament Square? One way to answer this question is to track the evolution of the physical system, or perhaps use computer modelling to get successively better approximations of the movement of atoms. But there is a better explanation: the atom of copper is there because it is in a statue of Winston Churchill, and humans like to honour their influential leaders with statues. It’s not just that this is a simplified way of talking about the movement of atoms. It’s that abstractions like ‘statue’ and ‘Winston Churchill’ exert real causal force. Causation goes up, as well as down, the ladder of abstraction:

The behaviour of high-level physical quantities consists of nothing but the behaviour of their low-level constituents with most of the details ignored. This has given rise to a widespread misconception about emergence and explanation, known as reductionism: the doctrine that science always explains and predicts things reductively, i.e. by analysing them into components. Often it does, as when we use the fact that inter-atomic forces obey the law of conservation of energy to make and explain a high-level prediction that the kettle cannot boil water without a power supply. But reductionism requires the relationship between different levels of explanation always to be like that, and often it is not.

The view that abstractions are real is called weak emergence, and the idea that they are as real as anything else and exert causal power is called strong emergence. These terms are often used loosely, and Deutsch here is defending a controversial variety of strong emergence.  

Anthropic reasoning is flawed

Anthropic reasoning is reasoning from the fact that we are observers. For example, if we find that some process is required to make stars burn, then we know a priori that this process must have occurred, because we exist and are orbiting a star (indeed, this is exactly what Fred Hoyle did). Anthropic reasoning is often employed to deal with the fine-tuning argument. Deutsch’s first problem with anthropic reasoning is that, if there are an appreciable number of variables (like the speed of light, the masses of the various elementary particles, and so on) that determine the likelihood of astrophysicists arising, it will always look as if our universe is very finely tuned.

The argument runs like this: suppose we say that a variable is ‘close to the edge’ when it is within 10% of its possible extreme values on either side. If there were only one variable that determined our universe, 20% of its possible values would be close to the edge. If we observed such a variable as being very close to the edge, we might suspect that something fishy was going on, or that our universe was designed. But for two variables, 1 - 0.8^2 = 36% of configurations will be close to the edge, and in general, for n variables, the fraction is 1 - 0.8^n. We do not know what value n is, but as long as it is not very small, the vast majority of possible configurations of variables will be close to the edge. More concretely, if we take the ‘edge’ to be the boundary of values for which it is possible for life to arise, then the vast majority of universes with life will appear as though they almost didn’t have life. The vast majority of universes with astrophysicists almost didn’t have any astrophysicists!

There is a geometric analogy here: think of the variables as dimensions, and take an arbitrarily thin band around the extreme possible values of each variable. The proportion of the volume, or area, close to these extreme values starts very small, but in higher and higher dimensions it approaches 100%. If you had a physical object surrounded by a single layer of atoms, then as you increased the number of dimensions, almost the entire volume of the shape would be just the atoms.
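The arithmetic here is easy to check directly. A minimal sketch (my own, using the 10%-per-side band from the text):

```python
# Each of n independent variables is "close to the edge" with
# probability 0.2 (within 10% of either extreme of its range).
def edge_fraction(n: int, per_var: float = 0.2) -> float:
    """Probability that at least one of n variables is near an extreme."""
    return 1.0 - (1.0 - per_var) ** n

# One variable: 20%. Two: 1 - 0.8^2 = 36%. As n grows, almost
# every possible configuration is close to some edge.
for n in (1, 2, 10, 50):
    print(n, round(edge_fraction(n), 4))

# The same formula gives the thin-shell volume of an n-dimensional
# cube: with a band of width eps at each face, the interior fraction
# (1 - 2*eps)^n shrinks toward zero, so the shell dominates.
def shell_fraction(n: int, eps: float = 0.1) -> float:
    return 1.0 - (1.0 - 2.0 * eps) ** n
```

By n = 50 the fraction already exceeds 99%, which is the sense in which almost all universes with astrophysicists “almost didn’t” have any.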

Failure to understand this point, and other limitations of anthropic reasoning, has led to some confused arguments. For instance, Deutsch dismisses the argument, expounded by philosopher Nick Bostrom, that we are living in a simulation. Briefly, the argument is that future humans will likely produce ‘ancestor simulations’ for commercial and scientific reasons. Pretty quickly after this technology is invented, simulated humans will vastly outnumber real ones. Hence, if you find yourself as a human observer, you are overwhelmingly likely to be simulated. The simulation argument, Deutsch says, can be rejected out of hand because it would create a barrier to knowledge. We might as well say Zeus did it. He’s not rejecting an empirical theory for philosophical reasons, he’s actually saying it’s not even an empirical theory. Theories that propose barriers to knowledge are not even wrong.

One of the difficulties of anthropic reasoning is that it’s very hard to meaningfully define what counts as a proportion of an infinite set. For example, if there are infinitely many parallel universes, it is unclear what it means to say that a certain proportion of them contain astrophysicists. You might intuitively say that there are half as many even numbers as there are natural numbers – but this only appears to be the case because of the arrangement rule that we have chosen to apply to the natural numbers. If we grouped them in a different way (e.g. 1, 3, 2, 5, 7, 4, ...), you would conclude that there are one third as many even numbers as there are natural numbers. The branch of mathematics that deals with problems like this is called measure theory. Other dubious applications of anthropic reasoning are the quantum suicide argument, the doomsday argument, and Boltzmann brains. There are a host of other paradoxes that arise when you start thinking about ethics in the multiverse, or indeed in an infinite universe. These are studied in the recently developed field of infinite ethics.
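The even-numbers point can be demonstrated numerically. A small sketch (my own illustration): measure the fraction of even numbers in long prefixes of two different orderings of the same set, the natural numbers.

```python
from itertools import count, islice

def density_of_evens(seq, k: int) -> float:
    """Fraction of even numbers among the first k terms of an ordering."""
    return sum(1 for x in islice(seq, k) if x % 2 == 0) / k

def natural_order():
    yield from count(1)  # 1, 2, 3, 4, ...

def two_odds_per_even():
    # 1, 3, 2, 5, 7, 4, ...: every even number still appears exactly
    # once, but prefix densities now converge to 1/3 instead of 1/2.
    odds, evens = count(1, 2), count(2, 2)
    while True:
        yield next(odds)
        yield next(odds)
        yield next(evens)

print(density_of_evens(natural_order(), 30000))      # 0.5
print(density_of_evens(two_odds_per_even(), 30000))  # ~0.3333
```

Both orderings enumerate exactly the same infinite set, yet the limiting “proportion” of evens differs, which is why naive proportions over infinite sets are meaningless without a choice of measure.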

Almost all logically possible universes that contain astrophysicists are governed by laws of physics that are bad explanations. So should we predict that our universe, too, is inexplicable? Or has some high but unknowable probability to be? Thus, again, anthropic arguments based on ‘all possible laws’ are ruled out for being bad explanations . . . Scientific explanations cannot possibly depend on how we choose to label the entities referred to in the theory. So anthropic reasoning, by itself, cannot make predictions. Which is why I said . . . that it cannot explain the fine-tuning of the constants of physics . . . Fine tuning is an unsolved problem in physics. An unsolved problem is no more evidence for the supernatural than an unsolved crime is evidence that a ghost did it.

Almost all members of an infinite set can be unrepresentative of that set, and there is no paradox here. If the argument above is correct, then the overwhelming probability is that our explanations about fine-tuning will be bad. Generalising this argument, almost all our explanations about everything will be bad. Does this put us in an epistemological crisis in which we can’t know anything? I don’t exactly understand the argument here, but I think Deutsch is saying that we can dismiss these worries because any explanation that posits the creation of bad explanations is itself a bad explanation. Just try to hypothesise that the universe is fundamentally unknowable. The steps in your reasoning may well appear sound, but, if your argument is actually correct, you have a paradox: if the universe is fundamentally unknowable, how could you know that it was unknowable?

Focus on ejecting bad leaders, not selecting good ones

In reading this part of the book, it would be helpful to have some background knowledge about voting theory – here’s a primer. One of the most important results is the Condorcet paradox: even given a complete and consistent list of people’s preferences, majority voting can still produce cyclical group preferences, e.g. a group that prefers Alice to Bob, Bob to Carol, and Carol to Alice. This means that, mathematically speaking, there is no such thing as the will of the people. Some voting systems are certainly fairer than others, but none are perfectly fair.
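The cycle is easy to exhibit concretely. Here is a minimal sketch (a hypothetical three-voter electorate, not an example from the book) in which each individual ranking is perfectly consistent, yet the pairwise majority preference is cyclic:

```python
# Each ballot ranks candidates best-first; every voter is individually
# consistent (a strict total order), yet the majorities form a cycle.
ballots = [["Alice", "Bob", "Carol"],   # voter 1: Alice > Bob > Carol
           ["Bob", "Carol", "Alice"],   # voter 2: Bob > Carol > Alice
           ["Carol", "Alice", "Bob"]]   # voter 3: Carol > Alice > Bob

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

# Alice beats Bob 2-1, Bob beats Carol 2-1, and Carol beats Alice 2-1:
# there is no candidate the group consistently prefers.
```

Each pairwise contest has a clear 2–1 winner, but chaining them gives Alice > Bob > Carol > Alice: no coherent "will of the people".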

In this book, Deutsch defends something that I had never before read someone actually defend: First-Past-the-Post (FPTP) voting, i.e. everyone gets a single vote, and the person with the most votes wins. In brief, he thinks that the point of elections is not to select the “correct” leaders, but to be able to eject bad ones. Elections are not like a distributed version of a hiring process, where we’re trying to find the best person for the job. They’re the mechanism that societies use to put someone in charge without violence. On this criterion, which was a central part of Popper’s political philosophy, proportional representation systems do worse than FPTP. In an FPTP system like Britain’s, a marginal change in the preferences of the population will almost certainly lead to a substantial change in the outcome – for instance, a leftward shift in the population will lead to a leftward shift in the government. On the other hand, the coalition governments that characterise most of continental Europe can simply change which parties are in the coalition, such that a leftward shift in the population might well lead to a rightward shift in the government. Moreover, proportional representation systems, while lauded for their fairness, give hugely disproportionate power to the third-largest party, which can use its necessity in forming a coalition as a bargaining chip to have its policies passed. Instead of focusing on theoretical notions of fairness, we should favour political systems that embody traditions of peaceful, constructive criticism. While continental European voting systems have more theoretical considerations in their favour, Britain has a virtually unmatched history of political stability. (Keep in mind that this book was written ten years ago. With the hyper-partisanship in the US and the recent trend in British elections (namely: the Tories win every time), the error-correcting attributes of these systems are not looking so good.)

Proportional representation is often defended on the grounds that it leads to coalition governments and compromise policies. But compromises – amalgams of the policies of the contributors – have an undeservedly high reputation. Though they are certainly better than immediate violence, they are generally, as I have explained, bad policies. If a policy is no one’s idea of what will work, then why should it work? But that is not the worst of it. The key defect of compromise policies is that when one of them is implemented and fails, no one learns anything because no one ever agreed with it. Thus compromise policies shield the underlying explanations which do at least seem good to some faction from being criticized and abandoned . . . Ideas have consequences, and the ‘who should rule?’ approach to political philosophy is not just a mistake of academic analysis: it has been part of practically every bad political doctrine in history. If the political process is seen as an engine for putting the right rulers in power, then it justifies violence, for until that right system is in place, no ruler is legitimate; and once it is in place, and its designated rulers are ruling, opposition to them is opposition to rightness.

This view that we should be able to identify specific views with specific individuals and parties is borne out in the way the book is written. There’s not very much hedging language. Maybe Deutsch fully believes everything he says in this book, and maybe sometimes he’s playing devil’s advocate. In any case, he wants us to think: “There’s this view X, which we can identify with David Deutsch. If he’s right, we can praise him and if he’s wrong, we can blame him.” That brings me to why there is a chapter about voting systems in this book. There are two reasons: one, to emphasise the importance of a tradition of criticism, and two, to show that error-correction is not just epistemologically necessary but politically necessary also. It’s error-correction all the way down.

This reasoning about voting systems is related to Zeno’s famous paradox. If there are infinitely many points between the corner of my room and me, how am I ever able to move? Deutsch says that voting theory effectively commits Zeno’s mistake. It confuses an abstract process of decision-making with the real-life thing of the same name. The map is not the territory:

A quantity is definitely neither infinite nor infinitesimal if it could, in principle, register on some measuring instrument. However, by that definition a quantity can be finite even if the underlying explanation refers to an infinite set in the mathematical sense. To display the result of a measurement the needle on a meter might move by one centimetre, which is a finite distance, but it consists of an uncountable infinity of points. This can happen because, although points appear in lowest-level explanations of what is happening, the number of points never appears in predictions. Physics deals in distances, not numbers of points. Similarly, Newton and Leibniz were able to use infinitesimal distances to explain physical quantities like instantaneous velocity, yet there is nothing physically infinitesimal or infinite in, say, the continuous motion of a projectile.

Beauty is objective

Why are flowers beautiful? Is it just a coincidence that they look so pretty to human eyes? You might say this is because we share an evolutionary history with insects. And indeed, sometimes shared evolutionary lineage is the explanation for our aesthetic tastes: the sweetness of honey is an example. Or, you might say that flowers signalled a food-rich environment to our ancestors, but we don’t find leaves beautiful (except by chance) and we certainly don’t find roots beautiful. Other things in nature look beautiful by coincidence, like a peacock’s tail. Yet flowers are reliably beautiful, even though many of them evolved to attract different species in very different environments. There are various general traits that humans tend to find attractive, like symmetry, and yet these are lacking in many types of flowers that we find beautiful. Deutsch’s hypothesis is this: flowers are objectively beautiful. They create a hard-to-forge signal between species that lack shared knowledge. The vast majority of beautiful things are beautiful for parochial reasons, like species or culture, and are hence only subjectively beautiful. But, if Deutsch is right, even aliens would find flowers beautiful. Talk of objective beauty might sound strange, but you probably already think beauty is objective to a certain extent. Whether Mozart or Beethoven is better might strike you as completely subjective, but clearly, there is some objective sense in which we can say that Mozart is better than my three-year-old cousin randomly banging keys on a piano.

The first time I read this book, I thought this was a tangent. But it really isn’t. This is relevant to the broader thesis because signalling between humans is much like signalling across species. Every person is a species unto themselves:

Signalling across the gap between two humans is analogous to signalling across the gap between two entire species. A human being, in terms of knowledge content and creative individuality, is like a species . . . And therefore my guess is that the easiest way to signal across such a gap with hard-to-forge patterns designed to be recognized by hard-to-emulate pattern-matching algorithms is to use objective standards of beauty. So flowers have to create objective beauty, and insects have to recognize objective beauty. Consequently the only species that are attracted by flowers are the insect species that co-evolved to do so – and humans.

This is a very optimistic account of beauty. If beauty is objective, then the creation of artistic beauty is unbounded in the way other forms of knowledge-creation are. That would mean that there is literally no limit on how much we can refine human aesthetic experiences. Also, explanations about beauty would be unpredictable. If you knew what new law of physics was going to be discovered tomorrow, it would have been discovered today. Similarly, art can’t be predicted, despite the fact that it is determined by the laws of physics:

New art is unpredictable, like new scientific discoveries. Is that the unpredictability of randomness, or the deeper unknowability of knowledge-creation? In other words, is art truly creative, like science and mathematics? That question is usually asked the other way round, because the idea of creativity is still rather confused by various misconceptions. Empiricism miscasts science as an automatic, non-creative process. And art, though acknowledged as ‘creative’, has often been seen as the antithesis of science, and hence irrational, random, inexplicable – and hence unjudgeable, and non-objective. But if beauty is objective, then a new work of art, like a newly discovered law of nature or mathematical theorem, adds something irreducibly new to the world.

Determinism says that the universe is completely determined by the laws of physics and could not have occurred otherwise (excluding truly random effects like those seen in quantum mechanics). Compatibilists argue that this is compatible with the notion of free will. Deutsch appears to be proposing a kind of meta-compatibilism, wherein the ability of persons to create knowledge means that, in a sense, explanations have free will too. The question isn’t whether science is creative in the way art is. The question is whether art is creative in the way science is:

One amusing corollary of this theory is, I think, that it is quite possible that human appearance, as influenced by human sexual selection, satisfies standards of objective beauty as well as species-specific ones. We may not be very far along that path yet, because we diverged from apes only a few hundred thousand years ago, so our appearance isn’t yet all that different from that of apes. But I guess that when beauty is better understood it will turn out that most of the differences have been in the direction of making humans objectively more beautiful than apes.

Imitation is a creative act

‘Meme’ is a term coined by the biologist Richard Dawkins, by analogy with ‘gene’, to refer to a unit of cultural transmission, like a tune or the idea of bagels. Memes, and imitation, have a philosophical complexity to them. A student might acquire a meme at a lecture without being able to repeat a single sentence spoken by the lecturer. There’s no such thing as “just imitating the behaviour”. Human memes transmit themselves not by being observed, but by being internally generated within each person. Hence, every act of imitation is an act of creativity.

What sort of a thing is a meme? Consider a tune, the prototypical example of a meme. You might say that a tune is a sequence of noises at certain frequencies, but that’s not right – it’s still the same tune if you play it on a different instrument or in a different key. Is it the pattern in the brains of the people who know the tune? This also seems problematic – the same tune will be encoded completely differently in different people’s brains, and it’s not like the brain has easily identifiable discrete pieces of information. Rather, a meme is an abstraction (recall, abstractions are real) that is the superset of all of these things.

The idea that memes are simply there to be replicated is the same fallacy at work in empiricism, where people think that there is simply knowledge in the senses that is there to be ‘derived’. There is a genuine puzzle here: why did creativity ever arise to begin with? Why be creative when you live in a society with no innovation? Why speak a language when no one else can understand you? Deutsch says that the problem of the replication of memes and the evolution of creativity are two sides of the same coin:

I have presented two puzzles. The first is why human creativity was evolutionarily advantageous at a time when there was almost no innovation. The second is how human memes can possibly be replicated, given that they have content that the recipient never observes. I think that both those puzzles have the same solution: what replicates human memes is creativity; and creativity was used, while it was evolving, to replicate memes. In other words, it was used to acquire existing knowledge, not to create new knowledge. But the mechanism to do both things is identical, and so in acquiring the ability to do the former, we automatically became able to do the latter.

Next, Deutsch introduces the dichotomy between ‘rational memes’ and ‘anti-rational memes’. Rational memes are those that rely on the critical faculties of their host to survive. Anti-rational memes are those that rely on selectively disabling the critical faculties of their host. A tradition of criticism has many rational memes. In a tradition of criticism, it is hard for anti-rational memes to survive, except within subcultures that suppress criticism: “Bigotry exists not because it benefits the bigots, but despite the harm they do to themselves.” Creativity and rational memes tie in with a topic from earlier: universality. When there is a jump to universality, the system often looks the same from the outside:

From the perspective of hypothetical extraterrestrials observing our ancestors, a community of advanced apes with memes before the evolution of creativity began would have looked superficially similar to their descendants after the jump to universality. The latter would merely have had many more memes. But the mechanism keeping those memes replicating faithfully would have changed profoundly. The animals of the earlier community would have been relying on their lack of creativity to replicate their memes; the people, despite living in a static society, would be relying entirely on their creativity.

Let me introduce a taxonomy courtesy of Daniel Dennett. At first, evolution created Darwinian creatures – ones who had a certain behaviour they pursued through their whole lives; for example, single-celled organisms programmed to do nothing other than divide. Then, we got Skinnerian creatures – ones who could be conditioned to react to different stimuli with different strategies. Next, Popperian creatures evolved, which could internally test strategies before trying them out in the real world. As Popper put it, “We can let our theories die in our place.” The final stage is one that perhaps only humans have achieved: Gregorian creatures. These form a collective intelligence in which ideas can be tested by many individuals and implemented by any of them – in other words, a culture. Notice how these aren’t just alternative niches that creatures can fill to survive. They’re genuine advancements in evolution. Darwinian creatures, by definition, can do no better than the behaviour they were born with. Skinnerian creatures at least have their odds improved by experience. But humans can direct their evolution in a deliberate, purposeful direction, through culture. Evolution itself evolves.

I mentioned earlier that anti-rational memes do not disable the critical faculties of their host in general, but rather disable certain parts:

The overarching selection pressure on memes is towards being faithfully replicated. But, within that, there is also pressure to do as little damage to the holder’s mind as possible, because that mind is what the human uses to be long-lived enough to be able to enact the meme’s behaviours as much as possible. This pushes memes in the direction of causing a finely tuned compulsion in the holder’s mind: ideally, this would be just the inability to refrain from enacting that particular meme (or memeplex). Thus, for example, long-lived religions typically cause fear of specific supernatural entities, but they do not cause general fearfulness or gullibility, because that would both harm the holders in general and make them more susceptible to rival memes.

Another dichotomy that Deutsch introduces is between dynamic societies and static societies. Dynamic societies progress by reinventing themselves and encouraging the criticism of rational memes. Static societies continue by suppressing criticism and innovation. The vast majority of societies throughout history have been static. With the exception of the current explosion of dynamism originating in the Enlightenment, there were really only a few examples of dynamic societies, including Athens. Athens could have been a beginning of infinity, but for one reason or another, its dynamism was stamped out.

Sustainability is overrated

There is a common idea, sometimes called Spaceship Earth, which says that the Earth is uniquely habitable to humans, and that it is fragile and must be sustained by us. But, when you think about it, Earth is barely habitable to humans. Without clothing and other technologies, humans would freeze to death in the winter in most places on Earth. As for the sustainability point, one of the confusions in this discussion is that the word ‘sustain’ has two meanings which are often in tension with one another. To sustain something means to keep it alive or flourishing. It also means to keep something the same, which sometimes means the exact opposite. Most of the things that have improved human life, like curing diseases, have been unsustainable. Keeping things the same would be tyranny, because of all of the suffering caused by soluble problems.

In the pessimistic conception, the distinctive ability of people to solve problems is a disease for which sustainability is the cure. In the optimistic conception, sustainability is the disease and people are the cure. ‘Sustainability’ has evolved into a meaningless catch-all term which sometimes just refers to ‘avoiding terrible outcomes’. Sustainability, in the sense of wanting to keep things the same, is frequently motivated by an obsession with naturalness. Many people have a view that natural things are intrinsically good, and unnatural things intrinsically bad. When considering climate change, this obsession with naturalness and with maintaining the status quo becomes especially absurd:

Unfortunately, this has led to the political debate being dominated by the side issue of how ‘anthropogenic’ (human-caused) the increase in temperature to date has been. It is as if people were arguing about how best to prepare for the next hurricane while all agreeing that the only hurricanes one should prepare for are human-induced ones.

Sustaining something requires that one actively resist change. Very often, this means rampant violence and oppression:

Static societies do tend to settle issues by violence, and they do tend to sacrifice the welfare of individuals for the ‘good’ of (that is to say, for the prevention of changes in) society. I mentioned that people who rely on such analogies end up either advocating a static society or condoning violence and oppression. We now see that those two responses are essentially the same: oppression is what it takes to keep a society static; oppression of a given kind will not last long unless the society is static.

This is relevant to the interminable debates over whether life is actually better in primitive societies (I don’t mean this word as a pejorative; ‘primitive’ literally means ‘resembling an earlier time’). One of the key arguments used to argue in favour of primitive societies is that people who live in them have very free lives: they don’t have to work in a MegaCorp to pay the rent, and it doesn’t take them very long to hunt and gather so they can spend the rest of their time telling stories and making art. But actually, this argument about the staticity of societies indicates that traditional lifestyles are incredibly unfree, often in ways that are opaque to outsiders. If these societies were not actively suppressing the growth of knowledge, they wouldn’t have stayed the same for so long, and constraining people’s ability to think and invent necessarily involves heavy-handed interference with their lives.

Since the sustained, exponential growth of knowledge has unmistakable effects, we can deduce without historical research that every society on Earth before the current Western civilization has either been static or has been destroyed within a few generations. The golden ages of Athens and Florence are examples of the latter, but there may have been many others.

My view is that this book would have been very controversial if anyone actually understood it:

Nations beyond the West today are also changing rapidly, sometimes through the exigencies of warfare with their neighbours, but more often and even more powerfully by the peaceful transmission of Western memes. Their cultures, too, cannot become static again. They must either become ‘Western’ in their mode of operation or lose all their knowledge and thus cease to exist – a dilemma which is becoming increasingly significant in world politics . . . Western civilization is in an unstable transitional period between stable, static societies consisting of anti-rational memes and a stable dynamic society consisting of rational memes. Contrary to conventional wisdom, primitive societies are unimaginably unpleasant to live in.

We will always be at the beginning of infinity

It might be well for all of us to remember that, while differing a lot in the little bits we do know, in our infinite ignorance we are all equal.

As discussed in the section on anthropic bias, our intuitions break down at infinity. One of the most common thought experiments used to explain infinity is Hilbert’s Hotel. This is a hotel with an infinite number of rooms, all of which are always full. Despite this, Hilbert’s Hotel is always able to make room for more guests, by announcing over the loudspeaker that every guest in room n should move to room 2n. For our present purposes, what’s relevant is that every guest in Hilbert’s Hotel is unusually close to the beginning. Pick any guest, and they will have a finite number of people staying in lower-numbered rooms, but an infinite number in higher-numbered rooms. Similarly, any person living during a period of unbounded knowledge-creation will be unusually close to the beginning. This is yet another interpretation of the book’s title:

Meme evolution [is] enormously faster than gene evolution, which partly explains how memes can contain so much knowledge. Hence the frequently cited metaphor of the history of life on Earth, in which human civilization occupies only the final ‘second’ of the ‘day’ during which life has so far existed, is misleading. In reality, a substantial proportion of all evolution on our planet to date has occurred in human brains. And it has barely begun. The whole of biological evolution was but a preface to the main story of evolution, the evolution of memes.

Gene evolution was simply a precursor to meme evolution. If we do not mess things up, the first few billion years of life will be but a footnote to the next few hundred years of humans. We will always be scratching the surface, never anything else:

Many people have an aversion to infinity of various kinds. But there are some things that we do not have a choice about. There is only one way of thinking that is capable of making progress, or of surviving in the long run, and that is the way of seeking good explanations through creativity and criticism. What lies ahead of us is in any case infinity. All we can choose is whether it is an infinity of ignorance or of knowledge, wrong or right, death or life.

Thanks to Gytis Daujotas and Sydney Marcy for reviewing drafts of this post.

Update 10/8/21: Thanks to Brett Hall for pointing out that I misunderstood important points in the ‘Imitation is a creative act’ section. I was saying that imitation is an ambiguous term, and therefore that creativity is required to imitate. But Deutsch is actually saying, I think, that while imitation is possible without creativity, it is not the basis of human meme-replication: “The truth is that imitating people’s actions and remembering their utterances could not possibly be the basis of human meme replication. In reality these play only a small – and for the most part inessential – role.” The actual basis of human meme-replication is creativity. Hence what I wrote is misleading and you should mentally replace most instances of the word ‘imitation’ with ‘human meme-replication’.


Thank you for this superb summary. I’ve spent the last year reading and rereading Deutsch’s book and listening to Brett’s podcast. Still many a-ha moments for me reading this. 🙏🏻🙏🏻🙏🏻

cross-posting my comment here:

I really like the entirely new causes for optimism that are contained in this book.

I wonder sometimes, though, if Deutsch views such questions too much in the light of systems or phase transitions and thus loses the general view of morality. One central example is, as you described, his view on the ‘spaceship earth’ metaphor as a largely fearmongering response. Of course, the explanatory ability and general ability to innovate will prevail over a huge amount of adversity, like climate change. But you get the sense that these arguments remain true even if half of the world were to die tomorrow or something. Really, as long as some viable breeding population of humans in spacesuits can read instructions printed on steel-etched cards, the Deutsch view has no comment and, following only this reasoning, we incur no loss.

On one hand, huge suffering is bad and should be avoided. But on the other, maybe Deutsch is right and right more intensely than even I would be comfortable with, that essentially nothing else matters than the phase transitions which he calls the beginnings of infinity.

Thanks for the review Sam and keep up the great work.

This grossly misinterprets Deutsch. He believes that individuals have rights and rejects utilitarianism. The view isn’t “As long as some number of people are able to create knowledge it doesn’t matter if lots of people die and/or suffer.” It’s that cutting off the means of creating knowledge is wrong, and it applies at every level (individual/institution/nation). People dying is bad.



Some people think that what makes a good explanation is testability. But this isn’t enough: some theories are perfectly testable but do not constitute good explanations.

The more usual claim is that falsifiability makes something a (scientific) explanation at all, while other factors make it a good explanation.

This book has many different threads to it, but one of the most important is a kind of philosophical treatise about how good explanations come to be. One classical idea, which Deutsch rejects, is that we do so by induction, a doctrine known as inductivism.

So, Deutsch argues against induction as the source of new scientific theories.

Nobody believes any longer that induction is the sole source of scientific explanations. That was a feature of very early philosophy of science, such as Bacon's. Yet many believe induction has many uses. It is extremely useful to be able to predict future events, even if you can't explain the mechanism. Which is by no means to say that the explanations, even non-predictive ones, have no value.

Empiricism is the philosophical idea that we derive knowledge from our senses. There are a number of problems with this. One is that sense-data by themselves are meaningless. If you had no pre-existing ideas or expectations, you wouldn’t know how to interpret your senses. We do not read from the book of nature. The other major problem with empiricism is how to deal with false perceptions, like optical illusions.

Again, empiricism may historically have been the idea that knowledge is formed by passively registering sense data, or that sense data are infallible, but no serious person believes either anymore. Yet plenty of people still believe in empiricism: empirical evidence is standardly used to confirm (justify) and to disprove theories.

Popperians use it for the second purpose, but not the first; yet they are among those who believe in a form of empiricism. For Popperians, a conjecture is not knowledge until it has been corroborated, and corroboration means surviving attempted refutation, and the classic (but not only) means of refuting a theory is contrary evidence, that is, contrary empirical evidence. So Popperians derive knowledge from the senses, if not entirely from them.

The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails

Justification is not regarded by anyone as a matter of anthropic authority; as far as I know, it never was, so this is not even a historical mistake. And the most widely accepted justification is empirical evidence, which is not anthropic authority. And empirical evidence is reliable enough, so long as you don't insist on certainty; a lot of these problems are solved by the probabilistic approach.

Not having been falsified makes a theory better, in addition to explanatoriness, and so does confirmation.

(Consider Omega, the hypothetical entity that makes true pronouncements about everything. On the face of it, Omega is the ultimate authority... but actually you have no reason to trust any one pronouncement by Omega, unless you have evidence that Omega has made correct pronouncements in the past, i.e. that Omega is reliable. So authority follows from reliability, not vice versa!)

What is the explanatory account of how eating grass could cure a cold?

Where's the evidence, for that matter? Our ancestors followed many practices which work, but for which they had no explanation. They baked and brewed without understanding microbiology, and so on. If you have an explanation without evidence, then it would be a good idea to look for evidence, to test it; and if you have evidence but no explanation, then it would be a good idea to look for an explanation.

Of course, "working" is an appeal to induction. So if you re- run history with an insistence on explanation, and a disregard for induction, we would be considerably worse off.We are the latest iteration of a dynasty of organisms that made progress without explanations. Lots of progress was achieved without explanations.

(To be continued).

"Our ancestors followed many practices which work, but for which they had no explanation."

That would be very surprising for a species that reflexively attempts to explain things.

Also, in the book, he specifies that he's explaining the unprecedented rate of consistent progress from the scientific revolution onward.

Edit: I was mistaken. He is trying to explain all progress.


That would be very surprising for a species that reflexively attempts to explain things

Not really. Failing to do stuff that works will kill you. Doing stuff that works inexplicably won't.

I think I wasn't clear. An explanation that isn't accurate is still an explanation to Deutsch; it just isn't a good one. Microbiology and bread-spirits are both explanations for rising bread.


That strengthens the case for explanation being ubiquitous at the expense of the case for explanation being important. What can you do with a bad explanation that you can't do with no explanation?

Deutsch specifies good explanations (laws of nature, scientific theories), and claims the rapid increase of good explanations is because of the invention of the scientific method, and thus explanations are essential for progress.

A bad explanation allows me to make (bad) sense of the world, which makes it appear less chaotic and threatening. 

Ah yes, the spirits are causing the indigestion. Now I know that I need only do a specific dance to please them and the discomfort will resolve. 

The alternative is suffering for no apparent reason or recourse. At least until we find a good explanation for indigestion.


A bad explanation allows me to make (bad) sense of the world, which makes it appear less chaotic and threatening

The lower limit on the value of a bad explanation isn't even zero; it's negative. For instance, the use of leeching as a cure-all.

Yes, but I'm not sure how that follows from your original question.

What can you do with a bad explanation that you can't do with no explanation?


Most of the notes in my copy are about "The Leap to Universality".

He's taken with the theory of the Turing machine as universal computer, and has expanded it to other kinds of universality; indeed he thinks there is a completely general kind of universality, a universal universality.

A universal Turing machine is universal in the sense that it can emulate any finite Turing machine. It pulls the trick off by having an infinite memory, in which any finite TM can be represented as a programme. A computer with an infinite memory cannot be built, so a UTM is an abstraction: it doesn't exist in the real world. Reality didn't make a jump to universality when digital computation was invented, because for every real digital computer there is an infinite number of programmes which can't fit into it.
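The "program as data" trick the comment alludes to can be made concrete with a toy sketch (my own illustration, not Deutsch's construction): one interpreter, the analogue of a UTM, runs any finite transition table you hand it, provided the tape can keep growing.

```python
# Toy Turing-machine interpreter: a "program as data" sketch.
# Any finite TM is just a transition table, so one interpreter
# (the analogue of a UTM) can run them all -- given enough tape.

def run(table, tape, state="start", pos=0, max_steps=1000):
    """Run a TM given as {(state, symbol): (new_state, write, move)}."""
    cells = dict(enumerate(tape))         # sparse tape, grows as needed
    for _ in range(max_steps):            # a real UTM has no such bound
        if state == "halt":
            break
        sym = cells.get(pos, "_")         # "_" is the blank symbol
        state, write, move = table[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit, then halts at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run(flipper, "0110"))  # -> 1001_
```

The `max_steps` bound and the finite Python heap are exactly the point of the objection above: every physical interpreter has such limits, so it can only emulate the machines that fit.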

Deutsch also gives the example of number systems. There are ways of writing numerals that don't allow you to write arbitrarily large numerals, and ways that do. So the ways that do are universal... in a sense. They don't require actual infinities, unlike a UTM. On the other hand, the argument only demonstrates universality in a limited sense: a number system that can write any integer cannot necessarily write fractions or complex numbers, or whatever. So what is the ultimately universal system? No one knows. Integers have been extended to real numbers, surreal numbers, and so on. No one knows where the outer limit is.
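The contrast between a bounded and an unbounded numeral system is easy to demonstrate (a sketch of my own; the classical Roman system conventionally tops out at 3999, while positional notation handles any integer):

```python
# A bounded numeral system (classical Roman) vs an unbounded,
# "universal" one (positional notation in any base).

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    if not 0 < n < 4000:
        raise ValueError("out of range for the classical system")
    out = []
    for value, glyph in ROMAN:
        while n >= value:
            out.append(glyph)
            n -= value
    return "".join(out)

def to_positional(n, base=10):
    """Positional notation: works for arbitrarily large n."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while True:
        n, d = divmod(n, base)
        out = digits[d] + out
        if n == 0:
            return out

print(to_roman(2024))         # -> MMXXIV
print(to_positional(10**100)) # fine: a googol poses no problem
```

`to_roman` fails outside a fixed range by design, which is the sense in which the system is non-universal; `to_positional` never runs out of numerals, though, as the comment notes, that still says nothing about fractions or complex numbers.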

Deutsch toys with the idea that DNA is also universal, but it is not at all clear whether DNA as we know it can build any living organism, or what life in the most generic sense is. Deutsch seems to think that digitality and error correction, both of which DNA has, are necessary components of universality. This is, as ever, based on analogy with the UTM, the universal digital computer, but it seems plausible that digitality and error correction are necessary components of digital computation, not of universality.

So there are two problems: there is no single concept of universality; and one of the major constituent concepts of universality isn't realisable. There’s no jump to universality because there is no jump to infinity.

"Not only can all problems be solved, but all people can solve problems. People have universality. As Deutsch says: “there can be only one type of person: universal explainers”. Universal explainers can create explanatory knowledge.

This falls out of the Church-Turing Thesis where anything is computable if it can be performed by a Turing machine"

Are human minds actually analogous to Turing machines? No, because the truly universal TM has infinite memory and is infinitely programmable, and neither is true of humans. In addition, we can’t completely wipe and reload our brains, so we might be forever constrained by some fundamental hardcoding, something like Chomskyan innate linguistic structures, or Kantian perceptual categories. And having quantitative limitations puts a ceiling on which concepts and theories we can entertain, which is effectively a qualitative limit. Being able to rewrite our genetic code does not entirely avoid the problem: if we have a blind spot that we are not even aware of, we cannot overcome it.

Deutsch believes there is no limit to explanation. That's like the claim that there is no highest number: it's theoretically true, but in practice there is a limit to the numbers you can think about. So he doesn't just need the claim about the limits of explanation in the abstract; he needs a claim about the limitations, or lack thereof, of the human mind.

And he has one, which is the conjecture that humans are universal explainers. This is argued by analogy with Turing machines, which immediately runs into the problem of finite memory: whatever algorithm generates any possible explanation needs to fit in a human brain, and merely having the capacity is no guarantee of having the algorithm. It also runs into the problem that it's an argument by analogy, and argument by analogy isn't logically valid. Worse still, computation and explanation aren't entirely analogous: a universal computer can run any programme given to it, but a human explainer must be able to create explanations.

For Deutsch, the ability to explain is all-or-nothing: if you are an explainer, you can generate any explanation, and if you aren't, you can't generate any. Why? Surely a limited, imperfect explanation-generator is conceivable. That's an unrefuted conjecture, too.

And there's plenty of evidence that the ability to create and understand explanations lies along a spectrum. Newtons and Einsteins arise barely once a century, and the less gifted are often unable to grasp their explanations, let alone recreate them. Deutsch seems to think that non-human animals aren't universal explainers, and therefore aren't explainers at all, but, again, there is an observed spectrum of abilities: a dog isn't as smart as a human, but is a lot smarter than a worm. Or maybe the line is somewhere beneath really smart humans, like physics PhDs. But the 99% who aren't physics PhDs aren't hopeless at explanations. The all-or-nothing theory would predict that the large number of people who aren't quite as smart as physics professors can't come up with explanations at all. The non-physicists clearly can come up with explanations, but they clearly can't come up with any arbitrary explanation, since they can't understand any arbitrary explanation. If someone can't understand relativity when it is explained to them, how can they have the power to recreate it?

But just because the physics PhDs are better explainers doesn't mean they are universal explainers. Why shouldn't the universal explainers be some alien species with an average IQ of 1000?

Deutsch is in favour of explanation in the abstract but seems oddly uninterested in explaining well-attested facts about variations in cognitive ability.


"You probably know that the effects of gravity drop off as the square of the distance. The same is true of the intensity of light. Indeed, there is only one known phenomenon whose effects do not necessarily drop off with distance: knowledge. A piece of knowledge could travel without any consequence for a thousand light-years, then completely transform the civilisation that it reached. This is another reason for the cosmic significance of humans, and one interpretation of the book’s title"

No two entities have ever communicated knowledge or information as such. What is transmitted and received is some kind of symbol or signal (themselves an abstraction over some physical state-change). We call that transferring information or transferring knowledge if it succeeds. But there are many levels of interpretation involved that can go wrong.

People transmit words, data, diagrams: all things that require interpretation. The greater the inferential distance, the more difficulty in interpretation. And inferential distance is likely to increase with physical distance.


"Empiricism is the philosophical idea that we derive knowledge from our senses. There are a number of problems with this. One is that sense-data by themselves are meaningless. If you had no pre-existing ideas or expectations, you wouldn’t know how to interpret your senses. We do not read from the book of nature. The other major problem with empiricism is how to deal with false perceptions, like optical illusions"

This has similar problems to Deutsch's critique of induction. It is true that pure empiricism is not a source of explanations, but it does not follow that empiricism can play no useful role: empirical evidence can even play a role in Popperian science, as a source of refutation.

It is true that empirical data need interpretation. It follows that pure empiricism is useless, but it does not follow that empiricism has no use.

It is true that empiricism is not infallible. But that can be addressed by using probabilistic and fallibilist approaches... approaches, plural, because the Deutsch/Popper version of fallibilism is not the only one.

Indeed, there is only one known phenomenon whose effects do not necessarily drop off with distance: knowledge.

A real-valued effect will tend to diminish with distance, but a binary transition (such as in a bi-stable system) will not diminish. Knowledge spreading is just a specific case of such a binary transition (unknown -> known). An example of a physical binary transition is a detonation wave in an explosive.


The argument runs like this: suppose we say that a variable is ‘close to the edge’ when it is within 10% of its possible extreme values on either side. If there were only one variable that determined our universe, 20% of its possible values would be close to the edge. If we observed such a variable as being very close to the edge, we might suspect that something fishy was going on or that our universe was designed. But for two variables, 1 - 0.8^2 = 32% of values will be close to the edge. And in general, for n variables, 1 - 0.8^n of the values will be close to the edge.

This is simply mathematically incorrect. What would be correct is to say that there is a probability of 1 - 0.8^n that at least one of the variables is "close to the edge" in the sense described. This is not the same thing as saying the expected proportion of the variables is 1 - 0.8^n; that quantity remains constant no matter how many variables you have. (And it is this latter quantity that is relevant for the fine-tuning argument: what is surprising is not that the universe contains a single variable that looks to be optimized for our existence, but that it contains a whole host of variables so optimized.)
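A quick simulation makes the distinction concrete (my own sketch, assuming each variable is uniform on [0, 1] and "close to the edge" means within 10% of either end): the chance that *at least one* variable is near the edge grows as 1 - 0.8^n, while the *expected proportion* of near-edge variables stays at 0.2 regardless of n.

```python
# Monte Carlo check: P(at least one variable near the edge) vs
# the expected proportion of variables near the edge.
import random

def trial(n):
    """One universe with n uniform variables on [0, 1]."""
    xs = [random.random() for _ in range(n)]
    near = [x < 0.1 or x > 0.9 for x in xs]   # within 10% of an extreme
    return any(near), sum(near) / n

random.seed(0)
runs = 100_000
for n in (1, 5, 20):
    hits = props = 0.0
    for _ in range(runs):
        at_least_one, proportion = trial(n)
        hits += at_least_one
        props += proportion
    print(f"n={n:2d}  P(>=1 near edge)={hits/runs:.3f} "
          f"(theory {1 - 0.8**n:.3f})  mean proportion={props/runs:.3f}")
```

The first column climbs towards 1 as n grows, exactly as 1 - 0.8^n predicts; the mean proportion hovers around 0.2 for every n, which is the quantity the fine-tuning argument actually turns on.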

I don't know whether this is a mistake Deutsch himself made, or whether he had a better argument that you (the reviewer) simply summarized sloppily. Either way, however, it doesn't speak well to the quality of the book's content. (And, moreover, I found the summaries of most of the book's other theses rather sloppy and unconvincing as well, though at least those contained no basic mathematical errors.)