Followup to: This Failing Earth; Our society lacks good self-preservation mechanisms; Is short term planning in humans due to a short life or due to bias?
I don't mean that deciding to exterminate life is rational. But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.
Ed Regis reports on p. 216 of "Great Mambo Chicken and the Transhuman Condition" (Penguin Books, London, 1992):
Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.
Was this a bad decision? Well, consider the expected value to the people involved. Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese. The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life. The loss in being conquered would also be large. Easy decision, really.
Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it. Then our expected survival time is 100 times the sum from n=1 to infinity of n·p·(1-p)^(n-1), which works out to 100/p, about 33 million years.
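Here's a quick numerical check of that arithmetic, using nothing but the geometric model above:

```python
import numpy as np

p = 3 / 1_000_000   # per-century chance that someone's gamble ends all life
CENTURY = 100       # years per gamble

# Closed form: the number of centuries until a gamble finally fails is
# geometric with mean 1/p, so the expected survival time is 100/p years.
print(CENTURY / p)                  # 33,333,333.3 years

# Monte Carlo check of the same expectation.
rng = np.random.default_rng(0)
centuries = rng.geometric(p, size=1_000_000)
print(centuries.mean() * CENTURY)   # ~33 million years, up to sampling noise
```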
This supposition seems reasonable to me. There is a balance between offensive and defensive capability that shifts as technology develops. If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed. In the near future, biological weapons will be more able to wipe out life than we are able to defend against them. We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.
If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially. The 33 million or so years remaining to life is then in subjective time, and must be mapped into realtime. If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of about 2000 more realtime years. If we instead use Ray Kurzweil's figure of a doubling time of about 2 years, this gives life about 40 remaining realtime years. (I don't recommend Ray's figure. I'm just giving it for those who do.)
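Here's a sketch of that mapping, under the assumption that the subjective/real time ratio starts at 1 and doubles every D real years; it roughly reproduces the round figures above:

```python
import math

S = 100 / (3 / 1_000_000)   # expected survival time in subjective years (~33.3 million)

def realtime_horizon(doubling_years: float) -> float:
    """Real years T such that the accumulated subjective time,
    the integral of 2**(t/D) dt from 0 to T, equals S."""
    D = doubling_years
    return D * math.log2(1 + S * math.log(2) / D)

print(realtime_horizon(100))  # ~1,800 realtime years ("a couple thousand")
print(realtime_horizon(2))    # ~47 realtime years (a Kurzweil-style doubling time)
```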
Please understand that I am not yet another "prophet" bemoaning the foolishness of humanity. Just the opposite: I'm saying this is not something we will outgrow. If anything, becoming more rational only makes our doom more certain. For the agents who must actually make these decisions, it would be irrational not to take these risks. The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.
I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:
- We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.
- Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.
- People will stop having conflicts.
- Rational agents incorporate the benefits to others into their utility functions.
- Rational agents with long lifespans will protect the future for themselves.
- Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.
- Independent agents will cease to exist, or to be free (the Singleton scenario).
Let's look at these one by one:
We can outrun the danger.
We will colonize other planets; but we may also figure out how to make the Sun go nova on demand. We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.
One problem with this idea is that apocalypses are correlated; one may trigger another. A disease may spread to another planet. The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet. It's not clear whether spreading out and increasing in population actually makes life safer. If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.
But neither of those is my final objection. More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.
Technology will stabilize in a safe state.
Maybe technology will stabilize, and we'll run out of things to discover. If that were to happen, I would expect that conflicts would increase, because people would get bored. As I mentioned in another thread, one good explanation for the incessant and counterproductive wars of the Middle Ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored. They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.
But that's not my final rejection. The big problem is that by "safe", I mean really, really safe. We're talking about bringing existential threats to chances less than 1 in a million per century. I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.
People will stop having conflicts.
That's a nice thought. A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.
They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
But that's not my final rejection either. The bigger problem is that this isn't something that arises only in conflicts. All we need are desires. We're willing to tolerate risk to increase our utility. For instance, we're willing to take some unknown, but clearly greater than one in a million, chance of the collapse of much of civilization due to global warming. In return for this risk, we can enjoy a better lifestyle now.
Also, we haven't burned all physics textbooks along with all physicists. Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the Earth to powder, if not to crack spacetime itself and undo the entire universe. (In fact, I'd guess the chance is nearer to 1 in 10.)[1] We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods. And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, almost exactly balanced by a 1% chance of destroying the Universe.
The Wikipedia entry on Large Hadron Collider risk says, "In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole." The more authoritative "Review of the Safety of LHC Collisions" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10^31 chance of destroying the Earth.
The LHC Safety Assessment Group's figure is criminally low. Their evidence was this: "Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists." There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and therefore we can multiply this figure by the number of stars in the universe.
I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect. Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet. My own estimate of the risk is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument. (That's based on a belief that priors for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)
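Here's a sketch of that correction: once you admit some chance that the safety argument itself is flawed, the overall risk is dominated by that chance rather than by the tiny in-model bound. The numbers are only illustrative:

```python
# Overall risk when the safety argument might itself be wrong.
p_flaw = 1e-3          # chance the safety argument has a fatal flaw (illustrative)
risk_if_flawed = 1e-3  # risk of catastrophe given such a flaw (illustrative)
risk_if_sound = 1e-31  # the Safety Assessment Group's in-model bound

overall_risk = p_flaw * risk_if_flawed + (1 - p_flaw) * risk_if_sound
print(overall_risk)    # ~1e-06: the in-model bound contributes essentially nothing
```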
The Trinity test was done for the sake of winning World War II. But the LHC was turned on for... well, no practical advantage that I've heard of yet. It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit. And this is rational, since the LHC will probably improve our lives by more than one part in a million.
Rational agents incorporate the benefits to others into their utility functions.
"But," you say, "I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!"
Well... yes, you would, if you're a rational expectation maximizer. It's possible that you would take a much higher risk, if your utility is at risk of going negative; it's not possible that you would not accept a 0.99% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility. (This seems difficult, but is worth exploring.) If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.
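Here's a minimal worked example of that break-even point, taking the state after universe-destruction to have utility zero:

```python
# Break-even risk r for a gamble that multiplies lifetime utility by (1 + g)
# with probability (1 - r), and destroys everything (utility 0) with probability r.
# Accept whenever (1 - r) * (1 + g) >= 1, i.e. whenever r <= g / (1 + g).

def breakeven_risk(gain: float) -> float:
    return gain / (1 + gain)

print(breakeven_risk(0.01))   # ~0.0099: just under a 1% chance of destroying everything
```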
The US national debt should be enough to convince you that people act in their self-interest. Even the most moral people - in fact, especially the "most moral" people - do not incorporate the benefits to others, especially future others, into their utility functions. If we did that, we would engage in massive eugenics programs. But eugenics is considered the greatest immorality.
But maybe they're just not as rational as you. Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth. Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa. Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.
It doesn't matter. Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.
But that will stop you from risking atmospheric ignition to defeat the Nazis, right? Because you'll incorporate them into your utility function? Well, that is a subset of the claim "People will stop having conflicts." See above.
And even if you somehow worked around all these arguments, evolution, again, thwarts you.[2] Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.
Rational agents with long lifespans will protect the future for themselves.
The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time. But the flip side is that they also have more time to lose. If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.
Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods. They may be less willing to take an existential risk for something that will benefit them only temporarily. But benefits have a way of increasing, not decreasing, over time. The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th and 18th centuries.
But that's not my final rejection. More important is time-discounting. Agents will time-discount, probably exponentially, due to uncertainty. If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted. And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.
Time discounting is always (so far) exponential, because discount functions that don't asymptote toward zero don't make sense. I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.
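Here's a small illustration of why the discount function has to fall off toward zero: without it, the value of a constant stream of future benefits grows without bound as the horizon extends, and the present is swamped. The 0.99-per-century factor is only illustrative:

```python
def stream_value(horizon_centuries: int, discount_per_century: float = 1.0) -> float:
    """Present value of one util per century out to the given horizon."""
    return sum(discount_per_century ** t for t in range(horizon_centuries))

for horizon in (100, 10_000, 1_000_000):
    undiscounted = stream_value(horizon)       # grows linearly with the horizon
    discounted = stream_value(horizon, 0.99)   # converges to ~100 no matter the horizon
    print(horizon, undiscounted, round(discounted, 1))
```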
Could a continued exponential population explosion outweigh exponential time-discounting? Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant. (I leave the details as an exercise to the reader.)
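Here's a sketch of that exercise: if expansion is limited by lightspeed and density is bounded (that's where the Planck constant comes in), population can grow at most cubically in time, and any exponential eventually overtakes any cubic. The doubling time and density below are hypothetical:

```python
import math

DOUBLING_YEARS = 100     # hypothetical population doubling time
MAX_DENSITY = 1e30       # hypothetical cap on population per cubic light-year

def exponential_population(t: float) -> float:
    return 1e10 * 2 ** (t / DOUBLING_YEARS)

def lightspeed_bound(t: float) -> float:
    # After t years, the settled region is at most a sphere of radius t light-years.
    return MAX_DENSITY * (4 / 3) * math.pi * t ** 3

t = 1.0
while exponential_population(t) < lightspeed_bound(t):
    t *= 1.01
print(f"exponential growth hits the physical ceiling after ~{t:,.0f} years")
```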
Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting. You can't stay you forever. If you change, the future you will be less like you, and weigh less strongly in your utility function. Objections to this generally assume that it makes sense to trace your identity by following your physical body. Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea. And if you don't change, well, what's the point of living?
Evolutionary arguments may help us with self-discounting. Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function. The major variable is reproduction rate over lifespan. This applies to genes or memes. But they can't help us with time-discounting.
I think there may be a way to make this one work. I just haven't thought of it yet.
A benevolent singleton will save us all.
This case takes more analysis than I am willing to do right now. My short answer is that I place a very low expected utility on singleton scenarios. I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.
I'm not ready to place my faith in a singleton. I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.
(Please don't conclude from my arguments that you should go out and create a singleton. Creating a singleton is hard to undo. It should be deferred nearly as long as possible. Maybe we don't have 33 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)
I think that the figures I've given here are conservative. I expect existential risk to be much greater than 3/1,000,000 per century. I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose. I expect population and technology to continue to increase, and existential risk to be proportional to population times technology. Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.
Our greatest chance for survival is that there's some other possibility I haven't thought of yet. Perhaps some of you will.
[1] If you argue that the laws of physics may turn out to make this impossible, you don't understand what "probability" means.
[2] Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures; they let us make predictions farther into the future, and with greater confidence, than seems intuitively reasonable.
Here's a possible problem with my analysis:
Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."
How many cards do you draw?
I'm pretty sure that someone who believes in many worlds will keep drawing cards until they die. But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible probability. I don't think that would help much anyway.)
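Here's why, in expected-utility terms, you keep drawing, assuming death counts as utility 0 and a card with neither a star nor a skull leaves your utility unchanged (the problem doesn't say what the other 40% of cards do):

```python
# Expected utility of one more draw, relative to current lifetime utility U = 1.
# Deck: 1/2 star (utility doubles), 1/10 skull (death, utility 0),
# 4/10 neither (utility unchanged; an assumption, not stated in the problem).

def value_of_drawing(U: float = 1.0) -> float:
    return 0.5 * (2 * U) + 0.1 * 0.0 + 0.4 * U

print(value_of_drawing())   # 1.4 > 1.0, so a maximizer draws again,
                            # and the same inequality holds after every draw.
```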
So this whole post may boil down to "maximizing expected utility" not actually being the right thing to do. Also see my earlier, equally unpopular post on why expectation maximization implies average utilitarianism. If you agree that average utilitarianism seems wrong, that's another piece of evidence that maximizing expected utility is wrong.
Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.
Omega offers you the healing of all the rest of Reality; every other sentient being will be preserved at what would otherwise be death and allowed to live and grow forever, and all unbearable suffering not already in your causal past will be prevented. You alone will die.
You wouldn't take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo? I would go for it so fast that there'd be speed lines on my quarks.
Really, this whole debate is just about people being told "X utilons" and interpreting utility as having diminishing marginal utility - I don't see any reason to suppose there's more to it than that.
I'm shocked, and I hadn't thought that most people had preferences like yours - at least would not verbally express such preferences; their "real" preferences being a whole separate moral issue beyond that. I would have thought that it would be mainly psychopaths, the Rand-damaged, and a few unfortunate moral philosophers with mistaken metaethics, who would decline that offer.
I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend's funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undi... (read more)
This is not how evolution works. Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this. Also, evolution can't really thwart you. You're done evolving; you can check it off your to-do list.
It's entirely plausible tha... (read more)
Although your conclusions are very depressing, it seems I must accept them. The other commenters' reluctance to agree puzzles me.
The figure of 3/1,000,000 for the probability of the Trinity nuke destroying the world is almost certainly too low. Consider that, subjectively, the scientists should have assigned at least a 1 in 1000 probability that they'd made a mistake in their calculation of safety. Probably more like 1 in 100, considering that the technology was entirely new. In fact, the first serious mistake in a physical calculation that resulted in an actual disaster involving a nuke was Castle Bravo, which occurred probably only 50-150 detonations after Trinity. Since then, we ... (read more)
What are the existential risks for a multi-galaxy super-civilization? Or even a multi-stellar civilization expanding outward at some fraction of light speed? I don't see how life can be exterminated once it has spread that far. "liberate much of the energy in the black hole at the center of our galaxy in a giant explosion" does not make sense, since a black hole is not considered a store of energy that can be liberated.
If you are speculating about new physics that haven't been discovered yet, then "subjective-time exponential" and risk ... (read more)
If you don't believe black holes can ever be used as weapons, here's an article about a star 8000 light years away that some astronomers worry may harm life on Earth (to what extent it doesn't say).
This whole point may reflect collective confusion surrounding the term "utility."
I do not presently have a coefficient in my utility function attached to John Doe, who is a mechanic in Des Moines (I'm assuming). I know nothing about him, and whatever happens to him does not affect my experience of happiness in the slightest. I wish him well, but it would make little sense to say he is reflected in my utility function. I would agree that, ceteris paribus, the better off he is, the better, but (particularly since I won't know it), this doesn't real... (read more)
You can't decide your utility function. It's a given. You can only make decisions based on preference that, probably, can be represented in part as a utility function. Deciding to use a particular utility function (that doesn't happen to be exactly the one representing your own preference) constitutes throwing away your humanity and replacing it with whatever the new utility function says.
On "incorporating the benefits to others into their utility functions", you hint at a sharp dichotomy between Scrooges and Saints - people who act entirely in their own self-interest, and people who act in everyone's interest (because that is the nature of their own self-interest). But most humans are not at these poles - most of us act in the interest of at least several people. Mirroring (understanding) is partially a learned trait, but actually caring about other people who you mirror is emotionally "basic". By this I mean it's entir... (read more)
This assumes they don't care about their children and grandchildren after their death.
You can put monetary value on humanity, just as you can on a person's life.
Incidentally, I once met Brian Moriarty at a party John Romero threw, where I embarrassed myself in front of or offended all of my childhood heroes, one after another, ending with Steve Wozniak.
I was talking to him about trends in text adventures, and said, "One great thing is that IF authors have gotten away from the idea that every game has to be about saving the world."
He said something like, "Well, I happen to think that saving the world is not such a bad thing," and went off in a bit of a huff. And then I remembered that he was the author of Trinity, which was about the Trinity test and saving the world from nuclear holocaust. (And was a really good game, BTW.)
Re: "If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially."
Uh - that doesn't go on forever, any more than a grain pile lets a rat population grow forever. Your statement takes the idea of exponential change into an utterly ridiculous realm.
I was directed here from FIMFiction.
Because of survivorship bias (https://en.wikipedia.org/wiki/Survivorship_bias), we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).
Some people thought ... (read more)
In saying "our compression of subjective time can be exponential", do you actually mean that the compression rate may keep growing exponentially as a function of real time?
Here's another possible objection:
Much of time-discounting is due to uncertainty. Because you're more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther into the future you go.
But if you can find a way to predict future impacts that is unbiased, then you don't need to time-discount due to uncertainty! Um... do you?
No, I think you still do, because your probability distribution's variance increases the farther you look into the future. Rats.
It seems like this post exhibits a great deal of omission bias. Refusing to make rational trade-offs with existential risk doesn't make the risk go away.
Under your theory of 3/1M/Century, you'd only need to do better than a 1/3 failure rate to lower chances to 1/1M/C. A 1/3 failure rate seems rather plausible. If the defense had a 1/1M failure rate, you'd have a 3/1,000,000,000,000 chance of eradication per century.
Your argument assumes that the time-horizon of rational utility maximisers never reaches further than their next decision. If I only get one shot to increase my expected utility by 1%, and I'm rational, yes, I'll take any odds better than 99:1 in favour on an all or nothing bet. That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.
Further, the risks of not using nuclear weapons in... (read more)
Assuming rational agents with a reasonable level of altruism (by which I mean, incorporating the needs of other people and future generations into their own utility functions, to a similar degree to what we consider "decent people" to do today)...
If such a person figures that getting rid of the Nazis or the Daleks or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altrustic (I won't say "unselfish" for reasons discussed elsewhere in t... (read more)
Re: "If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed."
Unsupported hypothesis. As life spreads out in the universe, it gets harder and harder to destroy all of it - while the technology of destruction will stabilise.