"I don't think that even Buddhism allows that."
Depends on the version of Buddhism and who you ask... but yes, even the utter destruction of the mind.
Of course, 'utter destruction' is not a well-defined term. Depending on who you ask, nothing in Buddhism is ever actually destroyed. Or in the Dust hypothesis, or the Library of Babel... the existence of the mind never ends, because we've never beaten our wives in the first place.
"Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe"
Bit of a leap in logic here, no?
The leap is that the Church–Turing thesis applies to human (“sentient”) cognition. Many theists deny this.
if God exists then consciousness depends on having an immaterial soul.
I translate that into logical notation:
(God exists) -> For all X (X is conscious -> X has an immaterial soul)
I don't concede this conditional. I can imagine a universe with a personal creator, where consciousness is a material property of certain types of complex systems, but souls don't exist.
"In sober historical fact", clear minds could already see in 1919 that the absurdity of the Treaty of Versailles (with its total ignorance of economic realities, and entirely fueled by hate and revenge) was preparing the next war -- each person (in both nominally winning and nominally defeated countries) being put in such unendurable situations that "he listens to whatever instruction of hope, illusion or revenge is carried to him on the air".
This was J.M. Keynes writing in 1919, when A. Hitler was working as a police spy for the Reichswehr, infiltrating a tiny party then named the DAP (and only later renamed the NSDAP); Keynes' dire warnings had nothing specifically to do with this "irrelevant" individual, whom he had no doubt never even heard of -- there were plenty of other matches ready to set fire to a tinderbox world, after all; for example, at that time, Benito Mussolini was a much more prominent figure, a well-known and controversial journalist, and had just founded the "Fasci Italiani di Combattimento".
So your claim, that believing the European errors in 1919 made another great war extremely likely, "is an unreasonable belief",...
The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.
The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here.
Macroscopic determinism, i.e., the belief that an outcome was not sensitive to small thermal (never mind quantum) fluctuations. If I'm hungry and somebody offers me a tasty hamburger, it's macroscopically determined that I'll say yes in almost all Everett branches; if Zimbabwe starts printing more money, it's macroscopically determined that their inflation rates will rise further.
Reminds me of this: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."
But my question would be: Is the universe of cause and effect really so much less safe than the universe of God? At least in this universe, someone with an evil whim is limited by the laws of cause and effect; e.g., Hitler had to build tanks first, which gave the Allies time to prepare. In that other universe, a Supreme Being decides he's bored with us and zap, we're gone, with no rules he has to follow to achieve that outcome.
So why is relying on the goodness of God safer than relying on the inexorability of cause and effect?
Given how widespread white nationalism is in America, (i.e. it's a common phenomenon) and how intimately tied to fascism it is, I think that there's a substantial chance that the leader that would have taken Hitler's place would have shared his predilection for ethnic cleansing, even if not world domination.
"I don't think that even Buddhism allows that."
Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.
As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to solved" independent of a context for its specification, you may arrive at the realization of a view of the world flippe...
By the way, I should clarify that my total disagreement with your thesis of WW2 being single-handedly caused by A. Hitler in no way implies disagreement with your more general thesis. In general I do believe the "until comes steam-engine-time" theory -- that many macro-scale circumstances must be present to create a favorable environment for some revolutionary change; to a lesser degree, I also think that mostly, when the macro-environment is ripe, one of the many sparks and matches (that are going off all the time, but normally fizz out b...
I thought I already knew all this, but this post has made me realize that I've still, deep down, been thinking as you describe - that the universe can't be that unfair, and that the future isn't really at risk. I guess the world seems like a bit scarier of a place now, but I'm sure I'll go back to being distracted by day-to-day life in short order ;).
As for cryonics, I'm a little interested, but right now I have too many doubts about it and not enough spare money to go out and sign up immediately.
With all the sci fi brought up here, I think we are familiar with Hitler's Time Travel Exemption Act.
Ian C., that is half the philosophy of Epicurus in a nutshell: there are no gods, there is no afterlife, so the worst case scenario is not subject to the whims of petulant deities.
If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an i...
Not necessarily. If the risk decreases faster than an inverse function (i.e., if the risk of the n-th event shrinks faster than 1/n, say on the order of 1/n^2, so that the risks sum to a finite value), the overall survival probability can be strictly between 0 and 1.
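A quick numerical sketch of this point, assuming independent events; the two risk schedules below are arbitrary illustrative choices:

```python
# Sketch: product of per-event survival probabilities under two risk schedules.
# With risk 1/(2*(n+1)) the risks sum to infinity, so survival drifts to 0;
# with risk 1/(n+1)**2 they sum to a finite value, so survival converges to a
# positive limit (for this particular schedule the limit is exactly 1/2).

def survival_probability(risk, trials):
    """P(surviving `trials` independent events, where event n has risk(n))."""
    p = 1.0
    for n in range(1, trials + 1):
        p *= 1.0 - risk(n)
    return p

for trials in (100, 10_000, 1_000_000):
    slow = survival_probability(lambda n: 1.0 / (2 * (n + 1)), trials)
    fast = survival_probability(lambda n: 1.0 / (n + 1) ** 2, trials)
    print(f"{trials:>9} events: risk 1/(2(n+1)) -> {slow:.4f}, "
          f"risk 1/(n+1)^2 -> {fast:.4f}")
```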
What's the point of despair? There seems to be a given assumption in the original post that:
1) There is no protection, the universe is allowed to be horrible --> 2) let's despair.
But number 2 doesn't change number 1 one bit. This is not a clever argument to disprove number 1; I'm just saying despair is pointless if it changes nothing. It's like how babies cry automatically when something isn't the way they like: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the attention of adults to make it better. We are the only adults; that's it. I would rather reason along the lines of:
1) There is no protection, the universe is allowed to be horrible --> 2) what can I do to make it better?
Agreed with everything else except the part where this is really sad news that's supposed to make us unhappy.
I don't understand the faith in cryonics.
In a Universe beyond the reach of God, who is to say that the first civilization technologically advanced enough to revive you will not be a "death gives meaning to life" theocracy which has a policy of reviving those who chose to attempt to escape death in order to submit them and their defrosted family members to 1000 years of unimaginable torture followed by execution?
Sure, there are many reasons to believe such a development is improbable. But you are still rolling those dice in a Universe beyond God's reach, are you not?
Of course you are. It's still a probability game. But Eliezer's contention is that the probabilities for cryonics look good. It's worth rolling the dice.
Yes very very bad things can happen for little reason. But of course we still want positive arguments to convince us to assign large probabilities to scenarios about which you want us to worry.
Where is this noirish Eliezer when he's writing about the existence of free will and non-relativist moral truths?
Don't get bored with the small shit. Cancers, heart disease, stroke, safety engineering, suicidal depression, neurodegenerations, improved cryonic tech. In the next few decades I'm probably going to see most of you die from that shit (and that's if I'm lucky enough to persist as an observer), when you could've done a lot more to prevent it, if you didn't get bored so easily of dealing with the basics.
Kip, the colors of rationality are crystal, mirror, and glass.
Robin, fair enough; but conversely no amount of argument will convince someone in zettai daijobu da yo mode.
For the benefit of those who haven't been following along with Overcoming Bias, I should note that I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above) - I'm not just complaining here.
"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an infinite amount of time: everything possible. If all your backup plans can fail at once, even at P=1/(3^^^3), that number will come of eventually with infinite trials." Zubon, this seems to assume that the probabilities in different periods are independent. It could be that...
Without Hitler it's likely Ludendorff would have been in charge, and things would have been even worse. So perhaps we should be grateful for Hitler!
I gather there are some Orthodox Jews involved in Holocaust denial who went to Iran for that, but this post gets me thinking that there should be more of them if they really believe in a benevolent and omnipotent God that won't allow sufficiently horrible things to happen.
How widespread is white nationalism in America? I would think it's one of the least popular things around, although perhaps I'm taking the Onion too seriously.
"The standard rebuttal is that evil is Man's own fault,"
There is no evil. There is neutrality. The universe isn't man's fault; it isn't anyone's fault.
I'm not at all saddened by these facts. My emotional state is unaltered. It's because I take them neutrally.
I've experienced severe pain enough to know that A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson. B) Pain is not such a big deal. It's just an avoid-this-at-all-costs signal. Sure, I'm in agony, sure, I'd hate to remain in a situation where that signal doesn't go away, but it still is just a signal.
Perhaps as you look at some spot in the sky, they've already - neutrality allowing - tamed neutrality there; made it Friendly.
We've got a project to finish.
More parents might let their toddler get hit by a car if they could fix the toddler afterwards.
There are an awful lot of types of Buddhism. Some allow mind annihilation, and even claim that it should be our goal. Some strains of Epicureanism hold that mind annihilation is a) neutral, and b) better than what all the religions believed in. Some ancient religions seemed to believe in the same awful universal fate as quantum immortality believers do, e.g. eternal degeneration, progressively advanced Alzheimer's forever more or less. Adam Smith suggests that...
Good post, but how to deal with this information so that it is not so burdensome: Conway himself, upon creating The Game of Life, didn't believe that the cellular automaton could 'live' indefinitely, but was proven wrong shortly after his game's creation by the discovery of the glider gun. We cannot assume that the cards were dealt perfectly and the universe or our existence is infinite, but we can hope that the pattern we have put down will continue to stand the test of time. Belief that we are impervious to extinction or that the universe will not ultimat...
I don't understand why the end of the universe bugs people so much. I'll just be happy to make it to next decade, thanks very much. When my IQ rises a few thousand points, I'll consider things on a longer timescale.
What I don't understand is that we live on a planet where we don't have all the people with significant loose change
A) signing up for cryonics B) super-saturating the coffers of life-extensionists, extinction-risk-reducers, and AGI developers.
Instead we currently live on a planet, where their combined (probably) trillions of currency units are doing nothing but bloating as 1s and 0s on hard drives.
Can someone explain why?
"What can a twelfth-century peasant do to save themselves from annihilation? Nothing."
She did something. She passed on a religious meme whose descendants have inspired me, in turn, to pass on the idea that we should engineer a world that can somehow reach backward to save her from annihilation. That may not prove possible, but some possibilities depend on us for their realization.
A Jewish prophet once wrote something like this: "Behold, I will send you Elijah the prophet before the coming of the great and dreadful day of the Lord: And he sha...
Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you're definitely in the wrong place indeed.
And I do quite fancy well-written, well-researched "alternate history" fiction, such as Turtledove's, so I'd love to read a novel about what happens in 1812 to the fledgling USA if the British are free to entirely concentrate on that war, not distracted by Napoleon's last hurrahs in their backyard, because Napoleon was never around...
Nitpick:
The "War of 1812" was basically an offshoot of the larger Napoleonic Wars; Britain and France were both interfering with the shipping of "neutral" nations, such as the United States, in or...
I should note that I actually intend to fix the universe [...]
I was not aware that the universe was broken. If so, can we get a replacement instead? ;-)
It is a strange thing. I often feel the impulse to not believe that something would really be possible - usually when talking about existential risks - and I have to make a conscious effort to suppress that feeling, to remind myself that anything the laws of physics allow is possible. (And even then, I often don't succeed - or don't have the courage to entirely allow myself to succeed.)
A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson.
That depends on what you're trying to use it for. Torture is very good at getting people to do whatever they believe will stop the torture. For example, it's a good way to get people to confess to whatever you want them to confess to. Torture is a rather poor way to get people to tell you the truth when they have motive to lie and verification is difficult; they might as well just keep saying things at random until they say something that ends the torture.
Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn't necessarily explain why there isn't more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal-life quests as a millennia-old cliché of laughable selfishness, not in tune with leaving a respectable legacy.
...Can someone explain why?
Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?
That's probably not the explanation, since there are many millions of atheists who heard about cryonics and/or extinction risks. I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short term problems, and the Silliness Factor.
I can only speak for myself on this, but wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead. (Given the choice, I would rather not have existed at all. However, although mine was not a life worth creating, my continued existence will do far less harm than my abrupt death.)
There's a corollary mystery category which most of you fall into: why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds? Where's the fight against a global ban on reproductive human cloning? Where's the fight to increase legal organ markets? Where's the defense of China's (and other illiberal nations') rights to use prisoners (including political prisoners) for medical experimentation? Until you square away your own repugnancy-bias-based inaction, criticisms of that of...
To show that hellish scenarios are worth ignoring, you have to show not only that they're improbable, but also that they're improbable enough to overcome the factor (utility of oblivionish scenario - utility of hellish scenario)/(utility of heavenish scenario - utility of oblivionish scenario), which as far as I can tell could be anywhere between tiny and huge.
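Spelling that factor out (a rough formalization, using hypothetical shorthand $U_{\text{heaven}}$, $U_{\text{oblivion}}$, $U_{\text{hell}}$ for the utilities of the three outcomes, and assuming the alternative is plain oblivion): ignoring the hellish scenario in an expected-utility calculation is justified only when

\[
P(\text{hell})\,\bigl(U_{\text{oblivion}} - U_{\text{hell}}\bigr) \;\ll\; P(\text{heaven})\,\bigl(U_{\text{heaven}} - U_{\text{oblivion}}\bigr),
\quad\text{i.e.}\quad
\frac{P(\text{hell})}{P(\text{heaven})} \;\ll\; \frac{U_{\text{heaven}} - U_{\text{oblivion}}}{U_{\text{oblivion}} - U_{\text{hell}}}.
\]

The right-hand ratio is just the factor above inverted, which is why showing mere improbability isn't enough.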
As for global totalitarian dictatorships, I doubt they'd last for more than millions of years without something happening to them.
HA,
"why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds?" Here's a MSM citation of Gene Expression today: http://www.theglobeandmail.com/servlet/story/LAC.20081004.WORDS04//TPStory/Science
Steve Sailer is also widely read among conservative (and some other) elites, and there's a whole network of anonymous bloggers associated with him.
"Where's the fight against a global ban on reproductive human cloning?" Such bans have been fought, primarily throu...
I can only speak for myself on this, but wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead.
I would probably sign up for cryonics if it were free, with a "do not revive" sticker and detailed data about me, so that future brain studiers would have another data point when trying to figure out how it all works.
I don't wish that I hadn't been born, but I figure I have a part to play a purpose that no one else seems to be doing. Once that has been done, then unless I see something that needs doing, is important, and is sufficiently left-field that no one else is doing it, I'll just potter along doing random things until I die.
"I figure I have a part to play a purpose that no one else seems to be doing"
How do you figure that? Aren't you a materialist? Or do you just mean that you might find a niche to fill that would be satisfying and perhaps meaningful to someone? I'm having trouble finding a non-teleological interpretation of your comment.
"If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods."
While I of course agree with the general sentiment of the post, I don't think this argument works. There is a relevant quote by John McCarthy:
"In the 195...
Doug, Will: There is no fundamental difference between being revived after dying, waking up after going to sleep, or receiving neurotransmitter in a synapse after it was released. There is nothing special about 10^9 seconds as opposed to 10^4 seconds or 10^-4 seconds. Unless, of course, these times figure into your morality, but these are considerations far out of scope of the ancestral environments humans evolved in. This is a case where unnatural category meets unnatural circumstances, so figuring out a correct answer is going to be difficult, and relying on intuitively reinforced judgment would be reckless.
"So invoking them do not give us any more information."
I do think we get a little: if such constraints exist, they are a property of the patterns themselves, and not a property of the low-level substrate on which they are implemented. If such a thing were true in this world, it would be a property of people and societies, not a metaphysical property. That rules out a lot of religion and magical thinking, and could be a useful heuristic.
You can't 'fix the universe'. You can at most change the properties of small parts of reality -- and that can only be accomplished by accepting and acting in accordance to the nature of reality.
If you don't like the nature of reality, you'd better try to change what you like.
"What probability do you guys assign to the god hypothesis being true?"
Incoherent 'hypotheses' cannot be assigned a probability; they are, so to speak, "not even wrong".
I don't want to sign up for cryonics because I'm afraid I will be revived brain-damaged. But maybe others are worried they will have the social status of a freak in that future society.
Great post and discussion. Go Team Rational!
Eliezer, I think there's a slight inconsistency in your message. On the one hand, there are the posts like this, which can basically be summed up as: "Get off your asses, slackers, and go fix the world." This is a message worth repeating many times and in many different ways.
On the other hand are the "Chosen One" posts. These posts talk about the big gaps in human capabilities - the idea being that some people just have an indefinable "sparkliness" that gives them the power to do inc...
"So, what I'd like to see is a discussion of what the rank-and-file members of Team Rational should be doing to help (and I hope that involves more than donating lots of money to SIAI)." How 'rank-and-file' are we talking here? With what skillset, interests, and level of motivation?
I have an analogy: "justice is like cake, it's permitted to exist but someone has to make it".
Can you be happier sheltering in ignorance? I'm not convinced. I think that's a strategy that only works while you're lucky.
It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share.
I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.
"The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share."
There's always Amazon's Mechanical Turk (https://www.mturk.com/mturk/welcome). It's an inefficient use of people's time, but it's better than just telling people to go away. If people are reluctant to donate money, you can ask for donations of books- books are actually a fairly liquid asset (http://www.cash4books.net/).
Eliezer: Does the law allow just setting them to productive but entirely tangential work, and pocketing the profit for SIAI?
@Hidden: just a "typical" OB reader, for example. I imagine there are lots of readers who read posts like this and say to themselves "Yeah! There's no God! If we want to be saved, we have to save ourselves! But... how...?" Then they wake up the next day and go to their boring corporate programming jobs.
@pdf23ds: This feels like tunnel vision. Surely the problem SIAI is working on isn't the ONLY problem worth solving.
@Eliezer: I recognize that it's hard to use volunteers. But members of Team Rational are not herd thinkers. They probabl...
The obvious example of a horror so great that God cannot tolerate it, is death - true death, mind-annihilation. I don't think that even Buddhism allows that.
This is sort of a surprising thing to hear from someone with a Jewish religious background. Jews spend very little attention and energy on the afterlife. (And your picture of Buddhism is simplistic at best, but other people have already dealt with that). I've heard the interesting theory that this stems from a reaction against their Egyptian captors, who were of course obsessed with death and the ...
"but on the other hand you're essentially saying that if a person is not a Chosen One, there's not much he can really contribute."
Do you think there aren't at least a few Neos whom Eliezer, and transhumanism in general, hasn't reached and influenced? I'm sure there are many, though I put the upper limit on the number of people capable of doing anything worthwhile below 1M (whether they're doing anything is another matter). Perhaps the figure is much lower. But the "luminaries", boy, they are rare.
Millions of people are capable of hoovering money well in excess of their personal need. Projects aiming for post-humanity only need to target those people to secure unlimited funding.
"what makes you so damn important that you need to live forever? Get over yourself. After you die, there will be others taking over your work, assuming it was worth doing. Leave some biological and intellectual offspring and shuffle off this mortal coil and give a new generation a chance"
I vehemently disagree. What makes me so damn important, huh? What makes you so damn unimportant that you're not even giving it a try? The answer to both of these: You, yourself; you make yourself damn important or you don't. Importance and significance are self-made. No one can give them to you. You must earn them.
There are damn important people. Unfortunately most of them were. Think of the joy if you could revive the best minds who've ever walked the earth. If you aren't one of them, try to become one.
Mtraven: "I truly have trouble understanding why people here think death is so terrible [...] [S]ince we are all hard-core materialists here, let me remind you that the flow of time is an illusion, spacetime is eternal [...]"
I actually think this one goes the other way. You choose to live right now, rather than killing yourself. Why not consistently affirm that choice across your entire stretch of spacetime?
"[W]hat makes you so damn important that you need to live forever?"
Important to whom?
Even if you're only capable of becoming an average, main sequence star, and not a quasistellar object outshining billions of others, what you must do is to become that star and not remain unlit. Oftentimes those who appear to shine brightly do so only because there's relative darkness around.
What if Eliezers weren't so damn rare; what if there were 100,000 x "luminaries"; which Eliezer's blog would you read?
"Important to whom?"
Important to the development of the universe. It's an open-ended project where we, its sentient part, decide what the rewards are, we decide what's important. I've come to the conclusion that optimizing, understanding, and controlling that which is (existence) asymptotically perfectly, is the most obvious goal. Until we have that figured out, we need to stick around.
"What if Eliezers weren't so damn rare"
The weird obsequiousness towards Eliezer makes yet another appearance on OB.
Oh, and while I'm stirring up the pot, let me just say that this statement made me laugh: "But members of Team Rational are not herd thinkers." Dude. Self-undermining much?
Consequentialist: "I've come to the conclusion that optimizing, understanding, and controlling that which is (existence) asymptotically perfectly, is the most obvious goal."
You haven't been talking to Roko or Richard Hollerith lately, have you?
"The weird obsequiousness towards Eliezer makes yet another appearance on OB."
Quite the contrary. I'd prefer it if Eliezers were a dime a dozen. It's the relative darkness around that keeps him in the spotlight. I suspect there's nothing special - in the von Neumann sense - about this chap, just that I haven't found anyone like him so far. Care to point out some others like him?
Eliezer, if that last comment was in response to mine it is a disappointingly obtuse misinterpretation which doesn't engage with any of the points I made. "Life" is worth something; that doesn't mean that striving for the infinite extension of individual lives should be a priority.
I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.
I have a different type of fun helping people perform a somewhat meaningful* task than I do when I am just hanging out, puzzle solving, doing adventure sports, or going on holiday. I have a little nagging voice asking, "What was the point of that?", which needs to be placated every so often, else the other types of fun los...
Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe where the evaluation rules don't enforce fairness. Or, slightly more plausible, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done. Neither scenario sounds very plausible, of course.
The rule of thumb is: if you can imagine it, you can simulate it (because your brain is a simulator). The simulation may not be easy, but at least it's possible.
You name specific excuses for why life in the future will be bad for you. It sounds like you see the future as a big abandoned factory, where you are a shadow, and the strange mechanisms do their spooky dance. Think instead of what changes could make the future right specifically for you, with a tremendous amount of effort applied to this goal. You are just a human, so the attention your comfort can get starts far above the order of the whole of humanity thinking about every tiny gesture to make you a little bit more comfortable for millions of years, and thinking a...
"The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.
The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here."
I disagree. Hitler did not depart very far from the general bel...
[sorry for ambiguity: thinking for millions of years, not making comfortable for millions of years]
"It sounds like you see the future as a big abandoned factory, where you are a shadow, and the strange mechanisms do their spooky dance."
I see the future as full of adults, to whom I am a useless child. Or, if Eliezer gets his way, one adult to whom I am an embryo. I can't even help with the equivalent of washing up.
"Think instead of what changes could make the future right specifically for you."
I'd like a future where people were on a level with me, so I could be of some meaningful use.
However, a future without massive disparities of power and knowledge between myself and the inhabitants would not be able to revive me from cryo sleep.
So you don't think you could catch up? If you had been frozen somewhere between -10000 and -100 years and revived now, don't you think you could start learning what the heck it is people are doing and understand nowadays? Besides, a lot of the pre-freeze life experience would be fully applicable to the present. Everyone starts learning from the point of birth. You'd have headway compared to those who just start out from nothing.
There are things we can meaningfully contribute to even in a Sysop universe, filled with Minds. We, after all, are minds, too, which h...
This is a big do-it-yourself project. Don't complain about there not being enough opportunities to do meaningful things. If you don't find anything meaningful to do, that's your failure, not the failure of the universe. Searching for meaningful problems to solve is part of the project.
Correction: headway - I meant to say headstart.
Giant cheesecake fallacy. Even if the future could do everything you want to do, that doesn't mean it would do so, especially if that would be bad for you. If the future decides to let you work on a problem, even though it could solve it without you, you can't appeal to the uselessness of your action: if the future refuses to perform it, only you can make a difference. You can grow to be able to vastly expand the number of things you will be capable of doing; this source never dwindles. If someone or something else solved a problem, it doesn't necessarily spoil the fun for eve...
A "head start" in the wrong direction isn't much help.
Imagine a priest in the temple of Zeus, back in Ancient Greece. Really ancient. The time of Homer, not Archimedes. He makes how best to serve the gods the guiding principle of his life. Now, imagine that he is resurrected in the world of today. What do you think would happen to him? He doesn't speak any modern language. He doesn't know how to use a toilet. He'd freak out at the sight of a television. Nobody worships the gods any more. Our world would seem not only strange, but blasphemous and ...
Doug: From almost every perspective I could think of, it would be better to invest resources in raising a newborn than to recreate and rehabilitate a random individual from our barbaric past.
No, for him it won't be better. The altruistic aspect of humane morality will help, even if it's more energy-efficient to incinerate you. For that matter, why raise a newborn child instead of making a paperclip?
In the interest of helping folks here to "overcome bias", I should add just how creepy it is to outside observers to see the unswervingly devoted members of "Team Rational" post four or five comments to each Eliezer post that consist of little more than homilies to his pronouncements, scattered with hyperlinks to his previous scriptural utterances. Some of the more level-headed here like HA have commented on this already. Frankly it reeks of cultism and dogma, the aromas of Ayn Rand, Scientology and Est are beginning to waft from this blog. I think some of you want to live forever so you can grovel and worship Eli for all eternity. . .
"I'd like a future where people were on a level with me, so I could be of some meaningful use. However, a future without massive disparities of power and knowledge between myself and the inhabitants would not be able to revive me from cryo sleep."
I already guessed that might be the wish of many people. That's one reason why I would like to acquire the knowledge to deliberately create a single not-person, a Very Powerful Optimization Process. What does it take to not be a person? That is one of those moral questions that runs into empirical confusions. But if I could create a VPOP that did not have subjective experience (or the confusion we name subjective experience), and did not have any pleasure or pain, or valuation of itself, then I think it might be possible to have around a superintelligence that did not, just by its presence, supersede us as an adult; but was nonetheless capable of guarding the maturation of humans into adults, and, a rather lesser problem, capable of reviving cryonics patients.
If there is anything in there that seems like it should be impossible to understand, then remember that mysteries exist in the map, not in the territory.
The only thing more diff...
A.R.: The standard rebuttal is that evil is Man's own fault, for abusing free will.
That only excuses moral evil, not natural evil.
I was not aware that the universe was broken. If so, can we get a replacement instead? ;-)
Britain is broken, but Cameron's on that case.
An emergency measure should do as little as possible, and everything necessary;
The VPOP will abolish, "Good bye," no?
It will obsolete or profoundly alter the nature of emergency surgery doctors, cancer researchers, fund raisers for cancer research, security services, emergency relief workers, existential risk researchers etc...
Every person on the planet who is trying to act somewhat like an adult will find they are no longer needed to do what is necessary. It doesn't matter that they are obsoleted by a process rather than a person, they are sti...
"Frankly it reeks of cultism and dogma,"
Oh, I wouldn't worry about that too much; there's a cunning project underway to enbias Eliezer with delusions-of-grandeur bias, smarter-than-thou bias, and whatnot.
Anything to harden our master. :D
"Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you're definitely in the wrong place indeed."
gwern: The implication is certainly there and it's one I am sympathetic with, but I'd say it's far from proven. The leap in logic there is one that will keep the members of the choir nodding along but is not going to win over any converts. A weak argument is a weak argument, whether or not you agree with the conclusion it reaches -- it's better for the cause if the arguments are held to higher standards.
Zubon,
"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0."
You're only correct if the probability is constant with respect to time. Consider, however, that some uncertain events have a non-zero probability even if infinite time passes. For example, random walks in three dimensions (or more) are not guaranteed to meet their origin again, even over infinite time:
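For illustration, a small simulation in the same spirit (the step horizon and number of walks below are arbitrary choices; the roughly 0.34 figure for three dimensions is Pólya's classical result):

```python
# Estimate the probability that a simple random walk on the integer lattice
# returns to its starting point within a fixed horizon. In 1D (and 2D) the
# eventual return probability is 1; in 3D it is roughly 0.34 even with
# unlimited time, so some walks simply never come back.
import random

def returns_to_origin(dim, max_steps):
    pos = [0] * dim
    for _ in range(max_steps):
        axis = random.randrange(dim)          # pick an axis...
        pos[axis] += random.choice((-1, 1))   # ...and step one unit along it
        if all(x == 0 for x in pos):
            return True
    return False

def estimate_return_probability(dim, max_steps=2000, walks=1000):
    hits = sum(returns_to_origin(dim, max_steps) for _ in range(walks))
    return hits / walks

for dim in (1, 2, 3):
    print(f"{dim}D: estimated return probability within 2000 steps: "
          f"{estimate_return_probability(dim):.2f}")
```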
gwern: The implication is certainly there and it's one I am sympathetic with, but I'd say it's far from proven.
1) Consciousness exists. 2) There are no known examples of 'infinite' mathematics in the universe. 3) It is therefore more reasonable to say that consciousness can be constructed with non-infinite mathematics than to postulate that it can't.
Disagree? Give us an example of a phenomenon that cannot be represented by a Turing Machine, and we'll talk.
Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics
Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)
Today's post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading. (Unless they have something to protect, including their own life.)
So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.
Some would account this a virtue (zettai daijobu da yo), and others would say that it's a thing necessary for mental health.
But we don't live in that world. We live in the world beyond the reach of God.
It's been a long, long time since I believed in God. Growing up in an Orthodox Jewish family, I can recall the last remembered time I asked God for something, though I don't remember how old I was. I was putting in some request on behalf of the next-door-neighboring boy, I forget what exactly—something along the lines of, "I hope things turn out all right for him," or maybe "I hope he becomes Jewish."
I remember what it was like to have some higher authority to appeal to, to take care of things I couldn't handle myself. I didn't think of it as "warm", because I had no alternative to compare it to. I just took it for granted.
Still I recall, though only from distant childhood, what it's like to live in the conceptually impossible possible world where God exists. Really exists, in the way that children and rationalists take all their beliefs at face value.
In the world where God exists, does God intervene to optimize everything? Regardless of what rabbis assert about the fundamental nature of reality, the take-it-seriously operational answer to this question is obviously "No". You can't ask God to bring you a lemonade from the refrigerator instead of getting one yourself. When I believed in God after the serious fashion of a child, so very long ago, I didn't believe that.
Postulating that particular divine inaction doesn't provoke a full-blown theological crisis. If you said to me, "I have constructed a benevolent superintelligent nanotech-user", and I said "Give me a banana," and no banana appeared, this would not yet disprove your statement. Human parents don't always do everything their children ask. There are some decent fun-theoretic arguments—I even believe them myself—against the idea that the best kind of help you can offer someone, is to always immediately give them everything they want. I don't think that eudaimonia is formulating goals and having them instantly fulfilled; I don't want to become a simple wanting-thing that never has to plan or act or think.
So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.
But clearly, there exists some threshold of horror awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child.
The God who does not intervene at all, no matter how bad things get—that's an obvious attempt to avoid falsification, to protect a belief-in-belief. Sufficiently young children don't have the deep-down knowledge that God doesn't really exist. They really expect to see a dragon in their garage. They have no reason to imagine a loving God who never acts. Where exactly is the boundary of sufficient awfulness? Even a child can imagine arguing over the precise threshold. But of course God will draw the line somewhere. Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.
The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.
What if you build your own simulated universe? The classic example of a simulated universe is Conway's Game of Life. I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law". Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, albeit it might be rather fragile and awkward. Other cellular automata would make it simpler.
Could you, by creating a simulated universe, escape the reach of God? Could you simulate a Game of Life containing sentient entities, and torture the beings therein? But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair.
But suppose that instead you ask the question:
Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?
Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities. Even as a very young child, I don't remember believing that. (And why would you need to believe it, if God can modify anything that actually exists?)
What does Life look like, in this imaginary world where every step follows only from its immediate predecessor? Where things only ever happen, or don't happen, because of the cellular automaton rules? Where the initial conditions and rules don't describe any God that checks over each state? What does it look like, the world beyond the reach of God?
That world wouldn't be fair. If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place, and complex life might or might not evolve, and that life might or might not become sentient, with no God to guide the evolution. That world might evolve the equivalent of conscious cows, or conscious dolphins, that lacked hands to improve their condition; maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.
If in a vast plethora of worlds, something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well, under the cellular automaton rules.
If the people of that world are happy, or unhappy, the causes of their happiness or unhappiness may have nothing to do with good or bad choices they made. Nothing to do with free will or lessons learned. In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who prevents it? God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart. But in the mathematical answer to the question What if? there is no God in the axioms. So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question. There is nothing, absolutely nothing, to prevent it.
And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps? They will call out for help, perhaps imagining a God. And if you really wrote that cellular automaton, God would intervene in your program, of course. But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system. Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—then the victims will be saved only if the right cells happen to be 0 or 1. And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. So the victims die, screaming, and no one helps them; that is the answer to the what-if question.
Could the victims be completely innocent? Why not, in the what-if world? If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods.
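For concreteness, here is a minimal sketch of one Life update step in Python; the grid size and the wrap-around edges in the example are arbitrary illustrative choices, not anything the rules above specify.

```python
# One step of Conway's Game of Life under the standard (B3/S23) reading of the
# rule above: a cell is alive next step if it has exactly three living
# neighbors, or if it is currently alive and has exactly two; every other cell
# is dead.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if live_neighbors(r, c) == 3
             or (grid[r][c] == 1 and live_neighbors(r, c) == 2) else 0
             for c in range(cols)]
            for r in range(rows)]

# A glider on a 6x6 toroidal grid, advanced one step.
glider = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    glider[r][c] = 1
print(life_step(glider))
```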
Is this world starting to sound familiar?
Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?
For so many lives and so much loss to turn on a single event, seems disproportionate. The Divine Plan ought to make more sense than that. You can believe in a Divine Plan without believing in God—Karl Marx surely did. You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum. It ought not to be allowed. It's too disproportionate. Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.
But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile". There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.
The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe. Many who are atheists, still think as if certain things are not allowed. They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of, for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.
But why not? What prohibits it?
In the God-universe, God prohibits it. To recognize this is to recognize that we don't live in that universe. We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else. Whatever physics says will happen, will happen. Absolutely anything, good or bad, will happen. And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.
Reading William Shirer's The Rise and Fall of the Third Reich, listening to him describe the disbelief that he and others felt upon discovering the full scope of Nazi atrocities, I thought of what a strange thing it was, to read all that, and know, already, that there wasn't a single protection against it. To just read through the whole book and accept it; horrified, but not at all disbelieving, because I'd already understood what kind of world I lived in.
Once upon a time, I believed that the extinction of humanity was not allowed. And others who call themselves rationalists, may yet have things they trust. They might be called "positive-sum games", or "democracy", or "technology", but they are sacred. The mark of this sacredness is that the trustworthy thing can't lead to anything really bad; or they can't be permanently defaced, at least not without a compensatory silver lining. In that sense they can be trusted, even if a few bad things happen here and there.
The unfolding history of Earth can't ever turn from its positive-sum trend to a negative-sum trend; that is not allowed. Democracies—modern liberal democracies, anyway—won't ever legalize torture. Technology has done so much good up until now, that there can't possibly be a Black Swan technology that breaks the trend and does more harm than all the good up until this point.
There are all sorts of clever arguments why such things can't possibly happen. But the source of these arguments is a much deeper belief that such things are not allowed. Yet who prohibits? Who prevents it from happening? If you can't visualize at least one lawful universe where physics say that such dreadful things happen—and so they do happen, there being nowhere to appeal the verdict—then you aren't yet ready to argue probabilities.
Could it really be that sentient beings have died absolutely for thousands or millions of years, with no soul and no afterlife—and not as part of any grand plan of Nature—not to teach any great lesson about the meaningfulness or meaninglessness of life—not even to teach any profound lesson about what is impossible—so that a trick as simple and stupid-sounding as vitrifying people in liquid nitrogen can save them from total annihilation—and a 10-second rejection of the silly idea can destroy someone's soul? Can it be that a computer programmer who signs a few papers and buys a life-insurance policy continues into the far future, while Einstein rots in a grave? We can be sure of one thing: God wouldn't allow it. Anything that ridiculous and disproportionate would be ruled out. It would make a mockery of the Divine Plan—a mockery of the strong reasons why things must be the way they are.
You can have secular rationalizations for things being not allowed. So it helps to imagine that there is a God, benevolent as you understand goodness—a God who enforces throughout Reality a minimum of fairness and justice—whose plans make sense and depend proportionally on people's choices—who will never permit absolute horror—who does not always intervene, but who at least prohibits universes wrenched completely off their track... to imagine all this, but also imagine that you, yourself, live in a what-if world of pure mathematics—a world beyond the reach of God, an utterly unprotected world where anything at all can happen.
If there's any reader still reading this, who thinks that being happy counts for more than anything in life, then maybe they shouldn't spend much time pondering the unprotectedness of their existence. Maybe think of it just long enough to sign up themselves and their family for cryonics, and/or write a check to an existential-risk-mitigation agency now and then. And wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step... but aside from that, if you want to be happy, meditating on the fragility of life isn't going to help.
But this post was written for those who have something to protect.
What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature's little challenges aren't always fair. When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That's how it is for people, and it isn't any different for planets. Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against: Absolute, utter, exceptionless neutrality.
Knowing this won't always save you. It wouldn't save a twelfth-century peasant, even if they knew. If you think that a rationalist who fully understands the mess they're in, must surely be able to find a way out—then you trust rationality, enough said.
Some commenter is bound to castigate me for putting too dark a tone on all this, and in response they will list out all the reasons why it's lovely to live in a neutral universe. Life is allowed to be a little dark, after all; but not darker than a certain point, unless there's a silver lining.
Still, because I don't want to create needless despair, I will say a few hopeful words at this point:
If humanity's future unfolds in the right way, we might be able to make our future light cone fair(er). We can't modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe. There's a lot of stuff out there that we can't touch—but it may help to consider everything that isn't in our future light cone, as being part of the "generalized past". As if it had all already happened. There's at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about.
Someday, maybe, immature minds will reliably be sheltered. Even if children go through the equivalent of not getting a lollipop, or even burning a finger, they won't ever be run over by cars.
And the adults wouldn't be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh, would be only another problem to be solved.
The problem is that building an adult is itself an adult challenge. That's what I finally realized, years ago.
If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding, the world where challenges are not calibrated to your skills.
Not every child needs to stare Nature in the eyes. Buckling a seatbelt, or writing a check, is not that complicated or deadly. I don't say that every rationalist should meditate on neutrality. I don't say that every rationalist should think all these unpleasant thoughts. But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.
What does a child need to do—what rules should they follow, how should they behave—to solve an adult problem?