The Gift I Give Tomorrow

by Raemon · 11th Jan 2012



 

This is the final post in my Ritual Mini-Sequence. Previous posts include the Introduction, a discussion on the Value (and Danger) of Ritual, and How to Design Ritual Ceremonies that reflect your values.

 

I wrote this as a concluding essay in the Solstice ritual book. It was intended to be at least comprehensible to people who weren’t already familiar with our memes, and to communicate why I thought this was important. It builds upon themes from the ritual book, and in particular, the readings of Beyond the Reach of God and The Gift We Give to Tomorrow. Working on this essay was transformative to me - it allowed me to finally bypass my scope insensitivity and other biases, so that I could evaluate organizations like the Singularity Institute with fairness. I haven’t yet decided what to do with my charitable dollars - it’s a complex problem. But I’ve overcome my emotional resistance to the idea of fighting X-Risk.

 

I don’t know if that was due to the words themselves, or to the process I had to go through to write them, but I hope others may benefit from this.

 


 

I thought ‘The Gift We Give to Tomorrow’ was incredibly beautiful when I first read it. I actually cried. I wanted to share it with friends and family, except that work ONLY has meaning in the context of the Sequences. Practically every line is a hyperlink to an important, earlier point, and without many hours of previous reading, it just won’t have the impact. But to me, it felt like the perfect endcap to everything the Sequences covered, taking all of the facts and ideas and weaving them into a coherent, poetic narrative that left me feeling satisfied with my place in the world.


Except that... I wasn’t sure that it actually said anything.

And when I showed it to a few other people, they reacted similarly: “This is pretty, but what does it mean?” I knew I wanted to include it in our Solstice celebration, if for no other reason than “it was pretty.” Particularly pretty in a particular way that seemed right for the occasion. But I’m wary about things that seem beautiful and moving without really understanding why, especially when those things become part of your worldview, perhaps subtly impacting your decisions.

 

In order to use The Gift as part of the Solstice, it needed to be pared down. It’s not designed to be read out loud. This meant I needed to study it in detail, figuring out what made it beautiful so I could be sure to capture that part while pruning away the words that were difficult to pronounce or read in dim candlelight.


Shortly afterwards, I began the same work with “Beyond the Reach of God.”


Unlike The Gift, Beyond the Reach of God is important and powerful for very obvious reasons. If you have something you value more than your own happiness, if you care about your children’s children, then you need to understand that there is no God. Or at the very least, for whatever reason, for whatever mysterious end that you don’t understand, God doesn’t intervene.

 

If your loved ones are being tortured, or are dying of illness, or getting run over by a car, God will not save them. The actions that matter are the ones that impact the physical world, the world of interlinked causes that we can perceive. The beliefs that ultimately matter, when you care about more than your own subjective happiness, are the beliefs that allow you to make accurate predictions about the future. These beliefs allow you to establish the right social policies to protect your children from harm. They allow you to find the right medicine and treatment to keep your aging parents alive and healthy, both mentally and physically, for as long as possible. To keep the people you love part of your life. And to keep yourself part of theirs.


Unlike some in this community, I don’t entirely dismiss unprovable, comforting beliefs, so long as you have the right compartmentalization to keep them separate from your other decision making processes. A vague, comforting belief in an afterlife, or in a ‘natural, cyclical order of things’... returning to the earth and pushing up daisies... it can be useful to help accept the things you cannot change.

 

We still live in a world where Death exists. There are things we can’t change. Yet.


And those things can be horrible, and I don’t begrudge anyone a tool to work through them.


But if someone’s vague, comforting beliefs lead them to let a person go, not because they’d done everything they could to save them, but because they had a notion that they’d be together somehow in a supernatural world... if a belief leads someone to believe that they couldn’t change something that they, in fact, could have...


No. I can’t condone that.


It can be disturbing, going down the rationality rabbit hole. I started by thinking “I want to be succeeding at life,” and learned about a few biases that are affecting me, and I made some better choices, and that was good. But it wasn’t fully satisfying. I needed to form some coherent long term goals. Someone in my position might then say “Alright, I want to be more successful at my career.”

 

But then maybe they realize that success at their career wasn’t actually what was most important to them. They didn’t need that money, what they wanted was the ability to purchase things that make them happy, and support their family, and have the security to periodically do fun projects. The career was just one way of doing that. And it may not have been the best way. And suddenly they’re open to the entirety of possibility-space, a million different paths they could take that might or might not leave them satisfied. And they don’t have any of the tools they need to figure out which ones to take. Some of those tools have already been invented, and they just need to find them. Others, they may need to invent for themselves.


The problem is that most people don’t have a good understanding of their values. “Be Happy” is vague, so is “Have a nice family,” so is “Make the world a better place.” Vaguest of all is “Some combination of the above.”

 

If you’re going down the rationality rabbit hole, you need to start figuring out your REAL values, instead of reciting cached thoughts that you’ve picked up from society. You might start exploring cognitive science to give you some insight into how your mind works. And then you’d start to learn that the mind is a machine, that follows physical rules. And that it’s an incoherent mess, shaped by a blind idiot god that wasn’t trying to make us happy or give us satisfying love lives or a promising future - it was just following a set of mathematical rules that caused the propagation of whatever traits increased reproductive fitness at the time.

 

And it’s not even clear that there’s a singular you in any of this. Your brain is full of separate entities working at cross purposes; your conscious mind isn’t necessarily responsible for your decisions; the “you” of today isn’t necessarily the same as the “you” of yesterday or tomorrow. And like it or not, this incoherent mess is what your hopes and dreams and morals are made of.


Maybe for a moment, you may come to believe that it all IS really meaningless. We’re not put here with a purpose. The universe doesn’t care about us. Love isn’t inherently any more important than paperclips. The very concept of a continuous self isn’t obviously true. When all is said and done, morality isn’t “real” in an objective sense. There’s just matter, and math. So why the hell worry about anything?

 

Or maybe instead you’d flinch away from these ideas. Avoid the discomfort. You can do that. But these aren’t just silly philosophical questions that can be ignored. Somebody has to think about them. Because as technology moves forward, we *will* be relying increasingly on automated processes. Not just to work for us, but to think for us. Computers are already better at solving certain types of problems than the average expert. Machine intelligence is almost definitely coming, and society will have to change rapidly around it, and it will become incredibly important for us to know what it is we actually care about. Partly so that we don’t accidentally change ourselves into something we regret. But also so that if and when an AI is created which has the ability to improve itself, and rapidly becomes smart enough to convince its human creators to give it additional resources for perfectly “good” reasons, until it suddenly is powerful enough to grow on its own with only our initial instructions to guide it... we better hope that those initial instructions contained detailed notes about everything we hold dear.


We better hope that the AI’s interior world of pure math includes some kind of ghost in the machine that looks over each step and thinks “yes, my decisions are still moving in a good direction.” That ghost-in-the-machine will only exist if we deliberately put it there. And the only way to do that is to understand ourselves well enough to bother explaining that no, you don’t use the atoms of people to create paperclips. You don’t just “save as many lives as possible” by hooking people up to feeding tubes. You don’t make everyone happy by pumping them full of heroin, you don’t go changing people’s bodies or minds without their consent.


None of these things are remotely obvious to a ghost of perfect emptiness that wasn’t shaped for millions of years by a blind idiot god. Many humans wouldn’t even consider them as options. But someday people may build a decision-making machine with the capacity to surpass us, and those people will need to understand the convoluted mess of values that makes up their mind, and mine, and yours. They’ll need to be able to reduce an understanding of love to pure math, that a computer can comprehend. Because the future is at stake.

 

It would be nice to just say “don’t build the superintelligence.” But in the Information Age, preventing technological development is just not a reliable safeguard.


This may all seem far fetched, and if you weren’t already familiar with a lot of these ideas, I wouldn’t expect you to be convinced in these few pages (indeed, you should be demanding more than three paragraphs of assertions as evidence). But even without the risk of AI, the future is still at stake. Hell, the present is at stake. People are dying as we speak. And suffering. Losing their autonomy. Their equality. Losing the ability to control their bodies. Even for those who lived good lives in modern countries, age can creep over them and cripple their ability not just to move but to think and decide, destroying everything they thought made them human until all that’s left is a person, trapped in a crumbling body, who can’t control their own life but who desperately doesn’t want to die alone.

 

This is a monstrously harsh reality. A monstrously hard problem, not at all calibrated to our current skills. The problems extend beyond the biological processes that make death a reality, and into the world of resources and politics and limited space. It’s easy to decide that the problem is too hard, that we’ll never be able to solve it. And this is just the present. All of the suffering of the people currently alive pales in comparison to the potential suffering of future generations, or worse, to the lives that might go unlived if humanity makes too many mistakes in an unfair universe and erases itself.

 

What is it about the future that’s worth protecting? What makes it worth it to drag eight thousand pound stones across 150 miles of land, for the benefit of people who won’t be born for centuries, whom you’ll never see? I can tell you my answer: a young mind, born millennia from now, whose values are still close enough to mine that I can at least recognize them. Who has the mental framework to ask of its parents, “Why does love exist?” and to care about the answer to the question.

 

The answer is as ludicrously simple as it is immensely complicated, and you may not have needed the Gift We Give to Tomorrow to explain it to you. Love exists, it was shaped by blind mathematical forces that don’t care about anything. But it exists and we care about it - we care so, so very deeply. And not just about love. Creativity. Curiosity. Excitement. Autonomy. Other people. Morality. Our children’s children. We don’t need a reason to care about these things. We may not fully understand them. But they exist. For us, they are real.


The Gift We Give to Tomorrow walked me through all this understanding. Deep, down into the heart of the abyss where nothing actually matters. Pretending no comforting lies. Cutting away the last illusions. And still, it somehow left me with a vision of humanity, of the universe, of the future, that is beautiful and satisfying.

 

It doesn’t matter that it didn’t really say anything new, that I hadn’t already worked out.

 

It was just beautiful. Just because.

 

That beauty, that vision of the future, that is what is worth protecting. That’s why I’m sacrificing comfort and peace of mind. That’s why I’m thinking hard, rebelling against my initial instincts to make fun video games. My second instinct to give to the first charity that shows me a picture of an adorable orphan, or that I’m already familiar with in some way. My third instinct to settle for saving maybe a few dozen lives.


My instincts were shaped by blind mathematical forces in an ancestral environment where one orphan was the most I could be expected to worry about. And it is my prerogative, as one small conscious fragment of an incoherent sentient mind, to look at the part of my brain that thinks “that’s all that matters”, and rebel. Take the cold, calculating long view. It’s not enough to think in the moral terms that my mind is naturally good at.

 

A million people feel like a statistic. They feel even more like a statistic when they live in a distant country. They feel even more like a statistic when they live in a distant future and their values have drifted somewhat from the things we care about today.

 

But those people are not a statistic. A million deaths is a million tragedies. A billion deaths is a billion tragedies. The possible extinction of the human race is something fundamentally worse than a tragedy, something I still don’t have a word for.


I don’t know what exactly I’m capable of doing, to bring about the most good I can. It might be working hard at a high paying programming job and donating to effective charities. It might be directly working on problems that save lives, or which educate future generations to be able to save even more. It might be investing in companies that are producing important services but in a for-profit context. It might be working on scientific research in any one of a hundred important fields. It might be creating art that moves people in important ways.

 

It might be contributing to AI research. It might not. I don’t know. This isn’t about abandoning one familiar cause for another. When the future is at stake, I don’t have the luxury of not thinking hard about the right choice or passing the buck to a modern Pascal’s wager. Current organizations working on AI research might be effective enough at their jobs to be worth supporting. They might not. They might be worth supporting later, but not yet. Or vice versa. So many factors to consider.

 

I have to decide what’s good, and I have to decide alone, and I can’t take forever to think about it.

 

I can’t expect everyone, or even me, to devote their lives to this question. I don’t know what kind of person I am yet. Right now I’m in a fever pitch of inspiration and I feel ready to take on the world, but when all is said and done, I do mostly care about my own happiness. And I think that’s okay - I think most people should spend most of their time seeing to their own needs, building their own community. But my hope is that I can find a way to be happy and contribute to a greater good at the same time.


In the meantime, I wrote this book, and planned an evening that bordered on religious service. I did this for a lot of reasons, but the biggest one was to battle parts of my mind that I am not satisfied with. The parts of me that think a rare, spectacular disease is more important than a common, easily fixed problem. The parts of me that think a smiling, hungry orphan is more important than a billion lives. The parts of me that I have to keep fighting in order to make good decisions.


Because I am fucking sick of having to feel like a cold hearted bastard, when I try to make the choice that is good.

 

I’m willing to feel that way, if I have to. It’s worth it. But I shouldn’t have to, and neither should you.

 

To fix this, I use art. And good art sometimes has to blur the line between fact and fiction, using certain kinds of lies so that my lizard brain can fully comprehend certain other kinds of truths. To understand why six billion people are more important than a single hungry orphan, it can help to tell a story.

 

Not about six billion people, but about one child.

 

Across space and time, ages from now, ever so far away: In a universe of pure math, where there is no loving god to shelter us nor punish the Genghis Khans of the world.... there exists the possibility of a child whose values I can understand, asking their parents “Why does love exist?”

 

That child’s existence is not inevitable. It will be born, or not, depending on actions that humans take today. It will suffer, or not, depending on the direction that humanity steers itself. It will die in a hundred, a thousand, or a million years, depending on how far we progress in solving the problem of death. And I don’t know for sure whether any of this will specifically require your actions, or mine.


That child is beautiful. The very possibility of that child is beautiful. That beauty is worth protecting. I don’t speak for the entire Less Wrong community, but I write this to honor the birth of that child, and everything that child represents: Peace on earth, and perhaps across the galaxy. Good will, among all sentient minds. Scientific and ethical progress. All the hard work and sacrifice that these things entail.

 

In the world beyond the reach of god, if we care about that child, then 'good enough' decisions just aren't good enough.
Rationality matters.

 
