Jan 16, 2009
Previously in series: Justified Expectation of Pleasant Surprises
"Vagueness" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification. But a rational view of the Future should be vague, because the information we have about the Future is weak. Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.
But there's also a more severe warning that I must deliver: It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them. It can suck the emotional energy out of your actual, current, ongoing life.
Epistemically, we know the Past much more specifically than the Future. But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come and how much better we're doing, rather than comparing your life to an imagined future and thinking about how awful you've got it Now.
Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there. That may suck out your soul like an emotional vacuum cleaner.
I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'." Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future. It's like something out of H. P. Lovecraft: The Call of Eutopia. A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.
But for the record, I will now lay out the components of "soul-sucking", that you may recognize the bright abyss and steer your thoughts away:
Hope can be a dangerous thing. And when you've just been hit hard—at the moment when you most need hope to keep you going—that's also when the real world seems most painful, and the world of imagination becomes most seductive.
It's a balancing act, I think. One needs enough Fun Theory to truly and legitimately justify hope in the future. But not a detailed vision so seductive that it steals emotional energy from the real life and real challenge of creating that future. You need "a light at the end of the secular rationalist tunnel" as Roko put it, but you don't want people to drift away from their bodies into that light.
So how much light is that, exactly? Ah, now that's the issue.
I'll start with a simple and genuine question: Is what I've already said, enough?
Is knowing the abstract fun theory, and being able to pinpoint the exact flaws in previous flawed Utopias, enough to make you look forward to tomorrow? Is it enough to inspire a stronger will to live? To dispel worries about a long dark tea-time of the soul? Does it now seem, on a gut level, that if we could really build an AI and really shape it, the resulting future would be very much worth staying alive to see?
Part of The Fun Theory Sequence
Next post: "The Uses of Fun (Theory)"
Previous post: "Justified Expectation of Pleasant Surprises"