From Greg Egan's *Permutation City*:

The workshop abutted a warehouse full of table legs—one hundred and sixty-two thousand, three hundred and twenty-nine, so far. Peer could imagine nothing more satisfying than reaching the two hundred thousand mark—although he knew it was likely that he'd change his mind and abandon the workshop before that happened; new vocations were imposed by his exoself at random intervals, but statistically, the next one was overdue. Immediately before taking up woodwork, he'd passionately devoured all the higher mathematics texts in the central library, run all the tutorial software, and then personally contributed several important new results to group theory—untroubled by the fact that none of the Elysian mathematicians would ever be aware of his work. Before that, he'd written over three hundred comic operas, with librettos in Italian, French and English—and staged most of them, with puppet performers and audience. Before that, he'd patiently studied the structure and biochemistry of the human brain for sixty-seven years; towards the end he had fully grasped, to his own satisfaction, the nature of the process of consciousness. Every one of these pursuits had been utterly engrossing, and satisfying, at the time. He'd even been interested in the Elysians, once.

No longer. He preferred to think about table legs.

Among science fiction authors, (early) Greg Egan is my favorite; of early-Greg-Egan's books, *Permutation City* is my favorite; and this particular passage in *Permutation City,* more than any of the others, I find utterly horrifying.

If this were all the hope the future held, I don't know if I could bring myself to try. Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do.

You could think of this whole series on Fun Theory as my reply to Greg Egan—a list of the ways that his human-level uploaded civilizations Fail At Fun. (And yes, this series will also explain what's wrong with the Culture and how to fix it.)

We won't get to *all* of Peer's problems today—but really. *Table legs?*

I could see myself carving *one* table leg, maybe, if there was something non-obvious to learn from the experience. But not 162,329.

In *Permutation City*, Peer modified himself to find table-leg-carving fascinating and worthwhile and pleasurable. But really, at *that* point, you might as well modify yourself to get pleasure from playing Tic-Tac-Toe, or from lying motionless on a pillow as a limbless, eyeless blob having fantastic orgasms. It's not a worthy use of a human-level intelligence.

Worse, carving the 162,329th table leg doesn't *teach* you anything that you didn't already know from carving the 162,328 previous table legs. A mind that changes so little in life's course is scarcely experiencing time.

But apparently, once you do a little group theory, write a few operas, and solve the mystery of consciousness, there isn't much else worth doing in life: you've *exhausted the entirety of Fun Space* down to the level of table legs.

Is this plausible? How large is Fun Space?

Let's say you were a human-level intelligence who'd never seen a Rubik's Cube, or anything remotely like it. As Hofstadter describes in two whole chapters of *Metamagical Themas,* there's a *lot* that intelligent human novices can learn from the Cube—like the whole notion of an "operator" or "macro", a sequence of moves that accomplishes a limited swap with few side effects. Parity, search, impossibility—

So you learn these things in the long, difficult course of solving the *first* scrambled Rubik's Cube you encounter. The *second* scrambled Cube—solving it might still be difficult, still be enough fun to be worth doing. But you won't have quite the same pleasurable shock of encountering something as new, and strange, and interesting as the first Cube was unto you.

Even if you encounter a variant of the Rubik's Cube—like a 4x4x4 Cube instead of a 3x3x3 Cube—or even a Rubik's Tesseract (a 3x3x3x3 Cube in four dimensions)—it still won't contain quite as much fun as the first Cube you ever saw. I haven't tried mastering the Rubik's Tesseract myself, so I don't know if there are added secrets in four dimensions—but it doesn't seem likely to teach me anything as fundamental as "operators", "side effects", or "parity".

(I was quite young when I encountered a Rubik's Cube in a toy cache, and so that actually *is* where I discovered such concepts. I tried that Cube on and off for months, without solving it. Finally I took out a book from the library on Cubes, applied the macros there, and discovered that this particular Cube was *unsolvable*—it had been disassembled and reassembled into an impossible position. I think I was faintly annoyed.)
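The "operator" idea can be made concrete without a full cube model. The sketch below uses abstract permutations rather than real cube moves—the two 5-cycles are illustrative stand-ins, not an actual cube—but it shows the essential trick: composing two sweeping moves into a commutator that accomplishes a small, targeted swap with few side effects.

```python
# A sketch of the "operator"/"macro" idea, using abstract permutations
# rather than real cube moves (the two 5-cycles below are illustrative
# stand-ins, not an actual cube model).

def compose(*perms):
    """Compose permutations (as lists), applying them right-to-left."""
    def apply_all(x):
        for p in reversed(perms):
            x = p[x]
        return x
    return [apply_all(x) for x in range(len(perms[0]))]

def inverse(p):
    inv = [0] * len(p)
    for src, dst in enumerate(p):
        inv[dst] = src
    return inv

def moved(p):
    """Which positions does this permutation disturb?"""
    return [x for x in range(len(p)) if p[x] != x]

n = 9
a = list(range(n))
for i in range(5):
    a[i] = (i + 1) % 5        # a = the 5-cycle (0 1 2 3 4)
b = list(range(n))
for i in range(4, 9):
    b[i] = 4 + (i - 3) % 5    # b = the 5-cycle (4 5 6 7 8)

# The commutator a b a^-1 b^-1: two sweeping moves that overlap in only
# one position combine into a surgical 3-cycle.
macro = compose(a, b, inverse(a), inverse(b))
print(moved(a))      # [0, 1, 2, 3, 4]: each raw move disturbs 5 positions
print(moved(b))      # [4, 5, 6, 7, 8]
print(moved(macro))  # [0, 4, 5]: the macro disturbs only 3
```

Each raw move churns five positions, but the composed macro touches only three—which is why cube-solvers build their solutions out of such commutators.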

Learning is fun, but it *uses up* fun: you can't have the same stroke of genius twice. Insight is insight because it makes future problems *less difficult,* and "deep" because it applies to many such problems.

And the smarter you are, the faster you learn—so the smarter you are, the less *total* fun you can have. Chimpanzees can occupy themselves for a lifetime at tasks that would bore you or me to tears. Clearly, the solution to Peer's difficulty is to become stupid enough that carving table legs is *difficult* again—and so lousy at generalizing that every table leg is a new and exciting challenge—

Well, but hold on: If you're a chimpanzee, you can't understand the Rubik's Cube *at all.* At least I'm willing to bet against anyone training a chimpanzee to solve one—let alone a chimpanzee solving it spontaneously—let alone a chimpanzee understanding the deep concepts like "operators", "side effects", and "parity".

I could be wrong here, but it seems to me, on the whole, when you look at the number of ways that chimpanzees have fun, and the number of ways that humans have fun, that *Human Fun Space is larger than Chimpanzee Fun Space*.

And not in a way that increases just *linearly* with brain size, either.

The space of problems that are Fun to a given brain *will* definitely be smaller than the exponentially increasing space of all possible problems that brain can *represent*. We are interested only in the borderland between triviality and impossibility—problems difficult enough to worthily occupy our minds, yet tractable enough to be worth challenging. (What *looks* "impossible" is not always impossible, but the border is still *somewhere* even if we can't see it at a glance—there are some problems so difficult you can't even learn much from failing.)

An even stronger constraint is that if you do something many times, you ought to learn from the experience and get better—many problems of the same *difficulty *will have the same "learnable lessons" embedded in them, so that doing one consumes some of the fun of others.

As you learn new things, and your skills improve, problems will get easier. Some will move off the border between the possible and the impossible, and become too easy to be interesting.

But *others *will move from the territory of impossibility into the borderlands of mere extreme difficulty. It's easier to invent group theory if you've solved the Rubik's Cube first. There are insights you can't have without prerequisite insights.

If you get smarter over time (larger brains, improved mind designs) that's a still higher octave of the same phenomenon. (As best I can grasp the Law, there are insights you can't understand *at all* without having a brain of sufficient size and sufficient design. Humans are not maximal in this sense, and I don't think there should be any maximum—but that's a rather deep topic, which I shall not explore further in this blog post. Note that Greg Egan seems to explicitly believe the reverse—that humans can understand *anything understandable*—which explains a lot.)

One suspects that in a better-designed existence, the eudaimonic rate of intelligence increase would be bounded below by the need to *integrate* the loot of your adventures—to incorporate new knowledge and new skills *efficiently*, without swamping your mind in a sea of disconnected memories and associations—to manipulate larger, more powerful concepts that generalize more of your accumulated life-knowledge at once.

And one also suspects that part of the poignancy of transhuman existence will be having to *move on* from your current level—get smarter, leaving old challenges behind—before you've explored more than an infinitesimal fraction of the Fun Space for a mind of your level. If, like me, you play through computer games trying to slay every single monster so you can collect every single experience point, this is as much tragedy as an improved existence could possibly need.

Fun Space can increase much more slowly than the space of representable problems, and still overwhelmingly swamp the amount of time you could bear to spend as a mind of a fixed level. Even if Fun Space grows at some ridiculously tiny rate like N-squared—bearing in mind that the actual raw space of representable problems goes as 2^N—we're still talking about "way more fun than you can handle".

If you consider the loot of every human adventure—everything that was ever learned about science, and everything that was ever learned about people, and all the original stories ever told, and all the original games ever invented, and all the plots and conspiracies that were ever launched, and all the personal relationships ever raveled, and all the ways of existing that were ever tried, and all the glorious epiphanies of wisdom that were ever minted—

—and you deleted all the duplicates, keeping only one of every lesson that had the same moral—

—how long would you have to stay human, to collect *every* gold coin in the dungeons of history?

Would it all fit into a single human brain, without that mind completely disintegrating under the weight of unrelated associations? And even then, would you have come close to exhausting the space of *human possibility*, which we've surely not finished exploring?

This is all sounding like suspiciously good news. So let's turn it around. Is there any way that Fun Space could fail to grow, and instead collapse?

Suppose there's only so many deep insights you *can* have on the order of "parity", and that you collect them all, and then math is never again as exciting as it was in the beginning. And that you then exhaust the shallower insights, and the trivial insights, until finally you're left with the delightful shock of "Gosh wowie gee willickers, the product of 845 and 109 is 92105, I didn't know that logical truth before."

Well—obviously, if you sit around and catalogue all the deep insights *known* to you to exist, you're going to end up with a bounded list. And equally obviously, if you declared, "This is all there is, and all that will ever be," you'd be taking an unjustified step. (Though I fully expect some people out there to step up and say how it seems to them that they've already started to run out of available insights that are as deep as the ones they remember from their childhood. And I fully expect that—compared to the sort of person who makes such a pronouncement—I *personally* will have collected more additional insights than they believe exist in the whole remaining realm of possibility.)

Can we say anything more on this subject of fun insights that might exist, but that we haven't yet found?

The obvious thing to do is start appealing to Gödel, but Gödelian arguments are dangerous tools to employ in debate. It does seem to me that Gödelian arguments weigh in the general direction of "inexhaustible deep insights", but inconclusively and only by loose analogies.

For example, the Busy-Beaver(N) problem asks for the longest running time of any halting Turing machine with no more than N states. The Busy Beaver function is uncomputable—there is no fixed Turing machine that computes it for all N—because if you knew all the Busy Beaver numbers, you would have an infallible way of telling whether a Turing machine halts: run it for Busy-Beaver(N) steps, and if it hasn't halted by then, it never will.

The human species has managed to figure out and prove the Busy Beaver numbers up to 4, and they are:

BB(1): 1

BB(2): 6

BB(3): 21

BB(4): 107

Busy-Beaver 5 is believed to be 47,176,870.

The current lower bound on Busy-Beaver(6) is ~2.5 × 10^{2879}.
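To make the object under discussion concrete, here is a minimal Turing-machine simulator running the standard two-state Busy Beaver champion; it halts after exactly 6 steps, matching BB(2) above. The `max_steps` cutoff also illustrates the halting argument: if you knew BB(N), running any N-state machine that long would settle whether it ever halts.

```python
# A sketch: simulate the standard two-state Busy Beaver champion on an
# all-zero tape. State "H" is the halt state; the tape is a sparse dict.

def run(machine, max_steps):
    """Return (steps, ones_written) if the machine halts within
    max_steps, else None."""
    tape = {}
    head, state, steps = 0, "A", 0
    while state != "H":
        if steps >= max_steps:
            return None
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The known BB(2) champion: (write, move, next_state) for each
# (state, symbol) pair.
bb2_champion = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}

print(run(bb2_champion, 100))  # (6, 4): halts in 6 steps, writing 4 ones
```

Proving that no two-state machine runs longer is the hard part, and it only gets harder from there—which is exactly why the human species has only reached N = 4 with full proof.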

This function *provably* grows faster than any computable function—faster than any pattern you could compactly specify in advance. Which would seem to argue that each new Busy Beaver champion is exhibiting a new and interesting kind of behavior. Given infinite time, you would even be able to *notice* this behavior. You won't ever know for certain, after finite time, that you've discovered the Busy-Beaver *champion* for any given N; but conversely, you *will* notice the Busy Beaver champion for any N after some finite time.

Yes, this is an *unimaginably long* time—one of the few occasions where the word "unimaginable" is literally correct. We can't *actually* do this unless reality works the way it does in Greg Egan novels. But the point is that in the limit of infinite time we can point to *something* sorta like "an infinite sequence of *learnable* deep insights not reducible to any of their predecessors or to any learnable abstract summary". It's not conclusive, but it's at least *suggestive.*

Now you could still look at that and say, "I don't think my life would be an adventure of neverending excitement if I spent until the end of time trying to figure out the weird behaviors of slightly larger Turing machines."

Well—as I said before, Peer is doing more than *one* thing wrong. Here I've dealt with only one sort of dimension of Fun Space—the dimension of how much *novelty* we can expect to find available to introduce into our fun.

But even on the arguments given so far... I don't call it conclusive, but it seems like sufficient reason to *hope and expect* that our descendants and future selves won't exhaust Fun Space to the point that there is literally nothing left to do but carve the 162,329th table leg.

Part of *The Fun Theory Sequence*

Next post: "Continuous Improvement"

Previous post: "High Challenge"