I recently discovered a cool new blog called studiolo and wanted to share it here. You will probably like this post if you like science fiction, since it contains long excerpts from it. Unfortunately, formatting it properly as a quote has been giving me some trouble, so I'll go with the least ugly-looking solution; please don't think I claim to have written it. I found the speculation entertaining and interesting, because I have thought extensively along similar lines about the effect the science fiction I consumed has had on my own world view (though I haven't often mentioned it).

Link to original post by Federico.

The school of science fiction

I have tried to persuade my friends and acquaintances that governmental reboot and friendly AI are important problems. I have failed. Two candidate hypotheses:

1. They do not share my distaste for the banal.

2. They did not consume, at the formative age, a sufficient amount of science fiction.

#1 and #2 are not mutually exclusive. Distaste for the banal is merely an attitude—but in the first place, fascinating consequences are what nourished my Bayesian, utilitarian beliefs. Science fiction encourages kids to realise that life, the Universe and everything holds out fascinating possibilities, and that it is both valid and essential for humans to explore these ideas.

Sister Y, in her pornographically insightful essay on insight porn, highlights Philip K. Dick's short stories. I concur. Dick writes in 1981:

I will define science fiction, first, by saying what sf is not. It cannot be defined as “a story (or novel or play) set in the future,” since there exists such a thing as space adventure, which is set in the future but is not sf: it is just that: adventures, fights and wars in the future in space involving super-advanced technology. Why, then, is it not science fiction? It would seem to be, and Doris Lessing (e.g.) supposes that it is. However, space adventure lacks the distinct new idea that is the essential ingredient. Also, there can be science fiction set in the present: the alternate world story or novel. So if we separate sf from the future and also from ultra-advanced technology, what then do we have that can be called sf?

We have a fictitious world; that is the first step: it is a society that does not in fact exist, but is predicated on our known society; that is, our known society acts as a jumping-off point for it; the society advances out of our own in some way, perhaps orthogonally, as with the alternate world story or novel. It is our world dislocated by some kind of mental effort on the part of the author, our world transformed into that which it is not or not yet. This world must differ from the given in at least one way, and this one way must be sufficient to give rise to events that could not occur in our society — or in any known society present or past. There must be a coherent idea involved in this dislocation; that is, the dislocation must be a conceptual one, not merely a trivial or bizarre one — this is the essence of science fiction, the conceptual dislocation within the society so that as a result a new society is generated in the author’s mind, transferred to paper, and from paper it occurs as a convulsive shock in the reader’s mind, the shock of dysrecognition. He knows that it is not his actual world that he is reading about.

Now, to separate science fiction from fantasy. This is impossible to do, and a moment’s thought will show why. Take psionics; take mutants such as we find in Ted Sturgeon’s wonderful MORE THAN HUMAN. If the reader believes that such mutants could exist, then he will view Sturgeon’s novel as science fiction. If, however, he believes that such mutants are, like wizards and dragons, not possible, nor will ever be possible, then he is reading a fantasy novel. Fantasy involves that which general opinion regards as impossible; science fiction involves that which general opinion regards as possible under the right circumstances. This is in essence a judgment-call, since what is possible and what is not possible is not objectively known but is, rather, a subjective belief on the part of the author and of the reader.

Now to define good science fiction. The conceptual dislocation — the new idea, in other words — must be truly new (or a new variation on an old one) and it must be intellectually stimulating to the reader; it must invade his mind and wake it up to the possibility of something he had not up to then thought of. Thus “good science fiction” is a value term, not an objective thing, and yet, I think, there really is such a thing, objectively, as good science fiction.

I think Dr. Willis McNelly at the California State University at Fullerton put it best when he said that the true protagonist of an sf story or novel is an idea and not a person. If it is good sf the idea is new, it is stimulating, and, probably most important of all, it sets off a chain-reaction of ramification-ideas in the mind of the reader; it so-to-speak unlocks the reader’s mind so that that mind, like the author’s, begins to create. Thus sf is creative and it inspires creativity, which mainstream fiction by-and-large does not do. We who read sf (I am speaking as a reader now, not a writer) read it because we love to experience this chain-reaction of ideas being set off in our minds by something we read, something with a new idea in it; hence the very best science fiction ultimately winds up being a collaboration between author and reader, in which both create — and enjoy doing it: joy is the essential and final ingredient of science fiction, the joy of discovery of newness.

Spoiler warning

Several of Dick’s short stories prefigure Eliezer Yudkowsky’s (entirely serious) notion of unfriendly AI:

“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

—Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

Here is an excerpt from Autofac (1955):

Cut into the base of the mountains lay the vast metallic cube of the Kansas City factory. Its surface was corroded, pitted with radiation pox, cracked and scarred from the five years of war that had swept over it. Most of the factory was buried subsurface, only its entrance stages visible. The truck was a speck rumbling at high speed toward the expanse of black metal. Presently an opening formed in the uniform surface; the truck plunged into it and disappeared inside. The entrance snapped shut.

“Now the big job remains,” O’Neill said. “Now we have to persuade it to close down operations — to shut itself off.”

Judith O’Neill served hot black coffee to the people sitting around the living room. Her husband talked while the others listened. O’Neill was as close to being an authority on the autofac system as could still be found.

In his own area, the Chicago region, he had shorted out the protective fence of the local factory long enough to get away with data tapes stored in its posterior brain. The factory, of course, had immediately reconstructed a better type of fence. But he had shown that the factories were not infallible.

“The Institute of Applied Cybernetics,” O’Neill explained, “had complete control over the network. Blame the war. Blame the big noise along the lines of communication that wiped out the knowledge we need. In any case, the Institute failed to transmit its information to us, so we can’t transmit our information to the factories — the news that the war is over and we’re ready to resume control of industrial operations.”

“And meanwhile,” Morrison added sourly, “the damn network expands and consumes more of our natural resources all the time.”

“I get the feeling,” Judith said, “that if I stamped hard enough, I’d fall right down into a factory tunnel. They must have mines everywhere by now.”

“Isn’t there some limiting injunction?” Ferine asked nervously. “Were they set up to expand indefinitely?”

“Each factory is limited to its own operational area,” O’Neill said, “but the network itself is unbounded. It can go on scooping up our resources forever. The Institute decided it gets top priority; we mere people come second.”

“Will there be anything left for us?” Morrison wanted to know.

“Not unless we can stop the network’s operations. It’s already used up half a dozen basic minerals. Its search teams are out all the time, from every factory, looking everywhere for some last scrap to drag home.”

“What would happen if tunnels from two factories crossed each other?”

O’Neill shrugged. “Normally, that won’t happen. Each factory has its own special section of our planet, its own private cut of the pie for its exclusive use.”

“But it could happen.”

“Well, they’re raw material-tropic; as long as there’s anything left, they’ll hunt it down.” O’Neill pondered the idea with growing interest. “It’s something to consider. I suppose as things get scarcer –”

He stopped talking. A figure had come into the room; it stood silently by the door, surveying them all.

In the dull shadows, the figure looked almost human. For a brief moment, O’Neill thought it was a settlement latecomer. Then, as it moved forward, he realized that it was only quasi-human: a functional upright biped chassis, with data-receptors mounted at the top, effectors and proprioceptors mounted in a downward worm that ended in floor-grippers. Its resemblance to a human being was testimony to nature’s efficiency; no sentimental imitation was intended.

The factory representative had arrived.

It began without preamble. “This is a data-collecting machine capable of communicating on an oral basis. It contains both broadcasting and receiving apparatus and can integrate facts relevant to its line of inquiry.”

The voice was pleasant, confident. Obviously it was a tape, recorded by some Institute technician before the war. Coming from the quasi-human shape, it sounded grotesque; O’Neill could vividly imagine the dead young man whose cheerful voice now issued from the mechanical mouth of this upright construction of steel and wiring.

“One word of caution,” the pleasant voice continued. “It is fruitless to consider this receptor human and to engage it in discussions for which it is not equipped. Although purposeful, it is not capable of conceptual thought; it can only reassemble material already available to it.”

The optimistic voice clicked out and a second voice came on. It resembled the first, but now there were no intonations or personal mannerisms. The machine was utilizing the dead man’s phonetic speech-pattern for its own communication.

“Analysis of the rejected product,” it stated, “shows no foreign elements or noticeable deterioration. The product meets the continual testing-standards employed throughout the network. Rejection is therefore on a basis outside the test area; standards not available to the network are being employed.”

“That’s right,” O’Neill agreed. Weighing his words with care, he continued, “We found the milk substandard. We want nothing to do with it. We insist on more careful output.”

The machine responded presently. “The semantic content of the term ‘pizzled’ is unfamiliar to the network. It does not exist in the taped vocabulary. Can you present a factual analysis of the milk in terms of specific elements present or absent?”

“No,” O’Neill said warily; the game he was playing was intricate and dangerous. “‘Pizzled’ is an overall term. It can’t be reduced to chemical constituents.”

“What does ‘pizzled’ signify?” the machine asked. “Can you define it in terms of alternate semantic symbols?”

O’Neill hesitated. The representative had to be steered from its special inquiry to more general regions, to the ultimate problem of closing down the network. If he could pry it open at any point, get the theoretical discussion started. . .

“‘Pizzled,’” he stated, “means the condition of a product that is manufactured when no need exists. It indicates the rejection of objects on the grounds that they are no longer wanted.”

The representative said, “Network analysis shows a need of high-grade pasteurized milk-substitute in this area. There is no alternate source; the network controls all the synthetic mammary-type equipment in existence.” It added, “Original taped instructions describe milk as an essential to human diet.”

O’Neill was being outwitted; the machine was returning the discussion to the specific. “We’ve decided,” he said desperately, “that we don’t want any more milk. We’d prefer to go without it, at least until we can locate cows.”

“That is contrary to the network tapes,” the representative objected. “There are no cows. All milk is produced synthetically.”

“Then we’ll produce it synthetically ourselves,” Morrison broke in impatiently. “Why can’t we take over the machines? My God, we’re not children! We can run our own lives!”

The factory representative moved toward the door. “Until such time as your community finds other sources of milk supply, the network will continue to supply you. Analytical and evaluating apparatus will remain in this area, conducting the customary random sampling.”

Ferine shouted futilely, “How can we find other sources? You have the whole setup! You’re running the whole show!” Following after it, he bellowed, “You say we’re not ready to run things — you claim we’re not capable. How do you know? You don’t give us a chance! We’ll never have a chance!”

O’Neill was petrified. The machine was leaving; its one-track mind had completely triumphed.

“Look,” he said hoarsely, blocking its way. “We want you to shut down, understand. We want to take over your equipment and run it ourselves. The war’s over with. Damn it, you’re not needed anymore!”

The factory representative paused briefly at the door. “The inoperative cycle,” it said, “is not geared to begin until network production merely duplicates outside production. There is at this time, according to our continual sampling, no outside production. Therefore network production continues.”

This is not to say that sci-fi always hits on the right answers to important problems. The panel below is from Meka-City, an episode of Judge Dredd that takes place shortly after the “Apocalypse War”.

Nuclear Ethics

Robert Heinlein's The Moon Is a Harsh Mistress is equally questionable from an x-risk perspective: the heroes place Earth at the mercy of a superintelligence whose friendliness, and even sanity, are unproven and untested. Might it, however, have inspired a generation of libertarian dissidents?

Prof shook head. “Every new member made it that much more likely that you would be betrayed. Wyoming dear lady, revolutions are not won by enlisting the masses. Revolution is a science only a few are competent to practice. It depends on correct organization and, above all, on communications. Then, at the proper moment in history, they strike. Correctly organized and properly timed it is a bloodless coup. Done clumsily or prematurely and the result is civil war, mob violence, purges, terror. I hope you will forgive me if I say that, up to now, it has been done clumsily.”

Wyoh looked baffled. “What do you mean by ‘correct organization’?”

“Functional organization. How does one design an electric motor? Would you attach a bathtub to it, simply because one was available? Would a bouquet of flowers help? A heap of rocks? No, you would use just those elements necessary to its purpose and make it no larger than needed—and you would incorporate safety factors. Function controls design.

“So it is with revolution. Organization must be no larger than necessary—never recruit anyone merely because he wants to join. Nor seek to persuade for the pleasure of having another share your views. He’ll share them when the times comes . . . or you’ve misjudged the moment in history. Oh, there will be an educational organization but it must be separate; agitprop is no part of basic structure.

“As to basic structure, a revolution starts as a conspiracy therefore structure is small, secret, and organized as to minimize damage by betrayal—since there always are betrayals. One solution is the cell system and so far nothing better has been invented.

“Much theorizing has gone into optimum cell size. I think that history shows that a cell of three is best—more than three can’t agree on when to have dinner, much less when to strike. Manuel, you belong to a large family; do you vote on when to have dinner?”

“Bog, no! Mum decides.”

“Ah.” Prof took a pad from his pouch, began to sketch. “Here is a cells-of-three tree. If I were planning to take over Luna, I would start with us three. One would be opted as chairman. We wouldn’t vote; choice would be obvious—or we aren’t the right three. We would know the next nine people, three cells . . . but each cell would know only one of us.”

“Looks like computer diagram—a ternary logic.”

“Does it really? At the next level there are two ways of linking: This comrade, second level, knows his cell leader, his two cellmates, and on the third level he knows the three in his subcell—he may or may not know his cellmates’ subcells. One method doubles security, the other doubles speed—of repair if security is penetrated. Let’s say he does not know his cellmates’ subcells—Manuel, how many can he betray? Don’t say he won’t; today they can brainwash any person, and starch and iron and use him. How many?”

“Six,” I answered. “His boss, two cellmates, three in sub-cell.”

“Seven,” Prof corrected, “he betrays himself, too. Which leaves seven broken links on three levels to repair. How?”

“I don’t see how it can be,” objected Wyoh. “You’ve got them so split up it falls to pieces.”

“Manuel? An exercise for the student.”

“Well . . . blokes down here have to have way to send message up three levels. Don’t have to know who, just have to know where.”

“Precisely!”

“But, Prof,” I went on, “there’s a better way to rig it.”

“Really? Many revolutionary theorists have hammered this out, Manuel. I have such confidence in them that I’ll offer you a wager—at, say, ten to one.”

“Ought to take your money. Take same cells, arrange in open pyramid of tetrahedrons. Where vertices are in common, each bloke knows one in adjoining cell—knows how to send message to him, that’s all he needs. Communications never break down because they run sideways as well as up and down. Something like a neural net. It’s why you can knock a hole in a man’s head, take chunk of brain out, and not damage thinking much. Excess capacity, messages shunt around. He loses what was destroyed but goes on functioning.”

“Manuel,” Prof said doubtfully, “could you draw a picture? It sounds good—but it’s so contrary to orthodox doctrine that I need to see it.”

“Well . . . could do better with stereo drafting machine. I’ll try.” (Anybody who thinks it’s easy to sketch one hundred twenty-one tetrahedrons, a five-level open pyramid, clear enough to show relationships is invited to try!)

Presently I said, “Look at base sketch. Each vertex of each triangle shares self with zero, one, or two other triangles. Where shares one, that’s its link, one direction or both—but one is enough for a multipli-redundant communication net. On corners, where sharing is zero, it jumps to right to next corner. Where sharing is double, choice is again right-handed.

“Now work it with people. Take fourth level, D-for-dog. This vertex is comrade Dan. No, let’s go down one to show three levels of communication knocked out—level E-for-easy and pick Comrade Egbert.

“Egbert works under Donald, has cellmates Edward and Elmer, and has three under him, Frank, Fred, and Fatso . . . but knows how to send message to Ezra on his own level but not in his cell. He doesn’t know Ezra’s name, face, address, or anything—but has a way, phone number probably, to reach Ezra in emergency.

“Now watch it work. Casimir, level three, finks out and betrays Charlie and Cox in his cell, Baker above him, and Donald, Dan, and Dick in subcell—which isolates Egbert, Edward, and Elmer, and everybody under them.

“All three report it—redundancy, necessary to any communication system—but follow Egbert’s yell for help. He calls Ezra. But Ezra is under Charlie and is isolated, too. No matter, Ezra relays both messages through his safety link, Edmund. By bad luck Edmund is under Cox, so he also passes it laterally, through Enwright . . . and that gets it past burned-out part and it goes up through Dover, Chambers, and Beeswax, to Adam, front office . . . who replies down other side of pyramid, with lateral pass on E-for-easy level from Esther to Egbert and on to Ezra and Edmund. These two messages, up and down, not only get through at once but in way they get through, they define to home office exactly how much damage has been done and where. Organization not only keeps functioning but starts repairing self at once.”

Wyoh was tracing out lines, convincing herself it would work—which it would, was “idiot” circuit. Let Mike study a few milliseconds, and could produce a better, safer, more foolproof hookup. And probably—certainly—ways to avoid betrayal while speeding up routings. But I’m not a computer.

Prof was staring with blank expression. “What’s trouble?” I said. “It’ll work; this is my pidgin.”

“Manuel my b— Excuse me: Señor O’Kelly . . . will you head this revolution?”

“Me? Great Bog, nyet! I’m no lost-cause martyr. Just talking about circuits.”

Wyoh looked up. “Mannie,” she said soberly, “you’re opted. It’s settled.”
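The structure Prof and Mannie argue over is, at bottom, a fault-tolerant graph: a ternary command tree plus lateral “safety links” on each level, so a betrayal can sever tree edges without isolating anyone. A toy sketch of that idea (my own construction for illustration, not Heinlein's exact pyramid of tetrahedrons) shows that removing one compromised comrade still leaves everyone else in contact with the top:

```python
# Toy model of a cells-of-three network with lateral safety links.
# This is an illustrative construction, not Heinlein's exact diagram:
# a ternary tree (node 0 at the top) plus a chain of lateral links
# joining neighbours on each level, so messages can shunt sideways
# around a burned-out branch and still reach the front office.
from collections import defaultdict, deque

def build_network(levels=4):
    """Build the tree edges and lateral edges; return (nodes, edges)."""
    edges = defaultdict(set)
    nodes = [0]
    frontier = [0]
    next_id = 1
    for _ in range(levels - 1):
        new_frontier = []
        for parent in frontier:
            kids = [next_id, next_id + 1, next_id + 2]
            next_id += 3
            for k in kids:
                edges[parent].add(k)
                edges[k].add(parent)
                nodes.append(k)
            new_frontier.extend(kids)
        # Lateral safety links: each comrade can reach one neighbour
        # on his own level, outside his own cell.
        for a, b in zip(new_frontier, new_frontier[1:]):
            edges[a].add(b)
            edges[b].add(a)
        frontier = new_frontier
    return nodes, edges

def reachable_from_top(edges, removed):
    """Who can still exchange messages with node 0 once `removed` are taken?"""
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in edges[u]:
            if v not in seen and v not in removed:
                seen.add(v)
                queue.append(v)
    return seen

nodes, edges = build_network(levels=4)  # 1 + 3 + 9 + 27 = 40 people
betrayed = {4}                          # a second-level comrade finks out
survivors = reachable_from_top(edges, betrayed)
print(f"{len(survivors)} of {len(nodes)} still in contact")  # → 39 of 40
```

Without the lateral links this would be a pure tree, and removing node 4 would cut off its entire subtree; with them, the betrayed comrade's subordinates route sideways to a neighbouring cell and back up, which is exactly the repair Mannie traces through Ezra and Edmund.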

The marriage of fantastic and familiar allows science fiction authors to deal freely with touchy issues. The following excerpt, from PKD’s The Golden Man, is about “mutants”:

From the dirt road came the sound of motors, sleek purrs that rapidly grew louder. Two teardrops of black metal came gliding up and parked beside the house. Men swarmed out, in the dark gray-green of the Government Civil Police. In the sky swarms of black dots were descending, clouds of ugly flies that darkened the sun as they spilled out men and equipment. The men drifted slowly down.

“He’s not here,” Baines said, as the first man reached him. “He got away. Inform Wisdom back at the lab.”

“We’ve got this section blocked off.”

Baines turned to Nat Johnson, who stood in dazed silence, uncomprehending, his son and daughter beside him. “How did he know we were coming?” Baines demanded.

“I don’t know,” Johnson muttered. “He just — knew.”

“A telepath?”

“I don’t know.”

Baines shrugged. “We’ll know, soon. A clamp is out, all around here. He can’t get past, no matter what the hell he can do. Unless he can dematerialize himself.”

“What’ll you do with him when you — if you catch him?” Jean asked huskily.

“Study him.”

“And then kill him?”

“That depends on the lab evaluation. If you could give me more to work on, I could predict better.”

“We can’t tell you anything. We don’t know anything more.” The girl’s voice rose with desperation. “He doesn’t talk.”

Baines jumped. “What?”

“He doesn’t talk. He never talked to us. Ever.”

“How old is he?”

“Eighteen.”

“No communication.” Baines was sweating. “In eighteen years there hasn’t been any semantic bridge between you? Does he have any contact? Signs? Codes?”

“He — ignores us. He eats here, stays with us. Sometimes he plays when we play. Or sits with us. He’s gone days on end. We’ve never been able to find out what he’s doing — or where. He sleeps in the barn — by himself.”

“Is he really gold-colored?”

“Yes. Skin, eyes, hair, nails. Everything.”

“And he’s large? Well-formed?”

It was a moment before the girl answered. A strange emotion stirred her drawn features, a momentary glow. “He’s incredibly beautiful. A god come down to earth.” Her lips twisted. “You won’t find him. He can do things. Things you have no comprehension of. Powers so far beyond your limited –”

“You don’t think we’ll get him?” Baines frowned. “More teams are landing all the time. You’ve never seen an Agency clamp in operation. We’ve had sixty years to work out all the bugs. If he gets away it’ll be the first time –”

Baines broke off abruptly. Three men were quickly approaching the porch. Two green-clad Civil Police. And a third man between them. A man who moved silently, lithely, a faintly luminous shape that towered above them.

“Cris!” Jean screamed.

“We got him,” one of the police said.

Baines fingered his lash-tube uneasily. “Where? How?”

“He gave himself up,” the policeman answered, voice full of awe. “He came to us voluntarily. Look at him. He’s like a metal statue. Like some sort of — god.”

The golden figure halted for a moment beside Jean. Then it turned slowly, calmly, to face Baines.

“Cris!” Jean shrieked. “Why did you come back?”

The same thought was eating at Baines, too. He shoved it aside — for the time being. “Is the jet out front?” he demanded quickly.

“Ready to go,” one of the CP answered.

“Fine.” Baines strode past them, down the steps and onto the dirt field. “Let’s go. I want him taken directly to the lab.” For a moment he studied the massive figure who stood calmly between the two Civil Policemen. Beside him, they seemed to have shrunk, become ungainly and repellent. Like dwarves. . . What had Jean said? A god come to earth. Baines broke angrily away.

“Come on,” he muttered brusquely. “This one may be tough; we’ve never run up against one like it before. We don’t know what the hell it can do.”

Of course, there is a political subtext. Dick writes in 1979:

In the early Fifties much American science fiction dealt with human mutants and their glorious super-powers and super-faculties by which they would presently lead mankind to a higher state of existence, a sort of promised land. John W. Campbell, Jr., editor at Analog, demanded that the stories he bought dealt with such wonderful mutants, and he also insisted that the mutants always be shown as (1) good; and (2) firmly in charge. When I wrote “The Golden Man” I intended to show that (1) the mutant might not be good, at least good for the rest of mankind, for us ordinaries; and (2) not in charge but sniping at us as a bandit would, a feral mutant who potentially would do us more harm than good. This was specifically the view of psionic mutants that Campbell loathed, and the theme in fiction that he refused to publish… so my story appeared in If.

We sf writers of the Fifties liked If because it had high quality paper and illustrations; it was a classy magazine. And, more important, it would take a chance with unknown authors. A fairly large number of my early stories appeared in If; for me it was a major market. The editor of If at the beginning was Paul W. Fairman. He would take a badly-written story by you and rework it until it was okay – which I appreciated. Later James L. Quinn the publisher became himself the editor, and then Frederik Pohl. I sold to all three of them.

In the issue of If that followed the publishing of “The Golden Man” appeared a two-page editorial consisting of a letter by a lady school teacher complaining about “The Golden Man”. Her complaints consisted of John W. Campbell, Jr.’s complaint: she upbraided me for presenting mutants in a negative light and she offered the notion that certainly we could expect mutants to be (1) good; and (2) firmly in charge. So I was back to square one.

My theory as to why people took this view is this: I think these people secretly imagined they were themselves early manifestations of these kindly, wise, super-intelligent Ubermenschen who would guide the stupid – i.e. the rest of us – to the Promised Land. A power phantasy was involved here, in my opinion. The idea of the psionic superman taking over was a role that appeared originally in Stapledon’s ODD JOHN and A. E. van Vogt’s SLAN. “We are persecuted now,” the message ran, “and despised and rejected. But later on, boy oh boy, we will show them!”

As far as I was concerned, for psionic mutants to rule us would be to put the fox in charge of the hen house. I was reacting to what I considered a dangerous hunger for power on the part of neurotic people, a hunger which I felt John W. Campbell, Jr. was pandering to – and deliberately so. If, on the other hand, was not committed to selling any one particular idea; it was a magazine devoted to genuinely new ideas, willing to take any side of an issue. Its several editors should be commended, inasmuch as they understood the real task of science fiction: to look in all directions without restraint.

(Now read between the lines of this, with reference to its policy implications.)

Finally, Isaac Asimov’s Foundation series has inspired all sorts of people.

The lights went dim!

They didn’t go out, but merely yellowed and sank with a suddenness that made Hardin jump. He had lifted his eyes to the ceiling lights in startled fashion, and when he brought them down the glass cubicle was no longer empty.

A figure occupied it — a figure in a wheel chair!

It said nothing for a few moments, but it closed the book upon its lap and fingered it idly. And then it smiled, and the face seemed all alive.

It said, “I am Hari Seldon.” The voice was old and soft.

Hardin almost rose to acknowledge the introduction and stopped himself in the act.

The voice continued conversationally: “As you see, I am confined to this chair and cannot rise to greet you. Your grandparents left for Terminus a few months back in my time and since then I have suffered a rather inconvenient paralysis. I can’t see you, you know, so I can’t greet you properly. I don’t even know how many of you there are, so all this must be conducted informally. If any of you are standing, please sit down; and if you care to smoke, I wouldn’t mind.” There was a light chuckle. “Why should I? I’m not really here.”

Hardin fumbled for a cigar almost automatically, but thought better of it.

Hari Seldon put away his book – as if laying it upon a desk at his side – and when his fingers let go, it disappeared.

He said: “It is fifty years now since this Foundation was established – fifty years in which the members of the Foundation have been ignorant of what it was they were working toward. It was necessary that they be ignorant, but now the necessity is gone.

“The Encyclopedia Foundation, to begin with, is a fraud, and always has been!”

There was a sound of a scramble behind Hardin and one or two muffled exclamations, but he did not turn around.

Hari Seldon was, of course, undisturbed. He went on: “It is a fraud in the sense that neither I nor my colleagues care at all whether a single volume of the Encyclopedia is ever published. It has served its purpose, since by it we extracted an imperial charter from the Emperor, by it we attracted the hundred thousand humans necessary for our scheme, and by it we managed to keep them preoccupied while events shaped themselves, until it was too late for any of them to draw back.

“In the fifty years that you have worked on this fraudulent project – there is no use in softening phrases – your retreat has been cut off, and you have now no choice but to proceed on the infinitely more important project that was, and is, our real plan.

“To that end we have placed you on such a planet and at such a time that in fifty years you were maneuvered to the point where you no longer have freedom of action. From now on, and into the centuries, the path you must take is inevitable. You will be faced with a series of crises, as you are now faced with the first, and in each case your freedom of action will become similarly circumscribed so that you will be forced along one, and only one, path.

“It is that path which our psychology has worked out – and for a reason.

“For centuries Galactic civilization has stagnated and declined, though only a few ever realized that. But now, at last, the Periphery is breaking away and the political unity of the Empire is shattered. Somewhere in the fifty years just past is where the historians of the future will place an arbitrary line and say: ‘This marks the Fall of the Galactic Empire.’

“And they will be right, though scarcely any will recognize that Fall for additional centuries.

“And after the Fall will come inevitable barbarism, a period which, our psychohistory tells us, should, under ordinary circumstances, last for thirty thousand years. We cannot stop the Fall. We do not wish to; for Imperial culture has lost whatever virility and worth it once had. But we can shorten the period of Barbarism that must follow – down to a single thousand of years.

“The ins and outs of that shortening, we cannot tell you; just as we could not tell you the truth about the Foundation fifty years ago. Were you to discover those ins and outs, our plan might fail; as it would have, had you penetrated the fraud of the Encyclopedia earlier; for then, by knowledge, your freedom of action would be expanded and the number of additional variables introduced would become greater than our psychology could handle.

“But you won’t, for there are no psychologists on Terminus, and never were, but for Alurin – and he was one of us.

“But this I can tell you: Terminus and its companion Foundation at the other end of the Galaxy are the seeds of the Renascence and the future founders of the Second Galactic Empire. And it is the present crisis that is starting Terminus off to that climax.

“This, by the way, is a rather straightforward crisis, much simpler than many of those that are ahead. To reduce it to its fundamentals, it is this: You are a planet suddenly cut off from the still-civilized centers of the Galaxy, and threatened by your stronger neighbors. You are a small world of scientists surrounded by vast and rapidly expanding reaches of barbarism. You are an island of nuclear power in a growing ocean of more primitive energy; but are helpless despite that, because of your lack of metals.

“You see, then, that you are faced by hard necessity, and that action is forced on you. The nature of that action – that is, the solution to your dilemma – is, of course, obvious!”

The image of Hari Seldon reached into open air and the book once more appeared in his hand. He opened it and said:

“But whatever devious course your future history may take, impress it always upon your descendants that the path has been marked out, and that at its end is new and greater Empire!”

And as his eyes bent to his book, he flicked into nothingness, and the lights brightened once more.

Hardin looked up to see Pirenne facing him, eyes tragic and lips trembling.

The chairman’s voice was firm but toneless. “You were right, it seems. If you will see us tonight at six, the Board will consult with you as to the next move.”

They shook his hand, each one, and left, and Hardin smiled to himself. They were fundamentally sound at that; for they were scientists enough to admit that they were wrong – but for them, it was too late.

He looked at his watch. By this time, it was all over. Lee’s men were in control and the Board was giving orders no longer.

The Anacreonians were landing their first spaceships tomorrow, but that was all right, too. In six months, they would be giving orders no longer.

In fact, as Hari Seldon had said, and as Salvor Hardin had guessed since the day that Anselm haut Rodric had first revealed to him Anacreon’s lack of nuclear power – the solution to this first crisis was obvious.

Obvious as all hell!

Sayeth Moldbug:

Now, some have described the dramatic formula of UR as having a rather Tolkienesque feel; others may connect it more with C.S. Lewis; I certainly grew up reading both. But above all, I grew up reading Isaac Asimov.

If my journey into the awesome, humbling lost library that is Google Books was a film and needed a name, it might be called “Searching for Hari Seldon.” With more or less the entire Victorian corpus, modulo a bit of copyfraud, the Hari Seldon game is to enquire of this Library: which writers of the 19th would feel most justified, in their understanding of the eternal nature of history, humanity and government, by the events of the 20th? Whose crystal ball worked? Whose archived holograms delivered the news?

Broadly speaking, I think the answer is clear. Hari Seldon is Carlyle – the late Carlyle, of the Pamphlets. I consider myself a Carlylean pretty much the way a Marxist is a Marxist. There is simply no significant phenomenon of the 20th century not fully anticipated. Almost alone Carlyle predicts that the 20th will be a century of political chaos and mass murder, and he says not just what but also why. And what a writer! Religions could easily be founded on the man – and perhaps should be.

And Paul Krugman:

There are certain novels that can shape a teenage boy’s life. For some, it’s Ayn Rand’s Atlas Shrugged; for others it’s Tolkien’s The Lord of the Rings. As a widely quoted internet meme says, the unrealistic fantasy world portrayed in one of those books can warp a young man’s character forever; the other book is about orcs. But for me, of course, it was neither. My Book – the one that has stayed with me for four-and-a-half decades – is Isaac Asimov’s Foundation Trilogy, written when Asimov was barely out of his teens himself. I didn’t grow up wanting to be a square-jawed individualist or join a heroic quest; I grew up wanting to be Hari Seldon, using my understanding of the mathematics of human behaviour to save civilisation.

A pity he didn’t move on to this.

17 comments

I am skeptical of any claim by and to my tribe about the deep spiritual benefits of our distinctive forms of entertainment.

Agreed. Science fiction, like any kind of fictional evidence, biases your thinking in ways that are a priori unlikely to make it more accurate. For example, science fiction prominently features technological solutions to problems, so it probably biases your thinking towards technological solutions to problems and away from, say, changes in social policy.

I have tried to persuade my friends and acquaintances that governmental reboot, and friendly AI, are important problems.

Did you fail to convince them that the problems are important, or that pursuing the solutions is a good idea?

There are plenty of problems with our government (I have the American government in particular in mind, although to greater or lesser extents this is probably true of others) that could be solved with a governmental reboot, but I don't think agitating for governmental reboot is a good time investment because I'm convinced the chances of success are too low to justify prioritizing the attempt over other things you could be doing instead.

This should be posted as a comment on the original post.

Thanks for pointing that out, I didn't realize that the whole text was lifted directly from the link rather than being a synopsis plus elaboration.

[anonymous]

Did you fail to convince them that the problems are important, or that pursuing the solutions is a good idea?

You do know this post wasn't written by me? Just checking.

[This comment is no longer endorsed by its author]

In case someone else felt the need to read the rest of Autofac after finishing that excerpt, Scribd has a copy.

This guy sure sounds a lot like JamesG. I'm glad he's back, I loved his blog.

[anonymous]

Yes same here, but I'm unsure whether he minds being called that now or not. You'll find some old classics restored in the archives of his new blog, but he deleted a few other posts.

Vox Day suggests that we can find another Hari Seldon in Oswald Spengler:

http://voxday.blogspot.com/2013/01/spenglerian-decline.html

Which discusses:

http://nationalinterest.org/article/spenglers-ominous-prophecy-7878?page=show

I've suspected for a while now that the democratic, egalitarian and feminist (DEF) era we've lived in represents a kind of unsustainable drunkard's walk away from long-term and more stable social norms, one which has fooled our pattern-recognition heuristics into imposing a vector on this deviation and calling it "progress." It wouldn't surprise me if future societies descended from ours, for example the ones in which we might reanimate from cryostasis (assuming that could even happen), will look noticeably more aristocratic, hierarchical and patriarchal than our departure society. That might suck for the feminist women who have signed up for cryosuspension and survive the ambulance ride across time, but I think I could handle it. ; )

Interestingly enough, much American science fiction written during the mid 20th Century presents a similarly skeptical view of the current DEF ideology. How many science fiction stories postulate noble houses, monarchies and feudal-looking societies with advanced sciences and technologies, but set in "the future"? These writers might have followed the lead of their predecessor H.G. Wells, who advocates in his works that an aristocracy of the mind should run things.

Spengler notoriously predicted that a Caesar-like figure could, in his lifetime, reinvigorate the European civilization and end its weakness and complacency, at least for a while. I think we all know how well that turned out. (Although he criticized Hitler's lack of refinement, sophistication and aristocratism after voting for him.)

That might suck for the feminist women who have signed up for cryosuspension and survive the ambulance ride across time, but I think I could handle it. ; )

Do you think you'd be able to persuade many of today's people - not just women who don't like the idea of patriarchy, etc - to support building a society that they find cruel and morally abhorrent on the sole argument that it might be "sustainable" and colonize the stars, etc? [Edit: couldn't make heads or tails of my grammar upon revision, simplified this.]

Unless you personally want to participate in it and are confident you'd enjoy it... how is optimizing for a powerful and self-sustaining fascist/baby-eating/etc society (e.g. Ancient Rome as it would look to us, with genocides and crucifixions and slave fights) different from just building a computronium-paving AI and putting a memory of our culture and knowledge into it? It would also last for a long time and build big things. It might even be programmed to derive utility from making and comprehending our kind of art, texts, etc. Would it be a good deal to let it destructively assimilate/enslave/whatever "our" branch of humanity, just because we are too fragile and might not last long?

These writers might have followed the lead of their predecessor H.G. Wells, who advocates in his works that an aristocracy of the mind should run things.

You do understand that even in his day Wells became a byword for naive liberalism and belief in progressivist technocracy? His so-called "aristocracy of the mind", and the manner in which it was supposed to rule, was worlds apart from "future feudalism" (although I think both are tyrannical upon closer inspection). See Orwell.

As to the (sickening and perverse IMO) idea of a Hari Seldon - with all that it implies - here's a quote from Chesterton, the great crusader against "nihilism" and anti-humanism:

In the July 10, 1920 issue of The Illustrated London News, G. K. Chesterton took issue with both pessimists (such as Spengler) and their optimistic critics, arguing that neither took into consideration human choice: "The pessimists believe that the cosmos is a clock that is running down; the progressives believe it is a clock that they themselves are winding up. But I happen to believe that the world is what we choose to make it, and that we are what we choose to make ourselves; and that our renascence or our ruin will alike, ultimately and equally, testify with a trumpet to our liberty."
http://en.wikipedia.org/wiki/Oswald_Spengler#Aftermath

Re: Stability:

I don't get your position. Are you arguing that we should support a "moral" society even if it's unstable, and hope (pray?) it doesn't collapse into something much worse than the stable society we could create if we actively attempted to?

In the July 10, 1920 issue of The Illustrated London News, G. K. Chesterton took issue with both pessimists (such as Spengler) and their optimistic critics, arguing that neither took into consideration human choice: "The pessimists believe that the cosmos is a clock that is running down; the progressives believe it is a clock that they themselves are winding up. But I happen to believe that the world is what we choose to make it, and that we are what we choose to make ourselves; and that our renascence or our ruin will alike, ultimately and equally, testify with a trumpet to our liberty."

Weren't you arguing earlier that treating humans as capable of morally significant choices was a cardinal sin?

I've found an excellent negative-utilitarian critique of the "stability/X-risk-reduction" mindset.

Brian Tomasik argues that human extinction might be greatly preferable to creating a lasting supercivilization(s) more tolerant of suffering/torture and willing to induce it than ours. Thus, he argues that we should devote way less effort to averting X-risks in themselves, and way more to improving our current society + simultaneously increasing the odds of a future that's not abhorrent to our values.

Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

I'm inclined to agree.

I've found an excellent negative-utilitarian critique of the "stability/X-risk-reduction" mindset.

Existential risk is by far not the only risk of unstable societies. In fact devolving into a lasting supercivilization based on torture is closer to what I had in mind in the parent.

In fact devolving into a lasting supercivilization based on torture is closer to what I had in mind in the parent.

And note that Western liberalism/progressivism has pretty much created the first culture in history with strong norms against torture (extending to things like child discipline). It's inconsistent and hypocritical in applying those norms to itself, true (especially regarding imprisonment) - but I'd still consider it a kind of moral progress that a Western citizen would be more likely to lose sleep and make some noise about police brutality, waterboarding, etc than a Russian, Chinese or, say, Singaporean one. To say nothing of the subjects of past empires.

This recent aversion to torture seems to endure despite the high perceptions of crime, terrorist threats, etc (see the latest scandal over Zero Dark Thirty) - and wouldn't it be a very convenient thing for a "rational", non-squeamish social engineer to optimize away? And then where would the slippery slope end?

And note that Western liberalism/progressivism has pretty much created the first culture in history with strong norms against torture

I agree that Western civilization has many unique accomplishments; I would argue that it is therefore worth defending.

(extending to things like child discipline). It's inconsistent and hypocritical in applying those norms to itself, true (especially regarding imprisonment)

I'd argue that these are examples of taking the prohibition too far. In any case if Western civilization collapses because parents failed to adequately pass it on to their children, or because it is no longer capable of dealing with crime (for example), its replacement will likely have a lot fewer prohibitions on torture, and probably no free speech or free inquiry, nor anything resembling democracy.

and wouldn't it be a very convenient thing for a "rational", non-squeamish social engineer to optimize away? And then where would the slippery slope end?

This is actually my biggest issue with "progressives": you destroy traditional Schelling points on the grounds that they're arbitrary and "irrational", and then discover you have no way of taking non-extreme positions.

Even if the "moral" society is doomed to fail and bring a horrible disaster tomorrow, you can still get fuzzies and social status for promoting it today.

On the other hand, it is also important to ask how much certainty you have that it is doomed, and what your evidence is. But in the real world, promoting the "moral" society would include actively destroying all such evidence. Then again, reversed stupidity is not intelligence, and just because someone is trying to destroy the evidence, we should not automatically conclude that the evidence was overwhelming. Also... it's complicated.