Hi. I'm Charlie Stross and I wrote *Accelerando*. It was originally a series of novelettes (short stories) written from 1998 through 2003 and published in Asimov's SF Magazine (they racked up five Hugo nominations along the way) before being assembled into a fix-up novel. So it's earlier than you think ... and a lot less optimistic.
I think you missed a key point which is the narrator is Aineko and Aineko is not a cat. Aineko is an sAI that has figured out that humans are more easily interacted with/manipulated if you look like a toy or a pet than if you look like a Dalek. Aineko is not benevolent: and the human "survivors" in the final chapter aren't even themselves, they're simulations Aineko is running for its own reasons.
Human-style sentience is not really capable of surviving the future posited by Accelerando. The Wunch, and then later the Vile Offspring, inherently out-compete us and then render us mostly extinct. By chapter 8 most of humanity is dead: all that's left, exiled to the far corners of the solar system, are refugees (and a constant assault of AI slop-generated copies of historic personalities that constitute a DoS attack on humanity).
FWIW, my thinking today is that the whole singularitarian/TESCREAL Nexus identified by the DAIR Institute folks is basically an attempt by self-avowed rationalists to re-create the Christian design patterns underlying their early socialization without actually taking on the god/jesus bullshit that the likes of Peter Thiel are at home with. Seriously: it's all just a re-implementation of Christianity. (And this raised-Jewish guy wants none of it.)
Oh, huh. Hello! Didn't expect you to pop up here!
> I think you missed a key point which is the narrator is Aineko and Aineko is not a cat. Aineko is an sAI that has figured out that humans are more easily interacted with/manipulated if you look like a toy or a pet than if you look like a Dalek. Aineko is not benevolent: and the human "survivors" in the final chapter aren't even themselves, they're simulations Aineko is running for its own reasons.
Yeah I indeed did not get that, or maybe forgot, that is interesting. Will keep it in mind on my current re-read.
I see where you're coming from with the "recreate Christianity" thing. I'm curious what you think it'd look like, to be, like, actually trying to model what the future might look like and prepare for it in some kind of sensible way, that didn't feel that way?
Also just curious what your actual best guesses are for how things are likely to play out, now that AI is showing up more prominently in real life.
(I vaguely recall an interview where you didn't really like talking with rationalist-types about this sort of thing, so no worries if you don't want to get into any of that, but, well, you did show up so I figured I'd ask.)
>I see where you're coming from with the "recreate Christianity" thing. I'm curious what you think it'd look like, to be, like, actually trying to model what the future might look like and prepare for it in some kind of sensible way, that didn't feel that way?
Well, I'm something of a skeptic. (And what we're seeing today is definitely not actual intelligence-in-a-box, it's just a hype bubble being inflated by the usual silicon valley grifters to keep the dollars flowing in from the credulous: I'm pretty certain it's going to burst in the next few months.)
I'm currently working on a far-future/space opera novel which asks, basically, what if there is no singularity, no mind uploading, no simulation afterlife, and no real route to sAIs (at least, routes accessible to human-grade intelligences), but (a) we get a mechanism for FTL expansion (this is a necessary hand-wave, or I don't have a space opera, I have a bucket-of-crabs trapped on a single planet), and (b) TESCREAL turns out to be a design pattern for successful evangelical religions among technological civilizations? (There are holy wars. Boy are there holy wars!)
It's a little overdue—I began it in 2015, then real life got in the way, repeatedly—but hopefully it'll be published in the next 2-3 years (the wheels of trade fiction publishing grind slow).
Gotcha. Well we'll see how the next year or so goes.
(I do agree we're in a bubble, and also that there is something shallow about how the current AIs accomplish most of their problem-solving-tasks. But, seems to me like all the pieces are there for RL training on diverse problem solving to take it from here. And, like, the dotcom bubble crashed, but that doesn't mean the internet didn't end up dominating the market later anyway)
But, anyways, thanks for clarifying that stuff about Accelerando and welcome to LessWrong! (It sounds like you'd mostly find AI discourse on LW aggravating, but FYI you can click the gear icon at the top of the posts page, and set various tag-topics to "hidden" or "reduced" and anything else that's interesting to you)
You said that you believe that the AI bubble is going to burst in the next few months. Could you phrase that as a testable prediction? For example, you're 80% sure that in 6 months, the stock market price of Nvidia will be 80% of what it is today. This would help me understand your prediction better. Different people have different views on what "pretty certain" and "burst" means. I just want a clarification. I don't want to argue about the prediction.
It was amazing reading it back then—I still go back to the world to soak in the prose from time to time. I know your take on genAI (you made it abundantly clear when you answered a question of mine on reddit) but what matters to me is being able to thank someone who gave me a world to explore. I’m sure this comment does nothing for the discourse on Less Wrong but here it is...
The dragon I chase is finding the high of reading “Hardfought” for the first time. Accelerando was the same feeling on a much larger scale. Thank you for sharing your worlds.
You write "you can buy it here" but there is no link.
However, you can do better: the whole thing is available for free online. https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html on the author's website is a good place to start; it has links to various versions of the book and also a little bit of explanatory material.
Accelerando (by Stross) presents a model of the singularity which I think can be most fruitfully contrasted with that in A Fire Upon the Deep (by Vinge).
In Accelerando, you even have mind uploads before "the moment of maximum change". The big change comes when the inner planets begin to be dismantled into swarms of nanocomputers that orbit the sun in the shells of a Dyson structure (a Matrioshka brain). This provides a sudden leap in compute of several orders of magnitude, and implicitly it's the creation of these vast new virtual spaces, and the migration of 80% of posthuman civilization into those spaces, which finally allows superintelligence to come about.
On the other hand, A Fire Upon the Deep begins with a human expedition poking through an ancient alien data-library. The humans are well aware that there can be dangers in ancient archives, and they think they are just safely browsing, but in fact they unintentionally respawn a malign AI which, when it's ready, bootstraps its way to superintelligence and kills them all.
In Accelerando, it's abundance of physical compute which carries transhuman society as a whole beyond human comprehension. In A Fire Upon the Deep, it's some kind of algorithm which makes the difference - when that algorithm runs, unstoppable superintelligence is created.
One of the interesting details of A Fire Upon the Deep is that the humans dig up an unusually nasty superintelligence, by the standards of their universe. In-world, everyone knows about the "Powers", transcendent superintelligences with unknowable goals and incomprehensible power. But the Powers mostly leave dumber sapients alone (because the dumber sapients inhabit portions of the galaxy that would be fatal to the exotic physics used by the Powers), and most Powers "burn out" or disappear within a couple of decades (because they think vastly faster than entire human civilizations). Basically, the universe of the book is carefully set up so that superintelligences don't profit by trying to "eat the light cone." So they typically remain spatially localized and turn inward.
But the humans at the Straumli Realm High Lab dig up the remnants of a Power that enjoys messing with dumber sapients, and which is unusually capable of confronting its peers. This is something much worse than an x-risk. It's a superintelligence with an unusually perverse value function, one which ignores the usual incentives that affect the Powers. It appears to actively value doing horrifying things to sapients, even when doing so would involve paying a steep price in efficiency. Given the fact that the universe with the Powers isn't much like our own, I'm not sure what the moral is here. Except, perhaps, "Be thankful if the worst thing the incomprehensible alien superintelligence wants is your atoms." Or maybe, "You're not the upper limit of the intelligence scale, and you never had any control over the superintelligence."
Accelerando and A Fire Upon the Deep were interesting early attempts to imagine what an actual incomprehensible superintelligence might be like. The Vile Offspring and The Blight still give me the creeps decades later. The problem is that a lot of readers originally read these books (and similar warning tales), and thought, "Hey, I know! I should totally build the Torment Nexus!" Thanks partly to fiction like this, there were definitely pockets of near-messianic believers in the Singularity in the 00s. I have long suspected that this is one of the reasons why Stross gave up writing books like Accelerando.
But there has long been a strain of disquiet among people who took the longest views. C.S. Lewis warned about "The Conditioners", who had the power to build custom minds to spec. He figured this would be a bad thing:
Man’s conquest of Nature turns out, in the moment of its consummation, to be Nature’s conquest of Man. Every victory we seemed to win has led us, step by step, to this conclusion. All Nature’s apparent reverses have been but tactical withdrawals...
There are progressions in which the last step is sui generis—incommensurable with the others—and in which to go the whole way is to undo all the labour of your previous journey.
And then there's the infamous Yudkowsky-like warning from 1863, "Darwin Among the Machines", though this at least superficially reads like satire. I'm not entirely sure whether there is a real concern hidden under the satire ("ha ha only serious"). But certainly in those days, the "rise of the machines" rightfully seemed like a problem for far-distant generations, if at all. But as Lewis warns, sometimes all the steps right up until the final step are apparently beneficial, and the final step is fatal.
> In Accelerando, it's abundance of physical compute which carries transhuman society as a whole beyond human comprehension. In A Fire Upon the Deep, it's some kind of algorithm which makes the difference - when that algorithm runs, unstoppable superintelligence is created.
I think that's not quite right? The Blight can only run when the Straumli researchers break open the sealed archive because it's located in the almost-highest-physical-computation-possible zone, the Low Transcend. The Blight can reach into the lower zones, but only after running in the Low Transcend; i.e., there is an abundance of physical compute which carries the Blight beyond human society-level comprehension.
You have a point in that Vinge portrays outward migration into higher Zones, with all their unexplained advantages including computational advantage, as part of the process by which a civilization of natural intelligences evolves to the point of producing a superintelligence. (For those who haven't seen the book, the Zones are concentric regions of the galaxy, in which the further out you go, the more advanced the technology that is possible, including superintelligence and faster-than-light travel.)
(Accelerando spoilers.)
> (It's been awhile since I read it, I vaguely recall some in-universe reasons that it worked out with less grabbiness, but they were not reasons I expect to generalize to our world).
IIRC, it was because moving away from the high-bandwidth hive of activity in the Matrioshka brain was never profitable for any individual agent participating in Economy 2.0, since it was running at incredibly high speeds and changing all the time. Exiting it even for a brief period was ~suicidal, and expending resources to send some probe to eat a distant star system was never positive-utility, because by the time the probe reached that star system, the entire economic system will have changed a billion times over. It's exemplified by the main characters' own interstellar expedition.
Basically, the Crab Bucket Ascendant.
Closest quote I've found:
Conscious civilizations sooner or later convert all their available mass into computronium, powered by solar output. They don't go interstellar because they want to stay near the core where the bandwidth is high and latency is low, and sooner or later, competition for resources hatches a new level of metacompetition that obsoletes them.
I also don't buy it, by the way. Even if all advanced agents in the universe are necessarily myopic, no exceptions, the value from claiming distant resources could be propagated to the present by, like, some kind of chain of financial derivatives.[1]
I.e., a contract C1 whose value depends on the contract C2 that will exist slightly in the future relative to C1, whose value depends on the contract C3 that will exist slightly in the future relative to C2, whose value depends on ... the contract Cn, whose value depends on having claim to the distant resource. The "slightly in the future" parameter can then be set as low as necessary to fit within your myopic agent's "value horizon".
(Hm, this is another reason "just make the AI myopic" doesn't help with the alignment problem, isn't it.)
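The chain-of-contracts argument above can be sketched as a toy model. Everything here (the numbers, the hard "value horizon" cutoff, the contract spacing) is an illustrative assumption of mine, not something from the post:

```python
# Toy model of the derivatives-chain argument: a payoff far outside a
# myopic agent's "value horizon" gets carried to the present by a chain
# of short-dated contracts, each maturing within the horizon of the
# previous one. All numbers are illustrative assumptions.

RESOURCE_VALUE = 1000.0   # payoff of the claim on the distant star system
T = 100                   # time steps until that claim actually pays off
HORIZON = 5               # a myopic agent prices only payoffs <= 5 steps out

def myopic_price(payoff, steps_until_payoff, horizon=HORIZON):
    """A fully myopic agent assigns zero value to anything past its horizon."""
    return payoff if steps_until_payoff <= horizon else 0.0

# A direct claim on the distant resource is worthless to a myopic agent:
assert myopic_price(RESOURCE_VALUE, T) == 0.0

# Chain of contracts: contract k matures at time k*HORIZON and pays the
# then-current value of contract k+1; the final contract pays the resource.
maturities = list(range(HORIZON, T + 1, HORIZON))   # 5, 10, ..., 100
value = RESOURCE_VALUE
for k in range(len(maturities) - 1, 0, -1):
    gap = maturities[k] - maturities[k - 1]         # always == HORIZON
    value = myopic_price(value, gap)                # within horizon, so it carries

# The first contract matures only HORIZON steps from now, so even the
# myopic agent at t=0 prices it at the full resource value.
present_value = myopic_price(value, maturities[0])
print(present_value)  # 1000.0
```

(With time-discounting the value would shrink along the chain rather than carry through undiminished, but the qualitative point survives: the distant claim ends up with nonzero price to agents who can't see past their horizon.)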
(more spoilers)
I remembered that earlier, and during my re-skim when I wrote this post, I got confused by the combo of:
a) he goes out of his way to specify that humans only get to live around brown dwarfs (if the "there's no value in leaving the economy" reasoning holds, this implies... literally every regular star has aliens?)
b) wormholes exist
c) I think there was a line somewhere about older civilizations that spread farther
And also it just seemed so god damn unreasonable for no one to send out a replicator probe.
And, like, are there literally no pioneers like the Pilgrims/Puritans who are like "well this solar system's resources are all crowded. I would rather be king of an empire to myself + my friends than live in the slums of this Dyson brain?"
Some may wonder at the mention of “empire time” in the second excerpt from chapter 5. It refers to a kind of artificially constructed simultaneity available to civilizations which have mastered both traversable wormholes and near-light-speed travel. It doesn’t really do much for a civilization bounded within the orbit of Jupiter, which is only about a light-hour across. I think Stross included it as a flavor phrase. It’s marvelously evocative even if you don’t know what it means.
Back in the early ‘90s, when all this singularity stuff was much more theoretical, I remember empire time making a big impression on me. It was neat how we could discern some of the contours of future possible civilizations before we got there.
You can read more about it here: http://www.aleph.se/Trans/Tech/Space-Time/wormholes.html#6
> But, I do think it'd be good for someone to write a really fleshed out takeoff story and/or forecast that runs with those assumptions.
You might be interested in this episode of Epoch After Hours. I think this is pretty close to what you want.
Hmm, I think this still basically stops before it gets to the part I was aiming for in this post (i.e. 30 years after the AI inflection point when things have been REAL crazy for a lot of subjective years)
Oh man I had forgotten two important additional exposition-sections – chapter 5 (the middle chapter of the book, in middle of the act called "Point of Inflection") has three expositions that summarize the rising, peak and falling action of the acceleration.
These were actually some of the most important moments I was looking for. I added them to the post above but listing here for people who missed it the first time:
The second (middle) news bulletin is:
Outside the light cone of the Field Circus, on the other side of the spacelike separation between Amber’s little kingdom in motion and the depths of empire time that grip the solar system’s entangled quantum networks, a singular new reality is taking shape.
Welcome to the moment of maximum change.
About ten billion humans are alive in the solar system, each mind surrounded by an exocortex of distributed agents, threads of personality spun right out of their heads to run on the clouds of utility fog—infinitely flexible computing resources as thin as aerogel—in which they live. The foggy depths are alive with high-bandwidth sparkles; most of Earth’s biosphere has been wrapped in cotton wool and preserved for future examination. For every living human, a thousand million software agents carry information into the farthest corners of the consciousness address space.
The sun, for so long an unremarkable mildly variable G2 dwarf, has vanished within a gray cloud that englobes it except for a narrow belt around the plane of the ecliptic. Sunlight falls, unchanged, on the inner planets: except for Mercury, which is no longer present, having been dismantled completely and turned into solar-powered high-temperature nanocomputers. A much fiercer light falls on Venus, now surrounded by glittering ferns of carbon crystals that pump angular momentum into the barely spinning planet via huge superconducting loops wound around its equator. This planet, too, is due to be dismantled. Jupiter, Neptune, Uranus—all sprout rings as impressive as Saturn’s. But the task of cannibalizing the gas giants will take many times longer than the small rocky bodies of the inner system.
The ten billion inhabitants of this radically changed star system remember being human; almost half of them predate the millennium. Some of them still are human, untouched by the drive of metaevolution that has replaced blind Darwinian change with a goal-directed teleological progress. They cower in gated communities and hill forts, mumbling prayers and cursing the ungodly meddlers with the natural order of things. But eight out of every ten living humans are included in the phase-change. It’s the most inclusive revolution in the human condition since the discovery of speech.
A million outbreaks of gray goo—runaway nanoreplicator excursions—threaten to raise the temperature of the biosphere dramatically. They’re all contained by the planetary-scale immune system fashioned from what was once the World Health Organization. Weirder catastrophes threaten the boson factories in the Oort cloud. Antimatter factories hover over the solar poles. Sol system shows all the symptoms of a runaway intelligence excursion, exuberant blemishes as normal for a technological civilization as skin problems on a human adolescent.
The economic map of the planet has changed beyond recognition. Both capitalism and communism, bickering ideological children of a protoindustrial outlook, are as obsolete as the divine right of kings. Companies are alive, and dead people may live again, too. Globalism and tribalism have run to completion, diverging respectively into homogeneous interoperability and the Schwarzschild radius of insularity. Beings that remember being human plan the deconstruction of Jupiter, the creation of a great simulation space that will expand the habitat available within the solar system. By converting all the nonstellar mass of the solar system into processors, they can accommodate as many human-equivalent minds as a civilization with a planet hosting ten billion humans in orbit around every star in the galaxy.
A more mature version of Amber lives down in the surging chaos of near-Jupiter space; there’s an instance of Pierre, too, although he has relocated light-hours away, near Neptune. Whether she still sometimes thinks of her relativistic twin, nobody can tell. In a way, it doesn’t matter, because by the time the Field Circus returns to Jupiter orbit, as much subjective time will have elapsed for the fast-thinkers back home as will flash by in the real universe between this moment and the end of the era of star formation, many billions of years hence.
And finally:
Welcome to the downslope on the far side of the curve of accelerating progress.
Back in the solar system, Earth orbits through a dusty tunnel in space. Sunlight still reaches the birth world, but much of the rest of the star’s output has been trapped by the growing concentric shells of computronium built from the wreckage of the innermost planets.
Two billion or so mostly unmodified humans scramble in the wreckage of the phase transition, not understanding why the vasty superculture they so resented has fallen quiet. Little information leaks through their fundamentalist firewalls, but what there is shows a disquieting picture of a society where there are no bodies anymore. Utility foglets blown on the wind form aerogel towers larger than cyclones, removing the last traces of physical human civilization from most of Europe and the North American coastlines. Enclaves huddle behind their walls and wonder at the monsters and portents roaming the desert of postindustrial civilization, mistaking acceleration for collapse.
The hazy shells of computronium that ring the sun—concentric clouds of nanocomputers the size of rice grains, powered by sunlight, orbiting in shells like the packed layers of a Matrioshka doll—are still immature, holding barely a thousandth of the physical planetary mass of the system, but they already support a classical computational density of 10^42 MIPS; enough to support a billion civilizations as complex as the one that existed immediately before the great disassembly. The conversion hasn’t yet reached the gas giants, and some scant outer-system enclaves remain independent—Amber’s Ring Imperium still exists as a separate entity, and will do so for some years to come—but the inner solar system planets, with the exception of Earth, have been colonized more thoroughly than any dusty NASA proposal from the dawn of the space age could have envisaged.
From outside the Accelerated civilization, it isn’t really possible to know what’s going on inside. The problem is bandwidth: While it’s possible to send data in and get data out, the sheer amount of computation going on in the virtual spaces of the Acceleration dwarfs any external observer. Inside that swarm, minds a trillion or more times as complex as humanity think thoughts as far beyond human imagination as a microprocessor is beyond a nematode worm. A million random human civilizations flourish in worldscapes tucked in the corner of this world-mind. Death is abolished, life is triumphant. A thousand ideologies flower, human nature adapted where necessary to make this possible. Ecologies of thought are forming in a Cambrian explosion of ideas, for the solar system is finally rising to consciousness, and mind is no longer restricted to the mere kilotons of gray fatty meat harbored in fragile human skulls.
Somewhere in the Acceleration, colorless green ideas adrift in furious sleep remember a tiny starship launched years ago, and pay attention. Soon, they realize, the starship will be in position to act as their proxy in an ages-long conversation. Negotiations for access to Amber’s extrasolar asset commence; the Ring Imperium prospers, at least for a while. But first, the operating software on the human side of the network link will require an upgrade.
When I hear a lot of people talk about Slow Takeoff, many of them seem like they are mostly imagining the early part of that takeoff – the part that feels human comprehensible. They're still not imagining superintelligence in the limit.
There are some genres of Slow Takeoff that culminate in somebody "leveraging controlled AI to help fully solve the alignment problem, eventually get fully aligned superintelligence, and then end the acute risk period."
But the sort of person I'm thinking of, for this blogpost, usually doesn't seem to have a concrete visualization of something that could plausibly end the period where anyone could choose to deploy uncontrolled superintelligence. They tend to not like Coherent Extrapolated Volition or similar things.
They seem to be imagining a multipolar d/acc world, where defensive technologies and the balance of power are such that you keep getting something like a regular economy running. And even if shit gets quite weird, in some sense it's still the same sort of things happening as today.
I think this world is unlikely. But, I do think it'd be good for someone to write a really fleshed out takeoff story and/or forecast that runs with those assumptions.
Unfortunately, slow takeoff stories take longer, so there are a lot more moving parts: you have to invent future politics and economics and how they play out together.
But, fortunately, someone... kinda already did this?
It's a novel called Accelerando. It was written between 2001 and 2005. And the broad strokes of it still feel kinda reasonable, if I'm starting with multipolar d/acc-ish optimistic assumptions.
A thing that is nice about Accelerando is that it wasn't written by someone particularly trying to achieve a political outcome, which reduces an important source of potential bias. On the flipside, it was written by someone trying to tell a good human-comprehensible story, so, it has that bias instead. (It contains some random elements that don't automatically follow from what we currently know to be true).
It has lots of details that are too specific for a random sci-fi author in 2001 to have gotten right. But, I think reading through it is helpful for getting some intuitions about what an AI-accelerated world might look and feel like.
It's probably worth reading the book if you haven't (you can buy it here). But, it contains some vignettes in each chapter that make for a decent summary of the broad strokes. I've compiled some excerpts here that I think make for an okay standalone experience, and I've tried to strip out most bits that spoil the human-centric plot.
(It was hard to strip out all spoilers, but I think I've left enough gaps that you'll still have a good time reading the novel afterwards.)
The story is more optimistic than seems realistic to me. But it's about as optimistic a world as feels plausibly coherent to me for a centrally multipolar, d/acc-ish world that doesn't route through "someone actually builds very powerful friendly AI that is able to set very strong, permanent safeguards in place."
Part I: "Slow Takeoff"
In Accelerando, a decade passes between each chapter. It starts in approximately 2020.
(The forecasted timing is somewhat off, but I'd bet not by much. Most of the tech that exists in chapter 1 could probably be built today, but just barely, and it hasn't reached the level of saturation implied in the novel.)
Chapter 1
The first chapter's vignette is the most character focused (later ones read more like a news bulletin). But, I think it's kind of useful to have the anchor of a specific guy who lives on the cutting edge of the future.
I think this is supposed to take place in the 2010s, which is... early. I think most of the tech here just barely exists today, without quite as much market saturation as the book implies, but would probably reach that level in 1-8 years.
Remember this is written in 2001.
Chapter 2
Chapter 3
Part II: "Point of Inflection"
Chapter 4
In this chapter, Amber ends up initiating an automated factory-expansion process on the moons of Jupiter, which ends up making her a powerful cyborg (with the crust of multiple moons' worth of computronium augmenting her).
Chapter 5
This chapter (the middle of the "point of inflection" act) has three exposition sections.
The first:
The second:
And finally:
Chapter 6
Part III: "Singularity"
Chapter 7
Chapter 8
Before it gets to the usual News Bulletin, Chapter 8 introduces this FAQ:
Followed later by:
Even later in chapter 8:
Chapter 9
Postscript
I don't really buy, given the scenario, that humans-qua-humans actually survive as much as they are depicted here. The Accelerando world doesn't seem to have any "grabby" posthumans or aliens, which seems unrealistic to me, because it only takes one to render all available matter, even weak brown dwarf stars, under assault by vastly powerful forces that traditional humans couldn't defend against.
(It's been awhile since I read it, I vaguely recall some in-universe reasons that it worked out with less grabbiness, but they were not reasons I expect to generalize to our world).
Accelerando is deliberately unclear about what's going on inside the Vile Offspring posthumans. It's not known whether they are conscious or otherwise have properties that I'd consider morally valuable.
The story doesn't really grapple with Hansonian arguments about what evolutionary forces start applying once all matter has been claimed and we leave the dreamtime. (That is: there are no longer growing piles of resources that allow populations to grow while still having a high-surplus standard of living. And there is no mechanism enforcing limits on reproduction. This implies a reversion to subsistence living, albeit in a very different form than our primitive ancestors'.)
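The Malthusian reasoning can be made concrete with a toy calculation. All the numbers here are made-up assumptions; the point is only that once the resource pile stops growing while population doesn't, even an enormous windfall converges to subsistence within a modest number of doublings:

```python
# Toy Malthusian model: a fixed resource pool plus unconstrained
# reproduction drives per-capita income down to subsistence.
# All numbers are illustrative assumptions.

TOTAL_RESOURCES = 1e12   # fixed pool, once all matter has been claimed
SUBSISTENCE = 1.0        # per-capita income needed to survive and replicate

population = 1e3         # start tiny and enormously rich (the "dreamtime")
for generation in range(100):
    income = TOTAL_RESOURCES / population
    if income <= SUBSISTENCE:
        break            # surplus exhausted; growth stops here
    population *= 2      # surplus funds reproduction

# After ~30 doublings, a nine-orders-of-magnitude surplus is gone:
print(TOTAL_RESOURCES / population)  # ~0.93, i.e. roughly subsistence
```

A starting surplus a thousand times larger only buys about ten more doublings before hitting the same floor, which is the core of the argument: the dreamtime's end is insensitive to how big the initial pile was.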