The Autofac Era

by Gordon Seidoh Worley
24th Sep 2025

Comments (18)
cousin_it (1d)

humans are the only source of wanting stuff

The economy can run just fine on instrumental desires of non-human entities (such as wanting to expand control) as the only source of wanting stuff. Given the choice between sinking the extra bit of resources into an arms race to expand control, and spending the same bit on feeding the comparatively unproductive human, the entity will choose the former every time.

The only way most people can keep getting fed despite being comparatively unproductive is by being a threat. But AI weaponry will greatly reduce the threat potential of big masses of people. The new balance of power will be more similar to what we had before firearms, when the powerful were free to treat most people really badly. And even worse, because this time around they won't even need our labor.

Gordon Seidoh Worley (1d)

Yes, such an outcome is possible. I think it's unlikely, though, conditional on winding up in an Autofac-like world, because it requires a Terminator-style sudden takeover. If humans can see it coming, or if it happens gradually, they can react. And since we're supposing here that there's no AGI yet, there's (1) very little reason for the humans directing economic activity to want this, because it would mean their own elimination from the economy (even if they do something obvious like fully automate the money-making machine that is their business, including the consumption side, they themselves will still want to consume, so there will remain demand for humans to consume within the economy, even if it becomes a small part of it), and (2) very little chance of AI coordinating a takeover like this on its own because, again, AI doesn't have its own motivations in this world.

The new balance of power will be more similar to what we had before firearms, when the powerful were free to treat most people really badly.

I expect the mediating force here to be the need for that mass of humans to be consumers driving the economy. Without them, growth would quickly stagnate or have to rely on bubbles, which aren't sustainable.

cousin_it (1d)

I don't think it requires a Terminator-style takeover. The obvious path is for AI to ally itself with money and power, leading to a world dominated by clumps of money+power+AI "goop", maybe with a handful of people on top leading very nice lives. And it wouldn't need the masses to provide market demand: anything it could get from the masses in exchange, it could instead produce on its own at lower resource cost.

Gordon Seidoh Worley (20h)

And it wouldn't need the masses to provide market demand: anything it could get from the masses in exchange, it could instead produce on its own at lower resource cost.

I think this still leaves us with the problem of where market demand to create growth comes from. The small handful of people at the top will quickly reach maximally comfortable lives and then not need anything else. But humans usually want more, even if that more is just a number going up, so we're back to either quick stagnation or a bubble.

One good that might be offered is dominance. I didn't think of this before, but we could imagine a world where the "underclass" receive UBI in exchange for fealty, and the folks at the top compete to see who can have the most vassals, with intense competition to attract and keep vassals driving economic growth.

Rana Dexsin (13h)

One good that might be offered is dominance.

“As material needs are increasingly met, more surplus goes into positional goods” seems like a common pattern indeed. Note “positional” and not purely “luxury”. I consider both prestige-status and dominance-status to be associated with position here. Even the former, while it could lead to a competition for loyalty that's more like a gift economy and cooperates with the underclass, could also lead to elites trying to outcompete each other in pure consumption such that “max comfort” stops being a limit for how much they want compared to the underclass. Indeed I vaguely recall hearing that such a dynamic, where status is associated with how much you can visibly spend, already holds among some elite classes in the current day.

My thoughts are shaped by the cultural waves of the last few decades in the USA, so they lean toward imagining the moral fig leaf as something like “make sure the people benefiting from our stuff aren't enemies/saboteurs/Problematic” and a gradual expansion of what counts as the excluded people that involves an increasing amount of political and psychological boxing-in of the remainder. That all flows nicely with the “find ways to get potential rebels to turn each other in” sort of approach too. Of course that's one of many ways that a dominance-motivated persistent class asymmetry could play out.

If you're familiar with the story “The Metamorphosis of Prime Intellect”, a shorter story that the author wrote in the same universe, “A Casino Odyssey in Cyberspace”, depicts characters with some related tendencies playing out against a backdrop of wild surplus in a way you may find stimulating to the imagination. (Edited to add: both stories are also heavy on sex and violence and such, so be cautious if you find such things disturbing.) Also, Kurt Vonnegut's novel Player Piano about a world without need of much human labor doesn't show the harsh version I have in mind, but the way ‘sabotage’ is treated evokes a subtler form of repression (maybe not worth reading all of just for this though).

cousin_it (19h)

I already answered it in the first comment though. These big clumps of money+power+AI will have convergent instrumental goals in Omohundro's sense. They'll want expansion, control, arms races. That's quite enough motivation for growth.

About the idea of the underclass receiving UBI and having a right to choose their masters - I think this was also covered in the first comment. There will be no UBI and no rights, because the underclass will have no threat potential. Most likely the underclass will be just discarded. Or if some masters want dominance, they'll get it by force; it's been like that for most of history.

Gordon Seidoh Worley (18h)

I think it's likely that people will be enough of a threat to prevent the kind of outcome you're proposing, which is why I think this model is interesting. You seem to disagree. Would you agree that that's the crux of why we disagree?

(I don't disagree that what you say is possible. I just think it's not very likely to happen and hence not the "happy" path of my model.)

cousin_it (18h)

It's the crux, yeah.

I don't know how much time you spend thinking about the distribution of power, roughly speaking between the masses and the money+power clumps. For me in the past few years it's been a lot. You could call it becoming "woke" in the original sense of the word; this awareness seems to be more a thing on the left. And the more I look, the more I see the balance tilting away from the masses. AI weapons would be the final nail, but maybe they aren't even necessary; maybe divide-and-conquer manipulation only slightly stronger than today's will already be enough to neutralize the threat of the masses completely.

TAG (20h)

Why stop with fealty? Billionaire-barons could demand votes for income.

Mo Putera (2d)

I'm reminded of Scott's parable below from his 2016 book review of Hanson's Age of Em, which replaces the business executives, the investors & board members, and even the consumers in your sources of economic motivation / ownership with economic efficiency-improving algorithms and robots and such. I guess I'm wondering why you think your Autofac scenario is more plausible than Scott's dystopian rendering of Land's vision.

There are a lot of similarities between Hanson’s futurology and (my possibly erroneous interpretation of) the futurology of Nick Land. I see Land as saying, like Hanson, that the future will be one of quickly accelerating economic activity that comes to dominate a bigger and bigger portion of our descendents’ lives. But whereas Hanson’s framing focuses on the participants in such economic activity, playing up their resemblances with modern humans, Land takes a bigger picture. He talks about the economy itself acquiring a sort of self-awareness or agency, so that the destiny of civilization is consumed by the imperative of economic growth.

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.

Gordon Seidoh Worley (2d)

I think there's simply not a good reason to fully automate consumption. It's one of those ideas that sounds intuitive in abstract, but in practice it means taking a step of automating away the very reason anything was being done at all, and historically when a part of the economy becomes completely self-serving like this, it collapses and we call it a bubble.

There is some open question about what happens if literally the entire economy becomes a bubble. Could it self-sustain? Maybe yes, though I'm not sure how we get there without the incremental bubbles collapsing before they combine into a single big bubble that encompasses all economic activity. If that happened, I'd consider it a paperclip maximizer scenario. If not, then I think we get an Autofac-like world.

FlorianH (2d)

Agree with a lot.

One point I consider a classic mistake:

Despite a limited number of jobs, humans remain critical to the economy as consumers. If we don’t keep a large consumer base, the entire economic system collapses and we no longer have the money to fund AI.

Extremely common fallacy, nevertheless rather easily seen to be wrong! Whether the rich keep their money or the poor get part of it, anyone who wants to earn money or profits wants it only because they can use it one way or another. Therefore, the real economy does not collapse just because we don't redistribute. It just produces different things: the airplanes, palaces, rockets, and skiing domes in the desert that the rich prefer over the cars, houses, and booze the poor would otherwise demand [adjust all examples to your liking for what rich and poor would have a taste for in the AI future]. And even if you'd rebut that the rich 'don't consume, they just save', they'll greedily save by investing, which also recycles their revenues into the economy.[1]

Trivially, then, the rich do have the money to fund the AI, even if we don't redistribute.

 

[Edit: the remainder can be ignored; it just describes a fear of mine, which is just that: a fear (with many reasons why its scenario may eventually not play out that way). It is related to the above but not meant as a substantive claim, and it does not affect my actual claim above about the OP making a logical econ mistake.

Because ultimately, rather obviously, no one needs us 'to consume, as otherwise the economy collapses', I fear something in the direction of a triple whammy: (i) half-impoverished, gullible people, (ii) flooded with AI-perfected, controlled social media and fake stories about why some fake thing is the reason for their increasing misery, and (iii) international competition in a low-marginal-cost world with mobile productive resources (meaning strong redistribution on a national level is actually not trivial). These conspire to undermine the natural solution of a generous UBI. So I fear a small ruling elite undermining the prospects for large material gains for the masses, though who knows, maybe we do keep enough mental and physical power to actually demand our roughly fair share as a population. What makes me pessimistic is that already today we see a relatively small Western elite profiting from worldwide resources while a large share of people do not benefit commensurately, and clear autocratic, populist tendencies already being supported by social and general media even in advanced countries.]

  1. ^

    To preempt a potential confusion: I do not say that printing money and handing it to the poor would not boost an economy. That can work, at least in the short run, as it's expansionary fiscal/monetary policy. But this is a very different mechanism from directly transferring from rich to poor.

Gordon Seidoh Worley (2d)

One of the assumptions I'm making is that if AI dispossesses billions of people, that's billions of people who can rebel by attacking automation infrastructure. There might be a way to pull off dispossession gently, so that by the time anyone thinks to rebel it's already too late, but I expect less well-coordinated action, and instead sudden shocks that will have to be responded to. The only way to prevent violence that threatens the wealth of capital owners will be to find a way to placate the mass of would-be rebels (since doing something like killing everyone who doesn't have a job or own capital is, and will remain, morally reprehensible, and so not a real option), and I expect UBI to be the solution.

Gordon Seidoh Worley (1d)

@FlorianH I see you reacted that you think I missed your point, but I'm not so sure I did. You seem to be arguing that an economy can still function even if some actors leave it, so long as some actors remain, which is of course true. But my broader point is about sustaining the level of consumption necessary for growth: with very few consumers, a fully automated economy could quickly reach the limits of its capacity to produce (and of the wealth of the remaining consumers). I expect a large base of consumers to be needed for there to be sufficient growth to justify the high costs of accelerating automation.

kapedalex (2d)

I generally agree that the Autofac Era seems like a logical next step. However, it seems it would dramatically accelerate capital consolidation, leading to an unprecedented gradual disempowerment.

In such an economy, a decisive resource advantage becomes equivalent to long-term global dominance. If this consolidation completes before ASI arrives, the first ASI will likely be built by an actor facing zero constraints on its deployment, which is a direct path to x-risk. This makes the prospect of a stable, "happy" pre-ASI autofac period seem highly doubtful.

(Though it's possible the dominant actor would choose to halt the race, that seems far from a given, even with total dominance.)

Gordon Seidoh Worley (2d)

In such an economy, a decisive resource advantage becomes equivalent to long-term global dominance. If this consolidation completes before ASI arrives, the first ASI will likely be built by an actor facing zero constraints on its deployment, which is a direct path to x-risk. This makes the prospect of a stable, "happy" pre-ASI autofac period seem highly doubtful.

It's unclear to me that such a decisive resource advantage would be possible to hold unless we start from an already decisively unipolar world. So long as there are powers who can credibly threaten total dominance, there will be strategic reasons for state and business actors to prevent complete consolidation, and if desperate enough would use destructive force (or the threat of such force) to ensure one actor does not become dominant.

kapedalex (1d)

As I have mentioned, a resource advantage is critical in such an economy, which is driven by widespread automation.

When agents unite into groups, what forms is an "equilibrium" that is actually unstable, and its imperfections will only be magnified by Autofac, given that humans are no longer a significant resource. In other words, one group will ultimately become dominant in the long run. And since any group starts with an initial imbalance, the larger agent within it will eventually gain a decisive advantage.

Of course, one can imagine a situation where groups find a balance and constantly counterbalance each other, but from historical examples we see that this is unlikely. Multipolar systems tend toward simplification, and pacts do not last long. It does not matter how many such groups there are or how many agents are inside them: an imbalance will always exist, and automation will only amplify it in the long term.

Gordon Seidoh Worley (1d)

I think I'm confused what your position is then. It's true that economic competition is generally unstable in that the specific balance of power between competitors is unstable, but competition itself is often quite stable and doesn't collapse into either chaos or monopoly unless there's something weird about the competitive environment that allows this to happen (e.g. no rule of law, protective regulations, etc.).

I also expect the whole Autofac Era to feel quite unstable to people because things will be changing quickly. And I also don't expect it to last too long, because I think it's a short period of a few years between its start and the development of AGI (or, if for some reason AGI is impossible, then Hansonian EMs).

The Autofac Era

If we don’t build AGI in the next 5 years, I think it’s likely that we end up in what I’m calling the Autofac Era, where much of the economy is automated but humans continue to drive all economic activity as consumers. In this post I’ll explain why I think it might happen, what I expect it to look like, how you can prepare for it, and how it will end.

NB: This is an informal model. I haven’t done extensive research. It’s primarily based on my 25+ years of experience thinking about AI and my read of where we are today and what I think is likely to happen in the next few years. I strongly invite you to poke holes in it, or offer hard evidence in its favor. If you’d prefer to read a better researched model of how AI changes the world, I’d recommend AI 2027.

What is Autofac?

The name Autofac is drawn from a 1955 short story by science fiction author Philip K. Dick. In it, humans live in a world overrun by self-replicating factories that deliver an endless supply of consumer goods. The exact details of the story aren’t important for the model, though, other than inspiring the name, which I chose because the model assumes we need to keep the consumer economy going even though most economic goods and services are provided by AI.

My background assumption is that the economy will remain based on human consumption of goods and services. At first this will primarily be because it’s how the economy already works, but later it’ll be because, without AGI, humans are the only source of wanting stuff. Tool AI would be just as happy to sit turned off, consuming no power and doing nothing, so an AI economy without AGI only makes sense, best I can tell, if there are humans who want stuff to consume.

The development of AGI would obviously break this assumption, as would tool AI that autonomously tries to continue to deliver the same outcomes it was created for even if there were no humans around to actually consume them (a paperclip maximizer scenario, which is surprisingly similar to the Autofacs in PKD’s story).

How does the Autofac Era happen?

To get to the Autofac Era, it has to be that we don’t develop AGI in the next few years. I’m saying 5 years to put a hard number on it, but it could be more or less depending on how various things play out.

I personally think an Autofac scenario is likely because we won’t be able to make the conceptual breakthroughs required to build AGI within the next 5 years, specifically because we won’t be able to figure out how to build what Steve Byrnes has called the steering subsystem, even with help from LLM research assistants. This will leave us with tool-like AI that, even if it’s narrowly superintelligent, is not AGI because it lacks an internal source of motivation.

I put about 70% odds on us failing to solve steering in the next 5 years and thus being unable to build AGI. That’s why I think it’s interesting to think about an Autofac world. If you agree, great, let’s get to exploring what happens in the likely scenario that AGI takes 5+ years to arrive. If you disagree, then think of this model as exploring what you believe to be a low-probability hypothetical.

What will the Autofac Era be like?

Here’s roughly what I expect to happen:

  • AI becomes capable of automating almost all mental labor within 3 years. Humans continue to be in the loop, but only to steer the AI towards useful goals. Many successful companies are able to run with just 1 to 3 humans doing the steering.

    • This is based on extrapolation of current trends. It might happen slightly sooner or slightly later; 3 years is a median guess.

  • Shortly after, say in 1 to 2 years, AI becomes capable of automating almost all physical labor, with again the need for supervisors to steer the AI towards useful goals.

    • The delay is because of ramp times to manufacture robots and organized resistance by human laborers. I could also be wrong that there’s a delay and this could happen concurrently with the automation of mental labor, shortening the timeline.

  • The full automation of most economic activity allows rapid economic growth, with total economic output doubling times likely in the range of 1 to 3 years (for a rough sense of what such doubling times imply, see the sketch after this list). Because this dramatically outstrips the rate at which humans can reproduce and there's no AGI to eat up the economic excess, there's tremendous surplus that creates extreme wealth for humanity.

  • Because we don’t have AGI, our powerful AI tools (which we may or may not consider to be superintelligent) remain tools, and thus humans retain at least nominal control because humans are the only source of motivation to do work.

  • Therefore, power structures have to remain mostly in place and controlled by humans, though potentially transformed by the need to respond to AI tools that eliminate certain kinds of friction that keep existing systems working today. There will still be states and laws and civilizational coordination norms and a legal monopoly on violence to keep the whole system functioning.

  • At this point, there are only a few jobs left for humans that fit roughly within three categories:

    • Source of motivation (+ ownership)

      • Business executive (which consists of steering AI agents towards goals, and may require some degree of technical expertise to get good results)

      • Investors & board members (though expect lots of AI automation of investment activities)

      • Journalists (with lots of AI automation of research, but only people know what we care about knowing)

      • Religious/spiritual leadership (though expect some AI cults)

      • Landlords (though expect robots to do maintenance, negotiate leases, etc.)

    • Human is the product/service

      • Caring professions (in cases where human connection is what’s valuable, like therapists, some nurses, and some doctors)

      • High-end chefs (luxury good)

      • Teachers and nannies (luxury good or people completely reject AI involvement)

      • Arts (rejection of AI slop, though expect many people with little taste to favor slop as they already do today)

      • Sex work (luxury good that competes with AI)

    • Monopoly on violence/coercion

      • Lawyers and judges and police (though expect a lot of automation of law research and basic police work)

      • Political leadership (humans remain in at least nominal control of state power, though they rely heavily on AI advisors to make decisions)

      • Military leadership (robots can do the fighting, but humans ultimately control when to pull the trigger in a metaphorical sense, as many war-fighting robots will have autonomous authority to kill, barring a new Geneva Convention to coordinate on war norms)

      • Protected professions (any group who manages to capture regulators to protect themselves from automation)

  • Despite a limited number of jobs, humans remain critical to the economy as consumers. If we don’t keep a large consumer base, the entire economic system collapses and we no longer have the money to fund AI.

  • To solve this problem, and to prevent revolution, states set up a self-sustaining consumer economy using UBI built off public investment in private companies. The most common job thus becomes “investor”, but all such investment is done indirectly, with investments and payments managed by the state and people receiving fixed UBI payments monthly or weekly.

    • This creates a large “underclass” of people whose only source of income is UBI. Some people use their UBI wisely, invest it privately, and maintain something like a middle class lifestyle (relatively speaking; they are in fact quite wealthy in absolute terms). Others are trapped, either by circumstances or high time preference, and live UBI-check to UBI-check with no savings or investments, but still live excellent lives with abundant food, medical care, and entertainment options.

    • The effect of all this is that every human gets richer than they are today, and income inequality goes way up. The “underclass” live lives of incredible luxury by modern and historical standards, but feel “poor” because other people are trillionaires.

    • How bad it is to live with inequality depends a lot on what happens with scarce resources. If housing remains supply constrained, this would be really bad, but AI robotics should make construction cheaper. I’m more worried about electricity, but realistically I think the Autofac Era will not consume all available power before it ends.

  • States continue to function much as they do today, although all states with real power have heavy automation of all economic, military, and political activity. Expect the US and China to remain dominant, with their client states generally benefitting, and outsider states benefitting from surplus, some degree of goodwill, and a desire to police the world to minimize disruptive conflicts.
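
As a rough, back-of-the-envelope sketch of what doubling times in this range imply (referenced from the growth bullet above), here’s the compound-growth arithmetic in Python. The doubling times and era lengths are just the assumptions stated in this post, nothing more:

```python
# Sketch: what 1-3 year economic doubling times imply over a 5-10 year
# Autofac Era. All parameters are illustrative assumptions, not forecasts.

def growth_multiple(doubling_time_years: float, era_years: float) -> float:
    """Total output multiple after era_years at a fixed doubling time."""
    return 2 ** (era_years / doubling_time_years)

for doubling_time in (1, 2, 3):
    for era in (5, 10):
        multiple = growth_multiple(doubling_time, era)
        print(f"doubling every {doubling_time}y for {era}y -> {multiple:,.0f}x output")
```

For comparison, the world economy has recently doubled roughly every 20 to 25 years, so even the slow end of this range (about 3x total output in 5 years) would be far outside historical experience.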

The above is what I view as the “happy” path. There are lots of ways this doesn’t play out the way I’ve described, or plays out in a similar but different way. Maybe people coordinate to push back hard and slow AI automation. Maybe AI enables biological warfare that kills most of humanity. Maybe there are nuclear exchanges. Maybe AI-enabled warfare damages communication or electrical systems in ways that destroy modern industry. There are lots of ways the exact scenario I lay out doesn’t happen.

Lots of folks have explored the many risks of both tool-like AI and AGI, and I highly recommend reading their work. In the interest of quantification, if I screen off existential risks from AGI/ASI, I’d place something like 35% odds on not seeing a world that looks basically like the happy path because of some kind of non-existential AI disaster.
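
To make the combined estimate explicit, here’s a minimal sketch of the arithmetic, assuming the two stated estimates are independent and, as above, screening off AGI/ASI existential risks:

```python
# Sketch: combining the post's stated odds. Assumes the two estimates are
# independent and screens off AGI/ASI existential risk, as the post does.

p_no_agi_in_5y = 0.70  # stated earlier: odds we fail to solve steering in 5 years
p_disaster = 0.35      # stated above: odds a non-existential AI disaster
                       # derails the happy path

p_happy_autofac = p_no_agi_in_5y * (1 - p_disaster)
print(f"P(Autofac happy path) ~ {p_happy_autofac:.0%}")  # prints ~45%
```

So on the post’s own numbers, the happy-path Autofac world comes out to a bit under a coin flip.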

I’ve also assumed that we continue with something like a capitalist system. Maybe there’s so much surplus that we have a political revolution and try central planning again, but this time it actually works thanks to AI. Such a world would feel quite a bit different from the scenario I’ve described, but would share many of the core characteristics of my model.

How can I prepare for the Autofac Era?

The best way to prepare is by owning capital, either directly or through investment vehicles like stocks and bonds. I won’t give you any advice on picking winners and losers. I’ll just suggest at least following the default advice of holding a large and diversified portfolio.

You could also try to have the skills and connections necessary to continue to be employed. This is a high-risk strategy, as there’ll be a lot of competition for a much more limited number of roles. If you’re not in the top 10% in some way for your preferred role, you’re unlikely to stand a chance. If you pursue this path, have investments as a backup.

You’ll also be fine if you just don’t prepare. Life in the “underclass” will be coded as low status and will lock you out of access to luxury goods, but you’ll still live a life full of what, by historical standards, would be luxuries. This is perhaps comparable to what happened during the Industrial Revolution, except without the tradeoff of the “underclass” having to accept poor working conditions to get access to those luxuries.

That said, many people will find life in the “underclass” depressing. We know that humans care a lot about relative status. If they compare themselves to people with investments or jobs who can afford luxury goods, they may feel bad about themselves. A lot of people who aren’t used to being “poor” will suddenly find themselves in that bucket, even if being “poor” is extremely comfortable. My hope is that people continue to do what they’ve been doing for a while and develop alternative status hierarchies that allow everyone to feel high status regardless of their relative economic station.

How does the Autofac Era end?

I see it ending in one of three ways.

One is that there’s an existential catastrophe. Again, lots of other people have written on this topic, so I won’t get into it.

Another way for the Autofac Era to end is stagnation, with the economy growing to the carrying capacity of the Sun. If we never make the breakthrough to AGI, we will eventually transition to a Malthusian period that can only be escaped by traveling to other stars to harness their energy. If that happens, the Autofac Era won’t really end, but a world with no growth would look very different from the one I’ve described.
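
For a rough sense of that timescale, here’s a sketch using standard physical constants; the current power draw and the 2-year doubling time are illustrative assumptions, not claims from the post:

```python
import math

# Sketch: how long until an economy doubling every ~2 years runs into
# solar limits? Constants are standard; the starting power level and
# doubling time are assumptions for illustration.

human_power_now = 2e13     # W, roughly current world primary power use (~20 TW)
earth_insolation = 1.7e17  # W, sunlight intercepted by Earth
solar_output = 3.8e26      # W, total luminosity of the Sun
doubling_years = 2         # assumed Autofac-era doubling time

def years_to_reach(target_watts: float) -> float:
    """Years until exponential growth reaches the target power level."""
    return doubling_years * math.log2(target_watts / human_power_now)

print(f"to Earth's insolation: ~{years_to_reach(earth_insolation):.0f} years")
print(f"to total solar output: ~{years_to_reach(solar_output):.0f} years")
# ~26 years to Earth's insolation, ~88 years to the whole Sun
```

On these assumptions the Malthusian ceiling arrives within a human lifetime, which is why, absent AGI or interstellar expansion, this branch ends in stagnation rather than indefinite growth.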

Finally, the Autofac Era ends if we build AGI. This is the way I actually expect it to end. My guess is that the Autofac Era will only last 5 to 10 years before we succeed in creating AGI, and the onramp to AGI might even be gradual if we end up making incremental progress towards building steering subsystems for AI. At that point we transition to a different and, to be frank, much more dangerous world, because AGI may not care about humans the way tool-like AI implicitly does, since tool AI cares about what humans care about only instrumentally. For more on such a world, you might read the recently released If Anyone Builds It, Everyone Dies.