Why do you think self-modification and designing new beings count in favor of stability? I think they lead to instability; it's like faster evolution. Sure, some actors will find ways to lock in forever, but they will lose out to those who keep the ability to change.
I agree. If I am reading Will's point correctly, he is assuming that many people will want to permanently enshrine their beliefs, such that even if they become immortal they will never want to change.
What I see online is that people get bored and crave novelty. That might become an editable value, but it is linked to a whole lot of other things, so editing it alone is not guaranteed to be possible even with ASI, unless the subject stops being human. And editing it at all is not guaranteed to be safe.
Moreover, transhumanists are extremely diverse in what they want, and they will go in many directions once the capabilities become available. I see no reason why they would enshrine stability over exploration and self-discovery through ongoing transformation.
Even Will's take on immortality is locked into the current realities of gerontocracy. One primary goal of longevity research is maintaining brain plasticity and the capacity to learn: an immortal will not remain part of society's elite if they cannot use current technology. So unless Will is proposing that immortals will freeze technological growth, they will have to stay adaptive just to maintain their place in society.
Now, cultural dominance? That is a short- to mid-term issue. I can see a single culture getting an early lead in AGI plus space settlement, and that lead having a lasting impact. But unless we figure out ansibles (hypothetical faster-than-light communicators; quantum entanglement, contrary to popular belief, does not actually allow this) or the speed of light stops being a barrier to communication, this cannot last long term. Within a star system (especially ours), sure. But with the multi-system expansion that longtermism presupposes? I cannot see how the whole system stays in contact regularly enough to prevent different regions from changing in different directions.
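To put numbers on it, here's a back-of-envelope sketch (distances are approximate, and the 100-light-year settlement bubble is just an arbitrary illustration):

```python
# Round-trip light-speed communication delay to some example destinations.
# One light-year of distance = one year of one-way signal delay.
destinations_ly = {
    "Proxima Centauri": 4.25,               # nearest star system
    "Tau Ceti": 11.9,                       # often-cited nearby sunlike star
    "Edge of a 100 ly settlement": 100.0,   # arbitrary illustrative radius
}

for name, distance in destinations_ly.items():
    # A question-and-answer exchange takes twice the one-way delay.
    print(f"{name}: one-way {distance:.1f} yr, round trip {2 * distance:.1f} yr")
```

At decades or centuries per exchange, anything resembling centralized cultural enforcement seems hard to sustain.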
I don't think he failed to consider these angles, so I am curious to hear more on the topic.
Is there anything we can do to make the long-term future go better, other than reducing the risk of extinction?
My paper, Persistent Path-Dependence, addresses that question. I suggest there are a number of mechanisms that are fairly likely to be developed within our lifetimes that would result in extremely persistent path-dependent effects of predictable expected value. These include the creation of AGI-enforced institutions, a global concentration of power, the widespread settlement of space, the first immortal beings, the widespread design of new beings, and the ability to self-modify in significant and lasting ways.
I’m not very confident that such events will occur, but in my view they’re likely enough to make work to steer them in better directions very valuable. Let’s take each mechanism in turn.
First, AGI-enforced institutions. Once we have AGI, decision-makers could write down a set of rules or a constitution, and create AGI systems whose role is to interpret and enforce those rules exactly as their framers intended.
It would be as if, rather than having the Supreme Court interpret the US Constitution, we could conjure up the ghosts of Madison and Hamilton and ask them directly, and their views were decisive. With such systems in place, an AGI-enforced constitution could operate indefinitely.
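To make this concrete in toy form (a sketch I'm improvising here, with hypothetical names; real systems would be vastly more complex): the key property is that digital rules and their enforcer can be copied and integrity-checked exactly, so interpretation never drifts.

```python
import hashlib
import json

# A "constitution" frozen at founding time, plus its founding fingerprint.
CONSTITUTION = {
    "article_1": "Power transfers only via the ratified succession procedure.",
    "article_2": "Amendment requires unanimous consent of the founding keys.",
}
FOUNDING_HASH = hashlib.sha256(
    json.dumps(CONSTITUTION, sort_keys=True).encode()
).hexdigest()

def verify(constitution: dict) -> None:
    """Refuse to operate if the text no longer matches its founding fingerprint."""
    current = hashlib.sha256(
        json.dumps(constitution, sort_keys=True).encode()
    ).hexdigest()
    if current != FOUNDING_HASH:
        raise RuntimeError("Constitution altered; enforcement halted.")

verify(CONSTITUTION)  # passes: the text is bit-identical to the founding version
print("Founding text verified; enforcement may proceed.")
```

Unlike a human court, a verified copy of the original enforcer can be re-instantiated indefinitely.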
Second, immortality. Throughout history, death has functioned as a natural brake on the persistence of any particular set of values or power structures. Over time, even the most entrenched values eventually change as new generations replace the old.
Post-AGI technology could fundamentally alter this dynamic. Digital beings would be inherently immune to biological aging; combined with perfect replication and hardware migration, this would let us create minds whose exact values and decision-making processes persist unchanged indefinitely.
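As a toy illustration of what perfect replication buys (the serialized "mind state" here is of course just a placeholder):

```python
import hashlib

def migrate(state: bytes) -> bytes:
    """Copy a serialized mind to new hardware, verifying bit-exactness."""
    copy = bytes(state)
    assert hashlib.sha256(copy).digest() == hashlib.sha256(state).digest()
    return copy

state = b"values=v1;decision_procedure=d1"  # placeholder serialized mind
for _ in range(1000):                       # a thousand hardware "generations"
    state = migrate(state)
print(state == b"values=v1;decision_procedure=d1")  # True: zero drift, ever
```

Contrast this with biological generations, where transmission of values is inherently lossy.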
A similar dynamic could hold for biological immortality. A technological explosion driven by AGI could dramatically extend human lifespans, or effectively remove biological constraints on them altogether, through technologies targeting the fundamental mechanisms of aging.
Third, designing beings. Throughout history, change has happened in part because successive generations do not inherit the same values as their forebears. But this dynamic could change after AGI. Probably, the vast majority of beings that we create will be AI, and they will be products of design: we will be able to choose what preferences they have. And, with sufficient technological capability, we would likely be able to choose the preferences of our biological offspring, too. Even if people choose not to live forever, their values could persist through perfect transmission from one generation to the next.
Fourth, strong self-modification. In the future, people will probably be able to modify their own minds precisely, choosing exactly which beliefs and preferences to hold. So not only might people today be able to control society's future values by living forever; they would also be able to control the values of their future selves.
A religious zealot might choose to have unshakeable certainty that their favoured religion is true; an ideological extremist might choose to have an irrevocable and unwavering preference in favour of their political party over any other.
As well as creating new mechanisms that enable persistent path-dependence, a post-AGI world could also remove some of the causes of disruption. Throughout history, societal changes have often been driven by technological innovations that disrupt existing power structures. But as civilisation approaches technological maturity (the hypothetical point at which all major technologies have been invented), this source of disruption would disappear.
Advanced technology would help prevent other sorts of disruption, too. It would dramatically improve prediction: advanced AI systems could process vastly more information, model complex systems with greater precision, and forecast outcomes over longer time horizons. So it would be much less likely that those in power would lose their influence simply by making a mistake.
Finally, a post-AGI world might be characterised by indefinite defense-dominance, enabling a permanently stable concentration of power. In particular, indefinite defense-dominance could come about as a result of widespread space settlement. If star systems are strongly defense-dominant, then the starting distribution of star systems could, in principle, be held onto indefinitely. There might be some trade or gifting of star systems after the initial allocation; but even so, there would still be very strong path-dependence, as the final allocation would be heavily shaped by the starting one.
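A minimal simulation makes the point (the 1% conquest probability is an arbitrary stand-in for "strong defense-dominance"):

```python
import random

random.seed(0)
N_SYSTEMS, EPOCHS, P_CONQUEST = 1000, 10_000, 0.01  # attacks almost never succeed

owners = list(range(N_SYSTEMS))   # starting allocation: one owner per system
initial = owners.copy()

for _ in range(EPOCHS):
    target = random.randrange(N_SYSTEMS)
    attacker = random.randrange(N_SYSTEMS)
    if random.random() < P_CONQUEST:      # defense-dominance: rare conquest
        owners[target] = owners[attacker]

unchanged = sum(a == b for a, b in zip(owners, initial)) / N_SYSTEMS
print(f"{unchanged:.0%} of systems still held by their original owner")
```

Raise the conquest probability and the starting allocation quickly washes out; keep it low and the initial distribution persists for as long as you run the simulation.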
These issues might seem like far-off concerns, but the prospect of intelligence and industrial explosions makes them near-term. I put the odds at over 1 in 3 that an intelligence explosion begins within the next 10 years. And if advanced AI results in explosive technological progress and industrial expansion, then many of these new mechanisms for persistence will arrive in quick succession.