I love this article, but I disagree with the conclusion. You're essentially saying that a post-singularity world would be too impatient to explore the stars. I grant you that thinking a million times faster would make someone very impatient, but living a million times longer seems likely to counterbalance that.
Back in the days of Christopher Columbus, what stopped people from sailing off and finding new continents wasn't laziness or impatience; it was ignorance and a high likelihood of dying at sea. If you knew you could build a rocket and fly it to Mars or Alpha Centauri, that it was 100% guaranteed to get there, and that you'd have the mass and energy of an entire planet at your disposal once you did (a wealth beyond imagining in this post-singularity world), I really doubt that any amount of transit time, or the minuscule resources necessary to make the rocket, would stand in anyone's way for long.
ESPECIALLY given the increased diversity. Every acre on Earth has the matter and energy to go into space, and if every one of those 126 billion acres has its own essentially isolated culture, I'd be very surprised if not a single one ever did, even unto the end of the Earth.
Honestly, I'd be surprised if they didn't do it by Tuesday. I'd expect a subjectively 10-billion-year-old civilization to be capable of some fairly long-term thinking.
Why WOULDN'T Moore's-law-type growth end completely? Are you saying the speed of light is unbreakable but the Planck limit isn't?
Aren't you anthropomorphizing AIs? If an AI's goals entail communicating with the rest of the world, the AI has the option to simply wait as long as it takes. Likewise, it's not obvious that an uploaded human would need or want to run at the fastest physically possible timescale all the time.
And if outward- and inward-looking civilizations ever need to compete for resources, it seems like the outward-looking ones would win.
Nothing in this scenario would hold back an AI with an expansionist value system, like a paperclip maximizer or other universe tilers.
Some linguistic nitpicks:
The greater monogenesis theory, deriving all extant languages and cultures from a single distant historical proto-language, is a matter of debate among linguists, but the similarity in many low-level root words is far beyond chance.
If you mean the similarity between word roots on a world-wide scale, the answer is decisively no. Human language vocabularies are large enough that many seductive-looking similarities will necessarily exist by pure chance, and nothing more than that has ever been observed on a world-wide scale. Mark Rosenfelder has a good article dealing with this issue on his web pages.
In fact, the way human languages are known to change implies that common words inherited from a universal root language spoken many millennia ago would not look at all the same today. It's a common misconception that there are some "basic" words that change more slowly than others, but in reality, the way it works is that the same phoneme changes the exact same way in all words, or at most depending on some simple rules about surrounding phonemes, with very few exceptions. As a result, "basic" words end up diverging like all the others.
One confoun...
Upvoted for raising some very important topics. But I disagree on a few points.
One is the assumption that 'subjective time' is related to the discount rate - that if a super-intelligence can do as much thinking in a day as we can do in a century, then it will care as little about tomorrow as we care about the next century. I would make a different assumption - that the 'natural' discount rate is more closely related to the machine's expected lifetime (when it expects indexical utility flows to cease) and to its planning horizon (when its expectations regarding the future environment become no better than guesses).
The second is the failure to distinguish communication latencies from communication bandwidths. Both are important, but they play different roles. According to some theories of consciousness, it is an essentially serial phenomenon, and hence latencies matter a lot. So, while it may be possible to construct a mind whose physical substrate is distributed between Earth and Jupiter's moons, it probably won't be possible to construct a consciousness divided in this way. At least not a consciousness that could pass a Turing test.
Talking about whether an AI would or would not want to expand indefinitely is sort of missing the point. Barring a completely dominant singleton, someone is going to expand beyond Earth with overwhelming probability. The legacy of humans will be completely dominated by those who didn't stay on Earth. It doesn't matter whether the social impulse is generally towards expansion.
Edit: To be more precise, arguments that "most possible minds wouldn't want to expand" must be incredibly strong in order to have any bearing whatsoever on the long term likelihood of expansion. I don't really buy your argument at all (I would be happy to create new worlds inhabited by like-minded people even if there was a long communication delay between us...) but it seems like your argument isn't even claiming to be strong enough to matter.
Some other notes: you can't really expand inwards very much. You can only fit so much data into a small space (unless our understanding of relativity is wrong, in which case the discussion is irrelevant). Of course, you hit a much earlier limit if you aren't willing to send something to the stars to harvest resources. Maybe these limits seem distant to us, but t...
This seems to rest on unfounded anthropomorphization. If the AI doesn't have the patience to deal with processes that occur over extremely long time periods relative to its speed of thought, its usefulness to us is dramatically limited. The salient question is not whether it takes a long time from the AI's perspective, only whether, in the long run, it increases utility or not.
Small error at "It's difficult to conceive of an intelligence that experiences around 30,000 years in just one second"
One billion * one second = ~30 years, not ~30,000 years.
A related empirical data point is that we already see strong light cone effects in electronic markets. The machine decision speeds are so fast that it is not possible to usefully communicate with similarly fast machines outside of a radius of some small number of kilometers because the state of reality at one machine changes faster than it can propagate that information to another due to speed of light limitations. The diminishing ability to influence decisions as a function of distance raises questions about the relevancy of most long haul communication b...
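For a rough sense of scale (the machine decision times below are illustrative assumptions, not measured figures), here is a minimal sketch of the light-cone radius a fast trading machine can usefully communicate within:

```python
# Rough light-cone radii for fast trading machines: within one decision
# time, how far can any signal possibly travel? Decision times here are
# illustrative assumptions.
C_KM_PER_S = 3.0e5  # speed of light in km/s

for decision_time_s in (1e-6, 1e-5, 1e-4):
    radius_km = C_KM_PER_S * decision_time_s
    print(f"{decision_time_s * 1e6:6.0f} us -> {radius_km:5.1f} km light-cone radius")
# 1 us -> 0.3 km, 10 us -> 3 km, 100 us -> 30 km
```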
I didn't like this article at all. Loads of references and mathematics, all founded on an absurd premise: that unspecified AGIs and AGI-supported humanity would prefer not to harvest the future light cone just because they can think really fast. Most possible mind designs just don't care.
Facing the future it appears that looking outwards into space is looking into the past, for the future lies in innerspace, not outerspace.
If there is just one agent that disagrees, all the navel-gazing AIs in the world become irrelevant.
I came to a similar conclusion after reading Accelerando, but don't forget about existential risk. Some intelligent agents don't care what happens in a future they never experience, but many humans do, and if a Friendly Singularity occurs, it will probably preserve our drive to make the future a good one even if we aren't around to see it. Matrioshka brain beats space colonization; supernova beats matrioshka brain; space colonization beats supernova.
If you care about that sort of thing, it pays to diversify.
Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?
If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.
Nick Bostrom seems to have introduced the Singleton concept to the Singularity/Futurist discourse here.
I don't think so. It dates back at least to early 2001 on SL4. It didn't come from Nick Bostrom.
Is it possible, then, that with the inefficiencies inherent in planet-wide ultra-speed communication, an AI on that level would not be competing for most of the world's resources, and so would choose not to interfere too much with the slow-speed humans?
Interesting too is the concept of amorphous, distributed and time-lagged consciousness.
Our own consciousness arises from an asynchronous computing substrate, and you can't help but wonder what weird schizophrenia would inhabit a "single" brain that stretches and spreads for miles. What would that be like? Ideas that spread like wildfire, and moods that swing literally with the tides?
...Such a Mind would experience a million fold time dilation, or an entire subjective year every thirty seconds.
five minutes would correspond to an unimaginable decade of subjective time for an acceleration level 6 hyperintelligence.
architectural optimizations over the brain and higher clock rates could lead to acceleration level 9 hyperintelligences.
Acceleration level 9 stretches the limits of human imagination. It's difficult to conceive of an intelligence that experiences around 30 years in just one second, or a billion subjective years for every sidereal year.
A very thought-provoking and well-written article. Thanks!
Your biggest conceptual jump seems to be reasoning about the subjective experience of hyperintelligences by analogy to human experiences. That is, an experience of some thought/communication speed ratio for a hyperintelligence would be "like" a human experience of that same ratio. But hyperintelligences aren't just faster. I think they'd probably be very very different qualitatively. Who knows if the costs / benefits of time-consuming communication will be perceived in similar or even recognizable ways?
Genesis 11: 1-9
Some elementary physical quantitative properties of systems compactly describe a wide spectrum of macroscopic configurations. Take for example the concept of temperature: given a basic understanding of physics this single parameter compactly encodes a powerful conceptual mapping of state-space.
It is easy for your mind to visualize how a large change in temperature would affect everything from your toast to a planetary ecosystem. It is one of the key factors which divides habitable planets such as Earth from inhospitably cold worlds like Mars or burning infernos such as Venus. You can imagine the Earth growing hotter and visualize an entire set of complex consequences: melting ice caps, rising water levels, climate changes, eventual loss of surface water, runaway greenhouse effect and a scorched planet.
Here is an unconsidered physical parameter that could determine much of the future of civilization: the speed of thought and the derived subjective speed of light.
The speed of thought is not something we are accustomed to pondering, because we all share the same underlying neurological substrate, which operates at a maximum frequency of around a kilohertz and appears to have minor and major decision update cycles at rates in the vicinity of 33 Hz to 3 Hz.[1]
On the other hand, the communication delay has changed significantly over the last ten thousand years as we evolved from hunter-gatherer tribes to a global civilization.
For much of early human history, the limit of instantaneous communication was the audible range of about 100 feet, and long-distance communication consisted of sending physical human messengers, a risky endeavor that could take months to traverse a continent.
The long-distance communication delay in this era (on the order of months) was more than 10^9 times the baseline thought cycle (which is around a millisecond). The developmental outcome in this type of regime is divergence. New ideas and slight mutations of existing beliefs are generated in local ingroups far faster than they can ever propagate to remote outgroups.
In the divergent regime, cultures fragment into sub-cultures, languages split into dialects, and dialects become new languages and cultures as groups expand geographically.[2]
Over time a steady accumulation of technological developments increased subjective bandwidth and reduced subjective latency in the global human network: the advent of agricultural civilization concentrated human populations into smaller regions, the domestication of horses decreased long-distance travel time, books allowed stored communication from the past, and the printing press provided an efficient one-to-many communication amplifier.
Yet despite all of this progress, even as late as the mid-19th century the Pony Express was considered fast long-distance communication. It was not until very recently, in the 20th century, that near-instantaneous long-distance communication became relatively cheap and widespread.[3]
Today the communication delay for typical point-to-point communication around the world is somewhere around 200 to 300 ms, corresponding to a low delay/thought-frequency ratio of around 10^2. This figure is close enough to the brain's natural update cycles to permit real-time communication.
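As a rough sketch of this arithmetic (the particular delay values are only illustrative assumptions):

```python
# Back-of-the-envelope delay/thought-cycle ratios for the two regimes
# described above. Delay values are illustrative assumptions.
THOUGHT_CYCLE_S = 1e-3  # baseline neural "clock" of roughly one millisecond

def delay_ratio(delay_s, thought_cycle_s=THOUGHT_CYCLE_S):
    """Communication delay expressed in units of the thought cycle."""
    return delay_s / thought_cycle_s

messenger_delay_s = 30 * 24 * 3600   # a one-month journey by messenger
internet_delay_s = 0.25              # ~250 ms round-the-world latency

print(f"messenger era: {delay_ratio(messenger_delay_s):.1e}")   # ~2.6e+09 (divergent regime)
print(f"modern internet: {delay_ratio(internet_delay_s):.1e}")  # ~2.5e+02 (convergent regime)
```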
It is difficult to measure, but the general modern trend seems to have now finally shifted towards convergence rather than divergence. Enough people are moving between cultures, translating between languages and communicating new ideas fast enough relative to the speed of thought to largely counter the tendency toward divergence.
But now consider that our global computational network consists of two very different substrates: the electronic substrate, which operates at near-light speed, and a neural substrate, which operates at much slower chemical speeds, more than one million times slower.
At the moment the vast majority of the world's knowledge and intelligence is encoded in the larger and slower neural substrate, but the electronic substrate is growing exponentially at a vastly faster pace.
Viewed as a single global cybernetic computational network, there is a massive speed discrepancy between the neural and electronic sub-components.
So what happens when we shift completely to the electronic, when we have artificial brains and AGIs that think at full electronic speeds?
The speed of light measured in atomic seconds is the same for all physical frames of reference, but its subjective speed varies based on one's subjective speed of thought. This subjective relativity causes an effective time dilation proportional to one's level of acceleration.
For an AGI or upload that has an architecture similar to the brain but encoded in the electronic substrate using high-efficiency neuromorphic circuitry, thoughts could be computed in around a thousand clock cycles or less, at a rate of billions of clock cycles per second.
Such a Mind would experience a million fold time dilation, or an entire subjective year every thirty seconds.
Imagine the external universe, time itself, slowing down by a factor of a million. Watching a human walk to work would be similar to us watching grass grow. Actually it would be considerably worse; five minutes would correspond to an unimaginable decade of subjective time for an acceleration level 6 hyperintelligence.
A bullet would not appear to be much faster than a commuter, and the speed of light itself, the fastest signal propagation in the universe, would be slowed down to just 300 subjective meters per second, roughly the speed of a jetliner.
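These figures all follow from a single division by the acceleration factor; a minimal sketch of the level 6 arithmetic:

```python
# Sketch of acceleration level 6 "subjective relativity": how physical
# quantities rescale at a 10^6 speedup, per the assumptions above.
C_M_PER_S = 3.0e8
SECONDS_PER_YEAR = 3.15e7
LEVEL_6 = 1e6

print(C_M_PER_S / LEVEL_6)                   # 300.0 subjective m/s for light
print(SECONDS_PER_YEAR / LEVEL_6)            # ~31.5 wall-clock seconds per subjective year
print(5 * 60 * LEVEL_6 / SECONDS_PER_YEAR)   # ~9.5 subjective years in five physical minutes
```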
Real-time communication would thus only be possible with entities in the same building and on the same local network.
It would take a subjective day or two to reach distant external internet sites. Browsing the web would not be possible in the conventional sense. It would appear the only viable strategy would be to copy most of the internet into a local cache. But even this would be impeded by the million fold subjective bandwidth slowdown.
Today's fastest gigabyte-per-second Ethernet backbone connections would be reduced back down to mere kilobyte-per-second modem speeds. A cable modem connection speed would require about as much fiber bandwidth as our entire current transatlantic fiber capacity.
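A rough sketch of the bandwidth arithmetic (the link capacities are illustrative circa-2011 figures):

```python
# Sketch of the subjective bandwidth slowdown at acceleration level 6.
# Link capacities are illustrative circa-2011 figures (see note 6).
LEVEL_6 = 1e6

backbone_bytes_per_s = 1e9             # a gigabyte-per-second backbone link
print(backbone_bytes_per_s / LEVEL_6)  # ~1e3 subjective bytes/s: modem territory

transatlantic_bits_per_s = 8e12        # ~8 Tbit/s total transatlantic capacity
print(transatlantic_bits_per_s / LEVEL_6 / 1e6)  # ~8 subjective Mbit/s: roughly one cable modem
```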
Acceleration level 6 corresponds to a 10^8 value for the communication delay / thoughtspeed ratio, a shift backwards roughly equivalent to the era before the advent of the telegraph. This is the historical domain of both the Roman Empire and pre-Civil War America.
If Moore's Law continues well into the next decade, further levels of acceleration will be possible. A combination of denser circuitry, architectural optimizations over the brain and higher clock rates could lead to acceleration level 9 hyperintelligences. Overclocked circa-2011 CPUs are already approaching 10 GHz, and test transistors have achieved speeds into the terahertz range in the lab.[4]
The brain takes about 1000 'clocks' of the base neuron frequency to compute one second's worth of thought. If a future massively dense and parallel neuromorphic architecture could do the same work 10 times more efficiently, and thus compute one second of thought in 100 clock cycles while running at 100 GHz, this would enable acceleration level 9.[5]
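A minimal sketch of how that figure is assembled from these assumptions:

```python
# How the level 9 figure follows from the stated assumptions:
# ~100 clock cycles per subjective second of thought at a ~100 GHz clock.
clock_hz = 100e9
clocks_per_thought_second = 100

acceleration = clock_hz / clocks_per_thought_second
print(f"{acceleration:.0e}")  # 1e+09 -> acceleration level 9
```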
Acceleration level 9 stretches the limits of human imagination. It's difficult to conceive of an intelligence that experiences around 30 years in just one second, or a billion subjective years for every sidereal year.
At this dilation factor light slows to just 30 subjective centimeters per second, a slow crawl. More crucially, light moves just 3 millimeters per 100 GHz clock cycle, or about 30 centimeters per subjective thought-second, which would place serious size constraints on the physical implementation of a single mind. To make integrated decisions with a unified knowledge base, in other words to think in the sense we understand the term, the core of a Mind running at these speeds would have to be crammed into the space of a modern desktop box (although it certainly could have a larger secondary knowledge store accessible with some delay).
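A rough sketch of the size-constraint arithmetic, under the same 100 GHz / 100-clock assumptions:

```python
# Sketch of the level 9 size constraint, using the 100 GHz / 100-clock
# assumptions of note 5.
C_M_PER_S = 3.0e8
clock_hz = 100e9
clocks_per_thought_second = 100
LEVEL_9 = 1e9

print(f"{C_M_PER_S / LEVEL_9:.2f} m/s")         # 0.30 subjective m/s for light
print(f"{C_M_PER_S / clock_hz * 1000:.1f} mm")  # 3.0 mm of light travel per clock cycle
print(f"{C_M_PER_S / clock_hz * clocks_per_thought_second:.2f} m")
# ~0.30 m of signal travel per subjective thought-second: a desktop-sized box.
```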
The small size constraint would severely limit how much power/heat one could throw at the problem, and thus these high speeds will probably require much higher circuit densities to achieve the required energy efficiency than implied by memory requirements alone.
With light itself crawling along at 30 subjective centimeters per second, it would take data packets hundreds of millions of subjective seconds, or on the order of years, to make typical transits across the internet. These speeds are already close to physical limits; even level 9 hyperintelligences will probably not be able to surmount the speed-of-light delay.
The entire fiber backbone of the circa-2011 transatlantic connection would be required to achieve late-20th-century dialup modem speeds.[6]
Even using all of that fiber it would take on the order of a hundred physical seconds to transfer a 10^14 byte Mind, corresponding to thousands of subjective years.
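A rough sketch of that estimate, under the same assumptions:

```python
# Sketch of the mind-transfer estimate: a 10^14 byte Mind pushed over the
# ~8 Tbit/s transatlantic capacity of note 6, seen from a level 9 frame.
mind_bytes = 1e14
transatlantic_bits_per_s = 8e12
LEVEL_9 = 1e9
SECONDS_PER_YEAR = 3.15e7

physical_s = mind_bytes * 8 / transatlantic_bits_per_s       # ~100 wall-clock seconds
subjective_years = physical_s * LEVEL_9 / SECONDS_PER_YEAR   # ~3,000 subjective years
print(f"{physical_s:.0f} physical seconds, ~{subjective_years:.0f} subjective years")
```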
A level 9 world is one where the communication delay / thoughtspeed ratio, approaching 10^11, is a throwback to the prehistoric era. Strong Singletons and even weaker systems such as global governments or modern markets would be unlikely or impossible at such high levels of acceleration.[7]
From the social and cultural perspective, high levels of thought acceleration are structurally equivalent to the world expanding to billions of times its current size.
It is similar to the Earth exploding into an intergalactic or hyperdimensional civilization linked together by a vast, impossibly slow lightspeed transit network.
Entire new cultures and civilizations would form and play out complex histories in the blink of an eye.
With every increase in circuit density and speed the new metaverse will vasten exponentially in virtual space and time just as it physically shrinks and quickens down into the ever smaller, faster levels of the real.
And although all of this change will be unimaginably fast for a biological human, Moore's Law will be a distant ancestral memory for level 9 intelligences, as it depends on a complex series of events in the impossibly slow physical world of matter. Even if an entire new hardware generation transition could be compressed into just 8 hours of physical time through nanotechnological miracles, that's still an unimaginable million years of subjective time at acceleration level 9.
Another interesting subjective difference: computer speed or performance will not change much from the inside perspective of a hyperintelligence running on the same hardware. Traditional computers will indefinitely maintain roughly the same subjectively slow speeds for minds running on the same substrate at those same speeds. Density shrinks will enable more and/or larger minds, but only a net shift towards the latter would entail a net increase in traditional parallel CPU performance available per capita. And as discussed previously, speed-of-light delays severely constrain the size of large unified minds.
The radical space-time compression of the Metaverse Singularity model suggests a reappraisal of the Fermi Paradox and the long-term fate of civilizations.
The speed of light barrier gives a natural gradient to the expansion of complexity: it is inwards, not outwards.
Humanity today could mount an expedition to a nearby solar system, but the opportunity cost of such an endeavor vastly exceeds any realistic discounted returns. The incredible resources space colonization would require are much better put to use increasing our planetary intelligence through investing in further semiconductor technology.
This might never change. Indeed such a change would be a complete reversal of the general universal trend towards smaller, faster complexity.
Each transition to a new level of acceleration and density will increase the opportunity cost of expansion in proportion. Light-years are vast units of space-time for humans today, but they are unimaginably vaster for future accelerated hyperintelligences.
Facing the future it appears that looking outwards into space is looking into the past, for the future lies in innerspace, not outerspace.
Notes
[1] Human neuron action potentials have a measured minimum interval of a little less than a millisecond, i.e. a maximum frequency of around a kilohertz. This is thus one measure of rough equivalence to the clock frequency in a digital circuit, but it is something of a conservative over-estimate, as neurological circuits are not synchronous at that frequency. Many circuits in the brain are semi-synchronized over longer intervals roughly corresponding to the various measured 'brain wave' frequencies, and neuron-driven mechanisms such as voice have upper frequencies of the same order. Humans can react in as little as 150 ms in some conditions, but appear to initiate actions such as saccades at a rate of 3 to 4 per second. Smaller primate brains are similar but somewhat quicker.
[2] The greater monogenesis theory, deriving all extant languages and cultures from a single distant historical proto-language, is a matter of debate among linguists, but the similarity in many low-level root words is far beyond chance. The more restrained theory of a common root Proto-Indo-European language is near-universally accepted. This map and this tree help visualize the geographical and historical divergence of this original language/culture across the supercontinent, along with its characteristic artifact: the chariot. All of this divergence occurred on a timescale of five to six millennia.
[3] Homing pigeons, where available, were of course much faster than the Pony Express, but were rare and low-bandwidth.
[4] Apparently this has been done numerous times in the last decade in different ways. Here is one example. Of course, making a few transistors run in the terahertz range doesn't get you much closer to making a whole CPU actually run at that speed, for a large variety of reasons.
[5] None of these particular numbers will seem outlandish a decade or two from now if Moore's Law holds its pace. However, getting a brain or AGI-type design to run at these fantastic speeds will likely require more significant innovations, such as a move to 3D integrated circuits and major interconnect breakthroughs. There are many technological uncertainties here, but fewer than those involved in Drexler-style nanotech, and this is all on the current main path.
[6] It looks like we currently have around 8 Tbps of transatlantic bandwidth circa 2011.
[7] Nick Bostrom seems to have introduced the Singleton concept to the Singularity/Futurist discourse here. He mentions artificial intelligences as one potential Singleton-promoting technology but doesn't consider their speed potential with respect to the speed of light.