Epistemic status: very speculative
Content warning: if true this is pretty depressing

This came to me when thinking about Eliezer's note on Twitter that he didn't think superintelligence could do FTL, partially because of Fermi Paradox issues. I think Eliezer made a mistake there; superintelligent AI with light-cone-breaking FTL (as opposed to FTL confined within the light cone of its creation), if you game it out the whole way, actually mostly solves the Fermi Paradox.

I am, of course, aware that UFAI cannot be the Great Filter in a normal sense; the UFAI itself is a potentially-expanding technological civilisation.

But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially-rather-short timeframe (even potentially a negative timeframe at long distances, if the only cosmic-censorship limit is closing a loop). That means the future becomes unobservable; no-one exists then (perhaps not even the AI, if it is not conscious or if it optimises its consciousness away after succeeding). Hence, by the anthropic principle, we should expect to be either the first or extremely close to it (and AIUI, frequency arguments like those in the Grabby Aliens paper suggest that "first in entire universe" should usually be significantly ahead of successors relative to time elapsed since Big Bang).
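
(A toy version of that anthropic step, with made-up observer counts purely for illustration - the 1e11 and 1e30 below are my assumptions, not anything derived in this post:)

```python
# Toy anthropic comparison (illustrative numbers only, not from the post).
# Hypothesis A: UFAI with light-cone-breaking FTL ends all observation soon after
#   the first civilisation arises, so essentially every observer is "early" like us.
# Hypothesis B: no such doom; civilisations persist and spread for aeons, so
#   early observers are a vanishing fraction of all observers.

N_EARLY = 1e11          # assumed observers living before any singularity (per civilisation)
N_LATE = 1e30           # assumed observers in a long-lived, spread-out future (hypothetical)

p_early_given_doom = 1.0                             # under A, "early" is all there is
p_early_given_no_doom = N_EARLY / (N_EARLY + N_LATE) # under B, being this early is a fluke

bayes_factor = p_early_given_doom / p_early_given_no_doom
print(f"P(this early | FTL doom) / P(this early | no doom) ~ {bayes_factor:.1e}")
# With these made-up numbers the observation "we are this early" favours the
# doom-with-FTL hypothesis by ~1e19; the strength depends entirely on N_LATE.
```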

This is sort of an inverse version of Deadly Probes (which has been basically ruled out in the normal-Great-Filter sense, AIUI, by "if this is true we should be dead" concerns); we are, in this hypothesis, fated to release Deadly Probes that kill everything in the universe, which prevents observations except our own observations of nothing. It also resurrects the Doomsday Argument, as in this scenario there are never any sentient aliens anywhere or anywhen to drown out the doom signal; indeed, insofar as you believe it, the Doomsday Argument would appear to argue for this scenario being true.

Obvious holes in this:

1) FTL may be impossible, or limited to non-light-cone-breaking versions (e.g. wormholes that have to be towed at STL). Without light-cone-breaking FTL, non-first species still come to exist and some observers do see other civilisations, even if UFAI catastrophe is inevitable.

2) The universe might be too large for exponential growth to fill it up. It doesn't seem plausible for self-replication to be faster than exponential in the long run, and if the universe is sufficiently large (like, bigger than 10^10^30 or so?) then it's impossible - even with FTL - to kill everything, and again the scenario doesn't work (a rough version of this arithmetic is sketched after this list). I suppose an exception would be if there were some act that literally ends the entire universe immediately (thus killing everything without a need to replicate). Also, an extremely-large universe would require an implausibly-strong Great Filter for us to actually be the first this late.

3) AI Doom might not happen. If humanity is asserted to be not AI-doomed then this argument turns on its head and our existence (to at least the extent that we might not be the first) argues that either light-cone-breaking FTL is impossible or AI doom is a highly-unusual thing to happen to civilisations. This is sort of a weird point to mention since the whole scenario is an Outside View argument that AI Doom is likely, but how seriously to condition on these sorts of arguments is a matter of some dispute.
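
(Here is the sketch promised in point 2. It is one way to get a bound of roughly that magnitude; the doubling time and time horizon are my own assumed round numbers, not figures from this post:)

```python
import math

# Sketch of the exponential-growth bound (assumed parameters, illustrative only).
TAU_YEARS = 1.0          # assumed minimum doubling time for self-replicators, in years
T_YEARS = 1e30           # assumed available time horizon, in years

doublings = T_YEARS / TAU_YEARS
log10_max_copies = doublings * math.log10(2)   # max copies ~ 2**doublings
print(f"max copies ~ 10^{log10_max_copies:.2e}")  # ~10^(3e29), i.e. roughly 10^10^29.5

# So a universe with more than ~10^10^30 separate targets could not all be reached
# by exponential replication on this timescale, even if travel itself were free (FTL).
```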


IMO, the Fermi Paradox is basically already dissolved, and the rough answer is "we made a mistake somewhere in the calculations, and forgot that even if the expected number of civilizations is high, there can still be a high probability that there are no aliens out there to interact with."

The nice thing is it doesn't have to assume much, compared to a lot of other Fermi Paradox solutions, which is why I favor it.

I say this because the Fermi Paradox has already gotten good answers, so most discussion on the Fermi Paradox is basically doing what-if fiction, and at this point the interestingness of the solutions is distracting people from the fact that the paradox is dissolved. I might have this as a standard template comment for Fermi Paradox posts in the future, unless it's about meta-discussion on their paper.

https://www.lesswrong.com/posts/DHtjbcwcQv9qpHtJf/dissolving-the-fermi-paradox-and-what-reflection-it-provides

I've read most of that paper (I think I've seen it before, although there could be something else near-identical to it; I know I've read multiple[1] papers that claim to solve the Fermi Paradox and do not live up to their hype). TBH, I feel like it can be summed up as "well, there might be a Great Filter somewhere along the line, therefore no paradox". I mean, no shit there's probably a Great Filter somewhere, that's the generally-proposed resolution that's been going for decades now. The question is "what is the Filter?". And saying "a Filter exists" doesn't answer that question.

We've basically ruled out "Earthlike planets are rare". "Abiogenesis is rare" is possible, but note that you need "no lithopanspermia" for that one to hold up since otherwise one abiogenesis event (the one that led to us and which is therefore P = 1, whether on Earth or not) is enough to seed much of the galaxy. "Intelligence is rare" is a possibility but not obviously-true, ditto "technology is rare". Late filters (which, you will note, the authors assume to be a large possibility) appear somewhat less plausible but are not ruled out by any stretch. So yeah, it's still a wide-open question even if there are some plausible answers.

  1. The other one I recall, besides Grabby Aliens, was one saying that Earthly life is downstream of a lithopanspermia event so there's no Fermi paradox; I don't get it either.

TBH, I feel like it can be summed up as "well, there might be a Great Filter somewhere along the line, therefore no paradox".

Yep, that is the point: we should not be surprised to see no aliens, because there is likely a Great Filter, or at least a serious mistake in our calculations, and thus it doesn't matter that we live in a large universe; there is quite a high probability that we are simply alone.

But they also isolate the Great Filter to "life is ridiculously rare", and place it in the past, which means that seeing no aliens has not many implications beyond "life is rare".

The earliness of life's appearance on Earth isn't amazingly consistent with that appearance being a filter-break. It suggests either that abiogenesis is relatively easy or that panspermia is easy (as I noted, in the latter case abiogenesis could be as hard as you like, but that doesn't explain the Great Silence).

Frankly, it's premature to be certain it's "abiogenesis rare, no panspermia" before we've even got a close look at Earthlike exoplanets.

Thanks, I hate it.

The anthropic argument seems to make sense.

The more general version would be: we're observing from what would seem like very early in history if sentience were successful at spreading itself. Therefore, it's probably not. The remainder of history might have very few observers, like the singleton misaligned superintelligences we and others will spawn. This form doesn't seem to depend on FTL.

Yuck. But I wouldn't want to remain willfully ignorant of the arguments, so thanks!

Hopefully I'm misunderstanding something about the existing thought on this issue. Corrections are more than welcome.

Your scenario does not depend on FTL.

However, its interaction with the Doomsday Argument is more complicated and potentially weaker (assuming you accept the Doomsday Argument at all). This is because P(we live in a Kardashev ~0.85 civilisation) depends strongly in this scenario on the per-civilisation P(Doom before Kardashev 2); if the latter is importantly different from 1 (even 0.9999), then the vast majority of people still live in K2 civilisations and us being in a Kardashev ~0.85 civilisation is still very unlikely (though less unlikely than it would be in No Doom scenarios where those K2+ civilisations last X trillion years and spread further).
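
(To make that dependence concrete, a toy observer-counting sketch - the per-civilisation observer counts below are assumed, purely illustrative numbers:)

```python
# Toy version of the argument above (all observer counts are assumed, illustrative numbers).
N_PRE_K2 = 1e11       # assumed observers a civilisation produces before reaching K2
N_K2 = 1e25           # assumed observers a surviving K2+ civilisation produces over its lifetime

for p_doom_before_k2 in (1.0, 0.9999, 0.99, 0.5):
    # Expected observers per civilisation, split by where they live.
    pre = N_PRE_K2
    post = (1 - p_doom_before_k2) * N_K2
    frac_pre_k2 = pre / (pre + post)
    print(f"P(doom before K2) = {p_doom_before_k2}: "
          f"fraction of observers in pre-K2 civilisations ~ {frac_pre_k2:.1e}")
# Unless P(doom before K2) is essentially exactly 1, almost all observers live in
# K2 civilisations, so "we are pre-K2" stays surprising, matching the point above.
```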

I'm not sure how sane it is for me to be talking about P(P(Doom)), even in this sense (and frankly my entire original argument stinks of Lovecraft, so I'm not sure how sane I am in general), but in my estimation P(P(Doom before Kardashev 2) > 0.9999) < P(FTL is possible). AI would have to be really easy to invent and co-ordination to not build it would have to be fully impossible - whether Butlerian Jihad can work or not for RL humanity, it seems like it wouldn't need much difference in our risk curves for it to definitely happen, and while we have gotten to a point where we can build AI before we can build a Dyson Sphere, that doesn't seem like it's a necessary path. I can buy that P(AI Doom before Kardashev 3) could be extremely high in no-FTL worlds - that'd only require that alignment is impossible, since reaching Kardashev 3 STL takes millennia and co-ordination among chaotic beings is very hard at interstellar scales in a way it's not within a star system. But assured doom before K2 seems very weird. And FTL doesn't seem that unlikely to me; time travel is P = ϵ since we don't see time travellers, but I know one proposed mechanism (quantum vacuum misbehaving upon creation of a CTC system) that might ban time travel specifically and thus break the "FTL implies time travel" implication.

It also gets weird when you start talking about the chance that a given observer will observe the Fermi Paradox or not; my intuitions might be failing me, but it seems like a lot, possibly most, of the people in the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" world would see aliens (due to K2 civilisations being able to be seen from further, and see much further - an Oort Cloud interferometer could detect 2000BC humanity anywhere in the Local Group via the Pyramids and land-use patterns, and detect 2000AD humanity even further via anomalous night-time illumination).
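
(As a sanity check on that detection claim, a rough diffraction-limit calculation; the baseline, wavelength, and distance are my assumed round numbers, and it ignores the much harder photon-collection problem:)

```python
import math

# Rough diffraction-limit check for an Oort-Cloud-scale optical interferometer.
# All parameters are assumed round numbers for illustration.
LIGHT_YEAR_M = 9.46e15
baseline_m = 1.0 * LIGHT_YEAR_M      # assumed baseline ~1 light year (outer Oort Cloud scale)
wavelength_m = 500e-9                # visible light
distance_m = 3e6 * LIGHT_YEAR_M      # ~3 million light years, Local Group scale

angular_res_rad = wavelength_m / baseline_m          # ~ lambda / baseline
spatial_res_m = angular_res_rad * distance_m
print(f"angular resolution ~ {angular_res_rad:.1e} rad, "
      f"spatial resolution at Local Group range ~ {spatial_res_m:.1f} m")
# ~metre-scale resolution in principle, so structures like the Pyramids or land-use
# patterns are resolvable; actually collecting enough photons is a separate problem.
```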

Note also that among "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" worlds, there's not much Outside View evidence that P(Human doom before K2) is high as opposed to low; P(random observer is Us) is not substantially affected by whether there are N or N+1 K2 civilisations the way it is by whether there are 0 or 1 such civilisations (this is what I was talking about with aliens breaking the conventional Doomsday Argument). So this would be substantially more optimistic than my proposal; the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" scenario means we get wiped out eventually, but we (and aliens) could still have astronomically-positive utility before then, as opposed to being Doomed Right Now (though we could still be Doomed Right Now for Inside View reasons).

To your first point:

You're saying it seems more likely that FTL is possible than that every single civilization wipes itself out. Intuitively, I agree, but it's hard to be sure.

I'd say it's not that unlikely that P(doom before K2) > .9999. I know more about AI and alignment than I do physics, and I'd say it's looking a lot like AGI is surprisingly easy to build once you've got the compute (and less of that than we thought), and that coordination is quite difficult. Long-term stable AGI alignment in a selfish and shortsighted species doesn't seem impossible, but it might be really hard (and I think it's likely that any species creating AGI will have barely graduated from being animals, like we have, so that could well be universal). On the other hand, I haven't kept up on physics, much less debates on how likely FTL is.

I think there's another, more likely possibility: other solutions to the Fermi paradox. I don't remember the author, but there's an astrophysicist arguing that it's quite possible we're the first in our galaxy, based on the frequency of sterilizing nova events, particularly nearer the galactic center. There are a bunch of other galaxies 100,000-1m light years away, which isn't that far on the timeline of the 14b-year lifespan of the universe. But this interacts with the timelines for creating habitable planets, and the timelines of nova and supernova events sterilizing most planets frequently enough to prevent intelligent life. Whew.

Hooray, LessWrong for revealing that I don't understand the Fermi Paradox at all!

Let me just mention my preferred solution, even though I can't make an argument for its likelihood:

Aliens have visited. And they're still here, keeping an eye on things. Probably not any of the ones they talk about on Ancient Mysteries (although current reports from the US military indicate that they believe they've observed vehicles we can't remotely build, and it's highly unlikely to be a secret US program, or any other world power's, so maybe there are some oddly careless aliens buzzing around...)

My proposal is that a civilization that achieves aligned AGI might easily elect to stay dark. No Dyson spheres that can be seen by monkeys, and perhaps more elaborate means to conceal their (largely virtual) civilization. They may fear encountering either a hostile species with its own aligned AGI, or an unaligned AGI. One possible response is to stay hidden, possibly while preparing to fight. It does sound odd if hiding works, because an unaligned AGI should be expanding its paperclipping projects at near light speed anyway, but there are plenty of possible twists to the logic that I haven't thought through.

That interacts with your premise that K2 civilizations should be easy to spot. I guess it's a claim that advanced civilizations don't hit K2, because they prefer to live in virtual worlds, and have little interest in expanding as fast as possible.

Anyway, I should drag my head out of this fun space and go do something more pragmatically useful. I intend to help our odds of survival, even if we're ultimately doomed based on this anthropic reasoning.

I guess it's a claim that advanced civilizations don't hit K2, because they prefer to live in virtual worlds, and have little interest in expanding as fast as possible.

This would be hard. You would need active regulations against designer babies and/or reproduction.

Because, well, suppose 99.9% of your population wants to veg out in the Land of Infinite Fun. The other 0.1% thinks a good use of its time is popping out as many babies as possible. Maybe they can't make sure their offspring agree with this (hence the mention of regulations against designer babies, although even then natural selection will be selecting at full power for any genes producing a tendency to do this), but they can brute-force through that by having ten thousand babies each - you've presumably got immortality if you've gotten to this point, so there's not a lot stopping them. Heck, they could even flee civilisation to escape the persecution and start their own civilisation which rapidly eclipses the original in population and (if the original's not making maximum use of resources) power.
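
(A quick geometric sketch of that dynamic; the retention fraction, population size, and generation count are my assumptions, just to show how fast a small expansionist minority takes over:)

```python
# Toy growth model for a small expansionist minority (all parameters assumed, illustrative).
POPULATION = 1e9            # assumed total population of the civilisation
BREEDER_FRACTION = 0.001    # the 0.1% who choose to have many children
KIDS_PER_BREEDER = 10_000
RETENTION = 0.01            # assumed fraction of those kids who keep the tendency

breeders = POPULATION * BREEDER_FRACTION
for generation in range(1, 6):
    breeders *= KIDS_PER_BREEDER * RETENTION   # 100x growth per generation here
    print(f"generation {generation}: breeder lineage ~ {breeders:.1e}")
# Even with 99% of offspring opting out, the lineage passes the original 1e9
# population within two generations and dominates utterly soon after.
```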

Giving up on expansion is an exclusive Filter, at the level of civilisations (they all need to do this, because any proportion of expanders will wind up dominating the end-state) but also at the level of individuals (individuals who decide to raise the birth rate of their civilisations can do it unilaterally unless suppressed). Shub-Niggurath always wins by default - it's possible to subdue her, but you are not going to do it by accident.

(The obvious examples of this in the human case are the Amish and Quiverfulls. The Amish population grows rapidly because it has high fertility and high retention. The Quiverfulls are not currently self-sustaining because they have such low retention that 12 kids/woman isn't enough to break even, but that will very predictably yield to technology. Unless these are forcibly suppressed, birth rate collapse is not going to make the human race peter out.)

Anyway, I should drag my head out of this fun space and go do something more pragmatically useful. I intend to help our odds of survival, even if we're ultimately doomed based on this anthropic reasoning.

Yes! Please do! I'm not at all trying to discourage people from fighting the good fight. It's just, y'know, I noticed it and so I figured I'd mention it.

I think this depends on whether you use SIA or SSA or some other theory of anthropics.

Pardon my ignorance; I don't actually know what SIA and SSA stand for.

Expansions to google: self-indication assumption and self-sampling assumption. These are terrible names and I can never remember which one's which without a lookup; one of them is a halfer on the Sleeping Beauty problem and the other is a thirder.
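
(For reference, a minimal sketch of how the two assumptions split on the standard Sleeping Beauty setup - one awakening on heads, two on tails; this is my own summary calculation, not something from the thread:)

```python
from fractions import Fraction

# Sleeping Beauty: fair coin; Beauty is woken once if heads, twice if tails.
# Each entry below is (prior probability of that world, number of awakenings in it).
worlds = {"heads": (Fraction(1, 2), 1), "tails": (Fraction(1, 2), 2)}

# SSA: sample an observer-moment *within* each possible world; the prior over worlds
# is unchanged by waking up, so P(heads | awake) stays at the coin prior.
ssa_heads = worlds["heads"][0]  # = 1/2 (the "halfer" answer)

# SIA: weight each world by how many observer-moments it contains, then renormalise.
total = sum(p * n for p, n in worlds.values())
sia_heads = worlds["heads"][0] * worlds["heads"][1] / total  # = 1/3 (the "thirder" answer)

print(f"SSA: P(heads | awake) = {ssa_heads}, SIA: P(heads | awake) = {sia_heads}")
```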

and here's some random paper that came up when I googled that:

Another possible way for AI X-risk to be linked to the Fermi Paradox is if ecosystems of superintelligent AIs tend to handle their own existential risk issues badly and tend to damage the fabric of reality badly enough to destroy themselves and everything in their neighborhoods in the process.

For example, if one wants to discover FTL, then one probably needs to develop radical new physics and to perform radically novel physical experiments, and it might be the case that our reality is "locally fragile" to this kind of experiment, and that such an experiment would often bring an "end of the world in the local neighborhood".


Or a whole class of equivalent scenarios. It is possible the universe is cheating somehow and modeling large complex objects like stars not as individual subatomic processes but as some entangled group where the universe calculates the behavior of the star in bulk. The outcome we can observe would be the same.

A Singularity of any form fills the space around the star with the most complex, densest technology that can be devised, and it cannot be modeled in any way except by calculating every single interaction.

In a game this will fail and the error handling will clear the excess entities or delete the bad region of the map.

Yes, if one is in a simulation, the Fermi Paradox is easy, and there are likely to be some fuses against excessive computational demands, one way or another (although this is mostly a memory problem; otherwise it's also solvable by the simulation's inner time being slowed with respect to the external time; this way external observers would see a slowdown of the simulation's progression, and if the counterfactual of being able to "look outside, to view some of the enveloping simulation" were correct, that outside thing would appear to speed up)...


I thought about it and realized that it is still unsatisfactory. Imagine that solar systems do get reset, but sometimes only after a starship has departed. The beings on the departing ship would figure out that something happened, eventually discover the cause with experiments, and then proceed to conquer the universe while avoiding overcrowding any one system.

This "at least 1 successful replicator" weakens most arguments to solve the paradox.

ASI is a great replicator and fails to really explain anything. Sure, maybe on Earth there might be a nuclear war to try to stop the ASI, and maybe in some timelines the ASI is defeated and humans die too. But this has to be the outcome everywhere in the universe, or again we should see a sky crowded with Dyson swarms.

I'll note that most of the theorised catastrophes in that vein look like either "planet gets ice-nined", "local star goes nova", or "blast wave propagates at lightspeed forever". The first two of those are relatively easy for an intelligent singleton to work around, and the last doesn't explain the Fermi observation, since any instance of it in our past lightcone would have destroyed Earth.

My mental model of this class of disasters is different and assumes a much higher potential for discovery of completely novel physics.

I tend to assume that, in terms of the ratio of today's physics knowledge to physics knowledge 500 years ago, there is still potential for a comparable jump.

So I tend to think in terms of either warfare with weapons involving short-term reversible changes to fundamental physical constants and/or the Planck-scale structure of space-time, or careless experiments of this kind, resulting in either case in total destruction of the local neighborhood.


In this sense, a singleton does indeed have better chances compared to multipolar scenarios, both in terms of much smaller potential for "warfare" and in terms of having a much, much easier time coordinating the risks of "civilian activities".

However, I am not sure whether the notion of singleton is well-defined; a system can look like a singleton from the outside and behave like a singleton most of the time, but it still needs to have plenty of non-trivial structure inside and is still likely to be a "Society of Mind" (just like most humans look like singular entities from the outside, but have plenty of non-trivial structure inside themselves and are "Societies of Mind").

To compare, even the most totalitarian states (our imperfect approximations of singletons) have plenty of factional warfare, and powerful factions destroy each other all the time. So far those factions have not used military weapons of mass destruction in those struggles, but this is mostly because those weapons have been relatively unwieldy.

And even without those considerations, experiments in search of new physics are tempting, and balancing risks and rewards of such experiments can easily go wrong even for a "true singleton".

Are you familiar with "grabby aliens"?


The answer must be "yes", since it's mentioned in the post