The earliness of life's appearance on Earth isn't amazingly consistent with that appearance being a filter-break. It suggests either that abiogenesis is relatively easy or that panspermia is easy (as I noted, in the latter case abiogenesis could be as hard as you like, but that doesn't explain the Great Silence).

Frankly, it's premature to be certain of "abiogenesis rare, no panspermia" before we've even gotten a close look at Earthlike exoplanets.

I'll note that most of the theorised catastrophes in that vein look like either "planet gets ice-nined", "local star goes nova", or "blast wave propagates at lightspeed forever". The first two of those are relatively-easy to work around for an intelligent singleton, and the last doesn't explain the Fermi observation since any instance of that in our past lightcone would have destroyed Earth.

I've read most of that paper (I think I've seen it before, although there could be something else near-identical to it; I know I've read multiple[1] papers that claim to solve the Fermi Paradox and do not live up to their hype). TBH, I feel like it can be summed up as "well, there might be a Great Filter somewhere along the line, therefore no paradox". I mean, no shit there's probably a Great Filter somewhere, that's the generally-proposed resolution that's been going for decades now. The question is "what is the Filter?". And saying "a Filter exists" doesn't answer that question.

We've basically ruled out "Earthlike planets are rare". "Abiogenesis is rare" is possible, but note that you need "no lithopanspermia" for that one to hold up since otherwise one abiogenesis event (the one that led to us and which is therefore P = 1, whether on Earth or not) is enough to seed much of the galaxy. "Intelligence is rare" is a possibility but not obviously-true, ditto "technology is rare". Late filters (which, you will note, the authors assume to be a large possibility) appear somewhat less plausible but are not ruled out by any stretch. So yeah, it's still a wide-open question even if there are some plausible answers.

  1. ^

    The other one I recall, besides Grabby Aliens, was one saying that Earthly life is downstream of a lithopanspermia event so there's no Fermi paradox; I don't get it either.

There is also the possibility of the parties competing over it to avoid looking "soft on AI", which is of course the ideal.

To the extent that AI X-risk has the potential to become partisan, my general impression is that the more likely split is Yuddite-right vs. technophile-left. Note that it was a Fox News reporter who put the question to the White House Press Secretary following Eliezer's TIME article, and a Republican (John Kennedy) who talked about X-risk in the Senate hearing in May, while the Blue-Tribe thinkpieces typically take pains to note that they think X-risk is science fiction.

As a perennial nuclear worrier, I should mention that while any partisan split is non-ideal, this one's probably preferable to the reverse insofar as a near-term nuclear war would mean the culture war ends in Red Tribe victory.

>The fourth thing Bostrom says is that we will eventually face other existential risks, and AGI could help prevent them. No argument here, I hope everyone agrees, and that we are fully talking price.

>It is not sufficient to choose the ‘right level of concern about AI’ by turning the dial of progress. If we turn it too far down, we probably get ourselves killed. If we turn it too far up, it might be a long time before we ever build AGI, and we could lose out on a lot of mundane utility, face a declining economy and be vulnerable over time to other existential and catastrophic risks.

I feel it's worth pointing out that for almost all X-risks other than AI, while AI could solve them, there are also other ways to solve them that are not themselves X-risks; thus, when talking price, only the marginal gain from using AI should be considered.

In particular, your classic "offworld colonies" solve most of the risks. There are two classes of thing where this is not foolproof:

  1. Intelligent adversary. AI itself and aliens fall into this category. Let's also chuck in divine/simulator intervention. These can't be blocked by space colonisation at all.
  2. Cases where you need out-of-system colonies to mitigate the risk. These pose a thorny problem because absent ansibles you can't maintain a Jihad reliably over lightyears. The obvious, albeit hilariously-long-term case here is the Sun burning out, although there are shorter-term risks like somebody making a black hole with particle physics and then punting it into the Sun (which would TTBOMK cause a nova-like event).

Still, your grey-goo problem and your pandemic problem are fixed, which makes the X-risk "price" of not doing AI a lot less than it might look.

Should be noted that while there are indeed tons of people who will fault you for taking steps to survive GCR, in the aftermath of a GCR most of those people will be dead (or at the very least, hypocrites who did the thing they're upset about) and thus not able to fault you for anything. History is written by, if not the winners, at least the survivors.

Admittedly, this is contingent on the GCR happening, but I think there's a pretty high chance of nuclear war in particular in the near future (the Paul Symon interview in particular has me spooked; a random person saying that a "linear path" leads to "major-power conflict" would be meh, but a Five Eyes intelligence chief saying it - well, I might be right or wrong in my guesses at what's prompting that, but I'll take the oracle statement at face value, and that's P(WWIII) ~> 0.5).

I guess it's a claim that advanced civilizations don't hit K2, because they prefer to live in virtual worlds, and have little interest in expanding as fast as possible.

This would be hard. You would need active regulations against designer babies and/or reproduction.

Because, well, suppose 99.9% of your population wants to veg out in the Land of Infinite Fun, while the other 0.1% thinks a good use of its time is popping out as many babies as possible. Maybe they can't make sure their offspring agree with this (hence the mention of regulations against designer babies, although even then natural selection will be selecting at full power for any genes producing a tendency to do this), but they can brute-force through that by having ten thousand babies each - you've presumably got immortality if you've gotten to this point, so there's not a lot stopping them. Heck, they could even flee civilisation to escape the persecution and start their own civilisation, which rapidly eclipses the original in population and (if the original's not making maximum use of resources) power.

Giving up on expansion is an exclusive Filter, at the level of civilisations (they all need to do this, because any proportion of expanders will wind up dominating the end-state) but also at the level of individuals (individuals who decide to raise the birth rate of their civilisations can do it unilaterally unless suppressed). Shub-Niggurath always wins by default - it's possible to subdue her, but you are not going to do it by accident.

(The obvious examples of this in the human case are the Amish and Quiverfulls. The Amish population grows rapidly because it has high fertility and high retention. The Quiverfulls are not currently self-sustaining because they have such low retention that 12 kids/woman isn't enough to break even, but that will very predictably yield to technology. Unless these are forcibly suppressed, birth rate collapse is not going to make the human race peter out.)
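The takeover arithmetic here is stark enough to be worth running. A toy model, with all numbers made up purely for illustration (0.1% expanders, a net reproduction factor of ~4x per generation for them, bare replacement for everyone else):

```python
# Toy model of the "any expanders eventually dominate" claim above.
# All numbers are invented for illustration, not estimates.
vegging = 999_000_000    # 99.9% who veg out in the Land of Infinite Fun
expanders = 1_000_000    # 0.1% who maximise reproduction
R_VEG = 1.0              # vegging population exactly replaces itself (they're immortal anyway)
R_EXP = 4.0              # expanders: e.g. ~10 kids each, with imperfect retention

generations = 0
while expanders < vegging:
    vegging *= R_VEG
    expanders *= R_EXP
    generations += 1

print(generations)  # -> 5: the 0.1% outnumber everyone else within five generations
```

With replacement-level fertility in the majority, any exponentially growing minority crosses 50% of the population in logarithmically few generations; forcing R_EXP back down to ~1.0 (i.e. the regulations mentioned above) is the only lever that changes the end state.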

Anyway, I should drag my head out of this fun space and go do something more pragmatically useful. I intend to help our odds of survival, even if we're ultimately doomed based on this anthropic reasoning.

Yes! Please do! I'm not at all trying to discourage people from fighting the good fight. It's just, y'know, I noticed it and so I figured I'd mention it.

Your scenario does not depend on FTL.

However, its interaction with the Doomsday Argument is more complicated and potentially weaker (assuming you accept the Doomsday Argument at all). This is because P(we live in a Kardashev ~0.85 civilisation) depends strongly in this scenario on the per-civilisation P(Doom before Kardashev 2); if the latter is importantly different from 1 (even 0.9999), then the vast majority of people still live in K2 civilisations and us being in a Kardashev ~0.85 civilisation is still very unlikely (though less unlikely than it would be in No Doom scenarios where those K2+ civilisations last X trillion years and spread further).
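A back-of-the-envelope version of that observer-weighting claim, with made-up observer counts (the specific magnitudes are assumptions; only their ratio matters):

```python
# Toy observer-weighting calculation for the argument above.
# Assumed, made-up magnitudes: a K2 civilisation hosts vastly more
# observers over its lifetime than its pre-K2 phase did.
PRE_K2_OBSERVERS = 1e11   # observers per civilisation before ~K2 (rough human-history scale)
K2_OBSERVERS = 1e20       # further observers per civilisation that survives to K2

def p_observer_is_pre_k2(p_doom_before_k2):
    """Probability a randomly chosen observer lives pre-K2,
    given each civilisation is doomed pre-K2 with this probability."""
    pre = PRE_K2_OBSERVERS                         # every civilisation has a pre-K2 phase
    post = (1 - p_doom_before_k2) * K2_OBSERVERS   # only survivors get the K2 phase
    return pre / (pre + post)

for p in (0.5, 0.99, 0.9999, 1.0):
    print(p, p_observer_is_pre_k2(p))
# Even at p = 0.9999, only ~1 in 100,000 observers is pre-K2.
```

The pre-K2 fraction only becomes non-negligible once P(Doom before K2) gets within roughly PRE_K2_OBSERVERS/K2_OBSERVERS of 1, which is why the argument needs that probability to be so extreme.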

I'm not sure how sane it is for me to be talking about P(P(Doom)), even in this sense (and frankly my entire original argument stinks of Lovecraft, so I'm not sure how sane I am in general), but in my estimation P(P(Doom before Kardashev 2) > 0.9999) < P(FTL is possible). AI would have to be really easy to invent and co-ordination to not build it would have to be fully impossible - whether Butlerian Jihad can work or not for RL humanity, it seems like it wouldn't need much difference in our risk curves for it to definitely happen, and while we have gotten to a point where we can build AI before we can build a Dyson Sphere, that doesn't seem like it's a necessary path. I can buy that P(AI Doom before Kardashev 3) could be extremely high in no-FTL worlds - that'd only require that alignment is impossible, since reaching Kardashev 3 STL takes millennia and co-ordination among chaotic beings is very hard at interstellar scales in a way it's not within a star system. But assured doom before K2 seems very weird. And FTL doesn't seem that unlikely to me; time travel is P = ϵ since we don't see time travellers, but I know one proposed mechanism (quantum vacuum misbehaving upon creation of a CTC system) that might ban time travel specifically and thus break the "FTL implies time travel" implication.

It also gets weird when you start talking about the chance that a given observer will observe the Fermi Paradox or not; my intuitions might be failing me, but it seems like a lot, possibly most, of the people in the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" world would see aliens (due to K2 civilisations being able to be seen from further, and see much further - an Oort Cloud interferometer could detect 2000BC humanity anywhere in the Local Group via the Pyramids and land-use patterns, and detect 2000AD humanity even further via anomalous night-time illumination).

Note also that among "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" worlds, there's not much Outside View evidence that P(Human doom before K2) is high as opposed to low; P(random observer is Us) is not substantially affected by whether there are N or N+1 K2 civilisations the way it is by whether there are 0 or 1 such civilisations (this is what I was talking about with aliens breaking the conventional Doomsday Argument). So this would be substantially more optimistic than my proposal; the "P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae" scenario means we get wiped out eventually, but we (and aliens) could still have astronomically-positive utility before then, as opposed to being Doomed Right Now (though we could still be Doomed Right Now for Inside View reasons).
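The N-vs-N+1 point can be made numerically with the same kind of made-up observer counts:

```python
# Toy version of the N vs N+1 point above: with aliens in the picture,
# P(random observer is one of us) barely moves between N and N+1
# K2 civilisations, unlike between 0 and 1. Observer counts are invented.
US = 1e11        # observers in our (pre-K2) civilisation
PER_K2 = 1e20    # observers per K2 civilisation

def p_random_observer_is_us(n_k2):
    return US / (US + n_k2 * PER_K2)

ratio_10_11 = p_random_observer_is_us(10) / p_random_observer_is_us(11)
ratio_0_1 = p_random_observer_is_us(0) / p_random_observer_is_us(1)
print(ratio_10_11)  # ~1.1: one more K2 civilisation barely changes it
print(ratio_0_1)    # ~1e9: 0 vs 1 is where all the evidential weight sits
```

So the update against "we survive to K2" from our own observations is driven almost entirely by whether any K2 civilisations exist, not by how many.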

Pardon my ignorance; I don't actually know what SIA and SSA stand for.