Extrapolating a straight line that far means visible cosmic consequences: a normal planet or star rather suddenly starting to behave very much unlike what known physics leads us to expect, growing very bright, or very dim, or disappearing completely.
Two objections to this.
1) Maybe Dyson spheres and the Kardashev scale are just good old sci-fi tropes and completely off the mark (same for computronium or hedonium). Maybe a superintelligence simply doesn't do that. We don't know. We might be squirrels imagining that a superintelligence ought to stockpile astronomical quantities of nuts, visible from kiloparsecs away.
2) Even granting Dyson spheres, the Kardashev scale, and the rest, our most complete catalogue of individually resolved stars in the Milky Way (Gaia DR3) covers on the order of 1% of them; most of the rest are hidden behind interstellar dust. In the vast majority of cases, we simply couldn't tell the difference between a star occulted by a Dyson sphere and one obscured by dust. And even among the stars we do see, only a small fraction have been systematically searched for the thermal infrared excess a Dyson sphere should re-emit at around 300 K; the dedicated surveys (IRAS, then WISE with the G-HAT project) have screened tens of thousands of candidates, not hundreds of millions (a quick estimate of why those surveys look in the mid-infrared is sketched below). As for stars outside our own galaxy, the fraction we can resolve individually is negligible.
Even stronger: if we saw a star go dark from a Dyson sphere, we'd probably be assimilated or swept aside almost immediately. Near-C probes are another likely consequence of a full singularity within our past light cone.
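A quick way to see why those surveys look in the mid-infrared: a shell that re-radiates its star's power at roughly room temperature peaks near 10 μm. Here is a minimal sketch using Wien's displacement law; the 5800 K, 300 K, and 50 K figures are my own illustrative round numbers, not anything from the comment above.

```python
# Wien's displacement law: peak wavelength of a blackbody at temperature T.
# Temperatures below are illustrative round numbers (Sun-like photosphere,
# a ~room-temperature Dyson shell, a much colder shell), not measured values.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength (in micrometers) at which a blackbody at T kelvin peaks."""
    return WIEN_B / temperature_k * 1e6

for t_k in (5800, 300, 50):
    print(f"T = {t_k:4d} K -> spectrum peaks near {peak_wavelength_um(t_k):5.1f} um")
# ~0.5 um (visible) for the star itself, ~10 um (mid-infrared) for a 300 K shell:
# the band covered by IRAS and WISE, hence the search for infrared excess there.
```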
1) I agree, that is quite possible. Also, a Dyson sphere still radiates JUST AS MUCH, OR NEARLY AS MUCH, AS THE HOST STAR DOES; that's just physics. The best you can do is shift the spectrum toward the microwave region, masking it against the CMB, and even there one would see odd luminosity bumps from specific directions (see the energy-balance sketch below). All of this is subject to the known physics, which we have no reason to think is violated anywhere.
2) That is a good point. We do know where the dust is from observations, since it occults many stars at once in a very specific way, but maybe you are right; I have not looked into it in enough detail. So, if your point is that Kardashev II is so rare that we do not see observational signatures of non-grabby aliens just living their lives, then yes, I guess it is not impossible; I have not checked the literature in the field recently. I guess there might be something like an SAI out there that consumed its creators and is sitting inside one or many Dyson spheres, not a bubble of computronium expanding at near-light speed.
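To put a number on the "radiates just as much" point in 1): the shell's temperature is fixed by energy balance, so pushing the waste heat down toward the microwave requires an implausibly large structure. A minimal sketch under simple assumptions (a perfectly absorbing spherical shell around a Sun-like star, radiating outward only); the specific radii are just illustrative.

```python
import math

SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.496e11            # astronomical unit, m
LIGHT_YEAR = 9.461e15    # metres in a light year

def shell_temperature_k(radius_m: float, luminosity_w: float = L_SUN) -> float:
    """Equilibrium temperature of a shell that re-radiates the full stellar output."""
    return (luminosity_w / (4 * math.pi * SIGMA * radius_m**2)) ** 0.25

for r_au in (1, 10, 100):
    print(f"Shell at {r_au:3d} AU -> T ~ {shell_temperature_k(r_au * AU):5.1f} K")

# Radius needed for the waste heat to come out near the CMB temperature (~2.7 K):
r_cmb_m = math.sqrt(L_SUN / (4 * math.pi * SIGMA * 2.7**4))
print(f"Radius for a ~2.7 K shell: ~{r_cmb_m / LIGHT_YEAR:.2f} light years")
# ~390 K at 1 AU, and a shell of order a third of a light year across to hide
# in the microwave; either way, the full stellar luminosity still comes out.
```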
If there is a Great Filter, I’d still rather humans be safe and in control when we get there. Just because the future of anything originating from Earth might be bounded doesn’t mean we shouldn’t want to capture that future, rather than letting AI convert the Earth to paperclips before the Filter hits.
By the way, there’s this cool map someone made with like ~100 solutions to the Fermi paradox, in case that helps you consider more possibilities: https://www.lesswrong.com/posts/ifW65CHb3d9NWFBvQ/fermi-paradox-solutions-map
I don't disagree with any of it; my point is that the odds of an AI paperclipping the Earth and then randomly stopping, with no observational consequences visible from far away, are pretty low.
What if the Filter inevitably happens before you reach astronomical signatures, and doesn’t itself create a signature? Or the signature is brief enough, and civilizations uncommon enough, that 150 years is not enough time to catch an example.
That's definitely possible! And it is a worry. But there is no reason to worry specifically about AI x-risk, since there is no indication that an SAI will have this oddly specific behavior.
You use the Copernican principle (along with the fact that there are almost certainly billions of planetary systems in our past light cone) to conclude that (1) it is unlikely that we're the only technological civilization in our past light cone. Then you go on to use the Fermi paradox. But why in your mind does the Fermi paradox not lead you to believe that we probably are the only civilization in our past light cone (in spite of the Copernican principle)?
In other words, aren't you cherry picking by letting your argument rely on the lack of any evidence of a civilization-destroying AI's having reached our solar system while acting as if your argument (and in particular the component of your argument I have labelled (1) above) is immune to the lack of any evidence of a (non-human non-destroyed) civilization's having reached our solar system? Do you claim that an AI that destroyed the civilization that created it (if such existed in our past light cone) would have been more likely to expand than a civilization that avoided being destroyed would have been?
And I don't yet see how the concept of a Great Filter throws any light on the question you are wrestling with. If there is a Great Filter that prevents almost every simple form of life from evolving into a galaxy-spanning civilization, then why not just conclude that humanity has already passed this Great Filter?
I suspect that the OP's argument implies that we should double down on priors which imply the Great Filter and are yet consistent with observations. Suppose that the Milky Way has
On the other hand, if the probability that interstellar travel is impossible was
What I struggle to understand is how many bits of evidence already point a perfect Bayesian in the direction of interstellar travel being possible. In a manner similar to Yudkowsky, I suspect that the amount of evidence pointing towards known physics is at least on the order of thousands of bits.
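For concreteness, here is what "thousands of bits" buys a Bayesian, in a minimal sketch of my own (arbitrary round numbers, not the commenter's): each bit of evidence doubles the odds ratio.

```python
import math

def odds_after_bits(prior_odds: float, bits: float) -> float:
    """Posterior odds after receiving `bits` bits of evidence on top of the prior odds."""
    return prior_odds * 2.0 ** bits

for bits in (10, 100, 1000):  # arbitrary round numbers for illustration
    posterior = odds_after_bits(1.0, bits)  # starting from even (1:1) odds
    print(f"{bits:4d} bits -> odds of ~10^{math.log10(posterior):.0f} : 1")
# Even a few hundred bits corresponds to astronomically lopsided odds, which is
# the sense in which "thousands of bits" makes a claim effectively certain.
```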
I have no good gears-level model of AI, and the expert views are all over the place (see AI Doc), so the only remaining argument is my physical intuition and a black-box view, which, in this case, is based on two principles (best called rules of thumb): the Copernican principle and the law of straight lines (trends tend to continue until they don't).
Let's start with the latter: the main argument for x-risk, as I understand it, is that AI will become smarter than us in every way, become an agent with its own goals, and just take the Earth from us and use it for its own purposes. This seems like a reasonable possibility, given how things are going so far. Extrapolating a straight line that far means visible cosmic consequences: a normal planet or star rather suddenly starting to behave very much unlike what known physics leads us to expect, growing very bright, or very dim, or disappearing completely. Copernicanism says that if it can happen here, it probably happens in countless places elsewhere. And yet we do not see it. Why?
Hanson's Grabby Aliens is a proposed way out, where we do not see the aliens coming until they are upon us. This, too, does not survive even a modicum of Copernicanism: the Universe has been around for 14 billion years, our Local Group of galaxies for almost 10 billion years; it contains over a trillion stars, is only about 20 million light years across, and many of its stars are as old as the Sun. So, if Grabby Aliens had developed anywhere in the Local Group at any point except the last ~0.1% of the Universe's lifetime (roughly its light-crossing time), we would have been eaten already. And maybe we have been, but then the argument is moot.
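For scale, a minimal back-of-envelope sketch of that timing claim (my own arithmetic, using the round numbers from the paragraph above):

```python
# Rough numbers taken from the paragraph above; both are order-of-magnitude only.
UNIVERSE_AGE_YR = 14e9            # ~14 billion years
LOCAL_GROUP_DIAMETER_LY = 20e6    # ~20 million light years across

# An expansion front moving at near light speed covers one light year per year,
# so the crossing time in years is roughly the diameter in light years.
crossing_time_yr = LOCAL_GROUP_DIAMETER_LY
fraction_of_cosmic_history = crossing_time_yr / UNIVERSE_AGE_YR

print(f"Crossing time: ~{crossing_time_yr / 1e6:.0f} million years")
print(f"Fraction of the Universe's age: ~{100 * fraction_of_cosmic_history:.2f}%")
# ~20 Myr, i.e. roughly 0.1% of cosmic history: any grabby expansion launched in
# the Local Group earlier than that sliver ago should already have reached us.
```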
So we have a contradiction between the law of straight lines and Copernicanism: we expect visible effects of an AIpocalypse elsewhere, yet there are none. What gives? The Fermi paradox says that naive Copernicanism fails: we are not like anyone else, to the degree that we cannot see anyone else. Either we are unique and the universe is not a good guide, or the straight lines stop being straight and peter out before Big Bad things happen.
There are still a fair few orders of magnitude between now and Astronomically Visible Big Bad: probably between 3 (Kardashev I) and 13 (Kardashev II). For comparison, humanity's energy consumption has increased by 7 orders of magnitude so far:
[Chart: growth of humanity's energy consumption over time. Credit: Gemini.]
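A back-of-envelope check of the 3-to-13 figure (my own round numbers: roughly 20 TW for humanity today, Sagan's 10^16 W for Type I, and about one solar luminosity for Type II):

```python
import math

# Illustrative round numbers, not precise figures.
CURRENT_POWER_W = 2e13    # humanity's total power use today, very roughly ~20 TW
KARDASHEV_I_W = 1e16      # Sagan's benchmark for a Type I civilization
KARDASHEV_II_W = 4e26     # roughly one solar luminosity, for Type II

for name, target_w in (("Kardashev I", KARDASHEV_I_W), ("Kardashev II", KARDASHEV_II_W)):
    gap = math.log10(target_w / CURRENT_POWER_W)
    print(f"{name:12s}: ~{gap:.1f} orders of magnitude above today's consumption")
# ~2.7 and ~13.3 orders of magnitude: consistent with the 3-to-13 range above.
```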
So, unless we are unique, some limiting factor will come into play somewhere between now and 5-7 orders of magnitude from now, a bit less than what humanity has scaled so far.
What might this limiting factor be? Who knows. It may well be self-annihilation, but that is materially different from the predictions of AI x-risk driven by unbounded resource consumption growth by indifferent, amoral bots. So there is a big unknown ahead, and that means the probability of x-risk, the proverbial p(doom), is not a useful number to wave around.
So there you have it: no good reason to worry about conventional x-risk, a lot of good reasons to worry about the Great Filter.