Robin Hanson's writing on Grabby Aliens is interesting to me, since it seems to be one of the more sound attempts to apply mathematical reasoning to the Fermi Paradox.  Unfortunately, it still relies on anthropics, so I wouldn't be the slightest bit surprised if it were off by an order of magnitude (or two) in either direction.

What I would like to know (preferably from someone with a strong astronomy background) is: how confident are we that there are no (herein defined) Extremely Obvious Aliens?

Extremely Obvious Aliens

Define Extremely Obvious Aliens in the following way:

  1. They colonize every single star that they encounter by building a Dyson swarm around it that reduces visible radiation by at least 50%
  2. They expand in every direction at a speed of at least 0.5C
  3. They have existed for at least 1 billion years

If such aliens existed, it should be really easy to detect them: just look for a cluster of galaxies, at least 0.5 billion light years across, that is 50% dimmer than it should be.

How confident are we that there are no Extremely Obvious Aliens?

As with Grabby Aliens, it is safe to say there are no Extremely Obvious Aliens in the Solar System.  Nor, for that matter, are there any Extremely Obvious Aliens within 0.5BLY of the Milky Way Galaxy.

So, for my astronomy friends: what is the largest radius for which we can confidently say there are 0 Extremely Obvious Aliens?  The best answer I can come up with is Sloan (SDSS), whose main galaxy survey was done out to a redshift of z=0.1, which I think corresponds to a distance of about 1.5BLY.

Is this accurate?  Namely, is it safe to say (with high confidence) there are no Extremely Obvious Aliens within 1.5BLY of Earth?
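For what it's worth, here is a quick sanity check of that redshift-to-distance conversion, a sketch assuming the astropy package and its built-in Planck18 cosmology (my choice of cosmology, not part of the original question):

```python
# Hedged sanity check: convert survey redshifts to comoving distances.
from astropy.cosmology import Planck18  # one standard cosmology choice
import astropy.units as u

for z in (0.1, 3.0):
    d_gly = Planck18.comoving_distance(z).to_value(u.lyr) / 1e9
    print(f"z = {z}: comoving distance ~ {d_gly:.1f} billion light years")

# z = 0.1 comes out to roughly 1.4 billion light years, so 1.5BLY is in
# the right ballpark; z = 3 (see the first answer below) is roughly 21.
```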

Is there another survey that would let us raise this number even higher?

What is the theoretical limit (using something like JWST)?  

Has someone written a good paper answering questions like these already?


Daphne_W

May 03, 2022


On the Sloan (SDSS) webpage, there's a list of ongoing and completed surveys, some of which went out to z=3 (roughly 10 billion years ago/away), though the more distant ones didn't use stellar emissions as output. Here is a youtube video visualizing the data that eBOSS (a quasar study) added in 2020; it shows it alongside visible/near-infrared galaxy data (blue to green datasets), which go up to about 6 billion years. Variations in density along particular directions can be explained by local obstructions (the galactic plane, gas clouds, nearby galaxies), while radially symmetric variations can be explained by different instruments' suitability to different timescales.

Just eyeballing it, it doesn't look like there are any spherical irregularities more than 0.5 billion light years across.

If you want to look more carefully, here are instructions for downloading the dataset or specific parts of it.
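If you'd rather start from code, here is one hedged way to pull a small slice of the spectroscopic galaxy sample, a sketch assuming the astroquery package and SDSS's SpecObj table schema (both are my assumptions, not from the answer above):

```python
# Sketch: fetch a sample of SDSS galaxy positions and redshifts.
from astroquery.sdss import SDSS

query = """
SELECT TOP 1000 ra, dec, z
FROM SpecObj
WHERE class = 'GALAXY' AND z BETWEEN 0.05 AND 0.15
"""
table = SDSS.query_sql(query)  # returns an astropy Table
print(table[:5])
```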

You should also note that Dyson spheres aren't just stars becoming invisible. Energy is conserved, so every star with a Dyson sphere around it emits the same amount of radiation as before; it's just shifted to a lower part of the spectrum. For example, a Dyson sphere located at 1 AU from the Sun would emit black body radiation at about 280 K. A Dyson sphere at 5 AU would be able to extract more negentropy at the cost of more material, and have a temperature of roughly 125 K; shells hundreds of AU out would get down to tens of kelvin, low enough to show up on WMAP (especially once redshifted by distance).  I actually did my Bachelor thesis reworking some of the math on a paper that looked for circular irregularities in the WMAP data and found none.
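As a back-of-envelope check on those temperatures (my own sketch, using the same equilibrium convention that yields the ~280 K figure at 1 AU; a shell radiating only from its outer surface would run somewhat hotter):

```python
# Equilibrium temperature of a Dyson sphere re-emitting the Sun's output.
import math

L_SUN = 3.828e26   # solar luminosity, W
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11      # astronomical unit, m

def dyson_temperature(r_au: float) -> float:
    """Black-body temperature at radius r_au, planetary-equilibrium convention."""
    r = r_au * AU
    return (L_SUN / (16 * math.pi * SIGMA * r**2)) ** 0.25

for r_au in (1, 5, 100):
    print(f"r = {r_au:>3} AU -> T ~ {dyson_temperature(r_au):.0f} K")
# Gives ~278 K at 1 AU, ~124 K at 5 AU, and ~28 K at 100 AU.
```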

Carl Feynman

May 02, 2022


In 1982 or so, Eric Drexler had the idea of looking at photographs of the nearer galaxies and seeing if any had circular areas of darkness in them, suggesting a spreading civilization.  He looked in an atlas of galaxies that had one galaxy per page, so a few hundred galaxies at most. At least that’s what I remember from talking to him about it at the time.

Since then, automated galaxy surveys have looked at millions of galaxies, with “funny looking” ones reviewed by humans.  That’s how Julianne Dalcanton found Comet Dalcanton, for example: the program kicked it out and said “What’s with this funny looking galaxy?”  And when she looked at it, she realized it was not a galaxy, but a comet.  Perhaps this kind of survey would turn up a civilized galaxy, but I don’t know how to estimate the probability of it being detected.

Here's a 2015 study that looked for Dyson spheres in 1359 galaxies: https://iopscience.iop.org/article/10.1088/0004-637X/810/1/23

Ilio

May 08, 2022


The difficulty with this question is that we can easily miss « signs » that would be obvious with a better understanding of our world. As an example, imagine one century from now we have extremely good simulations of the emergence of life and the formation of our solar system, and it turns out that our Moon is 10^-345 unlikely (unless something deliberately tried to get one), and it turns out that the emergence of our life critically depends on tides. In retrospect, we would say the signs were as obvious as the Moon in the sky; we just couldn’t catch them before we understood our own emergence better.

Notice that I don’t believe this particular SF scenario (it may come from Isaac Asimov, not sure). The point is: there are so many possible scenarios where our capability to recognize obvious signs, at least in retrospect, critically depends on the state of our sciences. How could we deal with this kind of Knightian uncertainty?

See Randall Munroe for a more striking explanation of this idea: https://xkcd.com/638/


A civilization somehow constraining itself to merely use Dyson swarms that block 50% of the light is implausible; it's much better to just create a small black hole (say, by concentrating very powerful lasers into a small region of space) and throw it into the star. That way you store up all the star's mass-energy for as long as possible until you use the Penrose process to extract however much of the energy you want from it. In fact, you could even launch these small black holes at close to the speed of light and aim them at stars, preserving their energy for the time it takes the slower-moving parts of civilization to show up.

And if this civilization converts every star it encounters, then this should show up as a completely dark sphere in the universe, which ought to be extremely obvious.

this should show up as a completely dark sphere in the universe

Which, notably, we do see (https://en.m.wikipedia.org/wiki/Boötes_void). Though such voids don't conflict with our models of how the universe would end up naturally.

Unlike what you would expect with black holes, we can see that the Boötes void contains very little mass by looking for gravitational lensing and the movement of surrounding galaxies.

In our simulations, we find it overwhelmingly likely that any such spherical volume of an alien civ would be much larger than the full moon in the sky. So no need to study distant galaxies in fine detail; look for huge spheres in the sky. 
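For a rough sense of scale, here is the arithmetic behind that claim (my own sketch, not taken from the simulations referenced above):

```python
# Angular diameter of a 0.5-billion-light-year dark sphere at various
# distances, ignoring cosmological corrections (fine for a rough check).
import math

DIAMETER_GLY = 0.5
for dist_gly in (1.5, 5.0, 10.0):
    theta_deg = 2 * math.degrees(math.atan(DIAMETER_GLY / 2 / dist_gly))
    print(f"at {dist_gly:>4} Gly: ~{theta_deg:.0f} degrees across")
# ~19, ~6, and ~3 degrees respectively, versus ~0.5 degrees for the
# full moon: even very distant spheres would dwarf it.
```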

I have not seen anyone do something like this, but it sounds like something Anders Sandberg (FHI) would do. If you want a lead, or want to find someone who might be interested in researching it, he might be it.

How sure are we that dark matter isn't computronium?

1:10^12 odds against the notion, easily. About as likely as the Earth being flat.

  1. Dark matter does not interact locally with itself or visible matter. If it did, it would experience friction (like interstellar gas, dust and stars) and form into disk shapes when spiral galaxies form into disk shapes. A key observation of dark matter is that spiral galaxies' rotational velocity behaves as one would expect from an ellipsoidal halo rather than a disk.
  2. The fraction of matter that is dark does not change over time, nor does the total mass of objects in the universe. Sky surveys do not find more visible matter further back in time.
  3. The fraction of matter that is dark does not change across space, even across distances that have not been bridgeable since the inflation period of the big bang. All surveys show spherical symmetry.
  4. By the laws of thermodynamics, computation requires work. Low-entropy energy needs to be converted into high-entropy energy, such as heat. We do not see dark matter absorb or emit energy.

I can imagine no situation where something that is a required part of computational processes could ever present itself to us as dark matter, and no mistake in physics thorough enough to allow it.

1:10^12 odds against the notion

How did you get this figure? Two one-in-a-million implausibilities? 

computation requires work

Quantum computers are close to reversible. Each halo could be a big quantum coherent structure, with e.g. neutrinos as ancillary qubits. The baryonic world might be where the waste information gets dumped. :-) 

Before learning about reversible computation only requiring work when bits are deleted, I would have treated each of my points as roughly independent, with about 10^1.5, 10^4, 10^4, and 10^2.5 odds against respectively. The last point is now down to 10^1.5.

Dumping waste information in the baryonic world would be visible.

Dumping waste information in the baryonic world would be visible.

Not if the rate is low enough and/or astronomically localized enough. 

It would be interesting to make a model in which fuzzy dark matter is coupled to neutrinos, in a way that maximizes rate of quantum information transfer, while remaining within empirical bounds. 

Contra #1: Imagine you order a huge stack of computers for massively multiplayer gaming purposes. Would you expect it might collapse under its own weight, or would you expect the builders to be cautious enough that it won’t collapse like passive dust in free fall?

Contra #4: nope. Landauer’s principle implies that reversible computation costs nothing (until you want to read the result, which then costs next to nothing times the size of the result you want to read, irrespective of the size of the computation proper). Present day computers are obviously very far from this limit, but you can’t assume « computronium » is too.

#2 and #3 sound stronger, imo. Could you provide a glimpse of the confidence intervals and how they vary from one survey to the next?

#1 - Caution doesn't solve problems; it finds solutions if they exist. You can't use caution to ignore air resistance when building a rocket. (Though collapse is not necessarily expected; there's plenty of interstellar dust.)

#4 - I didn't know about Landauer's principle, though going by what I'm reading, you're mistaken on its interpretation: it takes 'next to nothing' times the part of the computation you throw out, not the part you read out, where the part you throw out increases in proportion to the negentropy you're getting. No free lunch, still, but one whose price is deferrable to the moment you run out of storage space.

That would make it possible for dark matter to be part of a computation that hasn't been read out yet, though not necessarily a major part. I'm not sure the below reasoning is correct, but: the Landauer limit with the current 2.7 K universe as heat bath is 0.16 meV per bit. This means that the 'free' computational cycle you get from the fact that you only need to pay at the end would, to a maximally efficient builder, reward them with 0.16 meV extra for every piece of matter that can hold one bit. We don't yet have a lower bound for the neutrino mass, but the upper bound is 120 meV. If the upper bound is true, that would mean you would have to cram about 10^3 bits into a neutrino before using it as storage nets you more than burning it for energy (by chucking it into an evaporating black hole).
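The arithmetic checks out; here is a quick reproduction of the numbers above (my own sketch):

```python
# Landauer limit at the CMB temperature, and the break-even bit count
# for storing in a neutrino versus burning it for its mass-energy.
import math

K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K
T_CMB = 2.725          # current CMB temperature, K

landauer_mev = K_B_EV * T_CMB * math.log(2) * 1e3  # meV per erased bit
print(f"Landauer limit at {T_CMB} K: {landauer_mev:.2f} meV/bit")  # ~0.16

m_nu_mev = 120.0       # upper bound on neutrino mass, meV
print(f"Break-even: ~{m_nu_mev / landauer_mev:.0f} bits per neutrino")  # ~740
```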

I don't have data for #2 and #3 at hand. It's the scientific consensus, for what that's worth.

1-3: You are certainly right that cold and homogeneous dark matter is the scientific consensus right now (at least if by consensus we mean « most experts would either think that’s true or admit there is no data strong enough to convince most experts it’s wrong »).

The point I’m trying to make is: as soon as we say « computronium », we are outside of normal science. In normal science, you don’t suppose matter can choose to deploy itself like a solar sail and use that to progressively reach the outer regions of the galaxy where dangerous supernovae are less frequent. You suppose that if it exists it has no aim, then find the best non-weird model that fits the data.

In other words, I don’t think we can assume that the scientific consensus is automatically 10^4 or 10^8 strong evidence for « how sure are we that dark matter is not a kind of matter that astrophysicists usually don’t bother to consider? », especially when the scientific consensus also includes « we need to keep spending resources on figuring out what dark matter is ». You do agree that’s also the scientific consensus, right? (And not just to keep labs open, but really to add data and visit and revisit new and old models, because we’re still not sure what it is.)

4: In the theory of purely reversible computation, the size of what you read dictates the size you must throw out. Your computation is however more sound than the theory of pure reversible computation, because pure reversible computation may well be as impossible as perfectly analog computation. Now, suppose all dark matter emits 0.16 meV per bit. How much computation per second and per kilogram would keep the thermal radiation well below our ability to detect it?

Contra #4: nope. Landauer’s principle implies that reversible computation costs nothing (until you want to read the result, which then costs next to nothing times the size of the result you want to read, irrespective of the size of the computation proper). Present day computers are obviously very far from this limit, but you can’t assume « computronium » is too.

 

Reading the results isn't the only time you erase bits.  Any time you use an "IF" statement, you have to either erase the branch that you don't care about or double the size of your program in memory.

Any time you use an « IF » statement: 1) you’re not performing a reversible computation (i.e. your tech is not what minimises energy consumption); 2) the minimal cost is one bit, irrespective of the size of your program. Using MWI you could interpret this single bit as representing « half the branches », but not half the size in memory.
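To make the one-bit point concrete, here is a small illustration (my own sketch; the Toffoli gate is the standard reversible building block, not something from the thread above):

```python
# A Toffoli (controlled-controlled-NOT) gate computes AND reversibly by
# keeping its inputs around; it is its own inverse, so nothing is erased
# until you choose to reset the output bit, and that reset is the one-bit cost.
def toffoli(a: int, b: int, c: int) -> tuple:
    """Flips c iff a and b are both 1; a and b pass through unchanged."""
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        _, _, out = toffoli(a, b, 0)  # ancilla starts at 0, ends as AND(a, b)
        print(f"AND({a},{b}) = {out}")
```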

Unless it's computronium made of non-interacting matter, fairly sure. It's not just distant galaxies; there's plenty of dark matter in the Milky Way too.

Non-interaction is exactly what you want if shielding things from causal interaction lets you slip down the entropy gradient slower, or something even more exotic.

This is an interesting question. Thank you for asking it.

Why do we think aliens would do things with stars? How can we be sure that our reasoning isn't similar to that of a medieval nobleman trying to gauge the power of the US today?

"How many castles are in their realm? No castles? What, they can field hundreds of thousands of men-at-arms but no horse? What sort of backwards land is this? Is this another realm like the Aztec empire I heard rumours about? Enormous gold reserves and huge armies but none of the refinements of the civilized world! Let's invade!"

You can see how they would make incorrect assumptions if they got to ask the questions! 

95% of the universe is 'dark'. What if there's some special method that makes Dyson Spheres obsolete? 

The medieval nobleman would be confused by all our fields left fallow for the grass to grow, all the woodland right next to cities. It must be a desolate land, laid waste by war. People need to cultivate food, people need to burn wood for fuel. Maybe there was a plague?

I think aliens are supremely obvious and outweigh the visible universe. I think they have some superior method of manipulating matter and energy. Maybe they are using pre-existing stocks of dark matter rather than tapping stars; we would see stars disappearing if they did that. Or maybe they have some way to generate matter ex nihilo (which does admittedly go against thermodynamics).

Anyway, we must be many tiers below them. Maybe if you increase the energy of your particle accelerator 100,000,000x you find something exciting that breaks all the rules! Maybe you have to be a superintelligence to get a grasp of what's really going on (quantum gravity) and do some unfathomably complicated technical feat with the resources of a Type 1.5 civilization to unlock the next tier of technologies.

Either there isn't any life, or nanotech and Dyson Spheres aren't the final level of achievement. We can't even unite the fundamental forces. So I think the latter is the case.

 

A medieval lord would be very confused by our current world, yes.

He wouldn't fail to notice it. No, we don't have castles. But a medieval lord who saw New York City would be very unlikely to say 'hah, no castles, what weak primitives these people are', and even less likely to say 'what? No grain fields? Clearly there is no civilization here!'

It's possible that aliens are more advanced in ways we don't understand. However, they would have to be advanced in ways that result in them entirely ignoring thermodynamics to have no use for the huge amounts of energy in stars.

The medieval lord doesn't get to see New York. He's asking about things he knows well: troops, castles, woodland, farmland. Towns and cities are small and less significant, remember? All societies are agrarian! He doesn't get to see what we want to show him; he's asking us questions, and we're answering and wishing we could say 'yes, but you should be asking about our arsenal of nuclear submarines that fire 12 missiles each with 8 warheads that can incinerate an entire army anywhere in the world within 30 minutes'.

We're looking at stars, the things we know well. Stars, black holes, planets and dust are 5% of the universe. The entire visible universe is not huge, nor does it have much energy; it is dwarfed by the dark stuff we don't understand.

The entire universe is being torn apart by two mysterious forces that we cannot identify! We are staring at something enormously powerful. If we can't identify life in the 5% we understand well, life is very likely in the other 95%.

I'm partial to this idea. People think of a Dyson swarm as a pinnacle technology when it's really more of an engineering problem than a scientific problem for boring modern people. If there are highly advanced aliens out there, I see it being very possible that their expansion is completely invisible to us because they're operating on a level we can't perceive.