I was originally going to post this as a comment in the UFAI & Great Filter thread, but since I noticed that my comment didn't include a single word about AI, I decided to make a new discussion thread instead and kept writing to bring the quality up from comment to post. The essay is intended to be thought-provoking; I don't have the required knowledge in the related fields and mostly pieced this together by browsing Wikipedia, but hopefully it gets you thinking!

Personally, I think that when considering the Drake Equation it's important to note that it took remarkably long for intelligent life to evolve here, and that we're on a finite timeline. The Drake Equation contains the rate of star formation, the average number of planets per star, even a variable for how long a civilization keeps signaling detectably into outer space, and so on, but it's also important to pay attention to the fact that the average conditions of the universe have changed over time.
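For reference, the equation itself is just a product of constant factors. Here's a minimal sketch (the parameter values below are purely illustrative placeholders, not estimates I'm defending):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# N = number of civilizations in our galaxy whose signals we might detect.
# All values below are illustrative placeholders only.

R_star = 1.5    # average rate of star formation in our galaxy (stars per year)
f_p    = 0.5    # fraction of stars that have planets
n_e    = 1.0    # potentially life-supporting planets per star that has planets
f_l    = 0.1    # fraction of those planets that actually develop life
f_i    = 0.01   # fraction of life-bearing planets that develop intelligence
f_c    = 0.1    # fraction of intelligent species that signal detectably
L      = 10_000 # years a civilization keeps signaling detectably

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected number of detectable civilizations: {N:.2f}")
```

Notice that every factor is a constant: nothing in the product depends on when in the history of the universe you evaluate it, which is exactly the gap I'm pointing at.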

On Earth, life has existed for almost 4 billion years, yet it has only been 43 years since our civilization first visited the Moon and little more than a century since the invention of radio. That is a very small time frame, particularly if we consider that ~4 billion years is between a quarter and a third of the age of the universe itself.

When we consider the Great Filter, we can at least note that there have been several mass extinction events which failed to end all life on Earth. I think it's a valid argument that, for example, a sufficiently powerful impact could have ended all life, or set the evolution of life back by any number of degrees. And it has been ~70 years since the start of the Manhattan Project, and already humanity has the potential to fight a thermonuclear war that could end human life on the planet, or roll back the game of life through nuclear winter. Mars could have been habitable: there is no liquid water on Mars now, though there should have been earlier.

The habitable zone, as theorized, is considerably narrow. For example, if at any point in the history of (life on) Earth the average surface temperature had climbed to 200 °C for whatever reason, I'm pretty sure that would have made our planet like all the other planets observed so far, in that they don't seem to contain intelligent life. What I mean by this is that even though a vast number of planets reside in the habitable zone of some star, they have to maintain those conditions for a very long time, and that's just one variable. Which, by the way, is an important thing to note when talking about things like the greenhouse effect. Some people seem to have this idea of a "natural balance" that occurs automatically. It's as if those people are not looking at the "natural balance" on some of the other planets. Where is the mechanism, anyway? Milankovitch cycles? Even algae managed to start an ice age, according to some theories; humans certainly have the potential to do more harm than that, and it's not as if we only have to care about extinction events that we bring upon ourselves.

In addition, it seems to me frequently neglected that conditions within the universe have changed considerably as it has aged. Earth is no longer constantly bombarded by impacts, it takes time for stars and planets and so forth to attain their form, and the average age of stars has changed. In other words, the habitability of the entire universe changes over time, though not in a particularly synchronous fashion. If this does not seem reasonable, consider the following: was the likelihood of finding intelligent life in any location of the universe when it was 1 billion years old the same as it is today? How about when the universe was 4 billion years old? 8 billion? Most stars are between 1 and 10 billion years old, according to Wikipedia.

The human species itself has gone through some sort of population bottleneck, a historical token worth reflecting upon: had the event been worse and those few remaining members of our ancestry perished, planet Earth would arguably still be without intelligent civilizations even today.

 

This line of reasoning, in my opinion, favors two points:

1. Since our intelligence took almost 4 billion years to evolve, any event within that time that could have wiped out all the progress would have occurred before the rise of intelligent civilization - and so all those events contribute to the Great Filter.

2. The often-contemplated possibility that human intelligence is among the earliest to arise, if life was considerably less likely in the earlier stages of the universe (which is quite compatible with the fact that we have not observed life elsewhere - or at least complementary to the likelihood of intelligent life). In other words, if our species is within the first 5% of intelligent civilizations to arise, that should be reflected in our observations. Of course the same is true for the last 5%, etc. This is an important point, because that is not the kind of reliability science usually rests upon.

 

Remember how life taking almost 4 billion years to evolve on Earth amounted to roughly a third (more precisely ~2/7) of the age of the entire universe? Well, our solar system is only 4.6 billion years old. Life on Earth has been evolving practically since the formation of our solar system, and at no point in that time were all the replicators wiped out.

So, any thoughts?

 

[anonymous]

I think recent research into star formation rates in the universe actually might shed a lot of light on the situation and make some sense of our position in space and time. (My handle is CellBioGuy but I very nearly pulled off a double major in astronomy back in college, know a number of astronomy grad students, and follow astronomy closely.)

ARXIV, PDF Popular press

To make a long story short, a survey was made looking into deep space, and back in time up to 11 billion years, using a pretty good proxy of star formation (the emission lines produced by emission nebulae that are lit up like neon lights by the ultraviolet light of freshly-born huge stars). After doing some fancy math to correct for the expansion of space and the like, the conclusions are striking - the modern average rate of star formation in the universe is less than 1/30 the peak rate of 11 billion years ago, and half of the stars in the universe are over 9 billion years old. To make matters even more interesting, the empirically-derived relationship between time and star formation actually converges to a finite number of stars when you project it into the future, and at infinity reaches a total number of stars born only 5% more than currently exist today.
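To see why a declining star-formation rate implies that almost all the stars that will ever exist already exist, here is a rough numerical sketch. It assumes a simple exponential decline tuned to the two numbers above (peak 11 billion years ago, current rate 1/30 of peak); this is not the survey's actual fit, just an illustration of why the integral converges:

```python
import math

# Toy model: star formation rate declines exponentially after its peak,
# tuned so the rate today (11 Gyr after the peak) is 1/30 of the peak rate.
# NOT the survey's actual parameterization - just an illustration.
t_since_peak   = 11.0    # Gyr between the peak of star formation and today
decline_factor = 30.0    # peak rate divided by the current rate
tau = t_since_peak / math.log(decline_factor)   # e-folding time, ~3.2 Gyr

# With SFR(t) = SFR_peak * exp(-t / tau), the integrals are analytic
# (both in units of SFR_peak * Gyr):
stars_so_far  = tau * (1 - 1 / decline_factor)  # formed between the peak and now
stars_to_come = tau * (1 / decline_factor)      # formed from now until t = infinity

print(f"e-folding time: {tau:.2f} Gyr")
print(f"Future stars as a fraction of those already formed: "
      f"{stars_to_come / stars_so_far:.1%}")    # a few percent, same ballpark as the ~5% above
```

The exact percentage depends on the functional form you fit, but any decline this steep leaves only a few percent of the final stellar population still to be born.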

95% of stars that will ever exist already exist.

This makes sense and has some interesting implications when you think in terms of galactic evolution and the history of the universe. Grand spirals like our galaxy are nearly the only places in the universe where star formation happens for billions of years on end and produces generations of stars rich in heavy elements. Dwarf galaxies go through one burst of star formation and the supernovas from the big stars blow all the gas out of their weak gravity, stopping star formation. Big elliptical galaxies form with nearly no angular momentum, so all their gas falls to the center, most of it becomes stars in one huge burst of low-metal stars, and the rest gets blown out of the galaxy when the central black hole gets activated. Grand spirals only wind down slowly with time as most of their gas stays away from the center due to angular momentum but their gravity makes sure that (almost) all the gas stays bound, getting more and more enriched in heavy elements with time.

...That is, until they collide with each other, which, as most galaxies are part of clusters, does eventually happen. It will happen to us in another four to five billion years, with Andromeda. When this happens, the colliding galaxies go through one burst of star formation and then settle down into another elliptical galaxy. So over time, the number of star-forming galaxies only decreases.

So, consider our position in space and time. We are in a grand spiral galaxy, which is the only sort of place that really produces high-metallicity stars. Our star system formed about 1/3 of the way through our galaxy's productive life before it collides with Andromeda (probably more like halfway through its complement of stars, seeing as even spirals settle down with age), and we find ourselves currently about 2/3 of the way through its productive lifetime. I would call this an utterly typical position for an origin of life as we know it. Seeing how rapidly star formation is winding down across the universe as a whole, we do not find ourselves at an anomalously early point in time, even given that we are only 13 billion years into an apparently open-ended universe that will be around for at the very least trillions of years.

Our position becomes even more typical when you consider that Earth will only be habitable for at most another billion years before inevitable geochemical and astronomical processes turn us into another Venus, and that, hilariously enough, for a chunk of time before then the carbon content of our atmosphere will probably be so low that photosynthetic production will be close to nil. We find ourselves evolving near the end of our planet's habitability window (consistent with complex life taking a while), in a system that formed about halfway through the window that produces clement systems.

This typical position only becomes a problem when you assume that intelligent systems either last longer than, say, a few million years, or spread beyond their points of origin and consume the universe. The sheer number of ways that these two things could fail to be true leads me to think that they just don't happen, and that both our position in space and time and our nature are fairly typical of things that are smart enough to figure out where and when they are.

Here's a link to a discussion in an open thread a while back where I posit that one rarely mentioned reason that humanity came to have a technological civilization is that Earth's history was unusually conducive to the formation of large hydrocarbon accumulations just waiting for us to find and exploit: http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7bcy

A planet needs to host life for hundreds of millions of years to generate the levels of oil, gas and coal that we find. Imagine our civilizational trajectory if there were no possibility of an industrial revolution.

At least I feel like there should be a factor for "fossil energy availability" in the Drake equation.

Is there something special enough about coal, oil and gas kickstarting an industrial revolution that couldn't be replicated using (larger amounts of) wood as fuel?

The worst issues I can think of are that burning wood produces more toxic carbon monoxide than coal, produces ash, and may not pack as much energy per unit volume or weight. Still, this doesn't sound like it would have been enough to prevent its use.

[anonymous]

We already appropriate something like a fifth of primary photosynthetic productivity via agriculture and other harvests of non-agricultural productivity, and it's unclear how much more you can appropriate before you start really messing with ecologies we depend upon (more than we already are, at least). To give an idea of how this translates into energy terms, a third of the vast American corn harvest is turned into biofuel to produce something like ten percent of our automotive fuel, itself only a portion of energy use. (The high-intensity farming used to cram large amounts of corn productivity per acre itself uses a lot of that fuel, so the net energy produced is less than it would seem. Between running the fields and the fermenters, I believe the consensus is that a given amount of ethanol energy actually requires between 50% and 80% of that energy in fossil fuel to be burned to produce it.)

Geologically processed fossil fuels are ridiculously dense in energy and represent a portion of hundreds of millions of years of photosynthetic productivity.

For one thing, you can't really power a moving vehicle without a pretty energy-dense fuel. It's hard to imagine an industrial revolution without the ability to move huge amounts of freight.

If you were just using the wood to generate electricity, there would be some probably pretty low maximum generation rate that could be sustained without quickly depleting all wood resources.

It's interesting to me that our culture has vilified fossil fuels to the extent that many people don't want to admit that our modern society depends on them.

A lot of points here I find plausible. On the changing habitability of the universe, Earth has quite a lot of heavy elements. Those are, as I understand it, generally thought to be formed by supernovas, and so they should be rarer the further back you go in the history of the universe, since the earlier you go, the fewer supernovas there would have been. Some of the heavy elements are definitely involved in life (indeed, though I was mostly thinking of further up the periodic table, carbon and oxygen are relatively heavy compared to the majority of the matter in the universe, and are not produced in any quantity by the normal activity of young stars, though they don't require supernovas). Further, just having enough heavier elements around may be important in planet formation.

Of course, this is all quite speculative. Perhaps some substantial amount of heavy elements was produced in the big bang. And very big stars have very short life cycles, so if there were a lot of very big stars when the universe was young, there might have been a lot of supernovas very early on. Or there might be other ways for heavy elements to form. But obviously if none of these were the case, and so heavy elements were much rarer in the early universe, that may mean that there are unlikely to be many star systems with remotely earthlike planets that are older than our solar system.

Although we don't have good models yet for how it happens, there's increasing evidence that planets can form early around metal-poor stars. So one of the most obvious time-based issues - needing time for planets to form around the right stars - seems not that likely to matter. See here for prior discussion on LW.

Aren't metal-poor planets exactly the kind of planets that we would least expect to be able to create advanced technological alien life?

That's a good point. The precise degree of metal-poorness matters here. In fact, lower levels of carbon, silicon and phosphorus make life in general probably less likely to form, much less civilization. And almost all complex life involves some other elements to some extent (copper, iron and selenium are common examples).

And even if you had pure CHNOPS life on a planet with just, say, CHNOPS plus some iron, nickel and silicon in large quantities, it is plausible that you wouldn't get to advance much at all. The problem is that you don't need large amounts of other chemical elements to get into space, so as long as you have enough to get to something close to our tech level, you might be ok. One possible related barrier, though, is a lack of fossil fuels to help bootstrap a civilization - but this only adds at most around fifty or a hundred million years or so to the minimum time frame.

But overall, your point seems to be strong: this may be an example where the Drake Equation's highly approximate nature (essentially treating all planets the same) is making a severe estimation error and we shouldn't consider such planets at all relevant for Filter discussions.

I think the Drake equation isn't really the way to estimate the probability of intelligent life - there is just too much we don't know to use the inside view. Take one thing like the Moon: we just don't know how important it was for intelligent life on Earth (it definitely played a significant role, stabilizing the tilt of Earth's axis and helping the sea => land transition with tidal forces), nor how rare such big satellites are.

To me the only way to look at it is to use the outside view, and reason like this:

  1. Look at us and, like you said, realize it took us about 5 billion years to exist, which is roughly half of the lifespan of Earth. We don't know whether we were exceptionally fast or exceptionally slow, but it gives an order of magnitude for the time it takes for intelligent life to appear on a planet with good conditions.

  2. Realize that it would be an exceptional coincidence if another intelligent species appeared within 0.1% of that timescale of us - so if there is intelligent life somewhere else, it's likely to be 5 million years ahead of us, or 5 million years behind us. 5 million years behind means they are still apes.

  3. Look around us, and see no trace of alien life - no Von Neumann probes visiting our system, no Dyson spheres or ringworlds around. And yet, if they were 5 million years ahead of us, we would very likely see such things.

So either there is a "great filter" and most of them didn't make it to space, or just, well, intelligent life is rare, because it takes in average 20 billions of years to appear even if conditions are good, and we were lucky to be fast enough to exist before the Sun became a red giant. Which that there will be occasional intelligent life in the universe, but so far apart that we didn't see them yet. If our nearest neighbor exists since 0.3 billion of years, but is 0.5 billions of light year from us, well, we'll sill need a long time before communicating with them.

[anonymous]

http://www.independent.co.uk/news/science/earth-was-hit-by-gamma-ray-burst-from-space-in-eighth-century-say-scientists-8460351.html

What do you gather from that bit of news in the context of the Great Filter? Any thoughts?

[This comment is no longer endorsed by its author]

Some questions about the Fermi paradox I've been thinking about recently, which can probably be answered by physicists:

1: Is there signal attenuation in space? If there is, we should not expect to see evidence of alien radio transmissions from farther than a certain distance unless they're deliberately intended as interstellar communication. This drastically reduces the level of evidence that not seeing them provides for the nonexistence of aliens.

2: We can so far barely "see" planets from other stars. We need to use techniques like gravitational microlensing or monitoring transits of the parent star to notice them, and yet it seems to me like a whole planet should reflect far more energy than the relatively small transmissions created by artificial means. Why should we expect to be able to detect alien transmissions if we can't yet detect planets?

Is there signal attenuation in space?

Yes, the strength of radio waves falls off according to an inverse square law. Moreover, more modern radio signals are essentially wide-band and look much closer to noise than classical signals.
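A back-of-the-envelope sketch of what the inverse square law does to an unbeamed "leakage" transmission; the transmitter power and distances are arbitrary example values:

```python
import math

# Flux from an isotropic (unbeamed) transmitter falls off as P / (4 * pi * d^2).
P            = 1.0e6     # example transmitter power: a 1 MW broadcast
LIGHT_YEAR_M = 9.461e15  # metres per light year

for d_ly in (1, 10, 100, 1000):
    d = d_ly * LIGHT_YEAR_M
    flux = P / (4 * math.pi * d ** 2)   # watts per square metre at the receiver
    print(f"{d_ly:5d} ly: {flux:.1e} W/m^2")

# At 100 light years the flux is ~1e-31 W/m^2. Spread over a wide band, that is
# hopelessly far below what a radio telescope can pull out of the noise; only a
# very narrowband or tightly beamed signal stands a chance of being noticed.
```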

Most of the rest of your comment, though, is based on a faulty premise: that the best way to detect civilizations is by radio transmission. You are correct that unless one is aiming very tight beams, it is unlikely that one will detect that. But we have many other methods of potential detection. There have been, for example, serious searches for Dyson spheres. In general, we don't see in our galaxy or any other galaxy any signs of large-scale engineering.

Moreover, the more serious problem is that civilizations expand: if one is expanding at, say, 0.1% of the speed of light, it takes on the order of 100 million years to expand across a galaxy. We know that galaxies (and even Earth-like planets) have been around for much longer than that. There's no obvious a priori reason, for example, why an intelligent species couldn't have evolved 100 million years ago, or even earlier. So we'd expect to see signs of the usual slow growth if there were any intelligent civilizations out there, because the chance that they reach space travel in the same timespan as us is vanishingly small, and thus we should expect there to be some old, spread-out civilizations.
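The 100-million-year figure is just the galaxy's diameter divided by the assumed expansion speed; a quick check with rough numbers:

```python
# Rough check of the galaxy-crossing time at a slow expansion speed.
GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter in light years, roughly
EXPANSION_SPEED_C  = 0.001     # expansion front moving at 0.1% of the speed of light

crossing_time_years = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C
print(f"Time to cross the galaxy: {crossing_time_years:.0e} years")   # 1e+08 years
```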

The upshot of all of this is that lack of radio signals simply doesn't matter.

What would we expect to see in terms of signals from, e.g., Andromeda if alien life arose there 100 million years ago and is currently inhabiting almost all of it, assuming they weren't deliberately sending signals to us?

Well, the most obvious thing is we'd probably see signs that stars are being used. For example, if a sizeable fraction of the stars in Andromeda had Dyson spheres, or had undergone stellar lifting, we'd be able to see that from here because the stars' profiles would look different. In the case of a Dyson sphere, for example, we'd expect to see much more of the radiation in the infrared range. In fact, if one has Dysoned a large fraction of the stars of a galaxy, we should be able to notice this for galaxies orders of magnitude farther away than Andromeda.
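Why a Dyson sphere shows up in the infrared is just blackbody physics: the shell still has to radiate away the star's entire luminosity, but from a much larger surface, so it glows at a much lower temperature. A minimal sketch for a complete shell at 1 AU around a Sun-like star (the shell radius and the simple one-sided-radiator assumption are simplifications):

```python
import math

# The shell re-radiates the star's full luminosity L from a sphere of radius R,
# so sigma * T^4 = L / (4 * pi * R^2).
L_SUN = 3.828e26   # solar luminosity, W
R     = 1.496e11   # shell radius = 1 astronomical unit, m
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

flux = L_SUN / (4 * math.pi * R ** 2)   # ~1360 W/m^2, the solar constant
T = (flux / SIGMA) ** 0.25
peak_wavelength_um = 2898.0 / T         # Wien's displacement law, in micrometres

print(f"Shell temperature: {T:.0f} K")                          # roughly 390 K
print(f"Peak emission wavelength: {peak_wavelength_um:.1f} um") # roughly 7 um, mid-infrared
```

So instead of a point source peaking in visible light, you'd see something with the Sun's total power output glowing in the mid-infrared, which is a very odd-looking "star".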

[anonymous]

Well, the most obvious thing is we'd probably see signs that stars are being used.

Why?

I mean, I get the assumption here -- that we've experienced a geometric surge in energy use, economic growth and several other things as a result of the Industrial Revolution, and it hasn't disappeared yet -- but it seems like a potentially important one to question. What would you expect to see if you're wrong - if the Cold War-era visions of high-energy, superadvanced civilizations, or their Singularitarian cousins, the visions of superpowerful AGI and/or uploaded humans, are based on the assumption that things won't regress to the mean?

What would you ever see from Andromeda, in that case? Hell, what would you even see from Alpha Centauri if that were the case?

That's a good point. It is possible that the problem is that large-scale engineering projects and the like simply don't happen. And if that's the case, then it may be that things would look very similar to what we see. In a similar vein, there may be some as yet undiscovered loophole or exception to the laws of thermodynamics that makes harvesting stars unnecessary (this seems unlikely). But not all of these things are purely about energy consumption. While a Dyson sphere or ringworld is nice from an energy standpoint, they also provide living space for growing populations. Yes, it could be that populations simply level off (and Japan and Western Europe do show that that can happen). But at this point we now need to make a lot of assumptions about what every intelligent species does. If only a small fraction try to harvest a lot of energy, and only a small fraction don't control their population growth, then one would expect to probably see something. The idea that not a single species out there tries these sorts of things seems about as surprising as there simply not being anyone out there to do it.

[anonymous]

But at this point we now need to make a lot of assumptions about what every intelligent species does.

Not so -- you only need to posit that sustained growth spurts like the one we're living through are anomalies, and subject to some rather significant limits.

The essay you linked to is essentially focused on what happens if you stay at a single star. If anything, it should be an argument as to why to expect things to spread out: having a lot more planets and stars means one has a lot more energy at one's disposal.

[anonymous]

Nnnooooo, read it again.

But the chief limitation in the preceding analysis is Earth’s surface area—pleasant as it is. We only gain 16 years by collecting the extra 30% of energy immediately bouncing away, so the great expense of placing an Earth-encircling photovoltaic array in space is surely not worth the effort. But why confine ourselves to the Earth, once in space? Let’s think big: surround the sun with solar panels. And while we’re at it, let’s again make them 100% efficient. Never-mind the fact that a 4 mm-thick structure surrounding the sun at the distance of Earth’s orbit would require one Earth’s worth of materials—and specialized materials at that. Doing so allows us to continue 2.3% annual energy growth for 1350 years from the present time.

At this point you may realize that our sun is not the only star in the galaxy. The Milky Way galaxy hosts about 100 billion stars. Lots of energy just spewing into space, there for the taking. Recall that each factor of ten takes us 100 years down the road. One-hundred billion is eleven factors of ten, so 1100 additional years. Thus in about 2500 years from now, we would be using a large galaxy’s worth of energy.

In other words, keep up the paltry growth rate listed at the start of the essay, take the fantastically favorable assumptions that go with it (like 100% efficiency), and you find that even if we surround every star with a Dyson sphere we can't keep up with growth, because within relatively short timescales we are consuming a whole galaxy's worth of energy. (At that point, we have to stop growing unless you think we can somehow harness ~3% of another galaxy within the next year after capping out our own, or for that matter spread across the entire Milky Way in 2500 years -- this is trivially, obviously absurd unless you bring FTL into the equation, and even attaining a few percent of c is currently unthinkable in practical terms.)
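The arithmetic behind those numbers is worth checking for yourself; here's a quick sketch (the current world power use and the star count are rough round figures):

```python
import math

GROWTH          = 0.023    # 2.3% annual energy growth, as in the essay
WORLD_POWER     = 1.3e13   # rough current world power use, ~13 TW
SUN_LUMINOSITY  = 3.8e26   # total solar output, W
STARS_IN_GALAXY = 1.0e11   # rough star count for the Milky Way

years_per_tenfold = math.log(10) / math.log(1 + GROWTH)   # ~100 years per factor of 10

years_to_sun    = math.log10(SUN_LUMINOSITY / WORLD_POWER) * years_per_tenfold
years_to_galaxy = years_to_sun + math.log10(STARS_IN_GALAXY) * years_per_tenfold

print(f"One factor of 10 every {years_per_tenfold:.0f} years")
print(f"Using the Sun's entire output in ~{years_to_sun:.0f} years")       # close to the essay's 1350 years
print(f"Using the whole galaxy's output in ~{years_to_galaxy:.0f} years")  # close to the essay's ~2500 years
```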

Also, you're missing the point of the essay -- that this picture emerges from the most ludicrously favorable assumptions in favor of continued growth: that we can focus on growing the energy sector to the exclusion of worrying about anything else, that we don't need to worry about thermodynamic limits to efficiency, and that we don't need to worry about a piddly little thing like the speed of light once we've enclosed the Sun in a Dyson sphere and need to keep up the growth rate by expanding into the galaxy at large. Even with all that working in its favor, we eventually run out of galaxy and have to stop growing. Once you understand that, the only thing left to argue about is where the inflection point lies.

Saying that limits to growth are an argument in favor of civilizations being more likely to spread out is like saying that the possibility of extinction is an argument in favor of some arbitrary evolutionary adaptation in a population of organisms. It's getting the important causal bits backwards.

we should be able to notice this for galaxies orders of magnitude farther away than Andromeda.

Only if it happened long enough ago for the light to reach us. Andromeda is close (2.5 million light years), but for a galaxy two orders of magnitude farther away (250 million light years), this starts to be significant: maybe there is a galaxy whose Dysonification started 250 million years ago and was completed 200 million years ago, but we don't see it because the light hasn't reached us yet.

efm on IRC points out that we CAN see planets these days.

http://en.wikipedia.org/wiki/List_of_extrasolar_planets_directly_imaged

Still not a large number of them, but a sign that we could be on the verge of being functionally able to detect alien civilizations that aren't actually trying to contact us.