by Ruby, 6th May 2019

The FHI paper, Eternity in Six Hours, is very optimistic about what can be done:

In this paper, we extend the Fermi paradox to not only life in this galaxy, but to other galaxies as well. We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods.

Is this paper reasonable? Which parts of its assertions are most likely to be mistaken?

This question was inspired by a conversation with Nick Beckstead.


3 Answers

In the order that they appear in the paper, these are a few of the parts that seemed iffy to me. Some of them may be easily shown to be either definitely iffy, or definitely not-so-iffy, with a little more research:

As for nuclear fusion, the standard fusion reaction is ³H + ²H → ⁴He + n + 17.59 MeV. In MeV, the masses of deuterium and tritium are 1876 and 2809, giving an η of 17.59/(1876 + 2809) = 0.00375. We will take this η to be the correct value, because though no fusion reactor is likely to be perfectly efficient, there is also the possibility of getting extra energy from the further fusion of helium and possibly heavier elements.

I'm not sure what existed at the time the paper was written, but there are now proposals for fusion rockets, and using the expected exhaust velocities from those might be better than using the theoretical value from DT fusion.
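For what it's worth, here is a quick check of the η figure and what it implies per kilogram of D-T fuel (just arithmetic on the numbers quoted in the paper):

```python
# Check of the paper's eta for D-T fusion, and the energy released per kg of fuel.
E_REACTION_MEV = 17.59   # energy released per D-T reaction, MeV
M_DEUTERIUM_MEV = 1876   # deuterium rest mass, MeV/c^2
M_TRITIUM_MEV = 2809     # tritium rest mass, MeV/c^2
C = 2.998e8              # speed of light, m/s

eta = E_REACTION_MEV / (M_DEUTERIUM_MEV + M_TRITIUM_MEV)
print(eta)               # ~0.00375, matching the paper
print(eta * C**2)        # ~3.4e14 J released per kg of D-T fuel burned
```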

The overall efficiency of the solar captors is 1/3, by the time the solar energy is concentrated, transformed and beamed back to Mercury.

I feel like I'm the only one that thinks this Dyson sphere method is a little dubious. What system is going to be used to collect energy using the captors and send it to Mercury? How will it be received on Mercury? The total power collected toward the end is more than W. If whatever process is used to disassemble the planet is 90% efficient, the temperature required to radiate the waste heat over Mercury's surface area is about 7000K. This is hotter than the surface of the sun, and more than twice the boiling point of both iron and silica. In order to keep this temperature below the boiling point of silica, we would need the process to be better than 99.98% efficient, to attach Mercury to a heat sink many times the size of Jupiter, or to limit power to about W. If melting the planet isn't our style, we need to limit power to about W.
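This is the kind of estimate I have in mind: Stefan-Boltzmann radiation from Mercury's surface area. The total power plugged in below is an assumed illustrative figure, not a number from the paper.

```python
import math

# Radiating temperature needed to shed a given waste-heat power from Mercury's surface.
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W / (m^2 K^4)
R_MERCURY = 2.44e6                   # Mercury's radius, m
AREA = 4 * math.pi * R_MERCURY**2    # ~7.5e13 m^2

def radiating_temperature(waste_heat_watts):
    """Blackbody temperature at which Mercury's surface radiates the given power."""
    return (waste_heat_watts / (SIGMA * AREA)) ** 0.25

# Assumed total disassembly power, with 10% lost as waste heat (90% efficient process).
P_total = 1e23                       # W -- an illustrative assumption only
print(radiating_temperature(0.1 * P_total))   # ~7000 K for this assumed power
```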

I don't think this kills their overall picture. It "only" means the whole process takes a few orders of magnitude longer.

Of the energy available, 1/10 will be used to propel material into space (using mass-drivers for instance [37]), the rest going to breaking chemical bonds, reprocessing material, or just lost to inefficiency. Lifting a kilo of matter to escape velocity on Mercury requires about nine mega-joules, while chemical bonds have energy less than one mega-joule per mol. These numbers are comparable, considering that reprocessing the material will be more efficient than simply breaking all the bonds and discarding the energy.

The probes will need stored energy and reaction mass to get into the appropriate orbit, unless all the desired orbits intersect Mercury's orbit. Maybe this issue can be mitigated by gradually pushing Mercury into new orbits via reaction force from the probes. Or maybe it's just not much of a limitation. I'm not sure.
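To check the nine-megajoule figure and get a feel for what extra maneuvering might add, a quick sketch (the additional 3 km/s of delta-v is my own placeholder assumption, not a number from the paper):

```python
# Kinetic energy per kilogram to reach Mercury escape velocity,
# plus an assumed extra delta-v for maneuvering into a different orbit.
V_ESCAPE_MERCURY = 4250.0   # m/s, Mercury's surface escape velocity

def kinetic_energy_per_kg(delta_v):
    return 0.5 * delta_v**2  # J/kg, non-relativistic is fine at these speeds

print(kinetic_energy_per_kg(V_ESCAPE_MERCURY) / 1e6)          # ~9 MJ/kg, as the paper says
print(kinetic_energy_per_kg(V_ESCAPE_MERCURY + 3000) / 1e6)   # ~26 MJ/kg with an assumed extra 3 km/s
```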

Because practical efficiency never reaches the theoretical limit, we’ll content ourselves with assuming that the launch system has an efficiency of at least 50%

This seems pretty optimistic. In particular, consider making a system that launches large objects at .5c. Doing this over the distance from the sun to Earth requires an average force of about N per kg. For .9c and .99c, it requires about 8 and about 35 times this force/mass, respectively. I don't know what the limiting factor will be on these things, but this seems pretty high, and suggests that the launcher would need to be a huge structure, and possibly a bigger project than the Dyson swarm.
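This is the estimate I have in mind: the relativistic kinetic energy gained divided by the launch distance, assuming a roughly one-AU accelerating run. The exact ratios depend on how the acceleration is profiled, so my numbers only roughly match the factors above.

```python
import math

C = 2.998e8     # speed of light, m/s
AU = 1.496e11   # assumed launch run: roughly the Sun-Earth distance, m

def avg_force_per_kg(beta, distance=AU):
    """Average force per unit payload mass = relativistic kinetic energy gained / distance."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2 / distance   # N per kg

base = avg_force_per_kg(0.5)
print(base)                           # on the order of 1e5 N per kg at 0.5c
print(avg_force_per_kg(0.9) / base)   # ~8x the 0.5c figure
print(avg_force_per_kg(0.99) / base)  # ~39x the 0.5c figure under this simple estimate
```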

I also have some complaints about the notation, which I will post later, and possibly other things, but this is what I have for now.

One of the things the paper does is make the overall determination that power available >> power needed. So as part of assessing that, I identified all the components which contributed to each and examined how realistic/sensitive to assumptions they are.

To begin with, here are the factors from the paper which impact the energy required for colonizing the galaxy.

  • The mass of each payload / self-replicating probe to be sent.
    • Mass of the payload is a linear factor in the energy required.
  • The travel speed.
  • The efficiency of the fuel mass.
    • This is well-understood physics. From a shallow investigation, this seems to be a highly sensitive point in the paper, since 1) we have not yet constructed rocket/fuel-source combinations with the specific impulse values required by the paper, and 2) the specific impulse is part of an exponential factor in the energy required (see the sketch after this list).
  • The number of probes to be sent, as a function of the number of destinations and the number of probes sent per destination as redundancy against loss of probes due to collision or other issues.
    • 1. I outsourced checking the paper’s work to calculate the number of reachable stars at different speeds. Physics contacts reported that the calculations made in the paper all appeared correct.
    • 2. I did not examine the redundancy requirements in depth, since it wasn't easy to reconstruct how they derived the result; my intuition is that this is not a crucial aspect of the paper, especially at lower speeds, which are my main interest.
  • Launch system feasibility and efficiency.
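Regarding the exponential sensitivity to specific impulse flagged above, here is a sketch of how the required fuel mass ratio scales with exhaust velocity, using the standard relativistic rocket equation. The exhaust velocities are illustrative assumptions, not the paper's figures.

```python
import math

C = 2.998e8   # speed of light, m/s

def mass_ratio(delta_v, v_exhaust):
    """Initial-to-final mass ratio from the relativistic rocket equation:
    delta_v = c * tanh((v_exhaust / c) * ln(m0 / m1)), solved for m0 / m1."""
    return math.exp((C / v_exhaust) * math.atanh(delta_v / C))

# Fuel mass ratio for a 0.5c mission at a few assumed exhaust velocities:
for v_ex in (0.03 * C, 0.1 * C, 0.3 * C):
    print(v_ex / C, mass_ratio(0.5 * C, v_ex))
# ~9e7, ~240, and ~6 respectively -- modest changes in exhaust velocity
# swing the fuel requirement by orders of magnitude.
```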

Note that some of these variables directly affect the per-probe energy costs, while others (specifically, redundancy against expected collisions, and travel speed and hence the number of reachable stars) affect the total energy required.

In comments to this answer, I'll examine each in further depth.

Analyzing the rocket-equation section, I found the following statement:

The relativistic rocket equation is

Δv = c · tanh((I_sp / c) · ln(m_0 / m_1))

where Δv is the difference in velocity, m_0 is the initial mass of the probe, m_1 the final mass of the replicator, and the term I_sp denotes the specific impulse of the fuel burning process. The term I_sp can be derived from η, the proportion of fuel transformed into energy during the burning process [24].

The fact that we can fully derive I_sp from the fuel energy-transformation efficiency seemed weird to me, so I looked it up in the underlying reference and found the following quote (I slightly cleaned up the math typesetting and replaced its symbols with the equivalent ones used above, emphasis mine):

For the relativistic case, there is a maximum exhaust velocity for the reaction mass that is given by:

v_ex = c · √(η(2 − η))

where η is the fuel mass fraction converted into kinetic energy of the reaction mass. I was not able to improve on that derivation from a presentation point of view.

This is obviously a very similar equation, but importantly this equation specifies an upper bound on the exhaust velocity and does not say when that upper bound can be attained. Intuitively it seems to me not at all obvious that we should be able to attain that maximum exhaust velocity, since it would require the ability to perfectly direct the energy released in the fuel burning, which would naively be primarily released as heat.
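For concreteness, here is what that bound works out to for the D-T fusion η quoted earlier in this thread (my own plug-in of the numbers; it is only the theoretical ceiling, not a claim about achievable exhaust velocities):

```python
import math

C = 2.998e8   # speed of light, m/s

def max_exhaust_velocity(eta):
    """Upper bound on exhaust velocity when a fraction eta of the fuel's rest mass
    ends up as kinetic energy of the reaction mass: v_max = c * sqrt(eta * (2 - eta))."""
    return C * math.sqrt(eta * (2.0 - eta))

print(max_exhaust_velocity(0.00375) / C)   # ~0.087c for the paper's D-T fusion eta
```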

The keyword that finally helped me understand the relevant rocket design is "Fission-Fragment Rocket", which at least according to Wikipedia could indeed reach specific impulses sufficient to support the conclusions in the paper.