by Ruby · 1 min read · 6th May 2019 · 9 comments


The FHI paper, Eternity in Six Hours, is very optimistic about what can be done:

In this paper, we extend the Fermi paradox to not only life in this galaxy, but to other galaxies as well. We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods.

Is this paper reasonable? Which parts of its assertions are most likely to be mistaken?

This question was inspired by a conversation with Nick Beckstead.


3 Answers

In the order that they appear in the paper, these are a few of the parts that seemed iffy to me. Some of them may be easily shown to be either definitely iffy, or definitely not-so-iffy, with a little more research:

As for nuclear fusion, the standard fusion reaction is 3H + 2H → 4He + n + 17.59 MeV. In MeV, the masses of deuterium and tritium are 1876 and 2809, giving an η of 17.59/(1876 + 2809) = 0.00375. We will take this η to be the correct value, because though no fusion reactor is likely to be perfectly efficient, there is also the possibility of getting extra energy from the further fusion of helium and possibly heavier elements.

I'm not sure what existed at the time the paper was written, but there are now proposals for fusion rockets, and using the expected exhaust velocities from those might be better than using the theoretical value from DT fusion.
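For what it's worth, the quoted η does check out arithmetically, a quick sanity check:

```python
# Check the paper's mass-fraction efficiency for D-T fusion:
# eta = energy released / total rest mass of reactants (all in MeV).
E_RELEASED_MEV = 17.59   # D + T -> He-4 + n
M_DEUTERIUM_MEV = 1876   # rest mass of deuterium, MeV/c^2
M_TRITIUM_MEV = 2809     # rest mass of tritium, MeV/c^2

eta = E_RELEASED_MEV / (M_DEUTERIUM_MEV + M_TRITIUM_MEV)
print(f"eta = {eta:.5f}")  # ~0.00375, matching the paper
```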

The overall efficiency of the solar captors is 1/3, by the time the solar energy is concentrated, transformed and beamed back to Mercury.

I feel like I'm the only one who thinks this Dyson sphere method is a little dubious. What system is going to be used to collect energy using the captors and send it to Mercury? How will it be received on Mercury? The total power collected toward the end is more than W. If whatever process is used to disassemble the planet is 90% efficient, the temperature required to radiate the waste heat over Mercury's surface area is about 7000K. This is hotter than the surface of the sun, and more than twice the boiling point of both iron and silica. In order to keep this temperature below the boiling point of silica, we would either need the process to be better than 99.98% efficient, to attach Mercury to a heat sink many times the size of Jupiter, or to limit power to about W. If melting the planet isn't our style, we need to limit power to about W.
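To make the waste-heat estimate concrete, here is a sketch of the black-body calculation. The 10^22 W waste-power figure is my own illustrative assumption (10% losses on a hypothetical ~10^23 W disassembly process), not a number from the paper or the answer above:

```python
import math

# Black-body temperature needed to radiate a given waste-heat load from
# Mercury's surface area: T = (P / (sigma * A))^(1/4).
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
R_MERCURY = 2.44e6        # Mercury's radius, m
AREA = 4 * math.pi * R_MERCURY**2

def radiating_temperature(waste_power_w):
    """Temperature at which a black body of Mercury's area radiates waste_power_w."""
    return (waste_power_w / (SIGMA * AREA)) ** 0.25

print(f"{radiating_temperature(1e22):.0f} K")  # ~7000 K
```

Under that assumed waste power, the radiating temperature comes out close to the ~7000K quoted above.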

I don't think this kills their overall picture. It "only" means the whole process takes a few orders of magnitude longer.

Of the energy available, 1/10 will be used to propel material into space (using mass-drivers for instance [37]), the rest going to breaking chemical bonds, reprocessing material, or just lost to inefficiency. Lifting a kilo of matter to escape velocity on Mercury requires about nine mega-joules, while chemical bonds have energy less than one mega-joule per mol. These numbers are comparable, considering that reprocessing the material will be more efficient than simply breaking all the bonds and discarding the energy.

The probes will need stored energy and reaction mass to get into the appropriate orbit, unless all the desired orbits intersect Mercury's orbit. Maybe this issue can be mitigated by gradually pushing Mercury into new orbits via reaction force from the probes. Or maybe it's just not much of a limitation. I'm not sure.

Because practical efficiency never reaches the theoretical limit, we’ll content ourselves with assuming that the launch system has an efficiency of at least 50%

This seems pretty optimistic. In particular, making a system that launches large objects at .5c. Doing this over the distance from the sun to Earth requires an average force of about N per kg. For .9c and .99c, it requires about 8 and about 35 times this force/mass, respectively. I don't know what the limiting factor will be on these things, but this seems pretty high, and suggests that the launcher would need to be a huge structure, and possibly a bigger project than the Dyson swarm.
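The required average force can be estimated from the work-energy theorem, F·d = (γ − 1)mc², over one astronomical unit. This is an independent sketch with rounded constants, not a reconstruction of the answer's exact figures:

```python
# Average force per kilogram needed to reach a fraction beta of c over
# one astronomical unit, from the work-energy theorem:
#   F * d = (gamma - 1) * m * c^2
C = 2.998e8      # speed of light, m/s
AU = 1.496e11    # Sun-Earth distance, m

def force_per_kg(beta, distance=AU):
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * C**2 / distance

f_half = force_per_kg(0.5)
print(f"0.5c:  {f_half:.3g} N/kg")   # ~9e4 N per kg
print(f"0.9c:  {force_per_kg(0.9) / f_half:.1f}x the 0.5c force")
print(f"0.99c: {force_per_kg(0.99) / f_half:.1f}x the 0.5c force")
```

This gives on the order of 10^5 N per kg at .5c, with the .9c case roughly 8 times larger, consistent with the ratios quoted above.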

I also have some complaints about the notation, which I will post later, and possibly other things, but this is what I have for now.

One of the things the paper does is make the overall determination that power available >> power needed. As part of assessing that, I identified all the components which contribute to each side and examined how realistic, and how sensitive to assumptions, they are.

To begin with, here are the factors from the paper which impact the energy required for colonizing the galaxy.

  • The mass of each payload / self-replicating probe to be sent.
    • Mass of the payload is a linear factor in the energy required.
  • The travel speed.
  • The efficiency of the fuel mass.
    • This is well understood physics. From a shallow investigation, this seems to be a highly sensitive point in the paper since 1) we have not yet constructed rocket/fuel-source combinations with the specific impulse values required by the paper, and 2) the specific impulse appears in an exponential factor in the energy required.
  • The number of probes to be sent, as a function of the number of destinations and the probes sent per destination as redundancy against loss of probes due to collision or other issues.
    • 1. I outsourced checking the paper’s work to calculate the number of reachable stars at different speeds. Physics contacts reported that the calculations made in the paper all appeared correct.
    • 2. I did not examine the redundancy requirements in depth since it wasn’t easy to reconstruct how they derived the result; my intuition is that this is not a crucial aspect of the paper, especially at lower speeds, which are my main interest.
  • Launch system feasibility and efficiency.

Note that some of these variables directly affect the per-probe energy costs, while others (specifically redundancy against expected collisions, and travel speed and hence stars reachable) affect the total energy required.

In comments to this answer, I'll examine each in further depth.

4Ruby2yENERGY REQUIRED TO ACCELERATE MASS

To travel between the stars it is necessary to accelerate an interstellar probe to some very high speed, let it travel through space (without friction, it will keep going due to inertia), and then decelerate it once it arrives at the target destination. Space is insanely large, and to get most places you really want to be traveling at a significant fraction of the speed of light; however, this requires enormous amounts of energy since, by special relativity, the faster you are travelling, the more energy is required to accelerate further. (This enforces the limit that you cannot go faster than the speed of light, since that would require infinite energy.)

The relativistic kinetic energy of a rigid body [https://en.wikipedia.org/wiki/Kinetic_energy#Relativistic_kinetic_energy_of_rigid_bodies] is given by:

E_k = (γ − 1)mc², where γ = 1/√(1 − v²/c²)

This formula is linear in mass (m) and considerably superlinear in velocity (v), approaching infinity as velocity approaches the speed of light (c). [Plot omitted: kinetic energy against velocity, y-axis in units of 100 million gigajoules (= 10^17 joules).]

Even accelerating a 1kg mass to 10% of the speed of light requires 4.5*10^14 joules; 50% of c requires 1.4*10^16 joules. For comparison, world energy consumption in 2013 was estimated by the International Energy Agency to be 5.67x10^20 joules. In other words, accelerating a single 5-tonne probe to 10% of c would require ~1% of Earth’s entire annual energy consumption. Accelerating it to 80% of c would require roughly 100% of 2013’s consumption.

Now consider that to colonize the universe, we need to send upwards of 100 million (10^8) probes, and since we need to both accelerate and decelerate each probe, this energy is required twice over. Doubling the mass of the probe doubles the energy required, but doubling the target speed multiplies the energy required many times over, even at very small fractions of the speed of light.
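The figures in this comment can be reproduced in a few lines (a sketch with a rounded value of c):

```python
# Relativistic kinetic energy E = (gamma - 1) * m * c^2.
C = 2.998e8  # speed of light, m/s

def kinetic_energy_j(mass_kg, beta):
    """Energy in joules to accelerate mass_kg to a fraction beta of c."""
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * mass_kg * C**2

print(f"1 kg to 0.1c: {kinetic_energy_j(1, 0.1):.2g} J")  # ~4.5e14 J
print(f"1 kg to 0.5c: {kinetic_energy_j(1, 0.5):.2g} J")  # ~1.4e16 J
```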
4Ruby2yLAUNCH SYSTEM FEASIBILITY AND EFFICIENCY

To save on the mass which has to be launched (carrying fuel for acceleration as well as deceleration would mean squaring the rocket equation), the authors of Eternity in Six Hours favor an external fixed launch system which accelerates the payload: both the replicating probe and the fuel for its later deceleration.

In the paper, the authors briefly list coilguns, quenchguns, laser propulsion, and particle beam propulsion as potential means of accelerating the probes. They state that even though the theoretical energy efficiency of these systems could approach 100%, since one never obtains the theoretical efficiency, they assume 50% efficiency.

The questions here: 1) Is it possible to build a launch system which launches (possibly quite large) probes to significant fractions of the speed of light? 2) What efficiency is realistically achievable?

When contacted for comment, one of the authors, Anders Sandberg, stated [https://www.lesswrong.com/posts/NZiDAY9b4mZeRWbRc/space-colonization-what-can-we-definitely-do-and-how-do-we#yrKPQ5LnyHkpeCH7A]:

Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go) . . .

My shallow impression is that the proposed launch systems represent large engineering challenges more than difficult physics/design breakthroughs. Coilguns have been constructed [https://en.wikipedia.org/wiki/Coilgun#Potential_uses] and laser propulsion has been demonstrated in the lab [https://www.youtube.com/watch?v=TzLEK8Zq7Pk]. What remains is a question of scale and efficiency. However, even the difference between 5% efficiency and 50% efficiency is only a single order of magnitude: not a large difference in the overall picture here.
4Ruby2yNUMBER OF PROBES TO BE SENT

The mass of the replicator, the specific impulse of the fuel, and the travel speed determine the energy required to launch a single probe. The number of probes to be sent is determined by the number of destinations and by the redundancy factor: the number of probes sent per destination to ensure that at least one arrives. Due to collisions with interstellar dust or other failures, we can expect that not every self-replicating probe will arrive at its destination.

The number of destinations is limited by one’s travel speed, since increasingly large regions of space are moving beyond our reach due to the expansion of the universe. The faster one travels, the more distant stars one is able to reach before they get too far away. The authors of Eternity in Six Hours (pg. 21) calculated that:

* Travelling at 50% c, one could reach 1.16 x 10^8 galaxies
* Travelling at 80% c, one could reach 7.62 x 10^8 galaxies
* Travelling at 90% c, one could reach 4.13 x 10^9 galaxies

For reference, an average galaxy might have 10^8 stars.

The authors calculated that travelling at 99% c, a redundancy factor of 40 is required, i.e. 40 probes for every destination. For 80% c and 50% c, the redundancy is less than 2. I did not prioritize looking into this calculation and am trusting the result that at lower speeds, the required redundancy factor is not very high.
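The paper's collision model is more involved, but a toy version illustrates how redundancy scales: if each probe independently survives the trip with probability p, the number of probes needed per destination follows from requiring at least one arrival with high probability. The survival probabilities below are purely illustrative assumptions, not the paper's figures:

```python
import math

# Probes per destination so that at least one arrives with probability
# `target`, if each probe independently survives with probability p_survive:
#   1 - (1 - p_survive)^n >= target  =>  n >= log(1-target)/log(1-p_survive)
def redundancy(p_survive, target=0.999):
    return math.ceil(math.log(1 - target) / math.log(1 - p_survive))

print(redundancy(0.99))  # near-certain survival: redundancy of ~2
print(redundancy(0.15))  # heavy attrition: ~40+ probes per destination
```

Under this toy model, a redundancy factor of ~40 corresponds to quite severe per-probe attrition, consistent with it only mattering at the highest speeds.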
4Ruby2ySPECIFIC IMPULSE AND THE FUEL COMPONENT OF THE PROBE’S MASS

Importantly, part of what we need to accelerate is the fuel required to decelerate the probe once it arrives at its destination. The more fuel we send along with a probe, the greater the initial launch energy required. We want fuel which is highly efficient per unit mass, i.e., which has a high specific impulse [https://en.wikipedia.org/wiki/Specific_impulse]. The amount of fuel mass required to decelerate a probe is extremely sensitive to the specific impulse (Isp) of the fuel used. Oliver Habryka noticed [https://www.lesswrong.com/posts/k774aKEogcCugmKPY/which-parts-of-the-paper-eternity-in-six-hours-are-iffy] that Eternity in Six Hours may be making unreasonable assumptions about achievable specific impulses. Further investigation revealed that the attainable specific impulse may determine whether or not space colonization is affordable at all. To me, it is a major sensitivity in the paper.

Transformed to isolate initial mass, the relativistic rocket equation gives:

m0 = m1 · ((1 + Δv/c) / (1 − Δv/c))^(c / (2·Isp))

m0: initial mass
m1: final mass
c: speed of light
Isp: specific impulse
Δv: change in velocity

This formula for the initial mass is linear in the final mass and exponentially sensitive to the specific impulse. [Plot omitted: for a fixed final mass of 1 kg, the initial fuel mass required to accelerate to different fractions of the speed of light, with the x-axis giving Isp as a fraction of c; dotted lines mark 4% of c (the Isp for fission given by the paper) and half that value, 2% of c.]

On page 11, Armstrong and Sandberg provide values of specific impulse (measured as a fraction of c) [table not reproduced]. Of these, we have only actually attained nuclear fission, though not in efficient rocket form. As Habryka pointed out [https://www.lesswrong.com/posts/k774aKEogcCugmKPY/which-parts-of-the-paper-eternity-in-six-hours-are-iffy], the paper makes the assumption that almost all of the energy rele
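The exponential sensitivity to Isp can be illustrated numerically. The 0.04c fission Isp is the paper's figure; the Δv of 0.5c is an illustrative choice:

```python
# Fuel mass ratio from the relativistic rocket equation,
#   m0/m1 = ((1 + dv/c) / (1 - dv/c)) ** (c / (2 * Isp)),
# with both delta-v and Isp expressed as fractions of c.
def mass_ratio(delta_v_frac_c, isp_frac_c):
    return ((1 + delta_v_frac_c) / (1 - delta_v_frac_c)) ** (1 / (2 * isp_frac_c))

# Decelerating from 0.5c with the paper's fission Isp of 0.04c:
print(f"{mass_ratio(0.5, 0.04):.2g}")  # ~9e5 kg of fuel per kg of payload
# Halving the Isp squares the mass ratio -- the exponential sensitivity:
print(f"{mass_ratio(0.5, 0.02):.2g}")
```

Note how halving the specific impulse squares the required mass ratio: this is why Isp is such a sensitive assumption in the paper.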

Analyzing the rocket-equation section, I found the following statement:

The relativistic rocket equation is

m0/m1 = ((1 + Δv/c) / (1 − Δv/c))^(c / (2·Isp))

where Δv is the difference in velocity, m0 is the initial mass of the probe, m1 the final mass of the replicator, and Isp denotes the specific impulse of the fuel-burning process. The Isp term can be derived from η, the proportion of fuel transformed into energy during the burning process [24].

The fact that we can fully derive the specific impulse from the fuel energy-transformation efficiency seemed weird to me, so I looked it up in the underlying reference and found the following quote (I slightly cleaned up the math typesetting and replaced it with symbols equivalent to those above; emphasis mine):

For the relativistic case, there is a maximum exhaust velocity for the reaction mass that is given by:

v_max = c·√(2e − e²)

where e is the fuel mass fraction converted into kinetic energy of the reaction mass. I was not able to improve on that derivation from a presentation point of view.

This is obviously a very similar equation, but importantly this equation specifies an upper bound on the exhaust velocity and does not say when that upper bound can be attained. Intuitively it seems to me not at all obvious that we should be able to attain that maximum exhaust velocity, since it would require the ability to perfectly direct the energy released in the fuel burning, which would naively be primarily released as heat.

The keyword that finally helped me understand the relevant rocket design is "Fission-Fragment Rocket", which at least according to Wikipedia could indeed reach specific impulses sufficient to support the conclusions in the paper.
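The upper-bound formula is easy to evaluate for the fuels discussed here. The fission η of 0.0008 is my own rough assumption for fission's mass-energy yield, chosen because it reproduces the 4%-of-c Isp attributed to the paper; the fusion η of 0.00375 is the D-T value quoted earlier in this thread:

```python
import math

# Maximum exhaust velocity when a fraction eta of the fuel's rest mass is
# converted into kinetic energy of the reaction mass:
#   v_max = c * sqrt(2*eta - eta^2)
def v_max_frac_c(eta):
    """Maximum exhaust velocity as a fraction of c."""
    return math.sqrt(2 * eta - eta**2)

print(f"fission (eta ~ 0.0008):   {v_max_frac_c(0.0008):.3f} c")  # ~0.04 c
print(f"D-T fusion (eta ~ 0.00375): {v_max_frac_c(0.00375):.3f} c")
```

As the quote above stresses, these are upper bounds: an actual rocket only attains them if essentially all of the released energy goes into directed exhaust rather than heat.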

3habryka2yOk, I think the original calculations here are still correct, if you design your rocket to directly emit the fission material at high speeds. This [http://www.rbsp.info/rbs/PDF/aiaa05.pdf] is a paper that proposes such a rocket design:
3habryka2yFurther comments on this rocket design: https://forum.nasaspaceflight.com/index.php?PHPSESSID=3omp04d25qbe0qj5l5n1qfl0hp&topic=47693.20 [https://forum.nasaspaceflight.com/index.php?PHPSESSID=3omp04d25qbe0qj5l5n1qfl0hp&topic=47693.20]

4Answer by Ruby2yExtracting my response from this post [https://www.lesswrong.com/posts/8WCPDk3RJ6SLP2ZuR/claims-and-assumptions-made-in-eternity-in-six-hours].

CLAIMS AND ASSUMPTIONS (NOT EXHAUSTIVE)

* Self-replicating probes for colonization could be launched to a fraction of lightspeed using fixed launch systems such as coilguns or quenchguns (as opposed to rockets).
* Only six hours of the sun's energy output (3.8x10^26 W) is required to commence the colonization of the entire universe.
* A future human civilization could easily aspire to this amount of energy.
* Since the procedure is a conjunction of designs, and yet each of the requirements has multiple pathways to implementation, the whole construction is robust.
* Humans have generally been quite successful at copying or co-opting nature. We can assume that anything done in the natural world can be done under human control, e.g. self-replicators and AI.
* Any task which can be performed can be automated.
* It would be ruinously costly to send over a large colonization fleet; it is much more efficient to send over a small payload which builds what is required in situ, i.e. von Neumann probes.
* Data storage will not be much of an issue. Example: one can fit all the world's data and an upload of everyone in Britain in a gram of crystal.
* 500 tons is a reasonable upper bound for the size of a self-replicating probe.
* A replicator with a mass of 30 grams would not be unreasonable.
* Antimatter annihilation, nuclear fusion, and nuclear fission are all possible rocket types to be used for deceleration.
* Processes like the magnetic sail, gravitational assist, and the "Bussard ramjet" are conceivable and possible, but to be conservative are not relied on.
* Nuclear fission reactors could be made 90% efficient. Current reactor designs could reach efficiencies of over 50% of the theoretical maximum.
* Any fall-off in fission efficiency results in a dramatic decrease in deceleration potential. They